Data Engineering · Updated: March 11, 2025 · 18 min read

Infrastructure as Code cho Data Platform: Terraform + dbt + Git

Infrastructure as Code (IaC) is the standard in software engineering, yet it is rarely applied to data. A detailed guide to Terraform for infrastructure, dbt for transformations, Git workflows, CI/CD pipelines, and disaster recovery for your Data Platform.

Trần Thị Mai Linh

Head of Data Engineering

[Figure: Infrastructure as Code workflow showing Terraform, dbt, and Git version control for the Data Platform]
Tags: Infrastructure as Code · Terraform · dbt · Git · CI/CD · DevOps · Data Platform · GitOps · GitHub Actions

"Server production của chúng ta bị down. Rebuild mất 3 ngày vì không ai nhớ configuration."

If you work in software engineering, that sentence sounds familiar, and it is exactly why Infrastructure as Code (IaC) became the standard.

But in the data world, IaC is still uncommon. The majority of Data Platforms are built manually:

  • ClickOps: BigQuery datasets created through the UI
  • Snowflake warehouses configured via the web console
  • Permissions granted manually
  • dbt models with no CI/CD
  • Disaster recovery = "hope and pray"

Consequences:

  • Environments can't be reproduced (dev, staging, and prod all differ)
  • Changes aren't tracked (who changed what, and when?)
  • Rollback is impossible (break production → can't revert)
  • Onboarding: 2 weeks for a new hire to set up a local environment
  • Disaster recovery: days to weeks to rebuild

According to the Puppet State of DevOps Report, high-performing teams deploy 46x more frequently and have a 96x faster mean time to recovery.

In this article, you'll learn:

  • Why IaC for Data: Reproducibility, version control, collaboration, disaster recovery
  • Terraform for Infrastructure: Provision BigQuery, Snowflake, GCS, S3, IAM
  • dbt for Transformations: SQL models as code in Git, environments (dev/staging/prod)
  • Git Workflows: Feature branches, pull request reviews, merge strategies
  • CI/CD Pipelines: GitHub Actions, GitLab CI, automated testing
  • Full Setup Tutorial: Repo structure, tools, step-by-step implementation
  • Rollback Strategy: How to safely revert changes

By the end of this article, you'll have a complete IaC setup for your Data Platform.

Why Infrastructure as Code for Data?

Traditional Approach (Manual)

How it works:

  1. Data Engineer opens BigQuery console
  2. Clicks "Create Dataset"
  3. Fills in form manually
  4. Repeats for dev, staging, prod (slightly different each time)
  5. Grants permissions through UI
  6. Forgets what was clicked
  7. 6 months later: "Why is this permission here?"

Problems:

  • Not reproducible: Can't recreate infrastructure reliably
  • No version control: No history of changes
  • Configuration drift: Dev/staging/prod diverge over time
  • Collaboration hard: Can't review infrastructure changes
  • Disaster recovery slow: Rebuild takes days/weeks

IaC Approach

How it works:

┌──────────────┐
│  Code Editor │
│ (terraform/) │
│              │
│ main.tf      │  1. Write infrastructure code
│ bigquery.tf  │
│ iam.tf       │
└──────┬───────┘
       │
       ├──> 2. Commit to Git
       │
       ▼
┌──────────────┐
│     Git      │  3. Pull Request → Review
│   (GitHub)   │
└──────┬───────┘
       │
       ├──> 4. CI runs tests
       │
       ▼
┌──────────────┐
│     CI/CD    │  5. Terraform Plan (preview changes)
│ (GH Actions) │
└──────┬───────┘
       │
       ├──> 6. Approve → Merge
       │
       ▼
┌──────────────┐
│  Terraform   │  7. Terraform Apply (deploy changes)
│    Apply     │
└──────┬───────┘
       │
       ▼
┌──────────────┐
│  BigQuery    │  8. Infrastructure updated
│  Snowflake   │
│     IAM      │
└──────────────┘

Benefits:

  • Reproducible: Spin up identical environments from code
  • Version controlled: Full history of changes in Git
  • Reviewable: Code review infrastructure changes
  • Rollback: Git revert → infrastructure reverts
  • Disaster recovery: Rebuild from code in minutes
  • Documentation: Code is documentation

Terraform for Infrastructure

Terraform is an IaC tool for provisioning cloud resources.

What Terraform Manages

For Data Platform:

  • Data Warehouses: BigQuery datasets, Snowflake databases
  • Storage: GCS buckets, S3 buckets
  • Compute: Cloud Functions, Lambda functions
  • Networking: VPCs, subnets, firewall rules
  • IAM: Service accounts, permissions, roles
  • Orchestration: Airflow environments, Cloud Composer

Terraform Basics

Structure:

terraform/
├── main.tf           # Main configuration
├── variables.tf      # Input variables
├── outputs.tf        # Output values
├── bigquery.tf       # BigQuery resources
├── iam.tf            # IAM resources
├── terraform.tfvars  # Variable values (gitignored)
└── environments/
    ├── dev.tfvars
    ├── staging.tfvars
    └── prod.tfvars

Example: BigQuery Dataset

# bigquery.tf
resource "google_bigquery_dataset" "analytics" {
  dataset_id  = "${var.environment}_analytics"
  project     = var.gcp_project
  location    = "US"
  description = "Analytics data warehouse for ${var.environment}"

  default_table_expiration_ms = 31536000000  # 1 year

  labels = {
    environment = var.environment
    team        = "data"
    managed_by  = "terraform"
  }

  access {
    role          = "OWNER"
    user_by_email = var.data_team_email
  }

  access {
    role           = "READER"
    group_by_email = var.analysts_group_email
  }
}

# Create tables
resource "google_bigquery_table" "orders" {
  dataset_id = google_bigquery_dataset.analytics.dataset_id
  table_id   = "orders"
  project    = var.gcp_project

  schema = jsonencode([
    {
      name = "order_id"
      type = "INT64"
      mode = "REQUIRED"
    },
    {
      name = "customer_id"
      type = "INT64"
      mode = "REQUIRED"
    },
    {
      name = "order_date"
      type = "DATE"
      mode = "REQUIRED"
    },
    {
      name = "total_amount"
      type = "NUMERIC"
      mode = "REQUIRED"
    }
  ])

  time_partitioning {
    type  = "DAY"
    field = "order_date"
  }

  clustering = ["customer_id"]
}

Example: Snowflake Database

# snowflake.tf
terraform {
  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "~> 0.80"
    }
  }
}

provider "snowflake" {
  account  = var.snowflake_account
  user     = var.snowflake_user
  password = var.snowflake_password  # Use env var in practice
  role     = "SYSADMIN"
}

resource "snowflake_database" "analytics" {
  name    = "${upper(var.environment)}_ANALYTICS"
  comment = "Analytics database for ${var.environment}"

  data_retention_time_in_days = var.environment == "prod" ? 90 : 7
}

resource "snowflake_schema" "raw" {
  database = snowflake_database.analytics.name
  name     = "RAW"
  comment  = "Raw data from sources"
}

resource "snowflake_schema" "staging" {
  database = snowflake_database.analytics.name
  name     = "STAGING"
  comment  = "Cleaned and typed data"
}

resource "snowflake_schema" "marts" {
  database = snowflake_database.analytics.name
  name     = "MARTS"
  comment  = "Business-ready data marts"
}

# Warehouse (compute)
resource "snowflake_warehouse" "analytics_wh" {
  name           = "${upper(var.environment)}_ANALYTICS_WH"
  warehouse_size = var.environment == "prod" ? "LARGE" : "X-SMALL"

  auto_suspend       = 60  # Suspend after 60 seconds
  auto_resume        = true
  initially_suspended = true

  comment = "Analytics warehouse for ${var.environment}"
}

# Roles and permissions
resource "snowflake_role" "analytics_admin" {
  name    = "${upper(var.environment)}_ANALYTICS_ADMIN"
  comment = "Admin role for analytics database"
}

resource "snowflake_database_grant" "analytics_admin_grant" {
  database_name = snowflake_database.analytics.name
  privilege     = "USAGE"
  roles         = [snowflake_role.analytics_admin.name]
}

Example: GCS Bucket

# storage.tf
resource "google_storage_bucket" "data_lake" {
  name          = "${var.gcp_project}-${var.environment}-data-lake"
  location      = "US"
  storage_class = "STANDARD"

  uniform_bucket_level_access = true

  versioning {
    enabled = true  # Keep versions for recovery
  }

  lifecycle_rule {
    condition {
      age = 90  # Move to Nearline after 90 days
    }
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
  }

  lifecycle_rule {
    condition {
      age = 365  # Delete after 1 year
    }
    action {
      type = "Delete"
    }
  }

  labels = {
    environment = var.environment
    team        = "data"
  }
}

# Service account for dbt
resource "google_service_account" "dbt" {
  account_id   = "dbt-${var.environment}"
  display_name = "dbt Service Account (${var.environment})"
  description  = "Service account for dbt to access BigQuery"
}

# Grant permissions
resource "google_project_iam_member" "dbt_bigquery_user" {
  project = var.gcp_project
  role    = "roles/bigquery.user"
  member  = "serviceAccount:${google_service_account.dbt.email}"
}

resource "google_project_iam_member" "dbt_bigquery_data_editor" {
  project = var.gcp_project
  role    = "roles/bigquery.dataEditor"
  member  = "serviceAccount:${google_service_account.dbt.email}"
}

Terraform Workflow

Commands:

# Initialize Terraform
terraform init

# Plan changes (preview)
terraform plan -var-file="environments/dev.tfvars"

# Apply changes (execute)
terraform apply -var-file="environments/dev.tfvars"

# Destroy resources
terraform destroy -var-file="environments/dev.tfvars"

Multi-environment:

# Dev environment (first time: terraform workspace new dev)
terraform workspace select dev
terraform apply -var-file="environments/dev.tfvars"

# Staging environment
terraform workspace select staging
terraform apply -var-file="environments/staging.tfvars"

# Production environment
terraform workspace select prod
terraform apply -var-file="environments/prod.tfvars"

dbt for Transformations

dbt (data build tool) = IaC for data transformations.

Why dbt is IaC

Traditional SQL (Manual):

-- Analyst writes SQL in BigQuery console
-- Clicks "Run"
-- Maybe saves to Drive
-- No version control
-- No testing
-- No documentation

dbt (IaC):

-- models/marts/fct_orders.sql
-- SQL as code in Git
-- Automated testing
-- Auto-generated documentation
-- Environments: dev, staging, prod

dbt Project Structure

dbt_project/
├── dbt_project.yml         # Project config
├── profiles.yml            # Connection profiles (gitignored)
├── packages.yml            # dbt packages
│
├── models/
│   ├── staging/            # Source → staging
│   │   ├── _sources.yml
│   │   ├── stg_orders.sql
│   │   └── stg_customers.sql
│   │
│   ├── intermediate/       # Business logic
│   │   └── int_orders_enriched.sql
│   │
│   └── marts/              # Analytics-ready
│       ├── _schema.yml
│       ├── fct_orders.sql
│       └── dim_customer.sql
│
├── tests/                  # Custom tests
│   └── assert_revenue_positive.sql
│
├── macros/                 # Reusable SQL
│   └── cents_to_dollars.sql
│
├── snapshots/              # SCD Type 2
│   └── customers_snapshot.sql
│
└── seeds/                  # CSV reference data
    └── country_codes.csv
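
The _sources.yml file in staging/ declares raw tables so models can reference them with source(). A minimal sketch (project and table names are assumptions):

# models/staging/_sources.yml
version: 2

sources:
  - name: raw
    database: my-gcp-project-dev  # BigQuery project holding raw data (assumption)
    tables:
      - name: orders
      - name: customers

Staging models then select from {{ source('raw', 'orders') }} instead of hard-coding table paths.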

dbt Models as Code

Example:

-- models/marts/fct_orders.sql
{{
  config(
    materialized='incremental',
    unique_key='order_id',
    partition_by={
      'field': 'order_date',
      'data_type': 'date'
    },
    cluster_by=['customer_id']
  )
}}

WITH source_orders AS (
  SELECT * FROM {{ ref('stg_orders') }}

  {% if is_incremental() %}
    WHERE order_date > (SELECT MAX(order_date) FROM {{ this }})
  {% endif %}
),

enriched AS (
  SELECT
    o.order_id,
    o.customer_id,
    o.order_date,
    o.order_status,

    -- Use macro
    {{ cents_to_dollars('o.order_total_cents') }} as order_total,

    -- Join dimension
    c.customer_segment,
    c.customer_lifetime_value,

    -- Calculated fields
    CASE WHEN o.order_date = c.first_order_date THEN true ELSE false END as is_first_order

  FROM source_orders o
  LEFT JOIN {{ ref('dim_customer') }} c ON o.customer_id = c.customer_id
)

SELECT * FROM enriched
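
The {{ cents_to_dollars(...) }} call above resolves to the macro in macros/cents_to_dollars.sql. A minimal sketch of what that macro might look like:

-- macros/cents_to_dollars.sql
{% macro cents_to_dollars(column_name, precision=2) %}
    ROUND({{ column_name }} / 100.0, {{ precision }})
{% endmacro %}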

Schema & Tests:

# models/marts/_schema.yml
version: 2

models:
  - name: fct_orders
    description: "Orders fact table"

    # Model-level tests
    tests:
      - dbt_utils.recency:
          datepart: day
          field: order_date
          interval: 1

    columns:
      - name: order_id
        description: "Unique order identifier"
        tests:
          - unique
          - not_null

      - name: customer_id
        tests:
          - not_null
          - relationships:
              to: ref('dim_customer')
              field: customer_id

      - name: order_total
        description: "Order total in dollars"
        tests:
          - not_null
          - dbt_expectations.expect_column_values_to_be_between:
              min_value: 0
              max_value: 1000000

      - name: order_status
        tests:
          - accepted_values:
              values: ['pending', 'processing', 'completed', 'cancelled']
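
The project structure above also lists tests/assert_revenue_positive.sql, a singular test: it is a SELECT that should return zero rows, and any rows returned count as failures. A minimal sketch:

-- tests/assert_revenue_positive.sql
-- Fails if any order has a negative total
SELECT
  order_id,
  order_total
FROM {{ ref('fct_orders') }}
WHERE order_total < 0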

dbt Environments

profiles.yml:

# profiles.yml (in ~/.dbt/ or project root, gitignored)
data_platform:
  target: dev  # Default target

  outputs:
    dev:
      type: bigquery
      method: service-account
      project: my-gcp-project-dev
      dataset: analytics_dev
      keyfile: /path/to/dev-service-account.json
      threads: 4

    staging:
      type: bigquery
      method: service-account
      project: my-gcp-project-staging
      dataset: analytics_staging
      keyfile: /path/to/staging-service-account.json
      threads: 8

    prod:
      type: bigquery
      method: service-account
      project: my-gcp-project-prod
      dataset: analytics_prod
      keyfile: /path/to/prod-service-account.json
      threads: 16
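
In CI you typically avoid machine-specific keyfile paths by reading them from environment variables with env_var(). A sketch of the prod target rewritten that way (DBT_GCP_PROJECT is an assumed variable name; DBT_GOOGLE_BIGQUERY_KEYFILE matches what the CI workflows below export):

    prod:
      type: bigquery
      method: service-account
      project: "{{ env_var('DBT_GCP_PROJECT') }}"
      dataset: analytics_prod
      keyfile: "{{ env_var('DBT_GOOGLE_BIGQUERY_KEYFILE') }}"
      threads: 16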

Run dbt:

# Development
dbt run --target dev
dbt test --target dev

# Staging
dbt run --target staging
dbt test --target staging

# Production
dbt run --target prod
dbt test --target prod

Git Workflows

Git is the foundation of IaC. Every change goes through Git.

Repository Structure

data-platform-iac/
├── terraform/              # Infrastructure code
│   ├── main.tf
│   ├── bigquery.tf
│   ├── iam.tf
│   └── environments/
│       ├── dev.tfvars
│       ├── staging.tfvars
│       └── prod.tfvars
│
├── dbt/                    # dbt project
│   ├── dbt_project.yml
│   ├── models/
│   ├── tests/
│   └── macros/
│
├── .github/workflows/      # CI/CD
│   ├── terraform-plan.yml
│   ├── terraform-apply.yml
│   ├── dbt-test.yml
│   └── dbt-prod-run.yml
│
├── docs/                   # Documentation
│   ├── SETUP.md
│   └── CONTRIBUTING.md
│
└── README.md
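
Since terraform.tfvars and profiles.yml are marked as gitignored above, the repo needs a matching .gitignore. A minimal sketch for this layout:

# .gitignore
# Terraform
terraform/.terraform/
terraform/terraform.tfvars
*.tfstate
*.tfstate.*

# dbt
dbt/profiles.yml
dbt/target/
dbt/dbt_packages/
dbt/logs/

# Service account keys (adjust if you track other JSON files)
*.json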

Feature Branch Workflow

Process:

main (protected)
  │
  ├── feature/add-customer-segment
  │   │
  │   ├── 1. Create branch
  │   ├── 2. Make changes
  │   ├── 3. Commit
  │   ├── 4. Push
  │   ├── 5. Open Pull Request
  │   ├── 6. CI runs (tests)
  │   ├── 7. Code review
  │   ├── 8. Approve
  │   └── 9. Merge → main
  │
  └── main (updated)

Commands:

# 1. Create feature branch
git checkout -b feature/add-customer-segment

# 2. Make changes
# Edit dbt/models/marts/dim_customer.sql

# 3. Commit
git add dbt/models/marts/dim_customer.sql
git commit -m "Add customer_segment to dim_customer"

# 4. Push
git push origin feature/add-customer-segment

# 5. Open PR on GitHub
# → GitHub Actions run dbt tests

# 6-8. Code review, approve

# 9. Merge
git checkout main
git pull

Pull Request Template

# Pull Request Template

## Description
<!-- What does this PR do? -->

## Type of Change
- [ ] Infrastructure change (Terraform)
- [ ] Data model change (dbt)
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change

## Testing
- [ ] dbt tests pass locally
- [ ] Terraform plan reviewed
- [ ] Tested in dev environment
- [ ] Data quality validated

## Checklist
- [ ] Code follows style guidelines
- [ ] Added/updated tests
- [ ] Added/updated documentation
- [ ] Reviewed by at least 1 person

## Impact
<!-- Which tables/dashboards are affected? -->

## Rollback Plan
<!-- How to revert if this breaks production? -->

CI/CD Pipelines

Automate testing and deployment with CI/CD.

GitHub Actions for dbt

Workflow: dbt Test on PR

# .github/workflows/dbt-test.yml
name: dbt Test

on:
  pull_request:
    paths:
      - 'dbt/**'  # Only run when dbt files change

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dbt
        run: |
          pip install dbt-bigquery==1.7.0

      - name: Setup dbt profile
        run: |
          mkdir -p ~/.dbt
          echo "${{ secrets.DBT_PROFILES_YML }}" > ~/.dbt/profiles.yml

      - name: Install dbt dependencies
        working-directory: ./dbt
        run: dbt deps

      - name: Run dbt tests
        working-directory: ./dbt
        run: |
          # state:modified+ needs a manifest from a previous run, passed via --state;
          # drop the --select flag to test all models instead
          dbt test --target dev --select state:modified+
        env:
          DBT_GOOGLE_BIGQUERY_KEYFILE: ${{ secrets.GCP_SA_KEY_DEV }}

      - name: Comment PR with results
        uses: actions/github-script@v6
        if: always()
        with:
          script: |
            const output = `
            #### dbt Test Results
            - Status: ${{ job.status }}
            - Tests run: See logs for details

            Please review before merging.
            `;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            });

Workflow: dbt Run Production

# .github/workflows/dbt-prod-run.yml
name: dbt Production Run

on:
  push:
    branches:
      - main
    paths:
      - 'dbt/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production  # Deployment gate: requires approval when reviewers are configured

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup dbt
        # ... same as above

      - name: Run dbt in production
        working-directory: ./dbt
        run: |
          dbt run --target prod
          dbt test --target prod
        env:
          DBT_GOOGLE_BIGQUERY_KEYFILE: ${{ secrets.GCP_SA_KEY_PROD }}

      - name: Generate dbt docs
        working-directory: ./dbt
        run: |
          dbt docs generate --target prod

      - name: Upload docs to GCS
        uses: google-github-actions/upload-cloud-storage@v1
        with:
          credentials: ${{ secrets.GCP_SA_KEY_PROD }}
          path: dbt/target
          destination: my-bucket/dbt-docs

      - name: Notify Slack
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "✅ dbt production run completed successfully",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*dbt Production Deployment*\n✅ Success\n<https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}|View Details>"
                  }
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

GitHub Actions for Terraform

Workflow: Terraform Plan

# .github/workflows/terraform-plan.yml
name: Terraform Plan

on:
  pull_request:
    paths:
      - 'terraform/**'

jobs:
  plan:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.6.0

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_SA_KEY_TERRAFORM }}

      - name: Terraform Format Check
        working-directory: ./terraform
        run: terraform fmt -check

      - name: Terraform Validate
        working-directory: ./terraform
        run: terraform validate

      - name: Terraform Plan (Dev)
        working-directory: ./terraform
        run: |
          terraform plan \
            -var-file="environments/dev.tfvars" \
            -out=tfplan \
            -no-color
          terraform show -no-color tfplan > tfplan.txt
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_SA_KEY_TERRAFORM }}

      - name: Comment PR with plan
        uses: actions/github-script@v6
        with:
          script: |
            const fs = require('fs');
            const plan = fs.readFileSync('terraform/tfplan.txt', 'utf8');

            const output = `
            #### Terraform Plan Results
            \`\`\`
            ${plan}
            \`\`\`

            *Review carefully before merging*
            `;

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            });

Workflow: Terraform Apply

# .github/workflows/terraform-apply.yml
name: Terraform Apply

on:
  push:
    branches:
      - main
    paths:
      - 'terraform/**'

jobs:
  apply:
    runs-on: ubuntu-latest
    environment: production

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2

      - name: Terraform Init
        working-directory: ./terraform
        run: terraform init
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_SA_KEY_TERRAFORM }}

      - name: Terraform Apply (Production)
        working-directory: ./terraform
        run: |
          terraform apply \
            -var-file="environments/prod.tfvars" \
            -auto-approve
        env:
          GOOGLE_CREDENTIALS: ${{ secrets.GCP_SA_KEY_TERRAFORM }}

      - name: Notify team
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "🚀 Terraform changes applied to production"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

Rollback Strategy

What if deployment breaks production?

dbt Rollback

Option 1: Git Revert

# Find commit that broke things
git log --oneline

# Revert
git revert <commit-hash>
git push

# CI/CD automatically deploys reverted code

Option 2: Re-run Previous Version

# Checkout previous commit
git checkout <previous-commit>

# Run dbt
dbt run --target prod

# Return to main
git checkout main

Option 3: Schema Rollback (Advanced)

-- If dbt model created wrong table
-- Manually restore from backup

-- BigQuery: Restore from snapshot
CREATE OR REPLACE TABLE analytics.fct_orders
AS SELECT * FROM analytics.fct_orders
FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR);

-- Snowflake: Restore from time travel
CREATE OR REPLACE TABLE analytics.fct_orders
CLONE analytics.fct_orders AT(OFFSET => -3600);  -- 1 hour ago

Terraform Rollback

Option 1: Git Revert

# Revert commit
git revert <commit-hash>
git push

# CI/CD runs terraform apply with reverted config

Option 2: Terraform State Rollback

# Back up the current state first
terraform state pull > current_state.backup.tfstate

# Fetch a previous state version from your versioned backend
# (e.g., GCS: gsutil cp 'gs://my-tf-state/default.tfstate#<generation>' previous_state.tfstate)

# Push the previous state (add -force if lineage/serial checks complain)
terraform state push previous_state.tfstate

# Reconcile infrastructure with the restored state
terraform apply

Option 3: Manual Fix

# If resource was deleted accidentally
# Add it back to Terraform code
# Run terraform apply

Prevention: Blue-Green Deployments

For critical changes:

Old (Blue):
  analytics_v1 dataset

New (Green):
  analytics_v2 dataset

Migration:
  1. Deploy to analytics_v2
  2. Test thoroughly
  3. Switch dashboards to analytics_v2
  4. Monitor for 24 hours
  5. If success: delete analytics_v1
  6. If fail: switch back to analytics_v1
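
One lightweight way to implement the switch in BigQuery is to have dashboards read from a stable view and repoint it between versions. A sketch, assuming a stable analytics dataset that holds only views (an assumption on top of the example above):

-- Dashboards always query this view, never the versioned datasets directly
-- Step 3: switch to the new version
CREATE OR REPLACE VIEW analytics.fct_orders AS
SELECT * FROM analytics_v2.fct_orders;

-- Step 6 (rollback): repoint to the old version
CREATE OR REPLACE VIEW analytics.fct_orders AS
SELECT * FROM analytics_v1.fct_orders;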

Full Setup Tutorial

A step-by-step guide to setting up IaC for your Data Platform.

Week 1: Setup Repository

# Create repo
git init data-platform-iac
cd data-platform-iac

# Create structure
mkdir -p terraform/{environments,modules}
mkdir -p dbt/{models,tests,macros}
mkdir -p .github/workflows
mkdir docs

# Initialize Terraform
cd terraform
terraform init

# Initialize dbt (scaffolds a project; prompts for a project name)
cd ../dbt
dbt init

# Create README
cat > ../README.md << 'EOF'
# Data Platform Infrastructure as Code

This repository contains infrastructure and data transformation code for our Data Platform.

## Structure
- `terraform/`: Infrastructure code (BigQuery, IAM, etc.)
- `dbt/`: Data transformation code (SQL models)
- `.github/workflows/`: CI/CD pipelines

## Getting Started
See [docs/SETUP.md](docs/SETUP.md)
EOF

# Commit (from the repo root)
cd ..
git add .
git commit -m "Initial project structure"

Week 2: Add Terraform Infrastructure

# Add BigQuery datasets
cat > terraform/bigquery.tf << 'EOF'
# See example above
EOF

# Add variables
cat > terraform/variables.tf << 'EOF'
variable "environment" {
  description = "Environment name"
  type        = string
}

variable "gcp_project" {
  description = "GCP project ID"
  type        = string
}

variable "data_team_email" {
  description = "Data team email"
  type        = string
}
EOF

# Add environment configs
cat > terraform/environments/dev.tfvars << 'EOF'
environment       = "dev"
gcp_project       = "my-project-dev"
data_team_email   = "data-team@company.com"
EOF

# Test (from the terraform/ directory)
cd terraform
terraform plan -var-file="environments/dev.tfvars"
terraform apply -var-file="environments/dev.tfvars"

Week 3: Setup dbt

cd dbt

# Add first model
mkdir -p models/staging
cat > models/staging/stg_orders.sql << 'EOF'
SELECT
  order_id,
  customer_id,
  order_date,
  order_total
FROM `{{ var('gcp_project') }}.raw.orders`
EOF

# Add schema
cat > models/staging/_schema.yml << 'EOF'
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
EOF

# Test
dbt run --target dev
dbt test --target dev
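
The var('gcp_project') lookup in stg_orders.sql needs a value, for example a default in dbt_project.yml that you can override per run. A sketch:

# dbt_project.yml (excerpt)
vars:
  gcp_project: my-project-dev  # default; override with: dbt run --vars '{gcp_project: my-project-prod}'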

Week 4: Setup CI/CD

# Add GitHub Actions
cat > .github/workflows/dbt-test.yml << 'EOF'
# See example above
EOF

# Push to GitHub
git remote add origin git@github.com:company/data-platform-iac.git
git push -u origin main

# Add secrets in GitHub Settings → Secrets:
# - DBT_PROFILES_YML
# - GCP_SA_KEY_DEV
# - GCP_SA_KEY_PROD
# - GCP_SA_KEY_TERRAFORM
# - SLACK_WEBHOOK
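
If you prefer the command line to the Settings UI, the GitHub CLI can add the same secrets. A sketch (keyfile paths are hypothetical):

gh secret set GCP_SA_KEY_DEV < dev-service-account.json
gh secret set GCP_SA_KEY_PROD < prod-service-account.json
gh secret set SLACK_WEBHOOK   # prompts for the webhook URL
# repeat for GCP_SA_KEY_TERRAFORM and DBT_PROFILES_YML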

Conclusion

Infrastructure as Code for your Data Platform is not optional: it is a best practice every mature data team should adopt.

Key Takeaways:

  1. Why IaC?

    • Reproducibility, version control, collaboration
    • Disaster recovery: rebuild in minutes
    • 46x more frequent deploys, 96x faster recovery
  2. Terraform for Infrastructure

    • Provision BigQuery, Snowflake, storage, IAM
    • Multi-environment: dev, staging, prod
    • Declarative: describe desired state
  3. dbt for Transformations

    • SQL models as code in Git
    • Automated testing (data quality)
    • Environments via profiles.yml
  4. Git Workflows

    • Feature branches → PR → Review → Merge
    • Protected main branch
    • Full history, easy rollback
  5. CI/CD Pipelines

    • Automate testing on PR
    • Automate deployment on merge
    • Slack notifications
  6. Rollback Strategy

    • Git revert for code
    • Terraform state for infrastructure
    • Time travel for data (BigQuery, Snowflake)

What Should You Do Next?

Month 1:

  • Setup Git repo structure
  • Add Terraform for 1 environment (dev)
  • Migrate 1-2 key dbt models to Git

Month 2:

  • Add CI/CD for dbt tests
  • Add staging, prod environments
  • Migrate more dbt models

Month 3:

  • Full team adoption
  • Documentation
  • Train new hires

Need Help with Infrastructure as Code?

Carptech has helped 10+ companies implement IaC for their Data Platforms, from startups to enterprises.

We can help you:

  • ✅ Assess current state: manual vs IaC
  • ✅ Design IaC architecture (Terraform + dbt + Git)
  • ✅ Implement: repo setup, CI/CD pipelines
  • ✅ Migrate existing infrastructure to Terraform
  • ✅ Train your team on IaC best practices

Typical Results:

  • Deploy frequency: 1x/week → 10x/day
  • Mean time to recovery: 4 hours → 15 minutes
  • Onboarding time: 2 weeks → 2 days

Book a free IaC assessment →

This article was written by the Carptech Team, specialists in Infrastructure as Code and DevOps for Data. If you have questions about Terraform, dbt, CI/CD, or GitOps, please get in touch with us.

Questions about your Data Platform?

Carptech's team of experts is ready to provide a free consultation on the solution that best fits your business. Book a 60-minute session via Microsoft Teams or send us the contact form.

✓ 100% free • ✓ Microsoft Teams • ✓ No long-term commitment