For example, to create a VM in Google Cloud we would create a new file main.tf in a new directory, e.g. lab0. Instead of using Terraform variables we use a YAML document to define the values, as it is much easier to read. At the same time we separate logic from data, which makes our code reusable and easy to test. For instance, to test the code in a staging environment before promoting it to production, we would simply create one config for the staging environment and another for the production environment.
# lab0/main.tf
locals { config = yamldecode(file("./config.yml")) }

provider "google" {
  project = local.config.project.id
  region  = local.config.project.region
  zone    = local.config.project.zone
}

resource "google_compute_network" "net1" {
  name                    = local.config.network[0].name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "sub1" {
  network       = local.config.network[0].name
  name          = local.config.network[0].subnet[0].name
  ip_cidr_range = local.config.network[0].subnet[0].cidr
  region        = local.config.project.region
  depends_on    = [google_compute_network.net1]
}

resource "google_compute_instance" "vm1" {
  project                   = local.config.project.id
  name                      = local.config.vms[0].name
  machine_type              = local.config.vms[0].type
  allow_stopping_for_update = true
  boot_disk {
    initialize_params {
      image = local.config.vms[0].image
    }
  }
  network_interface {
    subnetwork = local.config.vms[0].net
  }
  depends_on = [google_compute_subnetwork.sub1]
}
---
# lab0/config.yml
project:
  id    : my-lab-vpc-1
  region: us-west1
  zone  : us-west1-c
network:
  - name: net1
    subnet:
      - name: net1-lan1
        cidr: 192.168.21.0/24
vms:
  - name : vm-1
    type : e2-micro
    image: debian-cloud/debian-11
    net  : net1-lan1
Before Terraform can create a deployment plan, it needs to initialize the working directory to make sure all dependencies (e.g. providers) are available. We do this by issuing the command terraform init. In the screenshots below we used the alias tf=/usr/bin/terraform for the terraform command.
Now that we have initialized the working environment we can create a plan. To do so we use the command terraform plan.
If we are happy with the plan we can apply it with terraform apply.
If we don't need the existing VM anymore we can destroy it with the terraform destroy command. The screenshot below shows only part of the terraform destroy command output.
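The whole workflow described above can be summarized as a short command sequence (a sketch, not a captured session; run from inside the lab0 directory):

```
cd lab0            # directory containing main.tf and config.yml
terraform init     # download the google provider, set up the working directory
terraform plan     # preview the resources terraform would create
terraform apply    # create the network, subnet and VM (asks for confirmation)
terraform destroy  # tear everything down again when no longer needed
```

Note that init only has to be run once per working directory (or again after adding new providers or modules).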
Looking at the example above, we notice that it doesn't scale well. To add a VM we would need to add another google_compute_instance resource and carefully wire up the correct indices, which makes the code quite error-prone to work with.
# lab0a/main.tf
locals { config = yamldecode(file("./config.yml")) }

provider "google" {
  project = local.config.project.id
  region  = local.config.project.region
  zone    = local.config.project.zone
}

resource "google_compute_network" "net1" {
  name                    = local.config.network[0].name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "sub1" {
  network       = local.config.network[0].name
  name          = local.config.network[0].subnet[0].name
  ip_cidr_range = local.config.network[0].subnet[0].cidr
  region        = local.config.project.region
  depends_on    = [google_compute_network.net1]
}

resource "google_compute_instance" "vm1" {
  project                   = local.config.project.id
  name                      = local.config.vms[0].name
  machine_type              = local.config.vms[0].type
  allow_stopping_for_update = true
  boot_disk {
    initialize_params {
      image = local.config.vms[0].image
    }
  }
  network_interface {
    subnetwork = local.config.vms[0].net
  }
  depends_on = [google_compute_subnetwork.sub1]
}

resource "google_compute_instance" "vm2" {
  project                   = local.config.project.id
  name                      = local.config.vms[1].name
  machine_type              = local.config.vms[1].type
  allow_stopping_for_update = true
  boot_disk {
    initialize_params {
      image = local.config.vms[1].image
    }
  }
  network_interface {
    subnetwork = local.config.vms[1].net
  }
  depends_on = [google_compute_subnetwork.sub1]
}
---
# lab0a/config.yml
project:
  id    : my-lab-vpc-1
  region: us-west1
  zone  : us-west1-c
network:
  - name: net1
    subnet:
      - name: net1-lan1
        cidr: 192.168.21.0/24
vms:
  - name : vm-1
    type : e2-micro
    image: debian-cloud/debian-11
    net  : net1-lan1
  - name : vm-2
    type : e2-micro
    image: debian-cloud/debian-11
    net  : net1-lan1
To fix this we need a way to loop over our configuration and generate the resource blocks dynamically. For this we use the for_each meta-argument. We will also add support for multiple networks and multiple subnets per network, which means we have to adjust our config.yml a little: subnets get their own top-level section, since nested loops are not possible (at least not trivially) in Terraform. To tell subnets apart easily, we just prefix the subnet name with the network name, e.g. if the network is called net1 then the subnet could be called net1-lan1.
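To see what feeds for_each, consider how the for expression turns a list of objects into a map keyed by name. An illustrative terraform console session with a made-up two-network list (output shape may vary slightly between Terraform versions):

```
> { for net in [{ name = "net1" }, { name = "net2" }] : net.name => net }
{
  "net1" = {
    "name" = "net1"
  }
  "net2" = {
    "name" = "net2"
  }
}
```

for_each requires a map (or set of strings), which is why we convert the list first; each.key is then the name and each.value the whole object.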
# lab0b/main.tf
locals { config = yamldecode(file("./config.yml")) }

provider "google" {
  project = local.config.project.id
  region  = local.config.project.region
  zone    = local.config.project.zone
}

resource "google_compute_network" "net" {
  for_each                = { for net in local.config.network : net.name => net }
  name                    = each.value.name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "sub" {
  for_each      = { for sub in local.config.subnet : sub.name => sub }
  network       = each.value.net
  name          = each.value.name
  ip_cidr_range = each.value.cidr
  region        = local.config.project.region
  depends_on    = [google_compute_network.net]
}

resource "google_compute_instance" "vm" {
  for_each                  = { for vm in local.config.vms : vm.name => vm }
  project                   = local.config.project.id
  name                      = each.value.name
  machine_type              = each.value.type
  allow_stopping_for_update = true
  boot_disk {
    initialize_params {
      image = each.value.image
    }
  }
  network_interface {
    subnetwork = each.value.net
  }
  depends_on = [google_compute_subnetwork.sub]
}
---
# lab0b/config.yml
project:
  id    : my-lab-vpc-1
  region: us-west1
  zone  : us-west1-c
network:
  - name: net1
  - name: net2
  - name: net3
subnet:
  - name: net1-lan1
    net : net1
    cidr: 192.168.11.0/24
  - name: net2-lan1
    net : net2
    cidr: 192.168.21.0/24
  - name: net2-lan2
    net : net2
    cidr: 192.168.22.0/24
  - name: net3-lan1
    net : net3
    cidr: 192.168.31.0/24
vms:
  - name : vm-1
    type : e2-micro
    image: debian-cloud/debian-11
    net  : net1-lan1
  - name : vm-2
    type : e2-micro
    image: centos-7-v20231010
    net  : net1-lan1
  - name : vm-3
    type : e2-micro
    image: debian-cloud/debian-11
    net  : net2-lan1
  - name : vm-4
    type : e2-micro
    image: centos-7-v20231010
    net  : net2-lan2
Now that we have code that scales nicely, we might want to reuse it in other projects. We could simply copy main.tf into every project, but if we later add or change a feature that should affect all projects (e.g. enabling the os-login feature on the VMs), we would need to edit or replace every copy. A better solution is to turn our code into a module that can be reused in different projects by just including it via a module block.
We will now transform our resource configuration into modules. To do so we create a new directory structure modules/gcp/.
lab0a/
lab0b/
+- main.tf      # <<< resource config we will transform
+- config.yml   # <<< yaml configuration we will keep
lab0c/
+- main.tf      # <<< new resource config that uses the lab module
+- config.yml   # <<< copied yaml configuration from lab0b
modules/
+- aws/
+- gcp/
   +- net/
   |  +- main.tf      # <<< new net module, from the network resource of lab0b
   |  +- variables.tf # <<< to pass the yaml config to the module
   +- subnet/
   |  +- main.tf      # <<< new subnet module, from the subnet resource of lab0b
   |  +- variables.tf # <<< to pass the yaml config to the module
   +- vm/
   |  +- main.tf
   |  +- variables.tf
   +- lab/            # <<< this is the new lab module
      +- main.tf
      +- variables.tf
We start with the net module. We just copy the resource block from lab0b and change the reference from local.config.network to var.nets, since the module has no access to the calling configuration's locals. As we will see later, the configuration is passed to the module as a variable, which we declare as nets in the module's variables.tf.
# modules/gcp/net/main.tf
resource "google_compute_network" "net" {
  for_each                = { for net in var.nets : net.name => net }
  name                    = each.value.name
  auto_create_subnetworks = false
}

# modules/gcp/net/variables.tf
variable "nets" { type = any }
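Because the module depends only on its nets variable, it can be exercised in isolation. A sketch of a hypothetical scratch directory next to the lab folders (the directory name, project id and test value are made up for illustration):

```
# scratch/main.tf (hypothetical)
provider "google" {
  project = "my-test-project" # made-up project id
}

module "net_test" {
  source = "../modules/gcp/net"
  nets   = [{ name = "test-net1" }] # minimal test input, same shape as config.yml
}
```

Running terraform plan in scratch/ then shows exactly one google_compute_network to be created, which is a cheap way to verify the module logic before wiring it into a real project.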
Next we create the subnet module. We need to create two variables, as we have to pass two sections of our yaml config, namely
subnet
project
# modules/gcp/subnet/main.tf
resource "google_compute_subnetwork" "sub" {
  for_each      = { for sub in var.subnets : sub.name => sub }
  network       = each.value.net
  name          = each.value.name
  ip_cidr_range = each.value.cidr
  region        = var.project.region
}

# modules/gcp/subnet/variables.tf
variable "subnets" { type = any }
variable "project" { type = any }
Now we create the vm module. We also need 2 variables to pass the specific sections from the yaml config.
# modules/gcp/vm/main.tf
resource "google_compute_instance" "vm" {
  for_each                  = { for vm in var.vms : vm.name => vm }
  project                   = var.project.id
  name                      = each.value.name
  machine_type              = each.value.type
  allow_stopping_for_update = true
  boot_disk {
    initialize_params {
      image = each.value.image
    }
  }
  network_interface {
    subnetwork = each.value.net
  }
}

# modules/gcp/vm/variables.tf
variable "vms" { type = any }
variable "project" { type = any }
Now we can create a dedicated lab module that can be reused by just changing the yaml config. The lab module uses the modules we just created, i.e. net, subnet and vm. This way we can reuse those modules to build other composite modules, e.g. a db-lab module that uses an additional sqldb module.
# modules/gcp/lab/main.tf
# PROJECT
provider "google" {
  project = var.config.project.id
  region  = var.config.project.region
  zone    = var.config.project.zone
}

# NETWORKS
module "net" {
  source = "../net"
  nets   = var.config.network
}

# SUBNETS
module "subnet" {
  source     = "../subnet"
  project    = var.config.project
  subnets    = var.config.subnet
  depends_on = [module.net]
}

# VMs
module "vm" {
  source     = "../vm"
  project    = var.config.project
  vms        = var.config.vms
  depends_on = [module.subnet]
}

# modules/gcp/lab/variables.tf
variable "config" { type = any }
To use our newly created lab module we create a module block and point its source argument at the folder where the lab module is located. We also pass our yaml config.
# lab0c/main.tf
locals { config = yamldecode(file("./config.yml")) }

module "lab0c" {
  source = "../modules/gcp/lab"
  config = local.config
}
---
# lab0c/config.yml
project:
  id    : my-lab-vpc-1
  region: us-west1
  zone  : us-west1-c
network:
  - name: net1
  - name: net2
  - name: net3
subnet:
  - name: net1-lan1
    net : net1
    cidr: 192.168.11.0/24
  - name: net2-lan1
    net : net2
    cidr: 192.168.21.0/24
  - name: net2-lan2
    net : net2
    cidr: 192.168.22.0/24
  - name: net3-lan1
    net : net3
    cidr: 192.168.31.0/24
vms:
  - name : vm-1
    type : e2-micro
    image: debian-cloud/debian-11
    net  : net1-lan1
  - name : vm-2
    type : e2-micro
    image: centos-7-v20231010
    net  : net1-lan1
  - name : vm-3
    type : e2-micro
    image: debian-cloud/debian-11
    net  : net2-lan1
  - name : vm-4
    type : e2-micro
    image: centos-7-v20231010
    net  : net2-lan2
To apply the lab0c configuration, we issue the command terraform apply in the lab0c folder.
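Since lab0c is a fresh working directory that now pulls in a local module, terraform init has to be run there once before the first apply (a sketch of the command sequence, not a captured session):

```
cd lab0c
terraform init   # installs the google provider and registers the local lab module
terraform plan   # should show 3 networks, 4 subnets and 4 VMs to be created
terraform apply
```

If the module source is changed later (e.g. moved to a git repository), terraform init has to be re-run to pick up the new location.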
https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
https://developer.hashicorp.com/terraform/language/expressions/for