5 Good Practices to Start a Terraform Project

Bootstrapping a new project with Terraform resources is not that obvious. Getting started is easy, of course, but if you want your project to scale, avoid getting trapped later on, and let people collaborate efficiently, there are a few things you should consider from the very beginning.

I've put together a short list of 5 good practices you might want to adopt from the start of your project. Not that I've only hit 5 issues by doing things wrong, but I'm lazy and I want to keep things simple and fast. If you've faced issues of your own and addressed them with good practices in your Terraform projects, don't hesitate to share them: leave a comment and explain how we could avoid some mistakes...

Use one directory per deployment

In other terms, use modules and module parameters rather than variables and .tfvars files stored somewhere outside the deployment directory.

What I like most about Terraform is that it is super easy to create and destroy resources. Say you want to run a test and you've already described the resources you need: all you have to do is deploy a new instance of your configuration by running it again somewhere else...

You might have figured out already that a number of configuration items should depend on the deployment. For instance, you will *NOT* deploy the production DNS CNAME on your test configuration, right? So you will very likely introduce parameters into your configuration and, because you can, you will start to use them a lot. And by the way, that is good and fun!

Terraform allows you to define variables and store them in files. So you might consider organizing your project with (1) one directory containing the configuration you want to build and (2) a file, or a set of files, per set of variable values that represent the parameters of the configuration instance you want to deploy. Actually, that is a bad idea!

A much better idea is to keep one directory per deployed configuration, so that you have something like below, and rely on modules and module parameters rather than externally stored variables:
  - [deployments]
    - [production]
      - [part1] (relies on module X, Y)
      - [part2] (relies on module Z and current state of production/part1)
    - [test]
      - [part1] (relies on module X, Y)
      - [part2] (relies on module Z and current state of test/part1)
    - ...
  - [modules]
    - [module A]
    - [module X] (relies on module A...)
    - [module Y]
    - [module Z]
The biggest benefit of this kind of approach is that you don't need to reference any files other than the ones in the current directory and, as such, there is no chance of messing up your deployed configuration.

Just run terraform get, terraform remote pull, terraform plan, and terraform apply. In addition to an easier and less error-prone configuration, you'll figure out over time that working with modules improves the way you can reuse and evolve your code.
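To make the layout above concrete, a deployment directory boils down to a few module calls carrying deployment-specific parameters. This is only a sketch: the module name, source path, and parameter names below are assumptions for illustration.

```hcl
# deployments/test/part1/main.tf (hypothetical example)
# Module path and parameter names are assumptions, not a prescribed layout.
module "part1" {
  source = "../../../modules/moduleX"

  # Deployment-specific values live here, in the deployment directory,
  # instead of in external .tfvars files.
  environment = "test"
  dns_cname   = "test.example.com"
}
```

The production directory holds the same module call with its own parameter values, which is what makes each deployment self-contained.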

Use the Random Provider

Relying on randomly generated keys in resource names is definitely a good idea too... Not only can it help identify which resources are part of a deployed configuration, but it also helps solve potential problems with name collisions.

Considering AWS for instance, a bucket name must be unique across all of AWS, and the names of a role, a CloudWatch log group or a security group must be unique within a given account. Adding a random key to resource names makes it possible to deploy a single configuration multiple times and, as a result, the script below can be applied several times:
resource "random_id" "idkey" {
   byte_length = 8
}

resource "aws_s3_bucket" "terraform_bucket" {
   bucket = "terraform.${random_id.idkey.hex}"
   acl    = "private"

   versioning {
      enabled = true
   }
}

output "idkey" {
   value = "${random_id.idkey.hex}"
}
Use Provider Clients

Using provider clients in addition to Terraform can be helpful in several ways. First, it allows you to reference credentials without actually storing them in the configuration. For instance, if you're using the parameters below in a module, all you need is to create a profile in ~/.aws/credentials with the access/secret keys for the provider. No need for any variable files:
variable "aws_region" {}
variable "aws_profile" {}

provider "aws" {
    profile = "${var.aws_profile}"
    region  = "${var.aws_region}"
}
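The matching profile is just an entry in the AWS shared credentials file. The profile name and key values below are placeholders:

```ini
# ~/.aws/credentials (the profile name "myprofile" is an assumption)
[myprofile]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...
```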
Second, the local-exec provisioner can run custom commands against those clients, including operations that are *NOT* supported by Terraform yet. For instance, you could easily set up S3 Cross-Region Replication, or whatever comes to your mind that is not part of your Terraform version yet.

Below is an example of the use of an aws ssm command:
resource "null_resource" "myresource" {
  provisioner "local-exec" {
    command = <<EOF
export AWS_DEFAULT_REGION="${var.region}"
aws ssm send-command --profile ${var.myprofile} --instance-ids \
    ${aws_instance.server1.id} --document-name ${aws_ssm_document.mydoc.name}
EOF
  }

  depends_on = ["aws_ssm_document.mydoc", "aws_instance.server1"]
}

Store the State Externally

An important part of a Terraform deployment is the state file. Its main purpose is to keep track of the resources that have been deployed by Terraform and link them to the model you've written. It is very important that this file is shared between people. However, it does not make a lot of sense to keep it in the software configuration management repository, one reason being that it can contain values that are sensitive to the deployment. If you're using AWS and don't have access to Atlas, then S3 is probably the way to go.
  • Run a command like the one below to configure connectivity to your externally stored state:
terraform remote config -backend=s3 \
   -backend-config="bucket=terraform-myprod-xxxxxxxxxxxxx" \
   -backend-config="key=tf-state/xxx.state" \
   -backend-config="profile=myprofile" \
   -backend-config="region=eu-west-1"
  • The command above creates a state file named .terraform/terraform.tfstate that looks like below:
{
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "terraform-myprod-xxxxxxxxxxxxx",
            "key": "tf-state/xxx.state",
            "profile": "myprofile",
            "region": "eu-west-1"
        }
    }
}
You can disable/enable the link to the state file every time you are done with a deployment. Another way to deal with it is to keep the file out of the repository and run terraform remote pull before any new use.
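If you go the second way, a minimal ignore file keeps the local state copies out of version control. The patterns below assume Git:

```gitignore
# .gitignore for a deployment directory (assumes Git)
.terraform/
*.tfstate
*.tfstate.backup
```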

Layer your Project and Use Data Sources

Last but not least, you'll end up figuring out that your configuration can be split into pieces. For instance:
  • you might not want to deploy, and simply cannot deploy, as many Docker registries as the number of configurations you own
  • you might not need to deploy CloudFront, DNS or certificates on your test environments
  • you might want to let different people manage different parts of your deployments
Instead of building conditional logic into modules with the count parameter, a nice way to handle this consists in creating separate configurations that rely on each other. In order to do so:
  • Create outputs for your configuration like in the configuration below:
resource "random_id" "idkey" {
   byte_length = 8
}

output "idkey" {
   value = "${random_id.idkey.hex}"
}
  • Link layers to each other by referencing the state of another layer with a terraform_remote_state data source:
data "terraform_remote_state" "key" {
    backend = "s3"
    config {
        bucket  = "terraform-prod-xxxxxxxxxxxxx"
        key     = "tf-state/xxxx.state"
        region  = "eu-west-1"
        profile = "myprofile"
    }
}
Once done, you should be able to reference values from another configuration with a reference like ${data.terraform_remote_state.key.idkey}. Because you don't want those configurations to form cyclic references, you'll have to think of them in terms of layers.
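As an illustration, the idkey exported by the lower layer can then feed a resource name in the upper layer. The bucket below is hypothetical:

```hcl
# Hypothetical upper-layer resource reusing the key exported by the lower layer
resource "aws_s3_bucket" "logs" {
  bucket = "logs.${data.terraform_remote_state.key.idkey}"
  acl    = "private"
}
```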

Layering your configuration will also definitely help you speed up deployments and reduce the amount of downtime. If you are running continuous deployment of micro-services for instance, it is very likely that your Docker application will be delivered daily while your infrastructure might not change for weeks.

Consider Other Good Practices

After a while, you'll find plenty of ways to improve your infrastructure code. To name a few, you should consider using null_resource, map variables, Consul... You might also want to use Atlas, Docker, Chef or Ansible, AWS Lambdas and tags...

And you, what do you consider good practices when building a project with Terraform?



