Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be un-published in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible after the AMI has been un-published.
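
    If you automate deployments against a pinned AMI, you can check from the command line whether that AMI is still published before rolling out. The sketch below uses the AWS CLI; the owner alias and name filter mirror the aws_ami data source used in the Terraform section later in this document, and the AMI ID is simply the current Alpha amd64 image for us-east-1 used as an example.

    # Check whether a specific (pinned) AMI is still available in a region.
    aws ec2 describe-images --region us-east-1 --image-ids ami-0cc969a1869dde521

    # Or look up the most recently published Flatcar Stable AMI for amd64.
    aws ec2 describe-images --region us-east-1 \
      --owners aws-marketplace \
      --filters "Name=name,Values=Flatcar-stable-*" "Name=architecture,Values=x86_64" \
      --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' --output text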

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4230.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-000dea1dc1c447ea0 Launch Stack
    HVM (arm64) ami-0ea902ff0a72557ba Launch Stack
    ap-east-1 HVM (amd64) ami-06efdac7ec94d001d Launch Stack
    HVM (arm64) ami-03f54a53ac36d3e0d Launch Stack
    ap-northeast-1 HVM (amd64) ami-01e8a647d87acbe8e Launch Stack
    HVM (arm64) ami-0e1449ec64f710f88 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0a4dba82acd25d0de Launch Stack
    HVM (arm64) ami-0f699ed64ebc9db47 Launch Stack
    ap-south-1 HVM (amd64) ami-0c203ab50f844fe45 Launch Stack
    HVM (arm64) ami-00282bfdc3e718cc6 Launch Stack
    ap-southeast-1 HVM (amd64) ami-05aeb8caa5c009d80 Launch Stack
    HVM (arm64) ami-09aac855cb95ac19c Launch Stack
    ap-southeast-2 HVM (amd64) ami-071d21559645cb996 Launch Stack
    HVM (arm64) ami-071e651d1963226e6 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0fc839c5b9e8cbfbf Launch Stack
    HVM (arm64) ami-069f4ca5957db8a20 Launch Stack
    ca-central-1 HVM (amd64) ami-0713b5794a56cc05b Launch Stack
    HVM (arm64) ami-050edb546cd451b21 Launch Stack
    eu-central-1 HVM (amd64) ami-0462387222ca72613 Launch Stack
    HVM (arm64) ami-00827455f8f0dd61a Launch Stack
    eu-north-1 HVM (amd64) ami-0ffb1ee6a305aaaa0 Launch Stack
    HVM (arm64) ami-039da22821d171e81 Launch Stack
    eu-south-1 HVM (amd64) ami-028974add2358627b Launch Stack
    HVM (arm64) ami-0a49b1dfd4713bd45 Launch Stack
    eu-west-1 HVM (amd64) ami-07e34bfda51fea4fb Launch Stack
    HVM (arm64) ami-0daad6ba160aca982 Launch Stack
    eu-west-2 HVM (amd64) ami-04b18150da2d46213 Launch Stack
    HVM (arm64) ami-0cf0c8acaf6a817be Launch Stack
    eu-west-3 HVM (amd64) ami-033f3ecade132233a Launch Stack
    HVM (arm64) ami-06d0e7f8dfc2c55df Launch Stack
    me-south-1 HVM (amd64) ami-0c6a04b2094f51652 Launch Stack
    HVM (arm64) ami-0667f44f86945b1ab Launch Stack
    sa-east-1 HVM (amd64) ami-0e50014f3044b32bd Launch Stack
    HVM (arm64) ami-05c2897d8c0c9fad2 Launch Stack
    us-east-1 HVM (amd64) ami-0cc969a1869dde521 Launch Stack
    HVM (arm64) ami-000fa922fa3d201e1 Launch Stack
    us-east-2 HVM (amd64) ami-088d0142fa99a16f2 Launch Stack
    HVM (arm64) ami-06b83bc2aa5939dd2 Launch Stack
    us-west-1 HVM (amd64) ami-06b054b92404effcd Launch Stack
    HVM (arm64) ami-0490de8e14e2dc942 Launch Stack
    us-west-2 HVM (amd64) ami-0eb0713169a550633 Launch Stack
    HVM (arm64) ami-0de6c8aab9a6135ec Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4186.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-02fca9744ddf5b67f Launch Stack
    HVM (arm64) ami-06bd8d441ac79bf01 Launch Stack
    ap-east-1 HVM (amd64) ami-0311ebc3d74c2db56 Launch Stack
    HVM (arm64) ami-0c20911cf694a6f52 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0f535584f03031771 Launch Stack
    HVM (arm64) ami-0c93b18130a24a6ab Launch Stack
    ap-northeast-2 HVM (amd64) ami-04b71d63807c6fb47 Launch Stack
    HVM (arm64) ami-06701631a705a29b1 Launch Stack
    ap-south-1 HVM (amd64) ami-04f20bfb885fdc145 Launch Stack
    HVM (arm64) ami-0120d8c728809bbc1 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0c4ac24e67d357897 Launch Stack
    HVM (arm64) ami-08f4c7b725ce3072f Launch Stack
    ap-southeast-2 HVM (amd64) ami-096e4ed5d4e5f11e8 Launch Stack
    HVM (arm64) ami-0479e322c3abead4b Launch Stack
    ap-southeast-3 HVM (amd64) ami-0f1384d2f806ecea9 Launch Stack
    HVM (arm64) ami-035fd65c1c7db620b Launch Stack
    ca-central-1 HVM (amd64) ami-0624eddaed8a3e9fa Launch Stack
    HVM (arm64) ami-04838ddd7fdbc7f07 Launch Stack
    eu-central-1 HVM (amd64) ami-07d2ef97ef8dd6327 Launch Stack
    HVM (arm64) ami-0009e88b5a6c4d7c7 Launch Stack
    eu-north-1 HVM (amd64) ami-0851ee7b0f84fec17 Launch Stack
    HVM (arm64) ami-07f0650f94e152cdf Launch Stack
    eu-south-1 HVM (amd64) ami-02c755205dafee066 Launch Stack
    HVM (arm64) ami-0313488a7b483402e Launch Stack
    eu-west-1 HVM (amd64) ami-02bf74becad60948b Launch Stack
    HVM (arm64) ami-09d3501256e30bee3 Launch Stack
    eu-west-2 HVM (amd64) ami-0a5a9d671bb262a9b Launch Stack
    HVM (arm64) ami-03e9e4e7f60a7bfd4 Launch Stack
    eu-west-3 HVM (amd64) ami-07070dbdea8d1f185 Launch Stack
    HVM (arm64) ami-0e68914936ecc2648 Launch Stack
    me-south-1 HVM (amd64) ami-006e0029a9d35f0e2 Launch Stack
    HVM (arm64) ami-0ceca50d2860e38bf Launch Stack
    sa-east-1 HVM (amd64) ami-09bf5df08fab5f7c3 Launch Stack
    HVM (arm64) ami-04bd47b5f1f103879 Launch Stack
    us-east-1 HVM (amd64) ami-078c3112042912f4a Launch Stack
    HVM (arm64) ami-0424b6544f860c92c Launch Stack
    us-east-2 HVM (amd64) ami-02faa37c67d72cc16 Launch Stack
    HVM (arm64) ami-07e7a1049d298d504 Launch Stack
    us-west-1 HVM (amd64) ami-0205fd0cd74cb5d6c Launch Stack
    HVM (arm64) ami-054b1cd2440099864 Launch Stack
    us-west-2 HVM (amd64) ami-082449480d2287652 Launch Stack
    HVM (arm64) ami-04bf5ce08601fdb9b Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4152.2.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-04c5ba92997a49f9e Launch Stack
    HVM (arm64) ami-09e407c8b36782b33 Launch Stack
    ap-east-1 HVM (amd64) ami-0b3b7ff22ce497db0 Launch Stack
    HVM (arm64) ami-066ed9d8e1158d84f Launch Stack
    ap-northeast-1 HVM (amd64) ami-07a1555a16c3661a7 Launch Stack
    HVM (arm64) ami-08669b0d756c8cff2 Launch Stack
    ap-northeast-2 HVM (amd64) ami-07e159df1169ccea6 Launch Stack
    HVM (arm64) ami-0bc640f891ab96a1d Launch Stack
    ap-south-1 HVM (amd64) ami-021bd333b9f80d9a9 Launch Stack
    HVM (arm64) ami-0e870e6705fb6b8e5 Launch Stack
    ap-southeast-1 HVM (amd64) ami-02c8d0eb921166f42 Launch Stack
    HVM (arm64) ami-02db6a410457d3ad1 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0f220988c385e310d Launch Stack
    HVM (arm64) ami-0c61c888143021681 Launch Stack
    ap-southeast-3 HVM (amd64) ami-056e10944c9e66005 Launch Stack
    HVM (arm64) ami-09a6340e957ec8581 Launch Stack
    ca-central-1 HVM (amd64) ami-0dd975add5295d6b3 Launch Stack
    HVM (arm64) ami-026cf249935d78d58 Launch Stack
    eu-central-1 HVM (amd64) ami-03a184adc607c344e Launch Stack
    HVM (arm64) ami-0990c8b4e1b6e10cd Launch Stack
    eu-north-1 HVM (amd64) ami-098db877d587cf4b2 Launch Stack
    HVM (arm64) ami-072f15b86dd163b97 Launch Stack
    eu-south-1 HVM (amd64) ami-088bfb8a108c08788 Launch Stack
    HVM (arm64) ami-01bb2889208476f10 Launch Stack
    eu-west-1 HVM (amd64) ami-046afe43b76170773 Launch Stack
    HVM (arm64) ami-04ba71192e1652e71 Launch Stack
    eu-west-2 HVM (amd64) ami-02eaec9df6d004228 Launch Stack
    HVM (arm64) ami-086decd1df2b2f6a1 Launch Stack
    eu-west-3 HVM (amd64) ami-0a559d15264bdad4a Launch Stack
    HVM (arm64) ami-04de622874149ea24 Launch Stack
    me-south-1 HVM (amd64) ami-013272f2710dcf65f Launch Stack
    HVM (arm64) ami-08fc5b22539a09246 Launch Stack
    sa-east-1 HVM (amd64) ami-0f62d2fc713eec23a Launch Stack
    HVM (arm64) ami-0f12a70eeb8b409ca Launch Stack
    us-east-1 HVM (amd64) ami-05bf1b5c5ae916040 Launch Stack
    HVM (arm64) ami-013ae6aa3c06a1dfa Launch Stack
    us-east-2 HVM (amd64) ami-085c0ceb5f1bb1fcc Launch Stack
    HVM (arm64) ami-073a450ef2740f00c Launch Stack
    us-west-1 HVM (amd64) ami-0b2e9dbbb7794fc86 Launch Stack
    HVM (arm64) ami-0903e138085e5df8a Launch Stack
    us-west-2 HVM (amd64) ami-0d6d000347f74056c Launch Stack
    HVM (arm64) ami-0ea352045f69ec411 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
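
    # Optionally validate the transpiled config before handing it to EC2. This is a sketch
    # that assumes the quay.io/coreos/ignition-validate container image, which is not part
    # of the step above; it mounts the current directory and validates ignition.json.
    docker run --rm -v "$PWD":/pwd -w /pwd quay.io/coreos/ignition-validate:release ignition.json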
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
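
    If you want a persistent EBS volume instead of (or in addition to) the ephemeral disk, you can create and attach one and then format and mount it with a config like the one above. The AWS CLI sketch below uses placeholder volume and instance IDs and assumes an availability zone; on Nitro instance types the volume will appear as an NVMe device rather than /dev/xvdf.

    # Create a 100 GiB gp3 volume in the availability zone of the target instance (placeholder values).
    aws ec2 create-volume --availability-zone us-east-1a --size 100 --volume-type gp3

    # Attach it; on Xen instance types /dev/sdf typically shows up as /dev/xvdf in the guest.
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0123456789abcdef0 --device /dev/sdf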

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add an SSH key (or keys) via the AWS console, or add keys/passwords via your Butane Config, in order to log in.
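
    For example, an existing public key can be registered with EC2 from the command line so it can be selected at launch time; the key name and path below are placeholders (AWS CLI v2 syntax):

    # Import an existing public key as an EC2 key pair (placeholder name and path).
    aws ec2 import-key-pair --key-name flatcar-key \
      --public-key-material fileb://~/.ssh/id_rsa.pub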

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the "Stack Name". You can find the direct template file on S3.
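
    If you prefer the command line over the launch buttons above, a stack can also be created with the AWS CLI. The stack name and template URL below are placeholders; use the template location on S3 for the channel and region you chose:

    # Create an additional cluster under a new stack name (placeholder template URL).
    aws cloudformation create-stack \
      --stack-name flatcar-cluster-2 \
      --template-url "https://<bucket>.s3.amazonaws.com/<path>/<template>.json"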

    Manual setup

    TL;DR: launch three instances of ami-0cc969a1869dde521 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and use the same "User Data" on each host. SSH uses the core user and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001 and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First, we need to create a security group to allow Flatcar Container Linux instances to communicate with one another. The console steps are listed below; an equivalent AWS CLI sketch follows the list.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
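
    The same group can be created with the AWS CLI. This sketch assumes a default VPC, where rules can reference the group by name:

    # Create the group; the console equivalent is steps 1-2 above.
    aws ec2 create-security-group --group-name flatcar-testing \
      --description "Flatcar Container Linux instances"

    # Allow SSH from anywhere (step 4).
    aws ec2 authorize-security-group-ingress --group-name flatcar-testing \
      --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Allow the etcd ports between members of the group itself (step 5).
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-name flatcar-testing \
        --protocol tcp --port "$port" --source-group flatcar-testing
    done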

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group; an equivalent AWS CLI sketch follows the steps below.

    • Open the quick launch wizard to boot: Alpha ami-0cc969a1869dde521 (amd64), Beta ami-078c3112042912f4a (amd64), or Stable ami-05bf1b5c5ae916040 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the "User Data" field in the EC2 dashboard, "Continue"
    • Storage Configuration, “Continue”
    • Tags, “Continue”
      • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys set via your Butane/Ignition config, "Continue"
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
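
    The wizard steps above can be approximated with the AWS CLI. This sketch launches three Alpha amd64 instances in us-east-1; the instance type, key pair name and security group are assumptions to adjust for your account:

    # Launch three instances sharing the same Ignition config as user data.
    aws ec2 run-instances --region us-east-1 \
      --image-id ami-0cc969a1869dde521 \
      --count 3 \
      --instance-type t3.medium \
      --key-name flatcar-key \
      --security-groups flatcar-testing \
      --user-data file://ignition.json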

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
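
    The upload, snapshot import, and AMI registration can also be scripted with the AWS CLI. This is a sketch with a placeholder bucket and snapshot ID; note that VM Import/Export additionally requires the vmimport service role described in the linked documentation:

    # Upload the uncompressed VMDK to S3 (placeholder bucket).
    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-flatcar-import/flatcar.vmdk

    # Import it as an EBS snapshot and watch the task until it completes.
    aws ec2 import-snapshot \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-flatcar-import,S3Key=flatcar.vmdk}"
    aws ec2 describe-import-snapshot-tasks

    # Register an AMI from the imported snapshot (placeholder snapshot ID).
    aws ec2 register-image --name flatcar-from-vmdk \
      --architecture x86_64 --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"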

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
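
    If you need the addresses again later, the output defined in outputs.tf can be queried at any time:

    # Print the map of machine names to public IP addresses.
    terraform output ip-addresses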

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.