Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be unpublished in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible after the AMI has been unpublished.
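
    If you pin instances to a specific AMI (for example in an autoscaling group), you can check whether it is still published with the AWS CLI; the AMI ID below is the current Stable amd64 image for us-east-1 from the tables in this document:

    aws ec2 describe-images --image-ids ami-075769e51b1d134f5 --region us-east-1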

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3874.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0647fd86d11d396d3 Launch Stack
    HVM (arm64) ami-069deadd214c882b7 Launch Stack
    ap-east-1 HVM (amd64) ami-088f6e13a69594e32 Launch Stack
    HVM (arm64) ami-07d164507999aec68 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0a8ffacd56388a0f0 Launch Stack
    HVM (arm64) ami-096de7c623b501f0a Launch Stack
    ap-northeast-2 HVM (amd64) ami-051f852b440a00752 Launch Stack
    HVM (arm64) ami-081253b204243a2ad Launch Stack
    ap-south-1 HVM (amd64) ami-0134862d0cef75dfc Launch Stack
    HVM (arm64) ami-027115618a243bbd9 Launch Stack
    ap-southeast-1 HVM (amd64) ami-06f7dd6d6529dc5d5 Launch Stack
    HVM (arm64) ami-073d4c062272cc97f Launch Stack
    ap-southeast-2 HVM (amd64) ami-0adb6ee32ea56834c Launch Stack
    HVM (arm64) ami-0b8d260d1f1a3b1e1 Launch Stack
    ap-southeast-3 HVM (amd64) ami-040faa5e02dd26070 Launch Stack
    HVM (arm64) ami-09f57221e76fc4f49 Launch Stack
    ca-central-1 HVM (amd64) ami-057d57f2ce1909f2d Launch Stack
    HVM (arm64) ami-0e96a266e7ac07439 Launch Stack
    eu-central-1 HVM (amd64) ami-03432daf1fe4cb664 Launch Stack
    HVM (arm64) ami-06ec20ed99d545d36 Launch Stack
    eu-north-1 HVM (amd64) ami-09f5c4936a8e50b38 Launch Stack
    HVM (arm64) ami-04116b2805f6b2f69 Launch Stack
    eu-south-1 HVM (amd64) ami-066edc167f215695b Launch Stack
    HVM (arm64) ami-0984129cca82d67b9 Launch Stack
    eu-west-1 HVM (amd64) ami-0cfc0d9c2ba32881b Launch Stack
    HVM (arm64) ami-01c4496fc8b18ab04 Launch Stack
    eu-west-2 HVM (amd64) ami-01939c3ed6ce498f0 Launch Stack
    HVM (arm64) ami-01fcb157a446cee5f Launch Stack
    eu-west-3 HVM (amd64) ami-05aa1b25f1ea2b991 Launch Stack
    HVM (arm64) ami-043cb1828e6ac5fd3 Launch Stack
    me-south-1 HVM (amd64) ami-0e2167980563633d0 Launch Stack
    HVM (arm64) ami-0cc73db70bf56c7e9 Launch Stack
    sa-east-1 HVM (amd64) ami-0617bbda7c1df7a64 Launch Stack
    HVM (arm64) ami-0d838810e2f311d01 Launch Stack
    us-east-1 HVM (amd64) ami-08cd8705f88f72e0d Launch Stack
    HVM (arm64) ami-04f1fa6951f4c06f7 Launch Stack
    us-east-2 HVM (amd64) ami-0c994ee810773be63 Launch Stack
    HVM (arm64) ami-0659c38056e51ddd7 Launch Stack
    us-west-1 HVM (amd64) ami-0666f976aa7dbbcc0 Launch Stack
    HVM (arm64) ami-05e6e85f43b60901a Launch Stack
    us-west-2 HVM (amd64) ami-0817c7c88a0163349 Launch Stack
    HVM (arm64) ami-012ef9afa65135de4 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3850.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0d650fa50132b716b Launch Stack
    HVM (arm64) ami-0e5e724b0c9171225 Launch Stack
    ap-east-1 HVM (amd64) ami-0024b9deba69974b2 Launch Stack
    HVM (arm64) ami-0718eb47d4b13a8f8 Launch Stack
    ap-northeast-1 HVM (amd64) ami-089670dd09a8b0a5a Launch Stack
    HVM (arm64) ami-0784d1afc841ed704 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0bc1c8789d5052178 Launch Stack
    HVM (arm64) ami-015bda66abf8a6fb7 Launch Stack
    ap-south-1 HVM (amd64) ami-0cc951e72cf5f21a8 Launch Stack
    HVM (arm64) ami-0211bb65b5fda1517 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0b3c51cc2768aa169 Launch Stack
    HVM (arm64) ami-0e22afe1d122888b4 Launch Stack
    ap-southeast-2 HVM (amd64) ami-03095f0cf9357618d Launch Stack
    HVM (arm64) ami-0301a15c423c27646 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0602b20bbc45250e2 Launch Stack
    HVM (arm64) ami-09fdc92928f193ad8 Launch Stack
    ca-central-1 HVM (amd64) ami-04ebf266454b6f5fd Launch Stack
    HVM (arm64) ami-0fdad7469ebac4b8b Launch Stack
    eu-central-1 HVM (amd64) ami-074870d369d5c449f Launch Stack
    HVM (arm64) ami-0400e7fa335de92bd Launch Stack
    eu-north-1 HVM (amd64) ami-08f742974c8acb6c5 Launch Stack
    HVM (arm64) ami-04b35c9b78d8019d2 Launch Stack
    eu-south-1 HVM (amd64) ami-041619b3dfcc73dd7 Launch Stack
    HVM (arm64) ami-038c6f251f1054d75 Launch Stack
    eu-west-1 HVM (amd64) ami-095de6c51e7d4e1f1 Launch Stack
    HVM (arm64) ami-053dfe90cc1046291 Launch Stack
    eu-west-2 HVM (amd64) ami-0ca97939195e50e6d Launch Stack
    HVM (arm64) ami-0552d83d6cb69e627 Launch Stack
    eu-west-3 HVM (amd64) ami-01e602620dbca7a7f Launch Stack
    HVM (arm64) ami-0e4d4019c6a12a989 Launch Stack
    me-south-1 HVM (amd64) ami-0aa899069363824aa Launch Stack
    HVM (arm64) ami-085d8a7b3e5591504 Launch Stack
    sa-east-1 HVM (amd64) ami-0453646f2738e95a5 Launch Stack
    HVM (arm64) ami-0a5a731be9362bc83 Launch Stack
    us-east-1 HVM (amd64) ami-0e91653e81c21e32c Launch Stack
    HVM (arm64) ami-0e923528861787fbc Launch Stack
    us-east-2 HVM (amd64) ami-0611aef988cc6dea1 Launch Stack
    HVM (arm64) ami-09d8aac5d3095ffd3 Launch Stack
    us-west-1 HVM (amd64) ami-027b5915f5ef8a451 Launch Stack
    HVM (arm64) ami-0a0cbbdae42d3fcfb Launch Stack
    us-west-2 HVM (amd64) ami-0df2b556ea0e4e488 Launch Stack
    HVM (arm64) ami-0f824b4a8f6e06717 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3815.2.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0a81fd68075e84e8a Launch Stack
    HVM (arm64) ami-0a84bdb6d473c7d23 Launch Stack
    ap-east-1 HVM (amd64) ami-0852724b6813b50f8 Launch Stack
    HVM (arm64) ami-02abab3bdd8da2412 Launch Stack
    ap-northeast-1 HVM (amd64) ami-058f34902d4f2f755 Launch Stack
    HVM (arm64) ami-0e3c1f05d04937543 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0381f6fc3038269ea Launch Stack
    HVM (arm64) ami-00c8003103e0b14e7 Launch Stack
    ap-south-1 HVM (amd64) ami-0d894090ed3f32fb4 Launch Stack
    HVM (arm64) ami-01f61fef8747e235a Launch Stack
    ap-southeast-1 HVM (amd64) ami-043238225b03e36bf Launch Stack
    HVM (arm64) ami-094365b0e3514616d Launch Stack
    ap-southeast-2 HVM (amd64) ami-06981e6ced25ddfcf Launch Stack
    HVM (arm64) ami-0bfcb017e5d6c1e48 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0b563e62350daf24c Launch Stack
    HVM (arm64) ami-0094dd50b199d99bf Launch Stack
    ca-central-1 HVM (amd64) ami-035cd6644153a1701 Launch Stack
    HVM (arm64) ami-0de2a259d62d34c5b Launch Stack
    eu-central-1 HVM (amd64) ami-07dcdd09136891364 Launch Stack
    HVM (arm64) ami-0204a2e9c26b285c0 Launch Stack
    eu-north-1 HVM (amd64) ami-0061f9d16a577712b Launch Stack
    HVM (arm64) ami-088e813e806a21ace Launch Stack
    eu-south-1 HVM (amd64) ami-015216631fb5b208f Launch Stack
    HVM (arm64) ami-0df7e7d626ed38275 Launch Stack
    eu-west-1 HVM (amd64) ami-03e7be5ebcbdad1b6 Launch Stack
    HVM (arm64) ami-00feffa04fcb8c44b Launch Stack
    eu-west-2 HVM (amd64) ami-004dbb68401472961 Launch Stack
    HVM (arm64) ami-054905de5f874d949 Launch Stack
    eu-west-3 HVM (amd64) ami-058365ad192ade060 Launch Stack
    HVM (arm64) ami-0c44dea598e359ca8 Launch Stack
    me-south-1 HVM (amd64) ami-0bf56437333141635 Launch Stack
    HVM (arm64) ami-0512c6bcb2b9a18be Launch Stack
    sa-east-1 HVM (amd64) ami-0853690089d40d384 Launch Stack
    HVM (arm64) ami-050b1a2d487b95ecf Launch Stack
    us-east-1 HVM (amd64) ami-075769e51b1d134f5 Launch Stack
    HVM (arm64) ami-065ce4de03ac98e00 Launch Stack
    us-east-2 HVM (amd64) ami-00a37633b1dde815a Launch Stack
    HVM (arm64) ami-0eda39c018fc2800f Launch Stack
    us-west-1 HVM (amd64) ami-09bc63ee42db9abc1 Launch Stack
    HVM (arm64) ami-01c5fc927957e6bd5 Launch Stack
    us-west-2 HVM (amd64) ami-090c2b05e6e3e8d42 Launch Stack
    HVM (arm64) ami-0f2f91d398cb3a076 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
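
    For example, with the AWS CLI you can pass the transpiled Ignition JSON (see below) as user data when launching an instance. The key pair name is a placeholder and the AMI ID is the current Stable amd64 image for us-east-1 from the tables above:

    aws ec2 run-instances \
      --image-id ami-075769e51b1d134f5 \
      --instance-type t3.medium \
      --key-name my-key \
      --user-data file://ignition.json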

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
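
    Butane also accepts --strict (treat warnings as errors) and --pretty (indent the output), which can help catch mistakes in the config:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest --pretty --strict > ignition.json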
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
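
    Note that on newer (Nitro-based) instance types the block devices show up as NVMe device nodes rather than /dev/xvdb, so adjust the device path accordingly. Once the machine is up, you can verify the filesystem and mount from within the instance:

    lsblk -o NAME,LABEL,FSTYPE,MOUNTPOINT
    findmnt /media/ephemeral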

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-08cd8705f88f72e0d (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below; a rough AWS CLI equivalent follows the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
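
    If you prefer the command line, the following is a rough AWS CLI equivalent of the console steps above (the group name and description follow the example; for a non-default VPC you would also pass --vpc-id and use --group-id in the rule commands):

    aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances"
    aws ec2 authorize-security-group-ingress \
      --group-name flatcar-testing \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress \
        --group-name flatcar-testing \
        --protocol tcp --port "$port" \
        --source-group flatcar-testing
    done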

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group. An AWS CLI equivalent is shown after the list.

    • Open the quick launch wizard to boot: Alpha ami-08cd8705f88f72e0d (amd64), Beta ami-0e91653e81c21e32c (amd64), or Stable ami-075769e51b1d134f5 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys set via your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
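
    A rough AWS CLI equivalent of the wizard steps (the Stable amd64 AMI for us-east-1 is used here; the key pair name is a placeholder):

    aws ec2 run-instances \
      --image-id ami-075769e51b1d134f5 \
      --count 3 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-groups flatcar-testing \
      --user-data file://ignition.json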

    Installation from a VMDK image

    One possible installation method is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.
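
    A sketch of that flow with the AWS CLI, assuming an existing S3 bucket (my-flatcar-images is a placeholder) and the vmimport service role set up as described in the AWS documentation:

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-flatcar-images/
    aws ec2 import-snapshot \
      --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-flatcar-images,S3Key=flatcar_production_ami_vmdk_image.vmdk}"
    # Poll until the task status is "completed" (the task ID is a placeholder):
    aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0123456789abcdef0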

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and create an AMI from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
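
    The same can be done with the AWS CLI; the AMI name is arbitrary and snap-0123456789abcdef0 is a placeholder for the snapshot ID produced by the import task:

    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"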

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
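
    For example (the IP address below is a placeholder; use the one printed by terraform apply or terraform output):

    terraform output ip-addresses
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@203.0.113.10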

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.