Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are un-published in regular garbage-collection sweeps. Note that this does not affect existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) is no longer possible once the AMI has been un-published.
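
    To avoid pinning an AMI ID that may later be garbage-collected, you can resolve the newest published AMI at launch time. The following AWS CLI query is a minimal sketch; the owner and name filters mirror the ones used by the Terraform data source later in this document:

    aws ec2 describe-images \
      --owners aws-marketplace \
      --filters 'Name=name,Values=Flatcar-stable-*' 'Name=architecture,Values=x86_64' \
      --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
      --output text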

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
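
    If you do want to disable automatic updates, one common approach on a running machine is to stop or mask the update services, as sketched below; for a permanent setting, configure this through your Butane Config instead (see the Butane Configs section). The sketch assumes the standard update-engine and locksmithd units:

    # Stop update checks on a running machine (takes effect until the next boot)
    sudo systemctl stop update-engine.service
    # Or mask the update and reboot-coordination services so they never start
    sudo systemctl mask --now update-engine.service locksmithd.service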

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4116.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0a2005a90b524d222 Launch Stack
    HVM (arm64) ami-02c83ba55cca86577 Launch Stack
    ap-east-1 HVM (amd64) ami-05fbba523e83e3a9b Launch Stack
    HVM (arm64) ami-012906a594fb52a0d Launch Stack
    ap-northeast-1 HVM (amd64) ami-090d2ec11347b32fa Launch Stack
    HVM (arm64) ami-0f5d36487a3cebc55 Launch Stack
    ap-northeast-2 HVM (amd64) ami-03332ba652418e7b9 Launch Stack
    HVM (arm64) ami-012dac9f48d33c3f9 Launch Stack
    ap-south-1 HVM (amd64) ami-08cc97f84a976c051 Launch Stack
    HVM (arm64) ami-092bb2c62475336cd Launch Stack
    ap-southeast-1 HVM (amd64) ami-0b67a46199598c429 Launch Stack
    HVM (arm64) ami-00139be6ef4d237ee Launch Stack
    ap-southeast-2 HVM (amd64) ami-049cd792623f58263 Launch Stack
    HVM (arm64) ami-0fb72076617ca7102 Launch Stack
    ap-southeast-3 HVM (amd64) ami-092a47cb0bc395de1 Launch Stack
    HVM (arm64) ami-04c3c92d946f945bb Launch Stack
    ca-central-1 HVM (amd64) ami-04b694fa80903efa8 Launch Stack
    HVM (arm64) ami-04a3f6b4a8c222f1d Launch Stack
    eu-central-1 HVM (amd64) ami-0a93288f39593a876 Launch Stack
    HVM (arm64) ami-0ed0f9d256e11ab12 Launch Stack
    eu-north-1 HVM (amd64) ami-0c369104aa4dfdf7d Launch Stack
    HVM (arm64) ami-003bc506ea04ab7d0 Launch Stack
    eu-south-1 HVM (amd64) ami-0256cdf83f3d19947 Launch Stack
    HVM (arm64) ami-05415cd3f63ef12a9 Launch Stack
    eu-west-1 HVM (amd64) ami-0566fbb9896e09ba1 Launch Stack
    HVM (arm64) ami-073295b9c8e1baf86 Launch Stack
    eu-west-2 HVM (amd64) ami-0e8754411e2374e61 Launch Stack
    HVM (arm64) ami-02d9d3acb34de8dac Launch Stack
    eu-west-3 HVM (amd64) ami-0ee9d1e40e51ca20d Launch Stack
    HVM (arm64) ami-0bf0ef66d676c59d6 Launch Stack
    me-south-1 HVM (amd64) ami-01fdf76f8c57c5685 Launch Stack
    HVM (arm64) ami-09e9438bc1b377dcd Launch Stack
    sa-east-1 HVM (amd64) ami-0e06f4e7693767086 Launch Stack
    HVM (arm64) ami-0074ed883275f52e9 Launch Stack
    us-east-1 HVM (amd64) ami-099de63b55374fe32 Launch Stack
    HVM (arm64) ami-0839623f75b35ff0b Launch Stack
    us-east-2 HVM (amd64) ami-0854ac6020c8bd348 Launch Stack
    HVM (arm64) ami-02fa0c3fc0b07fc7b Launch Stack
    us-west-1 HVM (amd64) ami-0bfd0697ee8937cbe Launch Stack
    HVM (arm64) ami-0d16087f78da0c708 Launch Stack
    us-west-2 HVM (amd64) ami-0491a417f081cc429 Launch Stack
    HVM (arm64) ami-047370862ad4657a3 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4081.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0f28de28cfde78f46 Launch Stack
    HVM (arm64) ami-0b541bf73d8184d22 Launch Stack
    ap-east-1 HVM (amd64) ami-00cb9c687b162e03e Launch Stack
    HVM (arm64) ami-0747960f200c13071 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0ace3dc0cb65dcaa0 Launch Stack
    HVM (arm64) ami-0098fc0b0a9d3da0d Launch Stack
    ap-northeast-2 HVM (amd64) ami-00e9e6b4221c0d2ed Launch Stack
    HVM (arm64) ami-0fa9b29423c77da7d Launch Stack
    ap-south-1 HVM (amd64) ami-07ccdaaa75445a34c Launch Stack
    HVM (arm64) ami-07bf40d380a7e26b8 Launch Stack
    ap-southeast-1 HVM (amd64) ami-06b3e4fe965b24901 Launch Stack
    HVM (arm64) ami-0e30a5206377dc11d Launch Stack
    ap-southeast-2 HVM (amd64) ami-0d0d7feddd03cfc10 Launch Stack
    HVM (arm64) ami-07679d106aa695393 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0849ccfee537937ca Launch Stack
    HVM (arm64) ami-07492a74a27c52d66 Launch Stack
    ca-central-1 HVM (amd64) ami-06b28ca7770ec7bde Launch Stack
    HVM (arm64) ami-089af4360fe79ad99 Launch Stack
    eu-central-1 HVM (amd64) ami-05179ea081177f88f Launch Stack
    HVM (arm64) ami-0b6d2271132b718e5 Launch Stack
    eu-north-1 HVM (amd64) ami-0e4d1ba3b307a37b1 Launch Stack
    HVM (arm64) ami-0ec2406993fb3aae2 Launch Stack
    eu-south-1 HVM (amd64) ami-0e616880935a910bb Launch Stack
    HVM (arm64) ami-09571ae1cb69d11dc Launch Stack
    eu-west-1 HVM (amd64) ami-0879c881f30b63056 Launch Stack
    HVM (arm64) ami-0eb73bd077a341b26 Launch Stack
    eu-west-2 HVM (amd64) ami-06d72401746675751 Launch Stack
    HVM (arm64) ami-00694b5b9f2f7c9ab Launch Stack
    eu-west-3 HVM (amd64) ami-03da334c512f11482 Launch Stack
    HVM (arm64) ami-034ac1492934fe10c Launch Stack
    me-south-1 HVM (amd64) ami-049442237a96895e3 Launch Stack
    HVM (arm64) ami-052a753f54ecabc75 Launch Stack
    sa-east-1 HVM (amd64) ami-04ed06c4b9c67c104 Launch Stack
    HVM (arm64) ami-0c941f642d175a75c Launch Stack
    us-east-1 HVM (amd64) ami-014952c4ec422d9aa Launch Stack
    HVM (arm64) ami-0d285952e2f081ce0 Launch Stack
    us-east-2 HVM (amd64) ami-0db4b1e00da3544a0 Launch Stack
    HVM (arm64) ami-0df6819b94f3a9f5f Launch Stack
    us-west-1 HVM (amd64) ami-0e7e012690a2f5d66 Launch Stack
    HVM (arm64) ami-0cb936318cedf2a92 Launch Stack
    us-west-2 HVM (amd64) ami-06de75e45eb701ff2 Launch Stack
    HVM (arm64) ami-02e5ac5175c04acd9 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3975.2.2.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0aef44d513982b6b1 Launch Stack
    HVM (arm64) ami-00df9d004f85f7c87 Launch Stack
    ap-east-1 HVM (amd64) ami-061a1d5772d73ce20 Launch Stack
    HVM (arm64) ami-020ae8749c3d8c4e9 Launch Stack
    ap-northeast-1 HVM (amd64) ami-00840e68d7bd6ebc1 Launch Stack
    HVM (arm64) ami-03b9104ea8e82eba8 Launch Stack
    ap-northeast-2 HVM (amd64) ami-035c5ca55f481f4f1 Launch Stack
    HVM (arm64) ami-071f0d4143e76427d Launch Stack
    ap-south-1 HVM (amd64) ami-0638a6325bf97fe34 Launch Stack
    HVM (arm64) ami-0ea39e1310cbf0953 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0e844cf551c123321 Launch Stack
    HVM (arm64) ami-034bf44290d4cac07 Launch Stack
    ap-southeast-2 HVM (amd64) ami-087bab382ff614589 Launch Stack
    HVM (arm64) ami-0abe04072c38b99ea Launch Stack
    ap-southeast-3 HVM (amd64) ami-0dd09e0a30e7b191b Launch Stack
    HVM (arm64) ami-0a3308ab8dbef0b22 Launch Stack
    ca-central-1 HVM (amd64) ami-027ecad6e9fdd9243 Launch Stack
    HVM (arm64) ami-0d6628daea661356e Launch Stack
    eu-central-1 HVM (amd64) ami-0efea9719c5208432 Launch Stack
    HVM (arm64) ami-086bcbbe4d9878721 Launch Stack
    eu-north-1 HVM (amd64) ami-0fefca8672bbb7552 Launch Stack
    HVM (arm64) ami-0d8da8a3e3808decb Launch Stack
    eu-south-1 HVM (amd64) ami-0185fc5431a75de4d Launch Stack
    HVM (arm64) ami-06a833d32541031f8 Launch Stack
    eu-west-1 HVM (amd64) ami-08e310a85b15129a6 Launch Stack
    HVM (arm64) ami-0b60d43841b21eb53 Launch Stack
    eu-west-2 HVM (amd64) ami-0a5b7fb147cfb0c19 Launch Stack
    HVM (arm64) ami-0367bf7d1d9186d35 Launch Stack
    eu-west-3 HVM (amd64) ami-0e8ecdf3cbadd68ba Launch Stack
    HVM (arm64) ami-09ce3e6b66b63f3a8 Launch Stack
    me-south-1 HVM (amd64) ami-027a563601e260546 Launch Stack
    HVM (arm64) ami-0c97b19e212051a80 Launch Stack
    sa-east-1 HVM (amd64) ami-0263dcb8fc3a20879 Launch Stack
    HVM (arm64) ami-03907ee568eb24c7a Launch Stack
    us-east-1 HVM (amd64) ami-06e4091f863a9854e Launch Stack
    HVM (arm64) ami-05e5c6e1f41d267a2 Launch Stack
    us-east-2 HVM (amd64) ami-0926761a70722c43e Launch Stack
    HVM (arm64) ami-0c9a065fe193a697d Launch Stack
    us-west-1 HVM (amd64) ami-0bc6390230d73cbe3 Launch Stack
    HVM (arm64) ami-07a6ad5b32799d92f Launch Stack
    us-west-2 HVM (amd64) ami-08e680ab61b7bfade Launch Stack
    HVM (arm64) ami-00fd765f2c3649e9d Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
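
    With the Ignition JSON in hand, you can pass it as user data when launching an instance through the EC2 API. This AWS CLI sketch uses the Alpha amd64 AMI for us-east-1 from the table above; the key pair name and security group ID are placeholders to replace with your own values:

    aws ec2 run-instances \
      --region us-east-1 \
      --image-id ami-099de63b55374fe32 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-group-ids sg-0123456789abcdef0 \
      --user-data file://ignition.json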
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console or add keys/passwords via your Butane Config in order to log in.
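
    For example, a minimal Butane Config that only adds an SSH key for the core user could look like this (the key below is a placeholder; the same pattern appears in the Terraform template later in this document):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - "ssh-ed25519 AAAA... you@example.com"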

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-099de63b55374fe32 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions follow, along with an equivalent AWS CLI sketch after the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
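
    If you prefer the AWS CLI over the console, the following sketch creates an equivalent security group; the VPC ID and the security group ID returned by the first command are placeholders to replace with your own values:

    # Create the security group in your VPC (the VPC ID is a placeholder)
    aws ec2 create-security-group --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" --vpc-id vpc-0123456789abcdef0
    # Allow SSH from anywhere (use the GroupId returned above instead of the placeholder)
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # Allow the etcd ports between members of the group itself
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port "$port" --source-group sg-0123456789abcdef0
    done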

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-099de63b55374fe32 (amd64), Beta ami-014952c4ec422d9aa (amd64), or Stable ami-06e4091f863a9854e (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config in the EC2 dashboard into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
      • Create Key Pair: choose a key of your choice; it will be added in addition to any keys provided in your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible installation method is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3, for example as sketched below.
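
    This is a minimal sketch assuming the AWS CLI; the bucket name is a placeholder, and VM Import/Export additionally requires the vmimport service role described in Amazon’s documentation:

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk \
      s3://my-flatcar-import-bucket/flatcar_production_ami_vmdk_image.vmdk
    aws ec2 import-snapshot \
      --description "Flatcar VMDK import" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-flatcar-import-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"
    # Poll until the task reports "completed" and note the resulting snapshot ID
    aws ec2 describe-import-snapshot-tasks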

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
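
    If you prefer the command line over the console, this sketch shows an equivalent register-image call; the snapshot ID and image name are placeholders:

    aws ec2 register-image \
      --name flatcar-custom \
      --architecture x86_64 \
      --virtualization-type hvm \
      --ena-support \
      --root-device-name /dev/sda2 \
      --block-device-mappings 'DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}'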

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.