Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be un-published in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible once the AMI has been un-published.
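    If you pin deployments such as autoscaling groups to a specific AMI, you may want to check periodically whether that AMI is still published. As a small sketch (the AMI ID is a placeholder):

    aws ec2 describe-images --image-ids ami-xxxxxxxxxxxxxxxxx --region us-east-1
    # An error or an empty Images list indicates the AMI has been un-published.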

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
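    If you nevertheless want to disable automatic updates, one common approach (a sketch, not an official recommendation) is to mask the update engine in your Butane Config:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: update-engine.service
          mask: true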

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4152.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-000a83f3c06c0f6d5 Launch Stack
    HVM (arm64) ami-08f861aa644b94553 Launch Stack
    ap-east-1 HVM (amd64) ami-053c8a34141ad12fa Launch Stack
    HVM (arm64) ami-0512a73a366952d93 Launch Stack
    ap-northeast-1 HVM (amd64) ami-048c04eee46120725 Launch Stack
    HVM (arm64) ami-06cd47265e49572fc Launch Stack
    ap-northeast-2 HVM (amd64) ami-074808911aed2f05e Launch Stack
    HVM (arm64) ami-09a361ea037c41bf0 Launch Stack
    ap-south-1 HVM (amd64) ami-0b3c8afefba7569da Launch Stack
    HVM (arm64) ami-097800a3f03ccfcd7 Launch Stack
    ap-southeast-1 HVM (amd64) ami-05fa4f2559f40c2b7 Launch Stack
    HVM (arm64) ami-0588f7aa13187ce1a Launch Stack
    ap-southeast-2 HVM (amd64) ami-0a6499b976e1ba6ae Launch Stack
    HVM (arm64) ami-0cb54bc13f300896b Launch Stack
    ap-southeast-3 HVM (amd64) ami-0aad2e0ccb2734a2c Launch Stack
    HVM (arm64) ami-0d6be58ab3e5631ee Launch Stack
    ca-central-1 HVM (amd64) ami-095477abf31f93487 Launch Stack
    HVM (arm64) ami-06cf73dea887dbfa6 Launch Stack
    eu-central-1 HVM (amd64) ami-027d59d15d8fccabc Launch Stack
    HVM (arm64) ami-02f1d8acd7251d4b6 Launch Stack
    eu-north-1 HVM (amd64) ami-07df39056bef4ca13 Launch Stack
    HVM (arm64) ami-05560909cddf619a5 Launch Stack
    eu-south-1 HVM (amd64) ami-0f8f7243d182edeab Launch Stack
    HVM (arm64) ami-0ed9954be77ab323d Launch Stack
    eu-west-1 HVM (amd64) ami-044ed81176ae40bdb Launch Stack
    HVM (arm64) ami-0f0c5f1a5a161fdf9 Launch Stack
    eu-west-2 HVM (amd64) ami-09586dc3f39868b67 Launch Stack
    HVM (arm64) ami-0b11a3a04ae24e174 Launch Stack
    eu-west-3 HVM (amd64) ami-0057f240f90429de6 Launch Stack
    HVM (arm64) ami-0e9ed4c87b0f7afdc Launch Stack
    me-south-1 HVM (amd64) ami-06abd561697698abc Launch Stack
    HVM (arm64) ami-0f090d0b967a2de2d Launch Stack
    sa-east-1 HVM (amd64) ami-078542554ecbc72d7 Launch Stack
    HVM (arm64) ami-0bd075f4b37bd06f4 Launch Stack
    us-east-1 HVM (amd64) ami-0774e426bcff7d91f Launch Stack
    HVM (arm64) ami-0c5abd12be479aec6 Launch Stack
    us-east-2 HVM (amd64) ami-0c0889fd3add0870a Launch Stack
    HVM (arm64) ami-0fbc4648ac7f2fd6b Launch Stack
    us-west-1 HVM (amd64) ami-00ae0234582afadbf Launch Stack
    HVM (arm64) ami-022458905dece6828 Launch Stack
    us-west-2 HVM (amd64) ami-08e5b1bc7d47aa63f Launch Stack
    HVM (arm64) ami-0426b72fb7e63249e Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4116.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-05fa164963c1af150 Launch Stack
    HVM (arm64) ami-0338aa5cac987ea52 Launch Stack
    ap-east-1 HVM (amd64) ami-02492673175d96399 Launch Stack
    HVM (arm64) ami-0aea5bc1c557e7147 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0239b17a16f85921f Launch Stack
    HVM (arm64) ami-0806ff2d79bffcd59 Launch Stack
    ap-northeast-2 HVM (amd64) ami-07990851f092299f4 Launch Stack
    HVM (arm64) ami-0a1f678be412190ac Launch Stack
    ap-south-1 HVM (amd64) ami-0456e86721e375810 Launch Stack
    HVM (arm64) ami-00f980bfec60aabbf Launch Stack
    ap-southeast-1 HVM (amd64) ami-0de39384e02d7c78e Launch Stack
    HVM (arm64) ami-062710a0df56f2a96 Launch Stack
    ap-southeast-2 HVM (amd64) ami-07e19066f034f591c Launch Stack
    HVM (arm64) ami-020a5f0cb2ebb8cf8 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0804aded44486b59d Launch Stack
    HVM (arm64) ami-0d7e21f21fe9ce000 Launch Stack
    ca-central-1 HVM (amd64) ami-073831517892d3e69 Launch Stack
    HVM (arm64) ami-0d859c8e61485cb31 Launch Stack
    eu-central-1 HVM (amd64) ami-060e725a577f6282f Launch Stack
    HVM (arm64) ami-0e685003ed78c072d Launch Stack
    eu-north-1 HVM (amd64) ami-080398d6d08eb2ec5 Launch Stack
    HVM (arm64) ami-076cf363aab5062af Launch Stack
    eu-south-1 HVM (amd64) ami-051fa1c9bfacf69ba Launch Stack
    HVM (arm64) ami-066536f7909473351 Launch Stack
    eu-west-1 HVM (amd64) ami-0b41929541fe0e22e Launch Stack
    HVM (arm64) ami-0f090e89f8c24c16a Launch Stack
    eu-west-2 HVM (amd64) ami-060dd230bee48ad26 Launch Stack
    HVM (arm64) ami-09844c6539ec8b6c7 Launch Stack
    eu-west-3 HVM (amd64) ami-073a1fc11db6b9f3a Launch Stack
    HVM (arm64) ami-00e41c0a98e31d3fa Launch Stack
    me-south-1 HVM (amd64) ami-011a952667a264bb4 Launch Stack
    HVM (arm64) ami-0e1e36756044f07cc Launch Stack
    sa-east-1 HVM (amd64) ami-0c41b389c827d63c6 Launch Stack
    HVM (arm64) ami-003d2231c45ac3b81 Launch Stack
    us-east-1 HVM (amd64) ami-03c0f950149a7984d Launch Stack
    HVM (arm64) ami-0390b14062524e48a Launch Stack
    us-east-2 HVM (amd64) ami-0db4a3021afa7ebc1 Launch Stack
    HVM (arm64) ami-03105025f4f803a1a Launch Stack
    us-west-1 HVM (amd64) ami-07f0848338aa0ff05 Launch Stack
    HVM (arm64) ami-0c869037b038989d7 Launch Stack
    us-west-2 HVM (amd64) ami-042ed581d834d7296 Launch Stack
    HVM (arm64) ami-0dd0d8012077e6ca0 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4081.2.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0f9a0507edbab481e Launch Stack
    HVM (arm64) ami-0c36656b9a34fbd0f Launch Stack
    ap-east-1 HVM (amd64) ami-024a9ddaf8e927c00 Launch Stack
    HVM (arm64) ami-0ad597ff2c3fb562f Launch Stack
    ap-northeast-1 HVM (amd64) ami-0f312728b3cb847ee Launch Stack
    HVM (arm64) ami-038bef83a328055f3 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0ae0068920c31caa3 Launch Stack
    HVM (arm64) ami-053648249be7a0cfc Launch Stack
    ap-south-1 HVM (amd64) ami-07667d183efce2bc9 Launch Stack
    HVM (arm64) ami-08d1f4ffecb3e2e0f Launch Stack
    ap-southeast-1 HVM (amd64) ami-02daa76a3545843a1 Launch Stack
    HVM (arm64) ami-095c678dc3d018116 Launch Stack
    ap-southeast-2 HVM (amd64) ami-03a92434c7fbc1dd9 Launch Stack
    HVM (arm64) ami-0f0edf216713fc6ab Launch Stack
    ap-southeast-3 HVM (amd64) ami-0f276ae4f678cadca Launch Stack
    HVM (arm64) ami-063be2f8e4773e0e1 Launch Stack
    ca-central-1 HVM (amd64) ami-06db99f06babc7ee8 Launch Stack
    HVM (arm64) ami-00dfccb8804c6aabc Launch Stack
    eu-central-1 HVM (amd64) ami-02556b5a5d90a5592 Launch Stack
    HVM (arm64) ami-067f53b9263065501 Launch Stack
    eu-north-1 HVM (amd64) ami-05462ff1546d2f97f Launch Stack
    HVM (arm64) ami-0356347a63a1e8a93 Launch Stack
    eu-south-1 HVM (amd64) ami-07472af63cb5a36fa Launch Stack
    HVM (arm64) ami-052cb82297f4439c6 Launch Stack
    eu-west-1 HVM (amd64) ami-0b92a4bb5a81e29ef Launch Stack
    HVM (arm64) ami-0122faeb6a4188789 Launch Stack
    eu-west-2 HVM (amd64) ami-0dad0dd48cd1235dc Launch Stack
    HVM (arm64) ami-0138829ca1dfb2d74 Launch Stack
    eu-west-3 HVM (amd64) ami-05d1cd90778484306 Launch Stack
    HVM (arm64) ami-030059df344b63027 Launch Stack
    me-south-1 HVM (amd64) ami-0ad999b5e0adf2959 Launch Stack
    HVM (arm64) ami-00400c8ba86b25bfb Launch Stack
    sa-east-1 HVM (amd64) ami-06fd97e39d3abe48d Launch Stack
    HVM (arm64) ami-0a946f7fa73701997 Launch Stack
    us-east-1 HVM (amd64) ami-0c7fbdf3eee4a7251 Launch Stack
    HVM (arm64) ami-05dd12c699d719713 Launch Stack
    us-east-2 HVM (amd64) ami-08b4a116652281113 Launch Stack
    HVM (arm64) ami-0574c0a06163c35b9 Launch Stack
    us-west-1 HVM (amd64) ami-0f33f65610459b269 Launch Stack
    HVM (arm64) ami-06efa62c471146200 Launch Stack
    us-west-2 HVM (amd64) ami-0aababf7bfd202a99 Launch Stack
    HVM (arm64) ami-057be35777d2e9d56 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
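    For example, launching an instance through the EC2 API with an Ignition file as user data (produced by the transpilation step below) could look roughly like this; the AMI, key pair, and security group IDs are placeholders:

    aws ec2 run-instances \
      --image-id ami-xxxxxxxxxxxxxxxxx \
      --instance-type t3.medium \
      --key-name my-key \
      --security-group-ids sg-xxxxxxxx \
      --user-data file://ignition.json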

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type . Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console or add keys/passwords via your Butane Config in order to log in.
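    For instance, a minimal Butane snippet that adds a public key for the core user (the key itself is a placeholder) could look like this:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... [email protected]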

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.
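    A rough CLI equivalent (the template URL is a placeholder for the actual template file on S3, and any parameters the template requires are not shown) might be:

    aws cloudformation create-stack \
      --stack-name flatcar-cluster-2 \
      --template-url https://s3.amazonaws.com/<bucket>/<flatcar-template>.json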

    Manual setup

    TL;DR: launch three instances of ami-0774e426bcff7d91f (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step console instructions are below; an equivalent CLI sketch follows the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
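    If you prefer the command line, roughly the same setup can be sketched with the AWS CLI (assuming the default VPC; otherwise use --group-id with the returned security group ID):

    # Create the security group
    aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances"
    # Allow SSH from anywhere
    aws ec2 authorize-security-group-ingress \
      --group-name flatcar-testing --protocol tcp --port 22 --cidr 0.0.0.0/0
    # Allow etcd traffic between members of the same security group
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress \
        --group-name flatcar-testing --protocol tcp --port "$port" \
        --source-group flatcar-testing
    done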

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0774e426bcff7d91f (amd64), Beta ami-03c0f950149a7984d (amd64), or Stable ami-0c7fbdf3eee4a7251 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config in the EC2 dashboard into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice, it will be added in addition to the one in the gist, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible installation method is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and check it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
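    Assuming the vmimport service role from that guide is already set up, the upload and import could be sketched as follows (the bucket name is a placeholder):

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://<your-bucket>/flatcar.vmdk
    aws ec2 import-snapshot \
      --description "Flatcar Container Linux" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=<your-bucket>,S3Key=flatcar.vmdk}"
    # Poll until the import task completes and note the resulting snapshot ID
    aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-xxxxxxxxx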

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. To make it work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
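    A hedged CLI equivalent of that console step (the snapshot ID and image name are placeholders) might be:

    aws ec2 register-image \
      --name flatcar-imported \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-xxxxxxxxx}"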

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.