Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage-collection sweeps. Note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) is no longer possible once the AMI has been unpublished.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
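
    If you do decide to disable automatic updates, one common approach is to mask the relevant services in your Container Linux Config. This is a minimal sketch, not the only supported strategy:

    systemd:
      units:
        # Masking these units stops both the download of updates and
        # the reboots that would apply them.
        - name: update-engine.service
          mask: true
        - name: locksmithd.service
          mask: true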

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3305.0.1.

    View as json feed: amd64 arm64
    EC2 Region      AMI Type      AMI ID
    af-south-1      HVM (amd64)   ami-0d10f35dc04dd4604
    af-south-1      HVM (arm64)   ami-01b178532ff995203
    ap-east-1       HVM (amd64)   ami-025d5bd4dbe0fd93d
    ap-east-1       HVM (arm64)   ami-06f2891ec96491461
    ap-northeast-1  HVM (amd64)   ami-0e6f1de5c792228e6
    ap-northeast-1  HVM (arm64)   ami-0d1745c489b815914
    ap-northeast-2  HVM (amd64)   ami-0cd74f73aa088d6b7
    ap-northeast-2  HVM (arm64)   ami-0d789aaca7ea7b957
    ap-south-1      HVM (amd64)   ami-055713ec142b74072
    ap-south-1      HVM (arm64)   ami-0e83fe8139f85cf44
    ap-southeast-1  HVM (amd64)   ami-08f16dcc20f6a652f
    ap-southeast-1  HVM (arm64)   ami-0f9829c7ae744667e
    ap-southeast-2  HVM (amd64)   ami-09f55fff27d8f34e4
    ap-southeast-2  HVM (arm64)   ami-0b5e04270106ee92e
    ap-southeast-3  HVM (amd64)   ami-08500818d12d619cd
    ap-southeast-3  HVM (arm64)   ami-000453a3276dcd822
    ca-central-1    HVM (amd64)   ami-040944fa3e5b4afbb
    ca-central-1    HVM (arm64)   ami-05e41be58b20edf5d
    eu-central-1    HVM (amd64)   ami-0c9e5e63948206d36
    eu-central-1    HVM (arm64)   ami-0900a886356493d61
    eu-north-1      HVM (amd64)   ami-0fd49bc2a21f9a369
    eu-north-1      HVM (arm64)   ami-0a3338bc1463bf630
    eu-south-1      HVM (amd64)   ami-07cb6688ab274807f
    eu-south-1      HVM (arm64)   ami-0025b8458f99b9539
    eu-west-1       HVM (amd64)   ami-0f3cee44b7c1c55e4
    eu-west-1       HVM (arm64)   ami-042a7b0fb5d0efa3f
    eu-west-2       HVM (amd64)   ami-013861f2efb018d08
    eu-west-2       HVM (arm64)   ami-061b77e932a608c4e
    eu-west-3       HVM (amd64)   ami-0f19162c2f23e9200
    eu-west-3       HVM (arm64)   ami-024202944f5d04163
    me-south-1      HVM (amd64)   ami-090f2abf51acc3ee1
    me-south-1      HVM (arm64)   ami-0e4816763046a9876
    sa-east-1       HVM (amd64)   ami-0d4d2844d521e6533
    sa-east-1       HVM (arm64)   ami-0d4487d81a8dcd345
    us-east-1       HVM (amd64)   ami-08a6e62be370c93fa
    us-east-1       HVM (arm64)   ami-09f0c4618f0a80a7e
    us-east-2       HVM (amd64)   ami-0f339694f5a1f42c0
    us-east-2       HVM (arm64)   ami-0e7e70f2b95a335d8
    us-west-1       HVM (amd64)   ami-004a6008d8940b7ad
    us-west-1       HVM (arm64)   ami-00b62be6a2c5965a4
    us-west-2       HVM (amd64)   ami-0b298385577243ee4
    us-west-2       HVM (arm64)   ami-0743e69e21b9d2224

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3277.1.1.

    View as json feed: amd64 arm64
    EC2 Region      AMI Type      AMI ID
    af-south-1      HVM (amd64)   ami-05ec8f6499e514593
    af-south-1      HVM (arm64)   ami-05631fb6dd1bf3465
    ap-east-1       HVM (amd64)   ami-0e5e2f07a879b875f
    ap-east-1       HVM (arm64)   ami-07b608881eda14e2d
    ap-northeast-1  HVM (amd64)   ami-0ff27d3836e96c820
    ap-northeast-1  HVM (arm64)   ami-07d7cd0f2ca565361
    ap-northeast-2  HVM (amd64)   ami-00adc081d925fa622
    ap-northeast-2  HVM (arm64)   ami-08b12071ce31bb0a6
    ap-south-1      HVM (amd64)   ami-01d7a6d2e8fc7dc56
    ap-south-1      HVM (arm64)   ami-05e3789899ad6d61c
    ap-southeast-1  HVM (amd64)   ami-078dad4c4134d1b21
    ap-southeast-1  HVM (arm64)   ami-01733d764d46a4fe8
    ap-southeast-2  HVM (amd64)   ami-02d93b71bf294d2e3
    ap-southeast-2  HVM (arm64)   ami-0bf77ee5d8683c91c
    ap-southeast-3  HVM (amd64)   ami-07b9ff9c46e2ae28f
    ap-southeast-3  HVM (arm64)   ami-0852ddda9a72c998f
    ca-central-1    HVM (amd64)   ami-07884ebe77d971508
    ca-central-1    HVM (arm64)   ami-0e04b90a4d6b012e6
    eu-central-1    HVM (amd64)   ami-0a68a0c6437cd01e0
    eu-central-1    HVM (arm64)   ami-028f15e67f7385369
    eu-north-1      HVM (amd64)   ami-0b8927039aab1025c
    eu-north-1      HVM (arm64)   ami-06fb2fc9ed0eedd97
    eu-south-1      HVM (amd64)   ami-00e0c38e52f4c9007
    eu-south-1      HVM (arm64)   ami-0db5b257ae926c640
    eu-west-1       HVM (amd64)   ami-04ff9c62c79836923
    eu-west-1       HVM (arm64)   ami-0de10af329fd33843
    eu-west-2       HVM (amd64)   ami-07894fbb847b7a635
    eu-west-2       HVM (arm64)   ami-0d854406ee139599d
    eu-west-3       HVM (amd64)   ami-0117337e232ad08aa
    eu-west-3       HVM (arm64)   ami-056e49f5df6ac5792
    me-south-1      HVM (amd64)   ami-044a9e02580c2c639
    me-south-1      HVM (arm64)   ami-053a3c6a68d7232bd
    sa-east-1       HVM (amd64)   ami-0f4ca76ee5fb52c66
    sa-east-1       HVM (arm64)   ami-0e29482463f9d2ff8
    us-east-1       HVM (amd64)   ami-0a67b5b04061f5f9c
    us-east-1       HVM (arm64)   ami-0c6eb9ca5b14100b0
    us-east-2       HVM (amd64)   ami-0b56042db0feb93c5
    us-east-2       HVM (arm64)   ami-0e3186bd772b97e9d
    us-west-1       HVM (amd64)   ami-0d5590d423a8ca93c
    us-west-1       HVM (arm64)   ami-018a3ada6f450879a
    us-west-2       HVM (amd64)   ami-09d954c1b90ae0c5c
    us-west-2       HVM (arm64)   ami-0f02078b3257b48fc

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3227.2.1.

    View as json feed: amd64 arm64
    EC2 Region      AMI Type      AMI ID
    af-south-1      HVM (amd64)   ami-0effea5d3772000da
    af-south-1      HVM (arm64)   ami-039c9b0aaca656bc0
    ap-east-1       HVM (amd64)   ami-0311239b24bb6d2b6
    ap-east-1       HVM (arm64)   ami-0d45a4b71baa34492
    ap-northeast-1  HVM (amd64)   ami-0fe3dc4a36e563c2f
    ap-northeast-1  HVM (arm64)   ami-05225d62199b3a360
    ap-northeast-2  HVM (amd64)   ami-01343707ac70c99c3
    ap-northeast-2  HVM (arm64)   ami-0c507d2fee2508d86
    ap-south-1      HVM (amd64)   ami-0330c380de94d8dce
    ap-south-1      HVM (arm64)   ami-0163e11a876181937
    ap-southeast-1  HVM (amd64)   ami-07700c8929aa8de9f
    ap-southeast-1  HVM (arm64)   ami-065061f9f38e42c13
    ap-southeast-2  HVM (amd64)   ami-02fb67a392caa73e8
    ap-southeast-2  HVM (arm64)   ami-05bf9051593304606
    ap-southeast-3  HVM (amd64)   ami-0b03b2488ac2446fd
    ap-southeast-3  HVM (arm64)   ami-00cd346f2b8bf6804
    ca-central-1    HVM (amd64)   ami-015358ac4b696292d
    ca-central-1    HVM (arm64)   ami-035a65b5f38d6b18e
    eu-central-1    HVM (amd64)   ami-03f57b0b434c2cc19
    eu-central-1    HVM (arm64)   ami-0bcfadc661e372b22
    eu-north-1      HVM (amd64)   ami-0e0f12c91db6518cb
    eu-north-1      HVM (arm64)   ami-0e4368d27b389997e
    eu-south-1      HVM (amd64)   ami-03c92cdfd5e1db763
    eu-south-1      HVM (arm64)   ami-030926a2e18c52bbf
    eu-west-1       HVM (amd64)   ami-08caec8e80ebf3f96
    eu-west-1       HVM (arm64)   ami-01db1cf7c149a4c91
    eu-west-2       HVM (amd64)   ami-058dc3f55f7a43186
    eu-west-2       HVM (arm64)   ami-0c239b77f02e40276
    eu-west-3       HVM (amd64)   ami-0c22f5e7bb5915554
    eu-west-3       HVM (arm64)   ami-039d3e4ab61ea8d90
    me-south-1      HVM (amd64)   ami-079141617cf6df4a9
    me-south-1      HVM (arm64)   ami-0aae180121509d271
    sa-east-1       HVM (amd64)   ami-0f1501485fdc1a547
    sa-east-1       HVM (arm64)   ami-0b4038378f702f9c0
    us-east-1       HVM (amd64)   ami-012ac9cfa7d23b705
    us-east-1       HVM (arm64)   ami-04d25ba9058d5f8d1
    us-east-2       HVM (amd64)   ami-03161963b4f19dc20
    us-east-2       HVM (arm64)   ami-04a2581d2d52754af
    us-west-1       HVM (amd64)   ami-09a43b6dd82921e8e
    us-west-1       HVM (arm64)   ami-06f979f088ee98d4f
    us-west-2       HVM (amd64)   ami-05b972f56f9576b59
    us-west-2       HVM (arm64)   ami-0812cb8f69fd1d7dd

    AWS China AMIs maintained by Giant Swarm

    The following AMIs are not part of the official Flatcar Container Linux release process and may lag behind (query the version to check).

    View as json feed: amd64
    EC2 Region      AMI Type      AMI ID

    CloudFormation will launch a cluster of Flatcar Container Linux machines with a security group and an autoscaling group.

    Container Linux Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Container Linux Configs (CLC). These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
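
    With the EC2 API, for example via the AWS CLI, the Ignition JSON is simply passed as user data. A minimal sketch, where the AMI ID, key name, and instance type are placeholders you would substitute:

    # Launch one instance with the Ignition config as user data
    aws ec2 run-instances \
      --image-id <ami id> \
      --instance-type t3.medium \
      --key-name <key name> \
      --user-data file://ignition.json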

    As an example, this CLC YAML config will start an NGINX Docker container:

    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i ghcr.io/flatcar-linux/ct:latest -platform ec2 > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Container Linux Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    storage:
      filesystems:
        - mount:
            device: /dev/xvdb
            format: ext4
            wipe_filesystem: true
            label: ephemeral
    
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Container Linux Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console or add keys/passwords via your Container Linux Config in order to log in.
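
    For example, additional keys for the core user can be added with a Container Linux Config snippet like this (the key itself is a placeholder):

    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            # Replace with your own public key(s)
            - ssh-rsa AAAA... user@example.com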

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-08a6e62be370c93fa (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between the servers in the etcd cluster. Step-by-step console instructions are below, followed by an equivalent AWS CLI sketch after the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
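
    The equivalent setup with the AWS CLI looks roughly like this; note that in a non-default VPC you would pass --vpc-id on creation and reference the group by ID instead of name:

    # Create the group and allow SSH from anywhere
    aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances"
    aws ec2 authorize-security-group-ingress \
      --group-name flatcar-testing \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # Open the etcd ports only to members of the group itself
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress \
        --group-name flatcar-testing \
        --protocol tcp --port "$port" \
        --source-group flatcar-testing
    done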

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-08a6e62be370c93fa (amd64), Beta ami-0a67b5b04061f5f9c (amd64), or Stable ami-012ac9cfa7d23b705 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys in your Container Linux Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
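
    The same launch can be scripted with the AWS CLI. This sketch uses the Alpha amd64 AMI for us-east-1 from the table above, with a placeholder key pair name:

    # Launch three clustered instances with the same Ignition user data
    aws ec2 run-instances \
      --region us-east-1 \
      --image-id ami-08a6e62be370c93fa \
      --count 3 \
      --instance-type t3.medium \
      --key-name <key name> \
      --security-groups flatcar-testing \
      --user-data file://ignition.json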

    Installation from a VMDK image

    One possible installation method is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.

    After the snapshot is imported, go to “Snapshots” in the EC2 dashboard and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
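
    A rough sketch of those steps with the AWS CLI, where the bucket name and snapshot ID are placeholders:

    # Upload the uncompressed image to S3
    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://<bucket>/
    # Import it as an EBS snapshot and poll until the task completes
    aws ec2 import-snapshot \
      --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=<bucket>,S3Key=flatcar_production_ami_vmdk_image.vmdk}"
    aws ec2 describe-import-snapshot-tasks
    # Register an AMI from the resulting snapshot
    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=<snapshot id>}"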

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@mail.net"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary, since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@<ip address> with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.