Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be un-published in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible after the AMI has been un-published.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3732.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-00ff009881f8f07c3 Launch Stack
    HVM (arm64) ami-0ef9366251a93506f Launch Stack
    ap-east-1 HVM (amd64) ami-0a2a4793c48c04cc3 Launch Stack
    HVM (arm64) ami-055439d639a3c3143 Launch Stack
    ap-northeast-1 HVM (amd64) ami-03414b3cbcf5635cb Launch Stack
    HVM (arm64) ami-07cb0ad1e5ea1c193 Launch Stack
    ap-northeast-2 HVM (amd64) ami-01d9f082283d25885 Launch Stack
    HVM (arm64) ami-0dcab87f51130ed16 Launch Stack
    ap-south-1 HVM (amd64) ami-0af24f44e503646bf Launch Stack
    HVM (arm64) ami-0f58d355d893b8441 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0b9fbb45d0a505376 Launch Stack
    HVM (arm64) ami-0929476c3f2bbbbf6 Launch Stack
    ap-southeast-2 HVM (amd64) ami-03f77921011e300d3 Launch Stack
    HVM (arm64) ami-0fb3e63ef6f13f7f4 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0bbcdd9c0043981a6 Launch Stack
    HVM (arm64) ami-083d2747ccd7e142c Launch Stack
    ca-central-1 HVM (amd64) ami-00ea4f8e43926c0e3 Launch Stack
    HVM (arm64) ami-0faf511437d45bc5f Launch Stack
    eu-central-1 HVM (amd64) ami-0ca9b4d96943c93b5 Launch Stack
    HVM (arm64) ami-0e68ca4a3270dfbc4 Launch Stack
    eu-north-1 HVM (amd64) ami-030ef0f47e7ee6e3d Launch Stack
    HVM (arm64) ami-0cc279388f0d02e19 Launch Stack
    eu-south-1 HVM (amd64) ami-0e6f48fab37c98594 Launch Stack
    HVM (arm64) ami-0875b6b2d24524b6f Launch Stack
    eu-west-1 HVM (amd64) ami-0c6599ca52e1afdea Launch Stack
    HVM (arm64) ami-07b9878decd640223 Launch Stack
    eu-west-2 HVM (amd64) ami-0d28a26afd09468a9 Launch Stack
    HVM (arm64) ami-076fdb542949a27bd Launch Stack
    eu-west-3 HVM (amd64) ami-056abda867a35b89b Launch Stack
    HVM (arm64) ami-00d84b16394603e10 Launch Stack
    me-south-1 HVM (amd64) ami-09b16babd9d3df55f Launch Stack
    HVM (arm64) ami-0818acc9e1b928178 Launch Stack
    sa-east-1 HVM (amd64) ami-00c4b9a1dae13d7f7 Launch Stack
    HVM (arm64) ami-00095e2d8531e0ee3 Launch Stack
    us-east-1 HVM (amd64) ami-008d0b98ae6c718ca Launch Stack
    HVM (arm64) ami-0b7330650ee483e62 Launch Stack
    us-east-2 HVM (amd64) ami-034423a3263a440d4 Launch Stack
    HVM (arm64) ami-07b7c301070a04dc5 Launch Stack
    us-west-1 HVM (amd64) ami-05170a6df64cc7b58 Launch Stack
    HVM (arm64) ami-028a3ea4e3ae38b42 Launch Stack
    us-west-2 HVM (amd64) ami-01c5796216438f6cd Launch Stack
    HVM (arm64) ami-0d48ed4ae6da8adad Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3602.1.6.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-07d7c589becc6905b Launch Stack
    HVM (arm64) ami-0123475b3ae947e4c Launch Stack
    ap-east-1 HVM (amd64) ami-0235ba1cff3e07195 Launch Stack
    HVM (arm64) ami-0da64efc84f199586 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0512107bfec617eaa Launch Stack
    HVM (arm64) ami-091c8ede53c32fe54 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0cc1a28f8b25edea9 Launch Stack
    HVM (arm64) ami-0842d140df2b774ab Launch Stack
    ap-south-1 HVM (amd64) ami-068cef90cda656480 Launch Stack
    HVM (arm64) ami-0a7ac3c3f9ff429f6 Launch Stack
    ap-southeast-1 HVM (amd64) ami-080e482bce38d1c9d Launch Stack
    HVM (arm64) ami-03c04972680cca75d Launch Stack
    ap-southeast-2 HVM (amd64) ami-0f116129e336b983f Launch Stack
    HVM (arm64) ami-0683342a4e1b57a9c Launch Stack
    ap-southeast-3 HVM (amd64) ami-02f0cfea1f0e52311 Launch Stack
    HVM (arm64) ami-058c31bb4c9a475cf Launch Stack
    ca-central-1 HVM (amd64) ami-0c99c722020bd2647 Launch Stack
    HVM (arm64) ami-0d014ba1fd99a99b2 Launch Stack
    eu-central-1 HVM (amd64) ami-019d6ba0ca227da06 Launch Stack
    HVM (arm64) ami-0fa53667b1ed0587c Launch Stack
    eu-north-1 HVM (amd64) ami-079aec299d0f69bd3 Launch Stack
    HVM (arm64) ami-04db91a98e7da9441 Launch Stack
    eu-south-1 HVM (amd64) ami-016847080e905ae32 Launch Stack
    HVM (arm64) ami-08f7b05e8eef9a51e Launch Stack
    eu-west-1 HVM (amd64) ami-0c7ae92f2dadb3984 Launch Stack
    HVM (arm64) ami-0e46832e1e2b8af17 Launch Stack
    eu-west-2 HVM (amd64) ami-03f0dfed66eabeb47 Launch Stack
    HVM (arm64) ami-01ca23d2ea1e0014d Launch Stack
    eu-west-3 HVM (amd64) ami-0f9d421d3326d4531 Launch Stack
    HVM (arm64) ami-065a9d54f99c6c645 Launch Stack
    me-south-1 HVM (amd64) ami-0512492628f856463 Launch Stack
    HVM (arm64) ami-0119d77d555ba0187 Launch Stack
    sa-east-1 HVM (amd64) ami-0ede9f71093b54b35 Launch Stack
    HVM (arm64) ami-0b5b5fa19c23fe8a8 Launch Stack
    us-east-1 HVM (amd64) ami-02f51ff9675649f78 Launch Stack
    HVM (arm64) ami-00be4c33265ef856a Launch Stack
    us-east-2 HVM (amd64) ami-0b455ab6bdfa5f0b2 Launch Stack
    HVM (arm64) ami-00049fbeff1872e6a Launch Stack
    us-west-1 HVM (amd64) ami-09bb18d69530924f3 Launch Stack
    HVM (arm64) ami-0e709e1f2e181afd7 Launch Stack
    us-west-2 HVM (amd64) ami-000adc330a9268783 Launch Stack
    HVM (arm64) ami-0e5d2601bb5715796 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3510.2.8.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-018d9b9d7151ab6c8 Launch Stack
    HVM (arm64) ami-012a0ca4f885c0902 Launch Stack
    ap-east-1 HVM (amd64) ami-0a20232670e0772af Launch Stack
    HVM (arm64) ami-0d11393c391335e03 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0233879a9dd92df11 Launch Stack
    HVM (arm64) ami-073caaca41127ba03 Launch Stack
    ap-northeast-2 HVM (amd64) ami-01bff27d84a7a7700 Launch Stack
    HVM (arm64) ami-07b3fa99c5769ce3f Launch Stack
    ap-south-1 HVM (amd64) ami-0cc2acda59cd3af78 Launch Stack
    HVM (arm64) ami-0f862f06e36f89e00 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0c42cc418ecd92604 Launch Stack
    HVM (arm64) ami-0666eb6fabac39c0e Launch Stack
    ap-southeast-2 HVM (amd64) ami-01691d179b95f6f8c Launch Stack
    HVM (arm64) ami-0b0f4b37e574b89f6 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0dfb599c8fc910601 Launch Stack
    HVM (arm64) ami-03fac32d0f0138ffd Launch Stack
    ca-central-1 HVM (amd64) ami-09dc356370f64513d Launch Stack
    HVM (arm64) ami-07e052769f4864f3f Launch Stack
    eu-central-1 HVM (amd64) ami-068829f5bf9333501 Launch Stack
    HVM (arm64) ami-024ac78a4418c2371 Launch Stack
    eu-north-1 HVM (amd64) ami-0a560c9c18e52fe3a Launch Stack
    HVM (arm64) ami-034143243e41a9c24 Launch Stack
    eu-south-1 HVM (amd64) ami-0b069d1752bac2e31 Launch Stack
    HVM (arm64) ami-0202c28927561b409 Launch Stack
    eu-west-1 HVM (amd64) ami-0e28f7527ce1dd0d1 Launch Stack
    HVM (arm64) ami-04199f09ad1bc9c3c Launch Stack
    eu-west-2 HVM (amd64) ami-03f8f46c0d8a86c5c Launch Stack
    HVM (arm64) ami-0e99364bfa72955c4 Launch Stack
    eu-west-3 HVM (amd64) ami-0328a3887119e6dd3 Launch Stack
    HVM (arm64) ami-08084c4353b994e61 Launch Stack
    me-south-1 HVM (amd64) ami-0f34884e3fbf09f49 Launch Stack
    HVM (arm64) ami-0184b89ec556fc660 Launch Stack
    sa-east-1 HVM (amd64) ami-0151f43dfd95a3865 Launch Stack
    HVM (arm64) ami-0e04f199138496576 Launch Stack
    us-east-1 HVM (amd64) ami-0604fd4b5a31a9e9f Launch Stack
    HVM (arm64) ami-0781fbb3848699258 Launch Stack
    us-east-2 HVM (amd64) ami-0a804cb708f395c8a Launch Stack
    HVM (arm64) ami-0514ea22cab66cf8f Launch Stack
    us-west-1 HVM (amd64) ami-0ba006ff9272db530 Launch Stack
    HVM (arm64) ami-0949da85c07721614 Launch Stack
    us-west-2 HVM (amd64) ami-0c3488073a14ab0af Launch Stack
    HVM (arm64) ami-0ee65e37cd8aae6e8 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
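
    For example, with the AWS CLI the Ignition JSON can be passed directly as user data when launching an instance. This is only a minimal sketch; the AMI, key pair, security group and subnet IDs below are placeholders you need to replace with your own values:

    aws ec2 run-instances \
      --image-id ami-XXXXXXXXXXXXXXXXX \
      --instance-type t3.medium \
      --key-name my-keypair \
      --security-group-ids sg-XXXXXXXX \
      --subnet-id subnet-XXXXXXXX \
      --user-data file://ignition.json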

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.
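
    A stack can also be created from the command line. The following is a sketch using the AWS CLI with placeholder values; point --template-url at the S3 template mentioned above, and pass any parameters the template requires via --parameters:

    aws cloudformation create-stack \
      --stack-name flatcar-cluster-2 \
      --template-url https://<bucket>.s3.amazonaws.com/<flatcar-template>.json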

    Manual setup

    TL;DR: launch three instances of ami-008d0b98ae6c718ca (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another. If you prefer the command line, an equivalent AWS CLI sketch follows the steps below.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
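
    The same security group can be created with the AWS CLI. This is a sketch assuming the default VPC; adjust the group name and VPC as needed:

    # Create the group and capture its ID
    sg_id=$(aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --query GroupId --output text)
    # Allow SSH from anywhere
    aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # etcd ports, only reachable from members of the same security group
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
        --protocol tcp --port "$port" --source-group "$sg_id"
    done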

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-008d0b98ae6c718ca (amd64), Beta ami-02f51ff9675649f78 (amd64), or Stable ami-0604fd4b5a31a9e9f (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys set in your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One way to install Flatcar Container Linux is to import the generated VMDK image as a snapshot. The image file can be found at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
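
    As a sketch of those steps with the AWS CLI (the bucket name is a placeholder): decompress the image, copy it to S3, start the import task, and wait for the snapshot to become available:

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/flatcar.vmdk
    aws ec2 import-snapshot \
      --description "Flatcar Container Linux" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar.vmdk}"
    # Poll the returned ImportTaskId until the snapshot is ready, e.g.:
    # aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-xxxxxxxx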

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
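
    The AMI registration can also be done from the CLI. A sketch, assuming the snapshot ID returned by the import task:

    aws ec2 register-image \
      --name flatcar-vmdk-import \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-xxxxxxxxxxxxxxxxx}"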

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
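
    After terraform apply has finished, the addresses can be queried again at any time:

    terraform output ip-addresses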
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.