Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be unpublished in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible after the AMI has been unpublished.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
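
    If you do decide to disable automatic updates, one common approach is to mask the update services with a Butane config. The snippet below is only a minimal sketch of that idea, to be adapted to your own provisioning setup:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: update-engine.service
          mask: true
        - name: locksmithd.service
          mask: true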

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4284.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0809759806919ce3c Launch Stack
    HVM (arm64) ami-04a843a53d106e22b Launch Stack
    ap-east-1 HVM (amd64) ami-0407e9a6089941ee2 Launch Stack
    HVM (arm64) ami-0537822cd1b49666b Launch Stack
    ap-northeast-1 HVM (amd64) ami-032dd975d6062baaf Launch Stack
    HVM (arm64) ami-05f9be50f8049ef4c Launch Stack
    ap-northeast-2 HVM (amd64) ami-077c9f4df811d3ddd Launch Stack
    HVM (arm64) ami-02f9b2520727b89b7 Launch Stack
    ap-south-1 HVM (amd64) ami-09c4e6d3fedd0f626 Launch Stack
    HVM (arm64) ami-00dce839c670340c6 Launch Stack
    ap-southeast-1 HVM (amd64) ami-051432159a042f727 Launch Stack
    HVM (arm64) ami-0de40d600ac4c58fe Launch Stack
    ap-southeast-2 HVM (amd64) ami-0ed579bf18d52f469 Launch Stack
    HVM (arm64) ami-0def5855da6c1fe12 Launch Stack
    ap-southeast-3 HVM (amd64) ami-01ec8d5416b7dd773 Launch Stack
    HVM (arm64) ami-0d87c4e3ba478f193 Launch Stack
    ca-central-1 HVM (amd64) ami-0b71d9fe1d5640bda Launch Stack
    HVM (arm64) ami-074c24d025165f150 Launch Stack
    eu-central-1 HVM (amd64) ami-0e5406627196beaed Launch Stack
    HVM (arm64) ami-0d3cb159320db8669 Launch Stack
    eu-north-1 HVM (amd64) ami-0979557d06688d148 Launch Stack
    HVM (arm64) ami-0abae6fa058d57e49 Launch Stack
    eu-south-1 HVM (amd64) ami-0e724d7fcdf6bd9a4 Launch Stack
    HVM (arm64) ami-01331d73b3a90f257 Launch Stack
    eu-west-1 HVM (amd64) ami-014c1e18c3dbbe4f7 Launch Stack
    HVM (arm64) ami-0a3d77a862455a8db Launch Stack
    eu-west-2 HVM (amd64) ami-0de86a61ac2ec6b98 Launch Stack
    HVM (arm64) ami-0b60c9611992ca27b Launch Stack
    eu-west-3 HVM (amd64) ami-02121c8224973c016 Launch Stack
    HVM (arm64) ami-0c3788e105313b125 Launch Stack
    me-south-1 HVM (amd64) ami-0e8a8a4caa1d1938d Launch Stack
    HVM (arm64) ami-08c06f40358e5d4ba Launch Stack
    sa-east-1 HVM (amd64) ami-0c92f2ac0bf04ad62 Launch Stack
    HVM (arm64) ami-0023c19acaddfcefb Launch Stack
    us-east-1 HVM (amd64) ami-0b01236406e24c960 Launch Stack
    HVM (arm64) ami-0a802207c36951db8 Launch Stack
    us-east-2 HVM (amd64) ami-021328ba48eefedca Launch Stack
    HVM (arm64) ami-0ffdc511986d2f5fa Launch Stack
    us-west-1 HVM (amd64) ami-03efe3683fcf1b8d9 Launch Stack
    HVM (arm64) ami-02e70563ea52dc5e2 Launch Stack
    us-west-2 HVM (amd64) ami-025a5edae11d822d4 Launch Stack
    HVM (arm64) ami-0bdf9c7c1b2e0d8cc Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4230.1.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-095844f91b140a4a6 Launch Stack
    HVM (arm64) ami-0f2987c6c9ce67e85 Launch Stack
    ap-east-1 HVM (amd64) ami-05db62a9145967422 Launch Stack
    HVM (arm64) ami-05753d8b407b2cdb3 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0cda4c835320d29c9 Launch Stack
    HVM (arm64) ami-0b098221d6f7e26be Launch Stack
    ap-northeast-2 HVM (amd64) ami-0f0831e677f7f2969 Launch Stack
    HVM (arm64) ami-03933f36fb2ed2e12 Launch Stack
    ap-south-1 HVM (amd64) ami-0fe73eb52cbdc378e Launch Stack
    HVM (arm64) ami-0c6c79f0457edf3ed Launch Stack
    ap-southeast-1 HVM (amd64) ami-0ca6670e302c0e93a Launch Stack
    HVM (arm64) ami-0676e57f8550c5ab5 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0d2dc52d70545309d Launch Stack
    HVM (arm64) ami-0aaffe773363e492e Launch Stack
    ap-southeast-3 HVM (amd64) ami-04c45e54c5ddda850 Launch Stack
    HVM (arm64) ami-061c8ab3f065b429b Launch Stack
    ca-central-1 HVM (amd64) ami-08b31d2be4d8d170d Launch Stack
    HVM (arm64) ami-0872ec3be8c7f7b90 Launch Stack
    eu-central-1 HVM (amd64) ami-0b05e3ca6adcacf3b Launch Stack
    HVM (arm64) ami-056727954973c1763 Launch Stack
    eu-north-1 HVM (amd64) ami-0b8e5dc4698c1db80 Launch Stack
    HVM (arm64) ami-02fedadf6fb27c8fa Launch Stack
    eu-south-1 HVM (amd64) ami-0d4558a2fa2dc9070 Launch Stack
    HVM (arm64) ami-0f7d32a30151e906f Launch Stack
    eu-west-1 HVM (amd64) ami-0c2ff373c12fa4280 Launch Stack
    HVM (arm64) ami-074b7dfd4601e1e97 Launch Stack
    eu-west-2 HVM (amd64) ami-0b2d364e748c6ef30 Launch Stack
    HVM (arm64) ami-0d433ff1fb51cf5b7 Launch Stack
    eu-west-3 HVM (amd64) ami-0962826312efe513d Launch Stack
    HVM (arm64) ami-0100684983b986ab6 Launch Stack
    me-south-1 HVM (amd64) ami-05a6f04f73da71f01 Launch Stack
    HVM (arm64) ami-0cbbddcb727a49076 Launch Stack
    sa-east-1 HVM (amd64) ami-0e16a011b507adecc Launch Stack
    HVM (arm64) ami-00eac2d7b43ef1b10 Launch Stack
    us-east-1 HVM (amd64) ami-0dcc5be17a05976c2 Launch Stack
    HVM (arm64) ami-096ef2a9305ae5ee5 Launch Stack
    us-east-2 HVM (amd64) ami-00684c1858aa5beac Launch Stack
    HVM (arm64) ami-00b49242be7a2e4f5 Launch Stack
    us-west-1 HVM (amd64) ami-0528c3c68665e6c57 Launch Stack
    HVM (arm64) ami-01bac46997ee43f0b Launch Stack
    us-west-2 HVM (amd64) ami-0786cc1e94c4db94e Launch Stack
    HVM (arm64) ami-04c617f6725c3400b Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4152.2.3.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-05f0b497004889270 Launch Stack
    HVM (arm64) ami-0bbb04cc0bbcb5499 Launch Stack
    ap-east-1 HVM (amd64) ami-04038ef57af336e6a Launch Stack
    HVM (arm64) ami-013b74a66a44e37b8 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0a96a6574305e32dd Launch Stack
    HVM (arm64) ami-0586bd9e4b71007d4 Launch Stack
    ap-northeast-2 HVM (amd64) ami-060b23ef804237bd1 Launch Stack
    HVM (arm64) ami-0dfd00858c0556d54 Launch Stack
    ap-south-1 HVM (amd64) ami-02cf4c0fafd093f55 Launch Stack
    HVM (arm64) ami-08c4273825aa792ba Launch Stack
    ap-southeast-1 HVM (amd64) ami-04ff9c744db1a6210 Launch Stack
    HVM (arm64) ami-0d4a1d71dbc88f3e5 Launch Stack
    ap-southeast-2 HVM (amd64) ami-039f3341ef229c75a Launch Stack
    HVM (arm64) ami-0fb0a71c971323c9d Launch Stack
    ap-southeast-3 HVM (amd64) ami-0b67bbc010a886505 Launch Stack
    HVM (arm64) ami-070d16144cd0d7e60 Launch Stack
    ca-central-1 HVM (amd64) ami-0b356d175911eb6ef Launch Stack
    HVM (arm64) ami-0257f6a5941a4c61c Launch Stack
    eu-central-1 HVM (amd64) ami-0ad676a0dc7177930 Launch Stack
    HVM (arm64) ami-08c070c77685a8817 Launch Stack
    eu-north-1 HVM (amd64) ami-072f085f9bcf0fabe Launch Stack
    HVM (arm64) ami-0f6324309ef9df4d1 Launch Stack
    eu-south-1 HVM (amd64) ami-01f0055136136fbaa Launch Stack
    HVM (arm64) ami-0645ffa110d821112 Launch Stack
    eu-west-1 HVM (amd64) ami-0c101a8e318bac39a Launch Stack
    HVM (arm64) ami-011c85bb18f4c6539 Launch Stack
    eu-west-2 HVM (amd64) ami-0d3a962975649836e Launch Stack
    HVM (arm64) ami-0ad53c1fd4cc98c7e Launch Stack
    eu-west-3 HVM (amd64) ami-0b2d5c832f013edfa Launch Stack
    HVM (arm64) ami-09e681a91d00a8686 Launch Stack
    me-south-1 HVM (amd64) ami-0b5f14e968ddafd9d Launch Stack
    HVM (arm64) ami-00fde97e56fab9449 Launch Stack
    sa-east-1 HVM (amd64) ami-0ca684a2bbcedec9c Launch Stack
    HVM (arm64) ami-057f99d501175a5bf Launch Stack
    us-east-1 HVM (amd64) ami-0af2e8c3dd2896a02 Launch Stack
    HVM (arm64) ami-09bf79d4448ac61a5 Launch Stack
    us-east-2 HVM (amd64) ami-063354e15536a09c3 Launch Stack
    HVM (arm64) ami-0705544fda5b4d400 Launch Stack
    us-west-1 HVM (amd64) ami-070f65e2de83ac297 Launch Stack
    HVM (arm64) ami-03bfe4039a8799446 Launch Stack
    us-west-2 HVM (amd64) ami-0050414e49445092b Launch Stack
    HVM (arm64) ami-0a0d698fe89d735e2 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
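
    For example, with the AWS CLI you can pass the Ignition JSON as user data when launching an instance. This is only a sketch; the AMI ID, instance type, key pair, and security group are placeholders to replace with your own values:

    aws ec2 run-instances \
      --image-id <ami-id> \
      --instance-type t3.medium \
      --key-name <key-pair-name> \
      --security-group-ids <sg-id> \
      --user-data file://ignition.json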

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.
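
    As a minimal sketch, adding an SSH public key for the core user via a Butane config looks like this (replace the key with your own):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... you@example.com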

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0b01236406e24c960 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, using the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step console instructions are below, followed by an equivalent AWS CLI sketch.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
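
    If you prefer the AWS CLI over the console, the equivalent steps look roughly like this. It is a sketch that assumes the default VPC; pass --vpc-id on creation and adjust accordingly if you use a specific VPC:

    # Create the group and capture its ID
    sg_id=$(aws ec2 create-security-group --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --query GroupId --output text)
    # Allow SSH from anywhere
    aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    # Allow the etcd ports between members of the group itself
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$sg_id" \
        --protocol tcp --port "$port" --source-group "$sg_id"
    done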

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0b01236406e24c960 (amd64), Beta ami-0dcc5be17a05976c2 (amd64), or Stable ami-0af2e8c3dd2896a02 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys configured via your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install Flatcar Container Linux is to import the generated VMDK image as a snapshot. The image file will be at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.
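
    As a rough AWS CLI sketch (the bucket name and key are placeholders, and VM Import/Export additionally requires the vmimport service role described in the linked guide):

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://<your-bucket>/flatcar.vmdk
    aws ec2 import-snapshot --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=<your-bucket>,S3Key=flatcar.vmdk}"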

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. To make it work, use /dev/sda2 as the “Root device name” and select “Hardware-assisted virtualization” as the “Virtualization type”.
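
    If you prefer the CLI over the console for this step as well, registering the AMI from the imported snapshot can be sketched as follows (the snapshot ID and image name are placeholders):

    aws ec2 register-image --name flatcar-custom \
      --architecture x86_64 --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=<snap-id>}"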

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
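
    If you need to look the addresses up again later, you can query the output defined in outputs.tf:

    terraform output ip-addresses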

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.