Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible once the AMI has been unpublished.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
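
    As a sketch of one common way to opt out, you can mask the update and reboot-coordination services in a Butane Config; see the update strategies documentation for the full set of supported options:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        # Masking update-engine stops updates from being downloaded and applied;
        # masking locksmith stops coordinated reboots. Unmask both to re-enable.
        - name: update-engine.service
          mask: true
        - name: locksmith.service
          mask: true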

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4012.0.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-039e43a8b13f387dd Launch Stack
    HVM (arm64) ami-0e1d0e8a37a095983 Launch Stack
    ap-east-1 HVM (amd64) ami-0bf388aa2b74571c6 Launch Stack
    HVM (arm64) ami-02e5105ba0f360af5 Launch Stack
    ap-northeast-1 HVM (amd64) ami-07f232d58fa41447a Launch Stack
    HVM (arm64) ami-0ac74d5185e76a9ef Launch Stack
    ap-northeast-2 HVM (amd64) ami-091d57f24195a5149 Launch Stack
    HVM (arm64) ami-0d4597c4946a8df10 Launch Stack
    ap-south-1 HVM (amd64) ami-07ccda48eef491eca Launch Stack
    HVM (arm64) ami-0f49c9d4abaa66fa9 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0db4d231de291ff20 Launch Stack
    HVM (arm64) ami-0577f6f9f44f657b0 Launch Stack
    ap-southeast-2 HVM (amd64) ami-01d04ebc7e4bb9187 Launch Stack
    HVM (arm64) ami-0744b06c2c9b1157c Launch Stack
    ap-southeast-3 HVM (amd64) ami-0570a02a895885741 Launch Stack
    HVM (arm64) ami-06dfb4cdae9a68fe1 Launch Stack
    ca-central-1 HVM (amd64) ami-0935e666e6aa60e50 Launch Stack
    HVM (arm64) ami-02e45f0f4cf37473a Launch Stack
    eu-central-1 HVM (amd64) ami-09b624c68fd54ce93 Launch Stack
    HVM (arm64) ami-0358ee7716130739f Launch Stack
    eu-north-1 HVM (amd64) ami-0a2e7c65358f79067 Launch Stack
    HVM (arm64) ami-0c8c2eb462441cfb6 Launch Stack
    eu-south-1 HVM (amd64) ami-0c8d0024051e2fe05 Launch Stack
    HVM (arm64) ami-0b065cdf0591e1ff2 Launch Stack
    eu-west-1 HVM (amd64) ami-02f1d736d9191ff06 Launch Stack
    HVM (arm64) ami-0f83712dba26f6bf6 Launch Stack
    eu-west-2 HVM (amd64) ami-0a7896d6c37e890a1 Launch Stack
    HVM (arm64) ami-0b8d8c8a8e316292b Launch Stack
    eu-west-3 HVM (amd64) ami-07bce51236ba369d4 Launch Stack
    HVM (arm64) ami-02485e81ce02dc4f1 Launch Stack
    me-south-1 HVM (amd64) ami-0d879f2fa10051a4f Launch Stack
    HVM (arm64) ami-03188a147f071bf78 Launch Stack
    sa-east-1 HVM (amd64) ami-0dae3172c3ac10db3 Launch Stack
    HVM (arm64) ami-09e2583f2f21af9ae Launch Stack
    us-east-1 HVM (amd64) ami-0457a742c213787a4 Launch Stack
    HVM (arm64) ami-0882685d2fb6792e0 Launch Stack
    us-east-2 HVM (amd64) ami-02a42ca6fa96b6bce Launch Stack
    HVM (arm64) ami-0fb2aee76a78614bc Launch Stack
    us-west-1 HVM (amd64) ami-09372aa2ac3fcbbd8 Launch Stack
    HVM (arm64) ami-061892f112ec038b2 Launch Stack
    us-west-2 HVM (amd64) ami-05088ffe649b14478 Launch Stack
    HVM (arm64) ami-07450a72fe2716b00 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3975.1.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-09fb51ff037d94b1b Launch Stack
    HVM (arm64) ami-0d87cadfd99f109c4 Launch Stack
    ap-east-1 HVM (amd64) ami-066c43fb93de0fec3 Launch Stack
    HVM (arm64) ami-0d27c5328c753933b Launch Stack
    ap-northeast-1 HVM (amd64) ami-0b1844d24793ea301 Launch Stack
    HVM (arm64) ami-004a8e4a00ccdc4ba Launch Stack
    ap-northeast-2 HVM (amd64) ami-0df38bf2ffa00c109 Launch Stack
    HVM (arm64) ami-07f4548313fced600 Launch Stack
    ap-south-1 HVM (amd64) ami-0b6e32119c82ed569 Launch Stack
    HVM (arm64) ami-09f3e253f8b70523d Launch Stack
    ap-southeast-1 HVM (amd64) ami-0dbf92a491640d681 Launch Stack
    HVM (arm64) ami-08a7cd0f5e4d0fe36 Launch Stack
    ap-southeast-2 HVM (amd64) ami-04f73b5a1c5b85138 Launch Stack
    HVM (arm64) ami-0a856003ecb1f2b27 Launch Stack
    ap-southeast-3 HVM (amd64) ami-08df0fd872e0a249b Launch Stack
    HVM (arm64) ami-0991991440aea6bc9 Launch Stack
    ca-central-1 HVM (amd64) ami-04805313af433136e Launch Stack
    HVM (arm64) ami-08c1347d4421f7bd1 Launch Stack
    eu-central-1 HVM (amd64) ami-0abd0a759c394c51e Launch Stack
    HVM (arm64) ami-0addab4768e25928b Launch Stack
    eu-north-1 HVM (amd64) ami-07342b2dbd8d7d974 Launch Stack
    HVM (arm64) ami-0bc926bb2f89eedee Launch Stack
    eu-south-1 HVM (amd64) ami-03d2b1d32fa055359 Launch Stack
    HVM (arm64) ami-09ec541083318a695 Launch Stack
    eu-west-1 HVM (amd64) ami-01794609e9b2d3ab8 Launch Stack
    HVM (arm64) ami-06c3f880090b0f6fa Launch Stack
    eu-west-2 HVM (amd64) ami-086b1be6e51adb882 Launch Stack
    HVM (arm64) ami-02308c39c4f5d851c Launch Stack
    eu-west-3 HVM (amd64) ami-098168146c3f950ca Launch Stack
    HVM (arm64) ami-0978de64b783349ee Launch Stack
    me-south-1 HVM (amd64) ami-07b03af38a31b1374 Launch Stack
    HVM (arm64) ami-07ff136c6ab7f7d0f Launch Stack
    sa-east-1 HVM (amd64) ami-0e301608d8c093d50 Launch Stack
    HVM (arm64) ami-039e235bb4e2ef815 Launch Stack
    us-east-1 HVM (amd64) ami-09f2c4d0de648da87 Launch Stack
    HVM (arm64) ami-05475c0baae6afda8 Launch Stack
    us-east-2 HVM (amd64) ami-0dda141f7fdc319c3 Launch Stack
    HVM (arm64) ami-037a629cb9097270c Launch Stack
    us-west-1 HVM (amd64) ami-07fc6d3b9791d65f5 Launch Stack
    HVM (arm64) ami-0879631f7fd3d4f7c Launch Stack
    us-west-2 HVM (amd64) ami-0e5e546c180619622 Launch Stack
    HVM (arm64) ami-06d01e8814a148d6f Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3815.2.5.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0abc5e2505c899390 Launch Stack
    HVM (arm64) ami-0f323d89f46596022 Launch Stack
    ap-east-1 HVM (amd64) ami-037cbb0e3216e82f7 Launch Stack
    HVM (arm64) ami-0782450080df1ebbb Launch Stack
    ap-northeast-1 HVM (amd64) ami-038430d0d757d36ea Launch Stack
    HVM (arm64) ami-00a13c659f513c806 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0e9f52a7dc99d7552 Launch Stack
    HVM (arm64) ami-059e8f0e88ae8e442 Launch Stack
    ap-south-1 HVM (amd64) ami-0de5a2c116d998979 Launch Stack
    HVM (arm64) ami-059f38864e6453b00 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0267771201e112ed3 Launch Stack
    HVM (arm64) ami-0e5bf218786d7834c Launch Stack
    ap-southeast-2 HVM (amd64) ami-00ce7a4dc02c38d7a Launch Stack
    HVM (arm64) ami-078b5847a5f332567 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0c672241ab65e65b3 Launch Stack
    HVM (arm64) ami-03b1ab56627217571 Launch Stack
    ca-central-1 HVM (amd64) ami-0bc47ca31311163f7 Launch Stack
    HVM (arm64) ami-03e082e66a5c7550a Launch Stack
    eu-central-1 HVM (amd64) ami-075408e91eb32dfb7 Launch Stack
    HVM (arm64) ami-062466cfe4ed3b043 Launch Stack
    eu-north-1 HVM (amd64) ami-02c5beaf2b556d775 Launch Stack
    HVM (arm64) ami-004dc958d713bfc28 Launch Stack
    eu-south-1 HVM (amd64) ami-0423c40c3bfadf113 Launch Stack
    HVM (arm64) ami-0dbe602f381e1ceb2 Launch Stack
    eu-west-1 HVM (amd64) ami-0b51f28e763ce817d Launch Stack
    HVM (arm64) ami-0409da73afdfa29b6 Launch Stack
    eu-west-2 HVM (amd64) ami-0c00af4cbdab1d85d Launch Stack
    HVM (arm64) ami-0b934c8ed4a242db8 Launch Stack
    eu-west-3 HVM (amd64) ami-07bb27b4843513c0f Launch Stack
    HVM (arm64) ami-0173b8cbd22423c83 Launch Stack
    me-south-1 HVM (amd64) ami-0c9bfa472cee8b0ab Launch Stack
    HVM (arm64) ami-00728d6f96126712c Launch Stack
    sa-east-1 HVM (amd64) ami-0b2ae101293b9a3b8 Launch Stack
    HVM (arm64) ami-07ef4c8f664e84fba Launch Stack
    us-east-1 HVM (amd64) ami-06253dd5181f9fe62 Launch Stack
    HVM (arm64) ami-03727c3bfe4e9d0ef Launch Stack
    us-east-2 HVM (amd64) ami-0182b60a3ad85b892 Launch Stack
    HVM (arm64) ami-0542b7e59836f3aae Launch Stack
    us-west-1 HVM (amd64) ami-0ed4abdbf0f66fbed Launch Stack
    HVM (arm64) ami-0e824a3c994b9b70d Launch Stack
    us-west-2 HVM (amd64) ami-046331efb81934ea6 Launch Stack
    HVM (arm64) ami-07c9946fa9099df6b Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
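
    With the AWS CLI, for example, the Ignition JSON is passed through the --user-data flag when launching an instance (a minimal sketch; the AMI ID and instance type are placeholders, and a fuller launch example appears in the manual setup section below):

    aws ec2 run-instances --image-id <ami-id> --instance-type t3.medium --user-data file://ignition.json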

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
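
    Butane also accepts flags such as --pretty (human-readable output) and --strict (fail on warnings), which help catch typos early; a sketch using the same container image:

    docker run --rm -i quay.io/coreos/butane:latest --pretty --strict < cl.yaml > ignition.json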
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
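
    Note that on Nitro-based instance types, EBS volumes and instance store disks show up as NVMe devices (for example /dev/nvme1n1) rather than /dev/xvdb. A sketch of the filesystem part of the config adapted for such an instance, assuming the extra disk is the second NVMe device (check lsblk on the instance for the actual name):

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        # Assumed device name on a Nitro instance; verify with lsblk first.
        - device: /dev/nvme1n1
          format: ext4
          wipe_filesystem: true
          label: ephemeral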

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.
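
    As a sketch, adding a key for the core user in a Butane Config looks like this (the key itself is a placeholder; several keys can be listed):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            # Replace with your own public key(s).
            - ssh-ed25519 AAAA... you@example.com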

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.
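
    With the AWS CLI this amounts to creating a new stack per cluster (a sketch; the template URL and parameters are placeholders for the template linked above):

    aws cloudformation create-stack \
      --stack-name flatcar-cluster-2 \
      --template-url <S3-template-URL> \
      --parameters ParameterKey=<name>,ParameterValue=<value>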

    Manual setup

    TL;DR: launch three instances of ami-0457a742c213787a4 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step console instructions are below; an equivalent AWS CLI sketch follows the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
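
    The same security group can be created with the AWS CLI (a sketch; add --vpc-id and use security group IDs instead of names if you are not working in the default VPC):

    # Create the group
    aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances"

    # Allow SSH from anywhere
    aws ec2 authorize-security-group-ingress \
      --group-name flatcar-testing --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Allow the etcd ports between members of the group itself
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress \
        --group-name flatcar-testing --protocol tcp --port "$port" \
        --source-group flatcar-testing
    done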

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group; an AWS CLI equivalent is sketched after the list.

    • Open the quick launch wizard to boot: Alpha ami-0457a742c213787a4 (amd64), Beta ami-09f2c4d0de648da87 (amd64), or Stable ami-06253dd5181f9fe62 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys set in your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
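
    If you prefer the CLI over the wizard, a roughly equivalent launch looks like this (a sketch; substitute the AMI for your channel and region, your own key pair name, and an instance type of your choice):

    aws ec2 run-instances \
      --image-id ami-0457a742c213787a4 \
      --count 3 \
      --instance-type t3.medium \
      --key-name <your-key-pair> \
      --security-groups flatcar-testing \
      --user-data file://ignition.json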

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and check it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.
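
    A sketch of those steps with the AWS CLI (the bucket name is a placeholder; the bucket must live in the region you import into):

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://<your-bucket>/

    aws ec2 import-snapshot \
      --description "Flatcar Container Linux" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=<your-bucket>,S3Key=flatcar_production_ami_vmdk_image.vmdk}"

    # Poll until the task reports 'completed' and note the resulting snapshot ID
    aws ec2 describe-import-snapshot-tasks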

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI image from it. For it to work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
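
    The same step can be done from the CLI with register-image (a sketch; the snapshot ID is the one returned by the import task, and the image name is just an example):

    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=<snapshot-id>}"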

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@example.com"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (you may want to add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
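
    The addresses come from the ip-addresses output defined earlier and can be printed again at any time:

    terraform output ip-addresses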

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.