Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be unpublished in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible after the AMI has been unpublished.
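
    If you pin deployments to a specific AMI (for example in an autoscaling group), it can be useful to check whether that AMI is still published before relying on it. A minimal sketch using the AWS CLI, with the current us-east-1 Alpha (amd64) AMI from the tables below standing in for your own pinned ID:

    # Check whether a given Flatcar AMI is still published in a region.
    # Substitute your own pinned AMI ID and region; an error or empty result
    # means the image is no longer available.
    aws ec2 describe-images \
      --region us-east-1 \
      --image-ids ami-0073d390d8d56145c \
      --query 'Images[].[ImageId,Name,CreationDate]' \
      --output table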

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
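
    If you do decide to turn automatic updates off on a running instance, one way to do it (a sketch, assuming the default update-engine setup described in the update documentation) is to set the update server to disabled and restart the update engine:

    # Disable automatic updates on a running Flatcar instance.
    echo "SERVER=disabled" | sudo tee /etc/flatcar/update.conf
    sudo systemctl restart update-engine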

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3913.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0d8d9b2f904701f72 Launch Stack
    HVM (arm64) ami-0de52e196a16ee369 Launch Stack
    ap-east-1 HVM (amd64) ami-0ea13fff233663492 Launch Stack
    HVM (arm64) ami-08dd86a36e9027bca Launch Stack
    ap-northeast-1 HVM (amd64) ami-02c0bdaa11789f1a7 Launch Stack
    HVM (arm64) ami-083648b2beb648f8b Launch Stack
    ap-northeast-2 HVM (amd64) ami-02b7c7941715c0d9a Launch Stack
    HVM (arm64) ami-032d2547a6875d76f Launch Stack
    ap-south-1 HVM (amd64) ami-00f3c39585c6a9a2c Launch Stack
    HVM (arm64) ami-0e9061766a9c11321 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0f87ea4a52c8977da Launch Stack
    HVM (arm64) ami-039da2d4add936dd1 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0cacbcc0420701383 Launch Stack
    HVM (arm64) ami-0f33903c0c88d9f31 Launch Stack
    ap-southeast-3 HVM (amd64) ami-09601aaa2257ce4ed Launch Stack
    HVM (arm64) ami-0e999c379aeb054f9 Launch Stack
    ca-central-1 HVM (amd64) ami-0024bbf81bc510b6a Launch Stack
    HVM (arm64) ami-0ba7affafa64fbc30 Launch Stack
    eu-central-1 HVM (amd64) ami-06c844fb79b815362 Launch Stack
    HVM (arm64) ami-0594b63e4208e886a Launch Stack
    eu-north-1 HVM (amd64) ami-0510c241b2d249f61 Launch Stack
    HVM (arm64) ami-02eec1b241ff88f61 Launch Stack
    eu-south-1 HVM (amd64) ami-03c08dd19b3887b0b Launch Stack
    HVM (arm64) ami-0707b2d2cc13a268b Launch Stack
    eu-west-1 HVM (amd64) ami-0b31c7a0bead5f4e2 Launch Stack
    HVM (arm64) ami-0d74ebdd8ff763f08 Launch Stack
    eu-west-2 HVM (amd64) ami-0df45fba831d0bf3f Launch Stack
    HVM (arm64) ami-0adf6d7134a982ad9 Launch Stack
    eu-west-3 HVM (amd64) ami-02af0fc84f5cf97c2 Launch Stack
    HVM (arm64) ami-08c2d3278b11eb11c Launch Stack
    me-south-1 HVM (amd64) ami-0b7c14cede259ec8e Launch Stack
    HVM (arm64) ami-08519650a6351d5f9 Launch Stack
    sa-east-1 HVM (amd64) ami-08b905c338dc8c2d7 Launch Stack
    HVM (arm64) ami-031ad04ef00581d29 Launch Stack
    us-east-1 HVM (amd64) ami-0073d390d8d56145c Launch Stack
    HVM (arm64) ami-00f8af3d08a80cede Launch Stack
    us-east-2 HVM (amd64) ami-0b7ffcdb2c47700c2 Launch Stack
    HVM (arm64) ami-0315f2cc84650b449 Launch Stack
    us-west-1 HVM (amd64) ami-0f833d430aa795c2b Launch Stack
    HVM (arm64) ami-01650b05603d97c65 Launch Stack
    us-west-2 HVM (amd64) ami-0debf357b249e0406 Launch Stack
    HVM (arm64) ami-08d2fa57a66f99829 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3874.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0b54f9753a6c293b7 Launch Stack
    HVM (arm64) ami-011c1fe05b04b7e1a Launch Stack
    ap-east-1 HVM (amd64) ami-04dedf7d7e920c86a Launch Stack
    HVM (arm64) ami-011c5571b070ec549 Launch Stack
    ap-northeast-1 HVM (amd64) ami-03fc2e43f06cc0fe7 Launch Stack
    HVM (arm64) ami-0265d3a2c784f0244 Launch Stack
    ap-northeast-2 HVM (amd64) ami-054fb2a4a7abd97b1 Launch Stack
    HVM (arm64) ami-0581705c1a9ca23d3 Launch Stack
    ap-south-1 HVM (amd64) ami-0385004f0770180cc Launch Stack
    HVM (arm64) ami-0bcb1451860a9afd4 Launch Stack
    ap-southeast-1 HVM (amd64) ami-08dc39d4295c4b41d Launch Stack
    HVM (arm64) ami-066622ac73ec7e608 Launch Stack
    ap-southeast-2 HVM (amd64) ami-019ae096e1b7f4e93 Launch Stack
    HVM (arm64) ami-0a863ce23d8432747 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0a63513723a961275 Launch Stack
    HVM (arm64) ami-0db88abe8ea0e9a2f Launch Stack
    ca-central-1 HVM (amd64) ami-0d2e8a68650ed2983 Launch Stack
    HVM (arm64) ami-0126c4a7b06ba57fa Launch Stack
    eu-central-1 HVM (amd64) ami-01932a9742f43bf3a Launch Stack
    HVM (arm64) ami-05f37bac716a74c93 Launch Stack
    eu-north-1 HVM (amd64) ami-0d68735484dd0eaaf Launch Stack
    HVM (arm64) ami-02426508fa62c3e79 Launch Stack
    eu-south-1 HVM (amd64) ami-08ae6d36a9a44b150 Launch Stack
    HVM (arm64) ami-01a44f383180a4ee6 Launch Stack
    eu-west-1 HVM (amd64) ami-08f4005584b2a5835 Launch Stack
    HVM (arm64) ami-02eff28fcb02b9da1 Launch Stack
    eu-west-2 HVM (amd64) ami-0ebfb5cd54cd328f8 Launch Stack
    HVM (arm64) ami-02e711041eb2ec5ee Launch Stack
    eu-west-3 HVM (amd64) ami-04418bdd583ee4c47 Launch Stack
    HVM (arm64) ami-03f6eb93ba1bc1677 Launch Stack
    me-south-1 HVM (amd64) ami-09f9efde076b9e48e Launch Stack
    HVM (arm64) ami-03c2c1ef7ce1f21a0 Launch Stack
    sa-east-1 HVM (amd64) ami-0774f870b1479c446 Launch Stack
    HVM (arm64) ami-079da394ecac33bb7 Launch Stack
    us-east-1 HVM (amd64) ami-0c0fd16b4c8727c9c Launch Stack
    HVM (arm64) ami-005afce53a9f1c919 Launch Stack
    us-east-2 HVM (amd64) ami-09db16467af1168ee Launch Stack
    HVM (arm64) ami-05f73923d590a24b8 Launch Stack
    us-west-1 HVM (amd64) ami-0d5d50e5e471988a5 Launch Stack
    HVM (arm64) ami-0cbc384b500244250 Launch Stack
    us-west-2 HVM (amd64) ami-0d143829c8169ce8c Launch Stack
    HVM (arm64) ami-0e3c8c38037c9f0ce Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3815.2.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0193a879d3930675b Launch Stack
    HVM (arm64) ami-020137f9e7dc47cda Launch Stack
    ap-east-1 HVM (amd64) ami-07d764c319727356c Launch Stack
    HVM (arm64) ami-0e3ee09bb1809a03e Launch Stack
    ap-northeast-1 HVM (amd64) ami-00415c3c3d776d0e3 Launch Stack
    HVM (arm64) ami-076ed75e62b746106 Launch Stack
    ap-northeast-2 HVM (amd64) ami-00b71e34c15a74e64 Launch Stack
    HVM (arm64) ami-0cc6138fa89d9138e Launch Stack
    ap-south-1 HVM (amd64) ami-06ccdd0d472ba821e Launch Stack
    HVM (arm64) ami-03592844e6931a40b Launch Stack
    ap-southeast-1 HVM (amd64) ami-043384a44c7820177 Launch Stack
    HVM (arm64) ami-0f8f39c20ff9f17cc Launch Stack
    ap-southeast-2 HVM (amd64) ami-0fbcbdbd3c4a731cc Launch Stack
    HVM (arm64) ami-06f67d7dd077bbb9f Launch Stack
    ap-southeast-3 HVM (amd64) ami-0fc3fee63f2201ee2 Launch Stack
    HVM (arm64) ami-00928c0fa8fca714d Launch Stack
    ca-central-1 HVM (amd64) ami-097a4b15e7f630bc1 Launch Stack
    HVM (arm64) ami-00d3f4f341ef9ce85 Launch Stack
    eu-central-1 HVM (amd64) ami-0f647ea479eddd3c8 Launch Stack
    HVM (arm64) ami-00b5a52eb41681ea8 Launch Stack
    eu-north-1 HVM (amd64) ami-0fcc11b92cf34f1f8 Launch Stack
    HVM (arm64) ami-05f0ca7ef9450fc63 Launch Stack
    eu-south-1 HVM (amd64) ami-0156eda30b2701c86 Launch Stack
    HVM (arm64) ami-009b279227bec35b9 Launch Stack
    eu-west-1 HVM (amd64) ami-09e50d505f129f7ab Launch Stack
    HVM (arm64) ami-0aecc7e45ab6eca14 Launch Stack
    eu-west-2 HVM (amd64) ami-0095e8baa191d1c64 Launch Stack
    HVM (arm64) ami-09e23f9a2d483e74f Launch Stack
    eu-west-3 HVM (amd64) ami-0cbd907c900d3cea6 Launch Stack
    HVM (arm64) ami-07e1eb2c90d6bad92 Launch Stack
    me-south-1 HVM (amd64) ami-095d69d8dacfbf98b Launch Stack
    HVM (arm64) ami-01480083740c3ec73 Launch Stack
    sa-east-1 HVM (amd64) ami-0fa4bfdf03df33f30 Launch Stack
    HVM (arm64) ami-089a2fa262b5166ea Launch Stack
    us-east-1 HVM (amd64) ami-0bb44ddd0a3247f03 Launch Stack
    HVM (arm64) ami-060584149cf588a2d Launch Stack
    us-east-2 HVM (amd64) ami-0a413c0bdfd19a373 Launch Stack
    HVM (arm64) ami-0c8c525979aaa45b5 Launch Stack
    us-west-1 HVM (amd64) ami-02e6804a88242edaf Launch Stack
    HVM (arm64) ami-042f356bed47508a8 Launch Stack
    us-west-2 HVM (amd64) ami-09ca3caffdb05fd4e Launch Stack
    HVM (arm64) ami-0fd77f46f8e41388f Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
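
    As a sketch of the EC2 API route, the AWS CLI can pass an Ignition file (produced from a Butane Config as shown below) as user data; the AMI, key pair, security group, and subnet IDs here are placeholders to replace with your own:

    # Launch one instance with an Ignition config as user data (placeholder IDs).
    aws ec2 run-instances \
      --image-id ami-0073d390d8d56145c \
      --instance-type t3.medium \
      --key-name my-key \
      --security-group-ids sg-0123456789abcdef0 \
      --subnet-id subnet-0123456789abcdef0 \
      --user-data file://ignition.json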

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
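
    If you want the transpiler to fail on questionable entries instead of only warning, butane also accepts --strict (and --pretty for readable output):

    docker run --rm -i quay.io/coreos/butane:latest --pretty --strict < cl.yaml > ignition.json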
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type . Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
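
    Note that device naming differs between instance generations: on newer Nitro-based instance types, EBS and instance-store volumes show up as NVMe devices (for example /dev/nvme1n1) rather than /dev/xvdb. A quick way to see what the booted machine actually exposes:

    # On the instance: list block devices, labels and mount points.
    lsblk -o NAME,SIZE,TYPE,LABEL,MOUNTPOINT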

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
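
    If the matching private key is not loaded in your ssh-agent, point ssh at it explicitly (the key path is a placeholder):

    ssh -i ~/.ssh/my-ec2-key core@<ip address>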
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0073d390d8d56145c (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step console instructions follow, with an equivalent AWS CLI sketch after them.

    Note: This step is only needed once.

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
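
    If you prefer the command line, roughly the same security group can be created with the AWS CLI. This is a sketch that targets your default VPC and mirrors the console steps above:

    # Create the security group and keep its ID.
    SG_ID=$(aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --query GroupId --output text)

    # Allow SSH from anywhere.
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Allow the etcd ports between members of this security group.
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
        --protocol tcp --port "$port" --source-group "$SG_ID"
    done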

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0073d390d8d56145c (amd64), Beta ami-0c0fd16b4c8727c9c (amd64), or Stable ami-0bb44ddd0a3247f03 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys in your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file will be at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (it’s available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and check it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
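
    In outline, and assuming you already have an S3 bucket and the vmimport service role that VM Import/Export requires, the upload and snapshot import look roughly like this (bucket name and task ID are placeholders):

    # Decompress and upload the VMDK to S3.
    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/flatcar_production_ami_vmdk_image.vmdk

    # Start the snapshot import and note the returned ImportTaskId.
    aws ec2 import-snapshot \
      --description "Flatcar Container Linux" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"

    # Poll the task until it reports completed, then read the SnapshotId from the output.
    aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-0123456789abcdef0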

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
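
    The same registration can also be done from the CLI; a sketch with a placeholder snapshot ID, using the root device name and virtualization type mentioned above:

    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"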

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics .

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
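
    Once terraform apply has finished, the same addresses can be printed again at any time with:

    terraform output ip-addresses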
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... user@example.com"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: 
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
               # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.