Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or the user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be unpublished in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible after the AMI has been unpublished.
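
    Because pinned AMIs eventually disappear, it can be safer to resolve the newest AMI at deploy time rather than hard-coding an ID. As a minimal sketch, the AWS CLI query below mirrors the owner and name filters used by the Terraform module at the end of this document; treat the filters as assumptions to verify before relying on them:

    # Resolve the most recent Flatcar Stable amd64 AMI in the current region.
    # The owner/name filters mirror the "aws_ami" data source shown later.
    aws ec2 describe-images \
      --owners aws-marketplace \
      --filters "Name=name,Values=Flatcar-stable-*" "Name=architecture,Values=x86_64" \
      --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
      --output text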

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3277.0.0.

    View as json feed: amd64 arm64
    EC2 Region        AMI Type     AMI ID                 CloudFormation
    af-south-1        HVM (amd64)  ami-0d5e3826d9dff0bc5  Launch Stack
    af-south-1        HVM (arm64)  ami-0da11e6f2217f488c  Launch Stack
    ap-east-1         HVM (amd64)  ami-00ffdd8f3bc98e1a8  Launch Stack
    ap-east-1         HVM (arm64)  ami-04bc274435d967fba  Launch Stack
    ap-northeast-1    HVM (amd64)  ami-06c9c0d66ebe6d7b9  Launch Stack
    ap-northeast-1    HVM (arm64)  ami-065d4470c77fa540a  Launch Stack
    ap-northeast-2    HVM (amd64)  ami-0f666f6e98c047a88  Launch Stack
    ap-northeast-2    HVM (arm64)  ami-0a3c4b00bd83cb678  Launch Stack
    ap-south-1        HVM (amd64)  ami-077ea532a0ca41315  Launch Stack
    ap-south-1        HVM (arm64)  ami-092ff89171662210b  Launch Stack
    ap-southeast-1    HVM (amd64)  ami-0b54ba00a66d5f4a2  Launch Stack
    ap-southeast-1    HVM (arm64)  ami-0b075c36bee344e15  Launch Stack
    ap-southeast-2    HVM (amd64)  ami-0e4ac2b22c984fbff  Launch Stack
    ap-southeast-2    HVM (arm64)  ami-06aa64b85092cb0ec  Launch Stack
    ap-southeast-3    HVM (amd64)  ami-0aa968698bbd44370  Launch Stack
    ap-southeast-3    HVM (arm64)  ami-0afdf5f8cafa2f1dc  Launch Stack
    ca-central-1      HVM (amd64)  ami-0e511c3c2b5891ded  Launch Stack
    ca-central-1      HVM (arm64)  ami-0f75828a02530bd06  Launch Stack
    eu-central-1      HVM (amd64)  ami-04f9021842fbc09c5  Launch Stack
    eu-central-1      HVM (arm64)  ami-0bc3900b508d92759  Launch Stack
    eu-north-1        HVM (amd64)  ami-06227c22945979134  Launch Stack
    eu-north-1        HVM (arm64)  ami-0d0d14dcafa422dbe  Launch Stack
    eu-south-1        HVM (amd64)  ami-0cb561d4e01486c64  Launch Stack
    eu-south-1        HVM (arm64)  ami-0348ad0889e666de7  Launch Stack
    eu-west-1         HVM (amd64)  ami-0fb9b1672ad10340d  Launch Stack
    eu-west-1         HVM (arm64)  ami-063b990ccc0babe50  Launch Stack
    eu-west-2         HVM (amd64)  ami-06815207968b4d1bc  Launch Stack
    eu-west-2         HVM (arm64)  ami-04207e27ead65f689  Launch Stack
    eu-west-3         HVM (amd64)  ami-097c11a0ad191ec27  Launch Stack
    eu-west-3         HVM (arm64)  ami-0a8f28fec4d07966f  Launch Stack
    me-south-1        HVM (amd64)  ami-0ad47823e689627c0  Launch Stack
    me-south-1        HVM (arm64)  ami-0bd8a986555da1860  Launch Stack
    sa-east-1         HVM (amd64)  ami-0f801d8383ac92ead  Launch Stack
    sa-east-1         HVM (arm64)  ami-0516e8f066384d247  Launch Stack
    us-east-1         HVM (amd64)  ami-0b0265dd75f948493  Launch Stack
    us-east-1         HVM (arm64)  ami-0a449ffea171f319d  Launch Stack
    us-east-2         HVM (amd64)  ami-06d7a1933636c7113  Launch Stack
    us-east-2         HVM (arm64)  ami-01d7897c191c38e72  Launch Stack
    us-west-1         HVM (amd64)  ami-0566eb9bea68e442e  Launch Stack
    us-west-1         HVM (arm64)  ami-05ac20d0bc1f11101  Launch Stack
    us-west-2         HVM (amd64)  ami-0ec588fe31fff1e7b  Launch Stack
    us-west-2         HVM (arm64)  ami-03dc060c26ab765dc  Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3227.1.1.

    View as json feed: amd64 arm64
    EC2 Region        AMI Type     AMI ID                 CloudFormation
    af-south-1        HVM (amd64)  ami-0f1b30b18694f421b  Launch Stack
    af-south-1        HVM (arm64)  ami-059534a0770c95deb  Launch Stack
    ap-east-1         HVM (amd64)  ami-0b2800539963c171f  Launch Stack
    ap-east-1         HVM (arm64)  ami-092e687e2e0ac623a  Launch Stack
    ap-northeast-1    HVM (amd64)  ami-01f66c7ac9808a49f  Launch Stack
    ap-northeast-1    HVM (arm64)  ami-0d93f5ca93c068602  Launch Stack
    ap-northeast-2    HVM (amd64)  ami-07b88b1faff108cbc  Launch Stack
    ap-northeast-2    HVM (arm64)  ami-0cf70421b2ca6b7c3  Launch Stack
    ap-south-1        HVM (amd64)  ami-031dade042a83ccf3  Launch Stack
    ap-south-1        HVM (arm64)  ami-0f4603ba8d212a499  Launch Stack
    ap-southeast-1    HVM (amd64)  ami-085f17865f2e28c58  Launch Stack
    ap-southeast-1    HVM (arm64)  ami-08c8cd603f815a098  Launch Stack
    ap-southeast-2    HVM (amd64)  ami-021590cd2ef89db3b  Launch Stack
    ap-southeast-2    HVM (arm64)  ami-0a6e6ee14ccea78bb  Launch Stack
    ap-southeast-3    HVM (amd64)  ami-01138f7b70a395eee  Launch Stack
    ap-southeast-3    HVM (arm64)  ami-0a3652c1fc1af570e  Launch Stack
    ca-central-1      HVM (amd64)  ami-022c1911fd4edb5a5  Launch Stack
    ca-central-1      HVM (arm64)  ami-0bab1f99752be1bce  Launch Stack
    eu-central-1      HVM (amd64)  ami-029865b8be72161c7  Launch Stack
    eu-central-1      HVM (arm64)  ami-05c3f0e143ec7440a  Launch Stack
    eu-north-1        HVM (amd64)  ami-05e825c6209671504  Launch Stack
    eu-north-1        HVM (arm64)  ami-0ed837731298d4dbe  Launch Stack
    eu-south-1        HVM (amd64)  ami-0a1381a0592d4bac5  Launch Stack
    eu-south-1        HVM (arm64)  ami-0a08171a1a4618056  Launch Stack
    eu-west-1         HVM (amd64)  ami-0db7908a52ac5bb82  Launch Stack
    eu-west-1         HVM (arm64)  ami-0cf4373dfe9457c5c  Launch Stack
    eu-west-2         HVM (amd64)  ami-0860277ebbcd3eb2b  Launch Stack
    eu-west-2         HVM (arm64)  ami-08a62030003e3f312  Launch Stack
    eu-west-3         HVM (amd64)  ami-02566d1f9005c036c  Launch Stack
    eu-west-3         HVM (arm64)  ami-0ba37bae1e55b2236  Launch Stack
    me-south-1        HVM (amd64)  ami-0ec7952c845eff2c6  Launch Stack
    me-south-1        HVM (arm64)  ami-0c6d0953259e917f2  Launch Stack
    sa-east-1         HVM (amd64)  ami-0410f581ef61fe2dd  Launch Stack
    sa-east-1         HVM (arm64)  ami-02b6e61c12a5df241  Launch Stack
    us-east-1         HVM (amd64)  ami-0ba32e3ed2d0ad2eb  Launch Stack
    us-east-1         HVM (arm64)  ami-081a9007685cf58da  Launch Stack
    us-east-2         HVM (amd64)  ami-022a4c0070e7f145d  Launch Stack
    us-east-2         HVM (arm64)  ami-043ed4d8acd296542  Launch Stack
    us-west-1         HVM (amd64)  ami-032255e1e16ec6854  Launch Stack
    us-west-1         HVM (arm64)  ami-018783c8dc1e77fa6  Launch Stack
    us-west-2         HVM (amd64)  ami-0ac302abd535072db  Launch Stack
    us-west-2         HVM (arm64)  ami-05948530914da49d4  Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3139.2.3.

    View as json feed: amd64 arm64
    EC2 Region        AMI Type     AMI ID                 CloudFormation
    af-south-1        HVM (amd64)  ami-01ff4711be73fa0e0  Launch Stack
    af-south-1        HVM (arm64)  ami-0b576003c8a794da4  Launch Stack
    ap-east-1         HVM (amd64)  ami-0a6a730ad6c30198c  Launch Stack
    ap-east-1         HVM (arm64)  ami-0ee21291ac9282d76  Launch Stack
    ap-northeast-1    HVM (amd64)  ami-0466d4dfea1e3ab4b  Launch Stack
    ap-northeast-1    HVM (arm64)  ami-0a042be0d018d745e  Launch Stack
    ap-northeast-2    HVM (amd64)  ami-099f7f332ac4baa7d  Launch Stack
    ap-northeast-2    HVM (arm64)  ami-0c9ab83d65fd59013  Launch Stack
    ap-south-1        HVM (amd64)  ami-0c4534ef6c669a59e  Launch Stack
    ap-south-1        HVM (arm64)  ami-0bed504006528af7d  Launch Stack
    ap-southeast-1    HVM (amd64)  ami-04393e1918d4825c6  Launch Stack
    ap-southeast-1    HVM (arm64)  ami-08abcddf0b90bd291  Launch Stack
    ap-southeast-2    HVM (amd64)  ami-0f7defc3de90c81b6  Launch Stack
    ap-southeast-2    HVM (arm64)  ami-05145d59af3b5bba7  Launch Stack
    ap-southeast-3    HVM (amd64)  ami-0ab775937842b26d7  Launch Stack
    ap-southeast-3    HVM (arm64)  ami-0f94cdbe5ec4989bb  Launch Stack
    ca-central-1      HVM (amd64)  ami-0dc074b2e01714300  Launch Stack
    ca-central-1      HVM (arm64)  ami-08ba2732622c9dc95  Launch Stack
    eu-central-1      HVM (amd64)  ami-0b996e33cb5026db9  Launch Stack
    eu-central-1      HVM (arm64)  ami-09d2b23cf9140f8ed  Launch Stack
    eu-north-1        HVM (amd64)  ami-09bd5f0aed17dbfd8  Launch Stack
    eu-north-1        HVM (arm64)  ami-06e2fc6c0028cc6ce  Launch Stack
    eu-south-1        HVM (amd64)  ami-03816514b95167be2  Launch Stack
    eu-south-1        HVM (arm64)  ami-0403dc9f4c24264a4  Launch Stack
    eu-west-1         HVM (amd64)  ami-0b5cc435ab27e968b  Launch Stack
    eu-west-1         HVM (arm64)  ami-09219966d6788d68e  Launch Stack
    eu-west-2         HVM (amd64)  ami-04e8b33f7ff1aea2d  Launch Stack
    eu-west-2         HVM (arm64)  ami-001f61e835cc893c4  Launch Stack
    eu-west-3         HVM (amd64)  ami-0d90fadf65a6035b7  Launch Stack
    eu-west-3         HVM (arm64)  ami-0af5456f71b763758  Launch Stack
    me-south-1        HVM (amd64)  ami-0ad7a85bd65e24720  Launch Stack
    me-south-1        HVM (arm64)  ami-0b32724086c0b0752  Launch Stack
    sa-east-1         HVM (amd64)  ami-0afe10b3060ef0032  Launch Stack
    sa-east-1         HVM (arm64)  ami-025cf51a4f2163753  Launch Stack
    us-east-1         HVM (amd64)  ami-07fe947197276d27e  Launch Stack
    us-east-1         HVM (arm64)  ami-0815b9335a2ee0674  Launch Stack
    us-east-2         HVM (amd64)  ami-0badfc5c068e0d180  Launch Stack
    us-east-2         HVM (arm64)  ami-05aad3e55290fd6b3  Launch Stack
    us-west-1         HVM (amd64)  ami-0d5e40aef127c6db6  Launch Stack
    us-west-1         HVM (arm64)  ami-0650892eed1cbd373  Launch Stack
    us-west-2         HVM (amd64)  ami-03967149a3e8ae59d  Launch Stack
    us-west-2         HVM (arm64)  ami-0fd43bf24cd594f66  Launch Stack

    AWS China AMIs maintained by Giant Swarm

    The following AMIs are not part of the official Flatcar Container Linux release process and may lag behind (query version).

    View as json feed: amd64
    EC2 Region AMI Type AMI ID CloudFormation

    CloudFormation will launch a cluster of Flatcar Container Linux machines with a security and autoscaling group.

    Container Linux Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Container Linux Configs (CLC). These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this CLC YAML config will start an NGINX Docker container:

    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target        
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i ghcr.io/flatcar-linux/ct:latest -platform ec2 > ignition.json
    
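
    Before handing the result to EC2, it is worth a quick sanity check that the transpiler produced valid JSON with an Ignition version field (this assumes jq is installed; any JSON validator works):

    # Confirm the output parses and report the Ignition spec version.
    jq .ignition.version ignition.json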

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Container Linux Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    storage:
      filesystems:
        - mount:
            device: /dev/xvdb
            format: ext4
            wipe_filesystem: true
    
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/xvdb
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target        
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux .
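
    Note that many newer instance types expose both EBS volumes and instance storage as NVMe devices (e.g. /dev/nvme1n1) rather than /dev/xvdb, so the device name in the config above may need adjusting. A quick way to check the actual names on a booted instance:

    # List block devices, their sizes, and current mount points.
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT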

    Adding more machines

    To add more instances to the cluster, just launch more with the same Container Linux Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console or add keys/passwords via your Container Linux Config in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.
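
    Each additional cluster is simply another stack created from the same template under a new name. A hedged sketch with the AWS CLI, where TEMPLATE_URL stands in for the S3 template linked above; any parameters the template requires would be passed with --parameters:

    # Create a second cluster from the same template under a new stack name.
    # TEMPLATE_URL is a placeholder for the S3 template URL mentioned above.
    aws cloudformation create-stack \
      --stack-name flatcar-cluster-2 \
      --template-url "$TEMPLATE_URL"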

    Manual setup

    TL;DR: launch three instances of ami-0b0265dd75f948493 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    Ports 2379, 2380, 4001, and 7001 need to be open between servers in the etcd cluster. Step-by-step instructions follow.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another. (An equivalent AWS CLI sketch follows the steps below.)

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
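
    The same group and rules can be created from the command line. This is a sketch assuming your account has a default VPC; double-check the resulting rules in the console:

    # Create the security group and capture its ID.
    SG_ID=$(aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --query GroupId --output text)

    # Allow SSH from anywhere.
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0

    # Allow the etcd ports between members of this same security group.
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
        --protocol tcp --port "$port" --source-group "$SG_ID"
    done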

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group. (An equivalent AWS CLI invocation is sketched after the steps below.)

    • Open the quick launch wizard to boot: Alpha ami-0b0265dd75f948493 (amd64), Beta ami-0ba32e3ed2d0ad2eb (amd64), or Stable ami-07fe947197276d27e (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys in your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!
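
    The wizard steps above collapse into a single AWS CLI call. A sketch assuming the Alpha amd64 AMI for us-east-1 from the table above, the flatcar-testing group in a default VPC, and an existing key pair (KEY_NAME is a placeholder):

    # Launch three clustered instances sharing the same Ignition user data.
    aws ec2 run-instances \
      --image-id ami-0b0265dd75f948493 \
      --count 3 \
      --instance-type t3.medium \
      --key-name "$KEY_NAME" \
      --security-groups flatcar-testing \
      --user-data file://ignition.json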

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. To make it work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
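
    The same flow can be driven from the AWS CLI. A sketch, assuming the uncompressed VMDK was uploaded to an S3 bucket you own (bucket, key, and snapshot ID below are placeholders) and using the root device and virtualization settings described above:

    # Import the uploaded VMDK as an EBS snapshot; poll the returned task with
    # 'aws ec2 describe-import-snapshot-tasks' until it completes.
    aws ec2 import-snapshot \
      --description "Flatcar VMDK image" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"

    # Register an AMI from the finished snapshot (replace the placeholder ID).
    aws ec2 register-image \
      --name flatcar-from-vmdk \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"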

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Known issues

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... user@example.com"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a cl/machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys: ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}          
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@<IP address> with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
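
    The printed addresses can be shown again at any time without re-applying:

    terraform output ip-addresses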

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.