Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux Matrix channel or the user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are un-published in regular garbage collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible after the AMI has been un-published.
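    As a sketch of how you might verify that a pinned AMI is still published, the following uses an example AMI ID (the current us-east-1 Stable amd64 image from the table below); `describe-images` returns an empty result once the image has been garbage-collected:

```shell
# Compute the rough garbage-collection cutoff (GNU date, as on Linux hosts).
CUTOFF="$(date -d '-9 months' +%Y-%m-%d)"
echo "Releases published before ${CUTOFF} may already be un-published."

# With AWS credentials configured, check a pinned AMI; an empty result
# means the image is no longer available:
# aws ec2 describe-images --region us-east-1 \
#   --image-ids ami-058acfe29618e62c3 --query 'Images[].ImageId' --output text
```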

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4459.2.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-04fb79e7db2b51d61 Launch Stack
    HVM (arm64) ami-030a60387a5f23d8f Launch Stack
    ap-east-1 HVM (amd64) ami-00c019332004f55ef Launch Stack
    HVM (arm64) ami-061b3faa33cfaa5a5 Launch Stack
    ap-northeast-1 HVM (amd64) ami-02327a6b1f1c9ad1f Launch Stack
    HVM (arm64) ami-0f966825e0c9c80f2 Launch Stack
    ap-northeast-2 HVM (amd64) ami-046a7df97ea2fe6c9 Launch Stack
    HVM (arm64) ami-0db07fbb5c491ed93 Launch Stack
    ap-south-1 HVM (amd64) ami-0ad6f87600d83491b Launch Stack
    HVM (arm64) ami-0ae88ea923943895e Launch Stack
    ap-southeast-1 HVM (amd64) ami-0a4c0978acfdd8e0c Launch Stack
    HVM (arm64) ami-01cbfa34145f2f165 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0b4e1a1389f5f02bc Launch Stack
    HVM (arm64) ami-0b5a897a3dcb55832 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0d0e12ae9a852a28f Launch Stack
    HVM (arm64) ami-0f4459d896df9706d Launch Stack
    ca-central-1 HVM (amd64) ami-0c44b846cceebbce4 Launch Stack
    HVM (arm64) ami-0753d366a2c9e34ff Launch Stack
    eu-central-1 HVM (amd64) ami-0c7a35703ddf4a9ef Launch Stack
    HVM (arm64) ami-0fe17df637b3bd8f9 Launch Stack
    eu-north-1 HVM (amd64) ami-084076fb0b73dc5e3 Launch Stack
    HVM (arm64) ami-0731e095bf1a61395 Launch Stack
    eu-south-1 HVM (amd64) ami-080b52787d7e92374 Launch Stack
    HVM (arm64) ami-04a5efc4c38556d91 Launch Stack
    eu-west-1 HVM (amd64) ami-08deafd82439a77a3 Launch Stack
    HVM (arm64) ami-0add9410c84281261 Launch Stack
    eu-west-2 HVM (amd64) ami-093f5910feece191c Launch Stack
    HVM (arm64) ami-0fe18e2d4d1f63af8 Launch Stack
    eu-west-3 HVM (amd64) ami-0714da32012034958 Launch Stack
    HVM (arm64) ami-0a52395def5d0e8bc Launch Stack
    me-south-1 HVM (amd64) ami-0aadcc0aa1e005a6c Launch Stack
    HVM (arm64) ami-0edf76ac20d9b239e Launch Stack
    sa-east-1 HVM (amd64) ami-06bc49103a5f52c14 Launch Stack
    HVM (arm64) ami-03f06d05b7a9b1882 Launch Stack
    us-east-1 HVM (amd64) ami-058acfe29618e62c3 Launch Stack
    HVM (arm64) ami-0440e51efd9117810 Launch Stack
    us-east-2 HVM (amd64) ami-08ed43a2d0dc320aa Launch Stack
    HVM (arm64) ami-0d447fe50db394937 Launch Stack
    us-west-1 HVM (amd64) ami-0de644e5adfab8c8e Launch Stack
    HVM (arm64) ami-0593db4ec72a8bcb7 Launch Stack
    us-west-2 HVM (amd64) ami-0b01dd7526f9a13b0 Launch Stack
    HVM (arm64) ami-076be1c87b2734a0a Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4459.1.2.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-023f97fba12045143 Launch Stack
    HVM (arm64) ami-097130e3f94905fda Launch Stack
    ap-east-1 HVM (amd64) ami-0fd2c549ed0e34643 Launch Stack
    HVM (arm64) ami-02c03323b8f4933d3 Launch Stack
    ap-northeast-1 HVM (amd64) ami-05ee7efe7ea728527 Launch Stack
    HVM (arm64) ami-01d8ebb0d0a5bb093 Launch Stack
    ap-northeast-2 HVM (amd64) ami-035a5d1a7750f8e56 Launch Stack
    HVM (arm64) ami-00372e6d905c98d18 Launch Stack
    ap-south-1 HVM (amd64) ami-08340ed45c5192429 Launch Stack
    HVM (arm64) ami-0552534830a43bba8 Launch Stack
    ap-southeast-1 HVM (amd64) ami-073fda37d9ad4a073 Launch Stack
    HVM (arm64) ami-048479c90f3dabbd2 Launch Stack
    ap-southeast-2 HVM (amd64) ami-092bf242afa1f4ecd Launch Stack
    HVM (arm64) ami-09e7e7f36bad682d4 Launch Stack
    ap-southeast-3 HVM (amd64) ami-014d479ce5d4c6843 Launch Stack
    HVM (arm64) ami-0b506a109d1b3e315 Launch Stack
    ca-central-1 HVM (amd64) ami-00115920d2a4ee24d Launch Stack
    HVM (arm64) ami-0680b3fb28bc38cf4 Launch Stack
    eu-central-1 HVM (amd64) ami-0205ca901cc0f21f5 Launch Stack
    HVM (arm64) ami-052ad136a06d35508 Launch Stack
    eu-north-1 HVM (amd64) ami-038f50aab91418db6 Launch Stack
    HVM (arm64) ami-0b79b1ab4905d5e58 Launch Stack
    eu-south-1 HVM (amd64) ami-074407852403b52b1 Launch Stack
    HVM (arm64) ami-0349457cb359fbf82 Launch Stack
    eu-west-1 HVM (amd64) ami-02ae362cc4e0cbdef Launch Stack
    HVM (arm64) ami-027c69c4d9c7b1a3c Launch Stack
    eu-west-2 HVM (amd64) ami-0f509eb7805b0a9f4 Launch Stack
    HVM (arm64) ami-05d7163155013d290 Launch Stack
    eu-west-3 HVM (amd64) ami-0d73e8ff121b6cb23 Launch Stack
    HVM (arm64) ami-091ece6824feaacaf Launch Stack
    me-south-1 HVM (amd64) ami-08a1049b97ce5069c Launch Stack
    HVM (arm64) ami-060f2319199f6364d Launch Stack
    sa-east-1 HVM (amd64) ami-0fd3ea3e3d843e195 Launch Stack
    HVM (arm64) ami-043720b7e3f96f9bf Launch Stack
    us-east-1 HVM (amd64) ami-087ec29cd0bfb9642 Launch Stack
    HVM (arm64) ami-0d3ef32ca71893809 Launch Stack
    us-east-2 HVM (amd64) ami-0a3de9ea7d030b6c6 Launch Stack
    HVM (arm64) ami-080d43fe3d18c4db0 Launch Stack
    us-west-1 HVM (amd64) ami-002b0764e41f284fb Launch Stack
    HVM (arm64) ami-0ca66e4581a8b91eb Launch Stack
    us-west-2 HVM (amd64) ami-0034ab68730661d4c Launch Stack
    HVM (arm64) ami-01decf8e56cfc4bd3 Launch Stack

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4515.0.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0d60b9920f5cb3f67 Launch Stack
    HVM (arm64) ami-0d17558d879c96e78 Launch Stack
    ap-east-1 HVM (amd64) ami-035b175c3b12f0b74 Launch Stack
    HVM (arm64) ami-0c3e60c163e3c2fff Launch Stack
    ap-northeast-1 HVM (amd64) ami-03d367f234e38fbb3 Launch Stack
    HVM (arm64) ami-04292923c9eba94aa Launch Stack
    ap-northeast-2 HVM (amd64) ami-0052aa0798452a50e Launch Stack
    HVM (arm64) ami-01f02f2313acceb31 Launch Stack
    ap-south-1 HVM (amd64) ami-039eff019250f7a90 Launch Stack
    HVM (arm64) ami-05c55547ae7c8e475 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0313658c43d6eb384 Launch Stack
    HVM (arm64) ami-0f773e3136e4c2347 Launch Stack
    ap-southeast-2 HVM (amd64) ami-04e720d2ea3ec5c74 Launch Stack
    HVM (arm64) ami-0c15c7740749b390b Launch Stack
    ap-southeast-3 HVM (amd64) ami-0b25d2b50c09ea209 Launch Stack
    HVM (arm64) ami-0a86eb2ea3744fab4 Launch Stack
    ca-central-1 HVM (amd64) ami-02dca8d705c3094bd Launch Stack
    HVM (arm64) ami-053304567e8d5f809 Launch Stack
    eu-central-1 HVM (amd64) ami-07eaa5eb115499332 Launch Stack
    HVM (arm64) ami-001e5fc41dde2845f Launch Stack
    eu-north-1 HVM (amd64) ami-0d5a2ca41df31f4c2 Launch Stack
    HVM (arm64) ami-01ff0f5674ab3c5b9 Launch Stack
    eu-south-1 HVM (amd64) ami-00775e8e187361a6b Launch Stack
    HVM (arm64) ami-0b4a94bf626a49296 Launch Stack
    eu-west-1 HVM (amd64) ami-0c034a11a755d8764 Launch Stack
    HVM (arm64) ami-0ba81f9e80813fd92 Launch Stack
    eu-west-2 HVM (amd64) ami-0a81a028665c1c520 Launch Stack
    HVM (arm64) ami-06d5b65db41352657 Launch Stack
    eu-west-3 HVM (amd64) ami-079083fd68d3039c5 Launch Stack
    HVM (arm64) ami-002b036863060b8c0 Launch Stack
    me-south-1 HVM (amd64) ami-07462ceb22da8baf9 Launch Stack
    HVM (arm64) ami-039b5e325e2af6ab0 Launch Stack
    sa-east-1 HVM (amd64) ami-02e35c55f0bf6c328 Launch Stack
    HVM (arm64) ami-06ad1256768559ad4 Launch Stack
    us-east-1 HVM (amd64) ami-016fdde11ee4671f0 Launch Stack
    HVM (arm64) ami-07b10ee907035c8e7 Launch Stack
    us-east-2 HVM (amd64) ami-0154a86ebc5f7fd38 Launch Stack
    HVM (arm64) ami-042978940515ede7b Launch Stack
    us-west-1 HVM (amd64) ami-06d35a3a71d72fcad Launch Stack
    HVM (arm64) ami-00d560bf198fcf933 Launch Stack
    us-west-2 HVM (amd64) ami-066b6904cd65a2416 Launch Stack
    HVM (arm64) ami-089a6f94fae60006b Launch Stack

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.6.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-065d742b53d039f10 Launch Stack
    HVM (arm64) ami-031e6aa017e3d66a4 Launch Stack
    ap-east-1 HVM (amd64) ami-05d861bfa50523be9 Launch Stack
    HVM (arm64) ami-00376960872d79ace Launch Stack
    ap-northeast-1 HVM (amd64) ami-05dd5c8176aae392e Launch Stack
    HVM (arm64) ami-0d187650ed489eb63 Launch Stack
    ap-northeast-2 HVM (amd64) ami-082997538fee72535 Launch Stack
    HVM (arm64) ami-03cc0c6cbfd15b96b Launch Stack
    ap-south-1 HVM (amd64) ami-05a8e27ad68c7c095 Launch Stack
    HVM (arm64) ami-0b2d1b5a81d288101 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0bbc11922d35e88f7 Launch Stack
    HVM (arm64) ami-019dbbc6398ee063e Launch Stack
    ap-southeast-2 HVM (amd64) ami-0453f031a5311e96c Launch Stack
    HVM (arm64) ami-09d8d953473bdd4bb Launch Stack
    ap-southeast-3 HVM (amd64) ami-06a63dc511c9781f3 Launch Stack
    HVM (arm64) ami-074bb47a98f1747b4 Launch Stack
    ca-central-1 HVM (amd64) ami-080a9e8c39c377a17 Launch Stack
    HVM (arm64) ami-05895f696017a8301 Launch Stack
    eu-central-1 HVM (amd64) ami-0099e069036c934fa Launch Stack
    HVM (arm64) ami-0c6adc94939c2f348 Launch Stack
    eu-north-1 HVM (amd64) ami-0eb12fd4cf77da266 Launch Stack
    HVM (arm64) ami-00c4b52eb4c77f737 Launch Stack
    eu-south-1 HVM (amd64) ami-06548dff7a06688c4 Launch Stack
    HVM (arm64) ami-00c72fd113bab908e Launch Stack
    eu-west-1 HVM (amd64) ami-01b7787bc0f8621e5 Launch Stack
    HVM (arm64) ami-03448c137612fac2a Launch Stack
    eu-west-2 HVM (amd64) ami-0061694a1f70ac69b Launch Stack
    HVM (arm64) ami-0e6da03e8bfc266bd Launch Stack
    eu-west-3 HVM (amd64) ami-028ac53f4abd50a0a Launch Stack
    HVM (arm64) ami-08ff956abf5f1b861 Launch Stack
    me-south-1 HVM (amd64) ami-0597951317c148292 Launch Stack
    HVM (arm64) ami-09584968f1259e17c Launch Stack
    sa-east-1 HVM (amd64) ami-0e79099b46011b2a7 Launch Stack
    HVM (arm64) ami-0a3e84660861b4e0f Launch Stack
    us-east-1 HVM (amd64) ami-08f4bc25055494068 Launch Stack
    HVM (arm64) ami-086c5cca4129f4102 Launch Stack
    us-east-2 HVM (amd64) ami-0da2ef08fd5010737 Launch Stack
    HVM (arm64) ami-02da50159337b6b16 Launch Stack
    us-west-1 HVM (amd64) ami-08befc8df1e62f5a9 Launch Stack
    HVM (arm64) ami-08292a8b7fd99dd25 Launch Stack
    us-west-2 HVM (amd64) ami-033de58d5bfead60e Launch Stack
    HVM (arm64) ami-008bca8970ab8471d Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    docker run --rm -i quay.io/coreos/butane:latest < cl.yaml > ignition.json
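    Before passing the result to EC2, a quick sanity check can be worthwhile, since EC2 itself does not validate user data and a malformed config only surfaces at first boot. A sketch (the instance type, key name, and security group ID are placeholders):

```shell
# Check that the transpiled config is well-formed JSON (skipped if the
# butane step has not been run yet).
if [ -f ignition.json ]; then
  python3 -m json.tool ignition.json > /dev/null && echo "ignition.json parses as JSON"
fi

# Launch an instance with it (requires AWS credentials; placeholder values):
# aws ec2 run-instances --image-id ami-058acfe29618e62c3 \
#   --instance-type t3.medium --key-name mykey \
#   --security-group-ids sg-0123456789abcdef0 \
#   --user-data file://ignition.json
```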
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
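    Note that on Nitro-based instance types, EBS volumes and instance storage appear as NVMe devices rather than /dev/xvdb. A variant of the filesystems section above for such a device might look like this (the device name /dev/nvme1n1 is an assumption; verify with lsblk on your instance type). The mount unit stays the same, since it refers to the filesystem label:

```yaml
variant: flatcar
version: 1.0.0
storage:
  filesystems:
    - device: /dev/nvme1n1   # assumption: first non-root NVMe device; check lsblk
      format: ext4
      wipe_filesystem: true
      label: ephemeral
```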

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console or add keys/passwords via your Butane Config in order to log in.
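    If you prefer baking keys into the config over using the console, a minimal Butane snippet looks like this (the key string is a placeholder):

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... [email protected]
```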

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-016fdde11ee4671f0 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001 and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
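    The console steps above can also be scripted with the AWS CLI. This sketch only echoes the commands; drop the echo (and have credentials configured) to actually apply them. In a non-default VPC, use --group-id and --source-group with IDs instead of names:

```shell
SG_NAME="flatcar-testing"
echo aws ec2 create-security-group \
  --group-name "${SG_NAME}" --description "Flatcar Container Linux instances"
# SSH from anywhere:
echo aws ec2 authorize-security-group-ingress --group-name "${SG_NAME}" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
# etcd ports, open only to members of the same security group:
for port in 2379 2380 4001 7001; do
  echo aws ec2 authorize-security-group-ingress --group-name "${SG_NAME}" \
    --protocol tcp --port "${port}" --source-group "${SG_NAME}"
done
```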

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-016fdde11ee4671f0 (amd64), Beta ami-087ec29cd0bfb9642 (amd64), or Stable ami-058acfe29618e62c3 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config in the EC2 dashboard into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
      • Create Key Pair: Choose a key of your choice; it will be added in addition to the one in the gist, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
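    In shell terms, the upload preparation might look like this. The bucket name is a placeholder, and the import-snapshot call requires the vmimport IAM service role to be set up as described in Amazon’s documentation:

```shell
# Decompress first; VM Import/Export expects the raw VMDK (the guard keeps
# this from failing if the download step was skipped).
if [ -f flatcar_production_ami_vmdk_image.vmdk.bz2 ]; then
  bunzip2 -k flatcar_production_ami_vmdk_image.vmdk.bz2
fi

# Upload and import (requires credentials and the vmimport service role):
# aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-import-bucket/
# aws ec2 import-snapshot --description "Flatcar" --disk-container \
#   "Format=VMDK,UserBucket={S3Bucket=my-import-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"
```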

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.