Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux Matrix channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are un-published in regular garbage-collection sweeps. Note that this does not affect existing AWS instances that use those releases; however, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) is no longer possible once the AMI has been un-published.

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.
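
    If you do decide to opt out of automatic updates, the usual approach is to mask the update services in your Butane Config. A minimal sketch (masking update-engine stops updates from being downloaded and applied; masking locksmithd stops reboot coordination):

```yaml
variant: flatcar
version: 1.0.0
systemd:
  units:
    # Masking these units disables automatic updates and reboots.
    - name: update-engine.service
      mask: true
    - name: locksmithd.service
      mask: true
```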

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4230.2.2.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0d9c17839f87757f4 Launch Stack
    HVM (arm64) ami-02993a63851053b43 Launch Stack
    ap-east-1 HVM (amd64) ami-0dc25ab897a5163c8 Launch Stack
    HVM (arm64) ami-0f7522016586ca169 Launch Stack
    ap-northeast-1 HVM (amd64) ami-00701a770eaf2bbe2 Launch Stack
    HVM (arm64) ami-0ea17584f84a41d86 Launch Stack
    ap-northeast-2 HVM (amd64) ami-03becb259efa59756 Launch Stack
    HVM (arm64) ami-00cc228fb45215dde Launch Stack
    ap-south-1 HVM (amd64) ami-0151c697cae32d78f Launch Stack
    HVM (arm64) ami-093714f0158d52748 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0943a1982053e3a06 Launch Stack
    HVM (arm64) ami-0535f0c309e8baf78 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0175d4fc65d7a0dd2 Launch Stack
    HVM (arm64) ami-05cf7d9953147c083 Launch Stack
    ap-southeast-3 HVM (amd64) ami-02870b6bfb00d0a72 Launch Stack
    HVM (arm64) ami-03c95e66235257708 Launch Stack
    ca-central-1 HVM (amd64) ami-0b6aea1035ee8166d Launch Stack
    HVM (arm64) ami-0208727396846cef6 Launch Stack
    eu-central-1 HVM (amd64) ami-0c1bd1e9429acf481 Launch Stack
    HVM (arm64) ami-00efd191f9c7df810 Launch Stack
    eu-north-1 HVM (amd64) ami-0aa6003f7158789b2 Launch Stack
    HVM (arm64) ami-0eae66f6a4eff3ff8 Launch Stack
    eu-south-1 HVM (amd64) ami-052183f80d55ef678 Launch Stack
    HVM (arm64) ami-0fc9503422861c079 Launch Stack
    eu-west-1 HVM (amd64) ami-0e8f5e8c7c366bf53 Launch Stack
    HVM (arm64) ami-0357d1d2ec64947eb Launch Stack
    eu-west-2 HVM (amd64) ami-040c73a3b6f08b595 Launch Stack
    HVM (arm64) ami-08ee56b5d436b66cd Launch Stack
    eu-west-3 HVM (amd64) ami-0202548f5c9b1d2b5 Launch Stack
    HVM (arm64) ami-07ab0f1f841410224 Launch Stack
    me-south-1 HVM (amd64) ami-0201afb32684a0636 Launch Stack
    HVM (arm64) ami-07a2c1faa37f3b0bd Launch Stack
    sa-east-1 HVM (amd64) ami-04fc179f9fbf027d4 Launch Stack
    HVM (arm64) ami-0dcd0d96baa6848bb Launch Stack
    us-east-1 HVM (amd64) ami-00de77c9f54680b5d Launch Stack
    HVM (arm64) ami-069f743ddbe2c46a6 Launch Stack
    us-east-2 HVM (amd64) ami-0ad1d776cccbdcd26 Launch Stack
    HVM (arm64) ami-06fec11e4eb0be7a1 Launch Stack
    us-west-1 HVM (amd64) ami-05607dad55cee7643 Launch Stack
    HVM (arm64) ami-0373ac222ade8390b Launch Stack
    us-west-2 HVM (amd64) ami-0498f04434b71979d Launch Stack
    HVM (arm64) ami-072400116da2301e1 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4372.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-04e197335fea39b90 Launch Stack
    HVM (arm64) ami-0f853e8d5f51e4ce2 Launch Stack
    ap-east-1 HVM (amd64) ami-0e89a63d650fe97ac Launch Stack
    HVM (arm64) ami-0d60c2f1771db6ab4 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0ab9d70931bc1e9e2 Launch Stack
    HVM (arm64) ami-0938e4df16218370c Launch Stack
    ap-northeast-2 HVM (amd64) ami-09e34d7b5b5ac3a93 Launch Stack
    HVM (arm64) ami-0505d4b160eb9be8d Launch Stack
    ap-south-1 HVM (amd64) ami-0ce51f4ef59c0fd2c Launch Stack
    HVM (arm64) ami-00ad2c49bcb4db6a0 Launch Stack
    ap-southeast-1 HVM (amd64) ami-03758669bc34a0dc0 Launch Stack
    HVM (arm64) ami-02825238d62777259 Launch Stack
    ap-southeast-2 HVM (amd64) ami-01260f9e92166acfa Launch Stack
    HVM (arm64) ami-0159de668d09aea64 Launch Stack
    ap-southeast-3 HVM (amd64) ami-024b3029d51fe6140 Launch Stack
    HVM (arm64) ami-03680ab2ac73e8215 Launch Stack
    ca-central-1 HVM (amd64) ami-0c900d0b11781a87d Launch Stack
    HVM (arm64) ami-0d77493d89abf625f Launch Stack
    eu-central-1 HVM (amd64) ami-09985c9a0db87989a Launch Stack
    HVM (arm64) ami-017cdad7854bbde63 Launch Stack
    eu-north-1 HVM (amd64) ami-0e2b222053b0b2376 Launch Stack
    HVM (arm64) ami-025a980730bd75ac3 Launch Stack
    eu-south-1 HVM (amd64) ami-0bc26f77a1af06d09 Launch Stack
    HVM (arm64) ami-0ed21d2dce8082f31 Launch Stack
    eu-west-1 HVM (amd64) ami-097925379bef36e60 Launch Stack
    HVM (arm64) ami-059d8c26b1d81fa83 Launch Stack
    eu-west-2 HVM (amd64) ami-0ce732a9fd06703f2 Launch Stack
    HVM (arm64) ami-0e25079813e535e26 Launch Stack
    eu-west-3 HVM (amd64) ami-0cc792b88865d0ec1 Launch Stack
    HVM (arm64) ami-06ce4404f40580ffd Launch Stack
    me-south-1 HVM (amd64) ami-0ac74b51323e2b200 Launch Stack
    HVM (arm64) ami-0ee4355478d34ce7b Launch Stack
    sa-east-1 HVM (amd64) ami-00cab579989d710cf Launch Stack
    HVM (arm64) ami-0f9ef252133cc5c8c Launch Stack
    us-east-1 HVM (amd64) ami-087c172e891c43db8 Launch Stack
    HVM (arm64) ami-082731f28222474c1 Launch Stack
    us-east-2 HVM (amd64) ami-00541ae3f0c46fea1 Launch Stack
    HVM (arm64) ami-033593bb629ff0c2c Launch Stack
    us-west-1 HVM (amd64) ami-0b674ac4517447643 Launch Stack
    HVM (arm64) ami-07c1dece918070492 Launch Stack
    us-west-2 HVM (amd64) ami-043ebc15fd911bf4e Launch Stack
    HVM (arm64) ami-0cc50c86723cfa7a3 Launch Stack

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4426.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0ebb18437e7646cac Launch Stack
    HVM (arm64) ami-06f3834cedbb705fd Launch Stack
    ap-east-1 HVM (amd64) ami-082f7f02d480f6478 Launch Stack
    HVM (arm64) ami-0db6b59f63ee93992 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0bc072dcf44638c5d Launch Stack
    HVM (arm64) ami-02c2a56e7637ac99e Launch Stack
    ap-northeast-2 HVM (amd64) ami-0d8ea7c386837d24e Launch Stack
    HVM (arm64) ami-06eeba482bfa9869d Launch Stack
    ap-south-1 HVM (amd64) ami-02db744ecb0e36c69 Launch Stack
    HVM (arm64) ami-06c4d066724d54456 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0679e66ac3f1a534d Launch Stack
    HVM (arm64) ami-057cf2d223cc95020 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0904049910d9ecc8f Launch Stack
    HVM (arm64) ami-0ec29f4a1c2eb1525 Launch Stack
    ap-southeast-3 HVM (amd64) ami-022370cf76482d19d Launch Stack
    HVM (arm64) ami-0479bf0114665e516 Launch Stack
    ca-central-1 HVM (amd64) ami-0493ced24564b419a Launch Stack
    HVM (arm64) ami-05a5666946f94e7b1 Launch Stack
    eu-central-1 HVM (amd64) ami-0b82ec3d28c0f5b52 Launch Stack
    HVM (arm64) ami-0ee086169e8640168 Launch Stack
    eu-north-1 HVM (amd64) ami-07ded7c1453d1a432 Launch Stack
    HVM (arm64) ami-0327e7729548162df Launch Stack
    eu-south-1 HVM (amd64) ami-0e59e9e8bbd4d1da6 Launch Stack
    HVM (arm64) ami-055d80b2b250b1ebb Launch Stack
    eu-west-1 HVM (amd64) ami-0271f691728c52f0f Launch Stack
    HVM (arm64) ami-02d7e5f2575e706bf Launch Stack
    eu-west-2 HVM (amd64) ami-07630d4318fe9e0a0 Launch Stack
    HVM (arm64) ami-08165f97c32a25ed2 Launch Stack
    eu-west-3 HVM (amd64) ami-0893fbebee091e2cc Launch Stack
    HVM (arm64) ami-0a6dfa9c65579e51f Launch Stack
    me-south-1 HVM (amd64) ami-0f3a924f0deec0669 Launch Stack
    HVM (arm64) ami-0a5e4c87e81e9ecbc Launch Stack
    sa-east-1 HVM (amd64) ami-061b283c806d2aa08 Launch Stack
    HVM (arm64) ami-099ed320bb360f82b Launch Stack
    us-east-1 HVM (amd64) ami-0b2db74e31c4e8bc2 Launch Stack
    HVM (arm64) ami-026cab929abe26f0c Launch Stack
    us-east-2 HVM (amd64) ami-0e229ac58efd73c17 Launch Stack
    HVM (arm64) ami-0740b5698ac7eae0e Launch Stack
    us-west-1 HVM (amd64) ami-01daf83e4f205d902 Launch Stack
    HVM (arm64) ami-037a52592c61943f8 Launch Stack
    us-west-2 HVM (amd64) ami-0a51c7141c687ac6e Launch Stack
    HVM (arm64) ami-0c49cdfc959719357 Launch Stack

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.5.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0320583dde153859c Launch Stack
    HVM (arm64) ami-0e01ca7fe1e17f1c5 Launch Stack
    ap-east-1 HVM (amd64) ami-0f771f2f6dee766c8 Launch Stack
    HVM (arm64) ami-0eac004d2ae78232a Launch Stack
    ap-northeast-1 HVM (amd64) ami-025da74423c17aa65 Launch Stack
    HVM (arm64) ami-02819cb94a1252b56 Launch Stack
    ap-northeast-2 HVM (amd64) ami-092a49d9426a8860e Launch Stack
    HVM (arm64) ami-0920402700a8a0ce0 Launch Stack
    ap-south-1 HVM (amd64) ami-09e1bca403ee06377 Launch Stack
    HVM (arm64) ami-0946dadf342986e5c Launch Stack
    ap-southeast-1 HVM (amd64) ami-0a95c7bb70e048b77 Launch Stack
    HVM (arm64) ami-064906d297caa76a1 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0c2b78d67d5c1388c Launch Stack
    HVM (arm64) ami-02f5964ddc6a249b9 Launch Stack
    ap-southeast-3 HVM (amd64) ami-004e1f7cf2e86e2c6 Launch Stack
    HVM (arm64) ami-0d5d75785d55347ab Launch Stack
    ca-central-1 HVM (amd64) ami-0834126a9cadf8f9e Launch Stack
    HVM (arm64) ami-0fcdf2636697d1261 Launch Stack
    eu-central-1 HVM (amd64) ami-0e57769b515de9331 Launch Stack
    HVM (arm64) ami-09be8dd7dcab56642 Launch Stack
    eu-north-1 HVM (amd64) ami-06d7132555080b04f Launch Stack
    HVM (arm64) ami-003c808f8893a80f8 Launch Stack
    eu-south-1 HVM (amd64) ami-047157492c0b59e98 Launch Stack
    HVM (arm64) ami-00f73b0a28aeac19d Launch Stack
    eu-west-1 HVM (amd64) ami-06a8ab140ee02639b Launch Stack
    HVM (arm64) ami-04bf7eeffe931ab26 Launch Stack
    eu-west-2 HVM (amd64) ami-06558b20d50bbc406 Launch Stack
    HVM (arm64) ami-0d1a6e3559104267e Launch Stack
    eu-west-3 HVM (amd64) ami-0b1f7b607a5637ac5 Launch Stack
    HVM (arm64) ami-07dc6cc4836731597 Launch Stack
    me-south-1 HVM (amd64) ami-07c140a08eedfc34d Launch Stack
    HVM (arm64) ami-0f163fc1b4a122e06 Launch Stack
    sa-east-1 HVM (amd64) ami-04b6651cbf36d7e2f Launch Stack
    HVM (arm64) ami-0abae27cacf52b8f3 Launch Stack
    us-east-1 HVM (amd64) ami-0e6269d02550496a4 Launch Stack
    HVM (arm64) ami-0631300880a5396fe Launch Stack
    us-east-2 HVM (amd64) ami-0ea925ee062ec4c58 Launch Stack
    HVM (arm64) ami-09bab898c6ca34a29 Launch Stack
    us-west-1 HVM (amd64) ami-0360eb4ad2a4260bb Launch Stack
    HVM (arm64) ami-0daa2cd3671eb57fd Launch Stack
    us-west-2 HVM (amd64) ami-00199d21c1013ef91 Launch Stack
    HVM (arm64) ami-000d6427559302d6b Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
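
    When calling the EC2 API directly, keep in mind that the RunInstances UserData parameter must be base64-encoded (the web console and most SDKs handle the encoding for you). A small sketch, using a throwaway Ignition config as the payload:

```shell
# Write a minimal Ignition config and base64-encode it, as the raw
# EC2 RunInstances API expects for its UserData parameter.
cat > ignition.json <<'EOF'
{"ignition":{"version":"3.3.0"}}
EOF
USER_DATA=$(base64 < ignition.json | tr -d '\n')
echo "$USER_DATA"
```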

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type . Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters, you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0b2db74e31c4e8bc2 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and use the same “User Data” for each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port ranges 2380, 4001, and 7001 as well
    6. Click “Apply Rule Changes”
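
    The console steps above can also be scripted with the AWS CLI. This sketch only prints the commands rather than executing them (it assumes the `--group-name` style flags used with a default VPC; in other VPC setups you would pass group and source-group IDs instead):

```shell
SG=flatcar-testing
# SSH is open to the world; the etcd ports are only open to members of the
# same security group, mirroring the console steps above.
CMDS="aws ec2 create-security-group --group-name $SG --description 'Flatcar Container Linux instances'
aws ec2 authorize-security-group-ingress --group-name $SG --protocol tcp --port 22 --cidr 0.0.0.0/0"
for port in 2379 2380 4001 7001; do
  CMDS="$CMDS
aws ec2 authorize-security-group-ingress --group-name $SG --protocol tcp --port $port --source-group $SG"
done
printf '%s\n' "$CMDS"
```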

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0b2db74e31c4e8bc2 (amd64), Beta ami-087c172e891c43db8 (amd64), or Stable ami-00de77c9f54680b5d (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys provided in your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.
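
    The placeholders in that URL substitute as follows; a small shell sketch (the values shown are examples):

```shell
# Pick a channel (e.g. stable, beta, alpha), an architecture, and a version
# ("current" resolves to the latest release of the channel).
CHANNEL=stable
ARCH=amd64
VERSION=current
IMAGE_URL="https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2"
SIG_URL="${IMAGE_URL}.sig"
echo "$IMAGE_URL"
```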

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and create an AMI from it. To make it work, use /dev/sda2 as the “Root device name”, and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary, since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.