Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are un-published in regular garbage-collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible once the AMI has been un-published.
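
    Whether a release is still published can be checked with the AWS CLI before relying on it, for example in an autoscaling group. A minimal sketch; the AMI ID is a placeholder:

    # Prints the image description while the AMI exists and fails with an
    # error once it has been un-published.
    aws ec2 describe-images --region us-east-1 --image-ids ami-0123456789abcdef0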

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4230.2.0.
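
    Instead of copying an AMI ID from the table below, the latest Stable AMI can also be queried with the AWS CLI. This sketch uses the same owner and name filters as the Terraform data source shown later in this document:

    aws ec2 describe-images \
      --region us-east-1 \
      --owners aws-marketplace \
      --filters 'Name=name,Values=Flatcar-stable-*' 'Name=architecture,Values=x86_64' \
      --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
      --output text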

    View as json feed: amd64 arm64

    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-0a219d2f974a75fbc
    af-south-1      HVM (arm64)  ami-01f53839c882b6b5a
    ap-east-1       HVM (amd64)  ami-060123ffc3f968840
    ap-east-1       HVM (arm64)  ami-025d8e1ad2f487345
    ap-northeast-1  HVM (amd64)  ami-0fc2faf80fc8b0b2c
    ap-northeast-1  HVM (arm64)  ami-05f599bcc8d6f7499
    ap-northeast-2  HVM (amd64)  ami-013007818b4c7aa6e
    ap-northeast-2  HVM (arm64)  ami-0e3b97922556230e6
    ap-south-1      HVM (amd64)  ami-0bb07d2e8ad57e1ab
    ap-south-1      HVM (arm64)  ami-075b41f75899d26b8
    ap-southeast-1  HVM (amd64)  ami-09d22a2d1662974f8
    ap-southeast-1  HVM (arm64)  ami-0e2388f283895ec7e
    ap-southeast-2  HVM (amd64)  ami-0d48e6dad372623c8
    ap-southeast-2  HVM (arm64)  ami-0b3b7af3b506d5513
    ap-southeast-3  HVM (amd64)  ami-072974c5a90b5d2d8
    ap-southeast-3  HVM (arm64)  ami-0396ac1a750da67e7
    ca-central-1    HVM (amd64)  ami-076657362fef77847
    ca-central-1    HVM (arm64)  ami-0c1f4103d99c45e02
    eu-central-1    HVM (amd64)  ami-092c5042de54bf9a7
    eu-central-1    HVM (arm64)  ami-00c7d5ebf4ba88304
    eu-north-1      HVM (amd64)  ami-06f17cfbdb861d165
    eu-north-1      HVM (arm64)  ami-0a1185035b6cdb1f7
    eu-south-1      HVM (amd64)  ami-0bd8f6d8328cced92
    eu-south-1      HVM (arm64)  ami-0bb117d86f9c1f1a6
    eu-west-1       HVM (amd64)  ami-08efa59562815e8c7
    eu-west-1       HVM (arm64)  ami-0acb857c62a74205e
    eu-west-2       HVM (amd64)  ami-0592925b77cd2b146
    eu-west-2       HVM (arm64)  ami-04ec1dbdd115d1036
    eu-west-3       HVM (amd64)  ami-098cee8d41348d951
    eu-west-3       HVM (arm64)  ami-05c3a10ae13a1fccb
    me-south-1      HVM (amd64)  ami-03c35a0cbdc141370
    me-south-1      HVM (arm64)  ami-07498b854cc23ad2f
    sa-east-1       HVM (amd64)  ami-08aeef467cd302dbe
    sa-east-1       HVM (arm64)  ami-0ca5c125cdbe93d5e
    us-east-1       HVM (amd64)  ami-0a43664ff679ea819
    us-east-1       HVM (arm64)  ami-05dfe080d7bd8dbb2
    us-east-2       HVM (amd64)  ami-09019c945877ebe18
    us-east-2       HVM (arm64)  ami-0b9f6b6c780767ab7
    us-west-1       HVM (amd64)  ami-008f4acd8ea9b92f5
    us-west-1       HVM (arm64)  ami-0a926e30cbc4144f3
    us-west-2       HVM (amd64)  ami-07eb802d90a7aeb59
    us-west-2       HVM (arm64)  ami-08beabe96c8832ead

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4344.1.0.

    View as json feed: amd64 arm64

    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-0a759524259ab9f0f
    af-south-1      HVM (arm64)  ami-0868e4682817e43c5
    ap-east-1       HVM (amd64)  ami-0c61fc4a2d3f56d73
    ap-east-1       HVM (arm64)  ami-0870e6b0a9c827365
    ap-northeast-1  HVM (amd64)  ami-01000e99b3a0f5b53
    ap-northeast-1  HVM (arm64)  ami-025d847654b0a32e4
    ap-northeast-2  HVM (amd64)  ami-032074a6a99ed25bc
    ap-northeast-2  HVM (arm64)  ami-03caeacf3d087b6ed
    ap-south-1      HVM (amd64)  ami-026b5f392f911ff14
    ap-south-1      HVM (arm64)  ami-0b50d54df87750b80
    ap-southeast-1  HVM (amd64)  ami-0bffeb8b3548af281
    ap-southeast-1  HVM (arm64)  ami-01a111f3b01888e19
    ap-southeast-2  HVM (amd64)  ami-05303d5855aab9c11
    ap-southeast-2  HVM (arm64)  ami-07340ffaddb5d3f3e
    ap-southeast-3  HVM (amd64)  ami-024f505da9954ea47
    ap-southeast-3  HVM (arm64)  ami-058c970076177db8e
    ca-central-1    HVM (amd64)  ami-0cdca0946e2d27537
    ca-central-1    HVM (arm64)  ami-02850f90cd03a6188
    eu-central-1    HVM (amd64)  ami-0de8a8a64fee3aaa5
    eu-central-1    HVM (arm64)  ami-05d9e0a5300383968
    eu-north-1      HVM (amd64)  ami-09c285c9cca8cb620
    eu-north-1      HVM (arm64)  ami-012af9bb4d39e0fe9
    eu-south-1      HVM (amd64)  ami-0807caf12feec04b7
    eu-south-1      HVM (arm64)  ami-08d3bfb5192d2aeaf
    eu-west-1       HVM (amd64)  ami-0450a60e0c4ff174c
    eu-west-1       HVM (arm64)  ami-005ce3f721e46df81
    eu-west-2       HVM (amd64)  ami-0ca5c938cc4cb4949
    eu-west-2       HVM (arm64)  ami-04d706f436390a934
    eu-west-3       HVM (amd64)  ami-08cab3863dedc86b9
    eu-west-3       HVM (arm64)  ami-00dcfa05e9f9bba77
    me-south-1      HVM (amd64)  ami-05b1844cf14a33ade
    me-south-1      HVM (arm64)  ami-0dd5bc83718d544aa
    sa-east-1       HVM (amd64)  ami-0415c8e6a589d44c7
    sa-east-1       HVM (arm64)  ami-03767df68536ee207
    us-east-1       HVM (amd64)  ami-0962efef55313a8df
    us-east-1       HVM (arm64)  ami-0f47603b9ac6e6cdb
    us-east-2       HVM (amd64)  ami-0d882d46c58fd2dcb
    us-east-2       HVM (arm64)  ami-0daafdeb3ae09fe42
    us-west-1       HVM (amd64)  ami-03e142971c59770c5
    us-west-1       HVM (arm64)  ami-09bca60f057fa65a9
    us-west-2       HVM (amd64)  ami-0b07ab28aee5fbc3b
    us-west-2       HVM (arm64)  ami-025c00869e5707178

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4372.0.0.

    View as json feed: amd64 arm64

    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-084a19651ef8a984d
    af-south-1      HVM (arm64)  ami-0ba5dcab6e4bd0d49
    ap-east-1       HVM (amd64)  ami-069b6840144b4b329
    ap-east-1       HVM (arm64)  ami-06e3f15287ec790bf
    ap-northeast-1  HVM (amd64)  ami-02870f937696c256d
    ap-northeast-1  HVM (arm64)  ami-0adf0171d9ee71ff7
    ap-northeast-2  HVM (amd64)  ami-0a6b63513435222f1
    ap-northeast-2  HVM (arm64)  ami-09777b648a57604c3
    ap-south-1      HVM (amd64)  ami-0ad6efa8f17f79dd0
    ap-south-1      HVM (arm64)  ami-0dc13f120aa1616cd
    ap-southeast-1  HVM (amd64)  ami-0bf223b1c13836f85
    ap-southeast-1  HVM (arm64)  ami-0d2d9de28ba7267a7
    ap-southeast-2  HVM (amd64)  ami-0a80cb442e5ba1cc9
    ap-southeast-2  HVM (arm64)  ami-067fcccbde8711321
    ap-southeast-3  HVM (amd64)  ami-0fbf4d9cce82ba543
    ap-southeast-3  HVM (arm64)  ami-06858c2a533ba56e0
    ca-central-1    HVM (amd64)  ami-002f377056fb1e701
    ca-central-1    HVM (arm64)  ami-05886180ff050b337
    eu-central-1    HVM (amd64)  ami-055ddde6136de64b3
    eu-central-1    HVM (arm64)  ami-0a66e74e1ce3bd824
    eu-north-1      HVM (amd64)  ami-0c0e32fd237a96267
    eu-north-1      HVM (arm64)  ami-0b37540db39f00660
    eu-south-1      HVM (amd64)  ami-000d48448ac3b39ab
    eu-south-1      HVM (arm64)  ami-0b824ae00ceebd14f
    eu-west-1       HVM (amd64)  ami-0a06f511e9a7d75af
    eu-west-1       HVM (arm64)  ami-02639f86f13b09f87
    eu-west-2       HVM (amd64)  ami-0b107d59c2b13e82d
    eu-west-2       HVM (arm64)  ami-0dac682ee744be5db
    eu-west-3       HVM (amd64)  ami-0207a9ef79e0ba5cc
    eu-west-3       HVM (arm64)  ami-03fe76a456a1df93b
    me-south-1      HVM (amd64)  ami-0062f6e84eb7e3d05
    me-south-1      HVM (arm64)  ami-0f60c602a8ad2b660
    sa-east-1       HVM (amd64)  ami-0044fb98be0874a90
    sa-east-1       HVM (arm64)  ami-051357c0cd955eb09
    us-east-1       HVM (amd64)  ami-08a89d94a096320f6
    us-east-1       HVM (arm64)  ami-05a8cf42941a8119d
    us-east-2       HVM (amd64)  ami-0dd7aebaae47a8810
    us-east-2       HVM (arm64)  ami-0c8d11d5fa2caf357
    us-west-1       HVM (amd64)  ami-0ff13b7ada8ed39f5
    us-west-1       HVM (arm64)  ami-088aadaf4e44b59e6
    us-west-2       HVM (amd64)  ami-0a7ba1706220055e2
    us-west-2       HVM (arm64)  ami-00affb555e6128ac0

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.3.

    View as json feed: amd64 arm64

    EC2 Region      AMI Type     AMI ID
    af-south-1      HVM (amd64)  ami-0c5c47cecb77306ad
    af-south-1      HVM (arm64)  ami-0d392f9f870f0b6d1
    ap-east-1       HVM (amd64)  ami-0b15c22f0c941d30b
    ap-east-1       HVM (arm64)  ami-06cb7fdc6d7fe949b
    ap-northeast-1  HVM (amd64)  ami-0d93e886973c58115
    ap-northeast-1  HVM (arm64)  ami-064864e4ba3dd3a54
    ap-northeast-2  HVM (amd64)  ami-0da5d1af5b2650e2a
    ap-northeast-2  HVM (arm64)  ami-061b573af5265e3cf
    ap-south-1      HVM (amd64)  ami-011795aaeebcc5c5c
    ap-south-1      HVM (arm64)  ami-0183fe5370b5f5376
    ap-southeast-1  HVM (amd64)  ami-00d1fdad02712a7f2
    ap-southeast-1  HVM (arm64)  ami-00b69c4f104ff5503
    ap-southeast-2  HVM (amd64)  ami-0cb0c556a88e29e93
    ap-southeast-2  HVM (arm64)  ami-08b5c6376f4ad0fa2
    ap-southeast-3  HVM (amd64)  ami-04347b5ce7f24e280
    ap-southeast-3  HVM (arm64)  ami-0ed35c2d9d07c0f93
    ca-central-1    HVM (amd64)  ami-068405e920283170c
    ca-central-1    HVM (arm64)  ami-014e83d3b59187b3a
    eu-central-1    HVM (amd64)  ami-061bbfbf14d56fd71
    eu-central-1    HVM (arm64)  ami-0b0a1b9f993a33619
    eu-north-1      HVM (amd64)  ami-0e2e6a382ff66d814
    eu-north-1      HVM (arm64)  ami-0f13b1a039a1a38d4
    eu-south-1      HVM (amd64)  ami-0d8d53b669aa2944d
    eu-south-1      HVM (arm64)  ami-08dfb14e86724a7bf
    eu-west-1       HVM (amd64)  ami-01f6128357e4747db
    eu-west-1       HVM (arm64)  ami-037ddad13433c6bb5
    eu-west-2       HVM (amd64)  ami-05ba6b5a5c1b25fba
    eu-west-2       HVM (arm64)  ami-0699d3428218e3530
    eu-west-3       HVM (amd64)  ami-08887d811ea767909
    eu-west-3       HVM (arm64)  ami-05f4ab1ed189af84c
    me-south-1      HVM (amd64)  ami-074b1aa5e1001c12c
    me-south-1      HVM (arm64)  ami-0d8aa5b2a30915dab
    sa-east-1       HVM (amd64)  ami-07d479214ea906c73
    sa-east-1       HVM (arm64)  ami-09069e60e3420f801
    us-east-1       HVM (amd64)  ami-0b908179fbee50596
    us-east-1       HVM (arm64)  ami-0a80fa2bedf599885
    us-east-2       HVM (amd64)  ami-0e9813667de033308
    us-east-2       HVM (arm64)  ami-0a4e9e3c1b91a2c04
    us-west-1       HVM (amd64)  ami-0b85281c3c84f4b4d
    us-west-1       HVM (arm64)  ami-006a736c36e8e5c90
    us-west-2       HVM (amd64)  ami-02ff2d1e98c314105
    us-west-2       HVM (arm64)  ami-0d2e0bb87ff1045f7

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.
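
    For example, with the AWS CLI a sketch could look like this; the AMI ID is the Stable amd64 image for us-east-1 from the table above, the key pair and security group are hypothetical, and ignition.json is a transpiled config like the one produced below:

    aws ec2 run-instances \
      --region us-east-1 \
      --image-id ami-0a43664ff679ea819 \
      --instance-type t3.medium \
      --key-name my-key \
      --security-group-ids sg-0123456789abcdef0 \
      --user-data file://ignition.json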

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
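
    Note that on newer NVMe-based instance types the ephemeral disk shows up as an NVMe device such as /dev/nvme1n1 rather than /dev/xvdb (the exact name varies). When in doubt, list the block devices on the booted instance and adjust the device path in the config accordingly:

    lsblk -o NAME,LABEL,SIZE,TYPE,MOUNTPOINT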

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys or passwords via your Butane Config, in order to log in.
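
    Besides the console, a public key can be registered ahead of time through the EC2 API; a sketch with a hypothetical key name and key path:

    aws ec2 import-key-pair \
      --key-name my-flatcar-key \
      --public-key-material fileb://~/.ssh/id_ed25519.pub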

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-08a89d94a096320f6 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step console instructions are below; an equivalent AWS CLI sketch follows the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
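
    The same security group can be created with the AWS CLI instead of the console. A sketch with placeholder IDs: the VPC ID is hypothetical, and the sg-... value stands for the group ID returned by create-security-group (in a non-default VPC the group has to be referenced by ID rather than by name):

    aws ec2 create-security-group \
      --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --vpc-id vpc-0123456789abcdef0
    # SSH from anywhere:
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
    # etcd ports, allowed only from members of the group itself:
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 --protocol tcp --port "$port" \
        --source-group sg-0123456789abcdef0
    done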

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-08a89d94a096320f6 (amd64), Beta ami-0962efef55313a8df (amd64), or Stable ami-0a43664ff679ea819 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys in your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file can be found at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.
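
    The verification below requires the Flatcar image signing key in your local keyring. One way to fetch it is from a public keyserver; this is a sketch and assumes the key is published there (the fingerprint matches the sample output below; see the Flatcar security documentation for the canonical download location):

    gpg --keyserver hkps://keyserver.ubuntu.com \
        --recv-keys A621F1DA96C93C639506832D603443A1D0FC498C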

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
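
    A sketch of those steps with the AWS CLI; the bucket and key names are hypothetical, and VM Import/Export additionally requires the vmimport service role described in the linked instructions:

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-import-bucket/flatcar.vmdk
    aws ec2 import-snapshot \
      --description "Flatcar Container Linux" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-import-bucket,S3Key=flatcar.vmdk}"
    # Poll until the task completes and note the resulting snapshot ID:
    aws ec2 describe-import-snapshot-tasks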

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and create an AMI from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
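
    The same can be done with the CLI; a sketch, using a placeholder snapshot ID from the import task above:

    aws ec2 register-image \
      --name flatcar-custom \
      --architecture x86_64 \
      --virtualization-type hvm \
      --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}" \
      --ena-support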

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a cl/machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution work when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).
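
    The addresses can be printed again at any time:

    terraform output ip-addresses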

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples .