Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux Matrix channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible once the AMI has been unpublished.
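
    If you pin instances to a specific AMI, you can periodically verify that it is still published with the AWS CLI. A minimal sketch (the AMI ID below is a placeholder):

```shell
# Succeeds and prints image details while the AMI is still published;
# fails or returns an empty image list once it has been garbage-collected.
aws ec2 describe-images \
  --region us-east-1 \
  --image-ids ami-0123456789abcdef0
```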

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically, with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4230.2.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-02d1012e4355ed6c7 Launch Stack
    HVM (arm64) ami-0ad5736b9ae989f9d Launch Stack
    ap-east-1 HVM (amd64) ami-076d8eb3397873eb5 Launch Stack
    HVM (arm64) ami-07b3b5541822df0f0 Launch Stack
    ap-northeast-1 HVM (amd64) ami-049d65d2015f5994c Launch Stack
    HVM (arm64) ami-0bd1d6fad539018c6 Launch Stack
    ap-northeast-2 HVM (amd64) ami-09bfacbffe256ea42 Launch Stack
    HVM (arm64) ami-0341af62e794ba226 Launch Stack
    ap-south-1 HVM (amd64) ami-0df95bb76d89f141b Launch Stack
    HVM (arm64) ami-0a93db4111acb89a6 Launch Stack
    ap-southeast-1 HVM (amd64) ami-07a2671f677b77590 Launch Stack
    HVM (arm64) ami-03003fd9e1a465f48 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0eb433341d927d1a6 Launch Stack
    HVM (arm64) ami-0c6f632eeb87d40bf Launch Stack
    ap-southeast-3 HVM (amd64) ami-0a8c141bae36f5027 Launch Stack
    HVM (arm64) ami-0e8a1c0a742484176 Launch Stack
    ca-central-1 HVM (amd64) ami-0805840c67a8fc7ad Launch Stack
    HVM (arm64) ami-01f1efd0abd490eaa Launch Stack
    eu-central-1 HVM (amd64) ami-0f9a3959bda8ccc11 Launch Stack
    HVM (arm64) ami-04b64ba02f791e007 Launch Stack
    eu-north-1 HVM (amd64) ami-0bd6e19b1b6fcebd4 Launch Stack
    HVM (arm64) ami-0e898533334cf91f4 Launch Stack
    eu-south-1 HVM (amd64) ami-0e5f5c4687940cc59 Launch Stack
    HVM (arm64) ami-0a1b56b27e3c369cf Launch Stack
    eu-west-1 HVM (amd64) ami-083bbc6a04bf3e2e0 Launch Stack
    HVM (arm64) ami-084d692c60f4583e4 Launch Stack
    eu-west-2 HVM (amd64) ami-06c2d1da0b30a5eee Launch Stack
    HVM (arm64) ami-0d6feb7c81f0d80e8 Launch Stack
    eu-west-3 HVM (amd64) ami-09e649180d5ef9785 Launch Stack
    HVM (arm64) ami-062a91d88b8cd52d6 Launch Stack
    me-south-1 HVM (amd64) ami-0ef30b0926f836eac Launch Stack
    HVM (arm64) ami-06505b7abdab07621 Launch Stack
    sa-east-1 HVM (amd64) ami-0bc4475b55884d05b Launch Stack
    HVM (arm64) ami-0ac1b5ec4bcd97623 Launch Stack
    us-east-1 HVM (amd64) ami-04e380289235c3c8c Launch Stack
    HVM (arm64) ami-0e897dd40f158099b Launch Stack
    us-east-2 HVM (amd64) ami-0674a762370914b31 Launch Stack
    HVM (arm64) ami-06fe4c75df5f9a17c Launch Stack
    us-west-1 HVM (amd64) ami-0f08256f84104d18f Launch Stack
    HVM (arm64) ami-010b390ac352832f4 Launch Stack
    us-west-2 HVM (amd64) ami-0bdb9545cde16da72 Launch Stack
    HVM (arm64) ami-084747b41f0d5187b Launch Stack

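    The AMI lists above are also published as JSON feeds (linked per channel as “View as json feed”). As a sketch, assuming curl and jq are installed and that the feed keeps its current layout of {"amis": [{"name": "<region>", "hvm": "ami-..."}, ...]}, you can look up the Stable amd64 AMI for one region:

```shell
# Print the current Stable amd64 AMI ID for us-east-1.
curl -fsSL https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_all.json \
  | jq -r '.amis[] | select(.name == "us-east-1") | .hvm'
```
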
    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4344.1.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-02ae56f661dcecc2c Launch Stack
    HVM (arm64) ami-06d2a1c7bda5acf58 Launch Stack
    ap-east-1 HVM (amd64) ami-045d31dd64b9f17ce Launch Stack
    HVM (arm64) ami-03c0da2945f863680 Launch Stack
    ap-northeast-1 HVM (amd64) ami-034c96d247c00bcff Launch Stack
    HVM (arm64) ami-0595c69070c2ee4ad Launch Stack
    ap-northeast-2 HVM (amd64) ami-0ba50ff8b78a0b091 Launch Stack
    HVM (arm64) ami-0803c2df6285921e6 Launch Stack
    ap-south-1 HVM (amd64) ami-0d23cd6db24f12b75 Launch Stack
    HVM (arm64) ami-0b76bdfca560c93d6 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0fc590269c499de09 Launch Stack
    HVM (arm64) ami-08373aad1dcbf2f53 Launch Stack
    ap-southeast-2 HVM (amd64) ami-084d7ae27625ff12f Launch Stack
    HVM (arm64) ami-08879e8050bbeded1 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0bbf8653d5f2568ab Launch Stack
    HVM (arm64) ami-03e8ba21af2f42486 Launch Stack
    ca-central-1 HVM (amd64) ami-00e28af8fc390a0cc Launch Stack
    HVM (arm64) ami-0b9540ca7a7dfbb3a Launch Stack
    eu-central-1 HVM (amd64) ami-0caf06fceddc57af4 Launch Stack
    HVM (arm64) ami-01c90b5643db092b4 Launch Stack
    eu-north-1 HVM (amd64) ami-0dc33ed7e82b406c3 Launch Stack
    HVM (arm64) ami-0d086111918923250 Launch Stack
    eu-south-1 HVM (amd64) ami-06386666d600f25c2 Launch Stack
    HVM (arm64) ami-026db3fdd62cc3aab Launch Stack
    eu-west-1 HVM (amd64) ami-0fb0a05f372de487a Launch Stack
    HVM (arm64) ami-030d07ed316fcff7d Launch Stack
    eu-west-2 HVM (amd64) ami-0330b6e94f4ce1a8b Launch Stack
    HVM (arm64) ami-004ef781aad255e0f Launch Stack
    eu-west-3 HVM (amd64) ami-0b35c703bfc55ded4 Launch Stack
    HVM (arm64) ami-0142d4eac55ae58c4 Launch Stack
    me-south-1 HVM (amd64) ami-0a7f6528b498b2047 Launch Stack
    HVM (arm64) ami-0fb4df3cd5c8add13 Launch Stack
    sa-east-1 HVM (amd64) ami-0859600076ac238e9 Launch Stack
    HVM (arm64) ami-0c11705cd44ca0692 Launch Stack
    us-east-1 HVM (amd64) ami-0d55bd0c1fddd9f38 Launch Stack
    HVM (arm64) ami-0b3edf6b96e03bfaf Launch Stack
    us-east-2 HVM (amd64) ami-0ff6f2e3ca34f4e83 Launch Stack
    HVM (arm64) ami-060e839801cc80547 Launch Stack
    us-west-1 HVM (amd64) ami-089eb15d9bfa73be1 Launch Stack
    HVM (arm64) ami-05f80be9bb9224a2d Launch Stack
    us-west-2 HVM (amd64) ami-08676770c35aadf02 Launch Stack
    HVM (arm64) ami-099d35cd834ac096e Launch Stack

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4372.0.1.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0724020eb5a36837c Launch Stack
    HVM (arm64) ami-0cb65576ee0d3bb16 Launch Stack
    ap-east-1 HVM (amd64) ami-05cb334fcf1c04730 Launch Stack
    HVM (arm64) ami-0ccba3bed9baf20ba Launch Stack
    ap-northeast-1 HVM (amd64) ami-0cc053987a77ab303 Launch Stack
    HVM (arm64) ami-0f8e1c1b57a0daefe Launch Stack
    ap-northeast-2 HVM (amd64) ami-059a91e28c0c596eb Launch Stack
    HVM (arm64) ami-08c12bcf808197915 Launch Stack
    ap-south-1 HVM (amd64) ami-0ce763d98a4324897 Launch Stack
    HVM (arm64) ami-056aa950d04a8a8b4 Launch Stack
    ap-southeast-1 HVM (amd64) ami-051507336e0cc8004 Launch Stack
    HVM (arm64) ami-0bbbed8262dcd4699 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0e8875d933bb17241 Launch Stack
    HVM (arm64) ami-05f369aa83c0c7997 Launch Stack
    ap-southeast-3 HVM (amd64) ami-02a67ed8dd1a54e54 Launch Stack
    HVM (arm64) ami-0c1637c1331cebde8 Launch Stack
    ca-central-1 HVM (amd64) ami-03df85f2152dc907a Launch Stack
    HVM (arm64) ami-0dfccd30164aef406 Launch Stack
    eu-central-1 HVM (amd64) ami-0743fe5b145599da4 Launch Stack
    HVM (arm64) ami-0ec257c4be54b1d99 Launch Stack
    eu-north-1 HVM (amd64) ami-0308f9a4f2f774f3e Launch Stack
    HVM (arm64) ami-09c5d31eb70d74362 Launch Stack
    eu-south-1 HVM (amd64) ami-08515b3f0ee520a66 Launch Stack
    HVM (arm64) ami-0a7fd3651bde508e6 Launch Stack
    eu-west-1 HVM (amd64) ami-0278c1346cbbe5465 Launch Stack
    HVM (arm64) ami-0e9bbf7531f0ac8a6 Launch Stack
    eu-west-2 HVM (amd64) ami-0ed7cee19c9a0a90f Launch Stack
    HVM (arm64) ami-03eae5049f05a9dda Launch Stack
    eu-west-3 HVM (amd64) ami-063d2f497eeb080d9 Launch Stack
    HVM (arm64) ami-03cc1152448d41a18 Launch Stack
    me-south-1 HVM (amd64) ami-0e3dfab51dc585c75 Launch Stack
    HVM (arm64) ami-0fcd0193cfe9f5d97 Launch Stack
    sa-east-1 HVM (amd64) ami-08cb1cef0bb58eb16 Launch Stack
    HVM (arm64) ami-0be54d6d610ad4079 Launch Stack
    us-east-1 HVM (amd64) ami-052a3d2f4c04928ec Launch Stack
    HVM (arm64) ami-05436ca513065cd6c Launch Stack
    us-east-2 HVM (amd64) ami-03494115468bcb4b8 Launch Stack
    HVM (arm64) ami-090a5d968f7b7b87f Launch Stack
    us-west-1 HVM (amd64) ami-0935be5ea08d0b8c4 Launch Stack
    HVM (arm64) ami-0c3c77497450ea497 Launch Stack
    us-west-2 HVM (amd64) ami-0868afa13f7dcd131 Launch Stack
    HVM (arm64) ami-064fd8ba968bfa6c8 Launch Stack

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.4.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0a32b559c26467134 Launch Stack
    HVM (arm64) ami-00290e6cda375a17f Launch Stack
    ap-east-1 HVM (amd64) ami-0b5d981341bdaa749 Launch Stack
    HVM (arm64) ami-0a9bd7642f4ec9ce8 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0e4f14bfb6fbc060e Launch Stack
    HVM (arm64) ami-0ec26af7352956743 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0ba0a4b7f62b92069 Launch Stack
    HVM (arm64) ami-02eaf556313538db1 Launch Stack
    ap-south-1 HVM (amd64) ami-05974e2fbd419bb03 Launch Stack
    HVM (arm64) ami-032ab03b774faab45 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0219f18315a9ee6b7 Launch Stack
    HVM (arm64) ami-03f37f883a6a2fab8 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0a2934966f16423ff Launch Stack
    HVM (arm64) ami-077d3d8e3eb757cbd Launch Stack
    ap-southeast-3 HVM (amd64) ami-029ac13ba4a1d7ff7 Launch Stack
    HVM (arm64) ami-04386dfd50ddf91ad Launch Stack
    ca-central-1 HVM (amd64) ami-08f4856bbde561dec Launch Stack
    HVM (arm64) ami-0a81b6efded4b451a Launch Stack
    eu-central-1 HVM (amd64) ami-03d9c404df5f8bcd3 Launch Stack
    HVM (arm64) ami-0378efeed77c25a71 Launch Stack
    eu-north-1 HVM (amd64) ami-0c06c1efdb7433143 Launch Stack
    HVM (arm64) ami-0b17d6d948f59db4e Launch Stack
    eu-south-1 HVM (amd64) ami-00e92d9d2b96cb57e Launch Stack
    HVM (arm64) ami-06017e3dcdf4c9231 Launch Stack
    eu-west-1 HVM (amd64) ami-080108687739a6641 Launch Stack
    HVM (arm64) ami-0b605c70e94a8fc91 Launch Stack
    eu-west-2 HVM (amd64) ami-02d3978b968feda4c Launch Stack
    HVM (arm64) ami-024765fccc9a1ecab Launch Stack
    eu-west-3 HVM (amd64) ami-03e1c0ee0eefb1594 Launch Stack
    HVM (arm64) ami-0e8057d41900c4eda Launch Stack
    me-south-1 HVM (amd64) ami-041a4d649b7fa344a Launch Stack
    HVM (arm64) ami-05a7a6f3fb8476a71 Launch Stack
    sa-east-1 HVM (amd64) ami-0b2eefc7840614a93 Launch Stack
    HVM (arm64) ami-0a30ad0653e5fcb2f Launch Stack
    us-east-1 HVM (amd64) ami-09178d07f493bea86 Launch Stack
    HVM (arm64) ami-0c24476dcc11e9b11 Launch Stack
    us-east-2 HVM (amd64) ami-0abc1e769883c4471 Launch Stack
    HVM (arm64) ami-0af4acbec724ed67e Launch Stack
    us-west-1 HVM (amd64) ami-0906c1738e6d674c9 Launch Stack
    HVM (arm64) ami-0c9e80bfe1776212f Launch Stack
    us-west-2 HVM (amd64) ami-01067f157706b7965 Launch Stack
    HVM (arm64) ami-0bf8ccbe213ba5a62 Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    docker run --rm -i quay.io/coreos/butane:latest < cl.yaml > ignition.json
    
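    Optionally, you can sanity-check the transpiled config before passing it to EC2. Ignition’s validator ships as a container image alongside Butane; assuming the quay.io/coreos/ignition-validate image:

```shell
# Prints nothing and exits 0 when the Ignition config is valid.
docker run --rm -i quay.io/coreos/ignition-validate:release - < ignition.json
```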

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type. Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add SSH key(s) via the AWS console or add keys/passwords via your Butane Config in order to log in.
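
    For example, a minimal Butane Config that authorizes an SSH key for the core user looks like this (the key shown is a placeholder):

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... [email protected]
```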

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-052a3d2f4c04928ec (amd64) in us-east-1 with a security group that opens ports 22, 2379, 2380, 4001, and 7001, and with the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need to open ports 2379, 2380, 4001 and 7001 between servers in the etcd cluster. Step-by-step instructions follow.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
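
    The same setup can be scripted with the AWS CLI. A sketch, matching the walkthrough above; inside a VPC you would create the group with --vpc-id and reference it by --group-id instead:

```shell
# Create the group, open SSH to the world, and open the etcd ports
# only to members of the same group (self-referencing rules).
aws ec2 create-security-group \
  --group-name flatcar-testing \
  --description "Flatcar Container Linux instances"

aws ec2 authorize-security-group-ingress \
  --group-name flatcar-testing --protocol tcp --port 22 --cidr 0.0.0.0/0

for port in 2379 2380 4001 7001; do
  aws ec2 authorize-security-group-ingress \
    --group-name flatcar-testing --protocol tcp --port "$port" \
    --source-group flatcar-testing
done
```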

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-052a3d2f4c04928ec (amd64), Beta ami-0d55bd0c1fddd9f38 (amd64), or Stable ami-04e380289235c3c8c (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to any keys set via your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible installation method is to import the published Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
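
    With the AWS CLI, the import and registration steps can be sketched as follows (bucket name, snapshot ID, and AMI name are placeholders; wait for aws ec2 describe-import-snapshot-tasks to report the import as completed before registering):

```shell
# Upload the uncompressed VMDK and import it as an EBS snapshot.
aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/
aws ec2 import-snapshot --disk-container \
  "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"

# Once the snapshot is available, register an AMI using /dev/sda2 as
# the root device, as described above.
aws ec2 register-image \
  --name flatcar-from-vmdk \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/sda2 \
  --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"
```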

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys[0]
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable
              # substitution works with Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.