Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux Matrix channel or the user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage-collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) is no longer possible once the AMI has been unpublished.
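    If you pin instances or autoscaling groups to a specific AMI, you can check whether it is still published with the AWS CLI. A small sketch (the region and AMI ID below are examples to replace with your own):

```shell
# Prints the AMI's name and creation date while it is still published;
# once the AMI has been unpublished, the call fails or returns an empty list.
aws ec2 describe-images \
  --region us-east-1 \
  --image-ids ami-0eff1644191cb5f95 \
  --query 'Images[].{Id:ImageId,Name:Name,Created:CreationDate}'
```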

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically, with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 4230.2.4.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-01d31506675f4d765 Launch Stack
    HVM (arm64) ami-0bcc673ef34b9fce1 Launch Stack
    ap-east-1 HVM (amd64) ami-080d4b6276336d7e6 Launch Stack
    HVM (arm64) ami-0e1a59495da91406e Launch Stack
    ap-northeast-1 HVM (amd64) ami-0c2d62aa56b48ebdd Launch Stack
    HVM (arm64) ami-049ed9e566b346671 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0efcc2fed5b9dc70c Launch Stack
    HVM (arm64) ami-048fecb3fb4cdb60c Launch Stack
    ap-south-1 HVM (amd64) ami-0b8e96a48b56a5f3d Launch Stack
    HVM (arm64) ami-0b579c283e534eed5 Launch Stack
    ap-southeast-1 HVM (amd64) ami-08cbba275a9864a91 Launch Stack
    HVM (arm64) ami-09f07482353e40f56 Launch Stack
    ap-southeast-2 HVM (amd64) ami-05ccd5acf33447d59 Launch Stack
    HVM (arm64) ami-0fd0b6463414060c1 Launch Stack
    ap-southeast-3 HVM (amd64) ami-038e7f26d853c4c57 Launch Stack
    HVM (arm64) ami-0604e0a3e1b260d5e Launch Stack
    ca-central-1 HVM (amd64) ami-0e1a18edbe84b7c0b Launch Stack
    HVM (arm64) ami-0592cc679151e8460 Launch Stack
    eu-central-1 HVM (amd64) ami-03864981402f4a68a Launch Stack
    HVM (arm64) ami-0238691aae074af7a Launch Stack
    eu-north-1 HVM (amd64) ami-0c2ccd588e56446af Launch Stack
    HVM (arm64) ami-0ef231f8de9fa5810 Launch Stack
    eu-south-1 HVM (amd64) ami-0f31f2de61af78fc4 Launch Stack
    HVM (arm64) ami-0e407a610c15ae0da Launch Stack
    eu-west-1 HVM (amd64) ami-0cfb6dafd550274db Launch Stack
    HVM (arm64) ami-02d94ae5d4360b407 Launch Stack
    eu-west-2 HVM (amd64) ami-0efcb6346dfd95276 Launch Stack
    HVM (arm64) ami-0c173605e76f625e2 Launch Stack
    eu-west-3 HVM (amd64) ami-0edbc10c641ab4b0f Launch Stack
    HVM (arm64) ami-08c9302c2f1b70859 Launch Stack
    me-south-1 HVM (amd64) ami-02340f93ada501e6c Launch Stack
    HVM (arm64) ami-04de2dfbb11767c78 Launch Stack
    sa-east-1 HVM (amd64) ami-046c7e2e4f6449d7d Launch Stack
    HVM (arm64) ami-0666f6e2b77655c85 Launch Stack
    us-east-1 HVM (amd64) ami-0eff1644191cb5f95 Launch Stack
    HVM (arm64) ami-0642984315b2bce90 Launch Stack
    us-east-2 HVM (amd64) ami-080149c5453bbe5b5 Launch Stack
    HVM (arm64) ami-072ba28fa0c9d7107 Launch Stack
    us-west-1 HVM (amd64) ami-0ea58aa4bae9c6cf0 Launch Stack
    HVM (arm64) ami-095dff8623bce5cc1 Launch Stack
    us-west-2 HVM (amd64) ami-08622b079b6926f1f Launch Stack
    HVM (arm64) ami-02310146bd8d74863 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 4459.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-096d7343d445c1e8e Launch Stack
    HVM (arm64) ami-02646a9f1c27e9c60 Launch Stack
    ap-east-1 HVM (amd64) ami-0605faf3fe4604389 Launch Stack
    HVM (arm64) ami-0b29fd8c4763d8e11 Launch Stack
    ap-northeast-1 HVM (amd64) ami-09b60774db21fe051 Launch Stack
    HVM (arm64) ami-061f104f76ce0f693 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0d8665f20b2b3833e Launch Stack
    HVM (arm64) ami-03af7a6c7f32b11ec Launch Stack
    ap-south-1 HVM (amd64) ami-08f40e2e00f43fdbf Launch Stack
    HVM (arm64) ami-06e6ee4eb4496e738 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0a62bde5109ecd212 Launch Stack
    HVM (arm64) ami-0e02c166a9efbce4a Launch Stack
    ap-southeast-2 HVM (amd64) ami-07647ad6a215224c7 Launch Stack
    HVM (arm64) ami-03b6773b9d84fc1ea Launch Stack
    ap-southeast-3 HVM (amd64) ami-071e933feba31d3b4 Launch Stack
    HVM (arm64) ami-04228be59c3d2b994 Launch Stack
    ca-central-1 HVM (amd64) ami-00fc60d322a8ca91f Launch Stack
    HVM (arm64) ami-0a665d52d89bd5175 Launch Stack
    eu-central-1 HVM (amd64) ami-07428f79fe715c795 Launch Stack
    HVM (arm64) ami-0970d4d6cf1d5a5fc Launch Stack
    eu-north-1 HVM (amd64) ami-0a7a4fe5208868e42 Launch Stack
    HVM (arm64) ami-0bfbb085d02e2a4b0 Launch Stack
    eu-south-1 HVM (amd64) ami-0125295ee87c6500f Launch Stack
    HVM (arm64) ami-0b1000bd38ccc1450 Launch Stack
    eu-west-1 HVM (amd64) ami-079778b5c5433b234 Launch Stack
    HVM (arm64) ami-015cb202d5fa7e7a1 Launch Stack
    eu-west-2 HVM (amd64) ami-0c83007e0696ce0fc Launch Stack
    HVM (arm64) ami-02b4031b7a6b31c22 Launch Stack
    eu-west-3 HVM (amd64) ami-0fa081992dd8134ae Launch Stack
    HVM (arm64) ami-05aa7239df49f88a9 Launch Stack
    me-south-1 HVM (amd64) ami-02dcc6d76b970cb3b Launch Stack
    HVM (arm64) ami-0ee091bd1c58d6874 Launch Stack
    sa-east-1 HVM (amd64) ami-0001c78376d784c85 Launch Stack
    HVM (arm64) ami-0624cca6d1430cd61 Launch Stack
    us-east-1 HVM (amd64) ami-01e277227eccf6c60 Launch Stack
    HVM (arm64) ami-09774dcdaec83b58a Launch Stack
    us-east-2 HVM (amd64) ami-0a4116a5602b3ca6d Launch Stack
    HVM (arm64) ami-07d2461f61c65ff71 Launch Stack
    us-west-1 HVM (amd64) ami-0baeceef89361b5b3 Launch Stack
    HVM (arm64) ami-09e51269b9d817f39 Launch Stack
    us-west-2 HVM (amd64) ami-0089ce959c9a5efc2 Launch Stack
    HVM (arm64) ami-0322fd294b2700412 Launch Stack

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 4487.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-09d37e8534e648206 Launch Stack
    HVM (arm64) ami-03a6cea751b3cd07f Launch Stack
    ap-east-1 HVM (amd64) ami-009a7bf8a84a53030 Launch Stack
    HVM (arm64) ami-00cc47f14e5c36386 Launch Stack
    ap-northeast-1 HVM (amd64) ami-020143f6a8b25a57e Launch Stack
    HVM (arm64) ami-06593f399aed27342 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0255c4047b5139c88 Launch Stack
    HVM (arm64) ami-02626ffbf614266f7 Launch Stack
    ap-south-1 HVM (amd64) ami-0d3170934743b6c43 Launch Stack
    HVM (arm64) ami-00603dc636b5c4e86 Launch Stack
    ap-southeast-1 HVM (amd64) ami-04329245d2ff5842a Launch Stack
    HVM (arm64) ami-03810ebcbd0e439c7 Launch Stack
    ap-southeast-2 HVM (amd64) ami-0a43daeb50df83a52 Launch Stack
    HVM (arm64) ami-083ab74f3733f0779 Launch Stack
    ap-southeast-3 HVM (amd64) ami-068cb10911ef1a77a Launch Stack
    HVM (arm64) ami-00d72314f6e3c8d77 Launch Stack
    ca-central-1 HVM (amd64) ami-07668f2410b841bae Launch Stack
    HVM (arm64) ami-00b8584c05cbc2f05 Launch Stack
    eu-central-1 HVM (amd64) ami-048a0fb239892022c Launch Stack
    HVM (arm64) ami-0c158c4de82271c69 Launch Stack
    eu-north-1 HVM (amd64) ami-06be23746f64e9918 Launch Stack
    HVM (arm64) ami-037107fe2d36720af Launch Stack
    eu-south-1 HVM (amd64) ami-0a4ba53d801c8e297 Launch Stack
    HVM (arm64) ami-0edf3d5c5b6ae7aaf Launch Stack
    eu-west-1 HVM (amd64) ami-0770e0df672e5b764 Launch Stack
    HVM (arm64) ami-0de0c4c0a3b0d1f49 Launch Stack
    eu-west-2 HVM (amd64) ami-05302c643c88b3b7a Launch Stack
    HVM (arm64) ami-08a938c5b89a3fd17 Launch Stack
    eu-west-3 HVM (amd64) ami-0a6d5e1cac34c752a Launch Stack
    HVM (arm64) ami-0be98600c87607496 Launch Stack
    me-south-1 HVM (amd64) ami-09c68c1b629f7c923 Launch Stack
    HVM (arm64) ami-064b91d2a35fe0682 Launch Stack
    sa-east-1 HVM (amd64) ami-03c7178388cfe9e97 Launch Stack
    HVM (arm64) ami-0331a373cfd007a65 Launch Stack
    us-east-1 HVM (amd64) ami-0257d6059665ebc31 Launch Stack
    HVM (arm64) ami-07923efa414820b87 Launch Stack
    us-east-2 HVM (amd64) ami-05200bcd578be7359 Launch Stack
    HVM (arm64) ami-00cb640f418687a10 Launch Stack
    us-west-1 HVM (amd64) ami-00abfe1137a6aa6bb Launch Stack
    HVM (arm64) ami-0ef4fa454d80e3adf Launch Stack
    us-west-2 HVM (amd64) ami-0071a957283c4cbcc Launch Stack
    HVM (arm64) ami-092bca5fbb007a09c Launch Stack

    LTS release streams are maintained for an extended lifetime of 18 months. The yearly LTS streams have an overlap of 6 months. The current version is Flatcar Container Linux 4081.3.6.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-065d742b53d039f10 Launch Stack
    HVM (arm64) ami-031e6aa017e3d66a4 Launch Stack
    ap-east-1 HVM (amd64) ami-05d861bfa50523be9 Launch Stack
    HVM (arm64) ami-00376960872d79ace Launch Stack
    ap-northeast-1 HVM (amd64) ami-05dd5c8176aae392e Launch Stack
    HVM (arm64) ami-0d187650ed489eb63 Launch Stack
    ap-northeast-2 HVM (amd64) ami-082997538fee72535 Launch Stack
    HVM (arm64) ami-03cc0c6cbfd15b96b Launch Stack
    ap-south-1 HVM (amd64) ami-05a8e27ad68c7c095 Launch Stack
    HVM (arm64) ami-0b2d1b5a81d288101 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0bbc11922d35e88f7 Launch Stack
    HVM (arm64) ami-019dbbc6398ee063e Launch Stack
    ap-southeast-2 HVM (amd64) ami-0453f031a5311e96c Launch Stack
    HVM (arm64) ami-09d8d953473bdd4bb Launch Stack
    ap-southeast-3 HVM (amd64) ami-06a63dc511c9781f3 Launch Stack
    HVM (arm64) ami-074bb47a98f1747b4 Launch Stack
    ca-central-1 HVM (amd64) ami-080a9e8c39c377a17 Launch Stack
    HVM (arm64) ami-05895f696017a8301 Launch Stack
    eu-central-1 HVM (amd64) ami-0099e069036c934fa Launch Stack
    HVM (arm64) ami-0c6adc94939c2f348 Launch Stack
    eu-north-1 HVM (amd64) ami-0eb12fd4cf77da266 Launch Stack
    HVM (arm64) ami-00c4b52eb4c77f737 Launch Stack
    eu-south-1 HVM (amd64) ami-06548dff7a06688c4 Launch Stack
    HVM (arm64) ami-00c72fd113bab908e Launch Stack
    eu-west-1 HVM (amd64) ami-01b7787bc0f8621e5 Launch Stack
    HVM (arm64) ami-03448c137612fac2a Launch Stack
    eu-west-2 HVM (amd64) ami-0061694a1f70ac69b Launch Stack
    HVM (arm64) ami-0e6da03e8bfc266bd Launch Stack
    eu-west-3 HVM (amd64) ami-028ac53f4abd50a0a Launch Stack
    HVM (arm64) ami-08ff956abf5f1b861 Launch Stack
    me-south-1 HVM (amd64) ami-0597951317c148292 Launch Stack
    HVM (arm64) ami-09584968f1259e17c Launch Stack
    sa-east-1 HVM (amd64) ami-0e79099b46011b2a7 Launch Stack
    HVM (arm64) ami-0a3e84660861b4e0f Launch Stack
    us-east-1 HVM (amd64) ami-08f4bc25055494068 Launch Stack
    HVM (arm64) ami-086c5cca4129f4102 Launch Stack
    us-east-2 HVM (amd64) ami-0da2ef08fd5010737 Launch Stack
    HVM (arm64) ami-02da50159337b6b16 Launch Stack
    us-west-1 HVM (amd64) ami-08befc8df1e62f5a9 Launch Stack
    HVM (arm64) ami-08292a8b7fd99dd25 Launch Stack
    us-west-2 HVM (amd64) ami-033de58d5bfead60e Launch Stack
    HVM (arm64) ami-008bca8970ab8471d Launch Stack

    Butane Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or the EC2 API.
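    For example, with the AWS CLI the Ignition JSON can be passed as user data when launching an instance. A sketch, where the AMI ID, key pair, and security group are placeholders to replace with your own:

```shell
# Launch one instance with the Ignition config as user data
aws ec2 run-instances \
  --region us-east-1 \
  --image-id ami-0eff1644191cb5f95 \
  --instance-type t3.medium \
  --key-name my-keypair \
  --security-group-ids sg-0123456789abcdef0 \
  --user-data file://ignition.json
```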

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            TimeoutStartSec=0
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            Restart=always
            RestartSec=5s
            [Install]
            WantedBy=multi-user.target
    

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
    
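    Before booting machines with the resulting config, it can be worth validating it. One way, assuming the quay.io/coreos/ignition-validate container image, is:

```shell
# Exits non-zero and prints a report if ignition.json is invalid
docker run --rm -i quay.io/coreos/ignition-validate:release - < ignition.json
```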

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type . Here’s the Butane Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
    
            [Install]
            RequiredBy=local-fs.target
    

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>
    

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-0257d6059665ebc31 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions follow.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
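    The same security group can also be created non-interactively with the AWS CLI. A sketch, assuming the default VPC; note how the etcd rules reference the group itself as the source:

```shell
# Create the group and capture its ID
SG_ID=$(aws ec2 create-security-group \
  --group-name flatcar-testing \
  --description "Flatcar Container Linux instances" \
  --query GroupId --output text)

# Allow SSH from anywhere
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0

# Allow the etcd ports only between members of the group
for port in 2379 2380 4001 7001; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$port" \
    --source-group "$SG_ID"
done
```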

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-0257d6059665ebc31 (amd64), Beta ami-01e277227eccf6c60 (amd64), or Stable ami-0eff1644191cb5f95 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • In the EC2 dashboard, paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: choose a key of your choice; it will be added in addition to the ones in your Butane Config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated VMDK Flatcar image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <[email protected]>" [ultimate]
    
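    The download URL follows a fixed pattern, so it can be assembled in a small script for any channel, architecture, and version:

```shell
#!/usr/bin/env bash
set -euo pipefail

CHANNEL="alpha"    # stable, beta, alpha, or lts
ARCH="amd64"       # amd64 or arm64
VERSION="current"  # "current" or a pinned version such as 4230.2.4

# Assemble the image and signature URLs from the pattern above
IMAGE_URL="https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2"

echo "${IMAGE_URL}"
echo "${IMAGE_URL}.sig"
```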

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard and generate an AMI from it. For it to work, use /dev/sda2 as the “Root device name”, and you will probably want to select “Hardware-assisted virtualization” as the “Virtualization type”.
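    These steps can also be scripted with the AWS CLI. A sketch; the bucket name, key, and snapshot ID are placeholders, and the import task must finish before the AMI can be registered:

```shell
# Upload the uncompressed image to S3 (bucket and key are placeholders)
aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-bucket/flatcar.vmdk

# Start the snapshot import; note the ImportTaskId it returns
aws ec2 import-snapshot \
  --description "Flatcar Container Linux" \
  --disk-container "Format=VMDK,UserBucket={S3Bucket=my-bucket,S3Key=flatcar.vmdk}"

# Once the task completes, register an AMI from the resulting snapshot,
# using /dev/sda2 as the root device as described above
aws ec2 register-image \
  --name flatcar-imported \
  --architecture x86_64 \
  --virtualization-type hvm \
  --root-device-name /dev/sda2 \
  --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"
```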

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone https://github.com/flatcar/flatcar-terraform.git
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply
    

    Start with an aws-ec2-machines.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }
    
    provider "aws" {
      region = var.aws_region
    }
    
    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id
    
      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }
    
    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id
    
      tags = {
        Name = var.cluster_name
      }
    }
    
    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }
    
    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }
    
    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]
    
      filter {
        name   = "architecture"
        values = ["x86_64"]
      }
    
      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }
    
      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }
    
    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name
    
      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]
    
      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }
    
    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }
    
    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
    
      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }
    

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }
    
    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }
    
    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }
    
    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }
    
    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }
    
    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }
    
    variable "subnet_cidr" {
      type    = string
      default = "172.16.10.0/24"
    }
    

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }
    
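    After terraform apply finishes, the same addresses can be printed again at any time:

```shell
terraform output ip-addresses
```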

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... [email protected]"]
    

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary since we already set it as a VM attribute):

    ---
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          filesystem: root
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}
    

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply
    

    Log in via ssh core@IPADDRESS with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to cl/machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.
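    With Terraform v0.13 you can also force a replacement without changing the config by tainting the resource first:

```shell
terraform taint 'aws_instance.machine["mynode"]'
terraform apply
```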

    You can find this Terraform module in the repository for Flatcar Terraform examples.