Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases remain available as public AMIs on AWS for 9 months. AMIs older than 9 months are unpublished in regular garbage-collection sweeps. Please note that this does not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible once the AMI has been unpublished.
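    Since an autoscaling group pinned to an unpublished AMI can no longer launch instances, it is worth checking periodically whether a pinned AMI is still available. A minimal sketch with the AWS CLI (the AMI ID is a placeholder; the command is printed for review rather than executed, since it needs valid AWS credentials):

```shell
# Placeholder AMI ID: substitute the one your autoscaling group is pinned to.
ami="ami-0123456789abcdef0"
cmd="aws ec2 describe-images --region us-east-1 --image-ids ${ami}"
# If the AMI has been unpublished, running this fails with InvalidAMIID.Unavailable.
echo "${cmd}"
```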

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3432.0.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0640df2ce10610a94 Launch Stack
    HVM (arm64) ami-02b4aa66b117efd6f Launch Stack
    ap-east-1 HVM (amd64) ami-0e3426e84801d2510 Launch Stack
    HVM (arm64) ami-06dc2b72efaaab19c Launch Stack
    ap-northeast-1 HVM (amd64) ami-01c92bb00094423dc Launch Stack
    HVM (arm64) ami-08b29bf1ef61c1a85 Launch Stack
    ap-northeast-2 HVM (amd64) ami-02fde1b93c6457461 Launch Stack
    HVM (arm64) ami-074c42d389a758a9f Launch Stack
    ap-south-1 HVM (amd64) ami-060c7d6d16ae0ee81 Launch Stack
    HVM (arm64) ami-00fdb895d338150f3 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0035e9ea2ca315064 Launch Stack
    HVM (arm64) ami-08d3b2fcc8822a63e Launch Stack
    ap-southeast-2 HVM (amd64) ami-06f974e2141108a8b Launch Stack
    HVM (arm64) ami-0ec6a6355b374d52a Launch Stack
    ap-southeast-3 HVM (amd64) ami-0c5ea0c7c78e0c45c Launch Stack
    HVM (arm64) ami-0d67bd0b9386eec43 Launch Stack
    ca-central-1 HVM (amd64) ami-09fb7ecbbc0ad2424 Launch Stack
    HVM (arm64) ami-0605c6282998752a9 Launch Stack
    eu-central-1 HVM (amd64) ami-0bb7e2fa3ff2d73c5 Launch Stack
    HVM (arm64) ami-0d01602e9f2141fb8 Launch Stack
    eu-north-1 HVM (amd64) ami-06cb7ae3095cf66c6 Launch Stack
    HVM (arm64) ami-0b9bf2ebb4e3064db Launch Stack
    eu-south-1 HVM (amd64) ami-06eb3e1ff67371d42 Launch Stack
    HVM (arm64) ami-04b4f520d3f08ae19 Launch Stack
    eu-west-1 HVM (amd64) ami-063b0ac1f876ba519 Launch Stack
    HVM (arm64) ami-0d604f17f9fa5d0dd Launch Stack
    eu-west-2 HVM (amd64) ami-0b2e206073cf375ab Launch Stack
    HVM (arm64) ami-0d6293cf1daf4a14c Launch Stack
    eu-west-3 HVM (amd64) ami-0107fb410531a4a1a Launch Stack
    HVM (arm64) ami-0e000241b800b4030 Launch Stack
    me-south-1 HVM (amd64) ami-0002fd92a8200fc14 Launch Stack
    HVM (arm64) ami-04562f539db1c1f41 Launch Stack
    sa-east-1 HVM (amd64) ami-04768c6502a3fe8cf Launch Stack
    HVM (arm64) ami-041d5c53aa39a727a Launch Stack
    us-east-1 HVM (amd64) ami-041e4b89f83cf21b7 Launch Stack
    HVM (arm64) ami-03abb4b0eca2f2c12 Launch Stack
    us-east-2 HVM (amd64) ami-0a4c0f517e1d0edfe Launch Stack
    HVM (arm64) ami-05d7e12598124c85a Launch Stack
    us-west-1 HVM (amd64) ami-004f7f799e7e00206 Launch Stack
    HVM (arm64) ami-03734032bfd52f231 Launch Stack
    us-west-2 HVM (amd64) ami-0566162fd21e33948 Launch Stack
    HVM (arm64) ami-09e4e80447ecc2120 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3417.1.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-0e46225504d0deba0 Launch Stack
    HVM (arm64) ami-0983a37e874b358d2 Launch Stack
    ap-east-1 HVM (amd64) ami-0b5c603d1c01e383e Launch Stack
    HVM (arm64) ami-0078e2469e71dcc0a Launch Stack
    ap-northeast-1 HVM (amd64) ami-0fb4f1a01a61a577c Launch Stack
    HVM (arm64) ami-061f02656313bb594 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0567ebaffd6cae4de Launch Stack
    HVM (arm64) ami-091d279eed5f797e6 Launch Stack
    ap-south-1 HVM (amd64) ami-0329a9c0dd8019145 Launch Stack
    HVM (arm64) ami-0c730ce171ecf3181 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0617e69f8f416221a Launch Stack
    HVM (arm64) ami-0201aa75037e2ce0d Launch Stack
    ap-southeast-2 HVM (amd64) ami-0fb73e89df9e3f012 Launch Stack
    HVM (arm64) ami-0e079733ca511e2b2 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0d5f4c7ec3d17d69f Launch Stack
    HVM (arm64) ami-07c3c02169b2fd1a5 Launch Stack
    ca-central-1 HVM (amd64) ami-04067c358d0ffaaf8 Launch Stack
    HVM (arm64) ami-0106c779411ffd64e Launch Stack
    eu-central-1 HVM (amd64) ami-023b1374a0d132e1c Launch Stack
    HVM (arm64) ami-0599ab59908dcf164 Launch Stack
    eu-north-1 HVM (amd64) ami-03ee4658c8e1e8a15 Launch Stack
    HVM (arm64) ami-0d55f07d16f826358 Launch Stack
    eu-south-1 HVM (amd64) ami-0cd32c73d4be80cb8 Launch Stack
    HVM (arm64) ami-0959496a6e9ef7a4f Launch Stack
    eu-west-1 HVM (amd64) ami-0d7440b06304f1297 Launch Stack
    HVM (arm64) ami-004cd907f8786dc82 Launch Stack
    eu-west-2 HVM (amd64) ami-0fd59034a3061fc07 Launch Stack
    HVM (arm64) ami-0b7818af4af9d6a42 Launch Stack
    eu-west-3 HVM (amd64) ami-0dac1ec77cdf05252 Launch Stack
    HVM (arm64) ami-0bd6661360eafa057 Launch Stack
    me-south-1 HVM (amd64) ami-0bb3aa20aafbc8323 Launch Stack
    HVM (arm64) ami-017f269ac7be646bb Launch Stack
    sa-east-1 HVM (amd64) ami-032550e97f1f3e5c5 Launch Stack
    HVM (arm64) ami-0a9560054e8d88d50 Launch Stack
    us-east-1 HVM (amd64) ami-000f8fdd6dcb04cf9 Launch Stack
    HVM (arm64) ami-00246916005e56541 Launch Stack
    us-east-2 HVM (amd64) ami-0f62716b51460d492 Launch Stack
    HVM (arm64) ami-0f531b6490a525b5a Launch Stack
    us-west-1 HVM (amd64) ami-0b2bdbad4093d12ff Launch Stack
    HVM (arm64) ami-0abe6d463b2324a77 Launch Stack
    us-west-2 HVM (amd64) ami-00857af16e26a4e43 Launch Stack
    HVM (arm64) ami-0db60513150f210d9 Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3374.2.0.

    View as json feed: amd64 arm64
    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-07334bbd9946f1005 Launch Stack
    HVM (arm64) ami-081e5279c77de0825 Launch Stack
    ap-east-1 HVM (amd64) ami-0675782aa721d61b5 Launch Stack
    HVM (arm64) ami-0b0af9e1485e0f52b Launch Stack
    ap-northeast-1 HVM (amd64) ami-0f87333dc64867b17 Launch Stack
    HVM (arm64) ami-062257eb35f4b4056 Launch Stack
    ap-northeast-2 HVM (amd64) ami-03fd646a8e6d5ea32 Launch Stack
    HVM (arm64) ami-0b23efeb5b76ef92d Launch Stack
    ap-south-1 HVM (amd64) ami-018e5e85f4c5bd494 Launch Stack
    HVM (arm64) ami-01b716260d3535980 Launch Stack
    ap-southeast-1 HVM (amd64) ami-0589dc697af56a157 Launch Stack
    HVM (arm64) ami-0dcbcb2bc57a6ba65 Launch Stack
    ap-southeast-2 HVM (amd64) ami-058f25350e549daba Launch Stack
    HVM (arm64) ami-0fa19b6a21ad7d4c7 Launch Stack
    ap-southeast-3 HVM (amd64) ami-06dd3e5a6d7e18eb0 Launch Stack
    HVM (arm64) ami-00069dc769c058564 Launch Stack
    ca-central-1 HVM (amd64) ami-08d15df15bbde3961 Launch Stack
    HVM (arm64) ami-008082f760466c762 Launch Stack
    eu-central-1 HVM (amd64) ami-05a364bc5ff5cf608 Launch Stack
    HVM (arm64) ami-0bfb6fd5e934b83c9 Launch Stack
    eu-north-1 HVM (amd64) ami-03bb77379a7fefc2a Launch Stack
    HVM (arm64) ami-0fb1eaab29e26b658 Launch Stack
    eu-south-1 HVM (amd64) ami-0dd4359aabea34cc0 Launch Stack
    HVM (arm64) ami-04c86960aba601b51 Launch Stack
    eu-west-1 HVM (amd64) ami-0f2c879f20f201207 Launch Stack
    HVM (arm64) ami-0826167a19546267a Launch Stack
    eu-west-2 HVM (amd64) ami-07b3c74245dda05f5 Launch Stack
    HVM (arm64) ami-0c9a9856ea9267067 Launch Stack
    eu-west-3 HVM (amd64) ami-0de628fc10dd8fbb8 Launch Stack
    HVM (arm64) ami-02abfc56e2eb15c7d Launch Stack
    me-south-1 HVM (amd64) ami-09dead36e6ec92909 Launch Stack
    HVM (arm64) ami-084a61c799cf3ebd0 Launch Stack
    sa-east-1 HVM (amd64) ami-0dc1b8632daa252c4 Launch Stack
    HVM (arm64) ami-0b2cd0de8e78ed4ee Launch Stack
    us-east-1 HVM (amd64) ami-0c3460b18a42a37cd Launch Stack
    HVM (arm64) ami-089057bafaa696c7e Launch Stack
    us-east-2 HVM (amd64) ami-070be49e719a8aaba Launch Stack
    HVM (arm64) ami-0250cc5aee111215a Launch Stack
    us-west-1 HVM (amd64) ami-07aa8a42b73d77c08 Launch Stack
    HVM (arm64) ami-06b6c160e2279b9de Launch Stack
    us-west-2 HVM (amd64) ami-05f7ac3d85495b064 Launch Stack
    HVM (arm64) ami-0bf5304088a3be9d6 Launch Stack

    AWS China AMIs maintained by Giant Swarm

    The following AMIs are not part of the official Flatcar Container Linux release process and may lag behind (query version).

    View as json feed: amd64
    EC2 Region AMI Type AMI ID CloudFormation

    CloudFormation will launch a cluster of Flatcar Container Linux machines with a security group and an autoscaling group.

    Container Linux Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --net host docker.io/nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            [Install]
            WantedBy=multi-user.target

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
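    Before handing the result to EC2 as user data, a quick sanity check that the transpiler produced valid JSON with an Ignition spec version can save a failed boot. A minimal sketch, using a sample file as a stand-in for real transpiler output:

```shell
# Stand-in for real transpiler output; replace with your own ignition.json.
cat > ignition.json <<'EOF'
{"ignition": {"version": "3.3.0"}}
EOF
# Fails loudly if the file is not valid JSON or lacks the spec version.
version=$(python3 -c 'import json; print(json.load(open("ignition.json"))["ignition"]["version"])')
echo "Ignition spec version: ${version}"
```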

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type . Here’s the Container Linux Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            [Install]
            RequiredBy=local-fs.target

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.
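    One detail worth knowing when writing such mount units: systemd requires the unit file name to be derived from the mount point (normally computed with systemd-escape -p --suffix=mount). A rough sketch of the rule for simple paths without special characters:

```shell
# /media/ephemeral must be mounted by a unit named media-ephemeral.mount:
# drop the leading slash, then replace the remaining slashes with dashes.
mount_point="/media/ephemeral"
unit_name="$(printf '%s' "${mount_point#/}" | tr '/' '-').mount"
echo "${unit_name}"
```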

    Adding more machines

    To add more instances to the cluster, just launch more with the same Container Linux Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Container Linux Config, in order to log in.

    To connect to an instance after it’s created, run:

    ssh core@<ip address>

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-041e4b89f83cf21b7 (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” on each host. SSH uses the core user, and you have etcd and Docker to play with.

    Creating the security group

    You need ports 2379, 2380, 4001, and 7001 open between servers in the etcd cluster. Step-by-step instructions are below.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
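    The console steps above can also be scripted. A sketch with the AWS CLI, using the group name from the walkthrough (the commands are printed for review instead of executed, since they need credentials and an existing group):

```shell
group="flatcar-testing"
rules=""
for port in 2379 2380 4001 7001; do
  rules="${rules}aws ec2 authorize-security-group-ingress --group-name ${group} --protocol tcp --port ${port} --source-group ${group}
"
done
printf '%s' "${rules}"
```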

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-041e4b89f83cf21b7 (amd64), Beta ami-000f8fdd6dcb04cf9 (amd64), or Stable ami-0c3460b18a42a37cd (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
    • Paste your Ignition JSON config into the “User Data” field in the EC2 dashboard, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
    • Create Key Pair: Choose a key of your choice, it will be added in addition to the one in the gist, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file is available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and verify it before proceeding.

    $ wget "https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2"
    $ wget "https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig"
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed VMDK file to S3.
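    The import itself can be driven from the AWS CLI. A hypothetical sketch (the bucket name is a placeholder; the command is printed rather than executed, since it needs credentials and the VMDK already uploaded to S3):

```shell
bucket="my-flatcar-import-bucket"   # placeholder: your S3 bucket
key="flatcar_production_ami_vmdk_image.vmdk"
cmd="aws ec2 import-snapshot --description 'Flatcar Container Linux' --disk-container Format=VMDK,UserBucket={S3Bucket=${bucket},S3Key=${key}}"
echo "${cmd}"
```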

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
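    Registering the AMI from the snapshot can likewise be done on the CLI instead of the console. A sketch using the root device name /dev/sda2 stated above (the snapshot ID is a placeholder; the command is printed for review rather than executed):

```shell
snapshot="snap-0123456789abcdef0"   # placeholder: ID of the imported snapshot
cmd="aws ec2 register-image --name flatcar-from-vmdk --architecture x86_64 --virtualization-type hvm --root-device-name /dev/sda2 --block-device-mappings DeviceName=/dev/sda2,Ebs={SnapshotId=${snapshot}}"
echo "${cmd}"
```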

    Using Flatcar Container Linux

    Now that you have a machine booted, it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Known issues


    Terraform

    The aws Terraform provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply

    Start with a file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }

    provider "aws" {
      region = var.aws_region
    }

    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id

      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }

    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }

    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }

    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }

    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]

      filter {
        name   = "architecture"
        values = ["x86_64"]
      }

      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }

      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }

    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name

      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]

      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }

    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }

    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")

      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }

    Create a file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }

    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }

    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }

    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }

    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }

    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }

    variable "subnet_cidr" {
      type    = string
      default = "172.16.0.0/24"
    }

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... me@mail.net"]

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a machine-NAME.yaml.tmpl file with a corresponding name.
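    The mapping from the machines variable to template files can be sketched like this (the machine name is the one from the example terraform.tfvars):

```shell
machines="mynode"   # placeholder: the machines list from terraform.tfvars
paths=""
for name in ${machines}; do
  # This is the path the module's file() call resolves for each machine.
  paths="${paths}cl/machine-${name}.yaml.tmpl
"
done
printf '%s' "${paths}"
```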

    For example, create the configuration for mynode in the file machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary, since we already set it as a VM attribute):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys: ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution
              # works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply

    Log in via ssh core@<IP address> with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.

    You can find this Terraform module in the repository for Flatcar Terraform examples.