Running Flatcar Container Linux on AWS EC2

    The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux IRC channel or user mailing list.

    At the end of the document there are instructions for deploying with Terraform.

    Release retention time

    After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be un-published in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will no longer be possible once the AMI has been un-published.
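
    If you pin an autoscaling group or launch template to a specific AMI, it is worth checking periodically whether that AMI is still published; an un-published AMI is no longer returned. A small sketch with the AWS CLI, using the current Alpha amd64 AMI for us-east-1 from the table below:

    aws ec2 describe-images --region us-east-1 \
      --image-ids ami-02cd2a051f34e677f \
      --query 'Images[].{Id:ImageId,Name:Name,Created:CreationDate}'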

    Choosing a channel

    Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can disable this feature, although we don’t recommend it. Read the release notes for specific features and bug fixes.

    The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux 3549.0.0.

    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-08de99fdf7a6e686d Launch Stack
    HVM (arm64) ami-0cb9fd83612a78682 Launch Stack
    ap-east-1 HVM (amd64) ami-0e49e75d641ce3b45 Launch Stack
    HVM (arm64) ami-05950eb74d8b4ba55 Launch Stack
    ap-northeast-1 HVM (amd64) ami-04514a619cd781a4b Launch Stack
    HVM (arm64) ami-0b066adea533a6e49 Launch Stack
    ap-northeast-2 HVM (amd64) ami-0bf9c5cea4102987b Launch Stack
    HVM (arm64) ami-0939dbc519f94191d Launch Stack
    ap-south-1 HVM (amd64) ami-0d6436a600dd8b376 Launch Stack
    HVM (arm64) ami-0884344761400fc8c Launch Stack
    ap-southeast-1 HVM (amd64) ami-0f11f5d512a62b4cd Launch Stack
    HVM (arm64) ami-0656da4e878f5da5f Launch Stack
    ap-southeast-2 HVM (amd64) ami-0e8c9843d62f78afa Launch Stack
    HVM (arm64) ami-0a0245ec081ba7845 Launch Stack
    ap-southeast-3 HVM (amd64) ami-0a1eb7a2267be96d3 Launch Stack
    HVM (arm64) ami-0ce50d75c3b64494d Launch Stack
    ca-central-1 HVM (amd64) ami-0c881a49423a57b13 Launch Stack
    HVM (arm64) ami-0e2557e251f7d2388 Launch Stack
    eu-central-1 HVM (amd64) ami-05031ea344f6444f3 Launch Stack
    HVM (arm64) ami-09fff6896ad1dd846 Launch Stack
    eu-north-1 HVM (amd64) ami-058d2fb2567a9ed2a Launch Stack
    HVM (arm64) ami-0ce4985612a46a55a Launch Stack
    eu-south-1 HVM (amd64) ami-04c58b52d1bdb6db9 Launch Stack
    HVM (arm64) ami-076304051d82eb453 Launch Stack
    eu-west-1 HVM (amd64) ami-0c1a50ba7049e6792 Launch Stack
    HVM (arm64) ami-0fa8c1dfe9cac520d Launch Stack
    eu-west-2 HVM (amd64) ami-017500efbaf86f0b6 Launch Stack
    HVM (arm64) ami-0aed720405869efec Launch Stack
    eu-west-3 HVM (amd64) ami-0d4268cc24df1e173 Launch Stack
    HVM (arm64) ami-00f00ae76a930a5a9 Launch Stack
    me-south-1 HVM (amd64) ami-002fca6b69fc74f93 Launch Stack
    HVM (arm64) ami-021bf40cd17741d84 Launch Stack
    sa-east-1 HVM (amd64) ami-0b64e4774d86ea804 Launch Stack
    HVM (arm64) ami-09b8d743564facc76 Launch Stack
    us-east-1 HVM (amd64) ami-02cd2a051f34e677f Launch Stack
    HVM (arm64) ami-0766b070a890dcd74 Launch Stack
    us-east-2 HVM (amd64) ami-088c6170e7a9eab68 Launch Stack
    HVM (arm64) ami-00d5e3d664423e4a1 Launch Stack
    us-west-1 HVM (amd64) ami-0a605582e7c7112dd Launch Stack
    HVM (arm64) ami-078d7ae38702781ee Launch Stack
    us-west-2 HVM (amd64) ami-03ae7bf748b217d43 Launch Stack
    HVM (arm64) ami-0026c6bfb17a36b62 Launch Stack

    The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux 3510.1.0.

    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-055e8d6a51a1959a9 Launch Stack
    HVM (arm64) ami-0541ec773ad4d7f7d Launch Stack
    ap-east-1 HVM (amd64) ami-069c4ffe39aba872f Launch Stack
    HVM (arm64) ami-07d163d6744b38886 Launch Stack
    ap-northeast-1 HVM (amd64) ami-080c6e9e98f8088c2 Launch Stack
    HVM (arm64) ami-0c3fc13087768c8a8 Launch Stack
    ap-northeast-2 HVM (amd64) ami-020206aec8b09fad0 Launch Stack
    HVM (arm64) ami-0a7de289059bd3198 Launch Stack
    ap-south-1 HVM (amd64) ami-00b0bd8c4010df65a Launch Stack
    HVM (arm64) ami-0b494128987e119dc Launch Stack
    ap-southeast-1 HVM (amd64) ami-0ff98f7e6ac2ade35 Launch Stack
    HVM (arm64) ami-0a84e6703a5f57fba Launch Stack
    ap-southeast-2 HVM (amd64) ami-024e07f4190b687aa Launch Stack
    HVM (arm64) ami-06de11be992022337 Launch Stack
    ap-southeast-3 HVM (amd64) ami-06babc2eee6e19896 Launch Stack
    HVM (arm64) ami-0815a3ad81d301d28 Launch Stack
    ca-central-1 HVM (amd64) ami-085b9f7ba72d9cd56 Launch Stack
    HVM (arm64) ami-0e81c6a44018690f3 Launch Stack
    eu-central-1 HVM (amd64) ami-03c14ed161c241113 Launch Stack
    HVM (arm64) ami-02e7f1d8c812a006d Launch Stack
    eu-north-1 HVM (amd64) ami-0cb4e13e5e305c48d Launch Stack
    HVM (arm64) ami-027feecb63bbe7c31 Launch Stack
    eu-south-1 HVM (amd64) ami-07bd0a1ee937474ac Launch Stack
    HVM (arm64) ami-0a1c76ea68c3e8f4d Launch Stack
    eu-west-1 HVM (amd64) ami-07e17ac6d3f866440 Launch Stack
    HVM (arm64) ami-0e40a3d0cb77633f0 Launch Stack
    eu-west-2 HVM (amd64) ami-069ff2e51e12fbfa8 Launch Stack
    HVM (arm64) ami-0d2d75fb51a9c2083 Launch Stack
    eu-west-3 HVM (amd64) ami-033eeff0c0766a74c Launch Stack
    HVM (arm64) ami-01d082adb452f3d3d Launch Stack
    me-south-1 HVM (amd64) ami-0eaa4c5cbcffdcb4f Launch Stack
    HVM (arm64) ami-06b788b72e9723702 Launch Stack
    sa-east-1 HVM (amd64) ami-0cafa4695ac81bf43 Launch Stack
    HVM (arm64) ami-001aaaedef7a53018 Launch Stack
    us-east-1 HVM (amd64) ami-0620c21abe172ccd7 Launch Stack
    HVM (arm64) ami-0a1efc3acab473103 Launch Stack
    us-east-2 HVM (amd64) ami-0bc3a28c62bffe8bf Launch Stack
    HVM (arm64) ami-01f34de8309394dc3 Launch Stack
    us-west-1 HVM (amd64) ami-0ffdebf437c7efe34 Launch Stack
    HVM (arm64) ami-0a4af4a45e4944b06 Launch Stack
    us-west-2 HVM (amd64) ami-0cf7a6eee223d62fc Launch Stack
    HVM (arm64) ami-04dcbb0c9a9ddefda Launch Stack

    The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux 3374.2.5.

    EC2 Region AMI Type AMI ID CloudFormation
    af-south-1 HVM (amd64) ami-07ba3d6b50d99482c Launch Stack
    HVM (arm64) ami-0a70114a07b73e0e6 Launch Stack
    ap-east-1 HVM (amd64) ami-07b27b98341b12d55 Launch Stack
    HVM (arm64) ami-09b38022972d28486 Launch Stack
    ap-northeast-1 HVM (amd64) ami-0e8a9bc05687da680 Launch Stack
    HVM (arm64) ami-0c709499bf7ba6305 Launch Stack
    ap-northeast-2 HVM (amd64) ami-025560fd5c8f923e7 Launch Stack
    HVM (arm64) ami-088150d07c1cb45e1 Launch Stack
    ap-south-1 HVM (amd64) ami-04f35b8032acbf32b Launch Stack
    HVM (arm64) ami-032810f11b3c00a16 Launch Stack
    ap-southeast-1 HVM (amd64) ami-012989ed483c5eb4f Launch Stack
    HVM (arm64) ami-08326604d8a85661a Launch Stack
    ap-southeast-2 HVM (amd64) ami-08da4333dcc6f9e44 Launch Stack
    HVM (arm64) ami-08180f3731df7efbb Launch Stack
    ap-southeast-3 HVM (amd64) ami-04491776801a9c207 Launch Stack
    HVM (arm64) ami-09050a6dfe5094193 Launch Stack
    ca-central-1 HVM (amd64) ami-0f50eedbe398eaa07 Launch Stack
    HVM (arm64) ami-0dee9b9134dff9993 Launch Stack
    eu-central-1 HVM (amd64) ami-02b356f9769b955fa Launch Stack
    HVM (arm64) ami-06e94cbe20e34ab0d Launch Stack
    eu-north-1 HVM (amd64) ami-077b8247be1bd3b52 Launch Stack
    HVM (arm64) ami-0cdd2ecdfb0a5ed0b Launch Stack
    eu-south-1 HVM (amd64) ami-0da31022df4ba635c Launch Stack
    HVM (arm64) ami-04a2a57b597e5376f Launch Stack
    eu-west-1 HVM (amd64) ami-05143b3ea05a5944b Launch Stack
    HVM (arm64) ami-00ea5f2dc1e0ae459 Launch Stack
    eu-west-2 HVM (amd64) ami-0c8c2cac012f5c89e Launch Stack
    HVM (arm64) ami-0648c883cce76317d Launch Stack
    eu-west-3 HVM (amd64) ami-051ffc5aa558b7dbd Launch Stack
    HVM (arm64) ami-09fbfd5fecef7d8b8 Launch Stack
    me-south-1 HVM (amd64) ami-0626b5b075b99e56e Launch Stack
    HVM (arm64) ami-046074287f4897af5 Launch Stack
    sa-east-1 HVM (amd64) ami-00e0b40cd550d276e Launch Stack
    HVM (arm64) ami-03196c2879479f477 Launch Stack
    us-east-1 HVM (amd64) ami-0ba1aac16966efa09 Launch Stack
    HVM (arm64) ami-09df8c9c42fa10a7e Launch Stack
    us-east-2 HVM (amd64) ami-0a77adb1004715426 Launch Stack
    HVM (arm64) ami-00f11f5fd599ebf6e Launch Stack
    us-west-1 HVM (amd64) ami-0c3ba8b8a957c11f4 Launch Stack
    HVM (arm64) ami-094b34e70a7b8f917 Launch Stack
    us-west-2 HVM (amd64) ami-01d91e07373a1216b Launch Stack
    HVM (arm64) ami-05102546fbdb5a9d0 Launch Stack

    AWS China AMIs maintained by Giant Swarm

    The following AMIs are not part of the official Flatcar Container Linux release process and may lag behind the official releases.


    CloudFormation will launch a cluster of Flatcar Container Linux machines with a security group and an autoscaling group.

    Container Linux Configs

    Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the docs to learn about the supported features.

    You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or via the EC2 API.

    As an example, this Butane YAML config will start an NGINX Docker container:

    variant: flatcar
    version: 1.0.0
    systemd:
      units:
        - name: nginx.service
          enabled: true
          contents: |
            [Unit]
            Description=NGINX example
            After=docker.service
            Requires=docker.service
            [Service]
            ExecStartPre=-/usr/bin/docker rm --force nginx1
            ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host nginx:1
            ExecStop=/usr/bin/docker stop nginx1
            [Install]
            WantedBy=multi-user.target

    Transpile it to Ignition JSON:

    cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
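
    If you want to boot the resulting ignition.json without CloudFormation, it can be passed as “User Data” when launching an instance. A minimal sketch with the AWS CLI, assuming the Alpha amd64 AMI for us-east-1 from the table above and a placeholder security group ID:

    aws ec2 run-instances --region us-east-1 \
      --image-id ami-02cd2a051f34e677f \
      --instance-type t3.medium \
      --security-group-ids sg-0123456789abcdef0 \
      --user-data file://ignition.json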

    Instance storage

    Ephemeral disks and additional EBS volumes attached to instances can be mounted with a .mount unit. Amazon’s block storage devices are attached differently depending on the instance type . Here’s the Container Linux Config to format and mount the first ephemeral disk, xvdb, on most instance types:

    variant: flatcar
    version: 1.0.0
    storage:
      filesystems:
        - device: /dev/xvdb
          format: ext4
          wipe_filesystem: true
          label: ephemeral
    systemd:
      units:
        - name: media-ephemeral.mount
          enabled: true
          contents: |
            [Mount]
            What=/dev/disk/by-label/ephemeral
            Where=/media/ephemeral
            Type=ext4
            [Install]
            RequiredBy=local-fs.target
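
    After boot you can verify that the filesystem was created and mounted; a quick check on the instance, assuming the unit above:

    lsblk -f /dev/xvdb          # should show an ext4 filesystem labelled "ephemeral"
    findmnt /media/ephemeral    # should list the mount created by media-ephemeral.mount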

    For more information about mounting storage, Amazon’s own documentation is the best source. You can also read about mounting storage on Flatcar Container Linux.

    Adding more machines

    To add more instances to the cluster, just launch more with the same Container Linux Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.

    SSH to your instances

    Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the core user instead of root and doesn’t use a password for authentication. You’ll need to add one or more SSH keys via the AWS console or add keys/passwords via your Container Linux Config in order to log in.
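
    If you prefer the command line over the console, an existing public key can be registered with EC2 roughly as follows (a sketch; the key name is arbitrary, the key path is an assumption, and fileb:// is AWS CLI v2 syntax):

    aws ec2 import-key-pair --key-name flatcar-testing \
      --public-key-material fileb://$HOME/.ssh/id_ed25519.pub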

    To connect to an instance after it’s created, run:

    ssh core@<ip address>

    Multiple clusters

    If you would like to create multiple clusters you will need to change the “Stack Name”. You can find the direct template file on S3.

    Manual setup

    TL;DR: launch three instances of ami-02cd2a051f34e677f (amd64) in us-east-1 with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same “User Data” for each host. SSH uses the core user and you have etcd and Docker to play with.

    Creating the security group

    You need to open ports 2379, 2380, 4001 and 7001 between the servers in the etcd cluster. Step-by-step console instructions are below; a rough AWS CLI equivalent follows the list.

    Note: This step is only needed once

    First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.

    1. Go to the security group page in the EC2 console.
    2. Click “Create Security Group”
      • Name: flatcar-testing
      • Description: Flatcar Container Linux instances
      • VPC: No VPC
      • Click: “Yes, Create”
    3. In the details of the security group, click the Inbound tab
    4. First, create a security group rule for SSH
      • Create a new rule: SSH
      • Source: 0.0.0.0/0
      • Click: “Add Rule”
    5. Add two security group rules for etcd communication
      • Create a new rule: Custom TCP rule
      • Port range: 2379
      • Source: type “flatcar-testing” until your security group auto-completes. Should be something like “sg-8d4feabc”
      • Click: “Add Rule”
      • Repeat this process for port range 2380, 4001 and 7001 as well
    6. Click “Apply Rule Changes”
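
    For reference, here is a rough AWS CLI equivalent of the console steps above (a sketch that assumes the default VPC; the returned group ID is reused so cluster members can reach each other on the etcd ports):

    SG_ID=$(aws ec2 create-security-group --group-name flatcar-testing \
      --description "Flatcar Container Linux instances" \
      --query GroupId --output text)
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
      --protocol tcp --port 22 --cidr 0.0.0.0/0
    for port in 2379 2380 4001 7001; do
      aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
        --protocol tcp --port "$port" --source-group "$SG_ID"
    done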

    Launching a test cluster

    We will be launching three instances, with a few parameters in the User Data, and selecting our security group.

    • Open the quick launch wizard to boot: Alpha ami-02cd2a051f34e677f (amd64), Beta ami-0620c21abe172ccd7 (amd64), or Stable ami-0ba1aac16966efa09 (amd64)
    • On the second page of the wizard, launch 3 servers to test our clustering
      • Number of instances: 3, “Continue”
      • Paste your Ignition JSON config into the “User Data” field, “Continue”
    • Storage Configuration, “Continue”
    • Tags, “Continue”
      • Create Key Pair: Choose a key of your choice; it will be added in addition to any keys configured via your Ignition config, “Continue”
    • Choose one or more of your existing Security Groups: “flatcar-testing” as above, “Continue”
    • Launch!

    Installation from a VMDK image

    One possible way to install is to import the generated Flatcar VMDK image as a snapshot. The image file will be at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2. Make sure you download the signature (it’s available at https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig) and check it before proceeding.

    $ wget https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2
    $ wget https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    $ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
    gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
    gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
    gpg:                using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
    gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]

    Then, follow the instructions in Importing a Disk as a Snapshot Using VM Import/Export. You’ll need to upload the uncompressed vmdk file to S3.

    After the snapshot is imported, you can go to “Snapshots” in the EC2 dashboard, and generate an AMI image from it. To make it work, use /dev/sda2 as the “Root device name” and you probably want to select “Hardware-assisted virtualization” as “Virtualization type”.
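
    The same flow can be scripted with the AWS CLI; a sketch under the assumption of an existing S3 bucket (the bucket name and snapshot ID are placeholders, and the import task must report “completed” before the snapshot can be registered):

    bunzip2 flatcar_production_ami_vmdk_image.vmdk.bz2
    aws s3 cp flatcar_production_ami_vmdk_image.vmdk s3://my-import-bucket/
    aws ec2 import-snapshot --description "Flatcar VMDK" \
      --disk-container "Format=VMDK,UserBucket={S3Bucket=my-import-bucket,S3Key=flatcar_production_ami_vmdk_image.vmdk}"
    aws ec2 describe-import-snapshot-tasks      # wait for "completed" and note the SnapshotId
    aws ec2 register-image --name flatcar-custom --architecture x86_64 \
      --virtualization-type hvm --root-device-name /dev/sda2 \
      --block-device-mappings "DeviceName=/dev/sda2,Ebs={SnapshotId=snap-0123456789abcdef0}"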

    Using Flatcar Container Linux

    Now that you have a machine booted it is time to play around. Check out the Flatcar Container Linux Quickstart guide or dig into more specific topics.

    Known issues


    Terraform

    The aws Terraform Provider allows you to deploy machines in a declarative way. Read more about using Terraform and Flatcar here.

    The following Terraform v0.13 module may serve as a base for your own setup. It will also take care of registering your SSH key with AWS EC2 and managing the network environment with Terraform.

    You can clone the setup from the Flatcar Terraform examples repository or create the files manually as we go through them and explain each one.

    git clone
    # From here on you could directly run it, TLDR:
    cd aws
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    # Edit the server configs or just go ahead with the default example
    terraform plan
    terraform apply

    Start with a main.tf file that contains the main declarations:

    terraform {
      required_version = ">= 0.13"
      required_providers {
        ct = {
          source  = "poseidon/ct"
          version = "0.7.1"
        }
        template = {
          source  = "hashicorp/template"
          version = "~> 2.2.0"
        }
        null = {
          source  = "hashicorp/null"
          version = "~> 3.0.0"
        }
        aws = {
          source  = "hashicorp/aws"
          version = "~> 3.19.0"
        }
      }
    }

    provider "aws" {
      region = var.aws_region
    }

    resource "aws_vpc" "network" {
      cidr_block = var.vpc_cidr

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_subnet" "subnet" {
      vpc_id     = aws_vpc.network.id
      cidr_block = var.subnet_cidr

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_internet_gateway" "gateway" {
      vpc_id = aws_vpc.network.id

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_route_table" "default" {
      vpc_id = aws_vpc.network.id

      route {
        cidr_block = "0.0.0.0/0"
        gateway_id = aws_internet_gateway.gateway.id
      }

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_route_table_association" "public" {
      route_table_id = aws_route_table.default.id
      subnet_id      = aws_subnet.subnet.id
    }

    resource "aws_security_group" "securitygroup" {
      vpc_id = aws_vpc.network.id

      tags = {
        Name = var.cluster_name
      }
    }

    resource "aws_security_group_rule" "outgoing_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "egress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }

    resource "aws_security_group_rule" "incoming_any" {
      security_group_id = aws_security_group.securitygroup.id
      type              = "ingress"
      from_port         = 0
      to_port           = 0
      protocol          = "-1"
      cidr_blocks       = ["0.0.0.0/0"]
    }

    resource "aws_key_pair" "ssh" {
      key_name   = var.cluster_name
      public_key = var.ssh_keys.0
    }

    data "aws_ami" "flatcar_stable_latest" {
      most_recent = true
      owners      = ["aws-marketplace"]

      filter {
        name   = "architecture"
        values = ["x86_64"]
      }

      filter {
        name   = "virtualization-type"
        values = ["hvm"]
      }

      filter {
        name   = "name"
        values = ["Flatcar-stable-*"]
      }
    }

    resource "aws_instance" "machine" {
      for_each      = toset(var.machines)
      instance_type = var.instance_type
      user_data     = data.ct_config.machine-ignitions[each.key].rendered
      ami           = data.aws_ami.flatcar_stable_latest.image_id
      key_name      = aws_key_pair.ssh.key_name

      associate_public_ip_address = true
      subnet_id                   = aws_subnet.subnet.id
      vpc_security_group_ids      = [aws_security_group.securitygroup.id]

      tags = {
        Name = "${var.cluster_name}-${each.key}"
      }
    }

    data "ct_config" "machine-ignitions" {
      for_each = toset(var.machines)
      content  = data.template_file.machine-configs[each.key].rendered
    }

    data "template_file" "machine-configs" {
      for_each = toset(var.machines)
      template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")

      vars = {
        ssh_keys = jsonencode(var.ssh_keys)
        name     = each.key
      }
    }

    Create a variables.tf file that declares the variables used above:

    variable "machines" {
      type        = list(string)
      description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
    }

    variable "cluster_name" {
      type        = string
      description = "Cluster name used as prefix for the machine names"
    }

    variable "ssh_keys" {
      type        = list(string)
      description = "SSH public keys for user 'core'"
    }

    variable "aws_region" {
      type        = string
      default     = "us-east-2"
      description = "AWS Region to use for running the machine"
    }

    variable "instance_type" {
      type        = string
      default     = "t3.medium"
      description = "Instance type for the machine"
    }

    variable "vpc_cidr" {
      type    = string
      default = "172.16.0.0/16"
    }

    variable "subnet_cidr" {
      type    = string
      default = "172.16.0.0/24"
    }

    An outputs.tf file shows the resulting IP addresses:

    output "ip-addresses" {
      value = {
        for key in var.machines :
        "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
      }
    }

    Now you can use the module by declaring the variables and a Container Linux Configuration for a machine. First create a terraform.tfvars file with your settings:

    cluster_name           = "mycluster"
    machines               = ["mynode"]
    ssh_keys               = ["ssh-rsa AA... user@example.com"]

    The machine name listed in the machines variable is used to retrieve the corresponding Container Linux Config. For each machine in the list, you should have a cl/machine-NAME.yaml.tmpl file with a corresponding name.

    For example, create the configuration for mynode in the file cl/machine-mynode.yaml.tmpl (the SSH key used there is not strictly necessary, since we already set it as a VM attribute):

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys: ${ssh_keys}
    storage:
      files:
        - path: /home/core/works
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              set -euo pipefail
              # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
              hostname="$(hostname)"
              echo My name is ${name} and the hostname is $${hostname}

    Finally, run Terraform v0.13 as follows to create the machine:

    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    terraform init
    terraform apply

    Log in via ssh core@<ip address> with the printed IP address (maybe add -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null).

    When you make a change to machine-mynode.yaml.tmpl and run terraform apply again, the machine will be replaced.
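
    You can preview the replacement before applying it; Terraform plans the instance for replacement because its user_data changed:

    terraform plan    # the aws_instance is shown as "must be replaced" due to the new user_data
    terraform apply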

    You can find this Terraform module in the repository for Flatcar Terraform examples .