Intro: HashiCorp `Packer`

The problem space…

In today’s modern application development process an application has many homes: local development, (Docker) containers, on-premise Linux servers, maybe even Unix mainframes for production. Often an application is developed in a container, then deployed to a cloud compute instance that is ‘built like’ the container. Then someone, somewhere, has to move the application to production and ensure it still works correctly. Boiling this problem down to its core, we see it is a problem of creating machine images that match a desired state. Docker container? Machine image. On-premise Linux integration server? Machine image. Production cloud host? Again, machine image. So exactly how are we supposed to create matching machine images across such a wide range of underlying systems?

Hashi-who? Pack-what?

HashiCorp is a development and management tool publisher, most famous for their Infrastructure as Code (IaC) tool, Terraform. Packer is one of the tools available from them. Created specifically to ease the creation of machine images, Packer is super easy to learn and has a very low barrier to entry. Containers, Linux machines, and even VirtualBox and VMware images can all be created with Packer. In one solid day you can be publishing machine images like a pro!

Image from Brett Jordan @ Unsplash: https://unsplash.com/@brett_jordan

Learning Curve

As stated above, the learning curve for Packer takes one solid day. The packer help CLI command returns 6 options. 6! build, console, fix, inspect, validate, and version. Pretty self-explanatory, right?! Try that with any other CLI application and scrolling up and down becomes a way of life. Moving on to the configuration files: they are plain text JSON, not even HCL, just plain JSON. Inside the configuration files are what I like to call three ‘top level concepts’ (a minimal skeleton follows the list):

  • Builders: Who is building the machine image? Think of this as similar to the build stage in a CI/CD pipeline, but much more versatile. AWS, Docker, GCP, 1&1, OpenStack, Oracle, VMware, and even custom builders are available. Check out the complete list over on the docs.
  • Provisioners: This part of the configuration determines what is used to install dependencies, update the core OS, create users, set file permissions, and perform other configuration processes INSIDE the image.
  • Post-Processors (optional): These run after the machine image is created, so additional commands can be executed. Upload the image to an artifact repository for storage, re-package a VM from VMware to VirtualBox, run build-time reports, and perform other post-build actions.

That is it; three main concepts, and one of them is optional!
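
To see all three top level concepts in one place, here is a minimal sketch of a complete template. It deliberately uses the built-in file builder, shell-local provisioner, and checksum post-processor because they run locally with no cloud credentials; the file names and echo text are illustrative only:

{
    "builders": [{
        "type": "file",
        "content": "placeholder artifact",
        "target": "output/artifact.txt"
    }],
    "provisioners": [{
        "type": "shell-local",
        "inline": ["echo 'provisioning steps run here'"]
    }],
    "post-processors": [{
        "type": "checksum",
        "checksum_types": ["sha256"]
    }]
}

Save it as, say, skeleton.json, then packer validate skeleton.json and packer build skeleton.json complete the workflow.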

Examples

Build a local Docker image and tag it for an image repository.

{
    "builders": [{
        "commit": true,
        "image": "ubuntu:16.04",
        "type": "docker"
    }],
    "provisioners": [
        {
            "type": "shell",
            "inline": [
                "apt-get update -y && apt-get install -y python python-dev"
            ]
        }
    ],
    "post-processors": [
        {
            "repository": "example-ubuntu-16.04-updated",
            "tag": "latest",
            "type": "docker-tag"
        }
    ]
}

In the above Packer configuration, Packer pulls the Ubuntu 16.04 Docker image from Docker Hub via the builders section. The shell provisioner then updates the system and installs Python; now it is ready for your Flask or Django app! Finally, the docker-tag post-processor tags the committed image for your image repository.
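
One caveat: docker-tag only tags the image locally; it does not push anything. To push as well, Packer chains post-processors by nesting them in an array, so the tagged artifact flows into a docker-push step. A minimal sketch that would replace the post-processors section above (assuming you are already logged in to your registry):

    "post-processors": [
        [
            {
                "type": "docker-tag",
                "repository": "example-ubuntu-16.04-updated",
                "tag": "latest"
            },
            {
                "type": "docker-push"
            }
        ]
    ]

With the template saved as, say, docker.json, packer validate docker.json followed by packer build docker.json runs the whole build.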

Photo by chuttersnap on Unsplash

Build an image from a source AWS AMI, update the OS, and save the result as a new AMI.

{
    "variables": {
      "aws_access_key": "",
      "aws_secret_key": ""
    },
    "builders": [{
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-east-1",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      },
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "packer-aws-ami-{{timestamp}}"
    }],
    "provisioners": [{
        "type": "shell",
        "inline": [
            "sleep 30",
            "sudo apt-get update",
            "apt-get install mysql-server libmysqlclient-dev"
        ]
    }]
  }

In the configuration above we use local AWS credentials to build an image based on Ubuntu 16.04. In the provisioners section the OS is updated and mysql-server is installed using the shell provisioner. That is all. You now have an EC2-based MySQL server image.
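
A quick aside on those empty variables: values can be supplied at build time with packer build -var 'aws_access_key=YOUR_KEY' aws.json (aws.json being whatever you named the template), or pulled from the environment inside the variables section itself. A small sketch, assuming your credentials live in the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables:

    "variables": {
        "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
        "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}"
    }

The env template function is only allowed in variable defaults, which keeps secrets out of the template file entirely.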

These are just two simple examples of what Packer can do. Imagine it as part of your CI/CD pipeline! It is even possible to build different images for different targets, with the same provisioners executed on each, at the same time! Parallel builds are an amazing advanced feature to look into as you dig into the full feature set; a rough sketch of the idea follows.
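
As a hedged illustration (the builder names and the source_ami value are placeholders, not real resources), a single template can declare several builders, and packer build will run them all in parallel, executing the shared provisioners against each image:

{
    "builders": [
        {
            "name": "local-docker",
            "type": "docker",
            "image": "ubuntu:16.04",
            "commit": true
        },
        {
            "name": "aws-ami",
            "type": "amazon-ebs",
            "region": "us-east-1",
            "source_ami": "ami-xxxxxxxx",
            "instance_type": "t2.micro",
            "ssh_username": "ubuntu",
            "ami_name": "packer-parallel-{{timestamp}}"
        }
    ],
    "provisioners": [{
        "type": "shell",
        "inline": ["echo the same provisioners run against every builder"]
    }]
}

When a step should run against only one target, the only and except options on a provisioner filter by builder name.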

Conclusion

All told, Packer fits a nice niche in the build process: creating the underlying machine image. From there a provisioner, like shell scripts, Ansible, or PowerShell, picks up and executes custom, application-specific commands. A fast, easy to understand, amazingly simple (and REPEATABLE) way to configure those sweet, sweet golden images. Now there is no reason for base images to be out of date and unpatched.

So what do you think? Can you see where Packer fits into your daily build cycle, or where it could simplify an effort-intensive process of creating images? Let me know your thoughts in the comments below.

