---
layout: post
title: A More Flexible Dockerfile for Rails
tag:
  - rails
  - docker
  - devops
---

One of my primary motivations for working with [Docker](https://www.docker.com/) was creating a single artifact that I could toss into any environment. It has been fantastic at this. I can throw together a simple Dockerfile that will build my [Rails](http://rubyonrails.org/) application as an image for production in about five minutes.

```
FROM ruby:2.3-alpine

ADD Gemfile* /app/
RUN apk add --no-cache --virtual .build-deps build-base \
 && apk add --no-cache postgresql-dev tzdata \
 && cd /app; bundle install --without test development \
 && apk del .build-deps

ADD . /app
RUN chown -R nobody:nogroup /app
USER nobody

ENV RAILS_ENV production

WORKDIR /app
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "-p", "8080"]
```

Except now, when I need to run the application's test suite, I do not have the dependencies I need. That Dockerfile might look something like this:

```
FROM ruby:2.3-alpine

RUN apk add --no-cache build-base postgresql-dev tzdata

ADD Gemfile* /app/
RUN cd /app; bundle install

ADD . /app
RUN chown -R nobody:nogroup /app
USER nobody

WORKDIR /app
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "-p", "8080"]
```

Many people decide to include both of these Dockerfiles in their repository as Dockerfile and Dockerfile.dev. This works perfectly fine. But now we have a production Dockerfile that never gets used during development. Of course, it goes through at least one staging environment (hopefully), but it would be nice if we had a little more testing against it.

Much like Docker gives us a single artifact to move from system to system, I wanted a single Dockerfile shared between all environments. Luckily, Docker provides us with [build arguments](https://docs.docker.com/engine/reference/builder/#/arg). A build argument allows us to specify a variable when building the image and then use that variable inside our Dockerfile.

In our current Rails Dockerfile, we have two primary differences between our environments:

- The gem groups that are installed
- The environment that the application runs as

Bundler's [`BUNDLE_WITHOUT`](http://bundler.io/man/bundle-config.1.html#LIST-OF-AVAILABLE-KEYS) allows us to specify the gem groups to skip via an environment variable, making both of these differences resolvable through environment configuration. Using this, our shared Dockerfile could look like this:

```
FROM ruby:2.3-alpine

ARG BUNDLE_WITHOUT=test:development
ENV BUNDLE_WITHOUT ${BUNDLE_WITHOUT}

ADD Gemfile* /app/
RUN apk add --no-cache --virtual .build-deps build-base \
 && apk add --no-cache postgresql-dev tzdata \
 && cd /app; bundle install \
 && apk del .build-deps

ADD . /app
RUN chown -R nobody:nogroup /app
USER nobody

ARG RAILS_ENV=production
ENV RAILS_ENV ${RAILS_ENV}

WORKDIR /app
CMD ["bundle", "exec", "rails", "s", "-b", "0.0.0.0", "-p", "8080"]
```

The secret sauce here is `ARG BUNDLE_WITHOUT=test:development`. Running `docker build -t rails-app .` will use the default value for the `BUNDLE_WITHOUT` build argument, `test:development`, and build a production image. And if we specify the appropriate build arguments, we can generate an image suitable for development:

```
docker build -t rails-app --build-arg BUNDLE_WITHOUT= --build-arg RAILS_ENV=development .
```

This generates our Docker image with all test and development dependencies available.
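With those build arguments baked in, the development image can also run the test suite directly. The exact command depends on your project; as a rough sketch, assuming the image was tagged `rails-app` as above and the suite runs through `rake`:

```
docker run --rm -e RAILS_ENV=test rails-app bundle exec rake test
```

Swap in `rspec` (or whatever your project uses), and note that I am glossing over the database connection the tests will likely need.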
Typing out those build arguments for every development build would get pretty tedious, so we can use docker-compose to make it easier:

```
version: '2'
services:
  app:
    build:
      context: .
      args:
        - BUNDLE_WITHOUT=
        - RAILS_ENV=development
    links:
      - database
    ports:
      - "3000:8080"
    env_file:
      - .env
    volumes:
      - .:/app
    tty: true
    stdin_open: true
```

Now, `docker-compose up -d` is all we need in development to both build the image and launch the application. Finally, we have a single Dockerfile that can be used to build an image for any of our application's needs.

Of course, there are some trade-offs. For example, build time in development will suffer in some cases. But I have found maintaining a single Dockerfile to be worth these costs.

Have another way to deal with this issue? Please share!
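P.S. The same `app` service handles one-off commands too, so the test suite can run through compose with the database link already wired up. Again, only a sketch; adjust the command to whatever your project actually uses:

```
docker-compose run --rm -e RAILS_ENV=test app bundle exec rake test
```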