Clojure websocket server on Elastic Beanstalk

In this blog post, we'll show how to run a Clojure websocket server on AWS Elastic Beanstalk. We'll use HTTP Kit (a Ring-compatible HTTP and websocket server), Compojure for routing, and Docker to deploy our server to Elastic Beanstalk.

Building a basic websocket server in Clojure

Create a new Leiningen project by running in a terminal:

lein new app ws-eb-sample
cd ws-eb-sample

Edit the project.clj file to look like this:

(defproject ws-eb-sample "0.1.0-SNAPSHOT"
  :description "Sample websocket server on AWS Elastic Beanstalk"
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [compojure "1.3.0"]
                 [http-kit "2.1.16"]
                 [javax.servlet/servlet-api "2.5"]]

  :main ws-eb-sample.core

  :profiles {:uberjar {:aot :all}})

Now, we can write a really basic server. Edit the src/ws_eb_sample/core.clj file to look like this:

(ns ws-eb-sample.core
  (:require [compojure.core :refer [defroutes GET]]
            [compojure.handler :as handler]
            [org.httpkit.server :as ws]))

(defn handle-websocket [req]
  (ws/with-channel req con
    (println "Connection from" con)

    (ws/on-receive con #(ws/send! con (str "You said: " %)))

    (ws/on-close con (fn [status]
                       (println con "disconnected with status" (name status))))))

(defroutes routes
  (GET "/ws" [] handle-websocket))

(def application (handler/site routes))

(defn -main [& _]
  (let [port (-> (or (System/getenv "SERVER_PORT") "8080")
                 Integer/parseInt)]
    (ws/run-server application {:port port})
    (println "Listening for connections on port" port)))

Now, let's fire up the server! From the project's directory, run lein run. In your terminal, you should see:

Listening for connections on port 8080

Now, grab yourself a websocket client. I use the excellent Simple WebSocket Client Chrome extension, which I'll assume you're using for the rest of this section, but it should be very easy to follow along with any websocket client of your choosing.

Start Simple WebSocket Client, enter "ws://localhost:8080/ws" as the URL in the Server Location section, then click the Open button. Now, in the Request textarea, type a message like "Hello, world!", and click the Send button. In the Message Log section, you should see your "Hello, world!" message, followed almost immediately by the server's "You said: Hello, world!" response. Go ahead and click the Close button up in the Server Location section, and head back to your terminal. You should see something like:

Listening for connections on port 8080
Connection from #<AsyncChannel /0:0:0:0:0:0:0:1:8080<->/0:0:0:0:0:0:0:1:47874>
#<AsyncChannel /0:0:0:0:0:0:0:1:8080<->/0:0:0:0:0:0:0:1:47874> disconnected with status normal

OK, you now have a working websocket server!
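If you'd rather stay in Clojure, you can also poke the server from the REPL with a websocket client library. Here's a minimal sketch using gniazdo (an extra dependency, e.g. [stylefruits/gniazdo "1.2.0"], which is not part of this project's project.clj):

```clojure
;; Assumes the gniazdo websocket client library is on the classpath --
;; it is NOT part of the sample project's dependencies.
(require '[gniazdo.core :as ws])

;; Open a connection to the running server and print anything it sends back.
(def socket
  (ws/connect "ws://localhost:8080/ws"
              :on-receive #(println "server says:" %)))

(ws/send-msg socket "Hello, world!")
;; you should see: server says: You said: Hello, world!

(ws/close socket)
```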

You can see the entire sample project on GitHub.

Running in Docker

If you haven't already, go ahead and install Docker. When you've got it installed and running, let's create an image for our server! Go back to your terminal in the project directory and type:

lein clean && lein uberjar

If everything went well, you should see something like this:

Compiling ws-eb-sample.core
Created /home/jmglov/ws-eb-sample/target/ws-eb-sample-0.1.0-SNAPSHOT.jar
Created /home/jmglov/ws-eb-sample/target/ws-eb-sample-0.1.0-SNAPSHOT-standalone.jar

The ws-eb-sample-0.1.0-SNAPSHOT-standalone.jar file is the so-called uberjar, which is a runnable jar containing our server and all of its dependencies. Let's try it out by typing:

cd target/
java -jar ws-eb-sample-0.1.0-SNAPSHOT-standalone.jar

You should see the startup message and be able to connect to the server with your websocket client on ws://localhost:8080/ws as before.

Now, we'll create a file called Dockerfile in the target directory, with the following contents:

FROM java:7
ADD ws-eb-sample-0.1.0-SNAPSHOT-standalone.jar ws-eb-sample.jar
EXPOSE 80
CMD ["/usr/bin/java","-jar","/ws-eb-sample.jar"]

We can build a Docker image by running:

tar czf context.tar.gz Dockerfile ws-eb-sample-0.1.0-SNAPSHOT-standalone.jar
docker build -t 'clojure/ws-eb-sample:0.0.1-SNAPSHOT' - <context.tar.gz

You'll see something like this:

Sending build context to Docker daemon 5.678 MB
Sending build context to Docker daemon 
Step 0 : FROM java:7
 ---> 2711b1d6f3aa
Step 1 : ADD ws-eb-sample-0.1.0-SNAPSHOT-standalone.jar ws-eb-sample.jar
 ---> a3f7c7532dc4
Removing intermediate container a26dc08e4bcd
Step 2 : EXPOSE 80
 ---> Running in 8d5d20438db4
 ---> f2ae5a747885
Removing intermediate container 8d5d20438db4
Step 3 : CMD /usr/bin/java -jar /ws-eb-sample.jar
 ---> Running in 1cf4873b8611
 ---> d8deb4f7eb7a
Removing intermediate container 1cf4873b8611
Successfully built d8deb4f7eb7a

Now, you can run the Docker image in a new container:

docker run -p 8080:8080 'clojure/ws-eb-sample:0.0.1-SNAPSHOT'

Again, you should see the startup message and be able to connect to the server on ws://localhost:8080/ws .

Creating a Dockerisable file for Elastic Beanstalk

OK, so "Dockerisable" is not really a word, but I didn't want to type "a file containing the uberjar and Dockerfile from which Elastic Beanstalk will be able to build a Docker image". Which I've now typed, but I like my new word so much I'm leaving it in.

Still in the target directory, run:

zip ws-eb-sample.zip Dockerfile ws-eb-sample-0.1.0-SNAPSHOT-standalone.jar

And now we've got something that Elastic Beanstalk can Dockerise!

Deploying to Elastic Beanstalk

First, let's create an Elastic Beanstalk application and environment:

  1. From the Elastic Beanstalk console, click the Create New Application button.
  2. On the Application Information page, give your application a name, then click the Next button.
  3. On the Environment Type page, select Web Server as your environment tier, Docker as your predefined configuration, and Load balancing, autoscaling as your environment type, then click the Next button.
  4. On the Application Version page, select Upload your own, then click the Choose file button and select the file that you prepared previously. The default deployment settings are fine, so click the Next button. The file you selected will be uploaded before the console proceeds to the next page, which may take a little while, depending on the uplink speed of your Internet connection.
  5. On the Environment Information page, enter something like "sample" as your environment name, then pick something unique as your environment URL. It's probably a good idea to click the Check availability button to ensure that you can get the URL before proceeding. Click the Next button once you've chosen a unique URL.
  6. On the Additional Resources page, simply click the Next button.
  7. On the Configuration Details page, you can just accept the defaults and click the Next button.
  8. On the Environment Tags page, click the Next button.
  9. On the Review page, click the Launch button.

Wait for the environment to be created and launched. This will take a few minutes. Once the environment is up, you can hit it with your web browser. You'll get an nginx 502 Bad Gateway message, because your server is listening on port 8080, but the load balancer is trying to connect to port 80. We could easily fix this in the load balancer, but for the sake of pedagogy, let's use Elastic Beanstalk's environment configuration.

First, we need to enable websocket connections through the load balancer, which is configured for HTTP by default.

  1. In the Elastic Beanstalk console, click on Configuration on the left-hand side of the environment dashboard.
  2. In the Network Tier section, click on the cogwheel next to Load Balancing.
  3. In the Load Balancer section, simply change the Protocol to TCP, then click the Save button at the bottom of the page.

Elastic Beanstalk will take a couple of minutes to apply the change to the load balancer. When it's done, you can add the bit of configuration to make your server listen on the correct port.

Remember this bit in the server's core.clj file?

(let [port (-> (or (System/getenv "SERVER_PORT") "8080")
               Integer/parseInt)]
  (ws/run-server application {:port port})
  (println "Listening for connections on port" port))

This allows you to run your server on any port you want by setting an environment variable named SERVER_PORT. Elastic Beanstalk makes this really easy to do:

  1. In the Elastic Beanstalk console, click on Configuration again, then click on the cogwheel next to Software Configuration.
  2. In the Environment Properties section, scroll down to the bottom and add a new property named SERVER_PORT with the value 80.
  3. Click the Save button and wait for Elastic Beanstalk to redeploy your environment.

Once the environment is deployed, try it out in your websocket client by connecting to ws:// followed by your environment's URL, with /ws appended!

Next steps

This has been a very manual process, but I wanted to share my discoveries so anyone else out there who is trying to figure out how to run a websocket server whilst still utilising the awesome power of Elastic Beanstalk won't have to bang her head against a wall for a day, like I did.

In my Github repo, I've added a dockerise script to automate some of the drudgery, and I'm planning to add Docker images to the lein-beanstalk Leiningen plugin that I use for Tomcat deployments, so I'll add a comment here once that is done.

If you have any questions about any of this stuff, just leave a comment, and I'll help you out if I can.

Building a Mobile Game Backend with Clojure and AWS

Nuday Games is a Stockholm-based game company who make a mobile game called Rock Science, a turn-based rock and roll trivia game. The backend is written in Clojure, and runs on Amazon Web Services. This post will give an overview of our architecture and discuss some things we've learned about Clojure, AWS, and distributed systems in general.

The Rock Game of the Century

Why Clojure?

Before we get into any technical details, let's deal with the elephant in the room. When writing the backend for a mobile game in 2014, why in the world would we choose a dialect of an esoteric language only useful for artificial intelligence research in academia that was invented in 1958?

Well, even if we believed any of the stuff in that extremely loaded question, Clojure is not your grand-daddy's Lisp (unless you happen to be Rich Hickey's grandchild, of course). It is a modern take on a classic which runs on the Java Virtual Machine (and the .Net Common Language Runtime, and your web browser, though we'll have to come back to these in a later blog post). Say what you will about Java the language, but you cannot deny that the JVM (and the ecosystem around it) has matured into a fantastic piece of technology. Clojure not only runs on the JVM, it also has great native interoperability with Java, which means that you as a developer have full access to not only the Java standard library, but also any other Java library you can get your hands on. There are some serious Java libraries out there that have proven themselves at absurd scale, and to ignore them would be rather like cutting off your nose to spite your face.

But the advantages of Clojure don't stop with the JVM--otherwise we'd just use Java. Clojure is first and foremost a functional programming language which defaults to immutability. As fundamental laws of physics are beginning to supersede Moore's Law, at least for current methods of producing integrated circuits, chip designers have turned to packing more cores into processors, rather than just making faster and faster cores. As pointed out in "The Free Lunch is Over", this means that programs that want to run fast and do a lot need to embrace concurrency. Concurrency turns out to be very difficult to get right when you have multiple bits of code trying to update values.

Luckily, this is where Clojure really shines. Immutability means that you can have as much concurrency as you want with no fear of strange interleaving of reads and writes, because there are no writes. Tragically, the real world is not a mathematical proof, so we occasionally have to mutate something, but Clojure makes this safe and easy with Software Transactional Memory and agents. Reasoning about concurrency in Clojure is usually quite trivial, especially compared to multi-threaded programming in Java, which basically requires one to read, thoroughly understand, and constantly refer to "Java Concurrency in Practice" (384 pages).

Finally, as a dynamically typed Lisp, Clojure lets you prototype, sketch, refactor, and otherwise rewrite code extremely rapidly. With the REPL, you can even do this without having to write a bunch of code in a file and fire up a debugger. The REPL is a really convenient way to explore new libraries or perform one-off systems administration tasks.

For a startup like us, being able to produce high-quality code quickly with a small development team is of vital importance. And then change that code to support a completely new way of thinking next week. Clojure and Amazon Web Services together enable this kind of rapid development, for all the reasons Paul Graham laid out in his "Beating the Averages" essay.

Clojure gives us one last benefit, which is finding great engineers to join our small development team. In general, people attracted to a language like Clojure tend to be smarter and more inquisitive than the average bear. People who understand Lisp and functional programming are able to produce lots of value with few lines of code. And that is what we need!

Why AWS?

As noted above, we have a small team of engineers that needs to get stuff done in a hurry. One thing that tends to be a massive time-sink for a backend team is the care and feeding of MySQL, RabbitMQ, Cassandra, etc. Every hour spent figuring out that you need to increase the number of file descriptors in a config file or tune JVM garbage collection settings to avoid stopping the world with a full GC is an hour not spent working on the next great feature for your users.

We don't have the time to become experts on all the technologies we need to build a highly available, performant system, but Amazon does. Using AWS means we can depend on things to just work, and if we need more servers or better performance, we can get it right away.


Our game has all the stuff you'd expect:

  • Frontend (iOS, Android coming soon)
  • Load balancers
  • HTTP servers
  • RESTful API
  • Databases

Since a picture is worth 1000 words, here's some programmer art:

Rock Science architecture

Everything in this diagram runs on AWS. The grey boxes are things that we have to manage ourselves, whilst the green ones are fully managed services provided by AWS.

A typical flow through the system is thus:

  1. A player does something in the game app on her phone, like answer the last round of questions in a game.
  2. The app resolves the DNS name of our API.
  3. Route 53 responds with the IP addresses of our frontend load balancers, and the app's resolver library picks one (DNS round robin).
  4. The app makes an HTTP request to one of our frontend load balancers, which are nginx reverse proxies running on EC2 instances.
  5. The frontend load balancer inspects the request, and proxies it on either to S3 / CloudFront (static content) or our REST service (dynamic content).
  6. Assuming the request was proxied to our REST service, it enters the Elastic Beanstalk cloud and hits an Elastic Load Balancer.
  7. The ELB sends the request on to a Tomcat server, running on an EC2 instance that is fully managed by Elastic Beanstalk.
  8. The HTTP request is handled by our Clojure webservice.
  9. The webservice grabs some data from a Relational Database Service-managed MySQL instance and/or a DynamoDB table.
  10. The webservice publishes a "game ended" message on the Simple Queue Service event bus.
  11. The webservice sends a push notification over Simple Notification Service to the opponent's phone, letting him know that the game is over.
  12. The webservice returns an HTTP response with the outcome of the game, which the app renders and displays to the player.
  13. A Clojure process listening for "game ended" events on the SQS event bus picks up the message, updates some statistics in DynamoDB, and generates a new JSON leaderboard, which it writes to S3.
  14. The player looks at the leaderboard in the app, thus causing an HTTP request to go to the frontend load balancer, which proxies it directly to S3 for the static JSON file containing the leaderboard, which her app then renders.

There are other Clojure processes that run asynchronously, doing things like killing games that have been abandoned, rotating unaccepted challenges to new players, etc.

Overview of AWS components

An Elastic Beanstalk application is the heart and soul of the backend. The application has several environments, each running our Clojure web app on a fleet of Tomcat servers behind an Elastic Load Balancer. Deployments are managed through Elastic Beanstalk, so with the lein beanstalk plugin for Leiningen, the standard Clojure build tool, a deployment is as simple as saying lein beanstalk deploy production (we have a patch to lein beanstalk that is required to do zero-downtime deployments with the new rolling version deployment feature of Elastic Beanstalk). The "elastic" part of Elastic Beanstalk is really the killer app, though. Not only does the Elastic Load Balancer scale up and down to handle incoming traffic, Elastic Beanstalk maintains an Auto Scaling group, which can add or remove EC2 instances based on conditions you define (we use CPU utilisation as our trigger, but you can also trigger on inbound and outbound network rates, disk IOPS or throughput, latency, or healthy/unhealthy hosts).

We use additional environments in the Elastic Beanstalk application for our development and staging environments, and occasionally spin up an environment for some load testing. We use Codeship to automatically deploy to some environments, as I've previously written about.

We use the Relational Database Service (RDS) to host our MySQL database, which gives us automatic snapshots and read replicas, and makes it easy to scale up or down based on the load we're putting on the database.

We've also been moving lots of data to DynamoDB. It turns out that a lot less of your data than you probably realise is actually relational in nature, and if you can identify chunks that aren't, DynamoDB has some fantastic properties. The most important one for us is that we can scale tables individually (and in fact, scale reads and writes separately), so we're not forced into vertically scaling our entire MySQL database due to a couple of high-traffic tables. Some examples of where we use DynamoDB tables are our session store, leaderboards, and push token management.
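As a taste of what a DynamoDB-backed session store might look like from Clojure, here is a hedged sketch using Amazonica (the AWS library mentioned later in this post). The "sessions" table name and its attributes are hypothetical, and credentials are assumed to come from the environment:

```clojure
(require '[amazonica.aws.dynamodbv2 :as ddb])

;; Hypothetical table and attributes -- not the actual Rock Science schema.
;; Amazonica picks up AWS credentials from the usual environment/profile chain.
(ddb/put-item :table-name "sessions"
              :item {:session-id "abc-123"
                     :player-id  42})

(ddb/get-item :table-name "sessions"
              :key {:session-id "abc-123"})
;; => a map like {:item {...}} containing the stored attributes
```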

Speaking of push notifications, Simple Notification Service is a fantastic abstraction that lets you send push notifications to iOS, Android (even in China!), Kindle, and Windows Phone devices without having to care which one your player is using. We also use SNS as part of our event bus, which I'll cover in my next blog post.

Types, what are they good for?

Despite my thinly veiled criticisms of static typing (I mean, in this blog post; in real life, I don't bother with the veil), I thought it was important to mention one of our key learnings from building a large system in Clojure: you better know what data you're mapping, filtering, and reducing.

While Clojure's dynamic typing makes building up code piece by piece a joy, compared to having to meet the demands of a strict type system just to try things out, in a running system, you will want a bit more precision. Prismatic's excellent Schema library lets you describe your data like this:

(require '[schema.core :as s])

(def EntityId
  (s/both s/Int
          (s/pred pos? 'pos?)))

(def Timestamp
  (s/both s/Int
          (s/pred #(>= % rockscience/epoch) 'ts?)))

These are two simple "types": EntityId is something that is both an Int (Schema's built-in notion of an integral number--including long and BigInteger) and is positive; and Timestamp is something that is both an Int and also more recent than the Rock Science epoch, as expressed in milliseconds since the Unix epoch.
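You can try such a schema straight away at the REPL with s/validate and s/check (the snippet below inlines EntityId so it stands alone, leaving out the rockscience/epoch-dependent Timestamp):

```clojure
(require '[schema.core :as s])

;; EntityId as defined above, made self-contained for the REPL.
(def EntityId
  (s/both s/Int (s/pred pos? 'pos?)))

(s/validate EntityId 42)   ;; => 42, the value passes through on success
(s/check EntityId 42)      ;; => nil, meaning valid
(s/check EntityId -1)      ;; => a description of the failing pos? predicate
```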

Schemas, being nothing more than Clojure data structures, can be used to build other schemas:

(def event-types #{:game-started :game-ended})
(def EventType (apply s/enum event-types))

(def Event
  {:type EventType
   :entity-id EntityId
   :timestamp Timestamp})

And furthermore, can be manipulated with all the usual Clojure functions:

(def ApiEventType
  (->> event-types
       (map ->SNAKE_CASE)
       (map str)
       (apply s/enum)))

(def ApiEvent
  (assoc Event :type ApiEventType))

->SNAKE_CASE, by the way, is part of the excellent camel-snake-kebab library, which converts camelCaseStuff to snake_case_stuff to :kebab-case-stuff (or CamelCaseStuff to SNAKE_CASE_STUFF, or any other permutation you can think of). This is great when taking data out of a database with a foo_id column name, putting it in a Clojure map with the :foo-id key, then finally writing it to a JSON object with the fooId key. In one of those eerie coincidences, the author of the camel-snake-kebab library was, unbeknownst to me, in the audience when I gave the talk this blog post is based on. :)
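A few REPL examples of those conversions, assuming camel-snake-kebab is on the classpath:

```clojure
(require '[camel-snake-kebab.core
           :refer [->SNAKE_CASE ->camelCaseString ->kebab-case-keyword]])

(->SNAKE_CASE :game-started)     ;; => :GAME_STARTED (keywords stay keywords)
(->camelCaseString :foo-id)      ;; => "fooId"
(->kebab-case-keyword "foo_id")  ;; => :foo-id
```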

But back to Schema. In addition to being a nice tool to document your data, you can also use it to document your functions:

(require '[schema.macros :as sm])

(sm/defn get-events :- [ApiEvent]
  [player-id :- EntityId
   ts :- Timestamp]
  (->> (db/get-events player-id ts)
       (map ->api-event)))

This says that the get-events function takes a player-id argument, which should conform to the EntityId schema, and a ts argument, which should conform to the Timestamp schema, and returns a list, for which each element should conform to the ApiEvent schema.

Schema can also make all those "should conform" statements "must conform". When we initialise our app, we do this:

(require '[schema.core :as s])
(s/set-fn-validation! true)

This tells the schema.macros/defn macro to actually validate the schemas at runtime. When a piece of data doesn't validate, an exception is thrown:

ExceptionInfo Value does not match schema: (not (#{:game-ended :game-started} a-clojure.lang.Keyword))
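If you prefer validation scoped to a block rather than switched on globally, Schema also offers with-fn-validation. A small sketch (using s/defn, which newer Schema versions provide in schema.core alongside the older schema.macros/defn):

```clojure
(require '[schema.core :as s])

;; s/defn is the schema-checking defn; checks only run when validation
;; is enabled.
(s/defn double-id :- s/Int
  [id :- s/Int]
  (* 2 id))

;; Validation enabled just for this scope:
(s/with-fn-validation
  (double-id 21))        ;; => 42

;; (s/with-fn-validation (double-id "21"))
;; => throws ExceptionInfo: input does not match schema
```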

You can also turn on validation for unit tests:

  (:require [clojure.test :refer :all]
            [schema.test :refer [validate-schemas]]))

(use-fixtures :once validate-schemas)

Generative testing with test.check would likely make this even more valuable.

Schema validation is really nice for dealing with input to REST endpoints, as I'll talk about next.

Framework alert!

We'll focus mainly on the REST webservice for the rest of this post, and cover the other bits of the system in a future post on event sourcing. This will be a lot to take in, but it is not that important that you understand the details. I just want to show you the shape of our solution for exposing a REST API.

The Clojure community has long preferred libraries over frameworks, but the code I am about to show you will smell a bit frameworky. There is not an inherent contradiction here, however, as all we're really doing is building a simpler language on top of the languages provided by Liberator and Compojure, which themselves build on the language provided by Ring, which builds on the Java Servlet API and standard I/O libraries. This is very much in the spirit of bottom-up application design as discussed in "Structure and Interpretation of Computer Programs", AKA the source of all (or at least much) Lisp and functional programming wisdom.

The key reason to discourage frameworks is that they try to please everyone and often end up pleasing 80% of the users who just want to get a basic web app (or whatever) out, but make slightly non-trivial things more difficult. However, when you are the only user of your framework, you already know the use cases you need to support, and are also free to make massive breaking changes whenever you discover a new use case that you didn't anticipate.

So the "framework" shown below is not anything we'd ever release as Open Source, because while it meets the needs of our very specific webservice, it is unlikely to meet yours without hacks or further indirection and abstraction. The beauty of Clojure is that it is very easy to whip up what amounts to a domain-specific language describing precisely one level of the problem you are trying to solve. And more importantly, thanks to the joys of dynamic typing, it is easy to make changes (even sweeping ones) to the DSL once you realise the domain was more or less complicated than you first realised.

As previously mentioned, our webservice is a Clojure application running in the Tomcat application server. We use Liberator, a library that was inspired by the Erlang Webmachine, to handle HTTP requests. Incoming requests are routed to Liberator handlers by Compojure. Routes look like this:

(ns insurrection.routes.core
  (:require [compojure.core :refer [defroutes GET POST PUT]]
            [insurrection.resource.get :as get]
            [insurrection.resource.post :as post]
            [insurrection.resource.put :as put]))

(defroutes rockscience-routes
  (POST "/games" [] post/game)
  (GET "/players/:playerId/profile" [] get/profile)
  (PUT "/players/:playerId/email" [] put/email))

These routes let a client create a new game, view a player's profile, or update a player's email address, respectively.

A resource ties a route to the Clojure function that will handle it. For example, the post/game resource looks like this:

(def game
  (post-resource :game view/create-game [:gamepackId :playerId :opponentPlayerId]
                 :optional-args [:seed]
                 :transform-fn {[:gamepackId :playerId :opponentPlayerId] ->EntityId
                                :seed #(when % (Long/valueOf %))}))

The view/create-game function has this signature:

(defn create-game [pack-id owner-id opponent-id & [seed]]
  ...)

So the game POST resource will somehow cause view/create-game to be called with the game pack ID, the ID of the player who created the game, the ID of her opponent, and an optional seed (used for the random number generator, which will be mentioned in a later post on event sourcing). The three ID arguments are picked up from either the HTTP query parameters or the JSON body of the request (thanks to the wrap-keyword-params and wrap-json-params Ring middleware), and then transformed by applying the ->EntityId function:

(def EntityId s/Int)

(defn ->int [v]
  (try (Integer/valueOf v)
       (catch Exception _ nil)))

(sm/defn ->EntityId :- EntityId
  [id]
  (->int id))

All that is going on here is that ->EntityId is adding validation (remember, sm/defn is the schema.macro version of defn, which validates its arguments and return value) to the ->int function, which itself returns either the integer value of its argument (which might be the string "123", for example), or nil if the conversion failed.

The optional seed parameter is easier to follow. It is just turned into a Long.

Note that any of the transformations may result in an exception being thrown. This is perfectly OK, because the POST resource catches the exception and returns a nice 500 using standard Liberator handlers, as you will see.

Now that we've seen a call of the post-resource function, let's take a look at what it actually does:

(defn post-resource
  [resource-name resource-fn args & {:keys [content-types exists-fn mutation-type] :as opts}]
  (lib/resource :allowed-methods [:post]
                :available-media-types default-media-types
                :available-charsets ["UTF-8"]
                :authorized? authorized?
                :exists? (if exists-fn
                           (make-exists? resource-name exists-fn args opts)
                           (constantly true))
                :known-content-type? #(check-content-type % (or content-types default-content-types))
                :post! (make-mutator (or mutation-type :create) resource-name resource-fn args opts)
                :handle-created (make-response-handler resource-name :created)))

lib/resource is Liberator's resource definition function. You can read about handling POSTs in the Liberator documentation, but the basic idea is that we register a :post! handler function that gets called by Liberator if the POST request is legal according to its decision graph.

In our case, the :post! handler function will be built by calling:

(make-mutator :create :game view/create-game [:gamepackId :playerId :opponentPlayerId] opts)

The opts on the end are the transformation functions we saw above. We won't go into detail on how the transformation works.

make-mutator returns a function which Liberator will use to handle the POST:

(defn make-mutator [type resource-name resource-fn args & [opts]]
  (fn [ctx]
    (let [params-str (params->str ctx args opts)
          {:keys [before-word error-word]} (mutator-types-dictionary type)]
      (log/infof "%s %s" before-word (log-str "%s" resource-name params-str))
      (try
        (when-let [resource (apply resource-fn (params->values ctx args opts))]
          {resource-name resource})
        (catch Exception e
          (log-error e (format "Failed to %s %s" error-word (log-str "%s" resource-name params-str)))
          {:exception e})))))

We're just doing some wrapping of the view/create-game function to ensure that:

  • We log "Creating game for gamepackId => 42, playerId => 123, opponentPlayerId => 456"
  • The function gets called with the correct arguments from the query parameters and/or JSON request body
  • Anything the function returns gets put in the Liberator context (more on this in a moment) under the :game key
  • If the function throws an exception, it gets logged and then put in the Liberator context

Since Clojure is functional and considers mutation a necessary evil at best, the way Liberator handler functions communicate with each other is through a context map. Each handler receives the context as its only argument, and anything it returns is merged into the context that is passed to the next handler (or decision function) in Liberator's state machine. The context contains not only stuff put there by handlers, but also the request itself, as a Ring request map.
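The context-threading idea is easy to see in miniature. This toy sketch (plain Clojure, not Liberator itself) mimics how each handler's return value is merged into the context that is passed along:

```clojure
;; Each handler takes the context map and returns a map of additions,
;; which gets merged into the context for the next handler.
(defn run-handlers [ctx handlers]
  (reduce (fn [ctx handler]
            (merge ctx (handler ctx)))
          ctx
          handlers))

(run-handlers {:request {:uri "/games"}}
              [(fn [_] {:game {:id 1}})
               (fn [ctx] {:seen-game? (contains? ctx :game)})])
;; => {:request {:uri "/games"}, :game {:id 1}, :seen-game? true}
```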

OK, so once the mutator function has done its mutating (which almost certainly means using Korma to put some data into a relational database, and Amazonica to put some data into a DynamoDB table or two and send a message on the SNS event bus), we have to let the HTTP client know what happened. That is covered by one of the :handle-created, :handle-ok, or :handle-not-found Liberator handlers that our post-resource function defined.

Those handlers are all built by another one of our functions:

(defn make-response-handler [resource-name & [status]]
  (let [status (or status :ok)]
    (fn [ctx]
      (let [exception (:exception ctx)
            resource (if exception
                       {:status false, :message (.getMessage exception)}
                       (resource-name ctx))
            resp (make-response resource)
            http-status (if exception
                          (http/status :internal-server-error)
                          (http/status status))]

        (ring-response (-> resp
                           (assoc :status http-status)
                           (assoc :cookies (:cookies ctx))))))))

The last function call is the important part: we make a Ring response (Ring is the Clojure web application library on top of which Liberator is built) containing either the value (always a map, in our case) of the :game key in the context, or if something went wrong, a map containing the message of the exception that was caught and stuffed into the context somewhere in the Liberator decision graph. We also manually set the HTTP status (this is a little icky, as we should let Liberator do this for us, but we have some legacy issues with the iOS app itself that require this hack) and then add any cookies that we found in the context (courtesy of the :authorized? Liberator handler, but we'll not get into that now).

The return value of the response handler, which is a Ring response map, will finally be subjected to the wrap-json-response and wrap-cookies Ring middleware, which turn the Clojure map in the response body into a JSON string, and write the cookies into the response headers.
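
Middleware like these are just functions that wrap a handler and return a new one. As a dependency-free sketch of the shape (using pr-str as a stand-in for a real JSON encoder, which the actual wrap-json-response middleware uses):

```clojure
;; A toy version of response-body middleware, not the real ring-json code:
;; it wraps a handler, calls it, then transforms the response body.
(defn wrap-stringify-body
  [handler]
  (fn [request]
    (let [response (handler request)]
      ;; real middleware would JSON-encode here; pr-str keeps this
      ;; sketch dependency-free
      (assoc response :body (pr-str (:body response))))))

(def app
  (wrap-stringify-body
   (fn [_request] {:status 200, :body {:status true, :message "ok"}})))

(app {})
;; => {:status 200, :body "{:status true, :message \"ok\"}"}
```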

Say what now?

This blog post has tried to show, for the most part at a very high level, how we're currently thinking about a game backend. There are certainly things that we're doing wrong, but we're optimistic that Clojure and AWS will allow us to correct past mistakes quickly, and not get bogged down in a legacy swamp.

If you have questions, suggestions, constructive criticism, or war stories to share, please feel free to do so in the comments.



Deploying to Elastic Beanstalk from your Continuous Integration system

Hopefully, you are using some sort of Continuous Integration tool to ensure the quality of your production software!

We here at Nuday Games use a fantastic hosted continuous integration service called Codeship. In using a hosted service, you get to avoid worrying about configuring and maintaining a CI system, but you do need to be careful about security.

The good news is that with a tiny bit of effort, one can enable secure deployments to an Elastic Beanstalk environment from Codeship (or any other CI service). This blog post will show you how!

Note: we are deploying a Clojure web application using the lein-beanstalk plugin for Leiningen, the leading Clojure build tool. You will almost certainly have to make slight adjustments if you are using another tool to do your Elastic Beanstalk deployments (such as the Beanstalker plugin for Maven), but this post should get you most of the way there!

AWS Identity and Access Management (IAM)

The first step is to create an AWS Identity and Access Management user which your Codeship deployment profile will use. I'll walk you through doing this in the AWS Console, but if you prefer to use another tool like the AWS Command Line Interface, you should be able to follow along easily.

So without further ado, let's get started!

Create a user and group

  1. Go to the IAM Dashboard in the AWS Console.
  2. Click on the Users link in the pane on the left of the window, then the Create New Users button at the top.
  3. In one of the text boxes under Enter User Names, type in your desired username. I recommend something like "codeship".
  4. Make sure that the Generate an access key for each User box is ticked, then click the Create button.
  5. A window will pop up with the security credentials for your new user. Click the Show User Security Credentials link and note down the Access Key and Secret Access Key, or click the Download Credentials button and save the credentials file somewhere safe (make sure to run chmod 600 or the equivalent on the file so no-one can snoop on your credentials!).

For the rest of this walk-through, I will use ACCESS_KEY to refer to your user's Access Key, and SECRET_KEY to refer to the Secret Access Key, so whenever you see those in the instructions below, be sure to replace them with your actual access and secret keys.

Now that you have a user, you need to add it to a group so you can assign it some permissions. Let's create the group now:

  1. Click on the Groups link in the pane on the left of the window, then the Create New Group button at the top.
  2. In the Group Name box, type in your desired group name. I recommend something like "CI" (Continuous Integration) so you can use the group for other CI tools like Jenkins, then click the Continue button.
  3. In the next step, select No Permissions and then click the Select button to continue. Don't worry, we'll be assigning some permissions soon, but that topic is complicated enough to deserve its own section.
  4. Click the Create Group button.
  5. Select the newly created group, then click the Add Users to Group button.
  6. Select your new user and then click the Add Users button.

Now that you have a user and group ready to receive some permissions, let's shift gears and set things up in your build tool.

Create Build Profile

Again, I'll use lein-beanstalk as an example in this post, but the basic principle should apply to your build tool of choice (here's how to set up credentials for Maven-based build tools like Beanstalker).

I'm assuming that you have an existing Elastic Beanstalk application with a staging or development environment created that you use for testing your application. I'll refer to this environment as "staging" hereafter.

I'm also assuming you've configured your build tool for deployment to your Elastic Beanstalk staging environment so that you can kick off a deployment locally using your normal AWS credentials. If you haven't done that yet, please refer to the documentation of your build tool (here are links to the docs for lein-beanstalk and Beanstalker). Please ensure that you can do a local deployment using your normal credentials before we begin, so you don't waste time debugging issues with the Codeship credentials that actually apply to your entire build tool or Elastic Beanstalk configuration.

Let's set up a profile for Codeship to use:

  1. Open your project.clj (or pom.xml, etc.) file in your text editor of choice.
  2. Create a profile to use the new credentials:

    (defproject myapp "1.0-SNAPSHOT"
      ;; all your usual config elided
      :profiles {;; existing profiles elided
                 :codeship {:aws {:access-key "ACCESS_KEY"
                                  :secret-key "SECRET_KEY"}}})
  3. Try a build with the new profile: lein with-profile +codeship beanstalk deploy staging.

The build should fail with some sort of permissions error when it tries to upload the war file to S3. That is perfectly OK, as we'll address permissions in the next section. In fact, running a local build with the codeship credentials is a great way to debug permissions issues, as the AWS exceptions usually tell you exactly what action and resource failed.

Write IAM policy document

Elastic Beanstalk itself uses a set of AWS services, and your build tool may require an additional permission or two. Again, as we're using lein-beanstalk as an example, your exact configuration may vary, albeit probably ever so slightly. I recommend you start out with the set of permissions described here, then run your build tool, note any permissions errors you get, adjust your IAM profile, and try again.

At the very least, your build tool will need to perform the following actions:

  • Upload the war file (or other artifact, if you are using one of the other Elastic Beanstalk services instead of Tomcat).
  • Create a new Elastic Beanstalk application version from the uploaded artifact.
  • Get the Cloud Formation template for your Elastic Beanstalk environment.
  • Get the uploaded artifact from S3.
  • Describe EC2 images to determine which AMI is used by your environment.
  • Get the patches Elastic Beanstalk needs to apply, based on the platform and application server that your environment uses.
  • Update the Elastic Beanstalk environment with the new application version.

lein-beanstalk performs the following actions in addition to the ones listed above:

  • Describe the environments belonging to your Elastic Beanstalk application to determine the currently active one.

To grant the permissions necessary to perform these actions, you will need to write an IAM policy document. If you like, you can use the Policy Generator to do this, though we'll write the policy by hand.

As per the documentation, "An IAM policy is a JSON document that consists of one or more statements. Each statement is structured as follows:"
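
In JSON, a statement is an object with those three parts. A generic skeleton (the action and ARN values here are placeholders) looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["service:SomeAction"],
      "Resource": ["arn:aws:service:REGION:ACCOUNT:some-resource"]
    }
  ]
}
```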


Now, let's translate all of that into an IAM policy document. Remember to replace the following:

  • REGION : the AWS region in which your Elastic Beanstalk exists
  • ACCOUNT : your AWS account number
  • APPLICATION : your Elastic Beanstalk application name
  • ENVIRONMENT : your Elastic Beanstalk environment name
  • ENVIRONMENT_ID : your Elastic Beanstalk environment ID, which you can find in the Elastic Beanstalk console by clicking on your environment, then Tags on the left side of the page, and looking at the value of the elasticbeanstalk:environment-id tag
  • LB_NAME : the name of the Elastic Load Balancer used by your environment, which you can find by using DescribeStackResources from the CloudFormation API

Here is the policy. Your statements may need slight adjustments for your build tool; as mentioned above, let the permissions errors from a local build guide you:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["elasticbeanstalk:CreateApplicationVersion",
                     "elasticbeanstalk:DescribeApplicationVersions",
                     "elasticbeanstalk:DescribeEnvironments",
                     "elasticbeanstalk:UpdateEnvironment"],
          "Resource": ["arn:aws:elasticbeanstalk:REGION:ACCOUNT:application/APPLICATION",
                       "arn:aws:elasticbeanstalk:REGION:ACCOUNT:applicationversion/APPLICATION/*",
                       "arn:aws:elasticbeanstalk:REGION:ACCOUNT:environment/APPLICATION/ENVIRONMENT"]
        },
        {
          "Effect": "Allow",
          "Action": ["cloudformation:GetTemplate",
                     "cloudformation:DescribeStackResource",
                     "cloudformation:DescribeStackResources"],
          "Resource": ["arn:aws:cloudformation:REGION:ACCOUNT:stack/awseb-ENVIRONMENT_ID-stack/*"]
        },
        {
          "Effect": "Allow",
          "Action": ["ec2:DescribeImages"],
          "Resource": ["*"]
        },
        {
          "Effect": "Allow",
          "Action": ["autoscaling:DescribeAutoScalingGroups",
                     "autoscaling:SuspendProcesses",
                     "autoscaling:ResumeProcesses"],
          "Resource": ["*"]
        },
        {
          "Effect": "Allow",
          "Action": ["elasticloadbalancing:DescribeInstanceHealth",
                     "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                     "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"],
          "Resource": ["arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/LB_NAME"]
        },
        {
          "Effect": "Allow",
          "Action": ["s3:*"],
          "Resource": ["*"]
        }
      ]
    }
The Action names and Resource ARNs (Amazon Resource Names) above can be found in the API reference documentation for each AWS service included in the policy.

The chink in our armour

The eagle-eyed amongst you will have noticed one startling statement in our policy doc:

    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["*"]
    }

This statement grants all S3 privileges to the group on all S3 resources. That is really not what we want, but updating the environment's configuration settings fails unless we leave S3 open, as per my question on Stack Overflow on this very topic. If anyone reading this post can figure out what Elastic Beanstalk is trying to do, please comment here or add an answer on Stack Overflow.

There are a couple of other AWS services that sadly do not support filtering actions by resource, such as EC2 and Auto Scaling, so actions granted on those services must also apply to all resources (i.e. "Resource": ["*"]).

Amazon is working on implementing filtering (at least for EC2), so we should be able to lock down these services soon.

Grant permissions to IAM group

Now that you have a policy, let's add it to the IAM group:

  1. Go to the IAM console, then click on the Groups link in the left-hand pane.
  2. Select the group you created previously, then click on the Permissions tab in the bottom pane, then click the Attach Policy button.
  3. In the popup, click on Custom Policy, then click the Select button.
  4. Enter a policy name, something like "ElasticBeanstalkCI".
  5. In the Policy Document text area, paste in the JSON we wrote in the previous step.
  6. Click the Apply Policy button.

Now, try your deployment again. It should succeed! If not, read the error message(s), and you should see which action or resource is wrong or missing in your policy. Correct it and try again.

This is your final warning

As is wise in any blog post related to security, let me make sure to emphasise two things:

  • Just because you've locked down the CI group doesn't mean you shouldn't keep the AWS access and secret keys for the users in that group (e.g. Codeship et al.) secret and safe! Good security is always layered.
  • The stuff I've covered in this post is hopefully helpful, but should not be misconstrued as any sort of guarantee of security. As always, review this advice carefully before using it in your environment!

The End

This is the end of the general instructions for allowing Elastic Beanstalk deployments from a continuous integration system. If you're a Clojure user, stick around and I'll show you a couple of neat tricks.

Exploring AWS with the Clojure REPL

If you're new to Clojure, you may need to know that REPL stands for "Read, Eval, Print, Loop", and that the REPL has been a cornerstone of Lisp development for over fifty years. In fact, the idea of a REPL is so powerful that it has crept into many language environments (Perl and Python interactive mode--just type perl or python at your command-line prompt; Ruby's irb; the Scala interpreter, etc.).

To start a Clojure repl, one simply types lein repl inside a Leiningen project.

Now, let's use an excellent Clojure library called Amazonica to poke and prod at our AWS resources.

If you haven't yet installed Leiningen, please do so now. Once Leiningen is installed, we'll need to add the lein-try plugin to our ~/.lein/profiles.clj, so open it up in a text editor and add the following:

{:user {:plugins [[lein-try "0.4.1"]
                  ;; any other plugins that might already exist in the file
                  ]}}
Now, you can use lein-try to play with any Clojure library of your choice, without needing to create a Leiningen project.

In the REPL session below, I'll be using the standard convention that lines starting with ;=> are the result of evaluated code. Anything else should be treated as code. I will omit return values unless they are meaningful.

Now, type lein try amazonica 0.2.10 at your command prompt, and let's get started!

The first order of business is to pull in the Amazonica namespaces we need and then authenticate ourselves:

(require '[amazonica.core :as aws]
         '[amazonica.aws.elasticbeanstalk :as eb])

(def region "YOUR REGION HERE; e.g. eu-west-1")
(def access-key "YOUR ACCESS KEY")  ; use your normal key, not the CI one
(def secret-key "YOUR SECRET KEY")

(aws/defcredential access-key secret-key region)

Now that we're authenticated, we can take a look at our Elastic Beanstalk applications:

(-> (eb/describe-applications) pprint)
;=> {:applications
;=>  [{:application-name "foo",
;=>    :date-updated #<DateTime 2014-05-15T06:56:14.612+02:00>,
;=>    :date-created #<DateTime 2014-05-15T06:56:14.612+02:00>,
;=>    :versions
;=>    ["1.0-SNAPSHOT-20140526131214"
;=>     "1.0-SNAPSHOT-20140519164537"],
;=>    :configuration-templates [],
;=>    :description "Clojure application"}]}

And given an application name, we can get information about its environments:

(def app-name "foo")
(-> (eb/describe-environments :application-name app-name) pprint)
;=> {:environments
;=>  [{:tier {:version "1.0", :name "WebServer", :type "Standard"},
;=>    :environment-id "e-w8pgmw3j3k",
;=>    :date-created #<DateTime 2014-05-15T06:59:14.711+02:00>,
;=>    :application-name "foo",
;=>    :date-updated #<DateTime 2014-05-26T13:13:28.102+02:00>,
;=>    :endpoint-url
;=>    "",
;=>    :version-label "1.0-SNAPSHOT-20140519164537",
;=>    :status "Ready",
;=>    :cname "",
;=>    :solution-stack-name "64bit Amazon Linux running Tomcat 7",
;=>    :health "Green",
;=>    :environment-name "staging"}]}

Part of what makes Amazonica such a great Clojure library is that it is the thinnest possible veneer over Amazon's native Java SDK that still lets you write idiomatic Clojure code. One of the great strengths of this approach is that you can leverage the AWS SDK Javadoc to figure out how to do things, saving Amazonica from having to document everything itself.
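
The mapping from Javadoc to Amazonica arguments is mechanical: kebab-case keywords correspond to the SDK's camelCase setters. A rough illustration of the naming convention (this is not Amazonica's actual code, just the idea):

```clojure
(require '[clojure.string :as str])

;; Illustrative only: how a Clojure-style keyword corresponds to an AWS SDK
;; setter name, so you can go from the Javadoc to Amazonica args and back.
(defn keyword->setter
  [k]
  (->> (str/split (name k) #"-")
       (map str/capitalize)
       (apply str "set")))

(keyword->setter :application-name) ; => "setApplicationName"
(keyword->setter :version-label)    ; => "setVersionLabel"
```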

With this newfound power in mind, let's do a manual deployment of our Elastic Beanstalk environment. This is a great way to test out IAM policies, as you can make a change to your policy, then try something at the REPL to see if the policy does what you wanted, without having to wait for a potentially lengthy build.

;; Minor annoyance: S3 doesn't want you to use a region, so you need to re-authenticate
;; without one in order to upload the file.
(aws/defcredential access-key secret-key)

(require '[amazonica.aws.s3 :as s3]
         '[amazonica.aws.s3transfer :as s3t]
         '[clojure.java.io :as io])

(def war-file
  (io/file "/home/me/elastic-beanstalk-foo-app/target/foo-1.0-SNAPSHOT-20140522180802.war")) ; assuming you've built the war somehow
(def versions-bucket "")

(s3/put-object versions-bucket (.getName war-file) war-file)
;=> {:etag "acc29c49892c68f2fdd6645346f8d493", :content-md5 "rMKcSYksaPL91mRTRvjUkw=="}

(def app-name "foo")
(-> (eb/create-application-version
     :auto-create-application true
     :application-name app-name
     :version-label (.getName war-file)
     :description "Clojure application"
     :source-bundle {:s3-bucket versions-bucket
                     :s3-key (.getName war-file)})
    pprint)
;=> {:application-version
;=>  {:date-updated #<DateTime 2014-05-26T14:55:48.737+02:00>,
;=>   :application-name "foo",
;=>   :version-label "foo-1.0-SNAPSHOT-20140522180802.war",
;=>   :date-created #<DateTime 2014-05-26T14:55:48.737+02:00>,
;=>   :source-bundle
;=>   {:s3bucket "",
;=>    :s3key "foo-1.0-SNAPSHOT-20140522180802.war"},
;=>   :description "Clojure application"}}

(def env-name "staging")
(def env-id (-> (eb/describe-environments :application-name app-name
                                          :environment-name env-name)
                :environments
                first
                :environment-id))
(-> (eb/update-environment :environment-id env-id
                           :environment-name env-name
                           :version-label (.getName war-file) ; deploy the version we uploaded
                           :option-settings [{:option-name "log4j.configuration",
                                              :namespace "aws:elasticbeanstalk:application:environment",
                                              :value ""}])
    pprint)
;=> {:status "Updating",
;=>  :environment-id "e-w8pgmw3j3k",
;=>  :health "Grey",
;=>  :endpoint-url
;=>  "",
;=>  :tier {:name "WebServer", :type "Standard", :version "1.0"},
;=>  :version-label "1.0-SNAPSHOT-20140523100733",
;=>  :cname "",
;=>  :application-name "insurrection",
;=>  :date-updated #<DateTime 2014-05-26T15:06:12.731+02:00>,
;=>  :solution-stack-name "64bit Amazon Linux running Tomcat 7",
;=>  :environment-name "staging",
;=>  :date-created #<DateTime 2014-05-19T17:13:30.439+02:00>}
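
Note that update-environment returns immediately with :status "Updating", so if you want to block until the deployment finishes, you can poll. Here is a hypothetical little helper; it takes a plain status-returning function (an assumption made for easy testing; in a real session you'd pass a closure over eb/describe-environments):

```clojure
;; Hypothetical helper: poll a status fn until it reports "Ready".
;; In a real session, status-fn would be something like
;;   #(-> (eb/describe-environments :application-name app-name
;;                                  :environment-name env-name)
;;        :environments first :status)
(defn wait-until-ready
  [status-fn]
  (if (= "Ready" (status-fn))
    :ready
    (do (Thread/sleep 1000) ; don't hammer the API
        (recur status-fn))))
```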

That's it! If everything succeeded, you know that your CI system can deploy to Elastic Beanstalk!

But don't let this be your first and last use of Amazonica; it is great for production apps, and equally great for exploring your AWS environment and quickly trying things out before going to all the trouble of actually writing code. Even if you don't use Clojure in production, you'll probably find Amazonica fantastic for prototyping.