Matteo Depalo's Blog

Will driven life

Ember Integration Testing With Konacha

Ember is truly an awesome framework. The exciting community and the quality of the code have brought joy to front-end development. That said, testing (integration testing in particular) is a part of the framework that isn't quite there yet, for a few reasons:

  • Lack of well defined best practices
  • Few complete examples
  • Debugging issues during tests is hard

Problems

The guide on the Ember website is a good start, but it's not enough. It won't tell you anything about how to handle the run loop during tests, how to work with timers, or how to configure the store. If you look around for examples, they are either outdated or don't work with the test framework you are using (Mocha, QUnit).

Towards a viable stack

After spending some time trying and failing, I believe I've reached a stack that makes me happy and that I consider solid enough:

  • Rails (asset pipeline)
  • Konacha (Chai and Mocha)
  • Sinon

Rails might seem like overkill for just the asset pipeline, but currently it's the most convenient way to build Ember applications. I've tried ember-app-kit, and although it's going in the right direction with its ES6-module-aware resolver, it still has some rough edges, like slow compilation times and a vast API surface.

Once you go with Rails, you can draw from a nice pool of libraries built around it. Konacha is one of them. If you too think that this kind of testing is cool, keep reading.

Konacha uses Mocha and Chai in combination. These libraries will make you feel at home if you're coming from the RSpec world. Konacha also spins up a web server on port 3500 that you can visit to run your tests (don't worry, there is still a command for your CI).
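
If the defaults don't fit, Konacha can also be tweaked from an initializer. Here's a minimal, hedged sketch based on the options documented in its README (spec_dir and driver); adjust to taste:

# config/initializers/konacha.rb
Konacha.configure do |config|
  config.spec_dir = 'spec/javascripts'  # where the JavaScript/CoffeeScript specs live
  config.driver   = :selenium           # driver used when running the suite from the command line
end if defined?(Konacha)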

The main problem is that Konacha uses Mocha to run tests, while Ember only supports QUnit for integration testing out of the box; fortunately, teddyzeenny built an adapter for this purpose. Include it in the spec_helper file like this:

#= require sinon
#= require application
#= require ember-mocha-adapter

Ember.Test.adapter = Ember.Test.MochaAdapter.create()
App.setupForTesting()
App.injectTestHelpers()

Now you can use Ember test helpers like visit or click without worrying about asynchronous behavior. Just chain them or call then if you want to execute some code after asynchronous actions have been performed. For example:

describe 'Notices - Integration', ->
  beforeEach ->
    visit('/')

  it 'adds a Notice to the list', ->
    fillIn('input[type="text"]', 'test')
    .click('input[type="submit"]').then ->
      find('.title').text().should.equal('test')

There are some other important things to add to the spec_helper file:

mocha.globals(['Ember', 'DS', 'App', 'MD5'])
mocha.timeout(500)
chai.Assertion.includeStack = true
Konacha.reset = Ember.K

$.fx.off = true

afterEach ->
  App.reset()

App.setup()
App.advanceReadiness()

The first 4 lines will make Konacha play nicely with Ember. They tell it to ignore leaks on those globals and to avoid clearing the application's body after each test, which is something Ember doesn't like.

Removing animations is always a good idea during testing: it improves speed and causes fewer accidental problems.

We also tell Mocha to reset the App after each test, which destroys and reloads everything, bringing the router back to its initial state.

The last lines are important if you have to set up your application before loading it. When you visit localhost:3500, Konacha will load the page and Ember will run the App initializers and advance App readiness on document ready. In order to have full control over this process, remember to add App.deferReadiness() at the end of the application.coffee file, after creating the App.
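
For reference, the end of application.coffee might look like this minimal sketch (App is the application object used throughout this post):

# application.coffee
window.App = Ember.Application.create()

# routes, models, controllers...

App.deferReadiness()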

If you need to perform some setup before resetting (setup is a custom method I’ve added), override the reset method like this:

window.YourApplication = Ember.Application.extend
  setup: ->
    # some setup code

  reset: ->
    @setup()
    @_super()

Under the hood

There are some things that this spec_helper does under the hood. First of all, it sets Ember.testing to true. This disables the auto-run feature of the Ember run loop during tests, giving you control over what can run with async side effects and what cannot.

For example, if you want to create a fixture you need to wrap it in an Ember.run block, or it won't execute all the async operations scheduled by the application's model adapter, like this:

Ember.run ->
  notice = App.__container__.lookup('store:main').createRecord('notice', { title: 'test' })
  notice.save().then ->
    # check something

I strongly suggest reading about how the Ember run loop works, because sooner or later you will need that knowledge in order to debug tests. There is a good Stack Overflow answer about it.

The Ember.Test.MochaAdapter also enables the BDD interface for you, so you can use describe and it in your tests.

Stubbing the server

In order to test interactions with a web server, some people suggest switching to the FixtureAdapter during tests, but I don't like this approach: you wouldn't be testing the actual code of your application, and some features, like associations, are implemented properly only in the RESTAdapter.

What I’ve found useful instead is mocking the xhr object itself with the sinon.fakeServer. Suppose you want to stub the /api/notices endpoint, which should return a list of notices, you can do it like this:

window.server = sinon.fakeServer.create()

server.autoRespond = true

server.respondWith('GET', '/api/notices', [
  200,
  { 'Content-Type': 'application/json' },
  '{ "notices": [
    {
      "id": "a5babb5f-e5b2-4ccf-85fc-4893f8d08d1f",
      "title": "test",
      "created_at": "2014-01-02T14:01:02.810Z"
    }
  ]}'
])

This way you won’t need to change your adapter at runtime and tests will run superfast.

Gotchas

Beware of timers. If your application has long-running or self-scheduling timers, every function that uses wait under the hood, like visit, will never resolve. It has been discussed that you should be able to explicitly avoid waiting for specific timers during tests, but in the meantime you can use the following hack:

Before:

tick: ->
  # do something

  Ember.run.later(this, ->
    @tick()
  , 1000)

After:

tick: ->
  # do something

  setTimeout(=>
    Ember.run(=>
      @tick()
    )
  , 1000)

This way you won’t use the Ember internal setTimeout (which is not optimal), but you won’t risk of executing async code outside of the run loop while allowing your tests to pass.

Conclusion

Ember is still a relatively young framework, which means you will have to work harder to get simple stuff done. However, I believe the community is very conscious of this and is pushing towards a common, solid approach to getting started quickly and to testing.

Rails API Documentation

Lately I’ve been working on a new project for mobile called Playround. My job is to design and implement the API. I must say that with the help of Rails 4 and Rails API the experience has been the smoothest possible. With requests and models specs I can test the whole application at blazing speeds. At a certain point however I faced an issue: documentation.

Especially in the first stages of development, working inside a team demands transparency about the current status of the API, so that the mobile developers know exactly what to expect from the server while testing locally. Of course, documentation is also exceptionally useful in the mature stage of the project, when we will have to publish the API documentation in a beautiful layout. In order to achieve documentation nirvana I started experimenting with various ways of building it, ideally in a way that would output something I can reuse for our public docs.

At first I started putting “debugger” in every test and printing the output of the response, but this got tedious pretty fast. Looking around I found a gem that compiles documentation, but it forced me to use a specific DSL, which meant rewriting my tests. When I started I had adopted the convention used by the request tests you get from scaffolds, and I wanted to keep that.

In order to achieve this I wrote a simple script in the spec helper that does the following things:

  • For every request spec file it creates a corresponding txt file inside the docs folder.
  • For each test, the path, status, request and response are written to the corresponding file.

Request tests have to follow a convention (an example spec is sketched after this list):

  • Top level descriptions are named after the model (plural form) followed by the word “Requests”. For the model Arena it would be “Arenas Requests”.
  • Actions are in the form of “VERB path”. For the show action of the Arenas controller it would be “GET /arenas/:id”.
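
Following these conventions, a request spec for the show action might be shaped like this (a hedged sketch: the Arena model, its attributes and its routes are assumptions, not part of the real test suite):

# spec/requests/arenas_spec.rb
require 'spec_helper'

describe 'Arenas Requests' do
  describe 'GET /arenas/:id' do
    it 'returns the requested arena' do
      arena = Arena.create!(name: 'Test Arena') # hypothetical model and attribute
      get "/arenas/#{arena.id}"
      response.status.should eq(200)
    end
  end
end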

The code:

config.after(:each, type: :request) do
  if response
    example_group = example.metadata[:example_group]
    example_groups = []

    while example_group
      example_groups << example_group
      example_group = example_group[:example_group]
    end

    action = example_groups[-2][:description_args].first if example_groups[-2]
    example_groups[-1][:description_args].first.match(/(\w+)\sRequests/)
    file_name = $1.underscore

    File.open(File.join(Rails.root, "/docs/#{file_name}.txt"), 'a') do |f|
      f.write "#{action} \n\n"

      request_body = request.body.read

      if request.headers['Authorization']
        f.write "Headers: \n\n"
        f.write "Authorization: #{request.headers['Authorization']} \n\n"
      end

      if request_body.present?
        f.write "Request body: \n\n"
        f.write "#{JSON.pretty_generate(JSON.parse(request_body))} \n\n"
      end

      f.write "Status: #{response.status} \n\n"

      if response.body.present?
        f.write "Response body: \n\n"
        f.write "#{JSON.pretty_generate(JSON.parse(response.body))} \n\n"
      end
    end unless response.status == 401 || response.status == 403 || response.status == 301
  end
end

Example output for “Rounds Requests” POST action:

POST /v1/rounds

Headers:

Authorization: Token token="36260243e091bfe56f96483592afc723"

Request body:

{
  "round": {
    "game_name": "dota2",
    "arena": {
      "foursquare_id": "5104"
    }
  }
}

Status: 201

Response body:

{
  "round": {
    "id": "ec6add8b-709f-475d-8f06-8ad44d8a95d3",
    "state": "waiting_for_players",
    "created_at": "2013-07-24T12:16:14.700Z",
    "game": {
      "id": "1c59b30e-599a-4ea1-9d5c-a364079528ad",
      "name": "dota2",
      "display_name": "Dota 2",
      "picture_url": "http://localhost:8080/assets/dota2.jpg",
      "teams": [
        {
          "name": "radiant",
          "display_name": "Radiant",
          "number_of_players": 5
        },
        {
          "name": "dire",
          "display_name": "Dire",
          "number_of_players": 5
        }
      ]
    },
    "arena": {
      "id": "5c593125-a114-4a1a-936f-2cc4b21fa0a8",
      "name": "Clinton St. Baking Co. & Restaurant",
      "latitude": 40.721294,
      "longitude": -73.983994,
      "foursquare_id": "5104"
    },
    "teams": [

    ],
    "user": {
      "id": "87fb0fe4-2f0c-400d-ba00-000c3f5ea642",
      "name": "Test User",
      "picture_url": "http://graph.facebook.com/12132/picture?type=square",
      "facebook_id": "12132"
    }
  }
}

I’m excluding 401, 403 and 301 status codes because those cases are grouped and documented inside a common area in my documentation, but there is nothing special about them.

Now to the beautiful layout part. Right now I'm copy-pasting those request and response bodies into the templates of a Jekyll application hosted on GitHub Pages. One way to automate this would be to use a templating language to output HTML documents instead of plain txt files. Since the production documentation should change far less frequently than the development one, this is an automation I can skip for now. It's far more important to keep a fresh copy of the docs for internal usage, which can be rebuilt anytime, by anyone, with no effort.

Minimum Viable Stack

After migrating from Heroku to my own VPS solution at Responsa, I traveled across the internet in order to select and build a solid production stack. Now that I think I've reached it, I want to share my choices in case anyone has to go through the same process.

Service selection criterion

I’m using Chef as provisioning tool so while looking around for services I immediately run the query on Google: “#{service} cookbook”. Usually there is always a cookbook, but most of them are really bad. However if the cookbook is good the chances of that service to be picked are very high.

Grocery list

The list of things I need boils down to this:

  • DB and cache
  • Ruby
  • Monitoring service for CPU, memory, IO etc…
  • Service manager
  • Logs aggregator
  • Backups manager
  • Web server

DB

MongoHQ has a very good service with awesome customer support. Can’t recommend them enough.

For session and cache storage I chose Redis and Memcached, respectively.
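
Wiring those into the Rails app is a couple of lines; a hedged sketch assuming the dalli and redis-rails (redis-store) gems and locally running servers:

# config/environments/production.rb (sketch; hosts and option values are examples)
config.cache_store = :dalli_store, 'localhost:11211'
config.session_store :redis_store, :servers => 'redis://localhost:6379/0/session'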

I’ve yet to find a good cookbook for Redis 2.6. I guess I’ll have to upgrade the cookbook myself when I have the chance.

Ruby

rbenv and ruby_build are the killer combo. Just install your Ruby version as the global one. After all, who needs multiple versions of Ruby on a production machine?
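
With the fnichol ruby_build and rbenv cookbooks (the same ones I use in my migration post), that boils down to a few resources in a recipe; the version string is just an example:

# site-cookbooks/main/recipes/default.rb (sketch)
rbenv_ruby '1.9.3-p286'     # compile the Ruby you need
rbenv_global '1.9.3-p286'   # make it the global version
rbenv_gem 'bundler'         # bundler is the only gem installed by hand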

Services manager

I’m not really opinionated about service managers so this was the most community driven choice. Many are using Runit and the cookbook is pretty good.

Monitoring

I’ve tried Munin but it felt like using a walkman in 2013. Hard to configure with chef solo (no need for server and client in that case) and badly documented.

New Relic kicks ass. The cookbook is one include_recipe away from running, and the dashboard is feature-rich and easy to use. The downside is that if you have many instances the price might get out of hand…

Logs aggregator

There are some nice competitors in this space: Loggly, Logentries and Papertrail. Again, this was a decision driven by the quality of the cookbook, and Papertrail has a pretty good one.

I use rsyslog to put all my important logs in one place; Papertrail grabs them and cleverly separates services like Unicorn, sshd, Nginx, etc. The search feature is also useful.

Backups

Whenever + Backup are the killer combo. I've set them up with a custom-made cookbook that updates the crontab with Whenever on every Chef run.
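
The schedule itself is plain Whenever DSL; a hedged sketch where the time and the Backup trigger name are made up:

# config/schedule.rb (sketch)
every 1.day, :at => '4:30 am' do
  command 'backup perform --trigger db_backup' # hypothetical Backup model/trigger
end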

Web server

Nginx. Boom. Done.

Bonus

I don’t like when I close my laptop and after reopening it ssh hangs. To solve that issue I use mosh which needs a server installed on the machine. Luckily there is a cookbook also for that.

You’re welcome to share your stack in the comments below.

How I Migrated From Heroku to Digital Ocean With Chef and Capistrano

UPDATE:

  • Removed ElasticSearch and MongoDB recipes since they were not so useful for this tutorial.
  • Added unicorn.rb
  • Added ssh authentication step
  • Added file paths

I’ve always loved deploying to Heroku. The simplicity of a git push let me focus on developing my applications which is what I really care about. However, both because of the scandal about the routing system and because I wanted to expand my skill set by entering the sysadmin land, at Responsa I decided to migrate to a VPS solution.

At this point I had three choices to make:

  1. Hosting provider
  2. Technology stack
  3. Deploy strategy

Provider

Many hackers I follow were recommending Digital Ocean so I gave it a try. I must say I was very impressed with the simplicity and power of their dashboard, so I decided to use it.

I immediately changed my root password

passwd

Copied over my ssh key with

ssh-copy-id root@$IP

And disabled password access by setting PasswordAuthentication no in /etc/ssh/sshd_config.

Technology

The decision about the web server was also quick. I wanted to achieve zero-downtime deployments, so GitHub's use of Unicorn + Nginx jumped to mind.

Deploy strategy

This is where things got a little bit complicated. Disclaimer: I'm not a Linux/Unix pro, so many system administration practices were unknown to me prior to this week. Having said that, it was clear to me that the community is very fragmented. There were so many solutions to the same problems, and so many scripts! After digging, trying and failing miserably, I settled on the stack that caused me the least suffering:

  1. Chef solo and Knife for the machine provisioning
  2. Capistrano for the deployment

Chef

Chef is a provisioning tool written in Ruby. Its DSL is very expressive and powerful. The community is full of useful cookbooks that ease the setup of common services; however, Chef itself seemed to lack a way to manage community cookbooks as dependencies. This is where Librarian-Chef comes in: I just had to write a Cheffile with all the dependencies and I was done.

# Cheffile
#!/usr/bin/env ruby
#^syntax detection

site 'http://community.opscode.com/api/v1'

cookbook 'libqt4',
  :git => 'https://github.com/phlipper/chef-libqt4'

cookbook 'nodejs'
cookbook 'nginx'
cookbook 'runit'
cookbook 'java'
cookbook 'imagemagick'
cookbook 'vim'
cookbook 'ruby_build', :git => 'git://github.com/fnichol/chef-ruby_build.git'
cookbook 'rbenv', :git => 'git://github.com/fnichol/chef-rbenv.git'
cookbook 'redis', :git => 'git://github.com/cassianoleal/chef-redis.git'
cookbook 'memcached'

To bootstrap the machine with Chef and Ruby, many people were using custom Knife templates that were not working for me. Some installed Ruby with RVM, others with rbenv. In the end I found Knife Solo, which solved all my problems. With one command after the initialization I could install Chef AND run all my recipes to install Ruby and every other service I needed.

knife solo init
knife solo bootstrap root@$IP node.json

Librarian and Knife Solo forced me to use a specific project structure:

mychefrepo/
├── cookbooks
├── site-cookbooks
├── Cheffile
├── Cheffile.lock
└── node.json

The node.json contains the run list of recipes:

{
  "user": {
    "name": "deployer",
    "password": $PASSWORD
  },
  "environment": "production",
  "server_name": "goresponsa.com",
  "deploy_to": "/var/www/responsa",
  "ruby-version": "1.9.3-p286",
  "run_list": [
    "recipe[vim]",
    "recipe[libqt4]",
    "recipe[imagemagick]",
    "recipe[java]",
    "recipe[redis::source]",
    "recipe[memcached]",
    "recipe[nodejs]",
    "recipe[ruby_build]",
    "recipe[rbenv::system]",
    "recipe[runit]",
    "recipe[nginx]",
    "recipe[main]"
  ]
}

All recipes except the “main” one are taken from community cookbooks.

The main recipe contains machine- and application-specific setup:

# chef/site-cookbooks/main/recipes/default.rb

# setup

rbenv_ruby node['ruby-version']
rbenv_global node['ruby-version']

rbenv_gem 'bundler'

group 'admin' do
  gid 420
end

user node[:user][:name] do
  password node[:user][:password]
  gid 'admin'
  home "/home/#{node[:user][:name]}"
  shell '/bin/bash'
  supports :manage_home => true
end

directory "#{node[:deploy_to]}/tmp/sockets" do
  owner node[:user][:name]
  group 'admin'
  recursive true
end

# certificates

directory "#{node[:deploy_to]}/certificate" do
  owner node[:user][:name]
  recursive true
end

cookbook_file "#{node[:deploy_to]}/certificate/#{node[:environment]}.crt" do
  source "#{node[:environment]}.crt"
  action :create_if_missing
end

cookbook_file "#{node[:deploy_to]}/certificate/#{node[:environment]}.key" do
  source "#{node[:environment]}.key"
  action :create_if_missing
end

# configuration

template '/etc/nginx/sites-enabled/default' do
  source 'nginx.erb'
  owner 'root'
  group 'root'
  mode 0644
  notifies :restart, 'service[nginx]'
end

["sv", "service"].each do |dir|
  directory "/home/#{node[:user][:name]}/#{dir}" do
    owner node[:user][:name]
    group 'admin'
    recursive true
  end
end

runit_service "runsvdir-#{node[:user][:name]}" do
  default_logger true
end

runit_service 'responsa' do
  sv_dir "/home/#{node[:user][:name]}/sv"
  service_dir "/home/#{node[:user][:name]}/service"
  owner node[:user][:name]
  group 'admin'
  restart_command '2'
  restart_on_update false
  default_logger true
end

service 'nginx'

I’m using runit to manage the unicorn service that is declared in a template file:

# chef/site-cookbooks/main/templates/default/sv-runsvdir-deployer-run.erb

#!/bin/sh
exec 2>&1
exec chpst -u deployer runsvdir /home/deployer/service
# chef/site-cookbooks/main/templates/default/sv-responsa-run.erb

#!/bin/bash
exec 2>&1

<% unicorn_command = @options[:unicorn_command] || 'unicorn_rails' -%>

#
# Since unicorn creates a new pid on restart/reload, it needs a little extra love to
# manage with runit. Instead of managing unicorn directly, we simply trap signal calls
# to the service and redirect them to unicorn directly.

function is_unicorn_alive {
    set +e
    if [ -n $1 ] && kill -0 $1 >/dev/null 2>&1; then
        echo "yes"
    fi
    set -e
}

echo "Service PID: $$"

CUR_PID_FILE=/var/www/responsa/shared/pids/unicorn.pid
OLD_PID_FILE=$CUR_PID_FILE.oldbin

if [ -e $OLD_PID_FILE ]; then
    OLD_PID=$(cat $OLD_PID_FILE)
    echo "Waiting for existing master ($OLD_PID) to exit"
    while [ -n "$(is_unicorn_alive $OLD_PID)" ]; do
        /bin/echo -n '.'
        sleep 2
    done
fi

if [ -e $CUR_PID_FILE ]; then
    CUR_PID=$(cat $CUR_PID_FILE)
    if [ -n "$(is_unicorn_alive $CUR_PID)" ]; then
        echo "Unicorn master already running. PID: $CUR_PID"
        RUNNING=true
    fi
fi

if [ ! $RUNNING ]; then
    echo "Starting unicorn"
    cd /var/www/responsa/current
    export PATH="/usr/local/rbenv/shims:/usr/local/rbenv/bin:$PATH"
    # You need to daemonize the unicorn process, http://unicorn.bogomips.org/unicorn_rails_1.html
    bundle exec <%= unicorn_command %> -c config/unicorn.rb -E <%= @options[:environment] || 'staging' %> -D
    sleep 3
    CUR_PID=$(cat $CUR_PID_FILE)
fi

function restart {
    echo "Initialize new master with USR2"
    kill -USR2 $CUR_PID
    # Make runit restart to pick up new unicorn pid
    sleep 2
    echo "Restarting service to capture new pid"
    exit
}

function graceful_shutdown {
    echo "Initializing graceful shutdown"
    kill -QUIT $CUR_PID
}

function unicorn_interrupted {
    echo "Unicorn process interrupted. Possibly a runit thing?"
}

trap restart HUP QUIT USR2 INT
trap graceful_shutdown TERM KILL
trap unicorn_interrupted ALRM

echo "Waiting for current master to die. PID: ($CUR_PID)"
while [ -n "$(is_unicorn_alive $CUR_PID)" ]; do
    /bin/echo -n '.'
    sleep 2
done
echo "You've killed a unicorn!"

Nginx is used as a reverse proxy:

# chef/site-cookbooks/main/templates/default/nginx.erb

upstream unicorn {
  server unix:/var/www/responsa/tmp/sockets/responsa.sock fail_timeout=0;
}

server {
  listen 80;
  listen 443 default ssl;
  server_name <%= node[:server_name] %>;
  root /var/www/responsa/current/public;
  # set far-future expiration headers on static content
  expires max;

  server_tokens off;

  # ssl                  on;
  ssl_certificate      <%= "/var/www/responsa/certificate/#{node[:environment]}.crt" %>;
  ssl_certificate_key  <%= "/var/www/responsa/certificate/#{node[:environment]}.key" %>;

  ssl_session_timeout  5m;

  ssl_protocols  SSLv2 SSLv3 TLSv1;
  ssl_ciphers  HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers   on;

  # set up the rails servers as a virtual location for use later
  location @unicorn {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP  $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_intercept_errors on;
    proxy_redirect off;
    proxy_pass http://unicorn;
    expires off;
  }

  location / {
    try_files $uri @unicorn;
  }

  # error_page 500 502 503 504 /500.html;
}

And here’s the unicorn configuration file:

# config/unicorn.rb

rails_env = ENV['RAILS_ENV'] || 'production'

worker_processes (rails_env == 'production' ? 6 : 3)

preload_app true

# Restart any workers that haven't responded in 30 seconds
timeout 30

working_directory '/var/www/responsa/current'

# Listen on a Unix data socket
pid '/var/www/responsa/shared/pids/unicorn.pid'
listen "/var/www/responsa/tmp/sockets/responsa.sock", :backlog => 2048

stderr_path '/var/www/responsa/shared/log/unicorn.log'
stdout_path '/var/www/responsa/shared/log/unicorn.log'

before_exec do |server|
  ENV["BUNDLE_GEMFILE"] = "/var/www/responsa/current/Gemfile"
end

before_fork do |server, worker|
  ##
  # When sent a USR2, Unicorn will suffix its pidfile with .oldbin and
  # immediately start loading up a new version of itself (loaded with a new
  # version of our app). When this new Unicorn is completely loaded
  # it will begin spawning workers. The first worker spawned will check to
  # see if an .oldbin pidfile exists. If so, this means we've just booted up
  # a new Unicorn and need to tell the old one that it can now die. To do so
  # we send it a QUIT.
  #
  # Using this method we get 0 downtime deploys.

  old_pid = '/var/www/responsa/shared/pids/unicorn.pid.oldbin'

  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

Capistrano

After setting up the machine I created a snapshot on Digital Ocean, in case I had to restart from scratch.

Time to deploy! Capistrano was an easy choice.

Using Capistrano multistage, I set up the production script:

# config/deploy/production.rb

set :server_ip, $MY_IP
server server_ip, :app, :web, :primary => true
set :rails_env, 'production'
set :branch, 'master'

This is used in combo with the deploy script:

# config/deploy.rb

require 'bundler/capistrano'
require 'sidekiq/capistrano'
require 'capistrano/ext/multistage'

set :stages, %w(production staging)
set :default_stage, 'staging'

default_run_options[:pty] = true
ssh_options[:forward_agent] = true

set :application, 'responsa'
set :repository,  $PATH_TO_GITHUB_REPO
set :deploy_to, "/var/www/#{application}"
set :branch, 'development'

set :scm, :git
set :scm_verbose, true

set :deploy_via, :remote_cache
set :use_sudo, true
set :keep_releases, 3
set :user, 'deployer'

set :bundle_without, [:development, :test, :acceptance]

set :rake, "#{rake} --trace"

set :default_environment, {
  'PATH' => '/usr/local/rbenv/shims:/usr/local/rbenv/bin:$PATH'
}

after 'deploy:update_code', :upload_env_vars

after 'deploy:setup' do
  sudo "chown -R #{user} #{deploy_to} && chmod -R g+s #{deploy_to}"
end

namespace :deploy do
  desc <<-DESC
  Send a USR2 to the unicorn process to restart for zero downtime deploys.
  runit expects 2 to tell it to send the USR2 signal to the process.
  DESC
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "sv 2 /home/#{user}/service/#{application}"
  end
end

task :upload_env_vars do
  upload(".env.#{rails_env}", "#{release_path}/.env.#{rails_env}", :via => :scp)
end

Now, with two simple commands, I can deploy with zero downtime!

cap deploy:setup
cap deploy

I must thank czarneckid for sharing his setup on GitHub, from which I stole some useful portions, and also @bugant for his patience.

Refactor: Replace Method With Method Object

In my previous post I described how to implement a feature that allows our customers to create custom stylesheets for their widget. Although it worked just fine, the compile class method of the CustomTheme class was blatantly too big, so I decided to refactor it.

The biggest issue I faced was that, since this was a class method, in order to split it I would have had to create many little class methods and pass the theme instance around, a solution that didn't satisfy me. The reason compile needed to stay a class method is that I don't want to serialize the whole CustomTheme object and pass it to Sidekiq. Having considered these premises, I could proceed in two ways:

  • Delegate the class method compile to an instance method on the found custom theme, something along the lines of:
def self.compile(theme_id)
  # send is needed here because the instance method below is private
  CustomTheme.find(theme_id).send(:compile)
end

private

def compile
  # perform the actual compilation
end
  • Create a class with the name of the method and extract everything there (thanks @bugant for reminding me of this refactor)

I decided to go with the latter so I followed these steps:

  1. Create the class ThemeCompiler
  2. Give the new class an attribute for the object that hosted the original method (theme) and an attribute for each temporary variable in the method
  3. Give the new class a method “compute”
  4. Copy the body of the original method into compute
  5. Split the compute method in smaller methods

Final considerations

The first approach has the advantage of keeping everything in one class and using encapsulation properly; however, it forces you to keep temp variables at the top of the compile method and it increases the length of the class.

The second one puts every temp variable in the constructor, but has the disadvantage of being envious of the CustomTheme class's data, to the point that it forces the promotion of one CustomTheme private method to public. Something like friend classes would have helped in this refactor.

The final result, independent of the methodology, is that the compile method is now much clearer.

The code

# custom_theme.rb

def self.compile(theme_id)
  ThemeCompiler.new(theme_id).compute
end
# theme_compiler.rb
class ThemeCompiler
  attr_reader :theme, :body, :tmp_themes_path, :tmp_asset_name, :widget, :compressed_body, :asset, :env

  def initialize(theme_id)
    @theme = CustomTheme.find(theme_id)
    @body = ERB.new(File.read(File.join(Rails.root, 'app', 'assets', 'stylesheets', 'widget_custom.scss.erb'))).result(theme.get_binding)
    @tmp_themes_path = File.join(Rails.root, 'tmp', 'themes')
    @tmp_asset_name = theme.widget_id.to_s
    @widget = theme.widget
    @env = if Rails.application.assets.is_a?(Sprockets::Index)
      Rails.application.assets.instance_variable_get('@environment')
    else
      Rails.application.assets
    end
  end

  def compute
    create_temporary_file
    compile
    compress
    upload
  end

  private

  def compile
    @asset = env.find_asset(tmp_asset_name)
  rescue Sass::SyntaxError => error
    widget.user.notifications.create(:message => error.message.gsub(/ \(.+\)$/, ''), :type => 'error')
    theme.revert
  end

  def compress
    @compressed_body = ::Sass::Engine.new(asset.body, {
      :syntax => :scss,
      :cache => false,
      :read_cache => false,
      :style => :compressed
    }).render
  end

  def create_temporary_file
    FileUtils.mkdir_p(tmp_themes_path) unless File.directory?(tmp_themes_path)
    File.open(File.join(tmp_themes_path, "#{tmp_asset_name}.scss"), 'w') { |f| f.write(body) }
  end

  def upload
    theme.delete_asset

    if Rails.env.production?
      FOG_STORAGE.directories.get(ENV['FOG_DIRECTORY']).files.create(
        :key    => theme.asset_path(asset.digest),
        :body   => StringIO.new(compressed_body),
        :public => true,
        :content_type => 'text/css'
      )
    else
      File.open(File.join(Rails.root, 'public', theme.asset_path(asset.digest)), 'w') { |f| f.write(compressed_body) }
    end

    theme.update_attribute(:digest, asset.digest)
  end
end

How to Create Custom Stylesheets Dynamically With Rails and Sass

At Responsa we have the need to create custom stylesheets for our widget administrators. In order to accomplish this we leverage the power of Sass and the Rails asset pipeline.

In this blog post I’ll show you how we implemented this feature and how to deploy it to an Heroku + Amazon S3 production environment.

Tools

Let’s take a loot at our toolbelt:

  • Sass and Sprockets to dynamically compile the asset
  • Sidekiq to delay the compilation and upload to S3, which in our case takes between 10 and 15 seconds
  • Fog gem to store on S3

Models

We have 2 models: Widget and CustomTheme

class CustomTheme
  include Mongoid::Document

  belongs_to :widget

  field :main_color, :type => String, :default => "#2ba6cb"
  field :text_font, :type => String, :default => "\"Helvetica Neue\", \"Helvetica\", Helvetica, Arial, sans-serif"
  field :digest, :type => String
end
class Widget
  include Mongoid::Document

  has_one :custom_theme
end

The custom theme model has the fields used in a widget_custom.scss stylesheet built with the Foundation CSS framework:

$mainColor: <%= main_color %>;
$bodyFontFamily: <%= text_font %>;

@import "widget/index";

Compilation

CustomTheme has a method we call every time we need to compile a fresh asset, which happens when the fields change. It performs a few actions in order:

  1. Write a temporary, uncompiled scss file with the variables taken from the custom theme and give it a unique name.
  2. Use the Sprockets environment to find this temporary file and compile it.
  3. Compress the compiled css file.
  4. Store it either on Amazon S3 or the file system.
  5. Delete the previous asset.

Caveats

Developing this solution, we encountered a few problems, mainly due to our production setup and the way Sprockets works:

  • If the compilation fails we need to restore the previous asset. To accomplish this we basically keep track of the previous asset and revert to it if anything goes wrong.
  • In production we need to avoid using the cached Sprockets environment, otherwise Sprockets will cache the entire file system at the beginning.
  • It’s important to run validations of the custom theme fields in order to avoid css injection.

Code

# application.rb
config.assets.paths << Rails.root.join('tmp', 'themes')
COMPILED_FIELDS = [:main_color, :text_font]

after_save :compile, :if => :compiled_attributes_changed?

def self.compile(theme_id)
  theme = CustomTheme.find(theme_id)
  body = ERB.new(File.read(File.join(Rails.root, 'app', 'assets', 'stylesheets', 'widget_custom.scss.erb'))).result(theme.get_binding)
  tmp_themes_path = File.join(Rails.root, 'tmp', 'themes')
  tmp_asset_name = theme.widget_id.to_s

  FileUtils.mkdir_p(tmp_themes_path) unless File.directory?(tmp_themes_path)
  File.open(File.join(tmp_themes_path, "#{tmp_asset_name}.scss"), 'w') { |f| f.write(body) }

  widget = theme.widget

  begin
    env = if Rails.application.assets.is_a?(Sprockets::Index)
      Rails.application.assets.instance_variable_get('@environment')
    else
      Rails.application.assets
    end

    asset = env.find_asset(tmp_asset_name)

    compressed_body = ::Sass::Engine.new(asset.body, {
      :syntax => :scss,
      :cache => false,
      :read_cache => false,
      :style => :compressed
    }).render

    theme.delete_asset

    if Rails.env.production?
      FOG_STORAGE.directories.get(ENV['FOG_DIRECTORY']).files.create(
        :key    => theme.asset_path(asset.digest),
        :body   => StringIO.new(compressed_body),
        :public => true,
        :content_type => 'text/css'
      )
    else
      File.open(File.join(Rails.root, 'public', theme.asset_path(asset.digest)), 'w') { |f| f.write(compressed_body) }
    end

    theme.update_attribute(:digest, asset.digest)
  rescue Sass::SyntaxError => error
    theme.revert
  end

  widget.save
end

def revert
  # Revert to your previous theme and notify the user of the failure
end

def get_binding
  binding
end

def delete_asset
  return unless digest?

  if Rails.env.production?
    FOG_STORAGE.directories.get(ENV['FOG_DIRECTORY']).files.get(asset_path).try(:destroy)
  else
    File.delete(File.join(Rails.root, 'public', asset_path))
  end
end

def asset_path(digest)
  "assets/themes/#{asset_name(digest)}.css"
end

def asset_name(digest = self.digest)
   "#{widget_id}-#{digest}"
end

def asset_url
  "#{ActionController::Base.asset_host}/#{asset_path}"
end

private

def compile
  self.class.delay.compile(id)
end

def compiled_attributes_changed?
  changed_attributes.keys.map(&:to_sym).any? { |f| COMPILED_FIELDS.include?(f) }
end

Finally we use the asset URL in our template:

<%= stylesheet_link_tag @custom_theme.asset_url %>

Check out my next blog post which goes into a refactor of this code!

Cube Loves Geckoboard

Introduction

This is a summary of my experiences and a mini-guide regarding the deployment and usage of a Cube server and Geckoboard to track statistics at Responsa.

I’ll explain how I deployed Cube to a VPS in the cloud and how I’ve integrated it in Responsa. I’ll also talk about Geckoboard and how we used it to draw graphs based on metrics extracted from Cube, but first here’s a brief description of Cube and Geckoboard.

Cube

Cube is a system for collecting timestamped events and deriving metrics. By collecting events rather than metrics, Cube lets you compute aggregate statistics post hoc. It also enables richer analysis, such as quantiles and histograms of arbitrary event sets.

Geckoboard

Geckoboard is a service for drawing graphs and statistics and organizing them in widgets that populate dashboards.

Deploying

To deploy the Cube server I've chosen Linode, a service that gives you an empty Ubuntu Server virtual machine in the cloud. After the machine is created, you can just ssh in and start installing your server.

To make the deployment process automatic I've chosen Chef, and in particular chef-solo.

Since Chef uses Ruby, we need to install it on the machine first, so run these commands to get some basic stuff:

apt-get -y update
apt-get -y install curl git-core python-software-properties
curl -L https://raw.github.com/fesplugas/rbenv-installer/master/bin/rbenv-installer | bash
vim ~/.bashrc # add rbenv to the top
. ~/.bashrc
rbenv bootstrap-ubuntu-10-04
rbenv install 1.9.3-p125
rbenv global 1.9.3-p125
gem install bundler --no-ri --no-rdoc
rbenv rehash

Then you can install Cube with:

git clone git://github.com/matteodepalo/cube.git
cd cube

Now all you need to do is to download the cookbooks and run chef:

gem install librarian
librarian-chef install
chef-solo -c solo.rb

And your Cube server should be up and running, ready to track events!

Tracking events

We use Ruby on Rails as our stack, so I've chosen the cube-ruby gem to communicate with the server. With this gem you can talk to the Cube collector in order to track events.

For example if we want to track a request we can write:

$cube = Cube::Client.new 'your-host.com'
$cube.send "request", :value => 'somevalue'

Analysis

To compute metrics I’ve created a ruby gem called cube-evaluator, which talks with the Cube evaluator.

Let’s say we want the daily requests on our website in this month we can write:

$cube_evaluator = Cube::Evaluator.new 'your-host.com'
daily_requests = $cube_evaluator.metric(
                 :expression => 'sum(request)',
                 :start => 1.month.ago,
                 :stop => Time.now,
                 :step => '1day'
               )

daily_requests will be a Hash containing the array of times and the array of corresponding values.
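
In other words, something shaped like this (the values are made up):

# shape of the result returned by the evaluator
daily_requests = {
  :times  => [Time.utc(2013, 5, 1), Time.utc(2013, 5, 2)],
  :values => [120, 98]
}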

Drawing

Geckoboard needs an endpoint on your server to poll in order to draw the data. To ease the creation of these endpoints I've improved and used the chameleon gem. Just add it to your Gemfile:

gem 'chameleon', :git => 'git://github.com/matteodepalo/chameleon.git'

then run bundle to install it

bundle install

Let’s draw the daily_requests now. Create a line widget graph

rails g chameleon:widget requests line

and use your daily_requests hash to populate it

Chameleon::Widget.new :requests do
  key "3618c90ec02d5a57061ad7b78afcbb050e50b608"
  type "line"
  data do
    {
      :items => daily_requests[:values],
      :x_axis => daily_requests[:times],
      :y_axis => [daily_requests[:values].min, daily_requests[:values].max]
    }
  end
end

Congrats! You are now tracking statistics in the coolest way possible ;)