Fluxus Frequency

How I Hacked The Mainframe

Seven Reasons I Love Minitest

This post originally appeared on Engine Yard. It was also a featured article in Ruby Weekly.

The other day at our company standup, I mentioned that I was eager to read an article on [Concurrency in Minitest](http://chriskottom.com/blog/2014/10/exploring-minitest-concurrency/) that was featured in Ruby Weekly. One of my coworkers asked: “people still use Minitest?” My reply: “you mean you’re not using Minitest yet?”

I love Minitest. It’s small, lightweight, and ships with Ruby. It’s used by respected programmers like Aaron Patterson, Katrina Owen, Sandi Metz, and of course, DHH. Here’s a look at why Minitest remains a powerful and popular choice for testing Ruby code.

Witness the Firepower of this Fully Armed and Operational Testing Tool

Although I come from a family of programmers, I entered the profession by going to a bootcamp. I was a student in the second gSchool class. At that time, it was being taught by Jumpstart Lab. My instructors were Jeff Casimir, Franklin Webber, and [Katrina Owen](https://twitter.com/kytrinyx).

My classmates and I were brought up with TDD from day one. We practiced it in everything we did. The tool we used was Minitest. When it was first introduced, I scoffed a little because of the name.

“Why are we using a ‘mini’ testing framework? I want to use what the pros use,” I complained to Katrina.

“Minitest is a fully-featured testing framework, and is used in plenty of production apps,” was her reply.

I’ve been using it ever since. Here are seven reasons I think Minitest is the bee’s knees.

1. It’s Simple

Minitest ships with Ruby because it’s easy to understand, and can be written by anyone who knows the language. I love this simplicity because it makes it easy to focus on designing code. Just imagine what you want the code to do, write your assertion, and make it pass.

I recently reached out to Katrina to ask her thoughts on what’s good about Minitest. Its simplicity was at the top of her list:

“It’s simpler. There’s no ‘magic’ (just plain Ruby) […] When there is “magic” then it’s very easy to assume that you can’t understand it […] You can read through the [Minitest] source code, and understand what is going on.”

I also asked Sandi Metz what she thought of Minitest. She said:

“Minitest is wonderfully simple and encourages this same simplicity in tests and code.”

Minitest is low in sugar. All you need to know is assert for booleans, and assert_equal for everything else. If you want to test a negative case, use refute or refute_equal instead. For everything else, just write Ruby.

2. It’s Extensible

The relatively “low-level” status of Minitest makes it easy to customize tests. Katrina observes:

“Because Minitest is so simple, it’s not too hard to extend. It feels like a tool that I can shape to my needs, rather than a tool that I need to use the way it was intended.”

If you want to repeat an assertion in multiple contexts, you can write a custom assertion for it, and call it in as many tests as you need.

def assert_average_speed(swallow)
  # Do some additional work here
  assert_equal '11 MPS', swallow.speed
end

def test_african
  swallow = Swallow.new(type: 'African')
  assert_average_speed(swallow)
end

def test_european
  swallow = Swallow.new(type: 'European')
  assert_average_speed(swallow)
end

If you are testing something that requires nearly the same test to be run repeatedly, for example when testing a controller that has a user authentication before_action, you can create a shared example by including a module.

If you need even more control, you can create an object with custom behavior that inherits from Minitest::Test, and have your tests inherit from it. Doing so allows you to completely customize your test suite with new methods as you see fit.
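
For instance (a sketch with made-up names), a shared base class might bundle up a domain-specific assertion for the whole suite:

```ruby
require 'minitest/autorun'

# A hypothetical shared base class: every test class that inherits
# from it gets the custom assertion defined here.
class BaseTest < Minitest::Test
  def assert_valid_speed(value)
    assert_match(/\A\d+ MPS\z/, value, "#{value.inspect} is not a speed")
  end
end

class SpeedFormatTest < BaseTest
  def test_speed_format
    assert_valid_speed('11 MPS')
  end
end
```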

Finally, Minitest comes with hooks to help you easily write extensions to the gem. You can define a plugin by adding a minitest/XXX_plugin.rb file to your project folder, and Minitest will automatically find and require it. You can use extensions to define command-line options, custom reporters, and anything else you want Minitest to be able to do.
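
Here’s what a tiny plugin might look like. The “timestamp” plugin and its flag are invented, but the `plugin_<name>_options` and `plugin_<name>_init` hook names are Minitest’s real extension points:

```ruby
# minitest/timestamp_plugin.rb
# A hypothetical plugin. Minitest requires any minitest/*_plugin.rb
# file it finds on the load path, then calls the matching
# plugin_<name>_options and plugin_<name>_init hooks if they exist.
module Minitest
  def self.plugin_timestamp_options(opts, options)
    # register a command-line flag with Minitest's option parser
    opts.on '--timestamp', 'Print when the run started' do
      options[:timestamp] = true
    end
  end

  def self.plugin_timestamp_init(options)
    puts "Run started at #{Time.now}" if options[:timestamp]
  end
end
```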

3. It’s Flat If You Use Assert

The default syntax for Minitest is assert. Although the BDD expect syntax is available in minitest/spec, I recommend giving assert a try. Maybe you love expectations because they read (sort of) like English. Although assert doesn’t try to imitate natural language, it’s quite intuitive to use, and it has the added benefit of keeping your test files flat.

With nested context, describe, and it blocks, it can be difficult to remember what it refers to, and which before blocks are accessible in your scope. I find myself scanning indentation in BDD tests to figure out what scope I’m working in.

When you use assert syntax, your test file is flat. You start with a test class, then you write a bunch of test methods inside of it. It’s clear that the only variables available are those defined in the setup method or in the test itself. This flatness also means it gets painful quickly if your test is tied to too many dependencies. As Katrina puts it, “the lack of nested contexts means that I’m faced with the appropriate amount of pain when I make bad choices. It quickly becomes ugly if I have too many dependencies. I like that.”

Using assert also makes it easy to document the desired behavior of your code: just name the test method to describe what you are testing. If you’re really worried you’ll forget what the test was for, you can output a message to the console if the test fails:

test 'it calculates the air-speed velocity of an unladen swallow' do
  swallow = Swallow.new(wing_span: 30, laden: false, type: 'European')
  expected = '11 MPS'
  actual = swallow.average_speed
  assert_equal expected, actual,
    'The average speed of an unladen swallow was incorrect'
end

Minitest’s flatness is also beneficial when it comes to practicing a Test-Driven workflow. You can skip all of the tests in a file easily, without scanning through nested blocks. Then you can make them pass, one at a time.

4. It Lends Itself to A Good Test-Driven Workflow

Minitest is awesome for getting into a red/green/refactor loop. Write a test, watch it fail, make it pass, refactor. Repeat. A Minitest file is just a list of tests that are waiting for you to make them pass.

Plus, since you’re just writing Ruby, you can use a style like this to get the next test set up with a minimum of effort:

test 'something' do
  expected = # some code
  actual = # some simple result
  assert_equal expected, actual
end

If you want to repeat an assertion in different contexts, you can write a method for it, and call it in as many tests as you want to. Need a shared example? Include a module.

5. Minitest::Benchmark is Awesome

If you are dealing with large amounts of data, and performance is a concern, Minitest::Benchmark is your new best friend.

It lets you test your algorithms in a repeatable manner, to make sure that their algorithmic efficiency doesn’t accidentally change. You can collect benchmarks in “tab-separated format, making it easy to paste into a spreadsheet for graphing or further analysis”.

Here are a few of the assertions in Minitest::Benchmark that might be of interest:

  • assert_performance_constant
  • assert_performance_exponential
  • assert_performance_logarithmic
  • assert_performance_linear
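
A benchmark case might look like this (the class and block are my own invented example; assert_performance_linear is the real assertion). Each bench_ method runs its block over a range of input sizes and fits a curve to the timings:

```ruby
require 'minitest'
require 'minitest/benchmark'

# Invented example. n takes each value in bench_range
# (1, 10, 100, ... 10_000 by default); the assertion fails if the
# timings fit a linear curve with a correlation below the threshold.
class DoublingBench < Minitest::Benchmark
  def bench_doubling
    assert_performance_linear 0.99 do |n|
      n.times { |i| 2 * i }
    end
  end
end
```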

6. It’s Randomized By Default

Running the tests in a different order each time can help catch bugs caused by unintended dependencies between examples. Here’s how Aaron Patterson described the benefit of randomized tests in an episode of Ruby Rogues:

“I’m sure you’ve been in a situation where your […] test fails in isolation, but when you run [the entire test suite] it works. [T]hat’s typically because one test set up some particular environment that another test depended on. […] Since Minitest runs the test[s] in a random order, you can’t make one test depend [on] another, so you’ll see an error case.”
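
Here’s a deliberately order-dependent pair of tests (my own contrived example). With a randomized order, roughly half of all runs would fail; Minitest prints a `--seed` value at the start of each run so you can reproduce a failing order exactly. The loudly named escape hatch below is Minitest’s real API for forcing alphabetical order, which is how bad suites paper over the problem:

```ruby
require 'minitest/autorun'

# Contrived: test_b_push mutates state that test_a_starts_empty
# asserts on, so these only pass when run alphabetically. Remove the
# i_suck... line and a random order will eventually expose the bug.
class CounterTest < Minitest::Test
  i_suck_and_my_tests_are_order_dependent!

  COUNTS = []

  def test_a_starts_empty
    assert_empty COUNTS
  end

  def test_b_push
    COUNTS << 1
    assert_equal [1], COUNTS
  end
end
```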

7. It’s Faster

Minitest doesn’t create any matchers or example objects that have to be garbage collected. Many people have benchmarked Minitest against other frameworks, and it usually comes out ahead. Often the difference is marginal, but sometimes it’s significant.

Minitest also supports concurrent test runs. Although I thought this would lead to great speed gains, it turns out that it only makes a difference in JRuby and Rubinius. Matz’s Ruby Interpreter (MRI) doesn’t get speed gains from concurrency. Still, it’s nice to know the option is there, in case you’re using JRuby or Rubinius, or MRI changes in the future.

Have You Tried It?

I’ve talked to a lot of people that are surprised when I say I prefer Minitest. They often ask, “why should I switch to Minitest?” Perhaps a better question to ask is this one, posed by Ken Collins: “What is in [other testing frameworks] that you need that Minitest does not offer?”

Programming tools are a matter of individual taste. When I’ve asked programmers about Minitest, they’ve all expressed that they were not against RSpec or other frameworks. Sandi Metz said: “I’m agnostic about testing frameworks […] I also use RSpec and I find it equally useful in its own way.” Katrina Owen said: “I don’t dislike RSpec, but I do prefer Minitest.” In the end, I think most would agree that testing frameworks are a personal choice.

That said, if you haven’t used Minitest lately (or ever), why not check it out? If you’re curious, but feel like you still need some convincing, just try it! It’s a small investment. You’ll be up and running in a few minutes. Why not go try the first Ruby Exercism?

I hope this tour of Minitest’s features has been informative, and piqued your interest in this fabulous test framework. Is there anything I missed? Disagree? Let’s talk! Please share your point of view in the comments section, or tweet at me at @fluxusfrequency.

Other Resources

A Big Look At Minitest
An informative slide deck that explores the ins and outs of Minitest.

Bow Before Minitest
A more opinionated slide deck comparing Minitest with other testing tools.

Caching Asynchronous Queries in Backbone

This post originally appeared on Engine Yard.

I was working on a Backbone project with Bob Bonifield recently, when we came across a problem. We were building an administration panel to be used internally by a client, and there were a couple of views that needed to display the same information about users in a slightly different way. To prevent unnecessary AJAX calls, we decided to cache the result of Backbone.Model#fetch at two levels of our application.

The result: less network time and a snappier user experience.

Here’s how we did it.

Caching the Controller Call

We decided to use Brent Ertz’s backbone-route-control package. It makes separation of concerns in the Backbone router easier by splitting the methods associated with each route into controllers.

I’ll show how we set up the UsersController to handle the first level of caching. In this example, we’ll use backbone-route-control. If you weren’t using it, you could accomplish the same thing in the Backbone router.

First, we set up the app view and initialized a new Router as a property of it, passing in the controllers we wanted to use.

// Main

var UsersController = require('controllers/users');

var app = new AppView();

app.router = new Router({
  app: app,
  controllers: {
    users: new UsersController(app)
  }
});

Next, we defined the router and set it up to use the UsersController to handle the appropriate routes.

// Router

var BackboneRouteControl = require('backbone-route-control');

var Router = BackboneRouteControl.extend({
  routes: {
    'users/:id': 'users#show',
    'users/:id/dashboard': 'users#dashboard'
  },

  initialize: function(options) {
    this.app = options.app;
  }
});

module.exports = Router;

Caching the User at the Controller Level

After we got the router set up, we defined the UsersController and the appropriate route methods. We needed to wait until the user was loaded before we could generate the DOM, because we needed to display some data about the user.

We opted to cache the ID of the last user that was accessed by either the show or dashboard method, so that we wouldn’t repeat the fetch call when we didn’t need to. We set the result of the call to Backbone.Model#fetch (a promise) to a variable called userLoadedDeferred, and passed it down to the views themselves.

In doing so, we took advantage of the fact that, behind the scenes, fetch uses jQuery.ajax and returns a deferred object. When you save the result of a call to jQuery.ajax to a variable, attaching a .done or .fail callback to it will always yield the same payload once it has been fetched from the server.

// UsersController

var UsersController = function(app) {
  var lastUserId,
      userLoadedDeferred,
      lastUser;

  return {
    show: function(id) {
      this._checkLastUser(id);

      var usersView = new UserShowView({
        app: app,
        user: lastUser,
        userLoadedDeferred: userLoadedDeferred
      });

      app.render(usersView);
    },

    dashboard: function(id) {
      this._checkLastUser(id);

      var usersView = new UserDashboardView({
        app: app,
        user: lastUser,
        userLoadedDeferred: userLoadedDeferred
      });

      app.render(usersView);
    },

    _checkLastUser: function(id) {
      if (lastUserId !== id) {
        lastUserId = id;
        lastUser = new User({ id: id });
        userLoadedDeferred = lastUser.fetch();
      }
    }
  };
};

Caching the User at the Model Level

Although our UsersController was now caching the result of a fetch for a given user, we soon found that we also needed to fetch the user to display their information in a sidebar view.

Since the UsersController and the SidebarView were making two separate calls to the User model fetch method, we decided to do some more caching in the Backbone Model. We opted to save the results of the fetch call for 30 seconds, only making a new server request if the timer had expired.

This allowed us to simply call fetch from within the view, without needing to know whether the User model was making an AJAX call or just returning the cached user data.

Here’s what the code looked like in the model:

// User Model

var Backbone = require('backbone');
// Set the timeout length at 30 seconds.
var FETCH_CACHE_TIMEOUT = 30000;

var User = Backbone.Model.extend({
  fetch: function() {

    // bust the cache if we have never fetched, or if the last
    // fetch was more than 30 seconds ago
    var bustCache = !(this.lastFetched && new Date() - this.lastFetched < FETCH_CACHE_TIMEOUT);

    // if we're busting the cache, make a note of the current time,
    // hit the server, and cache the resulting deferred on
    // this.lastFetchDeferred
    if (bustCache) {
      this.lastFetched = new Date();
      this.lastFetchDeferred = Backbone.Model.prototype.fetch.apply(this, arguments);
    }

    // return the promise object that was cached
    return this.lastFetchDeferred;
  }
});

module.exports = User;

Busting the Cache

Later on in our development, we came across a situation where we needed to force a new fetch of a User right after updating some of their attributes. Because we were caching the result for 30 seconds, the newly updated attributes were not getting pulled from the server on our next fetch call. To overcome this, we needed to bust our cache manually. To make this happen, we changed our overridden fetch method to take an option that allowed us to force a refetch.

// User Model

  ...

  fetch: function(options) {
    // check if we passed the forceRefetch flag in the options
    var forceRefetch = options && options.forceRefetch;

    ...

    // update the check to account for the forceRefetch flag
    if (!this.lastFetchDeferred || bustCache || forceRefetch) {
    ...

Conclusion

Caching the User model in this app reduced our network time by quite a bit. Initially, we were making two server calls per route, because we had to fetch the user to display data in both the main view and the sidebar. After saving the result of the fetch in the controller, we were now only calling to the server once per User ID.

With the addition of model-level caching, we were also able to remove the duplicated call between the main views and the sidebar view, by saving the results of the fetch call for 30 seconds.

Overall, we reduced four calls per route to one call per 30 seconds. Making these adjustments helped make our application behave a lot more smoothly, and reduced server load in the process.

P.S. Have you implemented anything like this before? What are some of the tricks you use to make Backbone more efficient? Tweet at me @fluxusfrequency.

Building a Ruby List Comprehension

This post originally appeared on Engine Yard. It was also a featured article in Ruby Weekly.

As developers, we’re in the business of continually bettering ourselves. Part of that process is pushing ourselves to learn and use better code patterns, try new libraries, and pick up new languages. For me, the latest self-learning project has been picking up Python.

As I’ve worked with it, I’ve discovered the joy of list comprehensions, and I’ve been wondering what it would take to implement a similar syntax in Ruby. I decided to give it a try. This exercise yielded several insights into the inner workings of Ruby, which we’ll explore in this post.

Snake Handling

I’m primarily a Rubyist. I’ve always enjoyed the natural way that Ruby flows off the fingers, and heard that Python was similar. It sounded easy enough, especially since this article promised I could learn Python in ten minutes.

It took a little while to get used to some of the differences in Python, like capitalizing booleans and passing self into all of my methods–but they were mostly superficial differences. As I solved more and more Python problems on exercism.io, the syntax began to feel natural. I felt like everything I wrote in Python had an analog in Ruby. Except list comprehensions.

You Cannot Grasp The True Form Of Lists

In Python, CoffeeScript, and many functional languages, there is a really cool operation you can do called a list comprehension. It’s something like a map, but written in mathematical set builder notation, e.g. {2x | x ∈ ℕ, x < 100}.

Instead of doing this:

(1..100).select(&:even?).map { |i| 2 * i }

You do this:

[2 * x for x in xrange(1, 100) if x % 2 == 0]

You can even nest comprehensions:

[[2 * x for x in xrange(1, y)] for y in xrange(1, 100)]

These are so cool, in fact, that I decided to see if I could implement them in Ruby.

How It Works

My first thought in writing a list comprehension in Ruby was to make it look like Python. I wanted to be able to write square brackets with an expression inside of them. To make that happen, I would need to override the main []() method. It turns out that that method is an alias for the Array#[] singleton method. It’s not so easy to monkey patch, because you can’t really call super on it.

I decided to abandon this approach and create a new method called c(), that I would put in a module and include in main. Ideally, this method would take a single argument with this syntax: x for x in my_array if x.odd?. Since you can’t really pass an expression like this into a method and expect Ruby to parse it properly, I opted to pass it in as a string and parse it. I was into this idea, but not really interested in rewriting the Ruby source code.

Caught In A Loop

My first goal was to get the basic x for x in my_array part working.

I wrote a test:

class ComprehensionTest < Minitest::Test
  def setup
    extend ListComprehension
  end

  def test_simple_array
    result = c('n for n in [1, 2, 3, 4, 5]')
    assert_equal [1, 2, 3, 4, 5], result
  end
end

This was fairly straightforward to get passing.

module ListComprehension
  def c(comprehension)
    parts = comprehension.split(' ')
    if parts[0] == parts[2]
      eval(comprehension.scan(/\[.*\]/).last)
    end
  end
end

I continued working along, refactoring to have the module instantiate a Comprehension class and evaluate parts of the string:

module ListComprehension
  def c(expression)
    Comprehension.new(expression).comprehend
  end
end

class Comprehension
  def initialize(expression)
    @expression = expression
  end

  def comprehend
    # Do some parsing and call `eval` on some stuff
  end
end

But pretty soon, I ran into a problem.

No Scope

When I defined a variable in the test scope, then tried to evaluate it in the comprehension argument string, my Comprehension couldn’t access it.

For example, I wrote this test:

def test_array_variable
  example = [1, 2, 3, 4, 5]
  result = c('x for x in example')
  assert_equal [1, 2, 3, 4, 5], result
end

When I tried to call eval on example, I got:

NameError: undefined local variable or method `example' for #<Comprehension:0x007f9cec989fc8>

So how could I access the scope where example was defined? I did some googling, and discovered that you can access the calling scope much more easily from a block than from a string that you pass into a method.

With that in mind, I changed the interface of Comprehension to take a block instead of a string. To call c(), you would now write c{ 'x for x in example' } instead of c('x for x in example').

Inside of the Comprehension class, I did:

class Comprehension
  def initialize(&block)
    @expression = block.call
    @scope = block.send(:binding)
  end

  ...

end

Now I could call eval on the calling scope by doing:

def comprehend
  # some parsing
  collection = @scope.send(:eval, parts.last)
  # carry out some actions on the collection
end

I had no idea you could access the calling scope like this in Ruby. It opened my eyes to the whole world of accessing callers and callees in a way that I normally don’t think about in Ruby.
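
Putting those pieces together, a condensed working version of the whole idea might look like this. This is my own sketch, not the full library: it handles only the plain 'expr for var in collection' form, with no if clause.

```ruby
# A minimal, hypothetical list comprehension: the block returns the
# expression string, and the block's binding gives us the caller's scope.
class Comprehension
  def initialize(&block)
    @expression = block.call
    @scope = block.send(:binding)
  end

  def comprehend
    # 'expr for var in source' -> expr, var, source
    expr, rest = @expression.split(' for ', 2)
    var, source = rest.split(' in ', 2)

    # evaluate the collection in the caller's scope...
    collection = @scope.send(:eval, source)

    # ...then evaluate the expression once per element, with the loop
    # variable bound to that element
    collection.map { |item| eval("#{var} = #{item.inspect}; #{expr}") }
  end
end

def c(&block)
  Comprehension.new(&block).comprehend
end

doubles = [1, 2, 3]
c { '2 * x for x in doubles' }  # => [2, 4, 6]
```

Rebinding the loop variable with eval on every element is slow and fragile, but it keeps the sketch short.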

You Obviously Worked Hard On These Plans, Kernel Klink

I wasn’t all that happy with having to define my c() method in a module and include it in the main scope. I really just wanted it to be available automagically if you required comprehension.rb.

After poking around a bit, I found an article on metaprogramming that showed me how you can monkey patch Kernel itself.

After changing module ListComprehension to module Kernel, I was able to remove the setup method entirely from my test suite. I didn’t realize it was this easy to get methods defined in the main scope. Even though it is probably very wrong in many situations, it’s cool to get an understanding of how Ruby itself is put together.

Lessons Learned

I set out to write a list comprehension in Ruby, and in a way, I failed. I was hoping to be able to write an expression inside of square brackets and have Ruby parse it. I ended up settling for a string inside of a block instead.

What’s more, my comprehension implementation is lacking several features. It doesn’t support method calls in the conditional, so you can’t write c{'x for x in (0..10) if Array(x)'}. You can’t pass arguments to the conditional either, so you can’t do c{'x for x in (1..10) if x.respond_to?(:even?)'}. You can’t access an index while you’re looping. And perhaps most disappointing of all, you can’t nest comprehensions.

But despite these shortcomings, I felt like this exercise was a great success, because I learned three things:

  1. When you call [] to create an array, you’re really calling Array#[], which delegates to Array#new.

  2. You can access the calling scope of a block with block.send(:binding).

  3. You can monkey patch module Kernel to get methods available in main.

For me, the knowledge gained from this adventure was totally worth it. Although I did not create a library I would expect people to use, I learned a lot about how Ruby works, and had a great time solving the problem. To check out the full results, please visit my GitHub profile.

P.S. How would you have solved it? Maybe you’ve written a natively parsed list comprehension yourself? If so, tweet at me @fluxusfrequency.

Better SOA Development With Foreman and NGINX

This post originally appeared on Engine Yard. It also appeared on the Quick Left Blog.

MOAR!

Everyone knows more is better. More kittens, more money, more apps. Why settle for one Ruby project, when you can have three? We’ll take one Rails app for authorization and one to serve an API. Hey, let’s throw in a Sinatra proxy server serving up an AngularJS app while we’re at it! Now we’re cookin’!

There are many ways organizations stand to gain by splitting their application into multiple projects running in symphony. If we’re being good programmers and following the Single Responsibility Principle (SRP), it makes sense to embrace it at all levels of organization, from our methods and classes up through our project structure. To organize this on the macro level, we can use a Service Oriented Architecture (SOA) approach. In this article, we’ll explore some patterns that make it easier to develop SOA apps with a minimum of headaches.

Service Oriented Architecture

In the mid-2000s, some programmers began to organize their applications in a new way. Led by enterprise behemoths like Microsoft and IBM, the programming community saw a rise in the use of Web Services: applications that provide data or functionality to others. When you stick a few of these services together, and coordinate them in some kind of client-facing interface, you’ve built an application using SOA. The benefits of this approach remain relevant in modern web development:

  • Data storage is encapsulated
  • You can reuse services between multiple applications (e.g. authentication)
  • You can monitor messages sent between services for business intelligence
  • Services are modular, and can be composed into new applications without repetition of common functionality

A Sample Project

To illustrate some processes that make it easier to develop an SOA project, we’ll imagine that we’re building a project called Panda, composed of three services:

  • PandaAPI: A RESTful API that serves data about Giant Pandas
  • PandaAuth: Login page and user authentication service
  • PandaClient: An AngularJS app sitting atop a Sinatra proxy server

Setting Up GitHub

To deal with an SOA project like this, it’s helpful to make sure you have everything well-structured on GitHub, so that all developers working on it can get up to speed quickly, and stay in sync with each other. I recommend creating an organization, and setting up all of the service project repositories under that organization. For Panda, I would start with a structure that looks like this:

panda-org (organization)
|- panda-auth (repo)
|- panda-api (repo)
|- panda-client (repo)
|- processes (repo)

The first three repos will hold the actual service projects, and the processes repo will hold scripts and processes that are shared between them.

Git Your S#*& Together

It can be pretty annoying to have your projects all out of sync. To make it easier to keep things up to date, here’s a bash script you can use to pull all of your projects down and update them in one go. Inside of the processes folder, touch a file called git_update.sh.

#git_update.sh

#!/bin/bash

BRANCH=$1
: ${BRANCH:="master"}

cd ../panda-auth   && git checkout $BRANCH && git pull origin $BRANCH
cd ../panda-api    && git checkout $BRANCH && git pull origin $BRANCH
cd ../panda-client && git checkout $BRANCH && git pull origin $BRANCH

When executing this script, you can specify the branch by running sh ./git_update.sh <branch-name>.

We can do something similar for bundling and running migrations.

Create the bundle_and_migrate.sh file.

It should look like this:

#bundle_and_migrate.sh

#!/bin/bash

# Load RVM into the shell session *as a function*
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
export PATH="/usr/local/bin:/usr/local/sbin:~/bin:$PATH"

cd ../panda-auth   && bundle && bundle exec rake db:migrate
cd ../panda-api    && bundle && bundle exec rake db:migrate
cd ../panda-client && bundle && bundle exec rake db:migrate

Now the projects are all updated and ready to go, and we want to begin developing some features. We could write another script to go start rails server in each of our directories, but there is a better way.

Call In The Foreman

The foreman gem is my favorite way to manage multiple applications: it runs them all in a single terminal session. It’s pretty simple to get set up, and saves you having to run a lot of shell sessions (and a lot of headaches).

First, we’ll need to gem install foreman, to make sure we have the global executable available. Then, we’ll set up a Procfile to tell it which processes we want it to run. We’ll create ours in the processes directory, since that’s where we’re keeping things that pertain to all of the projects in our SOA app.

#Procfile

auth:   sh -c 'cd ../panda-auth   && bundle exec rails s -p 3000'
api:    sh -c 'cd ../panda-api    && bundle exec rails s -p 3001'
client: sh -c 'cd ../panda-client && bundle exec rackup -p 3002'

This will work great as long as all of your apps are running on the same gemset. If not, you will need to check out the subcontractor gem.

From the processes folder, run foreman start. Sweet. Now everything is all set up. Just open up your browser and navigate to http://localhost:3000. Oh, and pop open two more tabs for http://localhost:3001 and http://localhost:3002.

Man, wouldn’t it be nice if we could just run everything under a single host name?

NGINX, NGINX #9

To get around the problem of having three different localhosts, we can use NGINX. If you’re not familiar with NGINX, it’s an Apache alternative that acts as “a web server, a reverse proxy server and an application load balancer” (from the official site). We can use it to serve up all three of our apps from the same host, and make things a whole lot easier on ourselves.

To install NGINX, I recommend using Homebrew. If you have Homebrew, installation is as simple as brew install nginx. If you don’t, you can try one of these alternatives.

Once NGINX is installed, we’ll want to locate our nginx.conf. If you installed using Homebrew, it will be located at /usr/local/etc/nginx/nginx.conf. Otherwise, you’ll want to use ack, mdfind, or another search tool to locate it.

Once you’ve located it, open it in your text editor and locate the server section. Find the block that starts with location / (line 43 for me) and replace it with the following:

#nginx.conf

http {
  ...

  server {
    listen       8080;
    ...

    # Client
    location / {
      proxy_pass        http://127.0.0.1:3002;
      proxy_set_header  X-Real-IP  $remote_addr;
    }

    # Auth
    location /auth {
      proxy_pass        http://127.0.0.1:3000;
      proxy_set_header  X-Real-IP  $remote_addr;
    }

    # API
    location /api {
      proxy_pass        http://127.0.0.1:3001;
      proxy_set_header  X-Real-IP  $remote_addr;
    }

  ...
  }
  ...
}

Now start NGINX with the nginx command. With these proxy_pass settings in place, we should be able to see all of our apps at http://localhost:8080:

  • / takes us to the client app
  • /auth takes us to the auth app
  • /api takes us to the API app

Dealing With Redirects

One last tricky part of developing SOA apps is figuring out how to deal with url redirects between our apps. Let’s say that you want the client app to redirect users to the auth app if they haven’t logged in yet.

You would probably want to start with something like this in the client app:

#app/controllers/application_controller.rb

class ApplicationController < ActionController::Base
  before_action :check_user

  private

  def check_user
    if !session[:user_id]
      redirect_to '/auth'
    end
  end
end

Looks good, and it should work locally.

But it could be a problem if some of your apps are served from subdomains in production. Fortunately, there’s an easy way to get around this.

Create a config/service_urls.yml file in each project. Inside of it, define the url for each app:

#config/service_urls.yml

defaults: &defaults
  service-urls:
    panda-api: 'localhost:8080/api'
    panda-auth: 'localhost:8080/auth'
    panda-client: 'localhost:8080'

development:
  <<: *defaults

test:
  <<: *defaults

production:
  service-urls:
    panda-api: 'TBD'
    panda-auth: 'TBD'
    panda-client: 'TBD'
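
As a quick sanity check of how the &defaults alias and the <<: merge key behave, here's a trimmed-down slice of that file loaded with Ruby's yaml library (pure Ruby, inlined as a heredoc for illustration):

```ruby
require 'yaml'

# A slice of config/service_urls.yml, inlined for illustration
yaml = <<~YML
  defaults: &defaults
    service-urls:
      panda-auth: 'localhost:8080/auth'

  development:
    <<: *defaults
YML

# Psych 4 (Ruby 3.1+) disallows aliases in YAML.load by default,
# so fall back to unsafe_load when it's available
config = YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(yaml) : YAML.load(yaml)

config['development']['service-urls']['panda-auth'] # => "localhost:8080/auth"
```

The merge key is what lets development and test inherit the default URLs without repeating them.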

We’ll also need to register this configuration file in config/application.rb:

#config/application.rb

module PandaClient
  class Application < Rails::Application
    ...
    # In Rails 4.2, you can use:
    Rails.configuration.urls = config_for(:service_urls)[Rails.env]

    # For older Rails versions, use:
    # Rails.configuration.urls = YAML.load_file(Rails.root.join('config', 'service_urls.yml'))[Rails.env]
  end
end

With this configuration in place, we can update the url redirect to use the configured value, so it will work in all environments. That will look something like this:

#app/controllers/application_controller.rb

class ApplicationController < ActionController::Base
  ...
  def check_user
    if !session[:user_id]
      redirect_to auth_url
    end
  end

  def auth_url
    @auth_url ||= Rails.application.config.urls['service-urls']['panda-auth']
  end
end

With these changes in place, our applications will now redirect to the appropriate url in all environments.

All Your App Are Belong To Us

By now, you should have a better idea of what it takes to develop an application using SOA principles. We’ve taken a look at using shell scripts to keep our files in sync, foreman to run several servers at once, and NGINX to pull everything together under a single host address that makes it easier to work with all our services in the browser.

Juggling several services can be pretty confusing, but if you start with the right setup, it makes things a lot easier. All your apps will be under control if you manage them from a central place, and the strategies discussed in this article should help make the process less painful. Good luck!

P.S. What tricks do you use when you’re developing SOA apps? Did I leave anything out? If you think so, tweet at me @fluxusfrequency.

Getting Started With Active Job

This post originally appeared on Engine Yard. It was also a featured article in Ruby Weekly.

With the announcement of Rails 4.2 came some exciting news: Rails now has built-in support for executing jobs in the background using Active Job. The ability to schedule newsletters, follow-up emails, and database housekeeping tasks is vital to almost any production application. In the past, developers had to hand-roll this functionality, and configuration varied between different queueing services. With the release of Rails 4.2, setting up jobs to be executed by workers at a later time is standardized. In this article, we’ll take a look at how to set up Active Job and use it to send a follow-up email to a new user.

Updating Rails

You’ll need Rails 4.2.0.beta1 or greater if you want Active Job available by default (in older versions of Rails, you can require it as a gem). This tutorial is based on Rails 4.2.0.beta2 (edge Rails). If you want to use edge Rails, use gem 'rails', github: 'rails/rails' in your Gemfile, and run bundle update.

Setting Up Resque

In order to send emails outside of our main application process, we’ll need to make use of a queueing system. There are many queueing technologies available, and Active Job abstracts the differences between them. Today we’ll use Resque, as it’s widely-used and stable.

To use Resque, you’ll need to make sure you have Redis installed. If you don’t, I recommend getting it with Homebrew. Otherwise, you can follow the instructions for download from the official site. Once it’s set up, make sure redis-server is running.

The next step is to install and configure the Resque gem. We’ll also need resque-scheduler to use Active Job. Add them to the Gemfile with gem 'resque' and gem 'resque-scheduler' and run bundle install. We’ll also need to create a Resque configuration file:

#config/initializers/resque.rb

Resque.redis = Redis.new(:url => 'redis://localhost:6379')
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }

We’ll also require the Resque and Resque Scheduler rake tasks, so we can start our workers and scheduler with rake:

#lib/tasks/resque.rake

require 'resque/tasks'
require 'resque/scheduler/tasks'

namespace :resque do
  task :setup do
    require 'resque'
    require 'resque-scheduler'
  end
end

We can now start a worker with QUEUE=* rake environment resque:work. If everything’s working right, we should be able to see it in the Resque console. Run resque-web and visit http://0.0.0.0:5678/overview. If you see “0 of 1 Workers Working”, all’s well. We’ll also need to boot up the scheduler in a separate process with rake environment resque:scheduler.

Creating the Mailer

Now that we have a worker, we need an email for it to send. Let’s imagine that we want to send a follow up email to a user who recently registered for our site. We’ll create a UserMailer, with a follow_up_email method that takes an email address.

#app/mailers/user_mailer.rb

class UserMailer < ActionMailer::Base
  default from: 'noreturn@example.com'

  def follow_up_email(email)
    mail(
      to: email,
      subject: 'We hope you are enjoying our app'
    )
  end
end

We’ll also need to write a follow-up email template.

#app/views/user_mailer/follow_up_email.text

Hey, we saw that you recently signed up for our app.
We hope you're enjoying it!

Creating the Job

Now that we have a working mailer, we can set up Active Job. All we really need to do is configure it to use the Resque adapter.

#config/initializers/active_job.rb

ActiveJob::Base.queue_adapter = :resque

Next, we’ll create a job that tells the background worker to send the email. The conventions for a job include giving it a queue name with queue_as and defining a perform method.

#app/jobs/follow_up_email_job.rb

class FollowUpEmailJob < ActiveJob::Base
  queue_as :email

  def perform(email)
    UserMailer.follow_up_email(email).deliver_now
  end
end

Now when a user signs up, we can have the UsersController enqueue the job for execution at a later time. Although you would probably delay the job a few days in a real application, we’ll just wait 10 seconds for easier testing.

#app/controllers/users_controller.rb

class UsersController < ApplicationController
  def new
    @user = User.new
  end

  def create
    @user = User.create(user_params)
    FollowUpEmailJob.new(@user.email).enqueue(wait: 10.seconds)
    # redirect somewhere
  end
end
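
As an aside, Active Job also provides a set method for options like wait, so the enqueue line above could equivalently be written as follows (a Rails fragment, shown for comparison):

```ruby
# Equivalent to FollowUpEmailJob.new(@user.email).enqueue(wait: 10.seconds)
FollowUpEmailJob.set(wait: 10.seconds).perform_later(@user.email)
```

Both styles schedule the same delayed job; set/perform_later is the form you’ll see most often in the Rails guides.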

To make this work, we’ll need some routes and a view template:

#config/routes.rb

Rails.application.routes.draw do
  resources :users, only: [:new, :create]
end
#app/views/users/new.html.erb

<%= form_for @user do |f| %>
  <%= f.email_field :email %>
  <%= f.submit %>
<% end %>

Setting Up Mailcatcher

Before we try our job, we’ll want to make sure we can intercept the emails we’re expecting the mailer to send. To achieve this, we’ll use the mailcatcher gem. Do a global install with gem install mailcatcher, and run mailcatcher. Once we configure Action Mailer to send the emails to localhost:1025 via smtp, we’ll be able to view intercepted emails at http://127.0.0.1:1080.

#config/environments/development.rb

Rails.application.configure do
  ...
  config.action_mailer.delivery_method = :smtp
  config.action_mailer.smtp_settings = { :address => "localhost", :port => 1025 }
end

Trying It Out

Now everything is set up. To try it out, we’ll sign up as a new user, watch the job get enqueued, and then catch the mail in Mailcatcher. At this point, we have four processes running:

  • mailcatcher
  • QUEUE=* rake environment resque:work
  • rake environment resque:scheduler
  • rails server

In your browser, view the Resque dashboard at http://0.0.0.0:5678. In another tab, visit http://127.0.0.1:1080 to see the Mailcatcher dashboard.

Now, for the moment of truth. Visit localhost:3000/users/new and sign up as a new user. Ten seconds later, a new job will appear in the email queue of the Resque dashboard. Just afterward, the email will appear in Mailcatcher.

Using Active Job with Action Mailer

The pattern we’ve written here works for scheduling any job. But, since Active Job is now baked into Action Mailer, we can also schedule the job directly with the UserMailer in the UsersController.

#app/controllers/users_controller.rb

class UsersController < ApplicationController
  ...
  def create
    ...
    UserMailer.follow_up_email(@user.email).deliver_later!(wait: 10.seconds)
  end
end

Conclusion

Active Job makes scheduling background jobs easier. It’s also a great way to set up your job infrastructure without knowing too much about what queueing system you’re using. If you needed to switch to Sidekiq or Delayed Job in the future, it would be as simple as setting ActiveJob to the appropriate adapter.

It can be a little bit tricky to get Active Job set up correctly. Hopefully, this tutorial made the process a little more transparent for you.

Until next time,

Happy hacking!

Wrapping Your API in a Custom Ruby Gem

This post originally appeared on Engine Yard.

Introduction

In the modern web, API-based projects are becoming the norm. Why? For one thing, APIs are necessary to serve Single Page Applications, which are all the rage right now. From a business standpoint, APIs give companies a new way to charge others for access to their data. If you are part of a company that offers such a service, a great way to generate interest in your API is to offer a Ruby gem that makes fetching and consuming your data easy for Ruby developers.

Today, we’ll take a look at how to wrap an imaginary API in a new Ruby gem and share it with the world. If you want to follow along at home, you can clone the project from my GitHub account.

Our API

Let’s pretend we have an application called Ben’s Benzes that serves data about cars for sale. We’re exposing a RESTful API so that developers from other companies can serve our car data on their websites. For our first iteration, here are the routes we’ve set up:

Get info about a certain car currently for sale: GET http://www.bensbenzes.com/api/v1/cars/active/:id

Get all the cars that are currently for sale: GET http://www.bensbenzes.com/api/v1/cars/active

Setting Up the Gem

We’ll be using bundler, so begin by making sure you have that installed (you probably do). We’ll create the gem by running bundle gem <gem-name> from the command line. I’m going to use benzinator as the name of my gem. That name is now taken, so you’ll have to come up with your own. Open up the project directory and you should see:

├── Gemfile
├── LICENSE.txt
├── README.md
├── Rakefile
├── benzinator.gemspec
└── lib
    ├── benzinator
    │   └── version.rb
    └── benzinator.rb

Let’s open up benzinator.gemspec and do a little configuration. Update the following lines with your name, email and a summary and description of the gem.

...
spec.authors       = ["Ben Lewis"]
spec.email         = ["blewis@example.com"]
spec.summary       = %q{Gem to wrap BensBenzes.com API}
spec.description   = %q{Gem to wrap BensBenzes.com API}
...

While we’re in here, let’s add the dependencies we’ll be using, right after bundler and rake. We’ll add testing tools as development dependencies, as well as hard dependencies on Faraday and json.

...
spec.add_development_dependency "minitest"
spec.add_development_dependency "vcr"
spec.add_development_dependency "webmock"

spec.add_dependency "faraday"
spec.add_dependency "json"
...

Writing a Test

We’ll need to make a test folder, and put a test_helper.rb into it. We’ll use the helper to pull in our gem and testing dependencies. We’ll also configure VCR and Webmock, which we’re using to stub out our server responses so that our gem isn’t dependent on access to the API for testing. See the VCR documentation for more about how this works.

#test/test_helper.rb
require './lib/benzinator'
require 'minitest/autorun'
require 'webmock/minitest'
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = "test/fixtures"
  c.hook_into :webmock
end

Don’t forget to create the test/fixtures folder so VCR has somewhere to put the fixtures. With that done, we’ll write the first test for our gem. Our goal is to create an object called Benzinator::Car that exposes #all and #find methods to wrap calls to our API. Let’s start by making sure that object exists:

#test/car/car_test.rb
require './test/test_helper'

class BenzinatorCarTest < Minitest::Test
  def test_exists
    assert Benzinator::Car
  end
end

Creating The Wrapper Model

If we now run ruby test/car/car_test.rb, we get this error: uninitialized constant Benzinator::Car. Looks like it’s time to write some code:

#lib/benzinator/car.rb
module Benzinator
  class Car
  end
end

We’ll also need to require it in lib/benzinator.rb:

require_relative "benzinator/version"
require_relative "benzinator/car"
...

Now if we run the test, it passes. Let’s write another one to make sure that our Benzinator::Car model can give back the data for a car. Let’s imagine that an API call to http://www.bensbenzes.com/api/v1/cars/active/68 returns this JSON object:

"{
  \"id\": 68,
  \"make\": \"Honda\",
  \"model\": \"Civic\",
  \"year\": \"1996\",
  \"color\": \"Blue\",
  \"vin\": \"XXXXXXXXXXXXXX\",
  \"dealer_id\": 34
}"
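
Before wiring up the model, it’s worth confirming what JSON.parse gives us for that body. A pure-Ruby check (the string below is the same payload, unescaped):

```ruby
require 'json'

# The response body from GET /api/v1/cars/active/68, unescaped
body = '{"id":68,"make":"Honda","model":"Civic","year":"1996",' \
       '"color":"Blue","vin":"XXXXXXXXXXXXXX","dealer_id":34}'

attributes = JSON.parse(body)
attributes["make"]      # => "Honda"
attributes["dealer_id"] # => 34
```

Note that the keys come back as strings, not symbols, which is why the model below reads attributes["id"] rather than attributes[:id].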

We’ll want to make sure that our Benzinator::Car object has getter convenience methods for all of the fields shown for each car. Given these API results, we could write a test like this:

#test/car/car_test.rb
  ...
  def test_it_gives_back_a_single_car
    VCR.use_cassette('one_car') do
      car = Benzinator::Car.find(68)
      assert_equal Benzinator::Car, car.class

      # Check that the fields are accessible by our model
      assert_equal 68, car.id
      assert_equal "Honda", car.make
      assert_equal "Civic", car.model
      assert_equal "1996", car.year
      assert_equal "Blue", car.color
      assert_equal "XXXXXXXXXXXXXX", car.vin
      assert_equal 34, car.dealer_id
    end
  end
end

Running this test, we get undefined method 'find' for Benzinator::Car:Class. Let’s go define it.

#lib/benzinator/car.rb
require 'faraday'
require 'json'

API_URL = "http://www.bensbenzes.com/api/v1/cars/active"

module Benzinator
  class Car
    def self.find(id)
      response = Faraday.get("#{API_URL}/#{id}")
      attributes = JSON.parse(response.body)
    end
  end
end

Now the test says we forgot to make it a Benzinator::Car model:

Expected: Benzinator::Car
Actual: Hash

We can fix that, and make the attributes into getters at the same time.

#lib/benzinator/car.rb
...
module Benzinator
  class Car
    attr_reader :id, :make, :model, :year, :color, :vin, :dealer_id
    def initialize(attributes)
      @id = attributes["id"]
      @make = attributes["make"]
      @model = attributes["model"]
      @year = attributes["year"]
      @color = attributes["color"]
      @vin = attributes["vin"]
      @dealer_id = attributes["dealer_id"]
    end

    def self.find(id)
      ...
      new(attributes)
    end
    ...
  end
end

That should take care of it. Now to test the #all method.

Let’s imagine that a call to http://www.bensbenzes.com/api/v1/cars/active responds with an array of 64 cars, with this Honda being the first one. The API gives back 64 cars today, but we hope there will be 6000 listed tomorrow. That’s why we’re using VCR to save the result of the call as a fixture.

#test/car/car_test.rb
class BenzinatorCarTest < Minitest::Test
  ...
  def test_it_gives_back_all_the_cars
    VCR.use_cassette('all_cars') do
      result = Benzinator::Car.all

      # Make sure we got all the cars
      assert_equal 64, result.length

      # Make sure that the JSON was parsed
      assert result.kind_of?(Array)
      assert result.first.kind_of?(Benzinator::Car)
    end
  end
end

Running this, we get undefined method 'all' for Benzinator::Car:Class. Let’s define it:

#lib/benzinator/car.rb
...
module Benzinator
  class Car
    ...
    def self.all
      response = Faraday.get(API_URL)
      cars = JSON.parse(response.body)
      cars.map { |attributes| new(attributes) }
    end
  end
end

Sweet success! The tests pass!
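
As an aside, the hand-written initializer in Benzinator::Car can get tedious as the field list grows. One alternative is to generate the readers and assignments from a single field list. A pure-Ruby sketch (not part of the gem as written above):

```ruby
class Car
  FIELDS = %w[id make model year color vin dealer_id].freeze

  attr_reader(*FIELDS.map(&:to_sym))

  def initialize(attributes)
    # Assign each known field from the parsed JSON hash
    FIELDS.each { |field| instance_variable_set("@#{field}", attributes[field]) }
  end
end

car = Car.new("id" => 68, "make" => "Honda")
car.make # => "Honda"
car.vin  # => nil (field not present in the hash)
```

The tradeoff is that each attribute is a little harder to grep for, so the explicit version is arguably better for a small, stable schema.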

Publishing and Using Our Gem

Now that our gem works, we can publish it to RubyGems. This is a pretty easy process. First, we’ll set the version to 0.1.0:

#lib/benzinator/version.rb
module Benzinator
  VERSION = "0.1.0"
end

Then we can bundle it up by running gem build benzinator.gemspec. This will create a benzinator-0.1.0.gem file in our project directory. To publish it, all we have to do is run gem push benzinator-0.1.0.gem. You’ll be prompted for your RubyGems username and password, which you’ll need to create if you don’t have an account yet. After you enter your credentials, your gem is live!

Now anyone can use our gem in their Ruby projects. All they have to do is add it to their Gemfile with gem 'benzinator', and run bundle.

Conclusion

Winning! Now the whole wide world can access the Ben’s Benzes API from their Ruby project, with convenience methods to make things easier to work with.

For our next iteration, we could get a lot more in depth. We might want to add the ability to create, edit, or destroy cars. If we decided to do that, we might first build some sort of authentication process into the gem. Once users have gotten a taste of our data and rely on it, our API might get so popular that we need to limit the number of API calls a user can make per day. We could then write a subscription service to allow users greater access to our data at a cost.

One last note: you might want to use this technique to wrap someone else’s API, too! If you’re using a service that doesn’t offer a gem for its API, you can always write one and release it as open source!

Happy hacking!

Using Services to Keep Your Rails Controllers Clean and DRY

This post originally appeared on Engine Yard.

It also appeared in Ruby Weekly.

Using Services to Keep Your Rails Controllers Clean and DRY

We’ve heard it again and again, like a nagging schoolmaster: keep your Rails controllers skinny. Yeah, yeah, we know. But easier said than done, sometimes. Things get complex. We need to talk to some other parts of our codebase or to external APIs to get the job done. Mailers. Stripe. External APIs. All that code starts to add up.

Ah Tss Push It…Push It Down the Stack

If we ask: “where, pray tell, should this code live?”, the answer comes like a resounding chorus: “push it down to the model layer!”

But what if we want to keep our models simple? They should actually reflect the business objects related to our app, according to Domain Driven Design and other approaches.

Time to get custom!

Crack open the old app folder. What do you see? The usual fare? Guess what? Just because Rails comes with six folders doesn’t mean we’re restricted to six types of object. Let’s make some new folders!

At Your Service

I like to create various kinds of service objects in my Rails app. Tom Pewiński’s recent article in Ruby Weekly does a great job of covering how to write service objects that help complete an action, like create_invoice or register_user. While he puts all of his service objects into a single services folder, I like to get a little more granular. I’ll typically create an actions folder for things like create_invoice, and folders for other service objects such as decorators, policies, and support. I also use a services folder, but I reserve it for service objects that talk to external entities, like Stripe, AWS, or geolocation services.

Here’s how the app folder might look with all of these subfolders in it:

app
|- actions
|- assets
|- controllers
|- decorators
|- models
|- policies
|- services
|- support
|- views

Earning Our Stripes

Let’s give it a try, right now! We’ll make a credit card service that uses the Stripe gem.

We’ll create an app/services folder and touch a credit_card_service.rb inside of it. It’s going to be a Plain Old Ruby Object™ (PORO).

It’s probably a good idea to wrap the calls to the Stripe gem in local methods like external_customer_service and external_charge_service, in case we ever want to switch over to Braintree or something else. On object initialization, we’ll use dependency injection to accept charge amounts, card tokens, and emails. Our service will expose charge and create_customer methods to hook our controllers into.

# app/services/credit_card_service.rb

require 'stripe'

class CreditCardService
  def initialize(params)
    @card = params[:card]
    @amount = params[:amount]
    @email = params[:email]
  end

  def charge
    begin
      # This will return a Stripe::Charge object
      external_charge_service.create(charge_attributes)
    rescue
      false
    end
  end

  def create_customer
    begin
      # This will return a Stripe::Customer object
      external_customer_service.create(customer_attributes)
    rescue
      false
    end
  end

  private

  attr_reader :card, :amount, :email

  def external_charge_service
    Stripe::Charge
  end

  def external_customer_service
    Stripe::Customer
  end

  def charge_attributes
    {
      amount: amount,
      card: card
    }
  end

  def customer_attributes
    {
      email: email,
      card: card
    }
  end
end
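
The begin/rescue blocks above both follow the same shape: attempt the external call, and turn any StandardError into a false return. That pattern in isolation (pure Ruby; the service name is hypothetical):

```ruby
class FlakyService
  def call
    do_work
  rescue # a bare rescue catches StandardError and its subclasses
    false
  end

  private

  def do_work
    raise 'simulated network error'
  end
end

FlakyService.new.call # => false
```

In production you’d likely rescue specific classes (e.g. Stripe::StripeError) rather than using a bare rescue, so that genuine bugs in your own code still surface.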

Hook it Up

Now we can write some clean, easily maintainable controller code. We keep the registration logic private, and if we ever want to change it, the controller doesn’t have to know anything about it.

# app/controllers/users_controller.rb

class UsersController < ActionController::Base
  def create
    @user = User.create(user_params)

    registration = register_with_credit_card_service
    if registration
      # Save the id from the Stripe::Customer object
      add_customer_id_to_user(registration["id"])
      ...
    else
      ...
    end
  end

  private

  ...

  def register_with_credit_card_service
    CreditCardService.new({
      card: params[:stripe_token],
      email: params[:user][:email]
    }).create_customer
  end

  def add_customer_id_to_user(id)
    @user.update_attributes(external_customer_id: id)
  end
end

Test It Out

Since we’re just using a PORO, this should be nice and easy to test. Let’s make a test/services folder. If you want to add its contents to your rake tasks, try this. Let’s assume that we already have a test_helper.rb that includes the Rails helpers in ActiveSupport::TestCase and mocha.

# test/services/credit_card_service_test.rb

require 'test_helper'

class CreditCardServiceTest < ActiveSupport::TestCase
  test 'it creates charges' do
    params = {
      amount: 500,
      card: 'TOKEN'
    }
    Stripe::Charge.expects(:create).with(params).returns(true)
    # This will return false if it fails
    charge = CreditCardService.new(params).charge
    assert charge
  end

  test 'it creates customers' do
    params = {
      email: 'test@example.card',
      card: 'TOKEN'
    }
    Stripe::Customer.expects(:create).with(params).returns(true)
    # This will return false if it fails
    customer = CreditCardService.new(params).create_customer
    assert customer
  end
end

Keep It Clean

The last thing you want in your Rails app is a bunch of complicated controllers that are hard to change. Though it may sound pedantic, those voices chanting “Skinny Controller, Fat Model” are right. It’s easy to get caught in the trap of answering “where should I put this code” with “let’s open the app folder and see what cubbies I was given”. Don’t be afraid to take your Rails project by the horns! You can create your own actions, decorators, support objects, and services. Start including these patterns in your Rails app, and your code will come out clean and DRY: so fresh, so clean!

Looking for a Needle in a Haystack! (or Using Ack to Improve Your Development Workflow)

This post originally appeared on the Quick Left Blog.

or Using Ack to Improve Your Development Workflow

Introduction

Every day when I’m programming, I invariably come to a point where I’m looking for a certain line of code in my project. Usually, there’s a pattern that I saw and want to reuse, and I can’t find it. I could just use my editor’s “find in files” feature and look for it, but sometimes I need more fine grained control. What if I want to find all the lines of code that don’t contain a certain phrase? What if I want to search on a Regular Expression? What if I want to easily save the search results to a file?

When I need more control in finding something, I turn to my favorite command line search tool: ack.

Why Ack?

Why should you use ack, when your unix distribution comes with find, mdfind, and grep? Well, because it has these advantages:

  • It only searches the stuff you care about. It excludes Git, Subversion, binary files, and other irrelevant file types.
  • It can search all the files in a certain language, regardless of file extension.
  • It is a lot easier to remember the command flags to scope your search than with other tools.

Installing Ack

To install ack, I would suggest using Homebrew. If you have it installed, just type brew install ack.

You can also install it with a package manager on many other Unix distributions (as well as MacPorts, OpenBSD, and FreeBSD), or just download it with this command:

curl http://beyondgrep.com/ack-2.12-single-file > ~/bin/ack && chmod 0755 !#:3

Basic Search

To do a basic search for a string in a file with ack, the syntax is as simple as:

ack <search-pattern> <directory>

This will recursively search for the pattern in the specified directory. I usually just use a simple text search. For example, if you were to do this:

mkdir tmp && cd tmp
echo "Hello, world" > hello.txt
echo "Goodbye, world" > goodbye.txt
echo "Hello, squirrel" > hi.txt
echo "Goodbye, squirrel" > bye.txt

Then you could search for the files containing Hello with:

ack Hello .

The . here means search the current directory. This is a recursive search by default. To turn off recursion, pass the -n flag, like this:

mkdir child-directory
mv hello.txt child-directory
ack -n Hello .

Now you’ll only see one result, hi.txt, because it’s the only file with ‘Hello’ in it that lives directly in the current directory.

Sorting

Sometimes it would be nice to sort your search results. Easy enough!

ack --sort-files -l <search-pattern> <directory>

Inverse Search

One of my favorite flags is -v, which lets you do a search for all the files that don’t match a given pattern. It comes in pretty handy.

ack -v <search-pattern> <directory>

Careful, though, because that could be a lot of output. You might want to shovel it into a file, or at least pipe it to less.

Searching By File Type

One of my favorite uses for ack is searching for all the files in a certain language. Ack supports Ruby, Python, JavaScript, Shell, Clojure, HTML, and a bunch of other file types. To see a full listing, do:

ack --help-types

If you want to, you can add new types, change the ones that are already there, or delete them, with --type-add, --type-set, and --type-del, respectively.

Assuming you’re good with the default types, let’s take it for a spin. Want a list of “all the things” Ruby? Open a Rails project and run this:

ack -f --ruby > all-ruby-files.txt

Want to find all the Ruby files that call puts?

ack --ruby puts .

Want to find all the files that say hello, but aren’t Ruby files?

ack --type=noruby hello .

Using an .ackrc file

If you do a lot of acking, and you want to set up some ack options systemwide, or for your specific project, you can define an .ackrc file. To have ack generate one for you, run:

ack --create-ackrc > .ackrc

Then, if you run a search in the current directory, the settings in the .ackrc will be used.

You can also put your ack options into an ACK_OPTIONS environment variable like so:

export ACK_OPTIONS="--nocolor"

If you defined some ack options in an .ackrc or an environment variable and want to run a search without those options, you can also turn them off with:

ack --noenv <search> <directory>

Advanced Searches

If you’re diggin’ it, here are some other fun things you can do with Ack.

Case Insensitive Search

ack -i <search-pattern> <directory>

Match Whole Words Only

ack -w <search-pattern> <directory>

Only Output the Filenames, Without Highlighted Text

ack -l <search-pattern> <directory>

Or, to list the files that don’t contain the pattern:

ack -L <search-pattern> <directory>

Just One Result (AKA “I’m Feeling Lucky”)

ack -1 <search-pattern> <directory>

Vim Integration (AckVim)

If you’re a vim user, you might want to check out the AckVim Plugin. It lets you run ack inside of vim and see the results in a split window.

Once you’ve added it with Git or Vundle, it’s as easy as typing:

:Ack [options] {pattern} [{directories}]

It’s pretty nice to have around!

Ack the Cat, Cathy and Bar

Well, you’ve made it this far, so here’s some candy for you. Run these:

ack --thpppt
ack --cathy
ack --bar

Goodbye!

Trying to find what you need in your project can sometimes feel like looking for a needle in a haystack.

Seriously, though, I hope this little tour of the small command line tool ack improves your programming experience. Cheers!

AngularJS Unit Testing, for Real Though

This post originally appeared on the Quick Left Blog.

It also appeared as a featured article in JavaScript Weekly.

Introduction

When it comes to contemporary web development, AngularJS is the new hotness. Its unique approach to HTML compilation and two-way data binding make it an effective tool for efficiently building client-side web apps. When I found out that Quick Left would be using it to build a production application for one of our clients, I was excited to learn as much about it as I could. I scoured the interwebs for every tutorial and walkthrough to be found on the Google Machine. They were really helpful in understanding directives, template compilation, and the event loop, but when it came to testing, I found that the topic was often hand-waved.

I was trained to practice Test-Driven Development, and I feel like something’s out of place whenever I’m out of the “Red-Green-Refactor” flow. Since we were still learning the ropes for effective testing in Angular, the team sometimes had to rely on ‘test-after’ development. This started to make me feel itchy, so I decided to focus on figuring out testing. I sprinted on it for a week, and we soon went from about 40% test coverage to 86%. (By the way, if you haven’t tried it yet, check out Istanbul for measuring your test coverage in JS apps.)

Today I’d like to share some things I learned along the way. As good as the Angular docs are, testing a production app is rarely as simple as the examples you’ll find there. There are a lot of gotchas that pop up along the way, and I had to struggle my way through figuring out how to make things work. I found several workarounds that came in handy time and time again. In this article, we’re going to look at some of them:

  1. Reusable End-to-End (e2e) pages
  2. Dealing with functions that return a promise
  3. Mocking controller and directive dependencies
  4. Accessing child and isolate scopes

This article is written for intermediate to advanced developers using AngularJS to build production applications, who would like to reduce some of the pain of testing. It is my hope that feeling secure in testing workflow will enable the reader to practice a TDD workflow and build a more solid app.

Test Tools

There are many test frameworks and tools available to the Angular developer, and you may already have preferences around tooling. Here’s the setup that we chose, and we’ll be using for the rest of this article:

  • Karma: The official AngularJS team test runner. We’ll use it to launch Chrome, Firefox, and PhantomJS.

  • AngularMocks: Provides support for injecting and mocking Angular services in unit tests.

  • Protractor: The feature testing tool for AngularJS, which launches your app in a browser and interacts with it via Selenium.

  • Mocha: A Node.js-based test framework. Gives us the ability to write describe blocks and make assertions.

  • Chai: Assertion library that hooks into Mocha, and gives us access to Behavior-Driven Development assertions like expect, should, and assert. In this example, we’ll be using expect.

  • Chai-as-promised: This Chai plugin is really helpful for dealing with function calls that return a promise. It gives us the ability to say things like: expect(foo).to.be.fulfilled, or expect(foo).to.eventually.equal(bar).

  • Sinon: Stubbing and mocking library. We’ll use it to mock out directive and controller dependencies in unit tests, and to check that functions are being called with the correct arguments.

  • Browserify: Allows us to easily require modules of code between files in the project.

  • Partialify: Allows us to require HTML templates inline in our AngularJS directives.

  • Lodash: Utilities used to extend JavaScript and make it easier to work with.
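To give a sense of how several of these tools fit together, here’s a minimal karma.conf.js sketch. The file globs, browser choice, and single-run setting here are assumptions for illustration, not the project’s actual configuration:

```javascript
// karma.conf.js -- a minimal sketch wiring Karma, Mocha, and Browserify
// together. Paths and options are illustrative assumptions.
module.exports = function(config) {
  config.set({
    // karma-browserify must come before the test framework
    frameworks: ['browserify', 'mocha'],

    // Where to find the unit tests (assumed layout)
    files: ['test/unit/**/*_test.js'],

    // Run each test file through Browserify so require() works
    preprocessors: {
      'test/unit/**/*_test.js': ['browserify']
    },

    browsers: ['PhantomJS'],
    singleRun: true
  });
};
```

With a config like this in place, `karma start` bundles the tests and runs them headlessly.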

Setting Up A Test Helper

We’ll start by creating a test helper that will load in the necessary dependencies. Here, I’m pulling in Angular Mocks, Chai, Chai-as-promised, and Sinon.

// test/test-helper.js

// Load in our actual project
require('widgetProject');

// Dependencies
require('angular-mocks');
var chai = require('chai');
chai.use(require('sinon-chai'));
chai.use(require('chai-as-promised'));

var sinon = require('sinon');

beforeEach(function() {
  // Create a new sandbox before each test
  this.sinon = sinon.sandbox.create();
});

afterEach(function() {
  // Cleanup the sandbox to remove all the stubs
  this.sinon.restore();
});

module.exports = {
  rootUrl: 'http://localhost:9000',
  expect: chai.expect
}

Getting Started: Top-Down Testing

I’m a big proponent of a top-down testing style. Starting with a feature that I know I want to build, I like to write a pseudo-gherkin scenario describing the desired behavior and translate it into a feature test. I run that test and let it fail. Then I can begin building all the parts of the system that I need to make the feature work, using unit tests to guide me along the way.

For these demos, I’ll be building an imaginary application called “Widgets” that can display a list of widgets, create new widgets, and edit existing widgets. The code you’ll see here is not enough to build the complete application, just enough to help the test examples make sense. We’ll start by writing an e2e test describing the workflow for creating a new widget.

To start things off, I’ll describe a pattern we found useful in e2e testing: creating a reusable “page” file. For this example, we’ll imagine that we’re working on a form to create a new widget.

Reusable e2e Test Pages

When working on a one-page app, it makes sense to DRY up the feature tests by writing a reusable “page” that you can reference from within multiple e2e tests.

There are many ways to structure the tests in an Angular project. Today, we’ll go with this setup:

widgets-project
|-test
|  |
|  |-e2e
|  |  |-pages
|  |
|  |-unit

Inside of the pages folder, we’ll create a WidgetsPage function that we can require into our e2e tests. It has five references:

  • widgetRepeater: a list of widgets contained in an ng-repeat
  • firstWidget: the first widget in the repeater
  • widgetCreateForm: the form used to create a widget
  • widgetCreateNameField: form field to enter the widget’s name
  • widgetCreateSubmit: form submit button

In the end, it looks like this:

// test/e2e/pages/widgets-page.js

var helpers = require('../../test-helper');

function WidgetsPage() {
  this.get = function() {
    browser.get(helpers.rootUrl + '/widgets');
  }

  this.widgetRepeater = by.repeater('widget in widgets');
  this.firstWidget = element(this.widgetRepeater.row(0));

  this.widgetCreateForm = element(by.css('.widget-create-form'));
  this.widgetCreateNameField = this.widgetCreateForm.element(by.model('widget.name'));
  this.widgetCreateSubmit = this.widgetCreateForm.element(by.buttonText('Create'));
}

module.exports = WidgetsPage

From within my e2e tests, I can now load up this page and interact with the elements on it. Here’s how I would use the test page in a test for the widget create form:

// test/e2e/widgets_test.js

var helpers = require('../test-helper');
var expect = helpers.expect;
var WidgetsPage = require('./pages/widgets-page');

describe('creating widgets', function() {
  beforeEach(function() {
    this.page = new WidgetsPage();
    this.page.get();
  });

  it('should create a new widget', function() {
    expect(this.page.firstWidget).to.be.undefined;
    expect(this.page.widgetCreateForm.isDisplayed()).to.eventually.be.true;
    this.page.widgetCreateNameField.sendKeys('New Widget');
    this.page.widgetCreateSubmit.click();
    expect(this.page.firstWidget.getText()).to.eventually.equal('Name: New Widget');
  });
});

Let’s step through what’s happening here. First, we load up the test helpers and get expect and the reusable WidgetsPage from them. In the beforeEach, we load up the page in the browser. Then, in the example, we use the page elements we defined in the WidgetsPage to interact with the page. We check that there are no widgets, then fill out the form to create one named “New Widget”, and check that it is displayed on the page.

By splitting the logic for the form out into a reusable “page”, we can now reuse it to test form validations or custom form directives later on.

Dealing With Functions That Return a Promise

The assertions we get from Protractor in the test above return promises, so we use Chai-as-promised to check that functions like isDisplayed and getText return what we expect after they’re resolved.

We can also deal with promises inside of unit tests. Take a look at this example, in which we test a modal that can be used to edit an existing widget. It makes use of the AngularStrap $modal service. When a user opens the modal, this service returns a promise. When she saves or cancels the modal, the promise is resolved or rejected. Here, we’ll test that the save and cancel methods are properly hooked up, again using Chai-as-promised.

// widget-editor-service.js
var angular = require('angular');
var _ = require('lodash');

angular.module('widgetProject.widgetEditor').service('widgetEditor', ['$modal', '$q', '$templateCache', function (
  $modal,
  $q,
  $templateCache
) {
  return function(widgetObject) {
    var deferred = $q.defer();

    var templateId = _.uniqueId('widgetEditorTemplate');
    $templateCache.put(templateId, require('./widget-editor-template.html'));

    var dialog = $modal({
      template: templateId
    });

    dialog.$scope.widget = widgetObject;

    dialog.$scope.save = function() {
      // Do some saving things
      deferred.resolve();
      dialog.destroy();
    };

    dialog.$scope.cancel = function() {
      deferred.reject();
      dialog.destroy();
    };

    return deferred.promise;
  };
}]);

This service loads the widget editor template into the template cache, loads a widget into it, and sets up a deferred object that will be resolved or rejected depending on whether the user saves or cancels from the editor. It returns a promise from the deferred.

Here’s how you might test something like this:

// test/unit/widget-editor-directive_test.js

var angular = require('angular');
var helpers = require('../test-helper');
var expect = helpers.expect;

describe('widget editor service', function() {
  beforeEach(function() {
    var self = this;

    // A double for the dialog object that $modal returns
    self.dialog = {
      $scope: {},
      destroy: self.sinon.stub()
    };

    self.modal = self.sinon.stub().returns(self.dialog);

    angular.mock.module('widgetProject.widgetEditor', { $modal: self.modal });
  });

  it('should persist changes when the user saves', function(done) {
    var self = this;

    angular.mock.inject(['widgetEditor', '$rootScope', function(widgetEditor, $rootScope) {
      var widget = { name: 'Widget' };
      var promise = widgetEditor(widget);

      self.dialog.$scope.save();

      // Somehow test that the widget was saved
      expect(self.dialog.destroy).to.have.been.called;
      expect(promise).to.be.fulfilled.and.notify(done);

      $rootScope.$digest();
    }]);
  });

  it('should not save when the user cancels', function(done) {
    var self = this;

    angular.mock.inject(['widgetEditor', '$rootScope', function(widgetEditor, $rootScope) {
      var widget = { name: 'Widget' };
      var promise = widgetEditor(widget);

      self.dialog.$scope.cancel();
      expect(self.dialog.destroy).to.have.been.called;
      expect(promise).to.be.rejected.and.notify(done);

      $rootScope.$digest();
    }]);
  });
});

To deal with the complexity of the promise that the modal returns in the widget editor test, we have to do a few things. First, we build a mock $modal service in the beforeEach function, replacing it with a double that returns a dialog object containing an empty $scope and a stubbed destroy method. In angular.mock.module, we pass this double into the options to get Angular Mocks to use it instead of the real $modal service. This pattern is extremely useful for stubbing out dependencies, as we’ll discover shortly.

There are two examples here, and each has to wait for the promise returned by the widget editor to be resolved before it can complete. Because of this, we have to pass done as a parameter to the example itself, and call notify(done) when the test is complete.

Within the tests, we use Angular Mocks again to inject the widget editor and the AngularJS $rootScope service into the test. Having $rootScope gives us the ability to trigger a $digest loop. In each of the tests, we load up the modal, save or cancel it, and use Chai-as-promised to test whether the promise returned was resolved or rejected. To trigger the actual promise resolution and call destroy, we need a $digest loop, so we trigger one at the end of each example as well.

We’ve now looked at how to deal with promises in both e2e and unit tests, using these assertions:

  • expect(foo).to.eventually.equal(bar)
  • expect(foo).to.be.fulfilled
  • expect(foo).to.be.rejected

Mocking Controller and Directive Dependencies

In the previous example, we had a service that relied on the $modal service, which we mocked out so that we could ensure that destroy was being called. The pattern we used to get that hooked up is very useful in getting unit tests to work properly in Angular.

The pattern is as follows:

  • Set var self = this in the beforeEach block.
  • Build a double and stub its methods, then make it a property of the self object:
self.dependency = {
  dependencyMethod: self.sinon.stub()
}

  • Pass your doubles into the module under test:

angular.mock.module('mymodule', {
  dependency: self.dependency,
  otherDependency: self.otherDependency
});
  • Check that the mocked methods were called within your test examples. You can use expect(foo).to.have.been.calledWith, passing in the arguments you expect, for more precise coverage.

Sometimes directives or controllers depend on many external and internal dependencies, and you need to mock them out. Here’s a more complicated example, in which a directive watches a widgetStorage service and updates the widgets in its scope whenever the collection changes. There’s also an edit method that opens the widgetEditor we created above.

// widget-viewer-directive.js

var angular = require('angular');

angular.module('widgetProject.widgetViewer').directive('widgetViewer', ['widgetStorage', 'widgetEditor', function(
  widgetStorage,
  widgetEditor
) {
  return {
    restrict: 'E',
    template: require('./widget-viewer-template.html'),
    link: function($scope, $element, $attributes) {
      $scope.$watch(function() {
        return widgetStorage.notify;
      }, function(widgets) {
        $scope.widgets = widgets;
      });

      $scope.edit = function(widget) {
        widgetEditor(widget);
      };
    }
  };
}]);

Here’s how we might test something like this, mocking out the dependencies on widgetStorage and the widgetEditor:

// test/unit/widget-viewer-directive_test.js

var angular = require('angular');
var helpers = require('../test-helper');
var expect = helpers.expect;

describe('widget viewer directive', function() {
  beforeEach(function() {
    var self = this;

    self.widgetStorage = {
      notify: self.sinon.stub()
    };

    self.widgetEditor = self.sinon.stub();

    angular.mock.module('widgetProject.widgetViewer', {
      widgetStorage: self.widgetStorage,
      widgetEditor: self.widgetEditor
    });
  });

  // The rest of the test...
});

Accessing Child and Isolate Scopes

Sometimes you need to write a directive that has an isolate or child scope inside of it. For example, when using the AngularStrap $dropdown service, an isolate scope is created. It can be a pain to try to access these from within your tests. Knowing about self.element.isolateScope() is the key to solving this problem. Here’s an example using $dropdown, which creates an isolate scope:

// nested-widget-directive.js
var angular = require('angular');

angular.module('widgetSidebar.nestedWidget').directive('nestedSidebar', ['$dropdown', 'widgetStorage', 'widgetEditor', function(
  $dropdown,
  widgetStorage,
  widgetEditor
) {
  return {
    restrict: 'E',
    template: require('./widget-sidebar-template.html'),
    scope: {
      widget: '='
    },
    link: function($scope, $element, $attributes) {
      $scope.actions = [{
        text: 'Edit',
        click: 'edit()'
      }, {
        text: 'Delete',
        click: 'delete()'
      }]

      $scope.edit = function() {
        widgetEditor($scope.widget);
      };

      $scope.delete = function() {
        widgetStorage.destroy($scope.widget);
      };
    }
  };
}]);

Assuming this directive inherits the widget from a parent directive that has a collection of widgets, it can be tough to get ahold of the child scope to test that its properties are being updated as expected. But it can be done. Here’s how:

// test/unit/nested-widget-directive_test.js
var angular = require('angular');
var helpers = require('../test-helper');
var expect = helpers.expect;

describe('nested widget directive', function() {
  beforeEach(function() {
    var self = this;

    self.widgetStorage = {
      destroy: self.sinon.stub()
    };

    self.widgetEditor = self.sinon.stub();

    angular.mock.module('widgetSidebar.nestedWidget', {
      widgetStorage: self.widgetStorage,
      widgetEditor: self.widgetEditor
    });

    angular.mock.inject(['$rootScope', '$compile', function($rootScope, $compile) {
      self.rootScope = $rootScope;
      self.parentScope = $rootScope.$new();
      self.childScope = $rootScope.$new();

      self.compile = function() {
        self.childScope.widget = { id: 1, name: 'widget1' };
        self.parentElement =
          $compile('<widget-organizer></widget-organizer>')(self.parentScope);

        self.parentScope.$digest();

        self.childElement =
          angular.element('<nested-widget widget="widget"></nested-widget>');

        self.parentElement.append(self.childElement);

        self.element = $compile(self.childElement)(self.childScope);
        self.childScope.$digest();
      };
    }]);

    self.compile();
    self.isolateScope = self.element.isolateScope();
  });

  it('edits the widget', function() {
    var self = this;
    self.isolateScope.edit();
    self.rootScope.$digest();
    expect(self.widgetEditor).to.have.been.calledWith(self.childScope.widget);
  });
});

Craziness, right? First we mock out the widgetStorage and widgetEditor again, then we proceed to create a compile function. This function will instantiate two scopes, a parentScope and a childScope, stub out a widget, and put it on the child scope. Then compile goes on to do some complicated template and scope setup: first, compiling a parent element called widget-organizer, which gets the parent scope passed into it. Once that’s all set up, we add a nested-widget child element to it, pass it the child scope, and trigger the $digest loop.

Finally, we get to the magic: we call the compile function, then hook into the compiled template’s isolate scope (which is the $dropdown scope), with self.element.isolateScope(). When we actually get to the assertion at the end, we can hook into the isolate scope to call edit, and finally check that the stubbed out widgetEditor was called with the stubbed widget.

Conclusion

Testing can get painful. I know that there were several times in our project where the pain of figuring out what to do was so great that it was tempting to just move on, writing code and falling back to the “click test” to make sure everything was working. Unfortunately, once you get out of that flow, the feeling of uncertainty begins to grow and grow.

After we took the time to figure out how to deal with these difficult cases, it became a lot easier to know what to do when similar complicated cases presented themselves. Armed with the patterns described in this article, we were able to get into a TDD workflow and move forward with confidence.

I hope that the testing patterns we’ve looked at today prove useful in your own development practice. AngularJS is still a young, growing framework. What other patterns have you found to make it easier to test? Please tweet at me @fluxusfrequency!

Five Capybara Hacks to Make Your Testing Experience Less Painful

This post originally appeared on the Quick Left Blog.

Testing, Testing…

Everyone knows it’s important to test your code. But sometimes, the experience can be a little bit painful. Ok, sometimes it’s very painful.

Painful Testing

What to do? Abandon the tests? Never! Smokey, this is not ‘Nam. This is coding. There are rules.

Today I want to share a few of the things I’ve learned that help mitigate that pain when testing Rails with Capybara. Ladies and gentlemen, I give you…

The Hacks

1) Execute Script

In one project I was working on, I wanted to use a fake password input field so I could display a text field instead of a password field. I wanted to do that so I could give it placeholder text that wouldn’t appear as dots. When the fake field received focus, it was replaced with the real password field using jQuery.

This solution passed the click test, but it was a headache when testing. My Capybara spec couldn’t find the password field, because it was hidden. Since pretty much every integration test I had needed to log in before it could do anything, I was stuck.

In order to overcome the difficulty, I had the test execute a script to show the field I was looking for.

  page.execute_script("$('#new_password').show()")

I could then fill out the form normally and access the rest of the app.

This trick also comes in useful when using Bootstrap for responsive design. Sometimes, the mobile menu dropdown can cause conflicts with other elements on the page, and Capybara can’t click on them.

2) Tail the Test Log File

I’ve gotten so used to being able to read the Rails server stack trace when troubleshooting in development that sometimes I don’t know where to find my errors while testing. The test stack trace and pry will only take you so far. When things get really hairy, you can get output similar to what you’d see in development by running tail -f log/test.log in a separate terminal tab before running the specs.

3) Find the Port Address and Pause the Test

Sometimes I just want to see what’s going on in the browser. Although save_and_open_page gives you a snapshot, you can’t really click around, because the page it renders is static and missing assets. To dig in further, I like to use a trick I learned from my mentor, Mike Pack.

Just above the broken line in your test, add these two lines of code. Note that you have to have pry installed in your current gemset or specified in the Gemfile for this to work.

puts current_url
require 'pry'; binding.pry

Run the specs, and when they pause, copy the url and port number from the test output. Open your browser and paste the address into the window. Voila! You’re now browsing your site in test mode!

4) Make Sure DatabaseCleaner Plays Nice with PhantomJS

It can get really tricky to test JavaScript in Rails. Arguably, using some Jasmine tests may be the best thing if you have a lot of JS code. That may work for unit tests, but if you want to test a feature from end to end, it usually makes sense to use Capybara. To drive the JavaScript, you need an engine like Poltergeist, which relies on PhantomJS.

Meanwhile, you also need to take care of cleaning the database between specs to make sure that they are independent. In my apps, I usually take care of this with the DatabaseCleaner gem.

Unfortunately, PhantomJS runs the JavaScript code in a separate thread from the application code, which means that JS transactions don’t always get committed to the database before DatabaseCleaner attempts to clean them.

The answer is to make sure that you set it to use the :truncation strategy for JavaScript tests. See Avdi Grimm’s blog post on the subject for more details.

Here’s how to set it up in your spec_helper:

RSpec.configure do |config|

  config.before :suite do
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, :js => true) do
    DatabaseCleaner.strategy = :truncation
  end
end

5) Split VCR/Webmock Specs Into a Separate Rake Task From Your JavaScript Tests

Sometimes I have service objects that interact with an external API. The easiest way to test them without relying on actual HTTP requests is to use the VCR gem. With VCR, you can also hook into Webmock, which keeps your computer from making any external HTTP requests during the test cycle. This is all well and good until you are making requests against a local API with JavaScript as well. In that case, Webmock will block the requests. Over the line!

I’ve played around with various ways of setting a condition in my test helper and before blocks to only run Webmock for a certain test, but I’ve come up with another solution that I prefer. Instead of running rspec, I split the tests into a :services group, and a :local group. Then, I write a rake task in lib/tasks called spec.rake that looks like this:

namespace :spec do
  desc "run all local specs"
  task :local  do
    system 'rspec spec/models'
    system 'rspec spec/features'
    system 'rspec spec/controllers'
  end

  desc "run all service specs"
  task :services do
    system 'rspec spec/services'
  end

  task :all do
    system 'rspec spec/models'
    system 'rspec spec/features'
    system 'rspec spec/controllers'
    system 'rspec spec/services'
  end
end

This way, you only have to type one command to run all of your specs, just as you would if you had typed rspec. It’s also easy to hook it into your Travis CI configuration like this:

language: ruby
rvm:
  - "2.0.0-p353"
script:
  - bundle exec rake db:create
  - bundle exec rake db:migrate RAILS_ENV=test
  - bundle exec rake spec:local
  - bundle exec rake spec:services

Boom! Local and external APIs all tested, JavaScript tested, and Travis build passing.

Conclusion

If you’re using a lot of JavaScript (and who isn’t these days?) in your Rails app, it can sometimes hurt to write your feature tests. I hope these five hacks will make your life a little easier when you are testing, so you can get back to writing the code that makes your app go. Until then, ‘the Dude abides’.

The Dude Abides