Fluxus Frequency

How I Hacked The Mainframe

Seven Unusual Ruby Datastores

This post appeared in Ruby Weekly #240. It was also included in issue #27.1 of the Pointer.io newsletter. It originally appeared on the Engine Yard blog.

Introduction

Admit it: you like the unusual. We all do. Despite constant warnings against premature optimization, an emphasis on “readable code”, and the old aphorism, “keep it simple, stupid”, we just can’t help ourselves. As programmers, we love exploring new things.

In that spirit, let’s go on an adventure. In this post, we’ll take a look at seven lesser-known ways to store data in the Ruby language.

The Ones We Already Know

Before we get started, we’ll set a baseline. What are the ways to store data in Ruby that we use every day? Well, these are the ones that come to mind for me: string, array, hash, CSV, JSON, and the filesystem.

We can skip all of these.

So what are some of the other ways to store data in Ruby? Let’s find out.

Struct

What Is It?

A struct is a way of bundling together a group of variables under a single name. If you’ve done any C programming, you’ve probably come across structs before.

A struct is similar to a class. At its most basic, it’s a group of bundled attributes with accessor methods. You can also define methods that instances of the struct will respond to.

In Ruby, the Struct class includes Enumerable, so structs come with all kinds of great behavior, like to_a, each, map, and member access with [].

You can define a struct object by setting a constant equal to Struct.new and passing in some default attribute names. From there, you can create any number of instances of the struct, passing in attribute values for that instance.

Let’s explore one:

Cat = Struct.new(:name, :breed, :hair_length) do
  def meow
    "m-e-o-w-w"
  end
end

tabby = Cat.new("Tabitha", "Russian Blue", "short")

tabby.name
=> "Tabitha"
tabby.meow
=> "m-e-o-w-w"
tabby[0]
=> "Tabitha"
tabby.each do |attribute|
  puts attribute
end
Tabitha
Russian Blue
short
=> #<struct Cat name="Tabitha", breed="Russian Blue", hair_length="short">

When Would You Use It?

If you want to quickly define a class that has easily accessible attributes and little other behavior, structs are a great choice. Since they also respond to enumerable methods, they are great for use as stubs in tests.

If you want to stub a class and send it a message in a test, but you don’t want to use a double, you can fake it with a struct in a single line of code.

Look how simple that is:

fake_stripe_charge = Struct.new(:create).new("fake charge response")

Next we’ll take a look at Struct’s close cousin, OpenStruct.

OpenStruct

What Is It?

An OpenStruct is somewhat like a hash. It’s a data structure that you can use to store and access key-value pairs. In fact, it really is a hash. Under the hood, each OpenStruct uses a hash for data storage. It also defines getters and setters automatically using method_missing and define_method.

There are three main differences between a struct and an open struct.

The first is that when you initialize a struct, you get back a class that inherits from Struct, which you must further instantiate, whereas calling new on an OpenStruct gives you back an OpenStruct object.

Secondly, OpenStructs don’t allow you to define behaviors by passing a block to the initializer as we did with the struct above.

Finally, OpenStructs can be initialized with an argument that responds to each_pair (such as a hash), whereas structs expect a list of strings or symbols (to define their attribute names).

In the end, an OpenStruct is much simpler than a struct.

OpenStruct lives in the Ruby Standard Library, so to use it in your code, you’ll have to require 'ostruct'.
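The first difference is easy to see in a console. Here's a quick sketch contrasting the two (the Point names are invented for the example):

```ruby
require 'ostruct'

# Struct.new returns a new Class, which you must then instantiate yourself...
Point = Struct.new(:x, :y)
point = Point.new(1, 2)
Point.class
# => Class

# ...whereas OpenStruct.new returns a ready-to-use object.
open_point = OpenStruct.new(x: 1, y: 2)
open_point.class
# => OpenStruct

point.x == open_point.x
# => true
```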

Let’s explore one:

luke = OpenStruct.new({
  home: "Tatooine",
  side: :light,
  weapon: :light_saber
})

luke
=> #<OpenStruct home="Tatooine", side=:light, weapon=:light_saber>
luke.home
=> "Tatooine"
luke.side = :dark
=> :dark
luke
=> #<OpenStruct home="Tatooine", side=:dark, weapon=:light_saber>

When Would You Use It?

As with Structs, I like to use OpenStructs as test stubs. Unfortunately, the metaprogramming used behind the scenes makes OpenStructs much slower than hashes, and they also respond to far fewer methods, so they aren’t as flexible for everyday use. However, their built-in getters make them really useful anywhere that you need to inject an object that responds to a certain method.
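For instance, imagine a method under test that only needs an object responding to name and admin (the greeting method and its attributes here are hypothetical, just to show the shape of the stub):

```ruby
require 'ostruct'

# The method under test only cares that its argument
# responds to #name and #admin.
def greeting(user)
  user.admin ? "Welcome back, #{user.name}" : "Hi, #{user.name}"
end

# One line gives us a stand-in with exactly those methods.
stub_user = OpenStruct.new(name: "Ada", admin: true)

greeting(stub_user)
# => "Welcome back, Ada"
```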

Marshalling

What Is It?

Marshalling is a way to serialize Ruby objects into a binary format. It converts them into a bytestream that can be saved and reconstituted later.

You marshal objects by calling Marshal.dump and Marshal.load.

Here’s an example:

SpaceCaptain = Struct.new(:name, :rank, :affiliation)
=> SpaceCaptain

picard = SpaceCaptain.new("Jean-Luc Picard", "Captain", "United Federation of Planets")
=> #<struct SpaceCaptain name="Jean-Luc Picard", rank="Captain", affiliation="United Federation of Planets">

saved_picard = Marshal.dump(picard)
=> "\x04\bS:\x11SpaceCaptain\b:\tnameI\"\x14Jean-Luc Picard\x06:\x06ET:\trankI\"\fCaptain\x06;\aT:\x10affiliationI\"!United Federation of Planets\x06;\aT"
# Write to disk

loaded_picard = Marshal.load(saved_picard)
=> #<struct SpaceCaptain name="Jean-Luc Picard", rank="Captain", affiliation="United Federation of Planets">

When Would You Use it?

There are plenty of use cases for serializing code running in memory and saving it for later reuse. For example, if you were writing a video game and you wanted to make it possible for a player to save their game for later, you could marshal the objects in memory (e.g. the player, her location in a map, and any enemies that are nearby) and persist them. You could then load them up again when the player is ready to continue.

Although there are other data serialization formats available, such as JSON, XML, and YAML (which we'll look at next), marshalling is by far the fastest option available in Ruby. That makes it particularly well-suited to situations where you're dealing with large volumes of data or processing it at high speed.
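Here's a sketch of the save-game idea, writing the bytestream to a temporary file and reading it back (the SaveGame struct and its attributes are invented for the example):

```ruby
require 'tempfile'

SaveGame = Struct.new(:player, :location, :enemies)
save = SaveGame.new("Samus", [4, 2], [:metroid, :space_pirate])

# Persist the marshaled bytestream to disk...
file = Tempfile.new('save')
file.binmode
file.write(Marshal.dump(save))
file.rewind

# ...then reconstitute it when the player returns.
loaded = Marshal.load(file.read)
loaded.player
# => "Samus"
```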

YAML

What is It?

YAML, which stands for YAML Ain’t Markup Language, is a widely-used format for serializing data in a human-readable format. It’s available in many languages, of which Ruby is only one. The most widely-used Ruby YAML parser, psych, is a wrapper around libyaml, the C language parser.

YAML lives in the Ruby Standard Library, so to use it in your code, you’ll have to require 'yaml'. You can use the YAML::Store library to easily save data to disk.

Here’s an example of how to use that library:

require 'yaml/store'

class Database
  DATABASE = YAML::Store.new('my_database')

  def self.save_person(user_data)
    DATABASE.transaction do
      DATABASE["people"] ||= []
      DATABASE["people"] << user_data
    end
  end
end

bilbo = {
  race: :hobbit,
  aliases: ["Bilba Labingi"],
  home: "The Shire",
  inventory: [:the_one_ring, :arkenstone]
}

Database.save_person(bilbo)
=> [{:race=>:hobbit, :aliases=>["Bilba Labingi"], :home=>"The Shire", :inventory=>[:the_one_ring, :arkenstone]}]

Here’s what my_database would look like after running this code:

---
people:
- :race: :hobbit
  :aliases:
  - Bilba Labingi
  :home: The Shire
  :inventory:
  - :the_one_ring
  - :arkenstone

When Would You Use it?

YAML serves the same function as marshalling: it's a way to serialize Ruby objects for storage. It's quite a bit slower, but it's human-readable.

YAML is working behind the scenes when ActiveRecord is used to serialize a record attribute containing a hash or an array and save it to a text column in the database. When the attribute is retrieved, ActiveRecord deserializes it back from YAML into a Ruby object of its original data type.
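The round trip that ActiveRecord performs can be sketched in plain Ruby, no Rails required (string keys are used here because modern YAML.load only permits basic types by default):

```ruby
require 'yaml'

preferences = { "theme" => "dark", "tabs" => ["home", "inbox"] }

# Serialize to a human-readable string, as ActiveRecord would
# before writing to a text column...
stored = YAML.dump(preferences)

# ...and deserialize it back into a Ruby hash on retrieval.
YAML.load(stored) == preferences
# => true
```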

Set

What Is It?

If you’re familiar with mathematical set theory, the Set class should be pretty intuitive. Sets respond to intersection, difference, merge, and many other Set operations.

It allows you to define a data structure that behaves like an unordered array whose members are unique. It exposes many of the same methods available on arrays, but with faster lookup. Like OpenStruct, Set uses a Hash under the hood.

Sets can be saved in Redis, which makes it possible to look them up very quickly.

Set lives in the Ruby Standard Library, so to use it in your code you’ll have to require 'set'.

Here’s an example:

require 'set'

basic_lands = Set.new
[:swamp, :island, :forest, :mountain, :plains].each do |land|
  basic_lands << land
end

basic_lands
=> #<Set: {:swamp, :island, :forest, :mountain, :plains}>

basic_lands << :swamp
# does nothing
=> #<Set: {:swamp, :island, :forest, :mountain, :plains}>

fires_lands = Set.new
[:forest, :mountain, :city_of_brass, :karplusan_forest, :rishadan_port].each do |land|
  fires_lands << land
end
fires_lands
=> #<Set: {:forest, :mountain, :city_of_brass, :karplusan_forest, :rishadan_port}>

basic_lands.intersection(fires_lands)
=> #<Set: {:forest, :mountain}>

basic_lands.difference(fires_lands)
=> #<Set: {:swamp, :island, :plains}>

basic_lands.subset?(fires_lands)
=> false

basic_lands.merge(fires_lands)
=> #<Set: {:swamp, :island, :forest, :mountain, :plains, :city_of_brass, :karplusan_forest, :rishadan_port}>

When Would You Use it?

Sets are great for situations where you need to make sure that a given element isn't contained in a collection more than once: for example, tags in an application that isn't backed by a database.

They’re also great for comparing the equality of two lists without caring about their order (as an array would). You could use this feature to check whether the data stored in memory is in sync with another collection fetched from a remote server.
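A quick sketch of that order-insensitive comparison:

```ruby
require 'set'

local  = ["alpha", "beta", "gamma"]
remote = ["gamma", "alpha", "beta"]

# As arrays, order matters...
local == remote
# => false

# ...but as sets, only membership does.
Set.new(local) == Set.new(remote)
# => true
```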

Queue

What Is It?

A Queue is a place that can be used to hold values that you want to share between threads. It's basically a thread-safe, first-in-first-out list that is visible to all of the concurrently running threads in a given Ruby process.

If you want to limit the amount of data that can be shared, you can use a SizedQueue.

Here’s an example:

require 'thread'
chess_moves = Queue.new

player_moves = Thread.new do
  chess_moves << "e4"
  sleep(1)
  chess_moves << "e5"
  sleep(1)
  chess_moves << "f4"
end

game_board = Thread.new do
  loop do
    move = chess_moves.pop # blocks until a move is available
    # update the UI with the move
  end
end

When Would You Use it?

Queues are extremely helpful in any application that runs code concurrently. For example, background processing libraries like Sidekiq and Resque use a Redis-backed queue to check for the latest jobs and instruct workers to run them.

ObjectSpace

What Is It?

The ObjectSpace module is a collection of methods that can be used to interact with all of the living objects in the current Ruby environment, as well as the garbage collector.

You can use it to check out all of the objects currently living in memory, look up objects by their object ID, and trigger garbage collector runs. You can also define a hook to be triggered when any object of a given class is removed from the ObjectSpace using ObjectSpace.define_finalizer.

How is ObjectSpace a data store? Well, it’s the highest-level data store (that hasn’t been interpreted or compiled yet) in any place that you can run Ruby code. Any time you define or remove an object from memory, you are changing what is visible in the ObjectSpace.

Let’s take a look at everything that’s available in an IRB session.

object_counts = Hash.new(0)
ObjectSpace.each_object do |o|
  object_counts[o.class] += 1
end

require "pp"
pp object_counts

{
  String=>67073,
  Array=>14474,
  Regexp=>164,
  Gem::Specification=>299,
  Hash=>1023,
  # and many more...
}

If we create a new object, it ends up in the ObjectSpace.

require "ostruct"

ObjectSpace.each_object(OpenStruct).count
=> 0
spidey = OpenStruct.new({ name: "Peter Parker", species: "Human Mutate" })

ObjectSpace.each_object(OpenStruct).count
=> 1

When Would You Use it?

Although you already use the ObjectSpace all the time, whether you realize it or not, knowing about the methods it exposes opens up a lot of possibilities for investigating and improving the performance of your code.

The best use I’ve seen for ObjectSpace so far is using it to detect memory leaks. This article shows an interesting way to map the objects in your object space to create a graph that is useful in tracking down and fixing memory leaks.
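A related tool worth knowing: ObjectSpace.count_objects returns a hash of live-object counts by internal type, which you can snapshot before and after a suspect operation (the exact keys and counts vary by Ruby version and GC timing, so treat the delta as a hint rather than an exact figure):

```ruby
before = ObjectSpace.count_objects

# Allocate a batch of strings that we keep a reference to.
held = Array.new(1_000) { "x" * 10 }

after = ObjectSpace.count_objects

# T_STRING tracks live string slots; the delta hints at what
# the operation allocated.
delta = after[:T_STRING] - before[:T_STRING]
```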

Conclusion

Ruby is such a fun language to write because there are so many ways to say the same thing. It doesn’t stop at writing statements and expressions, though. You can also store data in a huge number of ways.

In this post, we looked at seven fairly unusual ways to handle data in Ruby. Hopefully, reading through them has given you some ideas for how to handle persistence or in-memory storage in your own applications.

Until next time, happy coding!

    1. We know that there are other unusual datastores out there. What are some of your favorites and how do you use them? Leave us a comment!

Integrating React With Backbone

This post originally appeared on Engine Yard.

Introduction

There are so many JS frameworks! It can get tiring to keep up to date with them all.

But like any developer who writes JavaScript, I try to keep abreast of the trends. I like to tinker with new things, and rebuild TodoMVC as often as possible.

Joking aside, when it comes to choosing frameworks for a project, emerging frameworks just haven’t been battle-tested enough for me to recommend to clients in most cases.

But like much of the community, I feel pretty confident in the future of React. It’s well documented, makes reasoning about data easy, and it’s performant.

Since React only provides the view layer of a client-side MVC application, I still have to find a way to wrap the rest of the application. When it comes to choosing a library that I’m confident in, I still reach for BackboneJS. A company that bets on Backbone won’t have trouble finding people who can work on their code base. It’s been around for a long time, is unopionated enough to be adaptable to many different situations. And as an added bonus, it plays well with React.

In this post, we’ll explore the relationship between Backbone and React, by looking at one way to structure a project that uses them together.

A Note About Setting Up Dependencies

I won’t go over setting up all of the package dependencies for the project here, since I’ve covered this process in a previous post. For the purposes of this article, you can assume that we’re using Browserify.

One package that is worth noting, though, is ReactBackbone. It will allow us to trigger an automatic update of our React components whenever Backbone model or collection data changes. You can get it with npm install --save react.backbone.

We’ll also be making use of backbone-route-control to make it easier to split our URL routes into logically encapsulated controllers. See “caching the controller call” in this article for more information about how to set this package up.

Project Structure

There are many ways to structure the directories for a client-side JS application, and every project lends itself to a slightly different setup. Today we’ll be creating our directory structure in a fairly typical fashion for a Backbone project. But we’ll also be introducing the concept of screens to our application, so we’ll also be extending it slightly.

Here’s what we’ll need:

assets/
  |- js/
     |- collections/
     |- components/
     |- controllers/
     |- models/
     |- screens/
     |- vendor/
     |- app.js
     |- base-view.js
     |- index.js
     |- router.js

Much of this is standard Backbone boilerplate. The collections/, models/, and vendor/ directories are self-explanatory. We’ll store reusable UI components, such as pagination, pills, and toggles, in components/.

The heart of our app will live in the screens/ directory. Here, we’ll write React components that will handle the display logic, taking the place of traditional Backbone views and templates. However, we’ll still include thin Backbone views to render these components.

We'll talk more about screens in a moment. For now, let's take a look at how a request will flow through the application, starting from the macro level.

The Application

We’ll begin by writing a root-level index.js file, which will be the source of the require tree that Browserify will use.

window.$ = window.jQuery = require('jquery');
var Application = require('./app');

window.app = new Application();

What is this Application, you may ask? Simply put, it’s the function we’ll use to bootstrap the entire project. Its purpose is to get all of the dependencies set up, instantiate the controllers and router, kick off Backbone history, and render the main view.

var Backbone = require('backbone');

var Router = require('./router');
var MainView = require('./screens/main/index');

var UsersController = require('./controllers/users-controller');

Backbone.$ = $;

var Application = function() {
  this.initialize();
};

Application.prototype.initialize = function() {
  this.controllers = {
    users: new UsersController({ app: this })
  };

  this.router = new Router({
    app: this,
    controllers: this.controllers
  });

  this.mainView = new MainView({
    el: $('#app'),
    router: this.router
  });

  this.showApp();
};

Application.prototype.showApp = function() {
  this.mainView.render();
  Backbone.history.start({ pushState: true });
};

module.exports = Application;

Router

Once the application has been booted up, we’ll want to be able to accept requests. When one comes in, our app will need to be able to take a look at the URL path in the navigation bar and decide what to do. This is where the router comes in. It’s a pretty standard part of any Backbone project, so it probably won’t look too out of the ordinary, especially if you’ve used backbone-route-control before.

var Backbone = require('backbone');
var BackboneRouteControl = require('backbone-route-control');

var Router = BackboneRouteControl.extend({
  routes: {
    '':          'users#index',
    'users':     'users#index',
    'users/:id': 'users#show'
  }
});

module.exports = Router;

When one of these routes is hit, the router will take a look at the controllers we passed into it during app initialization, find the controller named to the left of the # and try to call the method named to the right of the # in the string defined for that route.

Controllers

Now that the request has been routed through one of the routes, the router will take a look in the matching controller for the method that is to be called. Note that these controllers are not a part of Backbone, but Plain Old JavaScript Objects.

For the purposes of this post, we’ll just have a UsersController with two actions.

var UsersCollection = require('../collections/users-collection');
var UserModel = require('../models/user');
var UsersIndexView = require('../screens/users/index');
var UserShowView = require('../screens/users/show');

var UsersController = function(options) {
  var app = options.app;

  return {
    index: function() {
      var usersCollection = new UsersCollection();

      usersCollection.fetch().done(function() {
        var usersView = new UsersIndexView({
          users: usersCollection
        });
        app.mainView.pageRender(usersView);
      });
    },

    show: function(id) {
      var user = new UserModel({
        id: id
      });

      user.fetch().done(function() {
        var userView = new UserShowView({
          user: user
        });
        app.mainView.pageRender(userView);
      });
    }
  };
};

module.exports = UsersController;

This controller loads the User model and collection, and uses them to display the user index and show screens. It instantiates a Backbone collection or model, depending on the route, fetches its data from the server, loads it into the screen (which we’ll get to momentarily), then shows that screen in the app’s mainView container.

Screens

At this point, we’ve accepted a request, routed it through a controller action, decided what kind of collection or model we are dealing with, and fetched the data from the server. We’re ready to render a Backbone view. In this case, it will do little more than pass the data on to the React component.

The Base View

Since there’s going to be a lot of repeated boilerplate in our Backbone views, it makes sense to abstract it out into a BaseView, which child views will extend from.

var React = require('react');
var Backbone = require('backbone');

var BaseView = Backbone.View.extend({
  initialize: function (options) {
    this.options = options || {};
  },

  component: function () {
    return null;
  },

  render: function () {
    React.renderComponent(this.component(), this.el);
    return this;
  }
});

module.exports = BaseView;

This base view sets any options passed in as properties on itself, and defines a render() method that renders whatever React component is defined in the component() method.

The Main View

In order to switch between screens without doing a page re-render, we’ll wrap all of our screens in an outer screen called the mainView. This view acts as a sort of “picture frame” for the other screens in the app, displaying, hiding, and cleaning them up.

As with all of our screens, it will consist of two parts: a Backbone view, defined in screens/main/index.js, and a React component, defined in screens/main/component.js.

Backbone View

var Backbone = require('backbone');
var BaseView = require('../../base-view');
var MainComponent = require('./component');

var MainView = BaseView.extend({
  component: function () {
    return new MainComponent({
      router: this.options.router
    });
  },

  pageRender: function (view) {
    this.$('#main-container').html(view.render().$el);
  }
});

module.exports = MainView;

Since we passed #app as the element for this view to attach to back in app.js, it will render itself there. Thinking through what render actually means, we know that it will call the code defined in the BaseView, which means it will render whatever's returned by the component() function. In this case, that's the React MainComponent. We'll take a look at that in a moment.

The other special thing this view does is to render any subviews passed to pageRender in the #main-container element found within #app. As I said, it’s basically just a frame for whatever else is going to happen.

React Component

Now let’s take a look at that MainComponent. It’s a very simple React component that does nothing more than render the “container” into the DOM.

/** @jsx React.DOM */
var React = require('react');
var ReactBackbone = require('react.backbone');

var MainComponent = React.createBackboneClass({
  render: function () {
    return (
      <div>
        <div id="main-container"></div>
      </div>
    );
  }
});

module.exports = MainComponent;

That’s it, the whole MainView. Since it’s so simple, it makes a good introduction to how we can render components in this project.

Now let’s take a look at something a little more advanced.

User Show View

We’ll start by taking a look at how we might write a React component for a user show page.

Backbone View

First, we’ll define the UserShowView we referenced back in the UsersController. It should live at screens/users/show/index.js.

var BaseView = require('../../../base-view');
var UserScreen = require('./component');

var UserView = BaseView.extend({
  component: function () {
    return new UserScreen({
      user: this.options.user
    });
  }
});

module.exports = UserView;

That’s it. Mostly just boilerplate. In fact, pretty much all of our Backbone views will look like this. A simple extension of BaseView that defines a component() method. That method instantiates a React component and returns it to the render() method in the BaseView, which in turn is called by the mainView’s pageRender() method.

React Component

Now, let’s dig into the meat of user show screen: the UserScreen component. It will live at screens/users/show/component.js.

We’ll imagine that we can “like” users. We want to be able to increment a user’s likes attribute by clicking a button. Here’s how we’d write this component to handle that behavior.

/** @jsx React.DOM */
var React = require('react');
var Backbone = require('backbone');
var ReactBackbone = require('react.backbone');
var User = require('../../../models/user');

var UserShowScreen = React.createBackboneClass({
  mixins: [
     React.BackboneMixin('user', 'change'),
  ],

  getInitialState: function() {
    return {
      liked: false
    }
  },

  handleLike: function(e) {
    e.preventDefault();
    var currentLikes = this.props.user.get('likesCount');
    this.props.user.save({ likesCount: currentLikes + 1 });
  },

  render: function() {
    var user = this.props.user;
    var username = user.get('username');
    var avatar = user.get('avatar').url;
    var likesCount = user.get('likesCount');

    return (
      <div className="user-container">
        <h1>{username}'s Profile</h1>
        <img src={avatar} alt={username} />
        <p>{likesCount} likes</p>
        <button className="like-button" onClick={this.handleLike}>
          Like
        </button>
      </div>
    );
  }
});

module.exports = UserShowScreen;

You may have noticed that curious mixins property. What is that? react.backbone gives us some niceties here, since we’re calling React.createBackboneClass instead of React.createClass. Whenever the user prop that was passed into this component fires a change event, the component’s render() method will be called. For more information, take a look at the package on GitHub.

When we click that like button, we’re incrementing the likesCount attribute on the user, and saving it to the server with our save() call. When the result of that sync comes back, our view will automatically re-render, and the likes count indication will update! Pretty sweet!

Users Index Screen

Before we conclude this post, let's take a look at one more case: the index screen. Here, we'll see how using React can make it easier to render repetitive subcomponents.

Backbone View

The view for this screen will live at /screens/users/index/index.js, and look similar to the UserShowView.

var BaseView = require('../../../base-view');
var UsersIndexScreen = require('./component');

var UsersIndexView = BaseView.extend({
  component: function () {
    return new UsersIndexScreen({
      users: this.options.users
    });
  }
});

module.exports = UsersIndexView;

React Component

The UsersIndexScreen component will also be fairly similar to the UserShowScreen one, but with one key difference: since we’re going to be rendering the same DOM elements repeatedly, we can leverage subcomponents.

Here’s the main component, which lives at screens/users/index/component.js

/** @jsx React.DOM */
var React = require('react');
var ReactBackbone = require('react.backbone');
var UserBlock = require('./user-block');

var UsersIndexScreen = React.createBackboneClass({
  mixins: [
     React.BackboneMixin('users', 'change')
  ],

  render: function() {
    var userBlocks = this.props.users.map(function(user) {
      return <UserBlock user={user} />
    });

    return (
      <div className="users-container">
        <h1>Users</h1>
        {userBlocks}
      </div>
    );
  }
});

module.exports = UsersIndexScreen;

We’re just looping through the users that were passed into the component, and wrapping each one in a UserBlock React component. This component can be defined in a file that lives right alongside index.js and component.js.

/** @jsx React.DOM */
var React = require('react');
var Backbone = require('backbone');
var ReactBackbone = require('react.backbone');

var UserBlock = React.createBackboneClass({
  render: function () {
    var user = this.props.user;
    var username = user.get('username');
    var avatar = user.get('avatar').url;
    var link = '/users/' + user.get('id');

    return (
      <div className="user-block">
        <a href={link}>
          <h2>{username}</h2>
          <img src={avatar} alt={username} />
        </a>
      </div>
    );
  }
});

module.exports = UserBlock;

Voila! An index view at /users that shows all of our users’ beautiful faces and links to their show pages. It was pretty painless, thanks to React!

Wrapping Up

We’ve now traced the entire series of events that happens when someone loads up our application and requests a route. After going through the router and controller, the fetched data is injected through a Backbone view into a React component, which is then rendered by the app’s mainView.

We only barely scratched the surface of what React is capable of here. If you haven’t checked out the React component API docs, I highly suggest doing so. Once I began to fully harness the power it gave me, I found my projects’ view layers much cleaner. Plus, I get all of the React performance benefits for free!

I hope that this post has helped make it more obvious how to get started with integrating React into a Backbone app. To me, it always seemed like a good idea, but I didn’t know where to begin. Once I got a sense of the pattern, though, it became pretty easy to do.

P.S. Do you have a different pattern for using React in your Backbone app? Want to talk about using React in Ember or Angular? Leave us a note in the comments!

Understanding Rack Apps and Middleware

This post originally appeared on Engine Yard.

Introduction

As web developers, many of us work at the highest levels of abstraction when we program. Sometimes it's easy to take things for granted. Especially when we're using Rails.

Have you ever dug into the internals of how the request/response cycle works in Rails? I recently realized that I knew almost nothing about how Rack or middlewares work, so I spent a little time finding out. In this post, I’ll share what I learned.

What’s Rack?

Did you know that Rails is a Rack app? Sinatra too. What is Rack? I'm glad you asked. Rack is a Ruby package that provides a minimal, easy-to-use interface between Ruby web applications and the web servers that host them.

It’s possible to quickly build simple web applications using just Rack.

To get started, all you need is an object that responds to a call method, taking in an environment hash and returning an Array with the HTTP response code, headers, and response body. Once you’ve written the server code, all you have to do is boot it up with a Ruby server like Rack::Handler::WEBrick, or put it into a config.ru file and run it from the command line with rackup config.ru.

Ok, cool. So what does Rack actually do?

How Rack Works

Rack is really just a way for a developer to create a server application while avoiding the boilerplate code that would otherwise be required. If you've written some code that meets the Rack specification, you can load it up in a Ruby server like WEBrick, Mongrel, or Thin, and you're ready to accept requests and respond to them.

There are a few methods you should know about that are provided for you. You can call these directly from within your config.ru file.

run Takes an application (the object that responds to call) as an argument. The following code from the Rack website demonstrates how this looks:

1
run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['get rack\'d']] }

map Takes a string specifying the path to be handled, and a block containing the Rack application code to be run when a request with that path is received. Here’s an example:

1
2
3
map '/posts' do
  run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['first_post', 'second_post', 'third_post']] }
end

use Tells Rack to use certain middleware.

So what else do you need to know? Let’s take a closer look at the environment hash and the response Array.

The Environment Hash

Your Rack server object takes in an environment hash. What’s contained in that hash? Here are a few of the more interesting parts:

  • REQUEST_METHOD: The HTTP verb of the request. This is required.
  • PATH_INFO: The request URL path, relative to the root of the application.
  • QUERY_STRING: Anything that followed ? in the request URL string.
  • SERVER_NAME and SERVER_PORT: The server’s address and port.
  • rack.version: The Rack version in use.
  • rack.url_scheme: Is it http or https?
  • rack.input: An IO-like object that contains the raw HTTP POST data.
  • rack.errors: An object that responds to puts, write, and flush.
  • rack.session: A key-value store for storing request session data.
  • rack.logger: An object for logging. It should implement info, debug, warn, error, and fatal methods.

A lot of frameworks built on Rack wrap the env hash in a Rack::Request object. This object provides a lot of convenience methods. For example, request_method, query_string, session, and logger return the values from the keys described above. It also lets you check out things like the params, the HTTP scheme, or whether the request came over SSL (ssl?). For a complete listing of methods, I would suggest digging through the source.

The Response

When your Rack server object returns a response, it must contain three parts: the status, headers, and body. As with the request, there is a Rack::Response object that gives you convenience methods like write, set_cookie, finish, and more. Alternatively, you can just return an array containing the three components.

Status

An HTTP status, like 200 or 404.

Headers

Something that responds to each, and yields key-value pairs. The keys have to be strings and conform to the RFC7230 token specification. Here’s where you can set Content-Type and Content-Length if it’s appropriate for your response.

Body

The body is the data that the server sends back to the requester. It has to respond to each, and yield string values.
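For example, a body could be a hypothetical streaming object like this one (ChunkedBody is a made-up name), as long as it yields strings from each:

```ruby
# Sketch: a Rack response body just has to respond to each and yield
# strings. ChunkedBody is a hypothetical example for illustration.
class ChunkedBody
  def each
    %w[first second third].each { |chunk| yield "#{chunk}\n" }
  end
end
```

A Rack server would iterate this body with each and write every yielded string to the client.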

All Racked Up!

Now that we’ve created a Rack app, how can we customize it to make it actually useful? The first step is to consider adding some middleware.

What is Middleware?

One of the things that makes Rack so great is how easy it is to add a chain of middleware components between the webserver and the app to customize the way your request/response behaves. But what is a middleware component?

A middleware component sits between the client and the server, processing inbound requests and outbound responses. Why would you want to do that? There are tons of middleware components available for Rack that take the guesswork out of common needs like caching, authentication, and spam trapping.

Using Middleware in a Rack App

To add middleware to a Rack application, all you have to do is tell Rack to use it. You can use multiple middleware components, and they will change the request or response before passing it on to the next component. This series of components is called the middleware stack.
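To make that pattern concrete, here is a rough sketch of a hand-rolled middleware component (UpcaseBody is hypothetical): it wraps the next app in the stack, passes the request along, and transforms the response on its way back out.

```ruby
# Sketch of the middleware pattern: wrap the next app in the stack,
# call it, and modify the response before returning it.
# UpcaseBody is a made-up component for illustration.
class UpcaseBody
  def initialize(app)
    @app = app # the next middleware (or the app itself) in the stack
  end

  def call(env)
    status, headers, body = @app.call(env)
    upcased = []
    body.each { |chunk| upcased << chunk.upcase }
    [status, headers, upcased]
  end
end

inner = Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['get rack\'d']] }
app = UpcaseBody.new(inner)
```

In a config.ru, this wrapping is exactly what `use UpcaseBody` would wire up for you before `run`.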

Warden

We’re going to take a look at how you would add Warden to a project. Warden has to come after some kind of session middleware in the stack, so we’ll use Rack::Session::Cookie as well.

First, add it to your project Gemfile with gem "warden" and install it with bundle install.

Now add it to your config.ru file:

1
2
3
4
5
6
7
8
9
10
11
12
require "warden"

use Rack::Session::Cookie, secret: "MY_SECRET"

failure_app = Proc.new { |env| ['401', {'Content-Type' => 'text/html'}, ["UNAUTHORIZED"]] }

use Warden::Manager do |manager|
  manager.default_strategies :password, :basic
  manager.failure_app = failure_app
end

run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['get rack\'d']] }

Finally, run the server with rackup. It will find config.ru and boot up on port 9292.

Note that there is more setup involved in getting Warden to actually do authentication with your app. This is just an example of how to get it loaded into the middleware stack. To see a more fleshed-out example of integrating Warden, check out this gist.

By the way, there’s another way to define the middleware stack. Instead of calling use directly in config.ru, you can use Rack::Builder to wrap several middlewares and app(s) in one big application. For example:

1
2
3
4
5
6
7
8
9
10
11
12
failure_app = Proc.new { |env| ['401', {'Content-Type' => 'text/html'}, ["UNAUTHORIZED"]] }

app = Rack::Builder.new do
  use Rack::Session::Cookie, secret: "MY_SECRET"

  use Warden::Manager do |manager|
    manager.default_strategies :password, :basic
    manager.failure_app = failure_app
  end
end

run app

Rack Basic Auth

One really useful piece of middleware is Rack::Auth::Basic, which you can use to protect any Rack app with HTTP basic authentication. It is really lightweight and comes in handy for protecting little bits of an application. For example, Ryan Bates uses it to protect a Resque server in a Rails app in this episode of Railscasts.

Here’s how to set it up:

1
2
3
use Rack::Auth::Basic, "Restricted Area" do |username, password|
  [username, password] == ['admin', 'abc123']
end

That was easy!

Using Middleware in Rails

Now, so what? Rack is pretty cool, and we know that Rails is built on it. But understanding what Rack is doesn’t automatically make it useful when working with a production app.

How Rails Uses Rack

Did you ever notice that there’s a config.ru file in the root of every generated Rails project? Have you ever taken a look inside? Here’s what it contains:

1
2
3
4
# This file is used by Rack-based servers to start the application.

require ::File.expand_path('../config/environment', __FILE__)
run Rails.application

Pretty simple. It just loads up the config/environment file, then boots up Rails.application. Wait, what’s that? Taking a look in config/environment, we can see that Rails.application is defined in config/application.rb; config/environment just calls initialize! on it.

So what’s in config/application.rb? If we take a look, we see that it loads in the bundled gems from config/boot.rb, requires rails/all, loads up the environment (test, development, production, etc.), and defines a namespaced version of our application. It looks something like this:

1
2
3
4
5
module MyApplication
  class Application < Rails::Application
    ...
  end
end

So I guess that means that Rails::Application must be a Rack app? Sure enough! If we check out the source code, it responds to call!

So what middleware is it using? Well, I see that it’s autoloading rails/application/default_middleware_stack. Checking that out, it looks like it’s defined in ActionDispatch. Where does ActionDispatch come from? ActionPack.

Action Dispatch

Action Pack is Rails’s framework for handling web requests and responses. Action Pack is home to quite a few of the niceties you find in Rails, such as routing, the abstract controllers that you inherit from, and view rendering.

The most relevant part of AP for our discussion here is Action Dispatch. It provides several middleware components that deal with ssl, cookies, debugging, static files, and much more.

If you go take a look at each of the Action Dispatch middleware components, you’ll notice they all follow the Rack specification: each one responds to call and returns a status, headers, and body. Many of them also make use of Rack::Request and Rack::Response objects.

For me, reading through the code in these components took a lot of the mystery out of what’s going on behind the scenes when making requests to a Rails app. When I realized that it’s just a bunch of Ruby objects that follow the Rack specification, passing the request and response to each other, it made this whole section of Rails a lot less mysterious.

Now that we understand a little bit of what’s happening under the hood, let’s take a look at how to actually include some custom middleware in a Rails app.

Adding Your Own Middleware

Imagine you are hosting an application on Engine Yard. You have a Rails API running on one server, and a client-side JavaScript app running on another. The API has a url of https://api.myawesomeapp.com, and the client-side app lives at https://app.myawesomeapp.com.

You’re going to run into a problem pretty quickly: you can’t access resources at api.myawesomeapp.com from your JS app, because of the same-origin policy. As you may know, the solution to this problem is to enable Cross-origin resource sharing (CORS). There are many ways to enable CORS on your server, but one of the easiest is to use the Rack::Cors middleware gem.

Begin by requiring it in the Gemfile:

1
gem "rack-cors", require: "rack/cors"

As with so many things, Rails provides a very easy way to get middleware loaded. Although we certainly could add it to a Rack::Builder block in config.ru, as we did above, the Rails convention is to place it in config/application.rb, using the following syntax:

1
2
3
4
5
6
7
8
9
10
11
12
13
module MyAwesomeApp
  class Application < Rails::Application
    config.middleware.insert_before 0, "Rack::Cors" do
      allow do
        origins '*'
        resource '*',
        :headers => :any,
        :expose => ['X-User-Authentication-Token', 'X-User-Id'],
        :methods => [:get, :post, :options, :patch, :delete]
      end
    end
  end
end

Note that we’re using insert_before here to ensure that Rack::Cors comes before the rest of the middleware included in the stack by ActionPack (and any other middleware you might be using).

Now if you reboot the server, you should be good to go! Your client-side app can access api.myawesomeapp.com without running into same-origin policy JS errors.

If you want to learn more about how HTTP requests are routed through Rack in Rails, I’d suggest taking a look at this tour of the Rails source code that deals with handling requests.

Conclusion

In this post, we’ve taken an in-depth look at the internals of Rack, and by extension, the request/response cycle of several Ruby web frameworks, including Ruby on Rails.

Hopefully, understanding what’s going on when a request hits your server and your application sends back a response helps make things feel a little less magical. Because I don’t know about you, but when things go wrong, I have a lot harder time troubleshooting when there’s magic involved than when I understand what’s going on. In that case, I can say “oh, it’s just a Rack response”, and get down to fixing the bug.

If I’ve done my job, reading this article will enable you to do the same thing.

P.S. Do you know of any use-cases where a simple Rack app was enough to meet your business needs? What other ways do you integrate Rack apps in your bigger applications? We want to hear your battle stories! Leave us a comment!

Deploying and Customizing Applications on Engine Yard

This post originally appeared on Engine Yard.

Introduction

I’ve tried a lot of different Platform as a Service (PaaS) providers for hosting my applications. Some of them make it super-easy to get everything running on the server, but the magic gets in the way when you need to customize things.

Some platforms give you full control, but it can be time-consuming to get all of your dependencies properly set up. It would be nice to have some of the boilerplate taken care of, while still retaining full control of my server environment.

If you haven’t deployed an application to Engine Yard (EY), you should give it a try. You’ll be pleasantly surprised to find that this is exactly the kind of service offered. It’s a breeze to get Redis, cron, and any other tools you need installed. You also get root access to your server and can SSH in just as you would with a bare server.

My favorite feature has always been the ability to push custom Chef recipes to my server, making it super-easy to tweak the server as needed without having to spend a lot of time downloading Ubuntu packages and managing user permissions.

There is a little bit of a learning curve though, so I decided to deploy and customize a new app on Engine Yard so that I could document the process and help first-timers get up and running with a minimum of fuss.

Setting Up An Engine Yard Environment

I started out with a production application in a Git repository that was ready to go. It just needed a server to run on.

I went to Engine Yard and signed up for a free trial account.

Once I was done filling out my contact and billing information, I created my first “application” resource on Engine Yard Cloud. Here are the steps I followed to get it running from there.

First, I created a “production” environment for my application and configured it. I was given four choices of server beefiness: single instance, staging, production, or custom. I went with a production box, and added Phusion Passenger and PostgreSQL to the stack. Since I was deploying a Rails app, I also added Ruby 2.2.0 and set up my migration command. I was happy to see that EY would back up my database and take a server snapshot on a recurring schedule. I opted in for that service.

While the server was being provisioned, there were a few access-related tasks I had to take care of as well. First, I added the SSH keys from my development machine to my production environment. To do so, I visited the EY Cloud dashboard, then clicked on Tools, then SSH Keys and pasted my key into the text area, then hit the big Apply button on my app’s “production” environment page.

I also had to add an SSH key EY provided to my GitHub account. This allowed EY to grab my code and push it to the server directly.

A few minutes later, the server and my credentials were all set up, and I was ready to deploy. Next, I pressed Deploy. Unfortunately, there was a problem with my deploy, so I decided to dig into it from the command line…

Using the engineyard Gem

Configuring and Deploying

It turned out I’d forgotten to add a config/ey.yml file to my Rails project. This file is used to customize each of the Engine Yard environments the app is being deployed to. To add one, it’s easiest to use the engineyard gem.

To install the gem globally, I ran gem install engineyard on my local machine. Then I initialized an EY configuration file using ey init. I checked out the config/ey.yml file it generated. Everything looked good, so I committed and pushed it up to GitHub.

This time, I deployed using ey deploy, and it worked like a charm. Success!

Logging In and Out

  • ey login
  • ey logout
  • ey whoami

Custom Deploys

  • ey status shows the status of your most recent deploy
  • ey timeout-deploy marks the current deploy as failed and begins a new deploy
  • ey rollback reverts to a previous deployment

Environments

  • ey environments shows the environments for this app (pass --all to see all environments for all apps)
  • ey servers shows all the servers for an environment (if you have multiple)
  • ey rebuild reruns the configuration bootstrap process, useful for security patches and upgrades
  • ey restart restarts the servers

Debugging

  • ey logs shows the logs
  • ey web disable/enable toggles a maintenance page
  • ey ssh lets you SSH in
  • ey launch launches app in a browser window

Customizing The Server Environment

  • ey recipes upload uploads Chef recipes from your dev machine to the remote server
  • ey recipes download syncs Chef recipes from the remote server to your dev machine
  • ey recipes apply triggers a Chef run

These last few commands come in really handy when you want to customize your server setup. Let’s take a deeper look at how to upload custom Chef recipes to an application environment.

Chef Recipes

Engine Yard uses Chef under the hood to make your deploys quick and easy. There’s a default set of recipes that get run every time you deploy.

After the default recipes run, Engine Yard runs any custom recipes that you’ve added to your environment. Since there are so many Chef recipes available, getting dependencies set up is pretty straightforward.

Your Chef recipes will run whenever you create a new instance, add an instance to a cluster, run ey recipes apply, or trigger a Chef run from the Cloud Dashboard with the Upgrade or Apply buttons.

Getting Set Up

To add recipes to your application, you’ll need to fork the Engine Yard Cloud Recipes repo. Then, clone your fork down to your development machine, in a different directory than your application.

Default Recipes

The Engine Yard Cloud Recipes repo comes with cookbooks for most of the things you would ever need: Sidekiq, Redis, Solr, Elasticsearch, cron, PostgreSQL Extensions, and much more.

Here’s what I did to add Redis to my project.

1) Opened /cookbooks and found the subdirectory I wanted (/redis)
2) Uncommented include_recipe redis in cookbooks/main/recipes/default.rb
3) Saved the file, committed it, and pushed to my forked repo
4) Uploaded the recipes to my app with ey recipes upload -e production
5) Applied the recipes to my app with ey recipes apply -e production

I took a look at my Engine Yard dashboard, and a few short moments later, Redis was running on my server!

Custom Recipes

I wanted to add HTTP Basic Auth to my server, but it wasn’t one of the recipes in the repo, so I wrote my own recipe for it.

Here’s how I did it.

First, I opened up my ey-cloud-recipes fork repo and ran rake new_cookbook COOKBOOK=httpauth. This generated a bunch of files under cookbooks/httpauth/. Then I edited cookbooks/httpauth/recipes/default.rb like this:

1
2
3
4
5
6
7
8
9
10
11
12
sysadmins = search(:users, 'groups:sysadmin')

template "/etc/nginx/htpasswd.users" do
  source "nginx/htpasswd.users.erb"
  owner node['staging']['nginx']['user']
  group node['staging']['nginx']['user']
  mode "0640"
  variables(
    :sysadmins => sysadmins
  )
  notifies :restart, "service[nginx]", :delayed
end

With the httpauth recipe written, I next created an htpasswd.users.erb template under the cookbooks/httpauth/templates/default/nginx directory, and put this code in it:

1
2
3
<% @sysadmins.each do |sa| -%>
  <%= sa["id"] %>:<%= sa["htpasswd"] %>
<% end -%>

With the template in place, I added the recipe to cookbooks/main/recipes/default.rb (my main cookbook) by adding this line:

1
include_recipe "httpauth"

Finally, I checked my syntax with rake test (all good), committed my changes, and pushed to my fork. With the recipe ready, all that was left was to upload and apply it to my application with the following:

1
2
ey recipes upload -e production
ey recipes apply -e production

The recipe was successfully added to my server in the /etc/chef-custom directory. I know this because I logged in and took a look around.

How did I do that? I’m glad you asked.

Remote Access with SSH

If you ever need to confirm that your Chef recipes are configuring the server the way you expected, or need to access your server directly with root access for any other reason, you can use SSH to get a remote terminal.

There are three ways to do this:

1) Run ssh username@123.123.123.123, the old-fashioned way (you can find your server’s IP address in the Engine Yard dashboard).
2) Click on the SSH link in your application dashboard on EY.
3) Run ey ssh from the application directory on your dev machine.

When you login to your server, some helpful information about your app’s server environment is displayed:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Applications:
myapplication:
cd /data/myapplication/current # go to the application root folder
tail -f /data/myapplication/current/log/production.log # production logs
cat /data/nginx/servers/myapplication.conf # current nginx conf

SQL database:
cd /db # your attached DB volume

PostgreSQL:
tail -f /db/postgresql/$1.$2/data/pg_log/* # logs
pg_top -dmyapplication

Inspect node data passed to chef cookbooks:
sudo gem install jazor
sudo jazor /etc/chef/dna.json
sudo jazor /etc/chef/dna.json 'applications.map {|app, data| [app, data.keys]}'

Pretty cool. It’s nice to have access like this if you need it, without being responsible for configuring (and re-configuring) the entire machine by hand.

Conclusion

If you’re anything like me, you drag your feet about trying new things when it comes to sysops. Perhaps you’ve felt the pain of trying to take a server from vanilla Ubuntu to a custom build with cron, Redis, Elasticsearch and a bunch of other packages, carefully balancing everything so that it doesn’t fall apart. Many of us have also experienced getting stuck when using a full-service PaaS that isn’t working the way we expect, and not being able to customize things. Experiences like this make it hard for me to get excited about setting up servers, so I typically avoid it when I can.

That said, Engine Yard makes this sort of work a breeze. Their balance between automation and control gives you the best of both worlds. Getting up and running takes a little bit of learning, but the docs are super helpful and the support team is very responsive if you ever have any questions or need a hand.

If you haven’t given Engine Yard a try, why not give it a go?

P.S. What kind of custom Chef recipes are you using on your servers? I know that business requirements can lead to some pretty gnarly setups. Tell me about it via the comments.

Serving Custom JSON From Your Rails API With ActiveModel::Serializers

This post originally appeared on Engine Yard.

Introduction

These days, there are so many different choices when it comes to serving data from an API. You can build it in Node with ExpressJS, in Go with Martini, in Clojure with Compojure, and many more. But in many cases, you just want to bring something to market as fast as you can. For those times, I still reach for Ruby on Rails.

With Rails, you can spin up a functional API server in a very short period of time. Perhaps you object that Rails is too large, or that there’s “too much magic”. Have you ever checked out the rails-api gem? It lets you enjoy all the benefits of Rails without including unnecessary view-layer and asset-related code.

Rails-api is maintained by Carlos Antonio Da Silva, Santiago Pastorino, Rails Core team members, and all-around great Rubyist Steve Klabnik. When not busy working on Rails or the rails-api gem, they found the time to put together the active_model_serializers gem to make it easier to format JSON responses when using Rails as an API server.

ActiveModel::Serializers (AMS) is a powerful alternative to jbuilder, rabl, and other Ruby templating solutions. It’s easy to get started with, but when you want to serve data that doesn’t quite match up with the way ActiveRecord (AR) structures things, it can be hard to figure out how to get it to do what you want.

In this post, we’ll take a look at how to extend AMS to serve up custom data in the context of a Rails-based chat app.

Kicking It Off: Setting Up a Rails Server

Let’s imagine we are building a chat app, similar to Apple’s Messages. People can sign up for the service, then chat with their friends. Any two users in the system can have a continuous thread that goes back and forth.

Most of the presentation logic will happen in a client-side JavaScript app. For now, we’re only concerned with accepting and returning raw data, and we’ve decided to use a Rails server to build it.

To get started, we’ll run a rails new, but since we’re using the rails-api gem, we’ll need to make sure we have it installed first, with:

1
gem install rails-api

Once that’s done, we’ll run the following (familiar) command to start the project:

1
rails-api new mensajes --database=postgresql

cd into the directory and setup the database with:

1
rake db:create

Creating the Models

We’ll need a couple of models: Users and Messages. The workflow to create them should be fairly familiar:

1
2
rails g scaffold user username:string
rails g scaffold message sender_id:integer recipient_id:integer body:text

Open up the migrations, and set everything to null: false, then run rake db:migrate.

We’ll also need to set up the relationships. Be sure to test these relationships (I would suggest using the shoulda gem to make it easy on yourself).

1
2
3
4
class User < ActiveRecord::Base
  has_many :sent_messages, class_name: "Message", foreign_key: "sender_id"
  has_many :received_messages, class_name: "Message", foreign_key: "recipient_id"
end
1
2
3
4
class Message < ActiveRecord::Base
  belongs_to :recipient, class_name: "User", inverse_of: :received_messages
  belongs_to :sender, class_name: "User", inverse_of: :sent_messages
end

Serving the Messages

Let’s send some messages! Imagine for a minute that you’ve already set up some kind of token-based authentication system, and you have some way of getting ahold of the user that is making requests to your API.

We can open up the MessagesController, and since we used a scaffold, we should already be able to view all the messages. Let’s scope that to the current user. First we’ll write a convenience method to get all the sent and received messages for a user, then we’ll rework the MessagesController to work the way we want it to.

1
2
3
4
5
6
class User < ActiveRecord::Base
  ...
  def messages
    Message.where("sender_id = ? OR recipient_id = ?", self.id, self.id)
  end
end
1
2
3
4
5
6
class MessagesController < ApplicationController
  def index
    @messages = current_user.messages
    render json: @messages
  end
end

Assuming that we have created a couple of sent and received messages for the current_user, we should be able to take a look at http://localhost:3000/messages and see some raw JSON that looks like this:

1
[{"sender_id":1,"id":1,"recipient_id":2,"body":"YOLO","created_at":"2015-02-03T21:05:12.908Z","updated_at":"2015-02-03T21:05:12.908Z"},{"recipient_id":1,"id":2,"sender_id":2,"body":"Hello, world!","created_at":"2015-02-03T21:05:51.309Z","updated_at":"2015-02-03T21:05:51.309Z"}]

It’s kind of ugly. It would be nice if we could remove the timestamps and ids. This is where AMS comes in.

Adding ActiveModel::Serializers

Once we add AMS to our project, it should be easy to get a much prettier JSON format back from our MessagesController.

To get AMS, add it to the Gemfile with:

1
gem "active_model_serializers", github: "rails-api/active_model_serializers"

Then bundle install. Note that I’m using the edge version of AMS here because it supports belongs_to and other features. See the GitHub project README for some information about maintenance and why you might want to use an older version.

Now we can easily set up a serializer with rails g serializer message. Let’s take a look at what this generated for us. In app/serializers/message_serializer.rb, we find this code:

1
2
3
class MessageSerializer < ActiveModel::Serializer
  attributes :id
end

Whichever attributes we specify (as a list of symbols) will be returned in the JSON response. Let’s skip id, and instead return the sender_id, recipient_id, and body:

1
2
3
class MessageSerializer < ActiveModel::Serializer
  attributes :sender_id, :recipient_id, :body
end

Now when we visit /messages, we get this slightly cleaner JSON:

1
{"messages":[{"sender_id":1,"recipient_id":2,"body":"YOLO"},{"sender_id":2,"recipient_id":1,"body":"Hello, world!"}]}

Cleaning Up the Format

It sure would be nice if we could get more information about the other user, like their username, so that we could display it in the messaging UI on the client-side. That’s easy enough: we just change the MessageSerializer to use AR objects as attributes for the sender and recipient, instead of ids.

1
2
3
class MessageSerializer < ActiveModel::Serializer
  attributes :sender, :recipient, :body
end

Now we can see more about the Sender and Recipient:

1
{"messages":[{"sender":{"id":1,"username":"Ben","created_at":"2015-02-03T21:04:09.220Z","updated_at":"2015-02-03T21:04:09.220Z"},"recipient":{"id":2,"username":"David","created_at":"2015-02-03T21:04:45.948Z","updated_at":"2015-02-03T21:04:45.948Z"},"body":"YOLO"},{"sender":{"id":2,"username":"David","created_at":"2015-02-03T21:04:45.948Z","updated_at":"2015-02-03T21:04:45.948Z"},"recipient":{"id":1,"username":"Ben","created_at":"2015-02-03T21:04:09.220Z","updated_at":"2015-02-03T21:04:09.220Z"},"body":"Hello, world!"}]}

Actually, that might be too much. Let’s clean up how User objects are serialized by generating a User serializer with rails g serializer user. We’ll set it up to just return the username.

1
2
3
class UserSerializer < ActiveModel::Serializer
  attributes :username
end

In the MessageSerializer, we’ll use belongs_to to have AMS format our sender and recipient using the UserSerializer:

1
2
3
4
5
class MessageSerializer < ActiveModel::Serializer
  attributes :body
  belongs_to :sender
  belongs_to :recipient
end

If we take a look at /messages, we now see:

1
[{"recipient":{"username":"David"},"body":"YOLO","sender":{"username":"Ben"}},{"recipient":{"username":"Ben"},"body":"Hello, world!","sender":{"username":"David"}}]

Things are really starting to come together!

Conversations

Although we can view all of a user’s messages using the index controller action, or a specific message at the show action, there’s something important to the business logic of our app that we can’t do. We can’t view all of the messages sent between two users. We need some concept of a conversation.

When thinking about creating a conversation, we have to ask, does this model need to be stored in the database? I think the answer is no. We already have messages that know which users they belong to. All we really need is a way to get back all the messages between two users from one endpoint.

We can use a Plain Old Ruby Object (PORO) to create this concept of a conversation model. We will not inherit from ActiveRecord::Base in this case.

Since we already know about the current_user, we really only need it to keep track of the other user. We’ll call her the participant.

1
2
3
4
5
6
7
8
9
# app/models/conversation.rb
class Conversation
  attr_reader :participant, :messages

  def initialize(attributes)
    @participant = attributes[:participant]
    @messages = attributes[:messages]
  end
end

We’ll want to be able to serve up these conversations, so we’ll need a ConversationsController. We want to get all of the conversations for a given user, so we’ll add a class-level method to the Conversation model to find them and return them in this format:

1
# TODO: Insert JSON blob here

To make this work, we’ll run a group_by on the user’s messages, grouping by the other user’s id. We’ll then map the resulting hash into a collection of Conversation objects, passing in the other user and the list of messages.

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
class Conversation
  ...
  def self.for_user(user)
    user.messages.group_by { |message|
      if message.sender == user
        message.recipient_id
      else
        message.sender_id
      end
    }.map do |user_id, messages|
      Conversation.new({
        participant: User.find(user_id),
        messages: messages
      })
    end
  end
end

If we run this in the Rails Console, it seems to be working.

1
2
3
>Conversation.for_user(User.first)
...
=> [#<Conversation:0x007fbd6e5b9428 @participant=#<User id: 2, username: "David", created_at: "2015-02-03 21:04:45", updated_at: "2015-02-03 21:04:45">, @messages=[#<Message id: 1, sender_id: 1, recipient_id: 2, body: "YOLO", created_at: "2015-02-03 21:05:12", updated_at: "2015-02-03 21:05:12">, #<Message id: 2, sender_id: 2, recipient_id: 1, body: "Hello, world!", created_at: "2015-02-03 21:05:51", updated_at: "2015-02-03 21:05:51">]>]

Great! We’ll just call this method in our ConversationsController and everything will be great!

First, we’ll define the route in config/routes.rb:

Rails.application.routes.draw do
  ...
  resources :conversations, only: [:index]
end

Then, we’ll write the controller action.

# app/controllers/conversations_controller.rb

class ConversationsController < ApplicationController
  def index
    conversations = Conversation.for_user(current_user)
    render json: conversations
  end
end

Visiting /conversations, we should see a list of all the conversations for the current user.

Serializing Plain Old Ruby Objects

Whoops! When we visit that route, we get an error: `undefined method 'new' for nil:NilClass`. It’s coming from this line in the controller:

render json: conversations

It looks like the error is coming from the fact that we don’t have a serializer. Let’s make one with rails g serializer conversation. We’ll edit it to return its attributes, participant and messages.

class ConversationSerializer < ActiveModel::Serializer
  attributes :participant, :messages
end

Now when we try, we get another error, coming from the same line of the controller: undefined method 'read_attribute_for_serialization' for #<Conversation:0x007ffc9c1bed10>

Digging around in the source code for ActiveModel::Serializers, I couldn’t find where that method was defined. So I took a look at ActiveModel itself, and found it here. It turns out that it’s just an alias for send!

We can add that into our PORO easily enough:

class Conversation
  alias :read_attribute_for_serialization :send
  ...
end

Or, we could include ActiveModel::Serialization which is where our AR-backed objects got it.
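To see why the alias trick works outside of Rails, here’s a dependency-free sketch (the serializer machinery is stubbed out; only the alias itself is real): read_attribute_for_serialization simply forwards to send, so any object that responds to its attribute names can be read by a serializer.

```ruby
class Conversation
  # What ActiveModel::Serialization provides, reduced to its essence:
  # read_attribute_for_serialization is just an alias for send.
  alias :read_attribute_for_serialization :send

  attr_reader :participant, :messages

  def initialize(attributes)
    @participant = attributes[:participant]
    @messages = attributes[:messages]
  end
end

conversation = Conversation.new(participant: "David", messages: ["YOLO"])
conversation.read_attribute_for_serialization(:participant)
# => "David"
```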

Now when we take a look at /conversations, we get:

[{"participant":{"id":2,"username":"David","created_at":"2015-02-03T21:04:45.948Z","updated_at":"2015-02-03T21:04:45.948Z"},"messages":[{"sender_id":1,"recipient_id":2,"id":1,"body":"YOLO","created_at":"2015-02-03T21:05:12.908Z","updated_at":"2015-02-03T21:05:12.908Z"},{"sender_id":2,"id":2,"recipient_id":1,"body":"Hello, world!","created_at":"2015-02-03T21:05:51.309Z","updated_at":"2015-02-03T21:05:51.309Z"}]}]

Whoops. Not quite right. But the problem is similar to the one we had before in the MessageSerializer. Maybe the same approach will work. We’ll change the attributes to AR relationships.

class ConversationSerializer < ActiveModel::Serializer
  has_many :messages, class_name: "Message"
  belongs_to :participant, class_name: "User"
end

Almost! Now /conversations returns:

[{"messages":[{"body":"YOLO"},{"body":"Hello, world!"}],"participant":{"username":"David"}}]

We can’t see who the sender of each message was! AMS isn’t using the UserSerializer for the message sender and recipient, because we’re not using an AR object.

A little source code spelunking points the way to a fix.

class MessageSerializer < ActiveModel::Serializer
  attributes :body, :recipient, :sender

  def sender
    UserSerializer.new(object.sender).attributes
  end

  def recipient
    UserSerializer.new(object.recipient).attributes
  end
end

Now /conversations gives us what we want:

[{"messages":[{"body":"YOLO","recipient":{"username":"David"},"sender":{"username":"Ben"}},{"body":"Hello, world!","recipient":{"username":"Ben"},"sender":{"username":"David"}}],"participant":{"username":"David"}}]

And /messages still works as well!

Wrapping Up

The ActiveModel::Serializers gem claims to bring “convention over configuration to your JSON generation”. It does a great job of it, but when you need to massage the data, things can get a little bit hairy.

Hopefully some of the tricks we’ve covered will help you present JSON from your Rails API the way you want. For this, and virtually any other problem caused by the magic getting in the way, I can’t suggest digging through the source code enough.

At the end of the day, AMS is an excellent choice for getting your JSON API off the ground with a minimum of fuss. Good luck!

P.S. Have a different approach? Prefer rabl or jbuilder? Did I leave something out? Leave us a comment below!

Getting Started With Ruby Processing

This post was a featured article in Ruby Weekly #234.

It was originally published on Engine Yard.

Introduction

If you’re like me, you love to code because it is a creative process. In another life, I am a musician.

I’ve always loved music because it represents a synthesis of the measurable concreteness of math and the ambiguity of language. Programming is the same way.

But despite the creative potential of programming, I often find myself spending my days working out the kinks of HTTP requests or dealing with SSL certificates. Some part of me yearns for a purely Apollonian environment in which to use code to make something new and unseen.

When I feel a void for purely creative coding, I turn to the Processing language. Processing is a simple language, based on Java, that you can use to create digital graphics. It’s easy to learn, fun to use, and has an amazing online community comprised of programmers, visual artists, musicians, and interdisciplinary artists of all kinds.

In 2009, Jeremy Ashkenas, creator of Backbone.JS, Underscore.JS, and CoffeeScript, published the ruby-processing gem. It wraps Processing in a “thin little shim” that makes it even easier to get started as a Ruby developer. In this post, we’ll take a look at how you can create your first interactive digital art project in just a few minutes.

What Is Processing?

Processing is a programming language and IDE built by Casey Reas and Benjamin Fry, two protégés of interdisciplinary digital art guru John Maeda at the MIT Media Lab.

Since the project began in 2001, it’s been helping teach people to program in a visual art context using a simplified version of Java. It comes packaged as an IDE that can be downloaded and used to create and save sketches.

Why Ruby Processing?

Since Processing already comes wrapped in an easy-to-use package, you may ask: “why should I bother with Ruby Processing?”

The answer: if you know how to write Ruby, you can use Processing as a visual interface to a much more complex program. Games, interactive art exhibits, innovative music projects, anything you can imagine; it’s all at your fingertips.

Additionally, you don’t have to declare types, voids, or understand the differences between floats and ints to get started.

Although there are some drawbacks to using Ruby Processing, most notably slower performance, having Ruby’s API available to translate your ideas into sketches more than makes up for it.

Setup

When getting started with Ruby Processing for the first time, it can be a little bit overwhelming to get all of the dependencies set up correctly. The gem relies on JRuby, Processing, and a handful of other things. Here’s how to get them all installed and working.

I’ll assume you already have the following installed: homebrew, wget, java, and a ruby manager such as rvm, rbenv or chruby.

Processing

Download Processing from the official website and install it.

When you’re done, make sure that the resulting app is located in your /Applications directory.

JRuby

Although it’s possible to run Ruby Processing on the MRI, I highly suggest using JRuby. It works much better, since Processing itself is built on Java.

Install the latest JRuby version (1.7.18 at the time of this writing). For example, if you’re using rbenv, the command would be rbenv install jruby-1.7.18, followed by rbenv global jruby-1.7.18 to set your current ruby to JRuby.

Ruby Processing

Install the ruby-processing gem globally with gem install ruby-processing. If you’re using rbenv, don’t forget to run rbenv rehash.

JRuby Complete

You’ll need the jruby-complete Java jar. Fortunately, there are a couple of built-in Ruby Processing commands that make it easy to install. rp5 is the Ruby Processing command. It can be used to do many things, one of which is to install jruby-complete using wget. To do so, run:

rp5 setup install

Once it’s complete, you can use rp5 setup check to make sure everything worked.

Setup Processing Root

One final step. You’ll need to set the root of your Processing app. This one-liner should take care of it for you:

echo 'PROCESSING_ROOT: /Applications/Processing.app/Contents/Java' >> ~/.rp5rc

Ready To Go

Now that we have everything installed and ready to go, we can start creating our first piece of art!

Making Your First Sketch

There are two basic parts to a Processing program: setup and draw.

The code in setup runs one time, to get everything ready to go.

The code in draw runs repeatedly in a loop. How fast is the loop? By default, it’s 60 frames per second, although it can be limited by your machine’s processing power. You can also manipulate it with the frame_rate method.

Here’s an example sketch that sets the window size, background and stroke colors, and draws a circle with a square around it.

def setup
  size 800, 600
  background 0
  stroke 255
  no_fill
  rect_mode CENTER
end

def draw
  ellipse width/2, height/2, 100, 100
  rect width/2, height/2, 200, 200
end

Here’s a quick run-through of what each of these methods is doing:

- size(): Sets the window size. It takes two arguments: width and height (in pixels).
- background(): Sets the background color. It takes four arguments: R, G, B, and an alpha (opacity) value.
- stroke(): Sets the stroke color. Takes RGBA arguments, like background().
- no_fill(): Tells Processing not to fill in shapes with the fill color. You can turn it back on with fill(), which takes RGBA values.
- rect_mode(): Tells Processing to draw rectangles using the x and y coordinates as a center point, with the other two arguments specifying width and height. The other available modes are: CORNER, CORNERS, and RADIUS.
- ellipse(): Draws an ellipse or circle. Takes four arguments: x-coordinate, y-coordinate, width, and height.
- rect(): Draws a rectangle or square. Takes four arguments: x-coordinate, y-coordinate, width, and height.

Note that the coordinate system in Processing starts at the top-left corner, not in the middle as in the Cartesian Coordinate System.
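If it helps to think in Cartesian terms, a small helper method (hypothetical — not part of Processing or ruby-processing) can translate a centered, y-up coordinate into Processing’s top-left, y-down screen coordinates:

```ruby
# Convert a Cartesian point (origin at window center, y pointing up)
# into Processing screen coordinates (origin at top-left, y pointing down).
# This helper is purely illustrative; Processing sketches normally
# work in screen coordinates directly.
def to_screen(x, y, width, height)
  [width / 2 + x, height / 2 - y]
end

to_screen(0, 0, 800, 600)    # => [400, 300], the center of an 800x600 window
to_screen(100, 50, 800, 600) # => [500, 250]
```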

Running the Program

If you’re following along at home, let’s see what we’ve made! Save the code above into a file called my_sketch.rb.

There are two ways to run your program: you can either have it run once with rp5 run my_sketch.rb, or you can watch the filesystem for changes with rp5 watch my_sketch.rb. Let’s just use the run version for now.

Pretty basic, but it’s a good start! Using just the seven methods above, you can create all kinds of sketches.

Other Commonly Used Methods

Here are a few other useful Processing methods to add to your toolbox:

- line(): Draws a line. Takes four arguments: x1, y1, x2, y2. The line is drawn from the point at the x, y coordinates of the first two arguments to the point at the coordinates of the last two arguments.
- stroke_weight(): Sets the width of the stroke in pixels.
- no_stroke(): Tells Processing to draw shapes without outlines.
- smooth(): Tells Processing to draw shapes with anti-aliased edges. On by default, but can be disabled with no_smooth().
- fill(): Sets the fill color of shapes. Takes RGBA arguments.

For a list of all the methods available in vanilla Processing, check out this list. Note that the Java implementation of these methods is in camelCase, but in Ruby they are in snake_case.

Some methods have also been deprecated, usually because you can use Ruby to do the same thing more easily.

If you see anything in the Processing docs and can’t get it to run in Ruby Processing, use $app.find_method("foo") to search the method names available in Ruby Processing.

Responding to Input

Now that we know how to make a basic sketch, let’s build something that can respond to user input. This is where we leave static visual art behind, and start to make interactive digital art.

Although you can use all kinds of physical inputs to control Processing (e.g. Arduino, Kinect, LeapMotion), today we’ll just use the mouse.

Processing exposes a number of variables reflecting its state at runtime, such as frame_count, width, and height. We can use the mouse_x and mouse_y coordinates to control aspects of our program.

Here’s a sketch based on the mouse_x and mouse_y positions. It draws lines of random weight, starting at the top of the screen at the mouse’s x position (mouse_x, 0) and ending at the bottom of the screen at an x coordinate between 0 and 200 pixels to the right of the mouse’s y position (mouse_y + offset, height).

def setup
  size 800, 600
  background 0
  stroke 255, 60 # first argument is grayscale value, second is opacity
  frame_rate 8
end

def draw
  r = rand(20)
  stroke_weight r
  offset = r * 10
  line mouse_x, 0, mouse_y + offset, height
end

Load that up and check it out!

Wrapping Your Sketch in a Class

One last note before we go: you can totally call other methods from within your setup and draw methods. In fact, you can even wrap everything in a class that inherits from Processing::App.

You can do everything you normally do in Ruby, so you can build a whole project, with logic branches and state, that controls the visual effect through these two methods.

Here’s a snippet from a version of Tic Tac Toe I built with Rolen Le during gSchool.

require 'ruby-processing'

class TicTacToe < Processing::App
  attr_accessor :current_player

  def setup
    size 800, 800
    background(0, 0, 0)
    @current_player = 'x'
  end

  def draw
    create_lines
  end

  def create_lines
    stroke 256,256,256
    line 301, 133, 301, 666
    line 488, 133, 488, 666
    line 133, 301, 666, 301
    line 133, 488, 666, 488

    #borders
    line 133, 133, 666, 133
    line 666, 133, 666, 666
    line 133, 666, 666, 666
    line 133, 133, 133, 666
  end

  ...
end

To see the rest of the code, visit the GitHub repo.

Another example of a game I built early on in my programming career can be found here. I later did a series of refactorings of this code on my personal blog.

I’m still working on a game development pattern that I like. Keep an eye out for future posts about the best way to build a game with Ruby Processing.

Learning More

There’s so much more you can do in Processing than what we’ve covered here! Bézier curves, translations, rotations, images, fonts, audio, video, and 3D sketching are all available.

The best way to figure out how to do more is to do a lot of sketching. Just tinkering with the methods covered in this post would be enough to keep you busy creating new things for years.

If you’ve really caught the bug and want to go even deeper, check out some of these resources to learn more.

Built-in Samples

If you run rp5 setup unpack_samples, you’ll get a bunch of Processing sketch samples in a directory located at ~/rp_samples. I encourage you to open them up and take a look. There’s a lot you can glean by changing little bits of code in other projects.

Online Examples From Books

Learning Processing is an excellent book by Daniel Shiffman. In addition to being a valuable resource for Processing users, it has a number of examples available online.

Daniel Shiffman also wrote a book called The Nature of Code. The examples from it have been ported to Ruby and are another great resource for learning more.

Process Artist

There’s a great Jumpstart Lab tutorial called Process Artist, that walks you through building a drawing program à la MSPaint.

Conclusion

Processing is an awesome multi-disciplinary tool. It sits at the intersection of coding, visual art, photography, sound art, and interactive digital experiences. With the availability of Ruby Processing, it’s super easy to get started.

If you’re a programmer looking for a way to express your creativity, you couldn’t find a better way to do it than to try tinkering with Processing. I hope this post gets you off to a great start. Good luck and keep sketching!

Setting Up a Client-Side JavaScript Project With Gulp and Browserify

This post originally appeared on Engine Yard.

Introduction

For JavaScript developers, it can be hard to keep up to date with the latest frameworks and libraries. It seems like every day there’s a new something.js to check out. Luckily, there is one part of the toolchain that doesn’t change as often, and that’s the build process. That said, it’s worth checking out your options every now and then.

My build process toolset has traditionally been comprised of RequireJS for dependency loading and Grunt for task running. They’ve worked great, but recently I was pairing with someone who prefers to use Gulp and Browserify instead. After using them on a couple of projects, I’m coming to like them quite a bit. They’re great for use with Backbone, Angular, Ember, React, and my own hand-rolled JavaScript projects.

In this post, we’ll explore how to set up a clientside JavaScript project for success using Gulp and Browserify.

Defining the Project Structure

For the purposes of this post, we’ll pretend we’re building an app called Car Finder, that helps you remember where you parked your car. If you want to follow along, check out the code on GitHub.

When building a full application that includes both an API server and a clientside JavaScript app, there’s a certain project structure that I’ve found often works well for me. I like to put my clientside app in a folder one level down from the root of my project, called client. This folder usually has sibling folders named server, test, public, and build. Here’s how this would look for Car Finder:

car-finder
|- build
|- client
   |- less
|- public
   |- javascripts
   |- stylesheets
|- server
|- test

The idea is to do our app development inside of client, then use a build task to compile the JS and copy it to the build folder, where it will be minified, uglified, and copied to public to be served by the backend.

Pulling In Dependencies

To get up and running, we’ll need to pull in some dependencies.

Run npm init and follow the prompts.

Add browserify, gulp, and our build and testing dependencies:

npm install --save-dev gulp gulp-browserify browserify-shim gulp-jshint gulp-mocha-phantomjs \
gulp-rename gulp-uglify gulp-less gulp-autoprefixer gulp-minify-css mocha chai

If you’re using git, you may want to ignore your node_modules folder with echo "node_modules" >> .gitignore.

Shimming Your Frameworks

You’ll probably want to use browserify-shim to shim jQuery and your JavaScript framework so that you can write var $ = require('jquery') into your code. We’ll use jQuery here, but the process is the same for any other library (Angular, Ember, Backbone, React, etc.). To set it up, modify your package.json like so:

{
  "name": "car-finder",
  "author": "Ben Lewis",
  "devDependencies": {
    "gulp-rename": "^1.2.0",
    "gulp": "^3.8.10",
    "gulp-mocha-phantomjs": "^0.5.1",
    "gulp-jshint": "^1.9.0",
    "gulp-browserify": "^0.5.0",
    "browserify": "^6.3.4",
    "browserify-shim": "^3.8.0",
    "mocha": "^2.0.1",
    "gulp-minify-css": "^0.3.11",
    "gulp-uglify": "^1.0.1",
    "gulp-autoprefixer": "^2.0.0",
    "gulp-less": "^1.3.6",
    "chai": "^1.10.0"
  },
  "browserify-shim": {
    "jquery": "$"
  },
  "browserify": {
    "transform": [
      "browserify-shim"
    ]
  }
}

If you’re getting JSHint errors in your editor for this file, you can turn them off with echo "package.json" >> .jshintignore.

Setting Up Gulp

Now that we have the gulp package installed, we’ll configure gulp tasks to lint our code, test it, trigger the compilation process, and copy our minified JS into the public folder. We’ll also set up a watch task that we can use to trigger a lint and recompile of our project whenever a source file is changed.

We’ll start by requiring the gulp packages we want in a gulpfile.js that lives in the root of the project.

// Gulp Dependencies
var gulp = require('gulp');
var rename = require('gulp-rename');

// Build Dependencies
var browserify = require('gulp-browserify');
var uglify = require('gulp-uglify');

// Style Dependencies
var less = require('gulp-less');
var prefix = require('gulp-autoprefixer');
var minifyCSS = require('gulp-minify-css');

// Development Dependencies
var jshint = require('gulp-jshint');

// Test Dependencies
var mochaPhantomjs = require('gulp-mocha-phantomjs');

Now we can start defining some tasks.

JSHint

To set up linting for our clientside code as well as our test code, we’ll add the following to the gulpfile:

gulp.task('lint-client', function() {
  return gulp.src('./client/**/*.js')
    .pipe(jshint())
    .pipe(jshint.reporter('default'));
});

gulp.task('lint-test', function() {
  return gulp.src('./test/**/*.js')
    .pipe(jshint())
    .pipe(jshint.reporter('default'));
});

We’ll also need to define a .jshintrc in the root of our project, so that JSHint will know which rules to apply. If you have a JSHint plugin turned on in your editor, it will show you any linting errors as well. I use jshint.vim. Here’s an example of a typical .jshintrc for one of my projects. You’ll notice that it has some predefined globals that we’ll be using in our testing environment.

{
  "camelcase": true,
  "curly": true,
  "eqeqeq": true,
  "expr" : true,
  "forin": true,
  "immed": true,
  "indent": 2,
  "latedef": "nofunc",
  "newcap": false,
  "noarg": true,
  "node": true,
  "nonbsp": true,
  "quotmark": "single",
  "undef": true,
  "unused": "vars",
  "trailing": true,
  "globals": {
    "after"      : false,
    "afterEach"  : false,
    "before"     : false,
    "beforeEach" : false,
    "context"    : false,
    "describe"   : false,
    "it"         : false,
    "window"     : false
  }
}

Mocha

I’m a Test-Driven Development junkie, so one of the first things I always do when setting up a project is to make sure I have a working testing framework. For clientside unit testing, I like to use gulp-mocha-phantomjs, which we already pulled in above.

Before we can run any tests, we’ll need to create a test/client/index.html file for Mocha to load up in the headless PhantomJS browser environment. It will pull Mocha in from our node_modules folder, require build/client-test.js (more on this in a minute), then run the scripts:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Mocha Test Runner</title>
    <link rel="stylesheet" href="../../node_modules/mocha/mocha.css">
  </head>
  <body>
    <div id="mocha"></div>
    <script src="../../node_modules/mocha/mocha.js"></script>
    <script>mocha.setup('bdd')</script>
    <script src="../../build/client-test.js"></script>
    <script>
      if (window.mochaPhantomJS) {
        mochaPhantomJS.run();
      } else {
        mocha.run();
      }
    </script>
  </body>
</html>

Setting Up Browserify

Now we need to set up Browserify to compile our code. First, we’ll define a couple of gulp tasks: one to build the app, and one to build the tests. We’ll copy the result of the compile to public so we can serve it unminified in development, and we’ll also put a copy into build, where we’ll grab it for minification. The compiled test file will also go into build. Finally, we’ll set up a watch task to trigger rebuilds of the app and test when one of the source files changes.

gulp.task('browserify-client', ['lint-client'], function() {
  return gulp.src('client/index.js')
    .pipe(browserify({
      insertGlobals: true
    }))
    .pipe(rename('car-finder.js'))
    .pipe(gulp.dest('build'))
    .pipe(gulp.dest('public/javascripts'));
});

gulp.task('browserify-test', ['lint-test'], function() {
  return gulp.src('test/client/index.js')
    .pipe(browserify({
      insertGlobals: true
    }))
    .pipe(rename('client-test.js'))
    .pipe(gulp.dest('build'));
});

gulp.task('watch', function() {
  gulp.watch('client/**/*.js', ['browserify-client']);
  gulp.watch('test/client/**/*.js', ['browserify-test']);
});

There’s one more thing we’ll need to do before we can run our gulp tasks, which is to make sure we actually have index.js files in each of the folders we’ve told it to look at, so it doesn’t raise an error. Add one to the client and test/client folders.

Now, when we run gulp browserify-client from the command line, we see new build/car-finder.js and public/javascripts/car-finder.js files. In the same way, gulp browserify-test creates a build/client-test.js file.

More Testing

Now that we have Browserify set up, we can finish getting our test environment up and running. Let’s define a test Gulp task and add it to our watch. We’ll add browserify-test as a dependency for the test task, so our watch will just require test. We should also update our watch to run the tests whenever we change any of the app or test files.

gulp.task('test', ['lint-test', 'browserify-test'], function() {
  return gulp.src('test/client/index.html')
    .pipe(mochaPhantomjs());
});

gulp.task('watch', function() {
  gulp.watch('client/**/*.js', ['browserify-client', 'test']);
  gulp.watch('test/client/**/*.js', ['test']);
});

To verify that this is working, let’s write a simple test in test/client/index.js:

var expect = require('chai').expect;

describe('test setup', function() {
  it('should work', function() {
    expect(true).to.be.true;
  });
});

Now, when we run gulp test, we should see Gulp run the lint-test, browserify-test, and test tasks and exit with one passing example. We can also test the watch task by running gulp watch, then making changes to test/client/index.js or client/index.js, which should trigger the tests.

Building Assets

Next, let’s turn our attention to the rest of our build process. I like to use less for styling. We’ll need a styles task to compile it down to CSS. In the process, we’ll use gulp-autoprefixer so that we don’t have to write vendor prefixes in our CSS rules. As we did with the app, we’ll create a development copy and a build copy, and place them in public/stylesheets and build, respectively. We’ll also add the less directory to our watch, so changes to our styles will get picked up.

We should also uglify our JavaScript files to improve page load time. We’ll write tasks for minification and uglification, then copy the minified production versions of the files to public/stylesheets and public/javascripts. Finally, we’ll wrap it all up into a build task.

Here are the changes to the gulpfile:

gulp.task('styles', function() {
  return gulp.src('client/less/index.less')
    .pipe(less())
    .pipe(prefix({ cascade: true }))
    .pipe(rename('car-finder.css'))
    .pipe(gulp.dest('build'))
    .pipe(gulp.dest('public/stylesheets'));
});

gulp.task('minify', ['styles'], function() {
  return gulp.src('build/car-finder.css')
    .pipe(minifyCSS())
    .pipe(rename('car-finder.min.css'))
    .pipe(gulp.dest('public/stylesheets'));
});

gulp.task('uglify', ['browserify-client'], function() {
  return gulp.src('build/car-finder.js')
    .pipe(uglify())
    .pipe(rename('car-finder.min.js'))
    .pipe(gulp.dest('public/javascripts'));
});

gulp.task('build', ['uglify', 'minify']);

If we now run gulp build, we see the following files appear:

- build/car-finder.css
- public/javascripts/car-finder.min.js
- public/stylesheets/car-finder.css
- public/stylesheets/car-finder.min.css

Did It Work?

We’ll want to check that what we’ve built is actually going to work. Let’s add a little bit of styling and JS code to make sure it’s all getting compiled and served the way we hope it is. We’ll start with an index.html file in the public folder. It will load up the development versions of our CSS and JS files.

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Car Finder</title>
    <link rel="stylesheet" href="stylesheets/car-finder.css">
  </head>
  <body>
    <script src="javascripts/car-finder.js"></script>
  </body>
</html>

We’ll add some styling in client/less/index.less:

body {
  background-color: DarkOliveGreen;
}

Now we’ll write our million dollar app in client/index.js:

alert('I found your car!');

Let’s put it all together. Run gulp build, then open public/index.html. Our default browser opens a beautiful olive green screen with an alert box. Profit!

One Task To Rule Them All

At this point, I usually like to tie it all together with a default Gulp task, so all I have to do is run gulp to check that everything’s going together the way I expect, and start watching for changes. Since test already does the linting and browserifying, all we really need here is test, build, and watch.

gulp.task('default', ['test', 'build', 'watch']);

Wrapping Up

We’ve now set up our project to use Browserify and Gulp. The former took the headache out of requiring modules and dependencies, and the latter made defining tasks for linting, testing, less compilation, minification, and uglification a breeze.

I hope you’ve found this exploration of Gulp and Browserify enlightening. I personally love these tools; for the moment, they’re my defaults when creating a personal project. Hopefully this post helps make your day-to-day development more fun by simplifying things. Thanks for reading!

Instances, Classes, and Modules, Oh My!

This post originally appeared on Engine Yard and was later published on the Quick Left Blog.

Introduction

One of the biggest challenges of object oriented programming in Ruby is defining the interface of your objects. In other languages, such as Java, there is an explicit way to define an interface that you must conform to. But in Ruby, it’s up to you.

Compounding this difficulty is the problem of deciding which object should own a method that you want to write. Trying to choose between modules, class methods, instance methods, structs, and lambdas can be overwhelming.

In this post, we’ll look at several ways to solve the Exercism Leap Year problem, exploring different levels of method visibility and scope along the way.

Leap Year

If you sign up for Exercism.io and start solving Ruby problems, one of the first problems you will look at is called Leap Year. The tests guide you to write a solution that has an interface that works like this:

Year.leap?(1984)
#=> true

A leap year is defined as a year that is divisible by four, unless it’s also divisible by 100. Apparently, centuries aren’t leap years. That is, unless they are centuries that are divisible by 400. So there are three rules for leap years:

1) If it’s divisible by 400, it’s an exceptional century and is a leap year.
2) If it’s divisible by 100, it’s a mundane century and is not a leap year.
3) Otherwise, if it’s divisible by 4, it’s a leap year; otherwise it’s not.

The First Approach: Using Class Methods

Here’s one simple solution to this problem. You can rearrange the booleans in a couple of different ways and it will still work. This is the version I came up with:

class Year
  def self.leap?(year)
    year % 4 == 0 && !(year % 100 == 0) || year % 400 == 0
  end
end
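We can sanity-check the one-liner against the three rules with a few sample years (runnable in plain Ruby, no Exercism test harness needed):

```ruby
class Year
  def self.leap?(year)
    year % 4 == 0 && !(year % 100 == 0) || year % 400 == 0
  end
end

Year.leap?(1984) # => true  (divisible by 4, not a century)
Year.leap?(1900) # => false (a mundane century)
Year.leap?(2000) # => true  (an exceptional century)
```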

This is nice and compact, but not entirely easy to understand. I’m a pretty big fan of self-documenting code, so I used the extract method refactoring pattern to name these three rules.

class Year
  def self.leap?(year)
    mundane_leap?(year) || exceptional_century?(year)
  end

  def self.mundane_leap?(year)
    year % 4 == 0 && !century?(year)
  end

  def self.century?(year)
    year % 100 == 0
  end

  def self.exceptional_century?(year)
    year % 400 == 0
  end
end

Class Methods and Privacy

This is a lot more understandable, in my mind. But there’s a problem with this. mundane_leap?, century?, and exceptional_century? are all publicly exposed methods. They really only exist in support of leap?, and I’m not sure how reusable they are, with the possible exception of century?. If I write this test, it will pass:

  def test_exceptional_century
    assert Year.exceptional_century?(2400)
  end

I would like to make exceptional_century? private, so that it can’t be accessed outside of the Year class. I can try something like this:

class Year
  ...

  private

  def self.mundane_leap?(year)
    year % 4 == 0 && !century?(year)
  end

  def self.century?(year)
    year % 100 == 0
  end

  def self.exceptional_century?(year)
    year % 400 == 0
  end
end

But, unfortunately, this won’t work. The test still passes, because the private keyword doesn’t work on class methods. Instead, I would have to use private_class_method after my method definitions.

  private_class_method :mundane_leap?, :century?, :exceptional_century?

Now if I run that last test, it will raise an error.
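
Putting the pieces together, here’s a self-contained sketch of the class at this point (with century? taking its year argument):

```ruby
class Year
  def self.leap?(year)
    mundane_leap?(year) || exceptional_century?(year)
  end

  def self.mundane_leap?(year)
    year % 4 == 0 && !century?(year)
  end

  def self.century?(year)
    year % 100 == 0
  end

  def self.exceptional_century?(year)
    year % 400 == 0
  end

  private_class_method :mundane_leap?, :century?, :exceptional_century?
end

Year.leap?(2400) #=> true

begin
  Year.exceptional_century?(2400)
rescue NoMethodError
  # private method `exceptional_century?' called for Year:Class
end
```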

All About the Eigenclass

In my mind, what we’ve now done is somewhat better, but there’s still a smell here. I’ll get to exactly what that is in just a moment, but for now I’ll say that it’s due to the fact that we’ve defined all of these methods on the singleton class, or eigenclass, of Year. If you don’t know about eigenclasses, you can read about them here.
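
A quick way to see the eigenclass at work (Year here is just an empty stand-in class):

```ruby
class Year; end

# Defining a method directly on Year...
def Year.leap?(year)
  year % 4 == 0
end

# ...puts it on Year's singleton class (eigenclass), not on its instances.
Year.singleton_class.instance_methods(false) #=> [:leap?]
Year.instance_methods(false)                 #=> []
```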

We can define methods on an eigenclass by prepending each method definition with self., or we can use class << self or class << Year and nest our method definitions inside that block. Doing it this way makes it possible to use the private keyword, because we’re now working at the instance level of scope. If we were to introduce class << self, then, we could do away with our private_class_method call.

class Year
  class << self
    def leap?(year)
      mundane_leap?(year) || exceptional_century?(year)
    end

    private

    def mundane_leap?(year)
      year % 4 == 0 && !century?(year)
    end

    def century?(year)
      year % 100 == 0
    end

    def exceptional_century?(year)
      year % 400 == 0
    end
  end
end

But we haven’t really changed much here. In my mind, there’s still a big smell. According to Wikipedia, a class is (emphasis mine) “an extensible program-code-template for creating objects”. Our Year class never creates a single instance. It’s just an eigenclass that happens to be able to create (mostly) useless Year instances.

So where should we be putting class-level methods that are not associated with any instance object?

Second Approach: Module Functions

In Ruby, the Class object inherits from Module. A module is basically a collection of methods, constants, and classes. Their primary feature is that they can be mixed into other modules and classes to extend their functionality. They’re also frequently used for namespacing (for example, the ActiveRecord constant is a module).

We can put our Year implementation into a module without changing much: just swap the word class for module. Instead of using class << self as we did for the class version, we can use extend self in our module and get the same effect, allowing us to use the private keyword. extend takes the methods defined in a module and makes them class-level methods on the target module (or class). This is in contrast with include, which mixes them in as instance-level methods. Thus, if we extend the module into itself, it gets all of its own methods at the class level.
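
The difference between extend and include in a nutshell (Greeter is a made-up module for illustration):

```ruby
module Greeter
  def hello
    "hello"
  end
end

class ExtendedGreeter; extend Greeter; end
class IncludedGreeter; include Greeter; end

ExtendedGreeter.hello     #=> "hello" (class-level method)
IncludedGreeter.new.hello #=> "hello" (instance-level method)
```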

module Year
  extend self

  def leap?(year)
    mundane_leap?(year) || exceptional_century?(year)
  end

  private

  def mundane_leap?(year)
    year % 4 == 0 && !century?(year)
  end

  def century?(year)
    year % 100 == 0
  end

  def exceptional_century?(year)
    year % 400 == 0
  end
end

If we wanted to define some other methods in a Year class that had nothing to do with leap years, we could change the name of our module to Leap and mix it into a class called Year.

If we want to make the three leap year rule methods private, we now have another choice of how to do it. We can use module_function. module_function will make these methods available to the module, but when they get mixed into the class, they will be private. Module functions allow you to be selective with what can be called by the module itself, while still defining methods that can be mixed into other modules and classes.

module Leap
  def leap?(year)
    mundane_leap?(year) || exceptional_century?(year)
  end

  def mundane_leap?(year)
    year % 4 == 0 && !century?(year)
  end

  def century?(year)
    year % 100 == 0
  end

  def exceptional_century?(year)
    year % 400 == 0
  end

  module_function :mundane_leap?, :century?, :exceptional_century?
end

class Year; extend Leap; end

Now, if we run the tests, they will all still pass, with the exception of the one we wrote that tries to call exceptional_century? directly.
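
A stripped-down illustration of what module_function buys us:

```ruby
module Leap
  def century?(year)
    year % 100 == 0
  end

  module_function :century?
end

class Year; extend Leap; end

Leap.century?(1900) #=> true (callable on the module itself)

begin
  Year.century?(1900)
rescue NoMethodError
  # private when mixed in via extend
end
```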

Third Approach: Instance Methods

I want to stay with the idea that we might want a Year class that can do other things unrelated to determining whether we are talking about leap year or not. Really, there are a bunch of different years, but they all have certain things in common (which era they fall in, whether they are leap, etc.). In my mind, this would be a good use case for instances instead of class level methods.

What would it look like if we brought the responsibility for knowing whether a year is leap back into a Year class, but passed that behavior onto instances of Year instead of putting it on the eigenclass?

It’s a little weird to be able to do something like Year.new(2015).year, so I’ll name the state we’re going to store reckoning instead.

class Year
  def self.leap?(year)
    self.new(year).leap?
  end

  attr_reader :reckoning

  def initialize(reckoning)
    @reckoning = reckoning
  end

  def leap?
    mundane_leap? || exceptional_century?
  end

  private

  def mundane_leap?
    reckoning % 4 == 0 && !century?
  end

  def century?
    reckoning % 100 == 0
  end

  def exceptional_century?
    reckoning % 400 == 0
  end
end

The instance-based approach has certain advantages. We don’t have to use anything exotic like private_class_method, class << self, or module_function to make our methods private. This might make things easier to understand for future maintainers of our code.

We also gained access to the reckoning, which could be used to calculate other properties in the future. For example, if we wanted to write to_julian or to_hebrew methods, we’d already have the number we’d need for that calculation available to us.

Finally, we can now use a protected section as well. Methods in this section will only be visible to other instances of Year. We might use it to do something like this:

class Year
  ...

  protected

  def ==(other)
    self.reckoning == other.reckoning
  end

  ...
end

Conclusion

We’ve looked at several different ways to write a program that can tell us whether a given year is a leap year. Each has its advantages and disadvantages.

We started with a class method that was a single-liner. There’s a lot of value in the compactness of year % 4 == 0 && !(year % 100 == 0) || year % 400 == 0. But it’s also fairly hard to understand. If we decide to split some of that logic into separate methods, now we’re faced with the problem of how to keep these new methods out of the interface of Year.

We used an eigenclass-based approach, simply hiding the implementation methods with private_class_method. We solved it using a module, treating the leap? method both as a mix-in and as a module_function. Finally, we pushed the functionality down to the instance level, which could help us in dealing with multiple Year objects down the road.

Which of these approaches is best?

It depends on lots of things: how you feel about privacy, how you expect the program to change in the future, and how comfortable you are with scope and the more arcane methods in Ruby. At the very least, it’s nice to know what’s available to you when deciding which methods to expose and at what level of scope to define them. I hope this post will factor into your thoughts when making these kinds of decisions in the future.

P.S. What do you think? Did this make sense? Should I have used Structs or Lambdas? Throw us a comment below!

Measuring Client-Side JavaScript Test Coverage With Istanbul

This post originally appeared on Engine Yard.

It was also published on the Quick Left Blog.

Introduction

Testing is a vital part of any development process. Whether a project’s authors are hoping for scalability, fewer bugs, or just code that stands the test of time, a solid test suite is essential.

It’s all well and good writing tests, but how do you know that you’ve tested the code that matters? Did you cover the most important components? Did you test the sad path? The edge cases? What about the zero states?

Using Istanbul, you can generate a nice coverage report to answer these questions. In this article, we’ll look at how to get it set up in a typical client-side JS project.

A Note About Metrics

Before we get into the process of setting up Istanbul, I’d like to talk about code coverage as a metric. Metrics in programming can be a double-edged sword. On the one hand, they can be very useful in measuring velocity and predicting completion of new features. But they can also become self-fulfilling prophecies leading to bad code.

For example, if a project manager or tech lead measures her programmers’ effectiveness by counting the lines of code they write, she will not find that those who wrote the most lines are the ones that wrote the best code. In fact, there’s a danger that some of the programmers will adopt a verbose style, using far more lines than necessary to get the same thing done, in order to bump their ranking.

Placing a strong emphasis on test coverage as a metric can lead to a similar problem. Imagine that a company adopts a policy that any pull request must increase or maintain the percentage of lines tested. What will be the result?

I imagine that in many cases, developers will write good tests to accompany their features and bugfixes. But what happens if there’s a rush and they just want to get the code in?

Test coverage tools only count the lines that were hit when the test suite was run, so the developer can just run the line in question during the test, while making a trivial assertion. The result? The line is marked covered, but nothing meaningful is being tested.

Rather than using test coverage as a measure of developer thoroughness, it makes a lot more sense to use coverage as a way of seeing which code isn’t covered (hint: it’s often else branches). That information can then be used to prioritize testing goals.

In summary: don’t use code coverage to measure what’s tested, use it to find out what isn’t.

Time to Test

Let’s imagine that we have a client-side JavaScript application written in Angular, Ember, or Backbone and templated with Handlebars. It’s compiled with Browserify and built with NPM scripts.

This application has been around for a couple of years, and due to business pressures, its authors have only managed to write a handful of tests. At this point, the app is well-established, and there is a testing setup in place, but there’s also a lot of code that’s untested.

Recently, the company behind the application closed a funding round, and they’re feeling flush. We’ve been brought in to write some tests.

Because we’re hotshots and we want to show off, we decide to begin by taking a snapshot of the current code coverage, so that we can brag about how many percentage points we added to the coverage when we’re done.

Setting up Istanbul

This is where Istanbul comes in. Istanbul is a code coverage tool written in JavaScript by Krishnan Anantheswaran of Yahoo!.

It can be a little tricky to set up, so let’s take a look at one way to do it.

There are four steps in our approach:

  1. Instrument the source code with Istanbul
  2. Run mocha-phantomjs, passing in a hooks argument
  3. Use a phantomjs hook file to write out the results when testing is complete
  4. Run the Istanbul cli to generate the full report as an HTML file

Let’s get started.

1. Instrument the Source

We’ll need to find a way to run our code through Istanbul after it’s been compiled, so the first step is to set up an NPM task that will pipe compiled code into a tool like browserify-istanbul.

Just in case you’re not using browserify, a variety of other Istanbul NPM packages exist for instrumenting code, including browserify-gulp, grunt-istanbul-reporter, and borschik-tech-istanbul.

For the moment, let’s imagine that we are using Browserify. We already have NPM tasks in place to compile our code for development/production and for testing. Here’s what they look like. Note that the -o option to browserify specifies the output file for the build and the -d option turns on debugging.

In package.json:

{
  ...
  "scripts": {
    ...
    "build": "browserify ./js/main.js -o html/static/build/js/myapp.js",
    "build-test": "browserify -r handlebars:hbsfy/runtime ./js/test/index.js -o html/static/build/js/test.js -d --verbose"
  }
  ...
}

We can use the build-test task as a template for a new build-test-coverage task.

Before we do that, we’ll want to make sure we pull in browserify-istanbul with npm install --save-dev browserify-istanbul.

Next, we’ll write the task in package.json. We’ll ignore the Handlebars templates and Node modules when we load everything into Istanbul. We’ll also use the -t option to Browserify to use a transform module.

{
  ...
  "scripts": {
    ...
    "build-test-coverage": "mkdir -p html/static/build/js/ && browserify -r handlebars:hbsfy/runtime -t [ browserify-istanbul --ignore **/*.hbs **/bower_components/** ] ./js/test/index.js -o html/static/build/js/test.js -d"
  }
  ...
}

With the browserify-istanbul package and the build-test-coverage script in place, we’ve got our code instrumented, and we’re ready to move on to step two.

2. Run mocha-phantomjs, Passing in a Hooks Argument

Now that the code is all built and ready to go, we need to pass it into a test framework. We’ll write a CLI script that spawns mocha-phantomjs in a child process, passing in a hooks argument that specifies a phantom_hooks.js file we’ve yet to write.

(Note: if you’re using gulp or grunt, you may want to check out gulp-mocha-phantomjs or grunt-mocha-phantomjs for this step.)

In js/test/cli.js:

#!/usr/bin/env node

var spawn = require('child_process').spawn;

var child = spawn('mocha-phantomjs', [
  'http://localhost:9000/static/js/test/index.html',
  '--timeout', '25000',
  '--hooks', './js/test/phantom_hooks.js'
]);

child.on('close', function (code) {
  console.log('Mocha process exited with code ' + code);
  if (code > 0) {
    process.exit(1);
  }
});

With our cli script in place, we’ve now got a way to put our Istanbulified code into PhantomJS, so we’ll move to step three.

3. Use a PhantomJS Hook File to Write the Results

In the script we wrote in the last section, we passed a hooks file to mocha-phantomjs, but we hadn’t created it yet. Let’s do that now.

After all the tests have run, our hook will grab the __coverage__ property of the window, which contains the result of our coverage run, and write it to a coverage/coverage.json file.

We’ll load this data into the Istanbul CLI to generate a more readable report in the next step.

In js/test/phantom_hooks.js:

module.exports = {
  afterEnd: function(runner) {
    var fs = require('fs');
    var coverage = runner.page.evaluate(function() {
      return window.__coverage__;
    });

    if (coverage) {
      console.log('Writing coverage to coverage/coverage.json');
      fs.write('coverage/coverage.json', JSON.stringify(coverage), 'w');
    } else {
      console.log('No coverage data generated');
    }
  }
};

With our coverage data saved, we’re ready to move on to the last step.

4. Run the Istanbul CLI to Generate the Full Report

The final step in our process is to take the coverage data generated in step three and plug it into the Istanbul CLI to generate a coverage report HTML file.

We’ll write an NPM script that executes the istanbul report command, passing it the folder where we saved our results (coverage), and specifying lcov as our output format. This option will save both lcov and html files.

{
  ...
  "scripts": {
    ...
    "coverage-report": "istanbul report --root coverage lcov"
  }
  ...
}

Now we have all the scripts we need to generate and view our coverage report.

Generating the Report

We’ll need to run each of the commands that we’ve defined before we’ll be able to view our results (you may want to wrap these in a single NPM task for convenience).

  • npm run build-test-coverage to compile our test code and load it into Istanbul
  • ./js/test/cli.js to run the tests with mocha-phantomjs and write the coverage.json file
  • npm run coverage-report to format the coverage results as an HTML file
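
One way to wrap the three steps into a single task, sketched here as a hypothetical coverage script (the script name is an assumption) in package.json:

```json
{
  "scripts": {
    "coverage": "npm run build-test-coverage && ./js/test/cli.js && npm run coverage-report"
  }
}
```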

Once we’ve completed these steps, we should see a new coverage/lcov-report folder containing an index.html file and a few assets to make it look pretty.

Viewing the Report

If we open the generated file in our browser, we’ll see an easy-to-read breakdown of our coverage.

There are four category columns at the top of the page, each telling us about a different aspect of our coverage.

  • Statements tells us how many of the statements in our code were touched during the test run.
  • Branches tells us how many of our logical if/else branches were touched.
  • Functions tells us how many of our functions were touched.
  • Lines tells us the total number of lines of code that were touched.

There’s also a list of the folders in our project, each with a categorical coverage breakdown. You can also click on each of the folders to further break down the coverage file by file. If you then click on a file, you’ll see its contents highlighted in green and red to indicate covered and uncovered lines.

The ability to zoom in and out on our project and see the categorical breakdown at each level makes Istanbul particularly nice to work with. It also makes it easy to dig in and explore places in your code base that might benefit from some additional testing.

Wrapping Up

If you haven’t added code coverage to your JS projects yet, I highly recommend it. Getting everything up and running is a minimal time investment, and it can really pay off.

Although I would discourage you from measuring the success of your test suite based on percentage points only, getting an in-depth look at what you currently are and aren’t touching can really pave the way to uncovering bugs you didn’t even know you had.

I hope that this post has helped you get Istanbul up and running with a minimum of heartache. Until next time, keep testing!

Five Ruby Methods You Should Be Using

This post was the top featured article in Ruby Weekly #229. It was also the top featured article in issue #15.1 of the Pointer.io newsletter.

It was originally published on Engine Yard and also appeared on the Quick Left Blog.

Introduction

There’s something magical about the way that Ruby just flows from your fingertips. _why once said, “Ruby will teach you to express your ideas through a computer.” Maybe that’s why Ruby has become such a popular choice for modern web development.

Just as in English, there are lots of ways to say the same thing in Ruby. I spend a lot of time reading and nitpicking people’s code on Exercism, and I often see exercises solved in a way that could be greatly simplified if the author had only known about a certain Ruby method.

Here’s a look at some lesser-used methods that solve specific problems very well.

Object#tap

Did you ever find yourself calling a method on some object and getting back a return value you didn’t want? You were hoping to get back the object, but instead you got some other value. Maybe you wanted to add an arbitrary value to a set of parameters stored in a hash. You update it with Hash#[]=, but you get back 'bar' instead of the params hash, so you have to return it explicitly.

def update_params(params)
  params[:foo] = 'bar'
  params
end

The params line at the end of that method seems extraneous.

We can clean it up with Object#tap.

It’s easy to use. Just call it on the object, then pass tap a block with the code that you wanted to run. The object will be yielded to the block, then be returned. Here’s how we could use it to improve update_params:

def update_params(params)
  params.tap {|p| p[:foo] = 'bar' }
end

There are dozens of great places to use Object#tap. Just keep your eyes open for methods called on an object that don’t return the object, when you wish that they would.

Array#bsearch

I don’t know about you, but I do a lot of looking through arrays for data. Ruby enumerables make it easy to find what I need: select, reject, and find are valuable tools that I use daily. But when the dataset is big, I start to worry about the length of time it will take to go through all of those records.

If you’re using ActiveRecord and dealing with a SQL database, there’s a lot of magic that happens behind the scenes to make sure that your searches are conducted with the least algorithmic complexity. But sometimes you have to pull all of the data out of the database before you can work with it. For example, if the records are encrypted in the database, you can’t query them very well with SQL.

At times like these, I think hard about how to sift through the data with an algorithm that has the least complex Big O classification that I can. If you don’t know about Big O notation, check out Justin Abrahms’s Big-O Notation Explained By A Self-Taught Programmer or the Big-O Complexity Cheat Sheet.

The basic gist is that algorithms can take more or less time, depending on their complexity, which is ranked in this order: O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), O(n!). So we prefer searches to be in one of the classifications at the beginning of this list.

When it comes to searching through arrays in Ruby, the first method that comes to mind is Enumerable#find, also known as detect. However, this method will search through the entire list until the match is found. While that’s great if the record is at the beginning, it’s a problem if the record is at the end of a really long list. It takes O(n) complexity to run a find search.

There is a faster way. Using Array#bsearch, you can find a match with only O(log n) complexity. To find out more about how a Binary Search works, check out my post Building A Binary Search.

Here’s a look at the difference in search times between the two approaches when searching a range of 50,000,000 numbers:

require 'benchmark'

data = (0..50_000_000)

Benchmark.bm do |x|
  x.report(:find) { data.find {|number| number > 40_000_000 } }
  x.report(:bsearch) { data.bsearch {|number| number > 40_000_000 } }
end

         user       system     total       real
find     3.020000   0.010000   3.030000   (3.028417)
bsearch  0.000000   0.000000   0.000000   (0.000006)

As you can see, bsearch is much faster. However, there is a pretty big catch involved with using bsearch: the array must be sorted. This somewhat limits its usefulness, but it’s still worth keeping in mind for occasions where it might come in handy, such as finding a record by its created_at timestamp in a collection that has already been loaded from the database and sorted.
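
For example, here’s a sketch of that timestamp case, using bsearch’s find-minimum mode (Record is a stand-in for whatever your ORM returns):

```ruby
Record = Struct.new(:id, :created_at)

# Already sorted by created_at, as it came out of the database.
records = [
  Record.new(1, Time.new(2015, 1, 1)),
  Record.new(2, Time.new(2015, 3, 15)),
  Record.new(3, Time.new(2015, 6, 1)),
  Record.new(4, Time.new(2015, 9, 30))
]

target = Time.new(2015, 5, 1)

# Returns the first record created on or after the target date.
match = records.bsearch { |record| record.created_at >= target }
match.id #=> 3
```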

Enumerable#flat_map

When dealing with relational data, sometimes we need to collect a bunch of unrelated attributes and return them in an array that is not nested. Let’s imagine you had a blog application, and you wanted to find the authors of comments left on posts written in the last month by a given set of users.

You might do something like this:

module CommentFinder
  def self.find_for_users(user_ids)
    users = User.where(id: user_ids)
    users.map do |user|
      user.posts.map do |post|
        post.comments.map do |comment|
          comment.author.username
        end
      end
    end
  end
end

You would then end up with a result such as:

[[['Ben', 'Sam', 'David'], ['Keith']], [[], [nil]], [['Chris'], []]]

But you just wanted the authors! I guess we can call flatten.

module CommentFinder
  def self.find_for_users(user_ids)
    users = User.where(id: user_ids)
    users.map { |user|
      user.posts.map { |post|
        post.comments.map { |comment|
          comment.author.username
        }.flatten
      }.flatten
    }.flatten
  end
end

Another option would have been to use flat_map.

This just does the flattening as you go:

module CommentFinder
  def self.find_for_users(user_ids)
    users = User.where(id: user_ids)
    users.flat_map { |user|
      user.posts.flat_map { |post|
        post.comments.flat_map { |comment|
          comment.author.username
        }
      }
    }
  end
end

It’s not too much different, but better than having to call flatten a bunch of times.
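
The difference is easy to see on a toy example (note that flat_map only flattens one level):

```ruby
[[1, 2], [3, 4]].map { |pair| pair }      #=> [[1, 2], [3, 4]]
[[1, 2], [3, 4]].flat_map { |pair| pair } #=> [1, 2, 3, 4]
```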

Array.new with a Block

One time, when I was in bootcamp, our teacher Jeff Casimir (founder of Turing School) asked us to build the game Battleship in an hour. It was a great exercise in object-oriented programming. We needed Rules, Players, Games, and Boards.

Creating a representation of a Board is a fun exercise. After several iterations, I found the easiest way to set up an 8x8 grid was to do this:

class Board
  def board
    @board ||= Array.new(8) { Array.new(8) { 'O' } }
  end
end

What’s going on here? When you call Array.new with an argument, it creates an array of that length:

Array.new(8)
#=> [nil, nil, nil, nil, nil, nil, nil, nil]

When you pass it a block, it populates each of its members with the result of evaluating that block:

Array.new(8) { 'O' }
#=> ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']

So if you create an array of eight elements, where each element is itself an array of eight 'O' strings, you end up with an 8x8 grid populated with 'O' strings.

Using the Array.new with a block pattern, you can create all kinds of bizarre arrays with default data and any amount of nesting.
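
One caveat worth knowing: if you pass a default value instead of a block, every slot shares a single object.

```ruby
# With a block: each row is its own array.
grid = Array.new(2) { Array.new(2) { 'O' } }
grid[0][0] = 'X'
grid #=> [["X", "O"], ["O", "O"]]

# With a default argument: every row is the SAME array object.
shared = Array.new(2, Array.new(2, 'O'))
shared[0][0] = 'X'
shared #=> [["X", "O"], ["X", "O"]]
```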

<=>

The spaceship, or sort, operator is one of my favorite Ruby constructs. It appears in most of the built-in Ruby classes, and is useful when working with enumerables.

To illustrate how it works, let’s look at how it behaves for Fixnums. If you call 5<=>5, it returns 0. If you call 4<=>5, it returns -1. If you call 5<=>4, it returns 1. Basically, if the two numbers are equal, it returns 0; if the first is less than the second, it returns -1; and if the first is greater, it returns 1.

You can use the spaceship in your own classes by including the Comparable module and redefining <=> with logic branching to make it return -1, 0, and 1 for the cases you want.

Why would you ever want to do that?

Here’s a cool use of it I came across on Exercism one day. There’s an exercise called Clock, where you have to adjust the hours and minutes on a clock using custom + and - methods. It gets complicated when you try to add more than 60 minutes, because that will make your minute value invalid. So you have to adjust by incrementing another hour and subtracting 60 from the minutes.

One user, dalexj, had a brilliant way to solve this, using the spaceship operator:

  def fix_minutes
    until (0...60).member? minutes
      @hours -= 60 <=> minutes
      @minutes += 60 * (60 <=> minutes)
    end
    @hours %= 24
    self
  end

It works like this: until the minutes are in the range 0 to 59, he subtracts either 1 or -1 from the hours, depending on whether the minutes overflow past 59 or underflow below 0. He then adjusts the minutes, adding either -60 or 60 depending on the sort order.
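
Here’s a standalone trace of that arithmetic, assuming we start with 1 hour and 130 minutes:

```ruby
hours, minutes = 1, 130

until (0...60).member?(minutes)
  hours   -= 60 <=> minutes        # 60 <=> 130 is -1, so hours goes up by 1
  minutes += 60 * (60 <=> minutes) # and 60 minutes are carried away
end
hours %= 24

[hours, minutes] #=> [3, 10]
```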

The spaceship is great for defining custom sort orders for your objects, and can also come in handy for arithmetic operations if you remember that it returns one of three Fixnum values.
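
For instance, here’s a minimal sketch of a sortable value object (Version is a made-up class for illustration):

```ruby
class Version
  include Comparable

  attr_reader :major, :minor

  def initialize(major, minor)
    @major = major
    @minor = minor
  end

  # Array#<=> compares element by element, giving us -1, 0, or 1.
  def <=>(other)
    [major, minor] <=> [other.major, other.minor]
  end
end

# Comparable gives us <, >, ==, between?, min, max, and sort for free.
Version.new(1, 2) < Version.new(1, 10)           #=> true
[Version.new(2, 0), Version.new(1, 9)].min.major #=> 1
```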

Wrapping Up

Getting better at writing code is a process of learning. Since Ruby is a language, a lot of the time I spend trying to improve is spent reading “literature” (i.e. code on Exercism and GitHub), and reading what is essentially the dictionary for my language: the Ruby docs.

It’s so much easier to write expressive code when you know more methods. I hope that this collection of curiosities helped expand your Ruby vocabulary.