Fluxus Frequency

How I Hacked The Mainframe

Happy, Sad, Evil, Weird: Putting Use Case Planning Into Practice

This post originally appeared on Engine Yard.

Introduction

In part one of this miniseries, we introduced formal Use Case Analysis and a simplified version called Use Case Planning which fits a rapid, iterative development process. That post went over the high-level concepts, and explained how this planning method will help you catch problems with your design before you start to implement.

In this post, the final post of this miniseries, we’ll step through a concrete example so you can see how to put Use Case Planning into practice.

An Example

We’ll imagine that we work for a company that is building a multi-tenant Software as a Service (SaaS) platform where people can set up shops and sell products. Tenants will be able to charge their customers through the platform.

We’re part of a team that’s getting ready to create a credit card payment acceptance feature. It will be a credit card form common to all of our tenants. We’ll be writing the markup by hand and using Stripe to process the cards. We’re entering a sprint planning meeting to define the scope of the work to be done and decide how long it will take to build.

During this meeting, we’ll talk about many different aspects of the billing process. For the purposes of this post, let’s hone in on one specific feature: once a user has clicked Buy, they are presented with a credit card form. We want to plan what will happen when they try to make use of this form.

Let’s walk through the Use Case Planning process for this scenario.

Step 1. Identifying the Actors and Their Roles

People:

User: exchanges money for goods
Merchant: exchanges goods for money, and money for tenancy
Platform Owner: exchanges tenancy for money

Services:

SaaS Platform: provides the space for tenancy and for goods to be exchanged for money
Stripe: verifies and charges credit cards, handles much of PCI compliance
Credit Card Company: transfers funds between the other actors

Step 2. Describe The Purpose of The Feature

Why do we want to build a credit card form? So that we can debit the user and credit the merchant and platform owner.

Step 3. Identify Use Case Packages

In this step, each of the stakeholders will contribute their point of view to the discovery of behaviors that we should consider.

When considering a set of use cases, I often like to think through the alternative paths first. As I mentioned in part one, they can yield interesting decisions that affect the way the happy path will be built.

The Sad Path

This time, we’ll start with the sad path. Considering the sad path means thinking through what should happen when one of the actors does something differently than we want them to.

Let’s identify some of the sad path use cases of filling out a credit card form.

Here are a few examples:

  1. User fills out the credit card form with invalid credit card information

When this use case is identified, the designer might chime in that when this happens, the invalid fields should be highlighted and an error message should be displayed explaining what went wrong.

The QA technician might point out that these validations should be ironclad; no special characters should be allowed to pass through.

  2. Card is rejected by the credit card company

Here, the product owner might insist that it should take as few steps as possible to resubmit the form, so that the user doesn’t become frustrated and decide not to buy the product.

  3. Stripe accepts the card when it’s submitted via JavaScript, but the subsequent server-side charge request fails

The developer would want to make sure that passing error handling from the server back to the front end is captured in this use case.

The Evil Path

Coming up with evil paths requires you to think like an attacker. How many ways can you come up with to exploit the feature you’re trying to build?

For example:

  1. The price is set in a hidden form field, and a user figures out that they can change it to zero

In this scenario, the developer would want to make sure that the form never trusts a client-supplied price; the server should determine the amount to charge.

  2. Hacker steals credit card info from the database, server logs, or an insecure network request sent over HTTP

The product owner would want to mitigate against this as much as possible so as to protect the customers’ data. From a legal perspective, the product owner would also want to ensure that the SaaS company could not be held liable for any losses, and crucially, that PCI compliance was met.

  3. If we were to save the credit card information to the user account, a hacker could launch a CSRF attack, leveraging a logged-in user’s account information to order products without authorization

The developer would suggest using a CSRF token, and the QA technician would want to make sure that form submission failed when the token was changed.

  4. Security holes in the session or authentication layer open users up to fraudulent charges

Here, the QA technician might ask what would happen when cookie or local storage data is changed. Does it fail as it should?

The Weird Path

Coming up with weird paths requires a little more creativity.

Consider each component that your feature interacts with (both internally and externally, locally and remotely, and so on) and think through what would happen if that component failed or behaved in an unexpected way.

For instance:

  1. JavaScript is disabled in the user’s browser, and the event listener that would prevent the form from being submitted doesn’t fire. The form element falls back to its default behavior, which is to submit it to the SaaS server. The credit card number now appears in the server logs, making it vulnerable to information theft.

The developer would want to ensure that the form is built such that it would never be submitted to the SaaS server by mistake.

  2. The Stripe server is down

In this case, the designer would ask for some kind of error page, perhaps with a link to a status page where users could check for the servers to come back online.

  3. The connection to Stripe is interrupted during a transaction

The product owner might ask if we can resend the submission if the connection was interrupted. The developer would probably push back on that request, for security reasons. As a compromise, the designer might offer to invent an error state to be shown in this case.

The Happy Path

This one is easy. How do you want the feature to work?

  1. User successfully fills out form and clicks submit

Again, the designer would probably like to display some kind of success message here.

  2. Credit card charge is accepted by Stripe

In this scenario, the product owner might ask to have the page views leading to the successful charge tracked in an analytics service, so that we can analyze and encourage the same behavior in the future.

Step 4. Name and Diagram Use Cases

We’ve now identified twelve use cases for the credit card form feature.

I tried my hand at documenting them all in a Use Case Diagram. Green arrows represent good requests or responses, and red arrows represent errors. Bombs represent a broken network connection.

As you can see, there are tons of arrows.

There are a lot of possible scenarios, and a lot of possible communications between the actors in the system. If we hadn’t taken the time to think them through and uncover them all, there’s a high probability that we would have left some of these out.

Converting To User Stories

Now that we’ve gone through these four steps, we’ve come out with something very valuable: bite-sized sentences that can be translated directly into stories and entered into our tracker software.

Here’s what the first sad path case might look like when worded as a story:

As a user, when I complete the form with invalid information and click submit, I should see the invalid inputs become highlighted, and I should see validation errors telling me what went wrong so that I can correct my error and successfully buy products.

As you can see, we have an actor (user), action (fill out and submit form), a result (show validation errors), and a business purpose (the user can give us money).

These twelve stories are small and clear, and they lend themselves to being prioritized according to the needs of our business. Once we enter them into our tracker, we can be assured that they will all be built, and we can estimate how long it will take.

Conclusion

The Use Case Planning process is not super complicated. It consists of identifying who and what is involved, why we care, what should happen in the happy and alternative scenarios, and how the scenarios relate to each other.

It’s a relatively low level of effort to answer these four questions and break down the scenarios, but the result is worth its weight in gold. It enables us to identify architectural concerns and edge cases early on and change them at a low cost. We’ve also ended up with a set of small stories with clear acceptance criteria that we can track, providing a huge value both in accountability and estimating timelines.

Happy, Sad, Evil, Weird: Driving Feature Development With Feature Planning

This post originally appeared on Engine Yard.

Introduction

When building software iteratively, feature planning has to be done early and often. But it can be a complicated process due to all of the stakeholders involved, each with different viewpoints and goals.

What’s more, it’s easy to overlook key behaviors of a feature, which can lead to expensive and rushed code later. It’s usually intuitive to figure out what should happen when everything goes according to plan, but what about the edge cases? What should happen when a user supplies bad data? When a hacker launches a malicious attack on our application? What about when chaos makes the whole system unstable?

In the first post of this miniseries, we’ll take a look at one way to get everyone’s voice heard in the planning process, including the product owner, developer, designer, and QA engineer. Using this approach, teams can draw on their diverse perspectives to tease out a detailed blueprint of a feature that costs less and performs better.

Introducing Use Case Planning

Use Case Planning is a term that I’ve come up with to represent a simplified version of Use Case Analysis. I’m aiming to distill the software feature planning process into a simple, reusable procedure that will save teams money and help them build more robust systems.

With Use Case Planning, teams can stay flexible early in the game, when it’s still cheap to make big changes to the system.

It also helps us look ahead and find edge cases. For many people, it’s easy to press forward naïvely, writing stories about how a feature should behave entirely in terms of the best-case scenario (also known as the Happy or Golden Path). But there are many other cases to consider. What about bad data (Sad Path), hacker attacks (Evil Path), or web services going down (Weird Path)?

In the end, thinking through these scenarios in advance of development will save a company money and result in features that provide a better user experience.

What Is Use Case Planning?

Background

If you’re not familiar with Use Case Analysis, it’s an academic approach based in Object Oriented Analysis—a way of describing any kind of system (not just software) in terms of conceptual objects.

I first became aware of Use Case Analysis through the work of Mark Shacklette, a professor of Computer Science at the University of Chicago. In this paper, he lays out a very detailed process for building systems with Use Case Analysis. However, I found it to be over-complicated for regular use. I build a lot of software for clients, and need a “boiled down” version that I can reach for when planning sessions are constrained by time. Use Case Planning is my attempt at creating that.

What’s Required?

In the software industry, when we’re getting ready to build a feature, we have to answer three basic questions:

  1. What’s required to build it?
  2. How should it behave?
  3. What people and computer systems are involved?

Through the Use Case Planning process, we’ll answer these questions and come out with a blueprint detailing what we need, the dependencies involved, and the ways our feature should work.

Who’s Involved?

Feature planning is usually carried out by a team of stakeholders rather than an individual. Product owners, developers, designers, QA technicians, and so on all bring a unique perspective and set of concerns to the table. Drawing on these varied mindsets helps us make a more comprehensive plan than we could come up with on our own.

Scenarios

Let’s imagine we’re on such a team, and we want to plan a feature with Use Case Planning. We’ll need to break it down into as many use cases, or scenarios, as we can, weaving a story of interactions between people of various roles and the computer systems involved.

Why Think About Use Cases?

There are many benefits to breaking down a feature into use cases.

First of all, talking through software’s desired behavior in simple everyday language opens the conversation up to all stakeholders. The whole team can work together to determine how a feature should work without jargon getting in the way.

The team involved in the planning process is essentially trying to define a contract of what will be built and what it will do. Use Case Planning makes this contract more resistant to change, as unforeseen circumstances are accounted for before the first line of code is ever written. Also, functional parts of the system are less likely to fall through the cracks when a variety of scenarios are considered up front.

This approach can also help guide architectural decisions. Sometimes, a feature can be ruled out entirely, before any time is ever spent building it. For example, concern about having bots or script-generated users in a system could push a team toward including an email confirmation workflow, or choosing OAuth over traditional authentication.

In my mind, perhaps the most valuable benefit of use case analysis is that it gives teams a way to describe a system that costs very little to modify. In talking through alternate paths early, it’s easy to change requirements before anything’s ever built. This is a huge win, because as time goes on, code is written, and the system begins to take shape, the cost of change increases significantly.

Adding and removing features from a use case story or diagram is easy. Changing them in UX flow diagrams and wireframes is harder, in design comps harder still, in development code even harder, and in a production app it can be extremely hard. But when you’re planning, making changes is as easy as throwing away a sticky note and writing a new one, so it pays to think through as many of the potential scenarios as possible at this early stage.

How To Do Use Case Planning

Now that we’ve talked about the benefits and goals of use case planning, what exactly is it? As I mentioned above, this is my attempt at boiling Use Case Analysis down into a set of memorable, repeatable steps. They mirror the steps in Mark Shacklette’s original paper pretty closely, but I’ve tried to rework them to the minimum of what I think I would need when planning a feature before beginning work.

The Four Steps

To complete the four steps, answer these four questions:

  1. Who are the actors and what are their roles?
  2. What’s the purpose of this feature?
  3. What are the use cases?
  4. How do the use cases relate to each other?

In thinking about the first question, we’ll expand our definition of “actors” from just people to include everything that interacts with the system: users, administrators, our client-side application, our API, external APIs, cloud services, and hardware interfaces. Each of these actors has a role and a responsibility in the system that we should identify.

In identifying the purpose of the feature, we are just looking for a high-level summary of why we care about it. What’s the business value?

When we get to the third question, we’re ready to dig into the meat of this process. We’ll think through all of the use and misuse cases, considering the Happy, Sad, Evil, and Weird paths. For each one, we’ll choose a noun (the actor), a verb (the action taken), and a brief description of the desired result. We can also optionally add a more specific purpose (like the one we identified in step two) to each case as we go.

After step three, we will have identified many use cases, each with a name, an actor, and a breakdown of all the required behavior. Using this information, we can answer the last question (how do they relate?) by diagramming their interactions. There’s an example of a use case diagram for ordering at a restaurant on Wikipedia.

The diagram is in a formal Use Case Analysis style. For the purposes of planning software, it might make sense to replace this style of diagramming with a UX flow diagram, using the screens in the app to delineate how the feature should behave.

Wrapping It Up

Once we’ve identified the various use cases associated with our feature, we can translate these results directly into agile stories. Using a tracking tool like Sprint.ly, we can then prioritize the work that needs to be done and estimate the time it will take to complete the feature. Each use case can be sized, tagged, and tracked, providing a helpful look into the progress being made toward bringing our feature to fruition.

Conclusion

In this post, we introduced formal Use Case Analysis and a simplified version called Use Case Planning which fits a rapid, iterative development process. We went over the benefits, the steps involved, and explained how this planning method will help your team catch problems with your design before you start to implement it.

That’s it for this post, but tune in next week for part two (the final part) of this miniseries, where we’ll be stepping through a concrete example so you can see how to put Use Case Planning into practice.

Actually MVP

This post originally appeared on Engine Yard.

Introduction

In the startup world, there is a lot of talk about building Minimum Viable Products (MVPs). At this point, the concept has become so well-accepted that it has almost become a kind of unquestioned dogma. Yet there is a lot of disagreement about what MVP is exactly, and how to carry it out. Many people in the software industry assume that they know what MVP means, and claim to be using the process, but their production workflow tells a different story.

When it comes to building software, it is often tempting to take an approach akin to building a skyscraper: write the blueprints, obtain the necessary prerequisites, then build it to spec. But software is a quickly shifting market. A businessperson may think she knows what the market wants, and plan and begin a project to meet that desire. But by the time the product is built, the needs of consumers have often morphed in a direction that she could never have foreseen.

In this post, we’ll explore some common misconceptions about MVP, some different ways to approach building one with software, and how to best use this tool if you’re the CEO or CTO of a startup, a product manager for an established company, or a consultant.

Why MVP?

We hear a lot of talk about MVP and its value, but as a businessperson, why should you care? The reason is simple: it prevents you from spending money building a product that nobody wants.

When you build your business around small, successive iterations, you keep the time before you can reflect on lessons learned as short as possible. It can even push you to decide not to build your big idea, saving you valuable time and resources.

Another great benefit of an MVP approach is that it allows you to test a hypothesis with minimal resources. If you have no money in the bank, you can still get something off the ground.

History

In the startup world, the idea of MVP was popularized by Steve Blank and Eric Ries. Eric’s 2008 blog post, The Lean Startup, kicked off a movement in software development toward building companies around the idea of testing business hypotheses in an iterative way. The idea of MVP is central to this approach, and has become part of the lingua franca of startup culture.

The origins of MVP (and lean software development) draw on the Toyota corporation’s lean manufacturing approach, called the Toyota Production System (TPS). Toyota bigwig Taiichi Ohno coined the idea of “Just In Time” production, in which return on investment is maximized by reducing inventory.

Among other things, the TPS introduced the idea of Kanban. The key takeaway of TPS as it applies to software is this: production is determined according to the actual demand of the customer.

It’s Not What You Think It Is

We’ve all seen this picture, right?

Raise your hand if you think it’s a good idea.

Now raise your hand if you think you actually follow it. Really? Are you sure you didn’t motorize your skateboard? Put a steering wheel on your bike? Let me ask you this: did you actually do a customer interview at any point to see if they even wanted a car?

An MVP may not be what you think it is. Eric Ries defines it as “that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort”. What does that actually mean?

It means building just enough of a product to be deployed and used. It’s the minimum feature set that you need to find out whether it makes sense to invest further in an idea. If you’re building a dating site for dogs, what do you need? Profiles and messaging. You don’t need favoriting, automated emails, or the ability to see who viewed your profile.

If you’re breaking ground with a new idea, you can ask yourself: would people use this? If you’re spinning something that’s already out there, ask: would people love this more than what they’re using now?

Then answer that question. Build profiles and messaging, put it in front of some users, and see if they love it. Track clicks, invite them in for an interview, get their email and reach out personally. Identify Key Performance Indicators (KPIs) and use them to verify success.

If they love it, keep going! Add a “favorite” link that doesn’t actually work yet. Track clicks. If enough people are trying to favorite, build it. Invite users in for an interview. Get their email and reach out personally.

If you try several different feature sets and find out that nobody wants a dating site for dogs, it’s ok. In fact, that’s great. That means you really did it. You tested the market cheaply before you sank bags of money into building Doggie Dates. That’s a win.

An MVP is not just doing sprints to build your product over time. It’s an experimental process.

It’s a small vision, tested and validated thoroughly before moving forward.

It’s not twelve weeks of development and a release.

It’s continuous deployment.

It’s not a buggy alpha site with ten features that kind of work.

It’s the two most defining features needed for the product to be useful.

It’s bare, not broken.

On the other hand, an MVP approach is not just release early, release often, either. Yes, build a small thing. Yes, gather feedback and incorporate it. But don’t let the feedback cause you to pivot so hard that you can’t remember what you were trying to do in the first place. If you find yourself testing Amazon for dog toys instead of a dating site for dogs, you haven’t pivoted for product-market fit, you’ve pivoted to an entirely new idea.

Start with a vision and stay true to it. Build the skeleton, and let feedback from early users help you flesh out the details.

Really MVP

So you want to build a thing, huh? You’re going to change the world, like Steve Jobs? Slow down there, buckaroo! I hope you know what you’re getting into.

According to this article, there were 1.35 million tech startups as of February 2014. Before you start dreaming of all that VC cash and crack your wallet open to get things off the ground, why don’t you do a little experiment to see if it’s at all likely that you’ll end up anywhere besides broke.

The lean startup approach is all about the build-measure-learn cycle. Before you have a company, you’ll have to start by building something.

First Step: Build

What’s the smallest first step you can take?

The classic example of an MVP you can use to test an idea is a landing page with a sign up form. The idea is to build a pixel-perfect landing page touting all the benefits of joining Doggie Dates and deploy it on an easy-to-use platform like Engine Yard. Then you drive users to the site by purchasing some Google AdWords, and entice them to sign up with their email for early access to the application.

There has been some recent debate as to whether a landing page is even an MVP. Some, like Ramli John, would say no: this strategy doesn’t provide enough insight to complete the build-measure-learn cycle. Eric Ries and others seem to disagree. A landing page can provide enough information to build a successive MVP and continue to gather feedback. If people sign up, they’re probably interested in what you’re selling.

Although Ramli John isn’t a fan of landing pages, he does have some other great suggestions for ways to build a first MVP. Some startups, like AngelList, began as an email list. Blogs can be another great place to gather interested users. We’ve talked about Eric Ries quite a bit already. His blog covered topics like refactoring, TDD, and fundraising before he was known for the Lean Startup. You can also try getting off the ground with a video and a startup campaign. Finally, you can do it the old-fashioned way: the hustle. Just sell your service. In person. Before you build an app.

Landing pages are great for founders of small startups and developers with a great idea for a side project, but what does MVP mean if you have another sort of job?

If you’re a consultant like me, encourage your clients to prioritize their feature requests. We have so many awesome things in mind for Doggie Dating! Favoriting, profile walls, see who viewed me, a Dogs You Might Dig service, responsive layouts, native mobile versions… The list goes on and on. It’s great to write them all down and put them in a tracking tool like Sprint.ly, so that you don’t lose all these creative ideas. Then it’s time to prioritize.

What do you need first? What is the smallest version of your idea that people (or dogs) could possibly use? Put those tickets at the top of your tracker, and mark the point when they’ll be done with a release bar. Encourage your clients to ask themselves: “is this necessary for people to use the site?” before you put anything above that bar.

Once you’ve reached it, follow the steps below! Measure and learn before you go on. That way, you can collectively decide what should actually be built, instead of spending the client’s money building things that are going to end up being discarded.

If you’re a Product Owner, maybe you’re charged with exploring new ways that your company can gain more users, or convince the existing ones to pay more. There are some cool tricks you can use to sneak a feature MVP into what’s already there. You can make links that claim to take the user to one or more features, but are actually inactive (preferably with a modal dialog to explain what’s going on). Then you track the clicks and decide what to build from there. Or you can ask users to pay for a certain feature before you actually begin development on it. If you don’t reach a certain threshold of sign-ups, you just cancel it, apologize, and refund the money.

No matter your role, if people don’t seem to want what you’ve put out there, delete it and build another version of your dream. Be happy about all the time and money you just saved by not building something that nobody wants! Keep going until you find something that sticks. In this way, you’ll set up a great foundation on which to build the rest of your business.

Second Step: Measure

Regardless of how you choose to build the first version of your idea, you’ll want to measure user engagement once it’s deployed. The easiest way to do this is to add Google Analytics to the page and track clicks and pageviews. Reflecting on this information, you’ll begin to learn what people want. If anyone actually signs up on your landing page with their email, you can reach out personally (maybe even take them to lunch) and ask them questions to learn more about what people would want to use.

Once you’ve gotten past the first iteration of your MVP, you can also invite customers to user testing sessions. These sessions can offer great insight into how your application should behave, and which features are misunderstood or unwanted. Finally, A/B testing can be a great way to research which direction to go next once you’ve passed the early stages.

Third Step: Learn

After you’ve measured clicks, user responses, user testing, and A/B testing results, you can begin to draw conclusions. Maybe dating dogs don’t care about favoriting. Delete that feature. Maybe you heard over and over again that having a profile “wall” would make a huge difference to users. Perhaps that should be the next thing you build?

You need to sift through all of the information that you get and decide how to act.

Conclusion

In the software industry, a lot of people pay lip service to the idea of a Minimum Viable Product. But for many of us, it’s not what we think it is.

If you’re thinking: I know what people want, and I’m going to build it, you’ve already misunderstood the process. MVPs are experiments, research. How you use them differs a little bit depending on your situation, but the basic premise is the same. Build a small thing, measure the way it’s used, learn from it, repeat.

Seven Unusual Ruby Datastores

This post appeared in Ruby Weekly #240. It was also included in issue #27.1 of the Pointer.io newsletter. It originally appeared on the Engine Yard Blog

Introduction

Admit it: you like the unusual. We all do. Despite constant warnings against premature optimization, an emphasis on “readable code”, and the old aphorism, “keep it simple, stupid”, we just can’t help ourselves. As programmers, we love exploring new things.

In that spirit, let’s go on an adventure. In this post, we’ll take a look at seven lesser-known ways to store data in the Ruby language.

The Ones We Already Know

Before we get started, we’ll set a baseline. What are the ways to store data in Ruby that we use every day? Well, these are the ones that come to mind for me: string, array, hash, CSV, JSON, and the filesystem.

We can skip all of these.

So what are some of the other ways to store data in Ruby? Let’s find out.

Struct

What Is It?

A struct is a way of bundling together a group of variables under a single name. If you’ve done any C programming, you’ve probably come across structs before.

A struct is similar to a class. At its most basic, it’s a group of bundled attributes with accessor methods. You can also define methods that instances of the struct will respond to.

In Ruby, structs mix in Enumerable, so they come with all kinds of great behavior, like to_a, each, map, and member access with [].

You can define a struct object by setting a constant equal to Struct.new and passing in some default attribute names. From there, you can create any number of instances of the struct, passing in attribute values for that instance.

Let’s explore one:

Cat = Struct.new(:name, :breed, :hair_length) do
  def meow
    "m-e-o-w-w"
  end
end

tabby = Cat.new("Tabitha", "Russian Blue", "short")

tabby.name
=> "Tabitha"
tabby.meow
=> "m-e-o-w-w"
tabby[0]
=> "Tabitha"
tabby.each do |attribute|
  puts attribute
end
"Tabitha"
"Russian Blue"
"short"
=> #<struct Cat name="Tabitha", breed="Russian Blue", hair_length="short">

When Would You Use It?

If you want to quickly define a class that has easily accessible attributes and little other behavior, structs are a great choice. Since they also respond to enumerable methods, they are great for use as stubs in tests.

If you want to stub a class and send it a message in a test, but you don’t want to use a double, you can fake it with a struct in a single line of code.

Look how simple that is:

fake_stripe_charge = Struct.new(:create)

Next we’ll take a look at Struct’s close cousin, OpenStruct.

OpenStruct

What Is It?

An OpenStruct is somewhat like a hash. It’s a data structure that you can use to store and access key-value pairs. In fact, it really is a hash. Under the hood, each OpenStruct uses a hash for data storage. It also defines getters and setters automatically using method_missing and define_method.

There are three main differences between a struct and an open struct.

The first is that when you initialize a struct, you get back a class that inherits from Struct, which you must further instantiate, whereas calling new on an OpenStruct gives you back an OpenStruct object.

Secondly, OpenStructs don’t allow you to define behaviors by passing a block to the initializer as we did with the struct above.

Finally, an OpenStruct takes an argument that responds to each_pair (such as a hash), whereas structs expect a list of strings or symbols (to define their attribute names).

In the end, an OpenStruct is much simpler than a struct.

OpenStruct lives in the Ruby Standard Library, so to use it in your code, you’ll have to require 'ostruct'.

Let’s explore one:

luke = OpenStruct.new({
  home: "Tatooine",
  side: :light,
  weapon: :light_saber
})

luke
=> #<OpenStruct home="Tatooine", side=:light, weapon=:light_saber>
luke.home
=> "Tatooine"
luke.side = :dark
=> :dark
luke
=> #<OpenStruct home="Tatooine", side=:dark, weapon=:light_saber>

When Would You Use It?

As with Structs, I like to use OpenStructs as test stubs. Unfortunately, the metaprogramming used behind the scenes makes OpenStructs much slower than hashes, and they also respond to far fewer methods, so they aren’t as flexible for everyday use. However, their built-in getters make them really useful anywhere that you need to inject an object that responds to a certain method.
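
For example, here’s a minimal sketch of injecting one as a stub; the format_address method and its attribute names are made up for illustration:

require 'ostruct'

# A method that only cares that its argument responds to #street and #city
def format_address(address)
  "#{address.street}, #{address.city}"
end

fake_address = OpenStruct.new(street: "123 Main St", city: "Boulder")

format_address(fake_address)
# => "123 Main St, Boulder"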

Marshalling

What Is It?

Marshaling is a way to serialize Ruby objects into a binary format. It converts them into a bytestream that can be saved and reconstituted later.

You marshal objects by calling Marshal.dump and Marshal.load.

Here’s an example:

SpaceCaptain = Struct.new(:name, :rank, :affiliation)
=> SpaceCaptain

picard = SpaceCaptain.new("Jean-Luc Picard", "Captain", "United Federation of Planets")
=> #<struct SpaceCaptain name="Jean-Luc Picard", rank="Captain", affiliation="United Federation of Planets">

saved_picard = Marshal.dump(picard)
=> "\x04\bS:\x11SpaceCaptain\b:\tnameI\"\x14Jean-Luc Picard\x06:\x06ET:\trankI\"\fCaptain\x06;\aT:\x10affiliationI\"!United Federation of Planets\x06;\aT"
# Write to disk

loaded_picard = Marshal.load(saved_picard)
=> #<struct SpaceCaptain name="Jean-Luc Picard", rank="Captain", affiliation="United Federation of Planets">

When Would You Use it?

There are plenty of use cases for serializing code running in memory and saving it for later reuse. For example, if you were writing a video game and you wanted to make it possible for a player to save their game for later, you could marshal the objects in memory (e.g. the player, her location in a map, and any enemies that are nearby) and persist them. You could then load them up again when the player is ready to continue.

Although there are other data serialization formats available, such as JSON, XML, and YAML (which we’ll look at next), marshaling is by far the fastest option available in Ruby. That makes it particularly well-suited to situations where you’re dealing with large volumes of data or processing it at high speed.
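
As a rough sketch, the “# Write to disk” step from the example above could look something like this (the filename is arbitrary):

# Marshal produces binary data, so read and write it in binary mode
File.binwrite("picard.dump", Marshal.dump(picard))

loaded_picard = Marshal.load(File.binread("picard.dump"))
# => #<struct SpaceCaptain name="Jean-Luc Picard", rank="Captain", affiliation="United Federation of Planets">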

YAML

What is It?

YAML, which stands for YAML Ain’t Markup Language, is a widely-used format for serializing data in a human-readable format. It’s available in many languages, of which Ruby is only one. The most widely-used Ruby YAML parser, psych, is a wrapper around libyaml, the C language parser.

YAML lives in the Ruby Standard Library, so to use it in your code, you’ll have to require 'yaml'. You can use the YAML::Store library to easily save data to disk.

Here’s an example of how to use that library:

require 'yaml/store'

class Database
  DATABASE = YAML::Store.new('my_database')

  def self.save_person(user_data)
    DATABASE.transaction do
      DATABASE["people"] ||= []
      DATABASE["people"] << user_data
    end
  end
end

bilbo = {
  race: :hobbit,
  aliases: ["Bilba Labingi"],
  home: "The Shire",
  inventory: [:the_one_ring, :arkenstone]
}

Database.save_person(bilbo)
=> [{:race=>:hobbit, :aliases=>["Bilba Labingi"], :home=>"The Shire", :inventory=>[:the_one_ring, :arkenstone]}]

Here’s what my_database would look like after running this code:

---
people:
- :race: :hobbit
  :aliases:
  - Bilba Labingi
  :home: The Shire
  :inventory:
  - :the_one_ring
  - :arkenstone

When Would You Use it?

YAML serves the same function as marshaling: it’s a way to serialize Ruby objects for storage. It’s quite a bit slower, but it’s human-readable.

YAML is working behind the scenes when ActiveRecord is used to serialize a record attribute containing a hash or an array and save it to a text column in the database. When the attribute is retrieved, ActiveRecord deserializes it back from YAML into a Ruby object of its original data type.
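
Here’s a rough sketch of what that looks like in an ActiveRecord model; the User class and its preferences text column are hypothetical:

class User < ActiveRecord::Base
  # "preferences" is a text column; ActiveRecord stores the Hash as YAML
  # on save and turns it back into a Hash when the record is loaded
  serialize :preferences, Hash
end

user = User.create(preferences: { theme: "dark", newsletter: false })
user.reload.preferences
# => {:theme=>"dark", :newsletter=>false}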

Set

What Is It?

If you’re familiar with mathematical set theory, the Set class should be pretty intuitive. Sets respond to intersection, difference, merge, and many other Set operations.

It allows you to define a data structure that behaves like an unordered array that can only contain unique members. It exposes many of the same methods available when accessing arrays, but with faster lookups. Like OpenStruct, Set uses a Hash under the hood.

Sets can be saved in Redis, which makes it possible to look them up very quickly.

Set lives in the Ruby Standard Library, so to use it in your code you’ll have to require 'set'.

Here’s an example:

require 'set'

basic_lands = Set.new
[:swamp, :island, :forest, :mountain, :plains].each do |land|
  basic_lands << land
end

basic_lands
=> #<Set: {:swamp, :island, :forest, :mountain, :plains}>

basic_lands << :swamp
# does nothing
=> #<Set: {:swamp, :island, :forest, :mountain, :plains}>

fires_lands = Set.new
[:forest, :mountain, :city_of_brass, :karplusan_forest, :rishadan_port].each do |land|
  fires_lands << land
end
fires_lands
=> #<Set: {:forest, :mountain, :city_of_brass, :karplusan_forest, :rishadan_port}>

basic_lands.intersection(fires_lands)
=> #<Set: {:forest, :mountain}>

basic_lands.difference(fires_lands)
=> #<Set: {:swamp, :island, :plains}>

basic_lands.subset?(fires_lands)
=> false

basic_lands.merge(fires_lands)
=> #<Set: {:swamp, :island, :forest, :mountain, :plains, :city_of_brass, :karplusan_forest, :rishadan_port}>

When Would You Use it?

Sets are great for situations where you need to make sure that a given element isn’t contained in a collection more than once. For example, you might use one to track tags in an application that isn’t backed by a database.

They’re also great for comparing the equality of two lists without caring about their order (as an array would). You could use this feature to check whether the data stored in memory is in sync with another collection fetched from a remote server.
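
Here’s a quick sketch of that kind of order-insensitive comparison; the tag lists are made up:

require 'set'

local_tags  = ["ruby", "rails", "postgres"]
remote_tags = ["postgres", "ruby", "rails"]

local_tags == remote_tags
# => false (arrays care about order)

Set.new(local_tags) == Set.new(remote_tags)
# => true (sets only care about membership)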

Queue

What Is It?

A Queue is a place that can be used to hold values that you want to share between threads. It’s basically a thread-safe, first-in-first-out list that is visible to all of the concurrently running threads in a given Ruby process.

If you want to limit the amount of data that can be shared, you can use a SizedQueue.

Here’s an example:

require 'thread'
chess_moves = Queue.new

player_moves = Thread.new do
  chess_moves << "e4"
  sleep(1)
  chess_moves << "e5"
  sleep(1)
  chess_moves << "f4"
end

game_board = Thread.new do
  # pop blocks until a move is available, so the consumer won't
  # exit early if it starts running before the producer pushes
  3.times do
    move = chess_moves.pop
    # update the UI with the move
  end
end

When Would You Use it?

Queues are extremely helpful in any application that runs code concurrently. For example, background processing libraries like Resque and Sidekiq use a queue (stored in Redis) to hold the latest jobs and hand them off to workers.
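
As mentioned earlier, a SizedQueue caps how many items can be waiting at once. Here’s a minimal sketch: push blocks whenever the queue is full, until a consumer pops something off.

require 'thread'

buffer = SizedQueue.new(2) # holds at most two items at a time

producer = Thread.new do
  5.times do |i|
    buffer.push(i) # blocks while the queue already contains two items
  end
end

consumer = Thread.new do
  5.times do
    puts buffer.pop # blocks until an item is available
  end
end

[producer, consumer].each(&:join)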

ObjectSpace

What Is It?

The ObjectSpace module is a collection of methods that can be used to interact with all of the living objects in the current Ruby environment, as well as the garbage collector.

You can use it to check out all of the objects currently living in memory, look up objects by the object ID reference, and trigger garbage collector runs. You can also define a hook to be triggered when any object of a given class is removed from the ObjectSpace using ObjectSpace#define_finalizer.
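
For example, here’s a small sketch of a finalizer hook; the Connection class is just a stand-in:

class Connection; end

conn = Connection.new

# The proc is called with the object's id once the object is garbage collected
ObjectSpace.define_finalizer(conn, proc { |id| puts "Connection #{id} was finalized" })

conn = nil
GC.start # the message prints once the object is actually reclaimed (or at program exit)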

How is ObjectSpace a data store? Well, it’s the highest-level data store (that hasn’t been interpreted or compiled yet) in any place that you can run Ruby code. Any time you define or remove an object from memory, you are changing what is visible in the ObjectSpace.

Let’s take a look at everything that’s available in an IRB session.

object_counts = Hash.new(0)
ObjectSpace.each_object do |o|
  object_counts[o.class] += 1
end

require "pp"
pp object_counts

{
  String=>67073,
  Array=>14474,
  Regexp=>164,
  Gem::Specification=>299,
  Hash=>1023,
  # and many more...
}

If we create a new object, it ends up in the ObjectSpace.

require "ostruct"

ObjectSpace.each_object(OpenStruct).count
=> 0
spidey = OpenStruct.new({ name: "Peter Parker", species: "Human Mutate" })

ObjectSpace.each_object(OpenStruct).count
=> 1

When Would You Use it?

Although you already use the ObjectSpace all the time, whether you realize it or not, knowing about the methods it exposes opens up a lot of possibilities for investigating and improving the performance of your code.

The best use I’ve seen for ObjectSpace so far is using it to detect memory leaks. This article shows an interesting way to map the objects in your object space to create a graph that is useful in tracking down and fixing memory leaks.

Conclusion

Ruby is such a fun language to write because there are so many ways to say the same thing. It doesn’t stop at writing statements and expressions, though. You can also store data in a huge number of ways.

In this post, we looked at seven fairly unusual ways to handle data in Ruby. Hopefully, reading through them has given you some ideas for how to handle persistence or in-memory storage in your own applications.

Until next time, happy coding!

P.S. We know that there are other unusual datastores out there. What are some of your favorites, and how do you use them? Leave us a comment!

Integrating React With Backbone

This post originally appeared on Engine Yard.

Introduction

There are so many JS frameworks! It can get tiring to keep up to date with them all.

But like any developer who writes JavaScript, I try to keep abreast of the trends. I like to tinker with new things, and rebuild TodoMVC as often as possible.

Joking aside, when it comes to choosing frameworks for a project, emerging frameworks just haven’t been battle-tested enough for me to recommend to clients in most cases.

But like much of the community, I feel pretty confident in the future of React. It’s well documented, makes reasoning about data easy, and it’s performant.

Since React only provides the view layer of a client-side MVC application, I still have to find a way to wrap the rest of the application. When it comes to choosing a library that I’m confident in, I still reach for BackboneJS. A company that bets on Backbone won’t have trouble finding people who can work on their code base. It’s been around for a long time and is unopinionated enough to adapt to many different situations. And as an added bonus, it plays well with React.

In this post, we’ll explore the relationship between Backbone and React, by looking at one way to structure a project that uses them together.

A Note About Setting Up Dependencies

I won’t go over setting up all of the package dependencies for the project here, since I’ve covered this process in a previous post. For the purposes of this article, you can assume that we’re using Browserify.

One package that is worth noting, though, is ReactBackbone. It will allow us to trigger an automatic update of our React components whenever Backbone model or collection data changes. You can get it with npm install --save react.backbone.

We’ll also be making use of backbone-route-control to make it easier to split our URL routes into logically encapsulated controllers. See “caching the controller call” in this article for more information about how to set this package up.

Project Structure

There are many ways to structure the directories for a client-side JS application, and every project lends itself to a slightly different setup. Today we’ll be creating our directory structure in a fairly typical fashion for a Backbone project. But we’ll also be introducing the concept of screens to our application, so we’ll be extending it slightly.

Here’s what we’ll need:

assets/
  |- js/
     |- collections/
     |- components/
     |- controllers/
     |- models/
     |- screens/
     |- vendor/
     |- app.js
     |- base-view.js
     |- index.js
     |- router.js

Much of this is standard Backbone boilerplate. The collections/, models/, and vendor/ directories are self-explanatory. We’ll store reusable UI components, such as pagination, pills, and toggles, in components/.

The heart of our app will live in the screens/ directory. Here, we’ll write React components that will handle the display logic, taking the place of traditional Backbone views and templates. However, we’ll still include thin Backbone views to render these components.

We’ll talk more about screens in a moment. For now, let’s take a look at how a request will flow through the application, starting from the macro level.

The Application

We’ll begin by writing a root-level index.js file, which will be the source of the require tree that Browserify will use.

window.$ = window.jQuery = require('jquery');
var Application = require('./app');

window.app = new Application();

What is this Application, you may ask? Simply put, it’s the function we’ll use to bootstrap the entire project. Its purpose is to get all of the dependencies set up, instantiate the controllers and router, kick off Backbone history, and render the main view.

var Backbone = require('backbone');

var Router = require('./router');
var MainView = require('./screens/main/index');

var UsersController = require('./controllers/users-controller');

Backbone.$ = $;

var Application = function() {
  this.initialize();
};

Application.prototype.initialize = function() {
  this.controllers = {
    users: new UsersController({ app: this })
  };

  this.router = new Router({
    app: this,
    controllers: this.controllers
  });

  this.mainView = new MainView({
    el: $('#app'),
    router: this.router
  });

  this.showApp();
};

Application.prototype.showApp = function() {
  this.mainView.render();
  Backbone.history.start({ pushState: true });
};

module.exports = Application;

Router

Once the application has been booted up, we’ll want to be able to accept requests. When one comes in, our app will need to be able to take a look at the URL path in the navigation bar and decide what to do. This is where the router comes in. It’s a pretty standard part of any Backbone project, so it probably won’t look too out of the ordinary, especially if you’ve used backbone-route-control before.

var Backbone = require('backbone');
var BackboneRouteControl = require('backbone-route-control');

var Router = BackboneRouteControl.extend({
  routes: {
    '':          'users#index',
    'users':     'users#index',
    'users/:id': 'users#show'
  }
});

module.exports = Router;

When one of these routes is hit, the router will take a look at the controllers we passed into it during app initialization, find the controller with the name to the left of the # and try to call the method name to the right of the # in the string defined for that route.

Controllers

Now that the request has been routed through one of the routes, the router will take a look in the matching controller for the method that is to be called. Note that these controllers are not a part of Backbone, but Plain Old JavaScript Objects.

For the purposes of this post, we’ll just have a UsersController with two actions.

var UsersCollection = require('../collections/users-collection');
var UserModel = require('../models/user');
var UsersIndexView = require('../screens/users/index');
var UserShowView = require('../screens/users/show');

var UsersController = function(options) {
  var app = options.app;

  return {
    index: function() {
      var usersCollection = new UsersCollection();

      usersCollection.fetch().done(function() {
        var usersView = new UsersIndexView({
          users: usersCollection
        });
        app.mainView.pageRender(usersView);
      });
    },

    show: function(id) {
      var user = new UserModel({
        id: id
      });

      user.fetch().done(function() {
        var userView = new UserShowView({
          user: user
        });
        app.mainView.pageRender(userView);
      });
    }
  };
};

module.exports = UsersController;

This controller loads the User model and collection, and uses them to display the user index and show screens. It instantiates a Backbone collection or model, depending on the route, fetches its data from the server, loads it into the screen (which we’ll get to momentarily), then shows that screen in the app’s mainView container.

Screens

At this point, we’ve accepted a request, routed it through a controller action, decided what kind of collection or model we are dealing with, and fetched the data from the server. We’re ready to render a Backbone view. In this case, it will do little more than pass the data on to the React component.

The Base View

Since there’s going to be a lot of repeated boilerplate in our Backbone views, it makes sense to abstract it out into a BaseView, which child views will extend from.

var React = require('react');
var Backbone = require('backbone');

var BaseView = Backbone.View.extend({
  initialize: function (options) {
    this.options = options || {};
  },

  component: function () {
    return null;
  },

  render: function () {
    React.renderComponent(this.component(), this.el);
    return this;
  }
});

module.exports = BaseView;

This base view sets any options passed in as properties on itself, and defines a render() method that renders whatever React component is defined in the component() method.

The Main View

In order to switch between screens without doing a page re-render, we’ll wrap all of our screens in an outer screen called the mainView. This view acts as a sort of “picture frame” for the other screens in the app, displaying, hiding, and cleaning them up.

As with all of our screens, it will consist of two parts: a Backbone view, defined in screens/main/index.js, and a React component, defined in screens/main/component.js.

Backbone View

var Backbone = require('backbone');
var BaseView = require('../../base-view');
var MainComponent = require('./component');

var MainView = BaseView.extend({
  component: function () {
    return new MainComponent({
      router: this.options.router
    });
  },

  pageRender: function (view) {
    this.$('#main-container').html(view.render().$el);
  }
});

module.exports = MainView;

Since we passed #app as the element for this view to attach to back in app.js, it will render itself there. Thinking through what render actually means, we know that it will call the code defined in the BaseView, which means it will render whatever’s returned by the component() function. In this case, it’s the React MainComponent. We’ll take a look at that in a moment.

The other special thing this view does is to render any subviews passed to pageRender in the #main-container element found within #app. As I said, it’s basically just a frame for whatever else is going to happen.

React Component

Now let’s take a look at that MainComponent. It’s a very simple React component that does nothing more than render the “container” into the DOM.

/** @jsx React.DOM */
var React = require('react');
var ReactBackbone = require('react.backbone');

var MainComponent = React.createBackboneClass({
  render: function () {
    return (
      <div>
        <div id="main-container"></div>
      </div>
    );
  }
});

module.exports = MainComponent;

That’s it: the whole main screen. Since it’s so simple, it makes a good introduction to how we can render components in this project.

Now let’s take a look at something a little more advanced.

User Show View

We’ll start by taking a look at how we might write a React component for a user show page.

Backbone View

First, we’ll define the UserShowView we referenced back in the UsersController. It should live at screens/users/show/index.js.

var BaseView = require('../../../base-view');
var UserScreen = require('./component');

var UserView = BaseView.extend({
  component: function () {
    return new UserScreen({
      user: this.options.user
    });
  }
});

module.exports = UserView;

That’s it. Mostly just boilerplate. In fact, pretty much all of our Backbone views will look like this. A simple extension of BaseView that defines a component() method. That method instantiates a React component and returns it to the render() method in the BaseView, which in turn is called by the mainView’s pageRender() method.

React Component

Now, let’s dig into the meat of user show screen: the UserScreen component. It will live at screens/users/show/component.js.

We’ll imagine that we can “like” users. We want to be able to increment a user’s likes attribute by clicking a button. Here’s how we’d write this component to handle that behavior.

/** @jsx React.DOM */
var React = require('react');
var Backbone = require('backbone');
var ReactBackbone = require('react.backbone');
var User = require('../../../models/user');

var UserShowScreen = React.createBackboneClass({
  mixins: [
     React.BackboneMixin('user', 'change'),
  ],

  getInitialState: function() {
    return {
      liked: false
    }
  },

  handleLike: function(e) {
    e.preventDefault();
    var currentLikes = this.props.user.get('likesCount');
    this.props.user.save({ likesCount: currentLikes + 1 });
  },

  render: function() {
    var user = this.props.user;
    var username = user.get('username');
    var avatar = user.get('avatar').url;
    var likesCount = user.get('likesCount');

    return (
      <div className="user-container">
        <h1>{username}'s Profile</h1>
        <img src={avatar} alt={username} />
        <p>{likesCount} likes</p>
        <button className="like-button" onClick={this.handleLike}>
          Like
        </button>
      </div>
    );
  }
});

module.exports = UserShowScreen;

You may have noticed that curious mixins property. What is that? react.backbone gives us some niceties here, since we’re calling React.createBackboneClass instead of React.createClass. Whenever the user prop that was passed into this component fires a change event, the component’s render() method will be called. For more information, take a look at the package on GitHub.

When we click that like button, we’re incrementing the likesCount attribute on the user, and saving it to the server with our save() call. When the result of that sync comes back, our view will automatically re-render, and the likes count indication will update! Pretty sweet!

Users Index Screen

Before we conclude this post, let’s take a look at one more case: the index screen. Here, we’ll see how using React can make it easier to render repetitive subcomponents.

Backbone View

The view for this screen will live at /screens/users/index/index.js, and look similar to the UserShowView.

var BaseView = require('../../../base-view');
var UsersIndexScreen = require('./component');

var UsersIndexView = BaseView.extend({
  component: function () {
    return new UsersIndexScreen({
      users: this.options.users
    });
  }
});

module.exports = UsersIndexView;

React Component

The UsersIndexScreen component will also be fairly similar to the UserShowScreen one, but with one key difference: since we’re going to be rendering the same DOM elements repeatedly, we can leverage subcomponents.

Here’s the main component, which lives at screens/users/index/component.js

/** @jsx React.DOM */
var React = require('react');
var ReactBackbone = require('react.backbone');
var UserBlock = require('./user-block');

var UsersIndexScreen = React.createBackboneClass({
  mixins: [
     React.BackboneMixin('users', 'change')
  ],

  render: function() {
    var userBlocks = this.props.users.map(function(user) {
      return <UserBlock user={user} />
    });

    return (
      <div className="users-container">
        <h1>Users</h1>
        {userBlocks}
      </div>
    );
  }
});

module.exports = UsersIndexScreen;

We’re just looping through the users that were passed into the component, and wrapping each one in a UserBlock React component. This component can be defined in a file that lives right alongside index.js and component.js.

/** @jsx React.DOM */
var React = require('react');
var Backbone = require('backbone');
var ReactBackbone = require('react.backbone');

var UserBlock = React.createBackboneClass({
  render: function () {
    var user = this.props.user;
    var username = user.get('username');
    var avatar = user.get('avatar').url;
    var link = '/users/' + user.get('id');

    return (
      <div className="user-block">
        <a href={link}>
          <h2>{username}</h2>
          <img src={avatar} alt={username} />
        </a>
      </div>
    );
  }
});

module.exports = UserBlock;

Voila! An index view at /users that shows all of our users’ beautiful faces and links to their show pages. It was pretty painless, thanks to React!

Wrapping Up

We’ve now traced the entire series of events that happens when someone loads up our application and requests a route. After going through the router and controller, the fetched data is injected through a Backbone view into a React component, which is then rendered by the app’s mainView.

We only barely scratched the surface of what React is capable of here. If you haven’t checked out the React component API docs, I highly suggest doing so. Once I began to fully harness the power it gave me, I found my projects’ view layers much cleaner. Plus, I get all of the React performance benefits for free!

I hope that this post has helped make it more obvious how to get started with integrating React into a Backbone app. To me, it always seemed like a good idea, but I didn’t know where to begin. Once I got a sense of the pattern, though, it became pretty easy to do.

P.S. Do you have a different pattern for using React in your Backbone app? Want to talk about using React in Ember or Angular? Leave us a note in the comments!

Understanding Rack Apps and Middleware

This post originally appeared on Engine Yard.

Introduction

Many of us web developers work at the highest levels of abstraction when we program. Sometimes it's easy to take things for granted. Especially when we're using Rails.

Have you ever dug into the internals of how the request/response cycle works in Rails? I recently realized that I knew almost nothing about how Rack or middlewares work, so I spent a little time finding out. In this post, I’ll share what I learned.

What’s Rack?

Did you know that Rails is a Rack app? Sinatra too. What is Rack? I’m glad you asked. Rack is a Ruby package that provides a minimal, easy-to-use interface between webservers and Ruby web applications.

It’s possible to quickly build simple web applications using just Rack.

To get started, all you need is an object that responds to a call method, taking in an environment hash and returning an Array with the HTTP response code, headers, and response body. Once you’ve written the server code, all you have to do is boot it up with a Ruby server like Rack::Handler::WEBrick, or put it into a config.ru file and run it from the command line with rackup config.ru.
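For example, here's a minimal sketch of such an object in a config.ru (the HelloApp class name is just for illustration):

# config.ru -- a bare-bones Rack application
class HelloApp
  # env is the environment hash; the return value is [status, headers, body]
  def call(env)
    ['200', {'Content-Type' => 'text/plain'}, ['Hello from Rack!']]
  end
end

run HelloApp.new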

Ok, cool. So what does Rack actually do?

How Rack Works

Rack is really just a way for a developer to create a server application while avoiding the boilerplate code that would otherwise be required to handle raw HTTP requests and responses. If you’ve written some code that meets the Rack specifications, you can load it up in a Ruby server like WEBrick, Mongrel, or Thin, and you’re ready to accept requests and respond to them.

There are a few methods you should know about that are provided for you. You can call these directly from within your config.ru file.

run Takes an application (the object that responds to call) as an argument. The following code from the Rack website demonstrates how this looks:

run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['get rack\'d']] }

map Takes a string specifying the path to be handled, and a block containing the Rack application code to be run when a request with that path is received. Here’s an example:

map '/posts' do
  run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['first_post', 'second_post', 'third_post']] }
end

use Tells Rack to use certain middleware.

So what else do you need to know? Let’s take a closer look at the environment hash and the response Array.

The Environment Hash

Your Rack server object takes in an environment hash. What’s contained in that hash? Here are a few of the more interesting parts:

  • REQUEST_METHOD: The HTTP verb of the request. This is required.
  • PATH_INFO: The request URL path, relative to the root of the application.
  • QUERY_STRING: Anything that followed ? in the request URL string.
  • SERVER_NAME and SERVER_PORT: The server’s address and port.
  • rack.version: The rack version in use.
  • rack.url_scheme: is it http or https?
  • rack.input: an IO-like object that contains the raw HTTP POST data.
  • rack.errors: an object that responds to puts, write, and flush.
  • rack.session: A key value store for storing request session data.
  • rack.logger: An object that can be used for logging. It should implement info, debug, warn, error, and fatal methods.

A lot of frameworks built on Rack wrap the env hash in a Rack::Request object. This object provides a lot of convenience methods. For example, request_method, query_string, session, and logger return the values from the keys described above. It also lets you check out things like the params, HTTP scheme, or whether you’re using ssl?. For a complete listing of methods, I would suggest digging through the source.
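For instance, here's a sketch of a tiny app that wraps env in a Rack::Request to read the path and query params:

run Proc.new { |env|
  request = Rack::Request.new(env)
  body = "You asked for #{request.path_info} with params #{request.params.inspect}"
  ['200', {'Content-Type' => 'text/plain'}, [body]]
}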

The Response

When your Rack server object returns a response, it must contain three parts: the status, headers, and body. As there was for the request, there is a Rack::Response object that gives you convenience methods like write, set_cookie, finish, and more. Alternately, you can just return an array containing the three components.

Status

An HTTP status, like 200 or 404.

Headers

Something that responds to each, and yields key-value pairs. The keys have to be strings and conform to the RFC7230 token specification. Here’s where you can set Content-Type and Content-Length if it’s appropriate for your response.

Body

The body is the data that the server sends back to the requester. It has to respond to each, and yield string values.
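To tie the three parts together, here's a sketch that builds the same kind of response we saw earlier, using Rack::Response instead of a bare array:

run Proc.new { |env|
  response = Rack::Response.new
  response.status = 200
  response['Content-Type'] = 'text/html'
  response.write("get rack'd")
  response.finish # returns the [status, headers, body] triplet
}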

All Racked Up!

Now that we’ve created a Rack app, how can we customize it to make it actually useful? The first step is to consider adding some middleware.

What is Middleware?

One of the things that makes Rack so great is how easy it is to add a chain of middleware components between the webserver and the app to customize the way your requests and responses behave. But what is a middleware component?

A middleware component sits between the client and the server, processing inbound requests and outbound responses. Why would you want to do that? There are tons of middleware components available for Rack that take the guesswork out of common problems like caching, authentication, and trapping spam.
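To make that concrete, here's a sketch of a trivial middleware component that stamps every response with a timing header (the ResponseTimer name is just for illustration):

class ResponseTimer
  def initialize(app)
    @app = app # the next middleware (or the app itself) in the stack
  end

  def call(env)
    started_at = Time.now
    status, headers, body = @app.call(env)
    headers['X-Response-Time'] = "#{((Time.now - started_at) * 1000).round}ms"
    [status, headers, body]
  end
end

You would load it into the stack with use ResponseTimer, as described below.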

Using Middleware in a Rack App

To add middleware to a Rack application, all you have to do is tell Rack to use it. You can use multiple middleware components, and they will change the request or response before passing it on to the next component. This series of components is called the middleware stack.

Warden

We’re going to take a look at how you would add Warden to a project. Warden has to come after some kind of session middleware in the stack, so we’ll use Rack::Session::Cookie as well.

First, add it to your project Gemfile with gem "warden" and install it with bundle install.

Now add it to your config.ru file:

require "warden"

use Rack::Session::Cookie, secret: "MY_SECRET"

failure_app = Proc.new { |env| ['401', {'Content-Type' => 'text/html'}, ["UNAUTHORIZED"]] }

use Warden::Manager do |manager|
  manager.default_strategies :password, :basic
  manager.failure_app = failure_app
end

run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['get rack\'d']] }

Finally, run the server with rackup. It will find config.ru and boot up on port 9292.

Note that there is more setup involved in getting Warden to actually do authentication with your app. This is just an example of how to get it loaded into the middleware stack. To see a more fleshed-out example of integrating Warden, check out this gist.

By the way, there’s another way to define the middleware stack. Instead of calling use directly in config.ru, you can use Rack::Builder to wrap several middlewares and app(s) in one big application. For example:

failure_app = Proc.new { |env| ['401', {'Content-Type' => 'text/html'}, ["UNAUTHORIZED"]] }

app = Rack::Builder.new do
  use Rack::Session::Cookie, secret: "MY_SECRET"

  use Warden::Manager do |manager|
    manager.default_strategies :password, :basic
    manager.failure_app = failure_app
  end

  run Proc.new { |env| ['200', {'Content-Type' => 'text/html'}, ['get rack\'d']] }
end

run app

Rack Basic Auth

One really useful piece of middleware is Rack::Auth::Basic, which you can use to protect any Rack app with HTTP basic authentication. It is really lightweight and comes in handy for protecting little bits of an application. For example, Ryan Bates uses it to protect a Resque server in a Rails app in this episode of Railscasts.

Here’s how to set it up:

use Rack::Auth::Basic, "Restricted Area" do |username, password|
  [username, password] == ['admin', 'abc123']
end

That was easy!

Using Middleware in Rails

So what? Rack is pretty cool, and we know that Rails is built on it. But just understanding what it is doesn’t, by itself, make it useful when we’re working with a production app.

How Rails Uses Rack

Did you ever notice that there’s a config.ru file in the root of every generated Rails project? Have you ever taken a look inside? Here’s what it contains:

# This file is used by Rack-based servers to start the application.

require ::File.expand_path('../config/environment', __FILE__)
run Rails.application

Pretty simple. It just loads up the config/environment file, then boots up Rails.application. Wait, what’s that? Taking a look in config/environment, we can see that it’s defined in config/application.rb. config/environment is just calling initialize! on it.
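For reference, here's roughly what the generated config/environment.rb looks like in a Rails 4-era app:

# Load the Rails application.
require File.expand_path('../application', __FILE__)

# Initialize the Rails application.
Rails.application.initialize!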

So what’s in config/application.rb? If we take a look, we see that it loads in the bundled gems from config/boot.rb, requires rails/all, loads up the environment (test, development, production, etc.), and defines a namespaced version of our application. It looks something like this:

module MyApplication
  class Application < Rails::Application
    ...
  end
end

So I guess that means that Rails::Application must be a Rack app? Sure enough! If we check out the source code, it responds to call!

So what middleware is it using? Well, I see that it’s autoloading rails/application/default_middleware_stack. Checking that out, it looks like it’s defined in ActionDispatch. Where does ActionDispatch come from? ActionPack.

Action Dispatch

Action Pack is Rails’s framework for handling web requests and responses. It is home to quite a few of the niceties you find in Rails, such as routing, the abstract controllers that you inherit from, and view rendering.

The most relevant part of AP for our discussion here is Action Dispatch. It provides several middleware components that deal with ssl, cookies, debugging, static files, and much more.

If you go take a look at each of the Action Dispatch middleware components, you’ll notice they’re all following the Rack specification: they all respond to call, taking in an app and returning status, headers, and body. Many of them also make use of Rack::Request and Rack::Response objects.

For me, reading through the code in these components took a lot of the mystery out of what’s going on behind the scenes when making requests to a Rails app. When I realized that it’s just a bunch of Ruby objects that follow the Rack specification, passing the request and response to each other, it made this whole section of Rails a lot less mysterious.

Now that we understand a little bit of what’s happening under the hood, let’s take a look at how to actually include some custom middleware in a Rails app.

Adding Your Own Middleware

Imagine you are hosting an application on Engine Yard. You have a Rails API running on one server, and a client-side JavaScript app running on another. The API has a url of https://api.myawesomeapp.com, and the client-side app lives at https://app.myawesomeapp.com.

You’re going to run into a problem pretty quick: you can’t access resources at api.myawesomeapp.com from your JS app, because of the same-origin policy. As you may know, the solution to this problem is to enable Cross-origin resource sharing (CORS). There are many ways to enable CORS on your server, but one of the easiest is to use the Rack::Cors middleware gem.

Begin by requiring it in the Gemfile:

gem "rack-cors", require: "rack/cors"

As with so many things, Rails provides a very easy way to get middleware loaded. Although we certainly could add it to a Rack::Builder block in config.ru, as we did above, the Rails convention is to place it in config/application.rb, using the following syntax:

module MyAwesomeApp
  class Application < Rails::Application
    config.middleware.insert_before 0, "Rack::Cors" do
      allow do
        origins '*'
        resource '*',
        :headers => :any,
        :expose => ['X-User-Authentication-Token', 'X-User-Id'],
        :methods => [:get, :post, :options, :patch, :delete]
      end
    end
  end
end

Note that we’re using insert_before here to ensure that Rack::Cors comes before the rest of the middleware included in the stack by ActionPack (and any other middleware you might be using).

Now if you reboot the server, you should be good to go! Your client-side app can access api.myawesomeapp.com without running into same-origin policy JS errors.

If you want to learn more about how HTTP requests are routed through Rack in Rails, I’d suggest taking a look at this tour of the Rails source code that deals with handling requests.

Conclusion

In this post, we’ve taken an in-depth look at the internals of Rack, and by extension, the request/response cycle for several Ruby web frameworks, including Ruby on Rails.

Hopefully, understanding what’s going on when a request hits your server and your application sends back a response helps make things feel a little less magical. I don’t know about you, but I have a much harder time troubleshooting when there’s magic involved than when I understand what’s going on. With that understanding, I can say “oh, it’s just a Rack response”, and get down to fixing the bug.

If I’ve done my job, reading this article will enable you to do the same thing.

P.S. Do you know of any use-cases where a simple Rack app was enough to meet your business needs? What other ways do you integrate Rack apps in your bigger applications? We want to hear your battle stories! Leave us a comment!

Deploying and Customizing Applications on Engine Yard

This post originally appeared on Engine Yard.

Introduction

I’ve tried a lot of different Platform as a Service (PaaS) providers for hosting my applications. Some of them make it super-easy to get everything running on the server, but the magic gets in the way when you need to customize things.

Some platforms give you full control, but it can be time-consuming to get all of your dependencies properly set up. It would be nice to have some of the boilerplate taken care of, while still retaining full control of my server environment.

If you haven’t deployed an application to Engine Yard (EY), you should give it a try. You’ll be pleasantly surprised to find that this is exactly the kind of service offered. It’s a breeze to get Redis, cron, and any other tools you need installed. You also get root access to your server and can SSH in just as you would with a bare server.

My favorite feature has always been the ability to push custom Chef recipes to my server, making it super-easy to tweak the server as needed without having to spend a lot of time downloading Ubuntu packages and managing user permissions.

There is a little bit of a learning curve though, so I decided to deploy and customise a new app on Engine Yard so that I could document the process and help first-timers get up and running with a minimum of fuss.

Setting Up An Engine Yard Environment

I started out with a production application in a Git repository that was ready to go. It just needed a server to run on.

I went to Engine Yard and signed up for a free trial account.

Once I was done filling out my contact and billing information, I created my first “application” resource on Engine Yard Cloud. Here are the steps I followed to get it running from there.

First, I created a “production” environment for my application and configured it. I was given four choices of server beefiness: single instance, staging, production, or custom. I went with a production box, and added Phusion Passenger and PostgreSQL to the stack. Since I was deploying a Rails app, I also added Ruby 2.2.0 and set up my migration command. I was happy to see that EY would back up my database and take a server snapshot on a recurring schedule. I opted in for that service.

While the server was being provisioned, there were a few access-related tasks I had to take care of as well. First, I added the SSH keys from my development machine to my production environment. To do so, I visited the EY Cloud dashboard, then clicked on Tools, then SSH Keys and pasted my key into the text area, then hit the big Apply button on my app’s “production” environment page.

I also had to add an SSH key EY provided to my GitHub account. This allowed EY to grab my code and push it to the server directly.

A few minutes later, the server and my credentials were all set up, and I was ready to deploy. Next, I pressed Deploy. Unfortunately, there was a problem with my deploy, so I decided to dig into it from the command line…

Using the engineyard Gem

Configuring and Deploying

It turned out I’d forgotten to add a config/ey.yml file to my Rails project. This file is used to customize each of the Engine Yard environments the app is being deployed to. To add one, it’s easiest to use the engineyard gem.

To install the gem globally, I ran gem install engineyard on my local machine. Then I initialized an EY configuration file using ey init. I checked out the config/ey.yml file it generated. Everything looked good, so I committed and pushed it up to GitHub.

This time, I deployed using ey deploy, and it worked like a charm. Success!

Logging In and Out

  • ey login
  • ey logout
  • ey whoami

Custom Deploys

  • ey status shows the status of your most recent deploy
  • ey timeout-deploy marks the current deploy as failed and begins a new deploy
  • ey rollback reverts to a previous deployment

Environments

  • ey environments shows the environments for this app (pass --all to see all environments for all apps)
  • ey servers shows all the servers for an environment (if you have multiple)
  • ey rebuild reruns the configuration bootstrap process, useful for security patches and upgrades
  • ey restart restarts the servers

Debugging

  • ey logs shows the logs
  • ey web disable/enable toggles a maintenance page
  • ey ssh lets you SSH in
  • ey launch launches app in a browser window

Customizing The Server Environment

  • ey recipes upload uploads Chef recipes from your dev machine to the remote server
  • ey recipes download syncs Chef recipes from the remote server to your dev machine
  • ey recipes apply triggers a Chef run

These last few commands come in really handy when you want to customize your server setup. Let’s take a deeper look at how to upload custom Chef recipes to an application environment.

Chef Recipes

Engine Yard uses Chef under the hood to make your deploys quick and easy. There’s a default set of recipes that get run every time you deploy.

After the default recipes run, Engine Yard runs any custom recipes that you’ve added to your environment. Since there are so many Chef recipes available, getting dependencies set up is pretty straightforward.

Your Chef recipes will run whenever you create a new instance, add an instance to a cluster, run ey recipes apply, or trigger a Chef run from the Cloud Dashboard with the Upgrade or Apply buttons.

Getting Set Up

To add recipes to your application, you’ll need to fork the Engine Yard Cloud Recipes repo. Then, clone your fork down to your development machine, in a different directory than your application.

Default Recipes

The Engine Yard Cloud Recipes repo comes with cookbooks for most of the things you would ever need: Sidekiq, Redis, Solr, Elasticsearch, cron, PostgreSQL Extensions, and much more.

Here’s what I did to add Redis to my project.

1) Opened /cookbooks and found the subdirectory I wanted (/redis)
2) Uncommented include_recipe redis in cookbooks/main/recipes/default.rb
3) Saved the file, committed it, and pushed to my forked repo
4) Uploaded the recipes to my app with ey recipes upload -e production
5) Applied the recipes to my app with ey recipes apply -e production

I took a look at my Engine Yard dashboard, and a few short moments later, Redis was running on my server!

Custom Recipes

I wanted to add HTTP Basic Auth to my server, but it wasn’t one of the recipes in the repo, so I wrote my own recipe for it.

Here’s how I did it.

First, I opened up my ey-cloud-recipes fork repo and ran rake new_cookbook COOKBOOK=httpauth. This generated a bunch of files under cookbooks/httpauth/. Then I edited cookbooks/httpauth/recipes/default.rb like this:

sysadmins = search(:users, 'groups:sysadmin')

template "/etc/nginx/htpasswd.users" do
  source "nginx/htpasswd.users.erb"
  owner node['staging']['nginx']['user']
  group node['staging']['nginx']['user']
  mode "0640"
  variables(
    :sysadmins => sysadmins
  )
  notifies :restart, "service[nginx]", :delayed
end

With the httpauth recipe written, I next created an htpasswd.users.erb template under the cookbooks/httpauth/templates/default/nginx directory, and put this code in it:

<% @sysadmins.each do |sa| -%>
  <%= sa["id"] %>:<%= sa["htpasswd"] %>
<% end -%>

With the template in place, I added the recipe to cookbooks/main/recipes/default.rb (my main cookbook) by adding this line:

include_recipe "httpauth"

Finally, I checked my syntax with rake test (all good), committed my changes, and pushed to my fork. With the recipe ready, all that was left was to upload and apply it to my application with the following:

ey recipes upload -e production
ey recipes apply -e production

The recipe was successfully added to my server in the /etc/chef-custom directory. I know this because I logged in and took a look around.

How did I do that? I’m glad you asked.

Remote Access with SSH

If you ever need to confirm that your Chef recipes are configuring the server the way you expected, or need to access your server directly with root access for any other reason, you can use SSH to get a remote terminal.

There are three ways to do this:

1) Run ssh username@123.123.123.123, the old-fashioned way (you can find your server’s IP address in the Engine Yard dashboard)
2) Click on the SSH link in your application dashboard on EY
3) Run ey ssh from the application directory on your dev machine

When you login to your server, some helpful information about your app’s server environment is displayed:

Applications:
myapplication:
cd /data/myapplication/current # go to the application root folder
tail -f /data/myapplication/current/log/production.log # production logs
cat /data/nginx/servers/myapplication.conf # current nginx conf

SQL database:
cd /db # your attached DB volume

PostgreSQL:
tail -f /db/postgresql/$1.$2/data/pg_log/* # logs
pg_top -dmyapplication

Inspect node data passed to chef cookbooks:
sudo gem install jazor
sudo jazor /etc/chef/dna.json
sudo jazor /etc/chef/dna.json 'applications.map {|app, data| [app, data.keys]}'

Pretty cool. It’s nice to have access like this if you need it, without being responsible for configuring (and re-configuring) the entire machine by hand.

Conclusion

If you’re anything like me, you drag your feet about trying new things when it comes to sysops. Perhaps you’ve felt the pain of trying to take a server from vanilla Ubuntu to a custom build with cron, Redis, Elasticsearch and a bunch of other packages—carefully balancing everything so that it doesn’t fall apart. Many of us have also experienced getting stuck when using a full-service PaaS that isn’t working the way we expect, and not being able to customise things. Experiences like this make it hard for me to get excited about setting up servers, so I typically avoid it when I can.

That said, Engine Yard makes this sort of work a breeze. Their balance between automation and control gives you the best of both worlds. Getting up and running takes a little bit of learning, but the docs are super helpful and the support team is very responsive if you ever have any questions or need a hand.

If you haven’t given Engine Yard a try, why not give it a go?

P.S. What kind of custom Chef recipes are you using on your servers? I know that business requirements can lead to some pretty gnarly setups. Tell me about it via the comments.

Serving Custom JSON From Your Rails API With ActiveModel::Serializers

This post originally appeared on Engine Yard.

Introduction

These days, there are so many different choices when it comes to serving data from an API. You can build it in Node with ExpressJS, in Go with Martini, Clojure with Compojure, and many more. But in many cases, you just want to bring something to market as fast as you can. For those times, I still reach for Ruby on Rails.

With Rails, you can spin up a functional API server in a very short period of time. Perhaps you object that Rails is large, or that there’s “too much magic”. Have you ever checked out the rails-api gem? It lets you enjoy all the benefits of Rails without including unnecessary view-layer and asset-related code.

Rails-api is maintained by Rails Core team members Carlos Antonio Da Silva and Santiago Pastorino, along with all-around great Rubyist Steve Klabnik. When not busy working on Rails or the rails-api gem, they found the time to put together the active_model_serializers gem to make it easier to format JSON responses when using Rails as an API server.

ActiveModel::Serializers (AMS) is a powerful alternative to jbuilder, rabl, and other Ruby templating solutions. It’s easy to get started with, but when you want to serve data that doesn’t quite match up with the way ActiveRecord (AR) structures things, it can be hard to figure out how to get it to do what you want.

In this post, we’ll take a look at how to extend AMS to serve up custom data in the context of a Rails-based chat app.

Kicking It Off: Setting Up a Rails Server

Let’s imagine we are building a chat app, similar to Apple’s Messages. People can sign up for the service, then chat with their friends. Any two users in the system can have a continuous thread that goes back and forth.

Most of the presentation logic will happen in a client-side JavaScript app. For now, we’re only concerned with accepting and returning raw data, and we’ve decided to use a Rails server to build it.

To get started, we’ll run a rails new, but since we’re using the rails-api gem, we’ll need to make sure we have it installed first, with:

gem install rails-api

Once that’s done, we’ll run the following (familiar) command to start the project:

rails-api new mensajes --database=postgresql

cd into the directory and set up the database with:

rake db:create

Creating the Models

We’ll need a couple of models: Users and Messages. The workflow to create them should be fairly familiar:

rails g scaffold user username:string
rails g scaffold message sender_id:integer recipient_id:integer body:text

Open up the migrations, and set everything to null: false, then run rake db:migrate.
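As a rough sketch, the messages migration might end up looking something like this after that change:

class CreateMessages < ActiveRecord::Migration
  def change
    create_table :messages do |t|
      t.integer :sender_id, null: false
      t.integer :recipient_id, null: false
      t.text :body, null: false

      t.timestamps null: false
    end
  end
end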

We’ll also need to set up the relationships. Be sure to test these relationships (I would suggest using the shoulda gem to make it easy on yourself); there’s a sketch of such tests after the models below.

class User < ActiveRecord::Base
  has_many :sent_messages, class_name: "Message", foreign_key: "sender_id"
  has_many :received_messages, class_name: "Message", foreign_key: "recipient_id"
end
class Message < ActiveRecord::Base
  belongs_to :recipient, class_name: "User", inverse_of: :received_messages
  belongs_to :sender, class_name: "User", inverse_of: :sent_messages
end
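Here's a sketch of what those relationship tests might look like, assuming you're using RSpec with the shoulda-matchers gem:

# spec/models/user_spec.rb
require "rails_helper"

RSpec.describe User, type: :model do
  it { should have_many(:sent_messages).class_name("Message") }
  it { should have_many(:received_messages).class_name("Message") }
end

# spec/models/message_spec.rb
require "rails_helper"

RSpec.describe Message, type: :model do
  it { should belong_to(:sender).class_name("User") }
  it { should belong_to(:recipient).class_name("User") }
end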

Serving the Messages

Let’s send some messages! Imagine for a minute that you’ve already set up some kind of token-based authentication system, and you have some way of getting ahold of the user that is making requests to your API.

We can open up the MessagesController, and since we used a scaffold, we should already be able to view all the messages. Let’s scope that to the current user. First we’ll write a convenience method to get all the sent and received messages for a user, then we’ll rework the MessagesController to work the way we want it to.

class User < ActiveRecord::Base
  ...
  def messages
    Message.where("sender_id = ? OR recipient_id = ?", self.id, self.id)
  end
end
class MessagesController < ApplicationController
  def index
    @messages = current_user.messages
    render json: @messages
  end
end

Assuming that we have created a couple of sent and received messages for the current_user, we should be able to take a look at http://localhost:3000/messages and see some raw JSON that looks like this:

[{"sender_id":1,"id":1,"recipient_id":2,"body":"YOLO","created_at":"2015-02-03T21:05:12.908Z","updated_at":"2015-02-03T21:05:12.908Z"},{"recipient_id":1,"id":2,"sender_id":2,"body":"Hello, world!","created_at":"2015-02-03T21:05:51.309Z","updated_at":"2015-02-03T21:05:51.309Z"}]

It’s kind of ugly. It would be nice if we could remove the timestamps and ids. This is where AMS comes in.

Adding ActiveModel::Serializers

Once we add AMS to our project, it should be easy to get a much prettier JSON format back from our MessagesController.

To get AMS, add it to the Gemfile with:

gem "active_model_serializers", github: "rails-api/active_model_serializers"

Then bundle install. Note that I’m using the edge version of AMS here because it supports belongs_to and other features. See the GitHub project README for some information about maintenance and why you might want to use an older version.

Now we can easily set up a serializer with rails g serializer message. Let’s take a look at what this generated for us. In app/serializers/message_serializer.rb, we find this code:

class MessageSerializer < ActiveModel::Serializer
  attributes :id
end

Whichever attributes we specify (as a list of symbols) will be returned in the JSON response. Let’s skip id, and instead return the sender_id, recipient_id, and body:

class MessageSerializer < ActiveModel::Serializer
  attributes :sender_id, :recipient_id, :body
end

Now when we visit /messages, we get this slightly cleaner JSON:

{"messages":[{"sender_id":1,"recipient_id":2,"body":"YOLO"},{"sender_id":2,"recipient_id":1,"body":"Hello, world!"}]}

Cleaning Up the Format

It sure would be nice if we could get more information about the other user, like their username, so that we could display it in the messaging UI on the client side. That’s easy enough: we just change the MessageSerializer to use AR objects as attributes for the sender and recipient, instead of ids.

class MessageSerializer < ActiveModel::Serializer
  attributes :sender, :recipient, :body
end

Now we can see more about the Sender and Recipient:

{"messages":[{"sender":{"id":1,"username":"Ben","created_at":"2015-02-03T21:04:09.220Z","updated_at":"2015-02-03T21:04:09.220Z"},"recipient":{"id":2,"username":"David","created_at":"2015-02-03T21:04:45.948Z","updated_at":"2015-02-03T21:04:45.948Z"},"body":"YOLO"},{"sender":{"id":2,"username":"David","created_at":"2015-02-03T21:04:45.948Z","updated_at":"2015-02-03T21:04:45.948Z"},"recipient":{"id":1,"username":"Ben","created_at":"2015-02-03T21:04:09.220Z","updated_at":"2015-02-03T21:04:09.220Z"},"body":"Hello, world!"}]}

Actually, that might be too much. Let’s clean up how User objects are serialized by generating a User serializer with rails g serializer user. We’ll set it up to just return the username.

class UserSerializer < ActiveModel::Serializer
  attributes :username
end

In the MessageSerializer, we’ll use belongs_to to have AMS format our sender and recipient using the UserSerializer:

class MessageSerializer < ActiveModel::Serializer
  attributes :body
  belongs_to :sender
  belongs_to :recipient
end

If we take a look at /messages, we now see:

[{"recipient":{"username":"David"},"body":"YOLO","sender":{"username":"Ben"}},{"recipient":{"username":"Ben"},"body":"Hello, world!","sender":{"username":"David"}}]

Things are really starting to come together!

Conversations

Although we can view all of a user’s messages using the index controller action, or a specific message at the show action, there’s something important to the business logic of our app that we can’t do. We can’t view all of the messages sent between two users. We need some concept of a conversation.

When thinking about creating a conversation, we have to ask, does this model need to be stored in the database? I think the answer is no. We already have messages that know which users they belong to. All we really need is a way to get back all the messages between two users from one endpoint.

We can use a Plain Old Ruby Object (PORO) to create this concept of a conversation model. We will not inherit from ActiveRecord::Base in this case.

Since we already know about the current_user, we really only need it to keep track of the other user. We’ll call her the participant.

# app/models/conversation.rb
class Conversation
  attr_reader :participant, :messages

  def initialize(attributes)
    @participant = attributes[:participant]
    @messages = attributes[:messages]
  end
end

We’ll want to be able to serve up these conversations, so we’ll need a ConversationsController. We want to get all of the conversations for a given user, so we’ll add a class-level method to the Conversation model to find them and return them in this format:

# TODO: Insert JSON blob here

To make this work, we’ll run a group_by on the user’s messages, grouping by the other user’s id. We’ll then map the resulting hash into a collection of Conversation objects, passing in the other user and the list of messages.

class Conversation
  ...
  def self.for_user(user)
    user.messages.group_by { |message|
      if message.sender == user
        message.recipient_id
      else
        message.sender_id
      end
    }.map do |user_id, messages|
      Conversation.new({
        participant: User.find(user_id),
        messages: messages
      })
    end
  end
end

If we run this in the Rails Console, it seems to be working.

>Conversation.for_user(User.first)
...
=> [#<Conversation:0x007fbd6e5b9428 @participant=#<User id: 2, username: "David", created_at: "2015-02-03 21:04:45", updated_at: "2015-02-03 21:04:45">, @messages=[#<Message id: 1, sender_id: 1, recipient_id: 2, body: "YOLO", created_at: "2015-02-03 21:05:12", updated_at: "2015-02-03 21:05:12">, #<Message id: 2, sender_id: 2, recipient_id: 1, body: "Hello, world!", created_at: "2015-02-03 21:05:51", updated_at: "2015-02-03 21:05:51">]>]

Great! We’ll just call this method in our ConversationsController and everything will be great!

First, we’ll define the route in config/routes.rb:

Rails.application.routes.draw do
  ...
  resources :conversations, only: [:index]
end

Then, we’ll write the controller action.

# app/controllers/conversations_controller.rb

class ConversationsController < ApplicationController
  def index
    conversations = Conversation.for_user(current_user)
    render json: conversations
  end
end

Visiting /conversations, we should see a list of all the conversations for the current user.

Serializing Plain Old Ruby Objects

Whoops! When we visit that route, we get an error: undefined method `new' for nil:NilClass. It’s coming from this line in the controller:

render json: conversations

It looks like the error is coming from the fact that we don’t have a serializer. Let’s make one with rails g serializer conversation. We’ll edit it to return its attributes, participant and messages.

class ConversationSerializer < ActiveModel::Serializer
  attributes :participant, :messages
end

Now when we try, we get another error, coming from the same line of the controller: undefined method 'read_attribute_for_serialization' for #<Conversation:0x007ffc9c1bed10>

Digging around in the source code for ActiveModel::Serializers, I couldn’t find where that method was defined. So I took a look at ActiveModel itself, and found it here. It turns out that it’s just an alias for send!

We can add that into our PORO easily enough:

class Conversation
  alias :read_attribute_for_serialization :send
  ...
end

Or, we could include ActiveModel::Serialization which is where our AR-backed objects got it.
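That alternative looks like this:

class Conversation
  include ActiveModel::Serialization
  ...
end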

Now when we take a look at /conversations, we get:

[{"participant":{"id":2,"username":"David","created_at":"2015-02-03T21:04:45.948Z","updated_at":"2015-02-03T21:04:45.948Z"},"messages":[{"sender_id":1,"recipient_id":2,"id":1,"body":"YOLO","created_at":"2015-02-03T21:05:12.908Z","updated_at":"2015-02-03T21:05:12.908Z"},{"sender_id":2,"id":2,"recipient_id":1,"body":"Hello, world!","created_at":"2015-02-03T21:05:51.309Z","updated_at":"2015-02-03T21:05:51.309Z"}]}]

Whoops. Not quite right. But the problem is similar to the one we had before in the MessageSerializer. Maybe the same approach will work. We’ll change the attributes to AR relationships.

class ConversationSerializer < ActiveModel::Serializer
  has_many :messages, class_name: "Message"
  belongs_to :participant, class_name: "User"
end

Almost! Now /conversations returns:

[{"messages":[{"body":"YOLO"},{"body":"Hello, world!"}],"participant":{"username":"David"}}]

We can’t see who the sender of each message was! AMS isn’t using the UserSerializer for the message sender and recipient, because we’re not using an AR object.

A little source code spelunking pointed the way to a fix.

class MessageSerializer < ActiveModel::Serializer
  attributes :body, :recipient, :sender

  def sender
    UserSerializer.new(object.sender).attributes
  end

  def recipient
    UserSerializer.new(object.recipient).attributes
  end
end

Now /conversations gives us what we want:

[{"messages":[{"body":"YOLO","recipient":{"username":"David"},"sender":{"username":"Ben"}},{"body":"Hello, world!","recipient":{"username":"Ben"},"sender":{"username":"David"}}],"participant":{"username":"David"}}]

And /messages still works as well!

Wrapping Up

The ActiveModel::Serializers gem claims to bring “convention over configuration to your JSON generation”. It does a great job of it, but when you need to massage the data, things can get a little bit hairy.

Hopefully some of the tricks we’ve covered will help you present JSON from your Rails API the way you want. For this, and virtually any other problem caused by the magic getting in the way, I can’t suggest digging through the source code enough.

At the end of the day, AMS is an excellent choice for getting your JSON API off the ground with a minimum of fuss. Good luck!

P.S. Have a different approach? Prefer rabl or jbuilder? Did I leave something out? Leave us a comment below!

Getting Started With Ruby Processing

This post was a featured article in Ruby Weekly #234.

It was originally published on Engine Yard.

Introduction

If you’re like me, you love to code because it is a creative process. In another life, I am a musician.

I’ve always loved music because it represents a synthesis of the measurable concreteness of math and the ambiguity of language. Programming is the same way.

But despite the creative potential of programming, I often find myself spending my days working out the kinks of HTTP requests or dealing with SSL certificates. Some part of me yearns for a purely Apollonian environment in which to use code to make something new and unseen.

When I feel a void for purely creative coding, I turn to the Processing language. Processing is a simple language, based on Java, that you can use to create digital graphics. It’s easy to learn, fun to use, and has an amazing online community comprised of programmers, visual artists, musicians, and interdisciplinary artists of all kinds.

In 2009, Jeremy Ashkenas, creator of Backbone.JS, Underscore.JS, and CoffeeScript, published the ruby-processing gem. It wraps Processing in a “thin little shim” that makes it even easier to get started as a Ruby developer. In this post, we’ll take a look at how you can create your first interactive digital art project in just a few minutes.

What Is Processing?

Processing is a programming language and IDE built by Casey Reas and Benjamin Fry, two protégés of interdisciplinary digital art guru John Maeda at the MIT Media Lab.

Since the project began in 2001, it’s been helping teach people to program in a visual art context using a simplified version of Java. It comes packaged as an IDE that can be downloaded and used to create and save sketches.

Why Ruby Processing?

Since Processing already comes wrapped in an easy-to-use package, you may ask: “why should I bother with Ruby Processing?”

The answer: if you know how to write Ruby, you can use Processing as a visual interface to a much more complex program. Games, interactive art exhibits, innovative music projects, anything you can imagine; it’s all at your fingertips.

Additionally, you don’t have to declare types, voids, or understand the differences between floats and ints to get started.

Although there are some drawbacks to using Ruby Processing, most notably slower performance, having Ruby’s API available to translate your ideas into sketches more than makes up for it.

Setup

When getting started with Ruby Processing for the first time, it can be a little bit overwhelming to get all of the dependencies set up correctly. The gem relies on JRuby, Processing, and a handful of other things. Here’s how to get them all installed and working.

I’ll assume you already have the following installed: homebrew, wget, java, and a ruby manager such as rvm, rbenv or chruby.

Processing

Download Processing from the official website and install it.

When you’re done, make sure that the resulting app is located in your /Applications directory.

JRuby

Although it’s possible to run Ruby Processing on the MRI, I highly suggest using JRuby. It works much better, since Processing itself is built on Java.

Install the latest JRuby version (1.7.18 at the time of this writing). For example, if you’re using rbenv, the command would be rbenv install jruby-1.7.18, followed by rbenv global jruby-1.7.18 to set your current ruby to JRuby.

Ruby Processing

Install the ruby-processing gem globally with gem install ruby-processing. If you’re using rbenv, don’t forget to run rbenv rehash.

JRuby Complete

You’ll need the jruby-complete Java jar. Fortunately, there are a couple of built-in Ruby Processing commands that make it easy to install. rp5 is the Ruby Processing command. It can be used to do many things, one of which is to install jruby-complete using wget. To do so, run:

rp5 setup install

Once it’s complete, you can use rp5 setup check to make sure everything worked.

Setup Processing Root

One final step. You’ll need to set the root of your Processing app. This one-liner should take care of it for you:

echo 'PROCESSING_ROOT: /Applications/Processing.app/Contents/Java' >> ~/.rp5rc

Ready To Go

Now that we have everything installed and ready to go, we can start creating our first piece of art!

Making Your First Sketch

There are two basic parts to a Processing program: setup and draw.

The code in setup runs one time, to get everything ready to go.

The code in draw runs repeatedly in a loop. How fast is the loop? By default, it’s 60 frames per second, although it can be limited by your machine’s processing power. You can also manipulate it with the frame_rate method.

Here’s an example sketch that sets the window size, background and stroke colors, and draws a circle with a square around it.

def setup
  size 800, 600
  background 0
  stroke 255
  no_fill
  rect_mode CENTER
end

def draw
  ellipse width/2, height/2, 100, 100
  rect width/2, height/2, 200, 200
end

Here’s a quick run-through of what each of these methods is doing:

  • size(): Sets the window size. It takes two arguments: width and height (in pixels).
  • background(): Sets the background color. It takes four arguments: R, G, B, and an alpha (opacity) value.
  • stroke(): Sets the stroke color. Takes RGBA arguments, like background().
  • no_fill(): Tells Processing not to fill in shapes with the fill color. You can turn it back on with fill(), which takes RGBA values.
  • rect_mode: Tells Processing to draw rectangles using the x and y coordinates as a center point, with the other two arguments specifying width and height. The other available modes are: CORNER, CORNERS, and RADIUS.
  • ellipse: Draws an ellipse or circle. Takes four arguments: x-coordinate, y-coordinate, width, and height.
  • rect: Draws a rectangle or square. Takes four arguments: x-coordinate, y-coordinate, width, and height.

Note that the coordinate system in Processing starts at the top-left corner, not in the middle as in the Cartesian Coordinate System.

Running the Program

If you’re following along at home, let’s see what we’ve made! Save the code above into a file called my_sketch.rb.

There are two ways to run your program: you can either have it run once with rp5 run my_sketch.rb, or you can watch the filesystem for changes with rp5 watch my_sketch.rb. Let’s just use the run version for now.

Pretty basic, but it’s a good start! Using just the seven methods above, you can create all kinds of sketches.

Other Commonly Used Methods

Here are a few other useful Processing methods to add to your toolbox:

  • line(): Draws a line. Takes four arguments: x1, y1, x2, y2. The line is drawn from the point at the x, y coordinates of the first two arguments to the point at the coordinates of the last two arguments.
  • stroke_weight(): Sets the width of the stroke in pixels.
  • no_stroke(): Tells Processing to draw shapes without outlines.
  • smooth(): Tells Processing to draw shapes with anti-aliased edges. On by default, but can be disabled with noSmooth().
  • fill(): Sets the fill color of shapes. Takes RGBA arguments.

For a list of all the methods available in vanilla Processing, check out this list. Note that the Java implementation of these methods is in camelCase, but in Ruby Processing they are generally available in snake_case.

Some methods have also been deprecated, usually because you can use Ruby to do the same thing more easily.

If you see anything in the Processing docs and can’t get it to run in Ruby Processing, use $app.find_method("foo") to search the method names available in Ruby Processing.

Responding to Input

Now that we know how to make a basic sketch, let’s build something that can respond to user input. This is where we leave static visual art behind, and start to make interactive digital art.

Although you can use all kinds of physical inputs to control Processing (e.g. Arduino, Kinect, LeapMotion), today we’ll just use the mouse.

Processing exposes a number of variables reflecting its state at runtime, such as frame_count, width, and height. We can use the mouse_x and mouse_y coordinates to control aspects of our program.

Here’s a sketch based on the mouse_x and mouse_y positions. It draws lines of random weight, starting at the top of the screen at the mouse’s x position (mouse_x, 0) and ending at the bottom of the screen at an x coordinate between 0 and 200 pixels to the right of the mouse’s y value (mouse_y + offset, height).

def setup
  size 800, 600
  background 0
  stroke 255, 60 # first argument is grayscale value, second is opacity
  frame_rate 8
end

def draw
  r = rand(20)
  stroke_weight r
  offset = r * 10
  line mouse_x, 0, mouse_y + offset, height
end

Load that up and check it out!

Wrapping Your Sketch in a Class

One last note before we go: you can totally call other methods from within your setup and draw methods. In fact, you can even wrap everything in a class that inherits from Processing::App.

You can do everything you normally do in Ruby, so you can build a whole project, with logic branches and state, that controls the visual effect through these two methods.

Here’s a snippet from a version of Tic Tac Toe I built with Rolen Le during gSchool.

require 'ruby-processing'

class TicTacToe < Processing::App
  attr_accessor :current_player

  def setup
    size 800, 800
    background(0, 0, 0)
    @current_player = 'x'
  end

  def draw
    create_lines
  end

  def create_lines
    stroke 256,256,256
    line 301, 133, 301, 666
    line 488, 133, 488, 666
    line 133, 301, 666, 301
    line 133, 488, 666, 488

    #borders
    line 133, 133, 666, 133
    line 666, 133, 666, 666
    line 133, 666, 666, 666
    line 133, 133, 133, 666
  end

  ...
end

To see the rest of the code, visit the GitHub repo.

Another example of a game I built early on in my programming career can be found here. I later did a series of refactorings of this code on my personal blog.

I’m still working on a pattern for game development with Ruby Processing that I like. Keep an eye out for future posts about the best way to build a game with it.

Learning More

There’s so much more you can do in Processing than what we’ve covered here! Bézier curves, translations, rotations, images, fonts, audio, video, and 3D sketching are all available.

The best way to learn more is to do a lot of sketching. Just tinkering with the methods covered in this post would be enough to keep you busy creating new things for years.

If you’ve really caught the bug and want to go even deeper, check out some of these resources to learn more.

Built-in Samples

If you run rp5 setup unpack_samples, you’ll get a bunch of Processing sketch samples in a directory located at ~/rp_samples. I encourage you to open them up and take a look. There’s a lot you can glean by changing little bits of code in other projects.

Online Examples From Books

Learning Processing is an excellent book by Daniel Shiffman. In addition to being a valuable resource for Processing users, it has a number of examples available online.

Daniel Shiffman also wrote a book called The Nature of Code. The examples from it have been ported to Ruby and are another great resource for learning more.

Process Artist

There’s a great Jumpstart Lab tutorial called Process Artist, that walks you through building a drawing program à la MSPaint.

Conclusion

Processing is an awesome multi-disciplinary tool. It sits at the intersection of coding, visual art, photography, sound art, and interactive digital experiences. With the availability of Ruby Processing, it’s super easy to get started.

If you’re a programmer looking for a way to express your creativity, you couldn’t find a better way to do it than to try tinkering with Processing. I hope this post gets you off to a great start. Good luck and keep sketching!

Setting Up a Client-Side JavaScript Project With Gulp and Browserify

This post originally appeared on Engine Yard.

Introduction

For JavaScript developers, it can be hard to keep up to date with the latest frameworks and libraries. It seems like every day there’s a new something.js to check out. Luckily, there is one part of the toolchain that doesn’t change as often, and that’s the build process. That said, it’s worth checking out your options every now and then.

My build process toolset has traditionally been comprised of RequireJS for dependency loading, and Grunt. They’ve worked great, but recently I was pairing with someone who prefers to use Gulp and Browserify instead. After using them on a couple of projects, I’m coming to like them quite a bit. They’re great for use with Backbone, Angular, Ember, React, and my own hand-rolled JavaScript projects.

In this post, we’ll explore how to set up a clientside JavaScript project for success using Gulp and Browserify.

Defining the Project Structure

For the purposes of this post, we’ll pretend we’re building an app called Car Finder, that helps you remember where you parked your car. If you want to follow along, check out the code on GitHub.

When building a full application that includes both an API server and a clientside JavaScript app, there’s a certain project structure that I’ve found often works well for me. I like to put my clientside app in a folder one level down from the root of my project, called client. This folder usually has sibling folders named server, test, public, and build. Here’s how this would look for Car Finder:

car-finder
|- build
|- client
   |- less
|- public
   |- javascripts
   |- stylesheets
|- server
|- test

The idea is to do our app development inside of client, then use a build task to compile the JS and copy it to the build folder, where it will be minified, uglified, and copied to public to be served by the backend.

Pulling In Dependencies

To get up and running, we’ll need to pull in some dependencies.

Run npm init and follow the prompts.

Add browserify, gulp, and our build and testing dependencies:

npm install --save-dev gulp gulp-browserify browserify-shim gulp-jshint gulp-mocha-phantomjs \
gulp-rename gulp-uglify gulp-less gulp-autoprefixer gulp-minify-css mocha chai

If you’re using git, you may want to ignore your node_modules folder with echo "node_modules" >> .gitignore.

Shimming Your Frameworks

You’ll probably want to use browserify-shim to shim jQuery and your JavaScript framework so that you can write var $ = require('jquery') into your code. We’ll use jQuery here, but the process is the same for any other library (Angular, Ember, Backbone, React, etc.). To set it up, modify your package.json like so:

{
  "name": "car-finder",
  "author": "Ben Lewis",
  "devDependencies": {
    "gulp-rename": "^1.2.0",
    "gulp": "^3.8.10",
    "gulp-mocha-phantomjs": "^0.5.1",
    "gulp-jshint": "^1.9.0",
    "gulp-browserify": "^0.5.0",
    "browserify": "^6.3.4",
    "browserify-shim": "^3.8.0",
    "mocha": "^2.0.1",
    "gulp-minify-css": "^0.3.11",
    "gulp-uglify": "^1.0.1",
    "gulp-autoprefixer": "^2.0.0",
    "gulp-less": "^1.3.6",
    "chai": "^1.10.0"
  },
  "browserify-shim": {
    "jquery": "$"
  },
  "browserify": {
    "transform": [
      "browserify-shim"
    ]
  }
}

If you’re getting JSHint errors in your editor for this file, you can turn them off with echo "package.json" >> .jshintignore.
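With the shim in place, application code can require jQuery like any other CommonJS module. Here’s a minimal sketch of what that looks like (illustrative only; it assumes jQuery is available as the global $ on the page, which is what the shim config above maps the jquery module to):

// somewhere in your client code -- a hypothetical snippet
var $ = require('jquery'); // browserify-shim resolves this to the global $

$(function() {
  $('body').addClass('app-ready'); // placeholder behavior for illustration
});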

Setting Up Gulp

Now that we have the gulp package installed, we’ll configure gulp tasks to lint our code, test it, trigger the compilation process, and copy our minified JS into the public folder. We’ll also set up a watch task that we can use to trigger a lint and recompile of our project whenever a source file is changed.

We’ll start by requiring the gulp packages we want in a gulpfile.js that lives in the root of the project.

// Gulp Dependencies
var gulp = require('gulp');
var rename = require('gulp-rename');

// Build Dependencies
var browserify = require('gulp-browserify');
var uglify = require('gulp-uglify');

// Style Dependencies
var less = require('gulp-less');
var prefix = require('gulp-autoprefixer');
var minifyCSS = require('gulp-minify-css');

// Development Dependencies
var jshint = require('gulp-jshint');

// Test Dependencies
var mochaPhantomjs = require('gulp-mocha-phantomjs');

Now we can start defining some tasks.

JSHint

To set up linting for our clientside code as well as our test code, we’ll add the following to the gulpfile:

gulp.task('lint-client', function() {
  return gulp.src('./client/**/*.js')
    .pipe(jshint())
    .pipe(jshint.reporter('default'));
});

gulp.task('lint-test', function() {
  return gulp.src('./test/**/*.js')
    .pipe(jshint())
    .pipe(jshint.reporter('default'));
});

We’ll also need to define a .jshintrc in the root of our project, so that JSHint will know which rules to apply. If you have a JSHint plugin turned on in your editor, it will show you any linting errors as well. I use jshint.vim. Here’s an example of a typical .jshintrc for one of my projects. You’ll notice that it has some predefined globals that we’ll be using in our testing environment.

{
  "camelcase": true,
  "curly": true,
  "eqeqeq": true,
  "expr" : true,
  "forin": true,
  "immed": true,
  "indent": 2,
  "latedef": "nofunc",
  "newcap": false,
  "noarg": true,
  "node": true,
  "nonbsp": true,
  "quotmark": "single",
  "undef": true,
  "unused": "vars",
  "trailing": true,
  "globals": {
    "after"      : false,
    "afterEach"  : false,
    "before"     : false,
    "beforeEach" : false,
    "context"    : false,
    "describe"   : false,
    "it"         : false,
    "window"     : false
  }
}

Mocha

I’m a Test-Driven Development junkie, so one of the first things I always do when setting up a project is to make sure I have a working testing framework. For clientside unit testing, I like to use gulp-mocha-phantomjs, which we already pulled in above.

Before we can run any tests, we’ll need to create a test/client/index.html file for Mocha to load up in the headless PhantomJS browser environment. It will pull Mocha in from our node_modules folder, require build/client-test.js (more on this in a minute), then run the scripts:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Mocha Test Runner</title>
    <link rel="stylesheet" href="../../node_modules/mocha/mocha.css">
  </head>
  <body>
    <div id="mocha"></div>
    <script src="../../node_modules/mocha/mocha.js"></script>
    <script>mocha.setup('bdd')</script>
    <script src="../../build/client-test.js"></script>
    <script>
      if (window.mochaPhantomJS) {
        mochaPhantomJS.run();
      } else {
        mocha.run();
      }
    </script>
  </body>
</html>

Setting Up Browserify

Now we need to set up Browserify to compile our code. First, we’ll define a couple of gulp tasks: one to build the app, and one to build the tests. We’ll copy the compiled output to public so we can serve it unminified in development, and we’ll also put a copy into build, where we’ll grab it for minification. The compiled test file will also go into build. Finally, we’ll set up a watch task to trigger rebuilds of the app and the tests when one of the source files changes.

gulp.task('browserify-client', ['lint-client'], function() {
  return gulp.src('client/index.js')
    .pipe(browserify({
      insertGlobals: true
    }))
    .pipe(rename('car-finder.js'))
    .pipe(gulp.dest('build'))
    .pipe(gulp.dest('public/javascripts'));
});

gulp.task('browserify-test', ['lint-test'], function() {
  return gulp.src('test/client/index.js')
    .pipe(browserify({
      insertGlobals: true
    }))
    .pipe(rename('client-test.js'))
    .pipe(gulp.dest('build'));
});

gulp.task('watch', function() {
  gulp.watch('client/**/*.js', ['browserify-client']);
  gulp.watch('test/client/**/*.js', ['browserify-test']);
});

There’s one more thing we’ll need to do before we can run our gulp tasks: make sure we actually have index.js files in each of the folders we’ve told Browserify to look at, so it doesn’t raise an error. Add one to the client and test/client folders; a minimal stub like the one sketched below is enough for now.
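Something as small as this will do (a throwaway stub we’ll replace with real code shortly):

// client/index.js (and test/client/index.js) -- temporary stub entry point
module.exports = {};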

Now, when we run gulp browserify-client from the command line, we see new build/car-finder.js and public/javascripts/car-finder.js files. In the same way, gulp browserify-test creates a build/client-test.js file.

More Testing

Now that we have Browserify set up, we can finish getting our test environment up and running. Let’s define a test Gulp task with browserify-test as a dependency, so our watch only needs to reference test. We’ll also update the watch to run the tests whenever we change any of the app or test files.

gulp.task('test', ['lint-test', 'browserify-test'], function() {
  return gulp.src('test/client/index.html')
    .pipe(mochaPhantomjs());
});

gulp.task('watch', function() {
  gulp.watch('client/**/*.js', ['browserify-client', 'test']);
  gulp.watch('test/client/**/*.js', ['test']);
});

To verify that this is working, let’s write a simple test in test/client/index.js:

var expect = require('chai').expect;

describe('test setup', function() {
  it('should work', function() {
    expect(true).to.be.true;
  });
});

Now, when we run gulp test, we should see Gulp run the lint-test, browserify-test, and test tasks and exit with one passing example. We can also test the watch task by running gulp watch, then making changes to test/client/index.js or client/index.js, which should trigger the tests.

Building Assets

Next, let’s turn our attention to the rest of our build process. I like to use Less for styling, so we’ll need a styles task to compile it down to CSS. In the process, we’ll use gulp-autoprefixer so that we don’t have to write vendor prefixes in our CSS rules. As we did with the app, we’ll create a development copy and a build copy, and place them in public/stylesheets and build, respectively. We’ll also add the less directory to our watch, so changes to our styles get picked up (see the updated watch task after the gulpfile changes below).

We should also uglify our JavaScript files to improve page load time. We’ll write tasks for minification and uglification, then copy the minified production versions of the files to public/stylesheets and public/javascripts. Finally, we’ll wrap it all up into a build task.

Here are the changes to the gulpfile:

gulp.task('styles', function() {
  return gulp.src('client/less/index.less')
    .pipe(less())
    .pipe(prefix({ cascade: true }))
    .pipe(rename('car-finder.css'))
    .pipe(gulp.dest('build'))
    .pipe(gulp.dest('public/stylesheets'));
});

gulp.task('minify', ['styles'], function() {
  return gulp.src('build/car-finder.css')
    .pipe(minifyCSS())
    .pipe(rename('car-finder.min.css'))
    .pipe(gulp.dest('public/stylesheets'));
});

gulp.task('uglify', ['browserify-client'], function() {
  return gulp.src('build/car-finder.js')
    .pipe(uglify())
    .pipe(rename('car-finder.min.js'))
    .pipe(gulp.dest('public/javascripts'));
});

gulp.task('build', ['uglify', 'minify']);
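
The watch task we defined earlier only knows about JavaScript files. Since we promised that style changes would be picked up too, here’s one way to extend it (a sketch; re-running the styles task on Less changes is one reasonable choice):

gulp.task('watch', function() {
  gulp.watch('client/**/*.js', ['browserify-client', 'test']);
  gulp.watch('test/client/**/*.js', ['test']);
  gulp.watch('client/less/**/*.less', ['styles']);
});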

If we now run gulp build, we see the following files appear:

- build/car-finder.css
- public/javascripts/car-finder.min.js
- public/stylesheets/car-finder.css
- public/stylesheets/car-finder.min.css

Did It Work?

We’ll want to check that what we’ve built is actually going to work. Let’s add a little bit of styling and JS code to make sure it’s all getting compiled and served the way we hope it is. We’ll start with an index.html file in the public folder. It will load up the development versions of our CSS and JS files.

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Car Finder</title>
    <link rel="stylesheet" href="stylesheets/car-finder.css">
  </head>
  <body>
    <script src="javascripts/car-finder.js"></script>
  </body>
</html>

We’ll add some styling in client/less/index.less:

body {
  background-color: DarkOliveGreen;
}

Now we’ll write our million dollar app in client/index.js:

alert('I found your car!');

Let’s put it all together. Run gulp build, then open public/index.html. Our default browser opens a beautiful olive green screen with an alert box. Profit!

One Task To Rule Them All

At this point, I usually like to tie it all together with a default Gulp task, so all I have to do is run gulp to check that everything’s going together the way I expect, and start watching for changes. Since test already does the linting and browserifying, all we really need here is test, build, and watch.

gulp.task('default', ['test', 'build', 'watch']);

Wrapping Up

We’ve now set up our project to use Browserify and Gulp. The former took the headache out of requiring modules and dependencies, and the latter made defining tasks for linting, testing, Less compilation, minification, and uglification a breeze.

I hope you’ve found this exploration of Gulp and Browserify enlightening. I personally love these tools. For the moment, they’re my defaults when creating a personal project. I hope this post helps make your day-to-day development more fun by simplifying things. Thanks for reading!