Fluxus Frequency

How I Hacked The Mainframe

How to Write a Technical Blog Post: Part 3

This post originally appeared on The Quick Left Blog

Part 3: Publish

In this three part series, we’re exploring what it takes to break into the technical blogging space. In the first part, we looked at initial steps you can take when preparing to write. In the second part, we explored how to get into a good flow during the writing itself. In this, the third and final part of the series, we’ll talk about one more aspect of how to write a technical blog post: getting as many people to read it as possible.

Survey Your Kingdom

So you’ve generated your blog post idea, thought about your long-tail keyword, written your blog post, proofread and edited it. It’s time to think about pushing your little bird out of the nest and seeing how she flies.

First, take another look at what you’ve written. Is there anything you left out? Are there parts you’ve included that don’t really belong? Maybe you can split them out and use them for another post. Your readers will appreciate it if you stick to a single topic. It makes for an easier time digesting what you’re talking about.

Along the same lines, take a look at the length of your post. If it’s really long, consider splitting it into a series. I’ve found that the best length is around 750-1000 words. After that, posts tend to lose focus, and readers tend to check out. Plus, when you’re publishing a series, you have more chances to promote yourself.

When you’re sure that you’ve got your post(s) tightly focused, you’re almost ready to publish. There are just two more things to consider: SEO and scheduling.

Optimize for SEO

Since you want to get as much traffic as you can once your post goes live, this is a good time to go through your post and make sure that you’ve done what you can to get good Search Engine Optimization (SEO).

Here are some things to consider. Does your long-tail keyword phrase appear in all of the following places: the page title, the main headline, a couple of times in the body, the meta description, and the page URL? It’s also a good idea to include several images (don’t forget the alt tags - set one of them to your keyword phrase), links to pages both internal and external to your site, and a set of relevant meta keywords. For fun, you can also view your page as it appears to a search engine bot using SEO Browser.
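As a sketch, those placements might look like this in a post’s rendered markup. The keyword phrase is this series’ own example; the URL, description, and image are invented for illustration:

```html
<!-- Hypothetical markup showing the SEO placements listed above. -->
<head>
  <!-- Keyword phrase in the page title and meta description -->
  <title>How to Write a Technical Blog Post</title>
  <meta name="description"
        content="Learn how to write a technical blog post that readers can find and enjoy.">
  <!-- A set of relevant meta keywords -->
  <meta name="keywords" content="how to write a technical blog, technical blogging, SEO">
  <!-- Keyword phrase in the page URL -->
  <link rel="canonical" href="https://example.com/how-to-write-a-technical-blog">
</head>
<body>
  <!-- Keyword phrase in the main headline -->
  <h1>How to Write a Technical Blog Post</h1>
  <!-- An image with its alt tag set to the keyword phrase -->
  <img src="writing-desk.jpg" alt="how to write a technical blog">
  <p>... the keyword phrase repeated a couple of times in the body ...</p>
</body>
```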

Build The Buzz

Ok. Your content is all set. All that’s left is to put it out into the world. Before you click “publish”, think about how you’re going to send it off. A blog post is not like a software product. A soft launch is usually not a great idea.

When I’m thinking about releasing a post, I recall my days in the music industry. There’s some common wisdom in that industry about releasing an album that goes like this. You want to slowly build the buzz, like a swelling wave, in the weeks before the album drops. Then, you drop it right when the wave is at its peak. The number of sales you make in the first week is greatly indicative of how well the album will sell over time.

While this wisdom may not fit exactly for tech blog posts, as they can stay relevant or even become more relevant as the industry changes, it’s still worth thinking about “the wave swell” when getting ready to publish a post. Good ways to build the buzz include: reaching out to influencers before you publish, discussing your topic and related topics on Twitter and Hacker News, and piggybacking on trending hashtags to get people thinking about what you’re writing about.

Ideally, by the time you go live, you’ll just be continuing the conversation that’s already been happening. Your post will come out right on time.

Schedule Release

Going along with the idea of building the buzz, be intentional about when you plan to publish your post. You can probably configure your blog platform and social media accounts to publish content at a specific time.

Find out the times when your target readers are most likely to see that your article came out, and publish then. Follow up with scheduled tweets.
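If your blog runs on a static-site generator, scheduled publishing usually means a future-dated post plus a timed build. Here’s a minimal sketch, assuming Jekyll (the title and date are invented; your platform’s mechanism will differ):

```yaml
---
# Hypothetical front matter for a Jekyll post. With Jekyll's default
# `future: false` setting, builds that run before this date skip the
# post, so a build scheduled after 9:00 AM on launch day (via cron or
# your CI) publishes it right on time.
layout: post
title: "How to Write a Technical Blog Post: Part 3"
date: 2016-01-05 09:00:00 -0700
---
```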

If you know there’s an event related to your post coming up, plan to publish just before or after that event. For example, if you’re writing about a new feature in Rails 5 and it’s coming out on Christmas, plan to publish your post during the week surrounding Christmas.

If you’re a prolific writer, you can space out your posts to build on your own buzz. If you have two posts ready to go, don’t publish them a day apart. Give the first one a little time to get some traction, then hit your audience with the second just as they’re beginning to forget about you. This works especially well with a series.

Spending a little time to think about when you should publish your post can go a long way toward getting your voice heard by a wider audience.

Shout It From The Rooftops

This final point is probably obvious, but you’ll want to promote your work as extensively as possible once it finally goes live. Here are some good places to self-promote:

At a minimum, I recommend promoting your post on Twitter, Hacker News, and Reddit as soon as it comes out.

If your blog allows comments, or if you post to Hacker News or Reddit, you’ll probably begin to get some questions and hear some opinions. Take the time to respond to them. The more you engage with people, the more they will appreciate and share your writing.

Wrapping Up

Over the course of this three part series, we’ve followed the entire cycle of how to write a technical blog post. We started from the barren field of your mind, with nothing but doubts, in part one, then traveled through the process of actually writing posts in part two. With this post, part three, we’ve come all the way to the end: SEO and self-promotion.

I hope this series has given you the tools you need to enter the world of technical blogging. Although blogging can seem overwhelming at first, it’s actually not as difficult as it seems. Once you’ve written a post or two, you’ll begin to discover a process that works for you. People will start to recognize you around the web (and around town). At that point, you’ll be building on your past successes. Promotion will get easier too, because people will already be familiar with your work.

Best of luck to you in your writing career. I look forward to reading what you come up with!

How to Write a Technical Blog Post: Part 2

This post originally appeared on The Quick Left Blog

Part 2: Write

In this three part series, we’re looking at some of the best ways to get into a good flow as a technical blogger. In the first part, we talked about some initial steps you can take to get psyched up when figuring out how to write a technical blog post. In this, the second part of the series, we’ll talk about how to do the actual writing itself. We’ll explore some effective structures you can use to organize your posts, thoughts about the creative process of writing, and how to make sure your content is as good as it can be.

Define Your Structure

So you’ve got a topic, you’ve identified a long-tail keyword, and you’re ready to start writing. You put your text editor in distraction-free mode, don your noise-canceling headphones, and get ready to dig in. But where to begin?

I recommend treating your first session on a given post as a scaffolding-building session. Don’t expect to get into details. Forget about jokes and memes. Just sketch an outline of what you want to write about.

There are quite a few good ways to structure a blog post. You don’t have to reinvent the wheel. Just pick one of these basic approaches, and it should get you where you need to go.

The List

You know what I’m talking about. Titles like Five Ruby Methods You Should Be Using are gold. Potential readers know titles like this are a cheap shot, but they can’t resist clicking the link anyway. That’s why it’s called click-bait.

If you’re writing this kind of post, the structure is obvious: write an intro and a conclusion, and slap the five Ruby methods and their explanations in the middle.

The How-To

So much of our industry is about learning new technologies and new ways to do things. Developers need to understand “how to do X” every single day. When they google for it, they will probably type “how to…” If your post begins with those words, perhaps it will be the one they find and read.

Writing a how-to article is a little more complex than doing a list. I usually break it down like this:

1. Introduction
2. Introduce a theoretical coding situation
3. Write the test for what you want to solve
4. Make the test pass
5. Repeat steps 3-4 until the point is made
6. Conclusion that includes a link to the code on GitHub

I used this structure in my Wrapping Your API In A Ruby Gem post, and got a lot of great feedback from readers.
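As a minimal sketch of steps 3 and 4, suppose the post teaches readers to build a hypothetical `slugify` helper test-first (the method and expectations are invented for illustration):

```ruby
# Step 3: write the test for what you want to solve. Running the
# assertion before slugify exists (or with an empty body) fails:
# that's "red". Step 4: implement just enough to pass: "green".
def slugify(title)
  title.downcase.strip.gsub(/[^a-z0-9\s-]/, "").gsub(/\s+/, "-")
end

raise "test failed" unless slugify("How To Write a Blog Post!") == "how-to-write-a-blog-post"
puts "test passed"
```

From there, step 5 is just more of the same: add a failing assertion for the next behavior you want, make it pass, and repeat until the post’s point is made.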

The (Five) Paragraph Essay

If you’re writing an opinion or agile process piece, the basic Five-paragraph essay style is a great way to organize your thoughts. If you went to high school, you’re probably familiar with this layout, so it can be a comfortable choice to reach for. To write an essay think about how you’ll introduce your topic and assert a thesis. Next, prove your point with supporting arguments. Finally, summarize and reiterate your argument in the conclusion. Your outline will look something like this:

1. Introduction & Thesis
2. Supporting Point 1
3. Supporting Point 2
4. Supporting Point 3
5. Conclusion

The Well-Actually

At Quick Left, we do a lot of joking about being neckbeards. There are a lot of smart people who work here, and more often than not, they have strong opinions about how to do things. Sometimes when one person begins to make a statement, an opinionated colleague will correct them with a sentence that begins with “well actually…”

“Well actually” is a great phrase, full of tension. In fact, you can build an entire blog post around the drama of this tension. Some of the most interesting articles I read are ones that follow what my mentor Jeff Casimir called “the hero’s journey”.

The basic breakdown of the hero’s journey story goes like this. First, you write about “I always thought that foo worked like bar”, or “I’ve always solved foo by doing bar”. Then you move on to say, “one day, I decided to solve foo by using baz instead”. Finally, you wrap the whole thing up with “but it turns out that the correct solution is neither bar nor baz, but qux”. This is your “well actually moment”.

These kinds of posts can be fascinating to read because they follow a story arc, and they provide more suspense than you get in flat structures like lists and how-tos.

Interlude: Let It Simmer

Deciding on a topic, generating a long-tail keyword, and sketching out your basic structure is pretty good for a first day’s work on a post. At this point, I usually like to leave the post alone for a while and let it simmer. I’ve found that when I take a break and give it a little space, my subconscious gets to work on exploring the main points I’ve set out for myself. When I come back, I find I have plenty of ideas of what to say. When I take some space in between writing sessions, the resulting post tends to be a lot richer than when I force my way through in a single sitting.

Fill In The Details

After you’ve taken a bit of a break and you’re ready to come back to your writing, it’s time to actually do the hard part. At this point, I typically feel that I’d like to do anything but sit down at the keyboard. I can think of a million distractions: “I’ll start writing right after I go get a pumpkin spice latte”, “just as soon as I send this email”, or “I think I’ll check Reddit first”. All of these impulses are what a mentor of mine once called “anything to avoid buckling down”.

So how do we overcome the feeling of “anything to avoid buckling down”? My favorite way to deal with this problem is to set up a ritual. I won’t get into the details of what mine entails, but here are some things that can be helpful: put on a specific kind of music, remove social media, email, and chat distractions, set up a different mode in your text editor, or drink a certain coffee beverage. You can get even more superstitious if you want to. But the basic idea is that by identifying a series of things you do before you write, you can train your brain to get “in the zone”.

Once you’ve gotten in the zone (however it is that you do that), just start writing. You know the phrase “genius is 1% inspiration and 99% perspiration”? This step - filling in the details - is the perspiration part.

The key thing at this stage is to let yourself get into flow state. As you start to express an idea, you’ll be tempted to stop and think “is that good enough”, “is that really accurate”, or “could I have worded that better?” I recommend just letting it come as it will, and put off the judgements for later. Interrupting yourself to evaluate your writing disrupts your flow. If you’re a programmer, think of it as a TDD exercise: first make the idea come out, then refactor.

Often times, you’ll have to complete a couple of sessions like this before you get through your entire structure and have a complete first draft. Each time you sit down to write about it, you have to get back into the flow. Just follow your ritual and keep working. You’ll be through it before you know it.

Write A Conclusion

There’s a bit of an art to writing a conclusion section. On one hand, you want to reiterate and summarize the high-level concepts involved with what you’ve been talking about. On the other, you want to encourage the reader to think about the wider implications of the subject. How can it be applied to other situations? How can what you’ve been talking about be extended to illuminate a higher level of understanding? For example, if you were writing about Stripe integration, maybe talk about how your post fits into the broader context of e-commerce.

The conclusion is also a great place to encourage people to take some sort of action on what they’ve learned. Whoever’s hosting your blog post would probably like to have the readers interact with their website. You can push them to do this by linking them to relevant content elsewhere in the site. Or you can use a P.S. section encouraging them to leave a comment on your post.

Proofread It (Twice)

Once you’ve completed your conclusion, your first draft is done. Take another break and get some space from the post. Shift your mindset from “getting stuff done” to “let’s clean this up” instead.

Remember above when I suggested you put off judging or evaluating your work? Now’s the time to invite that impulse back in. It’s time for everyone’s least favorite part of writing: proofreading. You need to do it. Read through what you’ve written and make any changes that suggest themselves. Then read it aloud and see how it sounds. Make more changes. Pretend that you are a member of your target audience and read it from their point of view.

The more times you repeat this process, the clearer and more understandable your writing will be. Aim for brevity and precision. Long sentences are hard to understand. Only write as much as you need to get your point across, cutting out extraneous words and explanations and tightening up your word choices.

Ask For Review

It’s a really good idea to get another set of eyes on your writing. If you’re lucky enough to have an editor, they will give you great ideas for improving your phrasing, word choice, and structure. If not, reach out to your peers to see if any of them are willing to help you improve your work. Especially target those you think already know the ins and outs of how to write a technical blog post. The more people you can get to check your work before you release it to the world, the better.

You can also ask someone in the role you’re writing for to review it. If there’s nobody on your team in that role, try reaching out to a popular influencer on Twitter or elsewhere on the web. In fact, this is also a good thing to consider doing for promotional purposes. Reaching out to major influencers early and gathering their feedback on your topic might lead to them sharing your content, which will dramatically improve your traffic.

Once you’ve solicited some help, take your reviewers’ advice to heart. You don’t have to incorporate every single change they suggest, but try to keep in mind that they’re not out to criticize you personally. They’re trying to help you, and the places they have trouble understanding your language are good spots to polish up the way you present your ideas.

Once you’re done proofreading, editing, and making changes from reviewer comments, you’re almost done writing. But first, there’s just one more thing. Proofread it again. I’m not kidding :)

Wrapping Up

Even if you know what you want to write about, the process of actually getting the words down can be hard if you haven’t done a lot of writing in the past. In this post, we’ve looked at some cookie-cutter structures you can use to scaffold your post, the importance of giving yourself some space between writing sessions, and tips for proofreading.

Stay tuned for the third and final part of this series, where we’ll explore how to draw more visitors to your post using SEO best practices and how to promote your writing on social media for the most benefit. See you then!

How to Write a Technical Blog Post: Part 1

This post originally appeared on The Quick Left Blog

Part 1: Prepare

Content is king. Bill Gates predicted it in 1996. Much of the money made online today is in content. In the tech world, where languages and frameworks are here today and gone tomorrow, this is doubly true. Developers, managers, and CEOs of technical companies spend an enormous amount of time understanding their chosen tools, the next hot thing, and how to stay relevant.

It’s no wonder so many startups host blogs on their sites. Blogs drive traffic to your site, increase your visibility, and elevate your brand in the tech world. So how can you get a piece of the action?

In this three part series, we’ll explore some strategies you can use to generate ideas, produce clearly written blog posts, and effectively promote your work on the internet. In this, the first part of the series, we’ll talk about how to get started as a technical writer: overcoming mental resistance, generating ideas, and starting with your audience in mind.

Get Psyched Up

Figuring out how to write a technical blog post can be overwhelming if you’re not used to it. Many people find it hard to choose a topic. A lot of times, it comes down to feeling like you don’t know enough about anything to write about it. But if you think that you have to be an expert before you start writing, think again.

I don’t know how many times I’ve heard people say: “I would write about (insert technology) if I knew a little bit more about it”. It’s common to feel uncomfortable with the idea of publishing a piece telling the world “how you should do X”. But fret not. This is just a little bit of imposter syndrome.

Try on a different perspective: think of blogging as a learning process. Maybe you’re not the world’s foremost expert on Flux. It interests you, but it’s a little bit hard to wrap your head around. Instead of feeling like you have to be an experienced Flux developer before you write about it, think of blogging as the path you’ll use to understand it. There’s no better way to understand something deeply than to teach it.

If you want to learn something, read about it. If you want to understand something, write about it. If you want to master something, teach it.

- Yogi Bhajan

Blogging can be a great way to give structure to what you want to understand about a topic. It can also serve as a roadmap of how you will get to mastery. What are the components of the topic? How can you break it down? What little pieces can you try to grok that will help you see the big picture? These are the questions you have to ask when you’re teaching something. They also happen to be the questions you’ll need to answer to learn something!

You can also treat blogging as a way to document how to solve a specific problem so that you can look it up later. Once you’ve gone through the struggle of figuring out what’s going on, you’ll have a handy place to go back to and remind yourself what you did - in your own writing! Might as well open source it.

There have been a number of times in my blogging career when people have read one of my posts, then reached out to me and asked: “you’re an expert on X, what can you tell me about this one arcane part of how it’s written?” In almost every case, it was the first time I’d even tried to understand the topic!

Brainstorm

Once you’ve gotten past your initial reservations about writing, it’s time to choose a topic. Figuring out what to write about can be just as daunting as deciding to write. But once you know where to look, you’ll find that fertile ideas present themselves every day.

Here are a few of the strategies I use to come up with topic ideas.

As a consultant, I work on a lot of different projects. Each one usually has one or two unusual problems that have to be solved in a novel way, either due to the business domain or idiosyncrasies of the tech stack. I keep an eye out for things like this and write about them. They usually make for an interesting read for curious developers.

I also keep an eye out for the ways client teams perceive me to stand out. Sometimes I’ll suggest a certain way of doing things that is new to a team, and I get a lot of feedback about what a great idea it is. For example, the Quick Left Pull Request Template is often quite appreciated.

Similarly, the particular business situation that a client company finds itself in often lends itself to a process-related post. I recently did some work for a company whose codebase hadn’t been worked on for a while, so I wrote this post and this post to help technical leaders in this situation find some direction.

In all of these examples, you could just as easily leverage your business or technical situation even if you’re not a consultant. Think about your codebase’s “gotchas”, ideas that new hires have brought in, and the quirks of your particular niche in the business world. These are rich sources of post topics.

If none of these situations applies and you’re still drawing a blank, you can always go the old fashioned route: google it. Sites like Buzzsumo, SEMRush, and Alexa are great for generating ideas. I also recommend Google Trends for figuring out what people are asking about a given topic. All of these tools are great ways to identify things to write about that people will actually be interested in reading. This is great for generating page views.

Consider Your Audience

While we’re on the topic of page views, you should probably consider who it is that you’re writing for. Is it developers? Your product owner, scrum coach, or CEO? Maybe marketers or salespeople? What role do they play in the tech industry?

It’s vital to have an idea of who you want to read your post. Knowing this enables you to answer an important question: what problem(s) does this person need to solve in order to do his/her job?

When people face a difficulty that they don’t know how to solve, they google the answer. If your blog post comes up at the top of the search results, they’ll probably read it. On the other hand, some folks will look for the answers to their questions in other places online. Where does your target audience spend their time? Are they hanging out on Reddit, Stack Exchange, or Quora? Think about how you can get your post there, and write it such that it will be accepted by that online community.

Identify A Long-Tail Keyword

You might think that marketing your piece is something that comes after you’ve written it. But considering how you will promote your post before you write it can help focus your writing and lead to a warmer reception when you publish. One of the easiest wins you can make early on is to choose an effective title.

I typically try to match my titles with a well-chosen long tail keyword. Long tail keywords are specifically targeted 3-4 word phrases meant to be found by readers researching a particular topic.

Because single-word rankings in search results are very competitive, it’s easier to get a higher search result rank for a multi-word phrase than a single keyword. Usually, it’s best to think of the most compact way to express what you’re writing about using common language.

For example, the long-tail keyword for this post is “how to write a technical blog”. It’s more likely that someone would search for that phrase than something longer like “steps to follow when you want to publish your first technical blog post”. On the other hand, it’s also more likely that my audience will find my post than if I had used something more generic like “tech blogging”.

Generating a good long-tail keyword is a bit of a fine art. If you have any friends that are marketers, ask them for help. Barring that, you can google terms similar to your idea to see what people are asking about, use a thesaurus, or follow the suggestions in this article.

Wrapping Up

Getting into the head space of writing a blog post can be difficult at first. But when you treat it as a learning process and break the process up into small, manageable tasks, it becomes easier. In this post, we’ve looked at how to rethink your mindset about the purpose of blogging, ways to generate ideas, thinking about your audience, and how to drive your topic with a tightly focused long-tail keyword.

Stay tuned for part two of this series, where we’ll talk about another important aspect of how to write a technical blog post: improving your actual writing process itself. Until then, good luck and happy blogging!

What It Takes to Be a Software Consultant

This post originally appeared on The Quick Left Blog

Four Things Our Developers Wished They Knew On Their First Day

As part of our onboarding process at Quick Left, we meet with recent hires after 30 days and ask them several questions about their experience so far. One of the questions that gets some of the most interesting responses is: “What advice would you give yourself on your first day?”

After six years in business, we’ve asked this question quite a few times. We recently took some time to read back through and analyze the responses we’ve gotten. When we looked closely, some trends started to emerge. Interested in finding out what it takes to be a software consultant? Read on to find out the top four pieces of advice our devs wished they had known on their first day.

Forget Imposter Syndrome

Becoming a tech consultant can be intimidating. It’s natural to feel like you don’t know enough to effectively counsel clients in making the best decisions. Our advice? Forget imposter syndrome.

As one developer put it, “you come in here more prepared than you think”. Everyone has life experience that they can bring to bear on the job. In the words of another QLer: “even junior developers have a lot to offer with their other experience”. We’ve hired people from all kinds of backgrounds, from the restaurant industry, to project management, to education, and every one of them has found that they have skills they can leverage in serving clients. “Just relax and know everything will be fine. You’ll learn things”.

If you just can’t shake the feeling of inadequacy, you can always fall back to this advice: “shut up about your lack of confidence, keep it to yourself”.

Be A Self-Starter

Quick Left has always been known as a democratic place to work. We made the WorldBlu List of Most Democratic Workplaces in 2012 and 2013. As we’ve grown, we’ve always managed to keep that spirit alive despite the pressures of scaling. One of the things that’s made that possible has been that we’ve focused on hiring people who know how to take the initiative.

Here’s some of the advice we’ve heard employees give about being a self-starter: “start working, don’t wait for someone to tell you what to do”. Another person said, “just try things and if something breaks, it’s not the end of the world”. Because we’re smart and know when to ask for help, we know that sometimes the best strategy is just to dive in and try to solve things.

If all else fails, you can always read the manual. It never hurts to read the docs. If that fails, there’s always “source code spelunking” (a favorite pastime around here). One QLer reflected: “sometimes I was asking questions and the answer was right there in front of me and I just needed to read more”.

Get To Know Everyone

Two heads are better than one. And a whole team of heads is better than two! We pride ourselves on a strong culture of education and mentorship at Quick Left. We know that we can rely on each other to help when the going gets tough. None of that would be possible if we didn’t have great relationships within the team.

Although we host plenty of events that bring our developers together, like our hackfests and monthly happy hours, some of our developers found that they just had to get themselves out there and “engage with the team outside of work”.

Others reflected that they wished they’d leveraged the team as a resource earlier. So how do you get integrated? “Go out to coffee with everyone in the company”, one person suggested. This goes for both the social and the code realm: “reach out and talk to my co-workers more. Both small talk and technical stuff”.

As nerds, it can be difficult to connect socially. But the benefits more than make up for the little bit of discomfort it costs up front. Tight teams stay together.

Don’t Sweat It

Finally, there’s one piece of advice that we saw again and again as we looked back through the archives: “don’t sweat the small stuff”. The first days on a new job are always a little bit nerve-wracking, but everything is going to be fine in the end.

Your team wants you to succeed. We’re all in this together. So don’t get caught up in little problems, because we’re all here to support each other in the big picture. One developer admitted: “it was stressful for me to get into a new focus. I would have told myself to calm down.”

Displaying a little bit of confidence can go a long way. It can put clients, managers, and teammates at ease. This makes everything run a little bit more smoothly. Take another QLer’s advice: “just relax and know everything will be fine”.

Conclusion

There are a lot of things that go into being an effective developer. You have to know your tools, stay up to date on the latest technology, and make sure that you’ve thought through all of the edge cases. At the same time, being a consultant is no walk in the park either. You have to be able to balance budget, time constraints, client relationships, and getting work done. Between the two of them, what it takes to be a software consultant can seem daunting.

Even developers coming from a product background are sometimes daunted when they first become consultants. But after a few months, most of the people we hire tend to get the hang of it regardless of what they were doing before.

After doing this for several years, it’s interesting to look back and see that developers consistently gave the same answers for how to deal with filling this role. Time after time, the answer to the question “what advice would you give yourself on your first day” came out matching one of the same themes.

If you’re entering the world of software consulting, take these tips to heart. Believe in yourself, trust your gut, say hello, and just relax. Best of luck. See you on the interwebs!

Bring Back Your App: How to Prioritize Your Best Features

This post originally appeared on The Quick Left Blog


Introduction

The tech world moves fast. It’s not uncommon for a startup to scale from two people in a coworking space to a team of thirty or more in a matter of months. Just as often, the team shrinks again due to sudden market changes or errant developers moving on to the next big thing.

When you’re getting ready to rebuild your business, it can be hard to know where to begin. There are so many things to do, from marketing, to onboarding a new team, to deciding what to build next.

In part one of this series, we talked about some of the things you can do on the technical side to get your dev team up and running with a minimum of fuss.

In this, the second and final part of this series, we’ll focus on how to prioritize your best features. We’ll look at how to decide what you should get rid of and what you should pull out into its own codebase. Then we’ll tackle the question: what to build next?

Finding Your Strategic Direction

Assuming your development team is all ready to go, you might start to ask yourself: “what do we build next?”

But before you assume that you’re ready to start prototyping and shipping new features, you should take a hard look at what’s already there. Then, ask yourself if you shouldn’t do a little housekeeping first.

Take An Inventory

It’s a good idea to take an inventory of the workflows and features that are already in your application. Make a list of ways that users interact with the system. You could even consider writing user stories for the functionality that’s already there.

Consider Removing Features

Here’s a fun exercise. Once you’ve created a list of your use cases, go through and assign a value to each based on how important you think it is to your users. Then check out your analytics tool and compare your assumptions to the actual engagement of each feature.

When you’ve completed this process, you can begin to reprioritize. Are there secondary features that aren’t seeing much traffic? Maybe they were failed attempts to grow your user base that you never got around to pulling out. Maybe there are features that used to be popular, but a competing technology rendered them obsolete. Can you identify any parts of the app that require a lot of upkeep, but aren’t really serving your users? Here’s the tough question: what can you remove?
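As a concrete sketch of this exercise, suppose you export per-feature engagement numbers from your analytics tool and line them up against your own importance ratings. The feature names and numbers here are made up for illustration:

```javascript
// Hypothetical data: your assumed importance (1-10) next to measured traffic.
const features = [
  { name: 'dashboard',  assumedValue: 9, weeklyVisits: 12000 },
  { name: 'csv-export', assumedValue: 7, weeklyVisits: 40 },
  { name: 'chat',       assumedValue: 3, weeklyVisits: 8000 }
];

// Flag features that fall below a traffic threshold as removal candidates,
// sorted so the biggest mismatches (high assumed value, low traffic) come first.
function removalCandidates(features, minVisits) {
  return features
    .filter(f => f.weeklyVisits < minVisits)
    .sort((a, b) => b.assumedValue - a.assumedValue)
    .map(f => f.name);
}

console.log(removalCandidates(features, 100)); // [ 'csv-export' ]
```

The interesting cases are exactly the disagreements: a feature you rated a seven that almost nobody visits is a strong candidate for removal.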

Think about Basecamp. They used to be 37Signals, but in February 2014, they announced that they were dropping support for all of their applications aside from Basecamp. They whittled it down to what they knew was successful so that they could focus all of their resources on it.

You can do the same thing on a smaller scale. Focus on the part(s) of your application that made the business a success in the first place. Do your marketing and landing pages drive users to interact with that feature? Is it easy to get to? Is it easy to use? Are there other features or workflows that are getting in the way of users finding their way there?

If you can identify some features that you can lose, you can focus your resources on the things that bring users to your site. You free up energy to refine them and make sure they’re solid and bug-proof. It also gives you room to experiment with new secondary features without overwhelming your users. I know one company with a well-established product that follows this rule: “you can’t add a feature unless you remove one.”

Don’t build yet, even if you have customers clamoring for it. Remove the cruft first. Think of it as refactoring at an application level.

Consider Splitting Out Services

In part one of this series, we talked a bit about refactoring your application code. If you do a little bit of refactoring, and find yourself getting into this whole code extraction thing, maybe you want to go whole hog. Talk with your engineers about whether it makes sense to extract an entire part of your app into a separate service. Some good candidates include: exposing a REST API, creating a gem to talk to your API, storing data that’s shared with other applications, or integrating with an Elasticsearch service.

In some cases, starting a whole new code base can take a lot of time, and might not be worth it. On the other hand, if it’s hard to make changes in the code you’ve already got, sometimes extracting a service can actually speed things up. With a smaller surface area, and code written by people who currently work for your company, it’s easier for folks to grok what’s going on. This leads to faster development.

If you do go down the road of extracting services, your team should take the time to define clear boundaries between the old app and the new one, and describe how they will interact in detail. For example, if you’re pulling out an API, you might want to come up with a sample JSON response and the HTTP Status Codes that will be used before you build it. Defining the limitations and structure beforehand can keep a project like this from dragging on endlessly.
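For example, the team might agree on a sample payload before any code is written. The endpoint and fields below are invented for illustration:

```json
{
  "widget": {
    "id": 42,
    "name": "Sprocket",
    "created_at": "2015-06-01T12:00:00Z"
  }
}
```

Alongside the sample payload, write down the status codes: say, 200 for a successful lookup, 404 for an unknown id, and 422 for validation failures. With that contract agreed on, both apps can be developed and tested against it independently.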

Use MVP To Figure Out Where To Go Next

A lot of people don’t realize it, but a Minimum Viable Product (MVP) isn’t just the first version of the thing you’re building. It’s actually a process: a way of testing the market to find out whether the things you think people will like are actually things people will like.

If you haven’t tried driving your business this way, you should! Instead of just guessing what you should build next, or building whatever customers ask for, you can actually find out using research and numbers. Customer requests are great, but spiking on new features and getting information from analytics and customer feedback can really help you hit a home run.

Here’s the basic gist of MVP. You start with an idea. It might be an app. It might be a feature. Either way, you build just enough to find out whether it makes sense to invest further in your idea. How do you find out? Measure user response with clicks and sign-ups that actually go nowhere, and use analytics, questionnaires, interviews, and emails. When you’ve got data, analyze it. Do they love it? Keep going! Was it a partial success? Ask yourself: what can I keep, what can I lose? Otherwise, pivot or abandon your idea. Build something else. Measure its success. Analyze your data. Repeat.

For a more specific walk-through of using MVP to drive your business direction, check out my Actually MVP post.

Assuming you’re getting ready to scale up your app, you probably already have some ideas about what you want to build. Stop for a second, and consider adding a couple of months of MVP-driven development to your roadmap instead.

Wrapping Up

In this post, we looked at ways to find your strategic direction when beginning work on a product that’s been on ice for a while. We talked about how to decide what to remove, what to extract, and what to build next. Hopefully, the process I’ve outlined here has got you thinking about how to prioritize your best features. Now, it’s up to you to build the next big thing. Best of luck!

Bring Back Your App: Ramping Up Developers on Code

This post originally appeared on The Quick Left Blog

Introduction

Remember the glory days? Your company had it all! New signups every day. Coverage on the hottest blogs. Money was rolling in hand over fist. All thanks to your hot app. You and your goons really nailed it when you followed an MVP process and built just the thing the market was looking for.

But then something happened. You shifted focus to integrating with another company’s API. Or you lost your entire developer team. Slowly, kudzu vines covered over what was once a glorious app. The technology you used to build grew outdated. Bugs accumulated, but there was no time to fix them. You shed a single tear when you thought back to the glory days.

Finally, a new day dawned! You got a funding round! You’ve just hired a new team of developers. They start next week. You can’t wait to get them slinging code. But are you ready? Starting a new team on an old app isn’t as easy as handing them laptops and saying “clone down the repo”. In this post, the first of a two-part series, we’ll take a look at some advice you can use when ramping up developers on code in a legacy application. When we’re through, you’ll be ready to let the good times roll again!

Have An Easy Setup Process

Getting developers set up on a project can be time-consuming. Maybe you’re planning to write off two days for each person to set up their laptop, get the application and its dependencies installed, and familiarize themselves with the codebase. If it’s just one or two developers, you can probably justify four days of lost time. But what if you’re bringing on a team of ten? And what if they could get set up in one day? Or even half a day? That would free up 15 days for developing new features and bringing in more money!

So how can you get them up and running faster?

Documentation

Documentation is so important. I can’t possibly recommend it enough. As a consultant, it’s one of the first things I look for when checking out a new code base. Docs are the most efficient way to get a new team member up and running. If they’re up-to-date, new devs can read about the project architecture and setup process, instead of having one of your more experienced developers take the time to sit down and explain it.

There are many effective ways to make documentation accessible to your team. I’ve seen companies have success with their GitHub project wiki, Atlassian Confluence, a docs folder in their app repo, or a separate docs repo on GitHub.

Document as much as you can. Your main application should have a thorough README. It’s the first thing people will see when they visit the GitHub repository. If you’re running separate server and front-end apps, there should be a way to understand what has to be done to get them to talk to each other in development. How do you install the dependencies? How do you run the tests?

Sometimes, project setup can get held up by weird system setup issues. Maybe the app used to run on a pre-Yosemite version of OS X, but when you try to install it on Yosemite, it runs into problems with Nokogiri. I recommend creating a troubleshooting document and putting errors and their fixes into it. At Quick Left, we cut our dev setup time by several hours when we introduced a troubleshooting doc in the Sprint.ly repo. It also allowed senior members of the team to stay focused, instead of context switching to help people.
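A troubleshooting doc doesn’t need to be fancy. An entry can be as simple as this (the Nokogiri example is a common one, but your errors will vary):

```
### Nokogiri fails to install on Yosemite

Error:  "libxml2 is missing"
Fix:    gem install nokogiri -- --use-system-libraries
```

Every time someone burns an hour on a setup problem, the fix goes in the doc, and nobody burns that hour again.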

Some other good things to document include: language style guides, office culture, PR templates, things to look for in PR review, QA tasks, and deploy steps.

Docker

We live in an exciting time. The dawn of containers has made it easier than ever to make sure that each person working on a project is working with the same system setup. I like to use Docker to facilitate running applications across all environments, from development to CI to production. If you’ve never created a container before, I recommend checking out my colleague Alex Johnson’s Sailing Past Dependency Hell With Docker.

Docker isn’t right for every team, but if you have the knowledge and time to get it set up, it can save hours of time that would otherwise be spent googling errors and entering arcane commands into SSH tunnels. I’d recommend looking into it.

Pay Off Technical Debt

Great documentation is a good first step, but often it isn’t enough to ramp up developers on code in a reasonable amount of time on its own. If your app has accrued significant amounts of technical debt, this can present a major obstacle. There are two kinds of tech debt to look for: dependency debt and native debt. Here are some ways to pay down each.

Dependency Debt

If your app was built back in the heyday of (name your tech stack here), you probably made use of a lot of open source packages that were popular and well-maintained at the time. But fast-forward a few years, and suddenly some of your dependencies are outdated, others deprecated. This can be not only frustrating, but also a potential security concern.

If this describes you, you’ve probably missed some patches that deal with widely publicized security issues. This can be a concern for open source packages and languages alike. For example, if you’re still running Ruby 2.1.1, you might find that you’re vulnerable to DNS Hijack attacks and other security holes.

It’s safest to invest a couple of days getting your language and dependency versions updated before bringing on a bunch of new developers. Not only will this fix your security holes, it will save you tons of time in working out module version conflicts.

Gems & Modules

If your app is built in Ruby or JavaScript, you’re probably making use of Bundler or NPM to resolve dependency versions and pull them down. In your Gemfile or package.json, you may be specifying the specific versions of packages you want to use, like this:

gem 'rails',                  '~> 3.2.17'
gem 'jquery-rails',           '~> 2.1.3'
gem 'mysql2',                 '~> 0.3.11'
gem 'devise',                 '~> 2.2.0'
gem 'cancan',                 '~> 1.6.8'
gem 'nokogiri',               '~> 1.5.6'

Or this:

"dependencies": {
  "config": "^1.10.0",
  "hapi": "8.1.0",
  "hapi-auth-cookie": "^2.0.0",
  "superagent": "^1.2.0"
}

Your new developers pull down the repo, run bundle install or npm install, and soon find out that you actually can’t install some things on the latest version of OS X without passing special flags.

Or maybe you realize that the authentication gem you’re using is one or two major versions behind the latest release. Fine, you think, and update the version specification. But then, when you try to run the install, you have version conflicts with another gem.

I have yet to find a good way to resolve all of these problems at once. The best approach I’ve found is to start with the module that you most need to update. Bump its version, then try running the install. If there’s a dependency version conflict, change that. Keep following this pattern until the waterfall of pain subsides. Then repeat the process with the next package that you know needs updating. If you run into issues, sometimes a bundle update or deleting the node_modules folder completely and reinstalling can get you closer to success.

The Rails Upgrade By Version Trick

When it comes to updating Ruby gems, I’ve come across nothing more painful than updating Rails across multiple versions. I’ve recently been working on upgrading an old Rails 3.1 app to Rails 4.2 for a client. When I try updating the gem version all at once, I get so many regressions and new bugs that I don’t even know where to start.

There is a better way. Begin by upgrading Rails one minor version at a time. So, if you’re on 3.1, upgrade to 3.2 before attempting to go to 4.0. Then go on to 4.1 and 4.2. There’s a list of Rails releases you can use to drive this process. I’ve also found this Rails Upgrade Checklist site pretty helpful. It lets you know all the deprecations you need to change for each version along the way.
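In Gemfile terms, the progression looks something like this, running your full test suite and fixing deprecation warnings between each step. The version numbers here are illustrative; check the Rails release list for the actual latest patch of each line:

```ruby
# Step 1: 3.1.x -> latest 3.2.x; run the suite, fix deprecations.
gem 'rails', '~> 3.2.22'

# Step 2: 3.2.x -> 4.0.x; repeat.
# gem 'rails', '~> 4.0.13'

# Step 3: 4.0.x -> 4.1.x; repeat.
# gem 'rails', '~> 4.1.16'

# Step 4: 4.1.x -> 4.2.x; done.
# gem 'rails', '~> 4.2.11'
```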

Native Debt

So you’ve got all of your dependencies updated, and installing your app is a breeze. You can get a new developer from zero to sixty in an hour. Great! But now, do they actually understand anything about what’s going on in the code base?

If you’re lucky, your last team wrote clear, understandable code with lots of comments and no duplication. In reality, the odds of that are pretty low. Because of customer demands and pressure on time and budget, every code base ends up with some cruft, duplication, and just plain cryptic parts.

Take the time before your new team rolls on to clean things up a little bit. It will be invaluable in getting them up and running, ready to be part of the conversation on architectural decisions. Here are a couple of steps you can take to get there.

Refactor

You’ve heard it before: if you practice agile development, you need to refactor. It’s something you should be doing along the way if you’re practicing Test Driven Development. The mantra goes: red, green, REFACTOR. Nevertheless, there are probably dozens of places that your code could benefit from some refactoring.

If your team is new to refactoring, I highly recommend the Refactoring book or its Ruby counterpart. You don’t have to become refactoring experts, just pick up a couple of patterns and go to town. Most of the time, there is some kind of smell that’s repeated throughout the codebase. If you learn to identify it, it becomes easy to snipe.

Generally speaking, I’ve found that the easiest wins are to extract and inline code. This works at any level of scope. If you have a big, unwieldy method that’s doing a lot, you can extract a method from it. If you have a class calling a method that doesn’t make any sense, you might want to extract an object. You can also go the other way: if there’s some unnecessary indirection, you can pull code right into the class or method that needs it.

Spending a day or two refactoring the hairiest parts of your application can make things much more understandable. That makes a big difference in your team’s efficiency.

Delete Dead Code

Related to refactoring, another great way to reduce the mental overhead involved in starting with a new app is to remove dead code.

One easy win is to “snipe” unused dependencies. You should also look through the codebase and remove any functions or methods that are old and not being used anymore. It can be hard to determine where exactly those spots are. If you have someone who’s familiar with the code, asking them can be a good place to start.

My colleague Meeka Gayhart also suggests cleaning up cruft by looking for all the places where a package or method is called, and removing it if there aren’t any callers in the current code. You can also look through git history to confirm using the git pickaxe.
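Here’s a quick demo of the pickaxe in a throwaway repository; in your real app you’d just run the final command against your own history. The method name `legacy_export` is made up for the demo:

```shell
# Set up a throwaway repo with one commit that introduces the method:
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name demo
echo 'def legacy_export; end' > report.rb
git add report.rb && git commit -q -m 'add legacy_export'

# The pickaxe (-S) finds every commit that added or removed the string,
# which tells you when a suspect method appeared or was last touched:
git log -S 'legacy_export' --oneline
```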

If you have a good test suite, make sure to run it often as you remove suspect bits of code to make sure nothing broke.

Take the time to clean up your code base before new developers roll on; it can save huge amounts of time. New devs don’t have to work as hard to understand what’s going on, so they can spend those mental cycles shipping new features instead.

Wrapping Up

In this post, we looked at a couple of strategies for getting a new team ready to develop on an old code base. We looked at how to streamline the technical set up process, some tips for updating dependencies, and ways to pay down technical debt.

There are few things more exciting to a major stakeholder in an application than seeing it rise from the ashes to conquer the world a second time. I hope these tips help get you back to the glory days! Stay tuned for part two of this series, where we’ll take a look at some higher-level planning ideas that can help ensure you make the most of your development time from here.

Testing Flux Applications

This post originally appeared on The Quick Left Blog

Introduction

A lot of people in the JavaScript community are pretty excited about Facebook’s React library, and associated Flux architecture. We’ve been using these tools quite a bit in our client-side projects at Quick Left. It can be a little hard to wrap your mind around the way the data flows at first, but once you get used to it, you come to appreciate how clean it can be.

As with any development, test-driving features is the way to go in a Flux app. As I’ve been learning this technology, I’ve been collecting some of the less obvious patterns that make testing easier. In this post, we’ll take a look at some of these strategies, to make it easier for you to build the next big thing.

Setup

Project Structure

Since Flux is more of an idea than a framework, there’s no convention as to how to structure your project. I personally like to break it down in a fairly obvious fashion, with the different Flux objects grouped together by folder.

app/
├── actions
├── collections
├── dispatchers
├── lib
├── models
├── stores
└── views

There are a couple of places to put the tests, but I’ve been leaning toward a pattern where the tests live right alongside their corresponding files. This makes it easy to find the test for a given module. It also keeps the directory structures from getting out of sync, as they might if you put everything into a separate test/ folder. Here’s an example of what this might look like.

app/
├── actions
|   |── user-actions.js
|   |── user-actions-test.js
├── collections
├── dispatchers
├── lib
├── models
├── stores
|   |── users-store.js
|   |── users-store-test.js
└── views
    |── login.js
    |── login-test.js

When it comes time to gather the files for our test suite, we can just use globbing to find them all, so this layout doesn’t present any problems for test setup.

Testing Dependencies

Facebook recommends using their testing tool, Jest, to test React and Flux components. Although I totally respect Jest, it doesn’t run in the browser, plus I’m pretty used to the toolchain I’m about to describe, so I go about things a slightly different way.

Mocha + Chai

When it comes to testing frameworks, I’m a big fan of Mocha. It gives us describe blocks and, combined with ChaiJS, all of the BDD-style assertions we could want.

Sinon

Since there are a lot of dependencies in a Flux app, we’ll probably be doing a lot of stubbing. I like to use SinonJS for this purpose. It gives us stubs and spies, and its API provides the ability to drill down into how functions were called and with what arguments with a level of granularity that can come in really useful.

Karma

When it comes to test runners, there are many viable choices. Lately, I’ve been leaning toward Karma for most of my needs, because it’s easy to get set up, and it can be hooked into a coverage tool with ease.

Here’s an example karma.conf file for a Flux app in ES6 with Browserify and Babel.

module.exports = function(config) {
  config.set({
    frameworks: ['mocha', 'browserify'],

    files: [ 'app/**/*-test.js' ],

    preprocessors: {
      'app/**/*.js': [ 'browserify' ]
    },

    browserify: {
      debug: true,
      files: [
        'app/**/*-test.js'
      ],
      transform: [
        ['babelify', { sourceMapRelative: './app' }]
      ]
    },

    browsers: [ 'Chrome' ],

    singleRun: true
  });
};

Coverage

If you’re interested in setting up a coverage tool to see how well-tested your code base is, check out my post Measuring Clientside JavaScript Test Coverage With Istanbul.

NPM Build Scripts

As far as running tasks, you can rely on Grunt or Gulp, or you can just set a test script up in your package.json file. Doing it this way, running tests is as simple as typing npm test. Here’s what to put into package.json:

"scripts": {
  "test": "NODE_ENV=test ./node_modules/karma/bin/karma start"
}

With these dependencies set up, you’re all ready to start writing tests. Let’s take a look at some of the testing specifics.

Actually Writing Tests

There are four parts to a Flux app: Actions, Stores, Views, and Dispatchers.

As mentioned above, Flux is more of a pattern than a framework. Although several people have released experimental frameworks built in its image, there is only one official Facebook package, called flux. Ironically, it only contains a Dispatcher. You can find the source code here. Since this package works well and is tested externally, we won’t be looking at testing Dispatchers in this post.

Before we get into looking at the remaining parts of Flux in depth, here are a couple of tips that come in handy in all cases.

Any Object

Setting Up A Sandbox

Sinon sandboxes are a great way to use stubs and spies without having to restore the objects they’re touching later. You can clean things up automatically by setting up a new sandbox before each test and tearing it down afterwards.

beforeEach(function() {
  this.sinon = sinon.sandbox.create();
});

afterEach(function() {
  this.sinon.restore();
});

Getting Dependencies

Sometimes it can be a pain to pull in and/or stub a bunch of dependencies for an object you’re testing. There’s an easy way to grab what you need from within the test: using Rewire, which exposes a special __get__ method you can use to access whatever you need from the top level scope of the module. You can then stub out methods and properties on those modules. Here’s how to leverage it to your advantage.

beforeEach(function() {
  this.sinon = sinon.sandbox.create();
  this.todos = MyAction.__get__('todos');
});

describe('something related to todos', function() {
  it('doesnt have to care about todos', function() {
    this.sinon.stub(this.todos, 'getAll');
    // do something else that calls this.todos.getAll without worrying about the result
  });
});

As a note, you don’t even need to use Rewire unless you need access to instance variables on your objects. Since Flux uses plain objects, multiple calls to require will always return the same object. This means that you can just spy on or stub out a method on one of your Actions or Stores directly after requiring them.

Actions

Testing Event Dispatching

When you’re writing a Flux action, it typically sends some kind of event and payload to the AppDispatcher to trigger events registered elsewhere in the application. It’s easy to spy on the AppDispatcher and test that it’s called with the right arguments to ensure that your Action is working properly.

// my-action.js
import myCollection from '../collections/my-collection';
import AppDispatcher from '../dispatchers/app-dispatcher';

let MyAction = {
  loadModels() {
    myCollection.fetch().then(function() {
      AppDispatcher.dispatch({
        actionType: 'COLLECTION_LOAD'
      });
    });
  }
};

export default MyAction;



// my-action-test.js
import MyAction from './my-action';
import AppDispatcher from '../dispatchers/app-dispatcher';
import myCollection from '../collections/my-collection';

it('dispatches an event', function(done) {
  this.spy = this.sinon.spy(AppDispatcher, 'dispatch');
  // Stub fetch so the test never hits the network:
  this.sinon.stub(myCollection, 'fetch').returns(Promise.resolve());

  MyAction.loadModels();
  // The arrow function preserves `this` so we can reach the spy:
  setTimeout(() => {
    sinon.assert.calledOnce(this.spy);
    done();
  }, 0);
});

Testing Promises

We often load data from a remote server in our Action objects, so there are typically a lot of promises involved in its internals. When testing these methods, it’s often useful to stub out these promises. It’s pretty easy to do using native promises in ES6. Note that we use setTimeout and done to ensure that the promise is fully resolved before testing our assertion and moving on to the next test.

// search-action.js
import searchClient from '../lib/search';
import AppDispatcher from '../dispatchers/app-dispatcher';

let SearchAction = {
  search(query) {
    AppDispatcher.dispatch({
      actionType: 'SEARCH_START',
      query
    });
    searchClient.search().then(function(results) {
      AppDispatcher.dispatch({
        actionType: 'SEARCH_SUCCESS',
        payload: results
      });
    });
  }
}

export default SearchAction;



// search-action-test.js
import SearchAction from './search-action';

describe('search', function() {
  beforeEach(function() {
    this.success = new Promise(function(resolve) {
      resolve('results');
    });
    // Grab the module-level searchClient with Rewire so we can stub it:
    this.searchClient = SearchAction.__get__('searchClient');
    this.searchStub = this.sinon.stub(this.searchClient, 'search');
  });

  it('dispatches a SEARCH_SUCCESS event', function(done) {
    this.appDispatcher = SearchAction.__get__('AppDispatcher');
    this.dispatchStub = this.sinon.stub(this.appDispatcher, 'dispatch');
    this.searchStub.returns(this.success);

    SearchAction.search('my_search');
    setTimeout(() => {
      sinon.assert.calledWith(this.dispatchStub, {
        actionType: 'SEARCH_SUCCESS',
        payload: 'results'
      });
      done();
    }, 0);
  });
});

Stores

In my own Flux projects, I have tried to keep the external API of stores as “dumb” as possible. They are meant to be simple repositories for business objects that expose an interface for other objects to subscribe to change events. I typically define methods named emitChange, addChangeListener, and removeChangeListener for each store.

Despite their relatively simple API, it is vitally important to test your stores. They’re usually the place where the business logic lives. Plus, they’re responsible for loading data from the server into the client-side app. For these reasons, we want to make sure they work properly. Here are a couple of tricks that can be helpful.

Using Internals

Given that stores are only supposed to accept data through the callback they register with the dispatcher, it can be tricky to send mocked data into them while testing. Facebook has one suggested way of doing it with Jest, or you can try this approach with Mocha or Jasmine. Alternatively, another nice way to hide the implementation a store uses to fetch its data is to wrap the fetch implementation in an internals object and test that instead. Here’s what it looks like:

import _ from 'lodash';
import {EventEmitter} from 'events';
import Widgets from '../collections/widgets';
import AppDispatcher from '../dispatchers/app-dispatcher';

let WidgetStore = _.extend({}, EventEmitter.prototype, {
  emitChange() {
    this.emit('change');
  },

  addChangeListener(callback) {
    this.on('change', callback);
  },

  removeChangeListener(callback) {
    this.removeListener('change', callback);
  },

  getAll() {
    return this.widgets.toJSON();
  }
});

WidgetStore.internals = {
  init() {
    // Refer to WidgetStore explicitly: inside internals, `this` is the
    // internals object, which has no emitChange method.
    WidgetStore.widgets = new Widgets();
    return WidgetStore.widgets.fetch().then(() => {
      WidgetStore.emitChange();
    });
  }
};

AppDispatcher.register((action) => {
  switch(action.actionType) {
    case 'INIT_WIDGETS':
      WidgetStore.internals.init();
      break;
    default:
      break;
  }
});

export default WidgetStore;

When it comes to testing this internals object, we can test that the internals methods are behaving as expected. For example:

describe('internals', function() {
  beforeEach(function() {
    // Stub the collection's fetch so no network request is made:
    this.sinon.stub(Widgets.prototype, 'fetch').returns(
      new Promise(function(resolve) {
        resolve(['widget1', 'widget2']);
      })
    );
  });

  describe('init', function() {
    it('fetches the widgets and emits a change event', function(done) {
      this.changeSpy = this.sinon.spy(WidgetStore, 'emitChange');
      WidgetStore.internals.init();
      setTimeout(() => {
        sinon.assert.calledOnce(this.changeSpy);
        done();
      }, 0);
    });
  });
});

Using Dependency Injection

If you write your stores in an object-oriented way, you can pass a reference to the dispatcher directly into them. This makes it easier to test that different dispatching events trigger the correct callbacks to produce the behavior that is desired. Here’s an example (thanks to Jack Hsu for the inspiration for this tip).

class WidgetStore extends Store {
  constructor(options) {
    super(options);
    this.widgets = new Backbone.Collection();
    this.dispatcher = options.dispatcher;
    // Bind the callback so `this` refers to the store when it fires:
    this.dispatcher.register(this.onWidgetAdded.bind(this));
  }

  onWidgetAdded(action) {
    this.widgets.add(action.payload);
    this.emit('change');
  }
}

And now testing the store is simple. We can just inject a dispatcher and use it to trigger the events we want to test.

describe('WidgetStore', function() {
  beforeEach(function() {
    this.dispatcher = new Dispatcher();
    this.widgetStore = new WidgetStore({ dispatcher: this.dispatcher });
  });

  it('adds a widget on WIDGET_ADDED', function() {
    let widget = new Backbone.Model();
    this.dispatcher.dispatch({
      actionType: 'WIDGET_ADDED',
      payload: widget
    });
    expect(this.widgetStore.widgets.toJSON()).to.have.length(1);
  });
});

View Components

Finally we come to the view layer. If you’ve played with React, writing these view components should come naturally.

When it comes to testing, there are a lot of things you can safely skip, since testing them would just be verifying that React works as expected. For example, checking that onClick handlers fire is pointless, since we know that React will call them. On the other hand, it can be useful to verify that the behavior we want them to cause is actually carried out.

Wrap Components With StubRouterContext

It’s usually a good idea to wrap your components in a stubbed out context, to make it easier to force them to behave the way you want within your tests. If you don’t, it can be hard to get them to render and behave as expected.

To get this to happen, I recommend using the stub-router-context module from the react-router project. It’s useful for wrapping the context of all kinds of components aside from the React Router. Although I tend to stick to the name “Stub Router Context”, it would perhaps be more accurate to just call it “stub context”, since you can use it to stub out any context.

I also like to add a ref to the stub in the component that’s returned in render, to make it easier to get hold of the component being wrapped by the component returned by stub-router-context.

render: function() {
  return <Component ref='stub' {...props} />;
}

Here’s how it looks when you include it in your test. Note how the ref I included makes it easy to grab the child and call setState on it.

import React from 'react';
import stubRouterContext from '../lib/stub-router-context';
import Search from './search';
let TestUtils = React.addons.TestUtils;

beforeEach(function () {
  let Component = stubRouterContext(Search);
  this.component = TestUtils.renderIntoDocument(<Component/>);
});

it('does the thing when the state is such', function() {
  this.component.refs.stub.setState({
    isLoading: false
  });
});

Use the React TestUtils

When it comes to rendering your component into a test DOM, checking whether classes are being dynamically added or removed, or whether input values are changing in response to user interactions, the React TestUtils can’t be beat. Get to know the TestUtils API, and use it to test your view components. It makes things much less painful.

import React from 'react';
import WidgetRepeater from './widget-repeater';
import stubRouterContext from '../lib/stub-router-context';
let TestUtils = React.addons.TestUtils;

describe('WidgetRepeater', function() {
  beforeEach(function() {
    let Component = stubRouterContext(WidgetRepeater);
    // Add 10 widgets to the repeater
    this.component = TestUtils.renderIntoDocument(<Component/>);
  });

  it('shows the widgets', function() {
    let widgets = TestUtils.scryRenderedDOMComponentsWithClass(this.component, 'widget')
    expect(widgets).to.have.length(10);
  });

  it('sets the first widget active', function() {
    let activeWidgets = TestUtils.scryRenderedDOMComponentsWithClass(this.component, 'active')
    expect(activeWidgets).to.have.length(1);
  });

  it('changes widgets to active on click', function() {
    let widgets = TestUtils.scryRenderedDOMComponentsWithClass(this.component, 'widget')
    let secondWidget = React.findDOMNode(widgets[1]);
    TestUtils.Simulate.click(secondWidget);
    let activeWidgets = TestUtils.scryRenderedDOMComponentsWithClass(this.component, 'active')
    expect(activeWidgets).to.have.length(2);
  });
});

Conclusion

Committing to a test-driven development approach in client-side JavaScript applications can sometimes be a hard sell. Aside from problems with testing DOM manipulation and asynchronous code, it can also be hard to test patterns that are new to the team, like Flux. But with the right tools, it can become second nature to test some of these things. Once your team has the confidence that they can effectively test these components, it’s a lot easier to approach all feature development with a TDD mindset.

In this post, we explored how to better test Flux applications. We took a look at some of the JS testing tools that can be helpful to get setup in your build process. We talked about some testing tips that are useful across all of the different Flux objects. Then we drilled down into testing tips specific to Actions, Stores, and View Components.

I hope that some of these tips come in useful for your team as you build your cutting-edge web application. Best of luck!

How I Deployed My First App to Deis

This post originally appeared on The Deis Blog

Introduction

Have you ever felt the pain that comes when your app runs fine on development, but breaks terribly in production? Maybe your CI build has been red for days, but you haven’t had time to figure out how the CI server is misconfigured?

With containers, you can easily rid yourself of such dependency woes. If the app runs in a container on one machine, it will most likely run in the same container on another.

Once you’ve bought into a container-based development workflow, the question soon arises: how can I get my production server to run my application in a container without the difficulty of having to provision a bare server with all of the other services, writing deploy tasks, and handling scaling issues on my own? In short, can I have a managed production environment that also supports containers?

The answer is yes. Using Deis, an open source Platform as a Service, you can host and manage your Docker-based application using your own Amazon Web Services (AWS) servers, without the hassle of configuring a bare Linux server.

I recently deployed a simple Rails app to Deis, and took notes along the way. In this post, I’ll share the steps I took to set up a Deis Pro account and deploy a new application.

Setting Up AWS

Deis applications run on a cluster of servers tied to your AWS account. This allows you to control your settings with Amazon, and takes the middle man out of the billing process for server resources. Since I didn’t have an AWS account, I had to create one. To do this, I followed this guide from Engine Yard. Here’s a quick run down of what I did.

First, I visited aws.amazon.com and clicked on “Create a Free Account”. I walked through the signup process, entering my credentials and credit card information.

Once my account was all set up and I was logged in, I visited the “Identity & Access Management” page, then clicked “Groups” on the sidebar. I then created a group called “DeisAdminGroup”.

Next, I created a user called “deis_user” under the “Users” tab. When I created it, Amazon asked me to download the user credentials in a CSV file, which I saved. Finally, I went back to “Groups”, selected my group, clicked “Add Users to Group”, and added “deis_user” to the group.

With these steps completed, I had my AWS user and security group set up. Next I turned my attention to setting up a Deis Pro account.

Signing Up For Deis PRO

Before I could deploy my app to Deis Pro, I had to sign up for an account.

I filled out the form, and a few minutes later I got a verification email. I clicked the link in the email, then entered my billing information. Deis then asked for my AWS credentials. I entered the information saved in the CSV I downloaded when I was setting up AWS, and I was ready to go.

Creating the Cluster

Now that I was signed in to my Deis PRO account, I began to set up some server resources that I could deploy my application to. Here are the steps I followed to get them ready.

First, I visited my dashboard and clicked “Create Environment”. Since I didn’t need any special performance resources or customizations, I just used the default options for server size and memory allocations. I created an administrator username and password.

Next, I created three servers on AWS using the Deis PRO UI as part of the “create environment” process. For me, this process failed the first time, but when I tried again, the servers eventually came online. The last thing before leaving the Deis PRO site was to make note of my Deis endpoint name (it looks like deis.1ab2345.my.ey.io) so that I could use it to configure my app from the command line.

Installing Deis Locally

After setting up my AWS and Deis accounts, I turned my attention to getting the Deis command line client installed on my development machine. It was as simple as running this curl command:

curl -sSL http://deis.io/deis-cli/install.sh | sh -s 1.7.3

From there, I needed to put the deis executable into my $PATH. Although I could have used a symlink, I opted to move it directly to /usr/local/bin with mv deis /usr/local/bin/deis instead. After running these two commands, I was able to run deis -h.

As a last step before setting up an app, I also needed to login to the server cluster I’d created and add the SSH key from my local machine to it. I logged into my Deis endpoint using deis login deis.1ab2345.my.ey.io. The endpoint name was the one I got from the Deis Pro website when I was setting up my resources. I found the username and password for the cluster in the Deis PRO UI. After I logged in, I added my local SSH key with deis keys:add.

With the Deis CLI set up and connected to my server cluster, I was ready to begin deploying my application.

Deploying The App

When I was finishing bootcamp, I wrote an application called Twelve Stepper to help 12 Step Program participants interact with friends, find meetings, and work with the steps. Since it’s a fairly simple application, I thought it would be a good one to use for my first Deis deploy.

I cloned it down from GitHub, bundled my gems and ran migrations as I would for any other Rails project. I also made a Dockerfile and got the app running in a container locally. Then I set up a Deis project by running deis create 12stepper from within the project folder.

After that, I tried to deploy using git push deis master, but I ran into an error: tar: invalid tar magic. After doing a little research, I found that I had forgotten to include a Procfile, so I created one. After adding it to git and pushing again, my app successfully deployed.
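
For reference, a Procfile declares the process types your platform should run. The exact command depends on your app, but a minimal one for a Rails app looks something like this:

```
web: bundle exec rails server -b 0.0.0.0 -p $PORT
```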

I ran deis open, and there was my app, up and running on the web!

Wrapping Up

Getting Twelve Stepper set up on a Deis PRO cluster was pretty easy, all told. Most of my time was spent setting up accounts on AWS and Deis PRO and installing the Deis CLI. But these were one-time tasks. From here on out, deploying apps to Deis will be as easy as creating a new server cluster from my Deis PRO dashboard, then running deis create <appname> and git push deis master from my project folder.

I was surprised how easy it was to get my app up and running on a managed production environment using a container with Deis. If you’re using a container-based development environment, I would definitely recommend checking Deis out as a hosting and deployment solution. Good luck!

Goodbye MVP, Hello V1

This post originally appeared on Engine Yard

Introduction

You’ve done it! It all started with an idea and two people in your garage. After weeks of coding and tweaking, you’ve proven that your business idea is the greatest thing since sliced bread. You used the Build, Measure, Learn cycle to find out what your customers want, and you’re pretty sure you have a product market fit.

Now what?

It’s time to build your V1. In this post, we’ll look at how to take the most important lessons from the information you’ve gleaned during the MVP stage of your product’s lifecycle and apply them to building the first full release of your product.

V0: Minimum Viable Product

If you’re interested in building the V1 of your project, you’ve probably already spent a good amount of time iterating on features, measuring customer feedback, and learning more about what the market wants in your problem space. If you haven’t started building an MVP, or are still working on it, check out my thoughts on MVP and come back to this article when you think you’re sure that your product is ready to be scaled.

If you have been using an MVP process, and you feel that you’ve validated your assumptions, that people would like to use and pay for your product, you’re probably ready to start thinking about building your V1. It’s time to take all of the metrics and feedback you’ve been gathering and put it together to make the first complete version of your product.

Before we continue, make sure you’ve answered these questions. What are your target demographic and platform? Which features make up the core of your offering? Which have resulted in the most clicks and positive feedback?

With all of these things in hand, let’s turn our attention to what it takes to release a V1.

V1: The Final Frontier

If you think about it, V1 is kind of a funny concept. If you’ve been releasing MVP features for a while, you probably already have a functioning product. It might even be complete, in the sense that customers use it and pay for it, and aren’t missing anything that they need to relieve their pain points.

Yet it’s definitely possible to draw a line in the sand between beta and V1. According to Wikipedia’s definition of Software Versioning, the free-software community tends to define 1.0 as a “major milestone, indicating that the software is ‘complete’, that it has all major features, and is considered reliable enough for general release”.

Of course, we know better. After releasing a piece of software, there will always be changes big and small: security concerns corrected, features added, previously undiscovered instabilities fixed. To plan all of our features, set a date, and keep everything shrouded in secrecy until then is a hopelessly waterfall-esque approach that leads to a far more costly product.

In many ways, the distinction between beta and V1 is largely a marketing concern. As you iterate on an MVP, those brave souls known as early adopters are by your side, helping light the way. But once you tell the world that what you’re offering is now a “real” product, you begin to see what the less brave among us are willing to buy. You’ve potentially expanded your market greatly, by giving birth to “The New (your product name here)”.

Do you think your product has what it takes to convert this new audience to your cause? As my friend and colleague Justin Jackson puts it, “the real question is: can you profitably acquire new customers every month?” MVP should show that you have some initial traction. V1 should show that you actually have a business.

Let’s take a look at some strategies you can use to make your transition from MVP to V1 a successful one.

Build A Roadmap

Before you start writing stories for the big release, pause for a second and question your assumptions. You might be excited, because you think you’ve got a handle on what your customers want, and you’re ready to give it to them as soon as you can!

Stop. Take a breath. Think. How does what you’re about to build fit in with the plans you have for your business?

If you don’t have a roadmap, you should make one as soon as possible. It doesn’t have to be an all-day exercise, but you should be able to answer some basic questions.

Do you know which features you’re going to build? What about funding? If you don’t have investors, does what you’re about to build have the ability to attract them? If you do, are there certain things they’re hoping you will build? Do you have a plan for scaling your development team? What about marketing? If all else fails, do you know how to pivot?

Set A Deadline

It’s tempting to think of V1 as the chance to sit back, take your time, and do the waterfall thing. You’ve moved fast and broken things to prove that you have a market fit, and now you can spend as long as you need to build the first stable version. That’s a mistake. Getting to V1 is just as high-pressure as finding an MVP that proves its value.

You might have an initial MVP that’s proven it has traction, put your head down for eight months to build V1, and by the time you release it, find out that you’ve made a big mistake. Maybe the features you focused on building out turned out to be ones that customers wanted – it’s just that they weren’t willing to pay for them. Maybe another company beat you to market. Maybe you overlooked a key integration with another service that would have allowed you to take off. These are the kind of mistakes that you can’t afford to make as a business.

Taking too long to get to V1 is just as bad as taking too long to build MVP. Make sure that you know where you’re headed and when you’ll get there.

Identify Your Target Market

If you’re about to outgrow your MVP, you probably think you know your target market. But take a closer look. Your excitement could be hiding some assumptions. Can you get more granular about the different subdivisions that make up your user base, and use that information to find out more about who you’re targeting?

Have you figured out which customers you should be listening to? Hint: it’s the ones that would actually pay for your service. When you were building an MVP, you proved that you could find paying customers. In V1, you want to prove that you can consistently pick up new customers that will pay you more than it costs you to provide your service to them.

Justin Jackson weighs in again: “To be successful, a product needs customers that are easy to reach, cheap to convert, and undemanding to support”.

To dig deeper into these questions, I highly recommend going through a User Experience (UX) discovery process. You should create personas and think about the emotional response that you are hoping to evoke. UX discovery can help answer important questions, like: “should we move to mobile?” and “what are our key features?”

Identify Your Key Features And Cut The Rest

A huge part of the MVP process is measuring customer engagement with the different features you’re testing. Pull out all of your customer interviews and surveys. Read through the reports in your analytics software. Make a list of all the features that you’ve had success with, and rank them. Which ones were the most popular?

Your V1 should focus on your core features only. Cut the rest. Remember, for each feature you add, you have to think not only about the feature, but also tests, regressions, and bugs. Plus, you’ll be dealing with scaling your business. Employees, culture, and performance are going to be concerns in a way that they weren’t during the MVP process. Keep it simple.

Build A Strong Cultural Foundation

The business you build when you’re going to V1 sets the stage for everything that will come after. Now is the time to think about what kind of company you want to create. You will want to make decisions that build a strong foundation for what’s to come.

You will be hiring new developers. The culture you inspire and the process you set up for five developers are the blueprints that will be followed when you get to fifty. The process you use to acquire new customers had better be scalable, or you could find your pipeline going dry a few months down the road.

Take the time to think through what kind of place you would like to work in, and try to manifest that reality as you build your V1.

Build A Strong Technical Foundation

If you’ve done a good job of building MVP features at a minimal cost, you probably already have some technical debt that needs to be paid down. This is the time to switch out stopgap measures for a scalable tech stack.

Maybe you proved your concept with a simple WordPress site, an email list, or one of the other simple MVP approaches I discussed in my MVP article. These are all great ways to get started, but they’re not very customizable. If you’re stuck with the features that you get with a certain WordPress plugin, you might find yourself unable to extend or change your features in a timely and stable manner down the road. Plus, moving away from plugins often cuts out the middleman and allows you to recover some capital that you were spending on things you can provide in-house.

Take the time to consider what kind of tech stack will meet your needs: understand the tradeoffs between the different technologies in the market, and find the ones that best fit your business. It can be useful to do a discovery sprint to find out what options are available to you.

Find ways to ensure that your code remains working and stable. I highly recommend following a Test-Driven Development process, writing functional tests, signing up for Code Climate, setting up a continuous integration service like Wercker, TravisCI, or CircleCI, and deploying to a staging server before releasing things to production. These safeguards will protect you and your team from accidentally breaking your application, and be the canary in the coal mine to let you know when things are about to go wrong.

Finally, make sure you’re thinking about scale. If you’re hoping to get tons of new users you’ll want to have infrastructure in place that will allow you to serve a heavy load of users. If you’re using Engine Yard, it’s easy to upgrade your servers from your dashboard with a few clicks, plus they’re always monitoring your app for emerging issues, and their support team is immediately available to support you as you respond to a high volume of requests.

As you start building your first release, make sure that you’re investing in technologies that will support your business as it grows.

Conclusion

The move from MVP to V1 is a moment of transition for your business. It’s vital to consider the norms that you’re about to set up, which will have far-reaching effects into the future. You have to be smart. Know your customers, know your features, and don’t delay. But also build a strong foundation, culturally and technically. Finally, set a deadline and stick to it. This lets you build the buzz, keep your team focused, and provides a great excuse to throw a big party. Just don’t forget to invite me!

Until then, best of luck with your business.

A Smooth Transition to ECMAScript 6: Using New Features

This post originally appeared on Engine Yard

Introduction

In part one of this miniseries, we talked about the timeline for ES6 rollout, feature compatibility in existing environments and transpilers, and how to get ES6 set up in your build process.

Today, we’ll continue the conversation, looking at some of the easiest places to start using ES6 in a typical front-end Backbone + React project. Even if that’s not your stack, read on! There’s something for everyone here.

If you want to try out the examples, you can use a sandboxed ES6 environment at ES6 Fiddle.

New Features

Classes, Shorthand Methods, and Shorthand Properties

A lot of client-side JS code is object-oriented. If you’re using Backbone, just about every Model, Collection, View, or Router you ever write will be a subclass of a core library Class. With ES6, extending these objects is a breeze. We can just call class MySubclass extends MyClass and we get object inheritance. We get access to a constructor method, and we can call super.methodName() from within any method to apply the parent class’s method of the same name. This prevents us from having to write things like:

Backbone.Collection.prototype.initialize.apply(this, arguments)

We also get some handy shorthands for defining methods and properties. Note the pattern I’m using to call initialize instead of initialize: function(args) {}:

class UserView extends Backbone.View {
  initialize(options) {
    super.initialize(options);
  }
}

We can also define properties using a nice new shorthand. The code below sets an app property on the Injector that points to the instance of App we create on the second line. In other words, it’s the same as writing app: app.

let App = function() {}; // we'll look at 'let' in just a second.
let app = new App();

let Injector = {
  app
};

Let

The new let keyword is probably the easiest win that you can possibly get in using ES6. If you do nothing else, just start replacing var with let everywhere. What’s the difference, you ask? Well, var is scoped to the closest enclosing function, while let is scoped to the closest enclosing block.

In essence, variables defined with let aren’t visible outside of if blocks and for loops, so there’s less likelihood for a naming collision. There are other benefits. See this Stack Overflow answer for more details.
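
To make the scoping difference concrete, here’s a minimal sketch: the var declared inside the block is still visible after it, while the let is not.

```javascript
if (true) {
  var leaky = 1;     // scoped to the enclosing function (or global scope)
  let contained = 2; // scoped to this block only
}

console.log(typeof leaky);     // 'number'
console.log(typeof contained); // 'undefined'
```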

You can use it pretty much everywhere, but here’s a good example of somewhere that it actually makes a difference in preventing a naming collision. The userName inside of the map callback doesn’t clash with the current user’s userName defined just above it.

let UserList = React.createClass({
  ...
  render() {
    let userName = this.props.current;
    // See the Fat Arrow section below
    let userComponents = this.props.users.map(user => {
      let userName = user.get('userName');
      return <UserComponent displayName={userName} />;
    });
    return (
      <div className="user-list">
        <h1>Welcome back, {userName}!</h1>
        {userComponents}
      </div>
    );
  }
});

Const

As you might guess from the name, const defines a read-only (constant) variable. It should be pretty easy to guess where to use this. For example:

const DEFAULT_MAP_CENTER = [48.1667, -100.1667];

class MapView extends Backbone.View {
  centerMap() {
    map.panTo(DEFAULT_MAP_CENTER);
  }
}

The Fat Arrow

You’ve probably already heard about the fat arrow, or used it before if you’ve written any CoffeeScript. The fat arrow, =>, is a new way to define a function. It preserves the value of this from the surrounding context, so you don’t have to use workarounds like var self = this; or bind. It comes in really handy when dealing with nested functions. Plus it looks really cool.

let Toggle = React.createClass({
  componentDidMount() {
    // iOS
    setTimeout(() => {
      var $el = $('#' + this.props.id + '_label');
      $el.on('touchstart', e => {
        let $checkbox = $el.find('input[type="checkbox"]');
        $checkbox.prop("checked", !$checkbox.prop("checked"));
      });
    }, 0);
  }
});

Template Strings

Do you ever get sick of doing string concatenation in JavaScript? I sure do! Well, good news! We can finally do string interpolation. This will come in very handy all over the place. I’m especially excited about using it in React render calls like this:

let ProductList = React.createClass({
  ...
  render() {
    let links = this.props.products.map(product => {
      return (
        <li>
          <a href={`/products/${product.id}`}>{product.get('name')}</a>
        </li>
      );
    });

    return(
      <div className="product-list">
        <ul>
          {links}
        </ul>
      </div>
    );
  }
});

String Sugar

It’s always been kind of a pain to check for substrings in JavaScript. if (myString.indexOf(mySubstring) !== -1)? Give me a break! ES6 finally gives us some sugar to make this a little easier. We can call startsWith, endsWith, includes, and repeat.

// Clean up all the AngularJS elements

$('body *').toArray().forEach(node => {
  let $node = $(node);
  if (($node.attr('class') || '').startsWith('ng')) {
    $node.remove();
  }
});

// Pluralize

function Pluralize(word) {
  return word.endsWith('s') ? word : `${word}s`;
}

// Check for spam

let spamMessages = [];

$.get('/messages', function(messages) {
  spamMessages = messages.filter(message => {
    return message.toLowerCase().includes('sweepstakes');
  });
});

// Sing the theme song

let sound = 'na';
sound.repeat(10); // 'nananananananananana'

Argument Defaults

Languages like Ruby and Python have long allowed you to define argument defaults in your method and function signatures. With the addition of this feature to ES6, writing Backbone views requires one less line of boilerplate.

By setting options to an argument default, we don’t have to worry about cases where nothing is passed in. No more options = options || {}; statements!

class BaseView extends Backbone.View {
  initialize(options={}) {
    this.options = options;
  }
}

Spread and Rest

Sometimes function calls that take multiple arguments can get really messy to deal with. Like when you’re calling them from a bind that’s being triggered by an event listener.

For example, check out this event listener from a Backbone view in a recent project I was working on. Because of the method signature of _resizeProductBox, I have to pass all those null arguments into bind and it gets kind of ugly.

class ProductView extends BaseView {
  initialize(options) {
    this.listenTo(options.breakpointEvents, 'all',
      _.bind(this._resizeProductBox, this, null, null, true));
  }

  _resizeProductBox(height, width, shouldRefresh) {
    ...
  }
}

In ES6, we can clean this up a bit with spread. We’ll just prepend an array of default arguments with a ... to send them through as arguments to the method call.

Here’s how you’d do it using spread:

const BREAKPOINT_RESIZE_ARGUMENTS = [null, null, true];

class ProductView extends BaseView {
  initialize(options) {
    this.listenTo(options.breakpointEvents, 'all',
      _.bind(this._resizeProductBox, this, ...BREAKPOINT_RESIZE_ARGUMENTS));
  }

  _resizeProductBox(height, width, shouldRefresh) {
    ...
  }
}

On the other side of the coin is rest, which lets us accept any number of arguments in a method signature instead of at invocation time, as you can do with the splat in Ruby. For example:

function cleanupViews(...views) {
  views.forEach(function(view) {
    view.remove();
  });
}

Array Destructuring

Sometimes I find myself having to access all of the elements of an array-like object with square brackets. It’s kind of a bummer. Luckily, ES6 lets me use array destructuring instead. It makes it easy to do things like splitting latitude and longitude from an array into two variables, as I do here. (Note that I also could have used spread.)

let markers = [];

listings.forEach(function (listing, index) {
  let [lat, lng] = listing.latlng; // looks like: [39.719121, -105.191969]
  let listingMarker = new google.maps.Marker({
    position: new google.maps.LatLng(lat, lng)
  });
  markers.push(listingMarker);
});

Promises

It seems like every project I work on these days uses promises. Native promises have landed in ES6, so we can all rely on the same API from here on out. Both promise instances and static methods like Promise.all are provided. Here’s an example of a userService from an Angular app that returns a promise from $http if the user is online, and a native promise otherwise.

angular.module('myApp').factory('userService', function($http, offlineStorage) {
  return {
    updateSettings: function(user) {
      var promise;

      if (offlineStorage.isOffline()) {
        promise = new Promise(function (resolve, reject) {
          resolve(user.toJSON());
        });
      } else {
        promise = $http.put('/api/users/' + user.id, user.settings)
        .then(function(result) {
          return result.data;
        });
      }

      return promise.then(data => {
        return new User(data);
      });
    }
  };
});
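
The instance API covers most cases, but static helpers like Promise.all are handy when you need several async results at once. Here’s a minimal sketch (the loadDashboard function and its stubbed fetchers are hypothetical):

```javascript
// Kick off two fetches in parallel; the combined promise resolves once
// both are done, and rejects if either input rejects.
function loadDashboard(fetchUser, fetchWidgets) {
  return Promise.all([fetchUser(), fetchWidgets()])
    .then(([user, widgets]) => ({ user, widgets }));
}

// Usage with stubbed fetchers:
loadDashboard(
  () => Promise.resolve({ name: 'Ada' }),
  () => Promise.resolve(['widget1', 'widget2'])
).then(dashboard => console.log(dashboard.widgets.length)); // logs 2
```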

A Note On Modules

You may have noticed that I didn’t cover modules, importing, or exporting in this miniseries. Although modules are one of the higher profile features in ES6, and they’re easy to get started with, they still have a lot of edge cases that need to be worked out as ES6 is rolled out.

Specifically, ES6 modules support both a default export and named exports. CommonJS and AMD modules only support a single export, and traditionally simulate named exports by exporting an object with the named values as properties. Different ES6 module libraries reconcile these differences in different ways, so if a module might be consumed from CommonJS, it’s safest to stick to either default exports or named exports, not both. How these differences will ultimately be reconciled remains to be seen.
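
To make the distinction concrete, here’s a hypothetical logger module that exposes both kinds of exports, and a consumer that imports each (shown as a two-file syntax sketch):

```javascript
// logger.js
export default function log(message) {
  console.log(message);
}

export const LEVELS = ['debug', 'info', 'warn', 'error'];

// consumer.js
import log, { LEVELS } from './logger';
```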

That said, the easiest way to start requiring modules is to switch from:

var _ = require('lodash');

To the new syntax:

import _ from 'lodash';

Similarly, you can import relative files using:

import Router from '../router';

When exporting, you can switch from:

module.exports = App;

To this:

export default App;

There’s a bit more to using modules (such as named exports), which we won’t cover today. If you want to learn more, take a look at this overview.

Conclusion

In this miniseries, we took a look at some real-world examples of how you would use ES6 in a client-side JavaScript app. We took a quick look at setting up an ES6 transpile step in your build process, and examined many of the easy-to-use features you can start using right away.

Before you know it, ES6 will be the standard language in use across the web and on servers and personal computers everywhere. Hopefully, this walkthrough will help you get started with a minimum of fuss.

If you’re feeling excited about ES6, and you want to learn more, I would suggest reading through this overview. Or if you’re feeling really enthusiastic, try this book.

Until next time, happy coding!