Halving Response Times: Lessons Learned

Response times under 100 ms can lighten anyone’s heart.

It’s even better when you reach that milestone as a side effect of some rounds of refactoring — even when performance isn’t an explicit priority, good engineering certainly shows measurable results.

I’ll walk you through what I learned while bringing a Rails API from a 270 ms average down to a nice 85 ms 🚗💨💨

Note: I’ll assume you’re already treating your DB well, okay? N+1 queries or missing indexes can be a major bottleneck. We’re also assuming your instances are properly sized for the throughput your application has to handle.

Slowest transactions should be your priority

Some widely used metrics, such as Apdex, will guide you to take care of your slowest transactions first.

T is your ideal response time in seconds; any request finishing below T gives your application full points. T = 0.2 s is commonly adopted.

Between T and 4T is the tolerating range. Your application is penalized 50% for each request in that range.

Above 4T, we enter the frustrated range. Your application receives a zero score for those requests.

I personally consider this metric way better than just looking at the average.
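The score itself is a simple ratio: satisfied requests count fully, tolerating requests count half, and frustrated requests count zero. A minimal sketch (the function name and sample data are mine, just for illustration):

```ruby
# Apdex: (satisfied + tolerating / 2) / total requests.
def apdex(response_times, t)
  satisfied  = response_times.count { |rt| rt <= t }
  tolerating = response_times.count { |rt| rt > t && rt <= 4 * t }
  (satisfied + tolerating / 2.0) / response_times.size
end

# With T = 0.2 s: three satisfied, one tolerating, one frustrated.
apdex([0.085, 0.1, 0.15, 0.5, 1.2], 0.2)
# => 0.7
```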

Slower transactions will block your web workers, potentially queueing up requests and multiplying their impact on overall response times, especially during throughput peaks.

In addition to that, if you use Heroku, be extra-cautious on this topic, as this article states:

Even worse, however, is that due to the random-routing algorithm Heroku uses for load balancing, a single slow dyno brings the entire app to its knees. It’s well-known that intermixing fast and slow response times in a single Heroku app wreaks havoc on overall app performance.

Caching is hard

Just look around your code and you’ll see many opportunities to apply fragment caching.

Wait, there are other types of caching? Yes! But fragment caching is the most versatile. Plus, it naturally scales up to Russian-doll caching. Take the word of the experts:

As a tip to newcomers to caching, my advice is to ignore action caching and page caching. The situations where these two techniques can be used is so narrow that these features were removed from Rails as of 4.0. I recommend instead getting comfortable with fragment caching.

Well, make sure you know how to properly implement caching, otherwise you may blow things up!

That’s why:

When a key does not have enough information, you may find yourself wondering how you’re going to invalidate stale cache keys, and that’s where things start to become dangerous.

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

Cache invalidation is complex and error-prone. Let’s see how we avoid having to use it.

A good key for any given data structure will contain at least these values:

  1. A kind;

  2. An identifier;

  3. A timestamp;

  4. A version.
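Putting the four together, a key could look like this. A hypothetical sketch: the helper name and format are illustrative, not Rails’ actual #cache_key implementation.

```ruby
require 'time'

# Illustrative only: composes a cache key out of the four components:
# version, kind, identifier, timestamp.
def cache_key_for(kind, record, version: 'v1')
  [version, kind, record.id, record.updated_at.strftime('%Y%m%d%H%M%S')].join('/')
end

Entity = Struct.new(:id, :updated_at)
entity = Entity.new(170519, Time.utc(2017, 1, 27, 6, 27, 28))

cache_key_for('entities', entity)
# => "v1/entities/170519/20170127062728"
```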

The first two are simple to understand — among all your keys, you have to know what you’re looking for.

The third is a timestamp. It’ll prevent your app from serving stale data.

For Rails devs: you can use the #cache_key method to see the first three in practice:

entity = Entity.last
=> #<Entity:0x007f8f40739ca0
 id: 170519,
 created_at: Wed, 11 Jan 2017 19:00:24 UTC +00:00,
 updated_at: Fri, 27 Jan 2017 06:27:28 UTC +00:00>

entity.cache_key
=> "entities/170519-20170127062728637756000"
#  "#{collection_name}/#{id}-#{updated_at}"

What if I change my data? It’s simple: a brand new key will be used.

entity.touch
=> true

entity.cache_key
=> "entities/170519-20170127070432006502000"
#        stale key was  ...62728637756000

Your cache system will eventually evict the stale keys, as they aren’t being used anymore. It’s that simple!

And last, but not least, we also need a version.

Well, some developers may change details of a given data structure. During a deploy, the code may change and may not play well with the old version of the same data structure (which will still be returned by the cache).

# very BAD
  ## deploy 1, by Mariah
  Rails.cache.fetch ['entities', id, timestamp] do
    # ... builds and caches the original structure
  end

  ## deploy 2, John just introduced a cache bug!
  Rails.cache.fetch ['entities', id, timestamp] do
    # ... builds the new structure, but stale entries
    # cached under the same key will still be returned
  end

This can cause a bug in production, and, take it from me, eventually it WILL cause a bug if you miss this simple step:

# good
  ## deploy 1, by Mariah
  Rails.cache.fetch ['v1', 'entities', id, timestamp] do
    # ... builds and caches the original structure
  end

  ## deploy 2, John incremented the version :)
  Rails.cache.fetch ['v2', 'entities', id, timestamp] do
    # ... v1 entries no longer match, so the new structure is built
  end

The v1 and v2 in the snippet above identify the version of the data structure. Once your code changes, old versions will no longer be retrieved.

However, a structural change may be subtle, yet problematic if it goes unnoticed by the developer. Use automated tests in your favor 🙂

More info in this article by DHH.

Algorithmic optimization

I consider myself an old-school programmer. I like to spend time reading about optimization problems (like this and this) that are nearly irrelevant in today’s industry.

But the fact is that algorithms and data structures that seem, at first glance, better suited for a problem often show the same efficiency in real-world situations, or are even outperformed by their simpler counterparts. If you don’t believe me (and you shouldn’t), watch this keynote by Bjarne Stroustrup on a well-known case:

So, I would only mind the complexity of my blocks of code if they seem extremely bad, or if they deal with an n that is too large. Otherwise, the rule of thumb here is: trust and use your native standard library as much as possible, especially if you are dealing with Ruby.

Ruby (and Rails) does not offer every algorithm and data structure you would find in a textbook. But trying to outperform the ones it does offer has not been an easy task for me.
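To make that concrete, here’s a quick sanity check you can run yourself. A sketch: the hand-rolled insertion sort below just stands in for any “textbook” reimplementation of something the stdlib already does.

```ruby
require 'benchmark'

# A naive hand-rolled insertion sort, written in pure Ruby.
def insertion_sort(arr)
  result = arr.dup
  (1...result.size).each do |i|
    j = i
    while j > 0 && result[j - 1] > result[j]
      result[j - 1], result[j] = result[j], result[j - 1]
      j -= 1
    end
  end
  result
end

data = Array.new(2_000) { rand(100_000) }

Benchmark.bm(12) do |x|
  x.report('hand-rolled:') { insertion_sort(data) }
  x.report('Array#sort:')  { data.sort }
end
# The C-implemented Array#sort typically wins by orders of magnitude.
```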

A much better way to say whether it is meaningful or not to optimize a piece of code, though, is by profiling an application.

A flame graph.

Let’s say you just came up with an improvement to an algorithm: what would be the impact on the whole stack, given that realistic requests are being made?

Or, the other way around: which code fragments have the biggest impact on your stack, and should therefore be the targets of optimization?

We need answers to both questions. If you’re a Ruby dev, playing around with rack-mini-profiler is definitely a good starting point. Install the gem and run your app (even locally) in production mode. Study your results and have fun exploring the flame graphs (like the one above 😛).
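A minimal setup looks something like this (the gem names are real; stackprof and flamegraph are the optional dependencies rack-mini-profiler uses to render flame graphs):

```ruby
# Gemfile
gem 'rack-mini-profiler'
gem 'stackprof'    # sampling profiler, needed for flame graphs
gem 'flamegraph'
```

With that in place, appending ?pp=flamegraph to a URL renders the flame graph for that request. Keep in mind that in production mode the profiler is disabled by default, so you’ll need to authorize requests explicitly (see the gem’s README).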

Functional style pays off

This topic is the most experimental one (for me) in this article.

An advantage of using Ruby is that your OO code can live alongside some well written functional-like code. Developers should not have problems composing functions, as operations like reduce and map are extensively available in the standard library.

When the business logic is more than trivial, as it was in my case, you might want to start playing with side-effect-free functions, composing them as needed, and using patterns like stateless service objects.

Why? Things get simpler when you do not use free variables. Otherwise, you need to ensure that all your variables look good at all times, for every method. Going functional means you only need to worry about your input parameters and what’s being returned (and nothing else), as pure functions are free of side effects.
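For illustration, a stateless service object might look like this (the domain and all the names are invented for the example):

```ruby
# A side-effect-free "service object": no instance state, no free
# variables. The output depends only on the input parameters.
module PriceCalculator
  def self.call(items, discount: 0.0)
    subtotal = items.sum { |item| item[:price] * item[:quantity] }
    (subtotal * (1.0 - discount)).round(2)
  end
end

PriceCalculator.call([{ price: 10.0, quantity: 2 }], discount: 0.1)
# => 18.0
```

Because the result is fully determined by the arguments, testing it is a matter of feeding inputs and asserting outputs; there is no setup or teardown of hidden state.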

But is functional code more performant? Well, not necessarily. But it will help you pinpoint the conditions under which your performance falls short, while improving predictability and testability by eliminating those free variables.

If you want to get more detailed arguments around this topic, check this article by Dave Copeland.

That’s it!

These topics served as my guideline for optimizing a Rails API. I hope they’ll be useful to you too!
