Introduction: Let’s Talk About Speed
If you’ve been building with Rails for a little while, you’ve probably hit that moment: You add a new feature (maybe it sends a welcome email, processes an image upload, or generates a PDF) and suddenly, that snappy page load you loved is gone. The user clicks a button and waits… and… waits.
Or even if you’ve never built with Rails and are just learning, you’ve probably encountered a website that is painfully slow when trying to perform a task, and you had to wait and stare at the screen.
That waiting game is a classic problem in web development. Performing slow tasks during a web request is a sure-fire way to create a frustrating user experience. For years, the solution involved setting up extra services, such as Redis and a job processor like Sidekiq to handle background work and caching. These tools are incredibly powerful and battle-tested, but they also mean more things to manage, configure, and pay for.
Rails 8 has a new way to approach this with its new philosophy: making it simpler to go from a new idea to a production-ready app without a bunch of extra dependencies. At the heart of this are new, built-in tools, the "Solid Trifecta" (Solid Queue, Solid Cache, and Solid Cable), that handle the heavy lifting using the one thing every Rails app already has: a database.
Now, the choice between using the new built-in tools or established solutions like Sidekiq and Redis will always depend on your specific scenario and scale. However, for the sake of simplicity, and because it’s targeted at new applications and is much easier to set up, we will focus on the new Solid stack, more specifically, Solid Queue and Solid Cache. It’s the perfect place to start.
So, let’s dive into how you can use these new tools to keep your app fast and your users happy.
Behind the Curtains
By default, Rails only has Solid configured for production; to enable these features in development, we have to make some small changes.
In your config/database.yml, add this under the development key:
development:
  primary:
    <<: *default
    database: storage/development.sqlite3
  cache:
    <<: *default
    database: solid/development_cache.sqlite3
    migrations_paths: db/cache_migrate
  queue:
    <<: *default
    database: solid/development_queue.sqlite3
    migrations_paths: db/queue_migrate
Now, in your config/cache.yml, tell your app to use the cache storage you just defined:
development:
  <<: *default
  database: cache # This is the key we just defined in config/database.yml
And in your config/environments/development.rb, tell your app to use Solid Cache and Solid Queue:
config.cache_store = :solid_cache_store
config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = { database: { writing: :queue } } # queue matches the key we just defined in config/database.yml
And finally, run bin/rails dev:cache to enable caching in development. If the new databases don’t exist yet, running bin/rails db:prepare will create them and load their schemas.
Why Do It Now When You Can Do It Later?
Some work just doesn’t belong in the request-response cycle. Think of it like ordering a coffee. You place your order, pay, and step aside. You don’t stand at the register, holding up the line, while the barista makes your drink. You get your confirmation (the response) and trust that your coffee (the task) will be ready in a moment.
Background jobs are how we "step aside" in our code.
Active Job
Think of Active Job as the universal remote for background work in Rails. It provides a standard way to create, enqueue, and run jobs. You write your jobs to the Active Job standard (by inheriting from ApplicationJob, which itself comes from ActiveJob::Base), and then you can plug in different backend "adapters" to actually execute them.
This is incredibly powerful. It means you can start with one backend job adapter and switch to another later with just a configuration change, without having to rewrite all your job calls. The perform_later method we use is part of the Active Job API.
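That "just a configuration change" really is a single line. For example, if you ever decided to move to Sidekiq later (assuming you had added and configured the sidekiq gem), the swap would look roughly like this:

# config/environments/production.rb
config.active_job.queue_adapter = :solid_queue # the Rails 8 default
# config.active_job.queue_adapter = :sidekiq   # the one-line switch, should you ever need it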
In Rails 8, the default tool for this is Solid Queue. It’s a background job processor that, true to the new philosophy, uses your existing SQL database to manage a to-do list of tasks. This means you don’t need to set up or maintain a separate Redis server just to handle jobs.
First, you define a "job." It’s just a simple Ruby class that knows how to do one specific thing. Let’s imagine a job that processes a report.
# app/jobs/report_generator_job.rb
class ReportGeneratorJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    user = User.find(user_id)
    # ... some slow logic to generate a report ...
    puts "Generated a report for #{user.email}!"
  end
end
Then, in your service (or anywhere that would fit), instead of running this slow logic directly, you just ask the job to do it later.
ReportGeneratorJob.perform_later(current_user.id)
And that’s it! Your app doesn’t need to wait before it can respond to the user, and Solid Queue will pick up the job and run it in the background. To get it all working in development, you just need to run bin/jobs start in your terminal.
Bonus: You can also set your jobs to execute after some time using set(wait: some_time):
ReportGeneratorJob.set(wait: 5.minutes).perform_later(current_user.id)
Or tell them to wait until some specific time with set(wait_until: some_date):
ReportGeneratorJob.set(wait_until: Time.now.tomorrow).perform_later(current_user.id)
A Couple of Golden Rules for Jobs
- Pass IDs, Not Objects: Notice how we passed current_user.id to the job, not the current_user object itself? This is crucial. The job might not run for a few seconds or even minutes. In that time, the user data in your database could change. By passing just the ID, the job can fetch the freshest version of the user from the database right when it’s about to do its work.
- Make Them Idempotent: This is a fancy word for a simple, vital concept: your job should be safe to run more than once without causing problems. Background jobs can sometimes fail and be retried automatically. If your job sends a welcome email, you don’t want it to send the same email five times because of a temporary network hiccup.
The fix is simple: check if the work has already been done before you do it.
A non-idempotent job:
class MakeUserStatueJob < ApplicationJob
  def perform(user_id)
    user = User.find(user_id)
    MakeUserStatueService.call(user) # This will execute every time the job is called
  end
end
An idempotent job:
class MakeUserStatueJob < ApplicationJob
  def perform(user_id)
    user = User.find(user_id)

    # The magic is right here:
    return if user.has_statue?

    MakeUserStatueService.call(user)
    user.update!(has_statue: true) # Mark it as done!
  end
end
By adding a simple check and then marking the task as complete, you’ve made your system much more robust.
PS: This is just one example of how you can achieve idempotency; there are many different approaches, and the right one will depend on your specific scenario.
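For instance, another common approach is to let a database unique constraint reject duplicate work. Here’s a rough sketch, where WelcomeEmailDelivery and WelcomeEmailService are hypothetical names used purely for illustration:

# A different flavor of idempotency: record completed work in its own table
# and let a unique index on user_id reject duplicates.
class SendWelcomeEmailJob < ApplicationJob
  def perform(user_id)
    user = User.find(user_id)

    # create! raises if a delivery row already exists for this user,
    # so a retried job bails out instead of emailing twice.
    WelcomeEmailDelivery.create!(user: user)

    WelcomeEmailService.call(user) # hypothetical service that actually sends the email
  rescue ActiveRecord::RecordNotUnique, ActiveRecord::RecordInvalid
    # A previous run already recorded this delivery; nothing to do.
  end
end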
Sending Emails the Right Way
Before we talk about how to send emails, let’s quickly cover what we use to send them. In Rails, all things email are handled by Action Mailer. Think of it like a controller for emails. You define mailer classes (e.g., UserMailer) with methods for different types of emails (e.g., welcome, password_reset). Each method has a corresponding view template in app/views/user_mailer/ that defines the body of the email. It’s a clean, organized way to manage all your application’s email communications.
Creating Your First Mailer
Let’s create a simple welcome email. First, the mailer class itself:
# app/mailers/user_mailer.rb
class UserMailer < ApplicationMailer
  # Set a default "from" address for all emails in this mailer
  default from: 'notifications@example.com'

  def welcome(user_id)
    @user = User.find(user_id)
    mail(to: @user.email, subject: 'Welcome to My Awesome App!')
  end
end
This code is pretty straightforward. The welcome method takes a user_id, grabs the User, makes it available to the view as an instance variable (@user), and then calls the mail method to set the recipient and subject.
Next, we need the views for the email body. Action Mailer uses two templates for each email: one for HTML and one for plain text.
Here’s the HTML version:
<%# app/views/user_mailer/welcome.html.erb %>
<!DOCTYPE html>
<html>
  <head>
    <meta content='text/html; charset=UTF-8' http-equiv='Content-Type' />
  </head>
  <body>
    <h1>Welcome, <%= @user.name %>!</h1>
    <p>
      Thanks for signing up. We're excited to have you on board.
    </p>
  </body>
</html>
And here’s the plain text version for email clients that don’t support HTML:
<%# app/views/user_mailer/welcome.text.erb %>
Welcome, <%= @user.name %>!
Thanks for signing up. We're excited to have you on board.
Asynchronous by Default
Now, for the golden rule of sending emails: use deliver_later.
Connecting to an email server can be surprisingly slow. Using deliver_now will force your user to wait for that connection to happen, which is exactly the kind of delay we want to avoid.
When you call UserMailer.welcome(user.id).deliver_later, you’re really just using Active Job and Solid Queue under the hood. Rails packages up the email into a background job and sends it off to the queue, letting your controller finish its work without having to wait around.
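And since deliver_later goes through Active Job, the same scheduling options we saw earlier work here too. A quick sketch:

# Enqueue the welcome email as a background job.
UserMailer.welcome(user.id).deliver_later

# deliver_later also accepts Active Job's scheduling options:
UserMailer.welcome(user.id).deliver_later(wait: 5.minutes)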
And here’s a pro-tip for development: how do you see what your emails look like without actually sending them? Rails has a fantastic built-in preview system. When you generate a mailer, Rails also creates a preview file for you in test/mailers/previews/. You can set up different scenarios there:
# test/mailers/previews/user_mailer_preview.rb
class UserMailerPreview < ActionMailer::Preview
  def welcome
    # Previews run against your development database, and the mailer looks
    # the user up by ID, so use (or create) a persisted record.
    user = User.first || User.create!(name: "Test User", email: "test@example.com")
    UserMailer.welcome(user.id)
  end
end
Now, just navigate to http://localhost:3000/rails/mailers in your browser, and you’ll get a clickable list of all your email previews. It’s a huge time-saver.
Speeding Things Up with Caching
Caching is another fundamental way to make your app faster. The principle is simple: don’t do the same work twice. If you’ve already calculated something or rendered a piece of HTML, save it somewhere and reuse it next time.
Meet Solid Cache: Sometimes Bigger is Better
Rails 8’s new default is Solid Cache. Like Solid Queue, it uses your database for storage. Why is this a big deal? Because server disk space is way, way cheaper than server RAM. This means you can afford to have a massive cache.
And a massive cache is a pretty big deal: it means you can store data for longer, potentially increasing your cache hit rate.
Of course, everything has its drawbacks. While NVMe SSDs are becoming increasingly fast, they can’t compare to the speed of RAM. This means that in some cases, giving up raw speed for a bigger cache may not be worth it.
The good thing is that you can start your application using Solid Cache, and if down the line you notice that a faster cache would be more beneficial to you, the change is pretty easy.
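As a rough sketch, that change is mostly a one-liner in your environment config (the Redis store below assumes you’ve added the redis gem, and the URL is just a placeholder):

# config/environments/production.rb
config.cache_store = :solid_cache_store # the Rails 8 default

# If you later decide a RAM-backed cache is worth the extra infrastructure:
# config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] }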
View Caching
The most common place to use caching is within your views. Rails gives you a few powerful tools to do this.
Fragment Caching
This is the most basic form of view caching. You wrap a piece of a view, a fragment if you will, in a cache block. It’s incredibly elegant. Imagine you’re displaying a list of products:
<% @products.each do |product| %>
  <% cache product do %>
    <%= render product %>
  <% end %>
<% end %>
When you wrap the render call in the cache product block, Rails automatically generates a unique key for that HTML fragment based on the product’s ID and, crucially, its updated_at timestamp.
The first time this page loads, Rails renders each product partial and saves the HTML to Solid Cache. On the next request, it serves those fragments directly from the cache.
But what if you update a product’s price? Its updated_at timestamp changes, which changes the cache key. Rails sees there’s no cache for this new key, so it re-renders just that one fragment and saves it. Everything else stays cached. It’s automatic cache invalidation, and it’s beautiful.
You can even take it a step further. If a Product belongs to a Category, you can make it so that updating a product also updates the category’s cache. Just add touch: true to the association:
# app/models/product.rb
class Product < ApplicationRecord
  belongs_to :category, touch: true
end
Now, saving a product will "touch" the parent category, updating its updated_at timestamp and automatically invalidating any caches that depend on it. This is called Russian Doll Caching, and it’s a powerful pattern for complex views.
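To make the nesting concrete, a category page using this pattern might look something like the sketch below (the markup and attribute names are illustrative):

<%# Illustrative: an outer cache for the category wrapping the per-product caches %>
<% cache category do %>
  <h2><%= category.name %></h2>
  <% category.products.each do |product| %>
    <% cache product do %>
      <%= render product %>
    <% end %>
  <% end %>
<% end %>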
Collection Caching
The each loop mentioned previously is good, but it has a small inefficiency: it has to ask the cache store about each product one by one. If you have hundreds of products, that’s hundreds of separate requests. We can do better with collection caching.
<%= render partial: 'products/product', collection: @products, cached: true %>
By using this special form of the render helper, Rails is smart enough to fetch all the cached partials for the entire collection in a single, bulk request. This dramatically reduces the overhead and is much faster for rendering large collections.
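For completeness, the partial being rendered by that line might look something like this (the fields shown are illustrative):

<%# app/views/products/_product.html.erb (illustrative) %>
<div class="product">
  <h3><%= product.name %></h3>
  <p><%= number_to_currency(product.price) %></p>
</div>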
Going Deeper: Low-Level Caching
Sometimes you need to cache something that isn’t a view fragment, like the result of a complex calculation or a slow API call. For this, you can use low-level caching directly with Rails.cache.
While you can use Rails.cache.read and Rails.cache.write, the most useful method is Rails.cache.fetch. It tries to read a key from the cache; if the key doesn’t exist (a "cache miss"), it runs the block you provide, saves the result to the cache, and then returns it. On the next call, it will find the key and return the value instantly without running the block.
class Product < ApplicationRecord
  def competing_price
    # The key is an array containing the object itself and a name.
    # Rails automatically creates a key like "products/123-timestamp/competing_price"
    Rails.cache.fetch([self, "competing_price"], expires_in: 12.hours) do
      # This block only runs on a cache miss.
      CompetitorApi.get_price(self.sku)
    end
  end
end
By passing self as part of the cache key, Rails automatically includes the product’s ID and updated_at timestamp, so if the product changes, the cache is automatically invalidated. This is a powerful way to speed up parts of your application that have nothing to do with rendering views.
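And if you ever need to poke at those entries by hand (say, from the Rails console), the lower-level methods use the same keys. A small sketch:

product = Product.first

# Manually write, read, and expire the same cache entry from the console.
Rails.cache.write([product, "competing_price"], 19.99, expires_in: 12.hours)
Rails.cache.read([product, "competing_price"])   # => 19.99
Rails.cache.delete([product, "competing_price"]) # forces a fresh API call on the next fetch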
The Big Question: To Solid or not to Solid?
So, with these great new tools built into Rails, should you ever reach for the old guard like Sidekiq and Redis? The answer is a definite "maybe." It all comes down to a trade-off between simplicity and raw (expensive) power.
The beauty of the Rails 8 ecosystem is that it gives you a clear "graduation path." You can start with the simple, integrated Solid stack. It’s powerful enough for the vast majority of applications. If your app grows to a point where you have data proving that your job processor or cache is a bottleneck, you can then make the informed decision to migrate to Sidekiq and Redis.
To sum it up: There is no silver bullet; each scenario has its own intricacies and quirks, so you should be familiar with the available options and go with what makes more sense for you and your app.
Wrapping up: Keep It Simple, Keep It Fast
These tools available in Rails are all about helping you make your application cleaner and snappier. So, remember these key points:
- Offload slow tasks and emails to background jobs.
- Watch out for parts of your views that aren’t updated frequently and therefore are good candidates for caching.
- Ensure you are working with the most up-to-date version of a record, whether by passing an ID to your job or by correctly invalidating your cache.
That’s it for today, folks. There is always more to learn, so remember to keep an eye out for new posts!
Previously: Creating REST APIs with Ruby on Rails
This post is part of our ‘The Miners’ Guide to Code Crafting’ series, designed to help aspiring developers learn and grow. Stay tuned for more and continue your coding journey with us!! Check out the full summary here!
We want to work with you. Check out our Services page!