Orchestrating background tasks (or "jobs," as they are often called) is common in web development. Any application that deals with time-consuming operations, such as sending emails, processing images, or calling external services, can benefit from this approach.
The reasons we offload tasks to run in the background are simple:
- Improves user experience — the user can keep navigating to other pages while tasks are processed in parallel, instead of staying "frozen" on the same page until the task completes;
- Helps the application scale under high demand — the workload can be distributed among multiple processes or servers, which is especially useful in applications with many simultaneous users;
- Increases application resilience — since failures may occur due to temporary limitations of external services or load spikes, background tasks usually come with a "retry" mechanism that automatically reprocesses them after a failure.
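The retry idea in that last bullet can be sketched in plain Ruby. This is a toy illustration of retry-with-backoff, not how any particular library implements it (the `with_retries` helper name is made up here):

```ruby
# Toy retry helper: runs the block, re-running it after a growing
# delay when it raises, up to `attempts` tries in total.
def with_retries(attempts: 3, base_delay: 1)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError => e
    raise e if tries >= attempts # retries exhausted: give up
    sleep(base_delay * tries)    # simple linear backoff
    retry
  end
end

# Usage: a "flaky" operation that succeeds on the third try.
calls = 0
with_retries(attempts: 3, base_delay: 0) do
  calls += 1
  raise "flaky" if calls < 3
end
puts calls # => 3
```

Real queuing backends persist the job and schedule the next attempt instead of blocking a thread with sleep, but the shape of the logic is the same.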
With these benefits in mind, it is no surprise that several tools exist to make job management easier. In the Rails world, the most popular are:
Sidekiq
Sidekiq uses threads to handle many jobs at the same time in the same process.

Resque
Resque (pronounced like "rescue") is a Redis-backed library for creating background jobs, placing those jobs on multiple queues, and processing them later.

Delayed Job
Delayed::Job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.

Solid Queue (enabled by default from Rails 8.0)
Solid Queue is a database-based queuing backend for Active Job, designed with simplicity and performance in mind.
Before Rails version 8.0, it was recommended to choose one of these tools. Rails already offered native resources to handle jobs, but with some limitations.
For enqueuing and executing jobs in production you need to set up a queuing backend, that is to say, you need to decide on a 3rd-party queuing library that Rails should use. Rails itself only provides an in-process queuing system, which only keeps the jobs in RAM. If the process crashes or the machine is reset, then all outstanding jobs are lost with the default async backend. This may be fine for smaller apps or non-critical jobs, but most production apps will need to pick a persistent backend.
Active Job Basics # Job Execution
And the list of available adapters and their respective features was as follows:
| Adapter | Async | Queues | Delayed | Priorities | Timeout | Retries | Notes |
|---|---|---|---|---|---|---|---|
| Backburner | Yes | Yes | Yes | Yes | Job | Global | – |
| Delayed Job | Yes | Yes | Yes | Job | Global | Global | – |
| Que | Yes | Yes | Yes | Job | No | Job | – |
| queue_classic | Yes | Yes | Yes* | No | No | No | – |
| Resque | Yes | Yes | Yes (Gem) | Queue | Global | Yes | – |
| Sidekiq | Yes | Yes | Yes | Queue | No | Job | – |
| Sneakers | Yes | Yes | No | Queue | Queue | No | – |
| Sucker Punch | Yes | Yes | Yes | No | No | No | Not present in Rails 8+ |
| Active Job Async | Yes | Yes | Yes | No | No | No | – |
| Active Job Inline | No | Yes | N/A | N/A | N/A | N/A | – |
| Active Job Test | No | Yes | N/A | N/A | N/A | N/A | Present in Rails 8+ |
Check the Active Job adapters documentation for more details.
With so many options available, Rails brought Active Job into its core!
Active Job is a framework in Rails designed for declaring background jobs and executing them on a queuing backend.
Simply put, Active Job offers a unified interface for different job queue systems, allowing developers to write code independent of the specific backend used. This means you can switch the queuing system without needing to change your job logic.
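In practice, that switch is a single configuration line; the job classes themselves stay untouched. A sketch of the usual setup (the application name here is a placeholder):

```ruby
# config/application.rb (or per environment, e.g. config/environments/production.rb)
module MyApp
  class Application < Rails::Application
    # Tell Active Job which queuing backend to use.
    config.active_job.queue_adapter = :sidekiq

    # Swapping backends later is a one-line change, e.g.:
    # config.active_job.queue_adapter = :solid_queue
  end
end
```

Every job that inherits from ApplicationJob is enqueued through whichever adapter is configured here, with no change to its perform logic.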
As expected by the community, the Rails Way of implementing this brought a series of benefits that we will explore below!
Exploring Active Job
In this post, we will skip environment configuration and assume you already have a Rails project up and running, which is simple to do and should take little time. With the environment ready, we can start exploring Active Job!
To keep the post from getting too long, we will cover some more advanced points about Active Job, assuming you already have a basic understanding of how to create and execute jobs. But if that’s not the case, I recommend further reading on:
- Scheduling Jobs
- Queues and Priorities
- Workers and Concurrency
In this post, we will explore the following topics:
- Lifecycle and Workflow — Understanding a job’s lifecycle, how it is processed by Active Job, and when it is delegated via Sidekiq.
- Error handling and Retries — Retry strategies, wait time between executions, queuing new attempts, and callbacks.
- Test Coverage — Tips to ensure your jobs are properly tested with ActiveJob::TestHelper and other complementary gems.
As a point of comparison, we will use Sidekiq as the backend for Active Job. With it, we will compare the native behavior of Active Job with a direct implementation via Sidekiq, highlighting the differences and weighing the pros and cons of each approach.
To start, let’s implement a common scenario in some web applications: a job responsible for queuing other jobs.
```ruby
# app/jobs/enqueuer_job.rb
class EnqueuerJob < ApplicationJob
  queue_as :default

  def perform(*args)
    2.times do |i|
      # ActiveJob Implementation
      MyJob.perform_later({ "id" => i })

      # Sidekiq Implementation
      MySidekiqJob.perform_async({ "id" => i })
    end
  end
end
```

Now, the code for the job that will be enqueued, using Active Job (MyJob) and Sidekiq (MySidekiqJob) respectively:
```ruby
# app/jobs/my_job.rb
class MyJob < ApplicationJob
  queue_as :default

  retry_on MyJobError, wait: :polynomially_longer, attempts: 5

  def perform(args)
    id = args["id"]

    Sidekiq.logger.info "Starting work"
    Sidekiq.logger.info "Doing hard work for identifier #{id}..."
    sleep 2 # Simulating some work being done

    if id.to_i.even? # 0, 2, 4, 6, 8 ...
      raise MyJobError, "Even IDs are not allowed. Received ID: #{id}"
    else
      sleep 2 # Simulating more work being done
    end

    Sidekiq.logger.info "Completed work for identifier #{id}."
  end
end
```

```ruby
# app/jobs/my_sidekiq_job.rb
class MySidekiqJob
  include Sidekiq::Job

  sidekiq_options retry: 5, queue: "default"

  sidekiq_retry_in do |count, exception, jobhash|
    case exception
    when MyJobError
      10 * (count + 1) # (i.e. 10, 20, 30, 40, 50)
    when ExceptionToKillFor
      :kill
    when ExceptionToForgetAbout
      :discard
    end
  end

  def perform(args)
    id = args["id"]

    Sidekiq.logger.info "Starting work"
    Sidekiq.logger.info "Doing hard work for identifier #{id}..."
    sleep 2 # Simulating some work being done

    if id.to_i.even? # 0, 2, 4, 6, 8 ...
      raise MyJobError, "Even IDs are not allowed. Received ID: #{id}"
    else
      sleep 2 # Simulating more work being done
    end

    Sidekiq.logger.info "Completed work for identifier #{id}."
  end
end
```

The differences between the two implementations are subtle but important:
| | ActiveJob (MyJob) | Sidekiq (MySidekiqJob) |
|---|---|---|
| Job Definition | Inherits from ApplicationJob, which in turn inherits from ActiveJob::Base | Includes the Sidekiq::Job module |
| Queue Configuration | Uses queue_as :default to define the queue | Uses sidekiq_options queue: "default" to define the queue |
| Retry Configuration | Uses retry_on with attempts: 5 | Uses sidekiq_options retry: 5 |
| Retries by Exception | Can be configured for specific exceptions | Configured globally for all exceptions |
| Time between Retries | Configured directly via wait: :polynomially_longer or a specific value | Configured via sidekiq_retry_in |
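A side note on why both examples pass `{ "id" => i }` with string keys: Sidekiq stores job arguments as JSON in Redis, and JSON has no symbol type, so a hash enqueued with symbol keys arrives with string keys when the job runs (Active Job has extra argument serialization that can preserve symbols, but string keys are the safe common denominator). A plain-Ruby sketch of that round trip:

```ruby
require "json"

# Simulate what happens to job arguments on their way through Redis:
# dumped to JSON when enqueued, parsed back when performed.
args = { id: 7, kind: :image }
over_the_wire = JSON.generate(args) # => '{"id":7,"kind":"image"}'
restored = JSON.parse(over_the_wire)

restored["id"] # => 7
restored[:id]  # => nil (the symbol key did not survive the trip)
```

This is why perform reads args["id"] rather than args[:id] in both job classes.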
Lifecycle and Workflow
Sidekiq
When a job is created, it goes through several stages before being completed. According to the official Sidekiq documentation, a job’s lifecycle can be described as follows:

To understand this better, let’s run our EnqueuerJob through the console, with the line MyJob.perform_later({ "id" => i }) commented out to avoid duplicate jobs, and follow the Sidekiq counters via the web interface.

Right in the first image, we can see 3 processed, 1 from EnqueuerJob and 2 from MySidekiqJob, one with id => 0, and the other with id => 1. Since the job with id => 0 raises an exception, it is counted as a failure, justifying the failed counter. Due to the retry configuration, Sidekiq automatically re-enqueues the job for a new attempt in the retries section.

In the second image, we can see that the job with id => 0 was reprocessed but failed again, being re-enqueued for a new attempt. processed +1, failed +1, retries remains at 1 because there is only one job waiting for a retry, but with the Retry Count updated to 1.

In the third image, the job with id => 0 was reprocessed again, failed once more, and was re-enqueued for a new attempt. processed +1, failed +1, retries remains at 1, and the job’s Retry Count updated to 2.

In the fourth image, the job with id => 0 was reprocessed again, failed once more, and was re-enqueued for a new attempt. processed +1, failed +1, retries remains at 1, and the job’s Retry Count updated to 3.

In the fifth image, the job with id => 0 was reprocessed again, failed once more, and was re-enqueued for the last attempt. processed +1, failed +1, retries remains at 1, and the job’s Retry Count updated to 4.

In the sixth image, the job with id => 0 was reprocessed again, failed once more, and since it reached the retry limit, it was moved to the dead section. processed +1, failed +1, retries is now 0 because there are no more jobs waiting for a retry.
| Job | Params | Processed | Failed | Retry Count | Dead Jobs | Image |
|---|---|---|---|---|---|---|
| EnqueuerJob | {} | +1 → 1 | 0 | – | 0 | #1 |
| MySidekiqJob | {"id"=>0} | +1 → 2 | +1 → 1 | 0 | 0 | #1 |
| MySidekiqJob | {"id"=>1} | +1 → 3 | 1 | – | – | #1 |
| MySidekiqJob | {"id"=>0} | +1 → 4 | +1 → 2 | +1 → 1 | 0 | #2 |
| MySidekiqJob | {"id"=>0} | +1 → 5 | +1 → 3 | +1 → 2 | 0 | #3 |
| MySidekiqJob | {"id"=>0} | +1 → 6 | +1 → 4 | +1 → 3 | 0 | #4 |
| MySidekiqJob | {"id"=>0} | +1 → 7 | +1 → 5 | +1 → 4 | 0 | #5 |
| MySidekiqJob | {"id"=>0} | +1 → 8 | +1 → 6 | +1 → 5 (exhausted) | +1 → 1 | #6 |
We know, a lot is happening in a short time, and it may take a while to absorb the full flow. The important thing is that you now have a clear picture of how Sidekiq handles the job lifecycle, especially regarding failures and retries.
ActiveJob
Now, let’s look at how Active Job handles the job lifecycle. Let’s comment out the MySidekiqJob.perform_async({ "id" => i }) line in EnqueuerJob to avoid job duplication and follow Sidekiq’s web interface again.

Again, in the first image, we can see 3 processed, 1 from EnqueuerJob and 2 from MyJob, one with id => 0, and the other with id => 1. The job with id => 0 raises an exception, but the difference here is that Active Job does not send the job to the Sidekiq retries section; it simply re-enqueues it in the same queue for a new attempt, which is reflected in the scheduled counter.
With this change, the failed counter is not incremented because the job has not yet been considered failed; it is merely awaiting a new attempt.
Another detail is that although the job failed, it is not possible to see the Retry Count or the reason for the failure in the scheduled interface, which can make monitoring the number of attempts difficult.
For this, Active Job offers some additional resources that can be useful in these scenarios. For example, we can access the job itself and its attributes at runtime and get more detailed information than Sidekiq’s own interface offers. Let’s see:
```ruby
# app/jobs/my_job.rb
class MyJob < ApplicationJob
  #...

  def perform(args)
    #...
    Sidekiq.logger.info "Current Execution: #{@executions} with args: #{args.inspect}"
    Sidekiq.logger.info "Exceptions Encountered: #{@exception_executions}"
  end
end
```

If we inspect the self object inside the perform method, we will have something like:
```ruby
#<MyJob:0x00007faae04f26c0
 @_halted_callback_hook_called=nil,
 @arguments=[{"id" => 0}],
 @enqueued_at=2026-01-15 14:42:11.547148227 UTC,
 @exception_executions={"[MyJobError]" => 1},
 @executions=2,
 @job_id="d66b7f05-445a-457c-b3be-3572495bd81f",
 @locale="en",
 @priority=nil,
 @provider_job_id="f48b27089166695e5a25b70a",
 @queue_name="default",
 @scheduled_at=2026-01-15 14:42:27.266608636 UTC,
 @serialized_arguments=nil,
 @timezone="UTC">
```

With this, we can see that Active Job maintains an execution counter (@executions) and a hash of encountered exceptions (@exception_executions), which can be useful for monitoring the job's behavior at runtime. Note that this behavior differs from Sidekiq: if you inspect the self object of MySidekiqJob, you will not find these attributes.
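That @executions counter also drives the wait strategy: with wait: :polynomially_longer, Rails derives the delay from the execution count. The exact formula is an implementation detail of ActiveJob::Exceptions and may change between Rails versions, but recent versions compute roughly (executions ** 4) + 2 seconds plus a random jitter. A sketch of the deterministic part, with the jitter left out:

```ruby
# Approximate the wait used by `wait: :polynomially_longer`,
# ignoring the random jitter term (assumption: Rails 7.1-era formula).
def polynomially_longer_delay(executions)
  (executions ** 4) + 2
end

(1..5).each do |n|
  puts "attempt #{n}: next retry in ~#{polynomially_longer_delay(n)} seconds"
end
# attempt 1 → ~3s, attempt 2 → ~18s, attempt 3 → ~83s,
# attempt 4 → ~258s, attempt 5 → ~627s
```

So unlike a fixed wait, each failed execution pushes the next attempt noticeably further into the future.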

In the second image, we can see that the job with id => 0 was reprocessed but failed again, being re-enqueued for a new attempt. processed +1, scheduled +1, failed remains at 0, and the Retry Count is not displayed in the interface.

In the third image, the job with id => 0 was reprocessed again, failed once more, and was re-enqueued for a new attempt. processed +1, scheduled +1, failed remains at 0, and the Retry Count is not displayed in the interface.

In the fourth image, the job with id => 0 was reprocessed again, failed once more, and was re-enqueued for a new attempt. processed +1, scheduled +1, failed remains at 0, and the Retry Count is not displayed in the interface.
Now, what happens on the fifth (and last) attempt?
Well, I can tell you that it’s probably not what you are expecting!
Let’s look a bit more at the Sidekiq documentation.
In Sidekiq Wiki # ActiveJob – Customizing error handling, we have the following message:
The default Active Job retry scheme, when using retry_on, is 5 retries, 3 seconds apart. Once this is done (after 15-30 seconds), Active Job will kick the job back to Sidekiq, where Sidekiq’s retries with exponential backoff will take over.
- You can use sidekiq_options with your Active Jobs and configure the standard Sidekiq retry mechanism.
- Sidekiq supports sidekiq_retries_exhausted and sidekiq_retry_in blocks on an ActiveJob job as of 7.1.3.
This means that after Active Job exhausts its retry attempts, in this case, 5 attempts, it delegates the job back to Sidekiq, where Sidekiq will apply its own retry logic with exponential backoff… But what is Sidekiq’s behavior in this case?
According to Sidekiq Wiki # Error Handling – Best Practices, we have the following information:
- Let Sidekiq catch errors raised by your jobs. Sidekiq’s built-in retry mechanism will catch those exceptions and retry the jobs regularly. The error service will notify you of the exception. You fix the bug, deploy the fix and Sidekiq will retry your job successfully.
- If you don’t fix the bug within 25 retries (about 21 days), Sidekiq will stop retrying and move your job to the Dead set. You can fix the bug and retry the job manually anytime within the next 6 months using the Web UI.
- After 6 months, Sidekiq will discard the job.
In other words, if the job fails all 5 Active Job attempts, it is re-enqueued in Sidekiq (similar to the first approach), where Sidekiq applies its own retry rule: 25 attempts with exponential backoff before moving the job to the dead queue.
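That "about 21 days" figure follows from Sidekiq's documented default backoff, which is roughly (retry_count ** 4) + 15 seconds plus a random jitter. Summing the deterministic part over 25 retries lands right around 20 days (a sketch; the exact formula and the jitter term differ between Sidekiq versions):

```ruby
# Approximate Sidekiq delay before retry number `count` (0-based),
# with the random jitter term left out.
def sidekiq_backoff(count)
  (count ** 4) + 15
end

total_seconds = (0...25).sum { |count| sidekiq_backoff(count) }
puts "total wait: ~#{(total_seconds / 86_400.0).round(1)} days"
# roughly 20 days before the job is moved to the Dead set
```

Stacking Active Job's 5 attempts on top of this means a persistently failing job can linger for weeks before anyone sees it in the Dead tab.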
This scenario raises important questions about behavior, such as whether it is really necessary to reprocess the job so many times, or whether a different strategy would be better. The answer depends on what the job does, whether it is queuing a payment request, retrying a charge, or sending an important email. It is up to you, as a developer, to assess the scenario and decide if this approach makes sense.
Another very important point to consider is whether it really makes sense to retry at all, whether via Active Job or Sidekiq. Remember, Sidekiq does not validate which exception was raised; it simply reprocesses the job whenever any error occurs.
Therefore, if the job failed due to a business validation, such as processing a CSV file where a row has invalid data, it might not make sense to reprocess the job, as it will fail every time.
And with this behavior, even if Active Job is configured to retry only on MyJobError, Sidekiq will reprocess the job regardless of the type of exception raised, which may not be the expected behavior.
So, how can we deal with this? Well, still in the Sidekiq documentation, we have a possible solution in Sidekiq Wiki # Job Lifecycle – Altering the lifecycle
The retry property can be set on a specific job to disable retries completely (job goes straight to Dead) or disable death (failed job is simply discarded). If your Failed count is increasing but you don’t see anything in the Retry or Dead tabs, it’s likely you’ve disabled one or both of those:
```ruby
class SomeJob
  # will be completely ephemeral, not in Retry or Dead
  sidekiq_options retry: false

  # will go immediately to the Dead tab upon first failure
  sidekiq_options retry: 0

  #...
end
```
This means we can configure Sidekiq to retry according to the needs of each job, either by completely disabling retries (retry: false) or by moving the job directly to the dead section after the first failure (retry: 0).
Continuing the previous sequence of images, since no sidekiq_options were configured in MyJob, Sidekiq will take over from here…

Note that the number of processed continues to increase, reflecting the job reprocessing attempts. But this time the failed counter also increases, indicating that the job failed. The job is now listed in retries and shows the Retry Count, reflecting Sidekiq’s first attempt.

Following the same logic as before, the processing continues to be incremented, counting one more failure and updating the Retry Count with each attempt.




Observe that the Next Retry time has increased exponentially, reflecting Sidekiq’s standard behavior, and that even after 5 attempts, the job has NOT yet been moved to the dead section, because Sidekiq’s limit of 25 attempts (standard) has not yet been reached.
It doesn't make sense to keep showing all the following images; after all, it would take us a month just to take the screenshots. But we hope the flow of what happens next is clear by now…
And just to reinforce, what if we had added the options sidekiq_options retry: false or sidekiq_options retry: 0 in MyJob?
Well, then after the Active Job attempts, we would have the following scenarios.
sidekiq_options retry: false

In this case, the job would simply be discarded after all failures. Note that there are no more jobs in the retries section, nor scheduled, nor dead. The job was simply discarded. And the downside to this is that there is no way to monitor or recover the job later.
sidekiq_options retry: 0

With retry configured to 0, the job would be moved directly to the dead section after the first failure. In this case, it is possible to analyze the reason for the failure and eventually reprocess the job manually via Sidekiq’s web interface, which can be useful.
Callbacks
ActiveJob
Another great advantage of Active Job is callbacks, which allow executing code at specific points in the job lifecycle.
Imagine that after the job is successfully executed, you want to enqueue another job, or perhaps just send a notification to the user, print some information in the logs, or even recalculate a total count or update a record’s status. With callbacks, this is possible in a simple and easy way.
For this case, we could easily use the after_perform callback. However, it is important to keep in mind that this callback will only be executed if the job completes successfully! If there is a failure, this callback will not be triggered.
To handle the failure case, we have another callback, which is documented in the exceptions section: after_discard. This callback is triggered when the job is discarded, either by reaching the attempt limit or by an error that does not have a retry configured.
In terms of code, it would look something like this:
```ruby
# app/jobs/my_job.rb
class MyJob < ApplicationJob
  # sidekiq_options retry: false # will be completely ephemeral, not in Retry or Dead
  sidekiq_options retry: 0 # will go immediately to the Dead tab upon first failure

  queue_as :default

  retry_on MyJobError, wait: 15.seconds, attempts: 5

  after_discard do |job, exception|
    # Do something when the job is discarded (after retries are exhausted)
    Sidekiq.logger.info "💀 Job #{job.job_id} discarded after #{job.executions} attempts due to #{exception.class}: #{exception.message}"
  end

  after_perform do |job|
    # Do something after the job performs successfully
    Sidekiq.logger.info "Do something after job #{job.job_id} has been performed. (Executions: #{job.executions})"
  end

  def perform(args)
    id = args["id"]

    Sidekiq.logger.info "Starting work"
    Sidekiq.logger.info "Current Execution: #{@executions} with args: #{args.inspect}"
    Sidekiq.logger.info "Exceptions Encountered: #{@exception_executions}"
    Sidekiq.logger.info "Doing hard work for identifier #{id}..."
    sleep 2 # Simulating some work being done

    if id.to_i.even? # 0, 2, 4, 6, 8 ...
      raise MyJobError, "Even IDs are not allowed. Received ID: #{id}"
    else
      sleep 2 # Simulating more work being done
    end

    Sidekiq.logger.info "Completed work for identifier #{id}."
  end
end
```

With this, we know exactly how the job will behave in every scenario, failure or success, with its retry handled by Active Job or by Sidekiq, and we can execute specific actions in each case.
Running the EnqueuerJob again with this configuration, we will have the following log:

And accessing the Sidekiq web interface, the job will be listed in the Dead section (same image as the last example).
Sidekiq
In Sidekiq’s case, there are some callbacks available, but they are not as complete as Active Job’s.
We could implement the same behavior with different approaches. Let’s look at the code.
```ruby
# app/jobs/my_sidekiq_job.rb
class MyJobError < StandardError; end

class MySidekiqJob
  include Sidekiq::Job

  sidekiq_options retry: 5, queue: "default"

  sidekiq_retry_in do |count, exception, jobhash|
    case exception
    when MyJobError
      10 * (count + 1) # (i.e. 10, 20, 30, 40, 50)
    when ExceptionToKillFor
      :kill
    when ExceptionToForgetAbout
      :discard
    end
  end

  sidekiq_retries_exhausted do |job, exception|
    # Do something when the job is discarded (after retries are exhausted)
    Sidekiq.logger.info "💀 Job #{job['jid']} discarded after #{job['retry_count']} attempts due to #{job['error_class']}: #{job['error_message']}"
  end

  def perform(args)
    id = args["id"]

    Sidekiq.logger.info "Starting work"
    # +2 because retry_count starts at 0 and the first execution is not a retry.
    Sidekiq.logger.info "Current Execution: #{(JSON.parse(@_context.job.job)['retry_count']&.+ 2) || 1} with args: #{args.inspect}"
    Sidekiq.logger.info "Exceptions Encountered: #{JSON.parse(@_context.job.job)['error_class']}: #{JSON.parse(@_context.job.job)['error_message']}" if JSON.parse(@_context.job.job)["error_class"].present?
    Sidekiq.logger.info "Doing hard work for identifier #{id}..."
    sleep 2 # Simulating some work being done

    if id.to_i.even? # 0, 2, 4, 6, 8 ...
      raise MyJobError, "Even IDs are not allowed. Received ID: #{id}"
    else
      sleep 2 # Simulating more work being done
    end

    Sidekiq.logger.info "Completed work for identifier #{id}."
    run_after_perform
  end

  private

  def run_after_perform
    # Do something after the job performs successfully
    Sidekiq.logger.info "Do something after job #{@jid} has been performed. (Executions: #{(JSON.parse(@_context.job.job)['retry_count']&.+ 2) || 1})"
  end
end
```

Note that with this approach, besides the code becoming more verbose, we need to manually implement the run_after_perform method to simulate the behavior of Active Job's after_perform callback.
Another important detail here is that the sidekiq_retries_exhausted block is executed in a separate thread from the main job, implying that any instance variable, context, state, and even job methods will not be available inside this block.
Also, note that the way we access job attributes is different and much more cumbersome, since Sidekiq does not directly expose the job object inside the perform method.
In summary, both work. But the golden rule is:
If it’s Rails, do it the Rails Way! Use ActiveJob!
Test Coverage
Right, and with all this implementation, how can we ensure our jobs are properly tested?
Well, we can write unit tests for our jobs using ActiveJob::TestHelper and the rspec-rails gem, which offer a series of useful methods for testing jobs.
Let’s see below how to implement tests for MyJob:
```ruby
# spec/jobs/my_job_spec.rb
require 'rails_helper'

RSpec.describe MyJob, type: :job do
  include ActiveJob::TestHelper

  subject(:enqueue_my_job) { described_class.perform_later({ "id" => id }) }

  describe '#perform' do
    before do
      enqueue_my_job
      allow(Sidekiq.logger).to receive(:info)
      allow_any_instance_of(described_class).to receive(:sleep) # avoid actual sleeping during tests
    end

    context 'when id is odd' do
      let(:id) { 1 }

      it 'processes the job successfully', :aggregate_failures do
        expect { perform_enqueued_jobs }.not_to raise_error
        expect(Sidekiq.logger).to have_received(:info).with("Completed work for identifier 1.").once
      end

      it 'calls the after_perform callback block' do # BDD style → it 'logs the success completion message'
        perform_enqueued_jobs

        expect(Sidekiq.logger).to have_received(:info).with(/Do something after job/).once
      end
    end

    context 'when id is even' do
      let(:id) { 0 }

      context 'and retries are still available' do
        it 're-enqueues the job with the right arguments' do
          # `assert_enqueued_with` matcher from ActiveJob::TestHelper (https://api.rubyonrails.org/classes/ActiveJob/TestHelper.html#method-i-assert_enqueued_with)
          expect { perform_enqueued_jobs }.not_to raise_error

          assert_enqueued_with(job: described_class, args: [ { "id" => id } ])
          assert_enqueued_jobs 1
        end

        it 'does not trigger the discard logic' do
          expect { perform_enqueued_jobs }.not_to raise_error

          expect(Sidekiq.logger).not_to have_received(:info).with(/💀 Job .* discarded/)
        end
      end

      context 'and retry attempts are exhausted' do
        before do
          4.times do
            # `have_enqueued_job` matcher from rspec-rails (https://rspec.info/features/6-0/rspec-rails/matchers/have-enqueued-job-matcher/)
            expect { perform_enqueued_jobs }.to have_enqueued_job(described_class).with({ "id" => id }).once
          end
        end

        it 'stops retrying the job' do
          expect { perform_enqueued_jobs }.to raise_error(MyJobError, "Even IDs are not allowed. Received ID: 0")

          assert_performed_jobs 5
          assert_no_enqueued_jobs
        end

        it 'raises the error' do
          expect { perform_enqueued_jobs }.to raise_error(MyJobError, "Even IDs are not allowed. Received ID: 0")
        end

        it 'triggers the discard logic' do
          expect { perform_enqueued_jobs }.to raise_error(MyJobError, "Even IDs are not allowed. Received ID: 0")

          expect(Sidekiq.logger).to have_received(:info).with(/💀 Job .* discarded/).once
        end
      end
    end

    context 'when an unexpected error occurs' do
      let(:id) { 2 }

      before do
        allow_any_instance_of(described_class).to receive(:sleep).and_raise(StandardError, "Unexpected error")
      end

      it 'raises the error' do
        expect { perform_enqueued_jobs }.to raise_error(StandardError, "Unexpected error")
      end

      it 'does not re-enqueue the job' do
        expect { perform_enqueued_jobs }.to raise_error(StandardError, "Unexpected error")

        assert_no_enqueued_jobs
      end

      it 'triggers the discard logic' do
        expect { perform_enqueued_jobs }.to raise_error(StandardError, "Unexpected error")

        expect(Sidekiq.logger).to have_received(:info).with(/💀 Job .* discarded/).once
      end
    end
  end
end
```

After running these tests, we have the following behavior documented:
```
MyJob
  #perform
    when id is odd
      processes the job successfully
      calls the after_perform callback block
    when id is even
      and retries are still available
        re-enqueues the job with the right arguments
        does not trigger the discard logic
      and retry attempts are exhausted
        stops retrying the job
        raises the error
        triggers the discard logic
    when an unexpected error occurs
      raises the error
      does not re-enqueue the job
      triggers the discard logic
```

It is worth remembering that this is just an example of how to test, and that the way the service was implemented is exclusively for didactic purposes.
As mentioned earlier, it would not make sense to reprocess a job with an invalid argument several times, nor to add test "smells" like stubbing the sleep call (which here stands in for an external service).
The goal here is to exemplify how Active Job handles the job lifecycle, especially concerning failures and retries, and how to ensure that all of this is properly tested.
For Sidekiq tests, the approach is very similar, but instead of using ActiveJob::TestHelper, we can use the rspec-sidekiq gem, which offers matchers similar to those of Active Job.
Well, if you’ve made it this far, I have a challenge for you!
How about complementing the current tests with:
- Tests to ensure that retry_on's wait is being respected?
- Tests to ensure that sidekiq_options retry: false and sidekiq_options retry: 0 are working as expected? (Discarding or sending the job to the dead section)
- Tests for the Sidekiq version MySidekiqJob, ensuring the same behavior as MyJob
Happy coding! 🚀
If this post has helped you in any way, consider sharing it with your colleagues and friends.
Your support motivates me to continue creating more content like this!
We want to work with you. Check out our Services page!

