In the first post of this series, our great Edy Silva tried multiple approaches for a Talent Matching system that would work for our little problem.
What is the problem?
As Codeminer42 grows in both teams and projects, it becomes harder to do talent-position matching with traditional approaches due to the amount of data that needs to be considered.
As Edy pointed out in his post, there are several data-driven ways of matching talents, using algorithms both old and new. But if we want something truly effective at considering the context of the data we are comparing, the best approach is a Natural Language Processor, most likely a Large Language Model, which can deal with both the explicit and the implicit context in the text.
How to manage this context
Let’s first discuss why we are using an AI at all, since there’s an important reason for it that has nothing to do with ‘let’s just shove an AI in and call it a day!’
Traditional algorithms rely heavily on keyword matching. If a candidate writes "React," they match. But what if they write "extensive frontend experience with modern JS libraries"? A simple regex might miss that. This is where LLMs shine.
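To make that concrete, here is a toy keyword matcher of the kind traditional approaches rely on. The `mentions_react?` helper is hypothetical, purely for illustration:

```ruby
# A naive keyword matcher: it only succeeds when the literal word appears.
# `mentions_react?` is a made-up helper, not part of our codebase.
def mentions_react?(profile_text)
  profile_text.match?(/\breact\b/i)
end

mentions_react?("3 years of React experience")
# => true
mentions_react?("extensive frontend experience with modern JS libraries")
# => false, even though the claim is clearly relevant
```

The second candidate is a plausible match for a React role, but the keyword check has no way of knowing that.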
For models with self-attention mechanisms, the dataset they are trained on shapes their behavior and how they approach a given task. So you could say that LLMs don’t just see words; they see relationships and patterns learned from massive datasets, and they can infer implicit meaning from the context provided. This shows up in many ways, such as in-context learning, reasoning, and other forms of generating text from previous tokens.
And which context do we have here?
In our case, the context is tricky. We have a specific job position, and we need to match it against an employee’s stored profile. But data isn’t flat.
We have a logical gap to bridge:
- Explicit Context: Years of experience, list of frameworks (e.g., "5 years in Ruby").
- Implicit Context: The nuance of a Senior who has led projects versus a Junior who is just starting, even when they both are able to write code with the same syntax.
The LLM allows us to feed this structured data (the candidate’s scores, rank, and history) and ask it to "reason" like a hiring manager, rather than just calculating a math formula. But to do that effectively, we need to build a proper system.
Gathering the data
The information we have about our developers’ experience is stored in multiple sources and formats, so we need to figure out a way to get it into the app. Without it, we won’t have enough context to feed the LLM the data needed to generate the candidate’s prompt.
Luckily, we have an internal survey that asked all developers about their proficiency in a lot of languages and technologies, allowing us to build a knowledge base that will serve as a first step here.
The survey looked like this:

In this survey, we collected proficiency levels in a multitude of languages and frameworks, giving us standardized information on all our developers. Even though this approach is not ideal, since it lacks the career history information usually present in resumés, it is a good starting point to get the ball rolling and build the MVP of this feature.
We moved all this information into a Technology model in the Rails app and created a UserTechnology model to link this information to every developer in our database, like this:
class Technology < ApplicationRecord
  has_many :user_technologies, dependent: :destroy
  has_many :users, through: :user_technologies

  validates :name, presence: true, uniqueness: true
end

class UserTechnology < ApplicationRecord
  extend Enumerize

  belongs_to :user
  belongs_to :technology

  validates :user, uniqueness: { scope: :technology }

  enumerize :experience_level, in: %i[starter capable expert], default: :capable
  validates :experience_level, presence: true
end
During this migration, we ignored every answer with the “I don’t know” or “Used outside of work” options, as those would have little significance in our prompts and would only increase the token count without adding much value. We mapped the other three options to the “starter”, “capable”, and “expert” experience levels, respectively.
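The mapping could be sketched like this. The ignored answers come straight from the post; the other answer texts are made-up placeholders, since the exact survey wording isn’t reproduced here:

```ruby
# Sketch of the survey-to-enum mapping. IGNORED_ANSWERS matches the options
# we discarded; the SURVEY_LEVEL_MAP keys are illustrative placeholders.
IGNORED_ANSWERS = ["I don't know", "Used outside of work"].freeze

SURVEY_LEVEL_MAP = {
  "I'm starting with it"              => :starter,
  "I can work professionally with it" => :capable,
  "I'm a reference on it"             => :expert
}.freeze

def experience_level_for(answer)
  return nil if IGNORED_ANSWERS.include?(answer)

  SURVEY_LEVEL_MAP[answer]
end
```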
Building the Feature
Now that we have the data in place, we can start building the feature. To do this, we need to construct a system that will receive position information, look into our developer database and select the best developers for the job. To achieve this, we decided to use a Ruby gem called Active Genie.
Active Genie provides a toolkit for many AI use cases, letting you easily create AI pipelines that automate your processes and speed up development; I highly recommend checking out the gem’s documentation. For our use case, we use the Scorer module, which generates a numeric score for a content prompt based on a criteria prompt, using a model of your choice for the evaluation. This suits our use case just right.
So now we need to generate two prompt templates: one to be used as the criteria prompt, which in our case is the new position description; and one to be used as the content prompt, which in our case is the developer’s information with all of the technologies they currently know.
Choose the Model and Build the Prompt
As we are using market models, we have to keep in mind that choosing the correct model also influences the way the prompt will be understood, so these two come hand-in-hand.
There are several ways of building a prompt for that, but first, we need to choose the model that best fits our needs. For testing purposes, we first used GPT-4.1 mini, which isn’t the best option for this, but it is fast and cheap.
And once in production, the chosen one was Claude 4.5 Sonnet.
Why? Among the current state-of-the-art models, Anthropic’s models excel at understanding implicit context. They have a lower hallucination rate and, crucially, they grasp nuance better.
The gap between "The candidate needs 5 years in Rails" and "The candidate led a legacy migration" is massive. Claude Sonnet understands that the latter implies a higher level of seniority, whereas other models might just look for the keyword "5 years."
Once the model is chosen, we need to feed it data. Instead of dumping a raw JSON object with the employee’s database record, we decided to convert the profile into a structured text excerpt.
While LLMs can read JSON, converting the data into a narrative format (like a resume) helps the model focus on the semantics rather than the syntax structure. We want the LLM to "read" the candidate’s story.
module NewAdmin
  module Scorer
    class EmployeeProfileBuilderService < ApplicationService
      attr_reader :developer

      def initialize(developer)
        @developer = developer
      end

      def call
        generate_employee_profile
      end

      private

      def generate_employee_profile
        profile_text = []
        profile_text << "=== EMPLOYEE PROFILE ==="
        profile_text << ""

        add_name(profile_text)
        add_specialty(profile_text)
        add_technologies(profile_text)

        profile_text.join("\n")
      end

      def add_name(profile_text)
        return if developer.name.blank?

        profile_text << "EMPLOYEE NAME:"
        profile_text << developer.name
        profile_text << ""
      end

      def add_specialty(profile_text)
        return if developer.specialty.blank?

        profile_text << "EMPLOYEE'S SPECIALTY:"
        profile_text << "The developer's main specialty:"
        profile_text << developer.specialty
        profile_text << ""
      end

      def add_technologies(profile_text)
        return if developer.user_technologies.blank?

        profile_text << "EMPLOYEE'S TECHNOLOGIES:"
        profile_text << "The employee has the following technologies at the listed experience levels:"
        developer.user_technologies.each do |user_tech|
          profile_text << "- #{user_tech.technology.name} (#{user_tech.experience_level})"
        end
        profile_text << ""
      end
    end
  end
end
Simple and effective. We explicitly tell the AI the skillset and experience levels without the noise of database keys.
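For a hypothetical developer, the generated text looks something like this. The snippet uses plain Structs as stand-ins for the ActiveRecord models so it runs outside Rails, and “Jane Doe” and her skill list are made up:

```ruby
# Plain-Ruby stand-ins for the models, only to illustrate the output format.
Technology     = Struct.new(:name)
UserTechnology = Struct.new(:technology, :experience_level)

techs = [
  UserTechnology.new(Technology.new("Ruby"), :expert),
  UserTechnology.new(Technology.new("React"), :capable)
]

profile_text = []
profile_text << "=== EMPLOYEE PROFILE ==="
profile_text << ""
profile_text << "EMPLOYEE NAME:"
profile_text << "Jane Doe"
profile_text << ""
profile_text << "EMPLOYEE'S TECHNOLOGIES:"
techs.each do |ut|
  profile_text << "- #{ut.technology.name} (#{ut.experience_level})"
end

profile = profile_text.join("\n")
puts profile
```

The result reads like a short resume excerpt rather than a database dump, which is exactly what we want the model to see.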
Now, we do the same for the Job Position. We need the LLM to understand not just the requirements but also the responsibilities; this is where the implicit comparison happens.
module NewAdmin
  module Scorer
    class PositionCriteriaBuilderService < ApplicationService
      def initialize(position_params)
        @position = position_params
      end

      def call
        generate_evaluation_criteria
      end

      private

      attr_reader :position

      def generate_evaluation_criteria
        criteria_text = []
        criteria_text << "=== POSITION EVALUATION CRITERIA ==="
        criteria_text << ""

        add_position_overview(criteria_text)
        add_responsibilities(criteria_text)
        add_requirements(criteria_text)
        add_nice_to_have(criteria_text)
        add_evaluation_instructions(criteria_text)

        criteria_text.join("\n")
      end

      def add_position_overview(criteria_text)
        return if position[:description].blank?

        criteria_text << "POSITION OVERVIEW:"
        criteria_text << position[:description]
        criteria_text << ""
      end

      def add_responsibilities(criteria_text)
        return if position[:responsibilities].blank?

        criteria_text << "KEY RESPONSIBILITIES AND DUTIES:"
        criteria_text << "The ideal candidate should have experience with the following responsibilities:"
        criteria_text << position[:responsibilities]
        criteria_text << ""
      end

      def add_requirements(criteria_text)
        return if position[:requirements].blank?

        criteria_text << "MANDATORY REQUIREMENTS:"
        criteria_text << "The candidate MUST meet these essential qualifications:"
        criteria_text << position[:requirements]
        criteria_text << ""
      end

      def add_nice_to_have(criteria_text)
        return if position[:nice_to_have].blank?

        criteria_text << "PREFERRED QUALIFICATIONS (Nice to Have):"
        criteria_text << "Additional qualifications that would make a candidate stand out:"
        criteria_text << position[:nice_to_have]
        criteria_text << ""
      end

      def add_evaluation_instructions(criteria_text)
        criteria_text << "EVALUATION INSTRUCTIONS:"
        criteria_text << "Please evaluate the candidate's experience against all the criteria above. Consider:"
        criteria_text << "1. How well the candidate meets the mandatory requirements"
        criteria_text << "2. Relevant experience with the listed responsibilities"
        criteria_text << "3. Any matching preferred qualifications"
        criteria_text << "4. Overall fit for the position based on their background"
        criteria_text << ""
        criteria_text << "Provide a short summary of the candidate's suitability for this role, focusing on the scores."
      end
    end
  end
end
By using these builders, we ensure that every prompt follows a strict structure: Overview -> Responsibilities -> Hard Requirements -> Preferred Qualifications.
It’s a sort of **Generated Knowledge** prompt.
Connecting everything together
Now that we have the prompts and the information, we can finally generate a report on a new position. To do that, we receive the position information from a form that looks like this.

This sends all the position information to be used in our position prompt, which is processed by our position services; we then call Active Genie for every available developer.
def generate_report(position)
  criteria = NewAdmin::Scorer::PositionCriteriaBuilderService.call(position)
  devs = User.engineer.active.includes(user_technologies: :technology)

  devs.map do |dev|
    profile = NewAdmin::Scorer::EmployeeProfileBuilderService.call(dev)

    ActiveGenie::Scoring.call(
      profile,
      criteria,
      config: { model: "gpt-4.1-mini" }
    )
  end
end
This will return a JSON report in the following format for every developer.
{
  "Software Engineering Manager_reasoning": "The candidate's profile indicates their main specialty is QA (Quality Assurance), not Python development. There is no mention of Python experience, development skills, or relevant responsibilities related to Python software development. This does not meet the mandatory requirement of being a Python developer with good experience. There are no preferred qualifications or any indication of experience related to the desired role. Therefore, the candidate's suitability for the Python developer role is very low.",
  "Software Engineering Manager_score": 10,
  "Technical Recruiter specialized in IT Roles_reasoning": "The profile lacks any mention of Python development experience or skills. It only mentions QA as the specialty, which does not align with the role of a Python developer. No responsibilities or experience related to Python are indicated, so this candidate is not suitable based on the information provided.",
  "Technical Recruiter specialized in IT Roles_score": 15,
  "Quality Assurance Lead with Developer Experience_reasoning": "Since the candidate's specialty is QA and no Python development experience is stated, they do not meet the criteria for a Python developer role. Although QA experience can be valuable, it is not sufficient here without Python skills or development experience.",
  "Quality Assurance Lead with Developer Experience_score": 20,
  "final_score": 15,
  "final_reasoning": "The candidate's profile only mentions a QA specialty with no indication of Python development experience or skills, which is mandatory for the position. All reviewers agree the profile does not fit the role requirements, resulting in a very low score and overall unsuitability for the Python Developer role."
}
We use the “final_score” attribute to rank all the developers in the report and then send a list of the top 5 developers on this criterion. We do this because we shouldn’t trust AI with the final judgment. So we present the top 5 developers to our commercial team, along with the AI reasoning, and the commercial team decides which developer makes the most sense for the position, based on their own knowledge of those developers as well.
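The ranking step itself is trivial once the reports are in hand. A minimal sketch, with made-up developer names and scores (`reports` stands for the array returned by `generate_report`):

```ruby
# Rank the per-developer reports by "final_score" and keep the top 5
# for the commercial team. Names and scores here are illustrative.
reports = [
  { "name" => "Dev A", "final_score" => 15 },
  { "name" => "Dev B", "final_score" => 82 },
  { "name" => "Dev C", "final_score" => 64 },
  { "name" => "Dev D", "final_score" => 71 },
  { "name" => "Dev E", "final_score" => 48 },
  { "name" => "Dev F", "final_score" => 90 }
]

top_five = reports.sort_by { |r| -r["final_score"] }.first(5)
top_five.map { |r| r["name"] }
# => ["Dev F", "Dev B", "Dev D", "Dev C", "Dev E"]
```

The AI produces the ordering and the reasoning; a human makes the final call.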
What’s next
This was an amazing experiment and a fun challenge, but it’s still only the beginning, and there is a lot of work to do. As we mentioned earlier in this post, the current dataset is not ideal, since it lacks the nuances of each developer’s experience.
To improve that, we plan on building an AI-powered resume reader that can parse the information from a developer’s resume and update our database, keeping the data consistent. We will also use AI to normalize the experience descriptions, so that every experience has the same tone in our database.
We may also switch models to Gemini 3 Pro, given its huge context window, since we may need to read a lot of resumes rather than just a few.
This is still a work in progress, but we’ll keep you posted, and once we have more to show, we’ll write a new blog post presenting the progress and any new findings.
Until then, take care, my friends!
Co-author: Beatriz Freccia
We want to work with you. Check out our Services page!

