The Screening Agent

Product Design • UX Research
The Geektrust screening agent is an AI-powered tool designed to automate the initial candidate evaluation process, replacing manual human screening to make hiring faster, more consistent, and unbiased.
Project overview
The goal was to redesign screening, the first step of hiring, by eliminating human bias and drastically reducing the recruiter time spent on manual evaluations. We wanted to make the process faster, more objective, and scalable for high-volume hiring.

This AI layer acts as the entry point in the Geektrust AI Hiring Ecosystem, seamlessly connecting to the Interview Agent and the Smart Resume, together building a faster, unbiased, and transparent end-to-end hiring experience.
MY ROLE
End-to-end experience designer from concept to prototype
TEAM
Solo designer, with 1 PM; reported to the PM
TIMELINE
3 months,
Feb 2024 – May 2024
🔰  < 10 mins
Average screening time
Screenings that once took up to 35 minutes were completed in under 10 minutes, making the process 3x faster.
🔰  66%
Reduction in manual effort
Recruiters saved hours of repetitive screening work every week, focusing instead on qualified candidates.
🔰  83%
Candidate completion rate
Candidates found the conversational flow natural and engaging, leading to high completion and positive feedback.
Now, let's dive deeper to understand the entire process.
How does the agent fit into the hiring process?
Below is a user story explaining how the screening agent works.
How it started
How does screening work?
Screening is the first gate in hiring and typically happens in two stages: a recruiter screen and a technical screen.


We spoke with recruiters and observed workflows to validate the baseline:

⏰  Recruiter screens take ~10–15 minutes per candidate
⏰  Technical screens take ~30–45 minutes per candidate.


Recruiters form fast decisions across four dimensions:
🙋🏻‍♀️ Candidate fitment against the JD
🗣️ Presentation & credibility
🧑🏻‍💻 Technical skill
🤔 Behavioural traits
The outcome of a screening: Match / Not a match / Moderate match (on hold)
These research findings became our baseline metrics (not hypotheses) and directly shaped the Screening Agent’s design goals: reduce screening time and improve fairness.
Problem statement
🕒 Time constraints → Rushed screening → Missed good candidates
Recruiters handle hundreds of profiles each week, spending 10–15 mins on recruiter screening and 30–45 mins on technical screening. With so little time, many strong candidates get missed early on.

🧩 Limited technical depth → Shallow evaluation → Weak feedback loop
Non-technical recruiters often struggle to ask the right technical questions or evaluate responses, which leads to incomplete assessments and back-and-forth with engineers.

💭 Reliance on gut feeling → Inconsistent and biased decisions
Without a consistent framework or scoring system, decisions vary from person to person and bias can easily creep in.

📅 Operational overhead → Scheduling delays → Slower hiring cycles
Coordination, documentation, and scheduling take up a lot of time — making the screening process slow and inefficient.


Result: Screening becomes slow, inconsistent, and biased, impacting both candidate experience and time-to-hire.

Candidates, meanwhile, faced their own set of problems:
🔍 Keyword bias → Missed opportunities → Unfair shortlisting
Candidates often get filtered out early when recruiters rely too heavily on resume keywords rather than actual skills.

💬 Unprepared conversations → Disjointed experience → Poor evaluation
Calls can feel unstructured when recruiters haven’t reviewed profiles deeply, leading to repetitive or irrelevant questions.

🧭 Unclear job context → Misaligned answers → Lower confidence
Many candidates struggle to understand the role expectations or company priorities, which affects how they present themselves.

❓ Generic questions → Low engagement → Missed potential
When questions aren’t tailored, candidates can’t effectively showcase their strengths or technical depth.

📭 No feedback → Unclear next steps → Frustration
Lack of constructive feedback and multiple repetitive calls make the process tiring and discouraging.


Result: The screening stage often feels opaque, repetitive, and biased, leading to poor candidate experience and lost talent opportunities for companies.
PROCESS
How did we decide to go about it?
We didn’t want to guess how screening works — we wanted to feel it.
So, we sat with recruiters, listened to their calls, and understood what goes on behind every 'Let’s shortlist this one.'
From those conversations, the logic started to take shape.

Our AI screening assistant was designed to think like a recruiter — but faster and more consistent. It reads the JD, company type, size, funding stage, and domain; weighs skills, experience, and traits; and asks only what really matters.

Then came the matching layer — our structured map for decision-making.
It looks at skills, experience, company fit, culture, and compensation expectations, tying everything together into one clear view — so that every match feels intentional, not accidental.
The parameters were segmented into different categories, listed below in order of priority:
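As a concrete sketch, a weighted scorer along these lines could sit behind the matching layer. Everything here is an illustrative assumption: the category names, weights, and thresholds are not the production configuration.

```python
# A minimal sketch of the matching layer, assuming each category gets a
# 0-1 score from the screening conversation plus a weight reflecting its
# priority. Names, weights, and thresholds are illustrative assumptions.

CATEGORY_WEIGHTS = {
    "skills": 0.35,        # fitment against the JD's required skills
    "experience": 0.25,    # years, seniority, relevance of past work
    "company_fit": 0.20,   # company type, size, funding stage, domain
    "culture": 0.10,       # behavioural traits surfaced in conversation
    "compensation": 0.10,  # expectations vs. the role's budget
}

def match_score(category_scores: dict[str, float]) -> float:
    """Combine per-category scores (0-1) into one weighted score."""
    return sum(
        CATEGORY_WEIGHTS[cat] * category_scores.get(cat, 0.0)
        for cat in CATEGORY_WEIGHTS
    )

def decide(score: float) -> str:
    """Map the weighted score to the three screening outcomes."""
    if score >= 0.75:
        return "Match"
    if score >= 0.50:
        return "Moderate match (on hold)"
    return "Not a match"

# Example: strong on skills and experience, but above the comp budget.
scores = {"skills": 0.9, "experience": 0.8, "company_fit": 0.7,
          "culture": 0.6, "compensation": 0.3}
print(decide(match_score(scores)))  # -> Moderate match (on hold), score 0.745
```

The three-way `decide` mapping mirrors the screening outcomes described earlier: Match, Not a match, and Moderate match (on hold).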
How we approached it
Solution
After talking to recruiters and candidates, we saw the same pain from both sides — screening was slow, inconsistent, and biased.

So we built the AI Screening Agent — an AI that talks to candidates like a recruiter, understands their profile, and gives fair, fast results.

Our goal was simple: Make screening faster, smarter, and fairer — without losing the human touch.

The agent could:
  • Read the JD and resume to ask only relevant questions.
  • Adapt based on candidate answers.
  • Generate insights in minutes, not hours.
And for candidates, it offered real value:
  • Instant feedback after every call.
  • A Smart Resume built from the conversation.
  • A direct path to the next round for top performers.
PROCESS
Screening Prompt
For the candidate, the screening conversation ran on specific prompts and instructions provided to GPT. These covered asking follow-up questions when necessary and delivering concise responses to the candidate's questions, ensuring a smooth and efficient screening experience.
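As a rough, hypothetical illustration of how such instructions might be assembled per candidate (the actual prompt isn't reproduced here, and the wording below is an assumption):

```python
# Illustrative sketch only: the instruction wording is an assumption,
# not the prompt used in the product.

def build_screening_prompt(jd: str, resume: str) -> str:
    """Compose the system instructions for one screening session."""
    return "\n".join([
        "You are a screening agent for a technical role.",
        "Ask one question at a time, tailored to the JD and resume below.",
        "Ask a follow-up only when an answer is vague or incomplete.",
        "If the candidate asks about the company or the JD, answer concisely.",
        "Stay neutral: never hint at how an answer is being scored.",
        "",
        f"JOB DESCRIPTION:\n{jd}",
        "",
        f"CANDIDATE RESUME:\n{resume}",
    ])
```

A string like this would then serve as the system message for the session, with the candidate's answers appended turn by turn as the conversation progresses.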
Exploring possibilities
Ideation
Two main things were considered before designing the user flows and wireframes:
🎙️
Screening should be voice first.
Since we were trying to save time and make the experience more human-like, we chose a voice-first approach for the screening process.
📑
Incentives should be included for taking the screening.
Candidates won't be motivated enough to take the screening, as it lacks the seriousness of an interview. To keep them motivated, we needed to offer some form of incentive.
Use Cases
Which possible use cases should be considered while designing the tool?
Bringing structure
Flows
A basic flow was designed to account for all the possible use cases.
Below is a combined flow for how the whole screening process is designed.
Initial Concepts
We went with two approaches: design mid-fidelity wireframes for both concepts, then test which one works better.
Iteration 1
This version took a WhatsApp-style chat approach, with the conversation history visible on screen. The voice option was placed right next to the text box, a common pattern users are already familiar with. A video option was also included as a way to answer questions. Questions were a mix of MCQ, form, and free-text formats.

Iteration 2
We iterated to align our designs with the goal we had set. For that, we made the changes below:
The design went through multiple iterations :)
Stepping back to evaluate
Have we considered everything?
Before we went on to design the final set of wireframes, we asked ourselves, 'Have we considered everything?' Had we thought of all the scenarios, all the questions a user might have before starting the screening process? Since this is an AI round, users will have a lot of questions and doubts.

We started listing all the questions and later grouped them into the categories below:
About Screening
  • So I will not be talking to a human?
  • How much time will it take?
  • If I have questions about the company/JD, will the AI help me in getting the answers?
  • What do I get in return? Will someone even get back to me?
  • Why should I spend so much time on this?
Quick Tips
  • Will it record me?
  • Can I take breaks in between?
  • Will I be able to use another tab in between?
  • Can I do it on a desktop browser?
  • How do you even know it's Advin sitting in front of the camera?
  • Do I need to be prepared for this?
FAQs
  • Will my application even be considered when there is no human?
  • How do I trust that the system will understand my skill set and the depth of my work?
  • What if I lose the internet connection in the middle of the screening?
  • Who should I contact if I encounter technical difficulties during the screening?
  • What happens after the screening round?
  • Will a human get back to me for the next round or an AI?
  • When can I expect to hear back about the results?
  • Will there be further interviews or assessments?
Final Wireframe
We considered the above questions and tried to answer all of them in the final version of the wireframes.
Highlights of the final iteration:
Bringing it to life
Visual Design

Style Guide
The colour palette was kept close to the Geektrust brand guidelines. A few colours were added for a fresh look and feel, since this is an AI tool.
The same typeface is used as in Geektrust's other products.

Final Screens
Seeing It in Action
Evaluation & Outcomes
When the product went live, we wanted to validate whether it truly made screening faster, fairer, and more consistent, not just in theory but in practice.

So, we tracked four key metrics that mapped directly to our goals:
We ran shadow tests with recruiters for the first few weeks, comparing AI-led screening outcomes with human ones, and refined our parameters until the AI–human overlap reached 90%+.
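The overlap measure itself is straightforward; here is a minimal sketch of how such an agreement rate could be computed, with purely illustrative sample data:

```python
# Sketch of the shadow-test comparison: record the AI's outcome and the
# recruiter's outcome for each candidate, then measure how often they
# agree. The sample data below is illustrative, not real results.

def agreement_rate(ai: list[str], human: list[str]) -> float:
    """Fraction of candidates where AI and recruiter outcomes match."""
    assert len(ai) == len(human), "one outcome per candidate from each side"
    return sum(a == h for a, h in zip(ai, human)) / len(ai)

ai_calls    = ["Match", "Not a match", "Moderate match", "Match"]
human_calls = ["Match", "Not a match", "Match",          "Match"]
print(f"{agreement_rate(ai_calls, human_calls):.0%}")  # -> 75%
```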

These numbers mattered because they weren’t vanity metrics; they reflected what we set out to fix — speed, bias, and trust.
Where We’re Headed
Learnings & what's next?
This phase reminded us that great design isn’t just smart, it’s empathetic. We discovered that while AI could mimic judgment, it couldn’t replace human intuition. Recruiters wanted to understand why the bot rated a candidate a certain way, and even subtle wording shifts made interviews feel warmer and more human.

We also noticed friction in the candidate flow. Too many clicks, too many stops. In our next iteration, we’re removing all that clutter. Candidates will simply start and stop answering while the AI manages clarifications in the background.
The experience becomes faster, simpler, and more human, exactly what hiring conversations should be.