Job interviews are often full of biases. Here’s how to hire the best person

When Ginni Rometty was CEO of technology corporation IBM, she introduced “skills-first” hiring, arguing that the filters we typically use, such as education and experience, are not helpful in many jobs.

Instead, companies should ask themselves what skills are required to succeed in a given role—say, computer programming or selling software—and then find job seekers who either have or want to acquire those skills, even if they don’t have a computer science or business degree. By creating on-ramps through internship and apprenticeship opportunities, which are more common in European countries, IBM was able to dramatically broaden its talent pool. A skills-based approach holds the promise of better matches between jobs and employees, Rometty and coauthors suggest.

The question, then, is how to best assess an applicant’s skills during this stage of the hiring process. Do the typical techniques used today, such as interviews and assessment tools, enable us to identify the best talent for the job?

Unfortunately, the evidence suggests the answer is likely no. Interviews, for example, are fraught with problems. Numerous biases can lead us astray. To name but a few: In-group bias makes us prefer people who look like we do; stereotypes lead us to prefer candidates who look like the typical employee; halo effects cause us to put too much weight on first impressions; and confirmation bias makes us look for evidence confirming our gut instincts while ignoring contrary information.

Sadly, seeing an actual person and receiving additional information such as demeanor and appearance does not counteract interviewer bias. In some ways, being confronted with another human makes things worse. We cannot help but be influenced by what job applicants wear (our favorite color, maybe?), how they speak (with a dialect, maybe?), and how they look (attractive, maybe?). Based on a large data set from entrepreneurial pitch competitions as well as laboratory experiments in the U.S., we know that such irrelevant factors affect evaluators. Investors favored pitches delivered by men, especially attractive men, even when the substance of the pitch was identical to pitches presented by women.

In light of this, we should not be surprised that interviews, particularly unstructured ones, are bad predictors of future performance. It is in these unstructured contexts that unconscious bias flourishes. When people have discretion in their judgments, rules of thumb such as stereotypes are hard to avoid.

Here are a few ways to make interviews and other formal assessment tools more effective and fair:

Create an Interview Checklist

It all starts with a simple list. What is it that you want to evaluate? Determine the skills, knowledge, and competencies a successful candidate should have and design the questions you want to ask accordingly. Each question should elicit information that allows you to better assess something you care about—and, ideally, focus squarely on the competencies required. We are always astonished to discover that questions like “Please tell us about yourself” or “What are your greatest strengths?” are still beloved by many interviewers. What competencies are these questions testing, exactly?

It is also important to define the criteria you will use to evaluate responses beforehand so that you know what you are looking for when talking to a candidate. It is easy to be swayed by, say, the first candidate’s vision but then completely focus on execution when you talk to the second candidate. The list will help you focus and make sure you collect comparable information on all the criteria you care about.

To conduct a gold standard structured interview, ask all candidates the same set of questions in the same order. Determine a scoring rubric and the weights you want to give to each question beforehand. You might want to weight all of them equally or you may decide that the responses to your first and your fourth question are essential, so they should get more weight.
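The book does not prescribe any particular tooling, but the weighting scheme described above is easy to make concrete. Here is a minimal sketch in Python; the question names, weights, and scores are hypothetical, chosen only to illustrate giving essential questions (here, the first and fourth) double weight.

```python
# Hypothetical weighted interview scorecard. Question names and weights
# are illustrative, not taken from the book.
QUESTION_WEIGHTS = {
    "q1_problem_solving": 2.0,   # essential question: double weight
    "q2_communication": 1.0,
    "q3_domain_knowledge": 1.0,
    "q4_collaboration": 2.0,     # essential question: double weight
}

def weighted_score(scores: dict) -> float:
    """Combine per-question scores (e.g. on a 1-5 rubric) into one
    weighted average, using the weights decided before the interviews."""
    total_weight = sum(QUESTION_WEIGHTS.values())
    return sum(QUESTION_WEIGHTS[q] * s for q, s in scores.items()) / total_weight

# One candidate's scores on the 1-5 rubric (illustrative).
candidate = {
    "q1_problem_solving": 4,
    "q2_communication": 3,
    "q3_domain_knowledge": 5,
    "q4_collaboration": 4,
}
print(round(weighted_score(candidate), 2))  # prints 4.0
```

Because the weights are fixed before anyone is interviewed, every candidate is scored against the same yardstick, which is the point of the structured approach.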

Improve the Interview Process

In addition to designing a set of questions based on what you look for in a candidate and deciding on the scoring of the responses and the weighting of the questions, you also need to think about who will be involved in the interview process. Note that while it is helpful for candidates to meet a diverse set of interviewers, diversity on the selection committee alone does not guarantee an unbiased evaluation.

In interviews, have candidates meet the evaluators one-on-one. While panel interviews are common, we advise against them. On a panel, interviewers are unable to form truly independent judgments as they will be influenced by each other, increasing the likelihood that they fall prey to groupthink, where the group’s judgment is worse than the aggregate of the interviewers’ individual assessments.

Much of this influence is subtle and unconscious, such as noticing whether a fellow interviewer is leaning forward or back (indicating interest or disinterest in what the candidate is saying); whether their tone of voice is excited or judgmental; and whether they are nodding along and taking prolific notes as the candidate is speaking, or checking the messages on their phone instead.

When interviewing, take notes for each response received and compare candidates' responses horizontally, that is, question by question across all candidates rather than candidate by candidate. Submit your scores, multiplied by the weights you have assigned to the questions, to the person leading the recruitment process (often someone from HR), who can then aggregate the final scores received for each candidate.
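The aggregation step can be sketched as follows. This is an illustration, not the authors' method: each interviewer independently submits an already-weighted score for a candidate, and the recruitment lead simply averages them, with no discussion beforehand.

```python
# Hypothetical aggregation of independently submitted, weighted scores
# for one candidate. Interviewer names and scores are illustrative.
from statistics import mean

submitted_scores = {
    "interviewer_a": 4.2,
    "interviewer_b": 3.8,
    "interviewer_c": 4.0,
}

# Average the independent judgments; averaging only after everyone has
# submitted preserves the independence that one-on-one interviews create.
final_score = mean(submitted_scores.values())
print(round(final_score, 2))  # prints 4.0
```

Averaging independent scores captures the point made below about groupthink: the combined judgment is only as good as the independence of its inputs.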

Much like you should not meet with a job candidate in a group, you should not discuss your thoughts with other evaluators before you have submitted your scores. It is just too easy to fall right back into what you have successfully averted by meeting with the candidates individually: groupthink. The territory is particularly treacherous if you hear the most senior person’s opinion before you have made up your own mind. A good practice is that even in the final calibration meeting, after everyone has submitted their scores, the most senior person speaks last.

You cannot leave the evaluation of your candidates up to your gut instinct. The more discipline we can add to the evaluation process—by moving from unstructured to structured interviews and from informal to formal skills-based assessment tools—the more likely we will be able to identify the best possible job candidate. And what is even better, in most cases the additional rigor also helps us overcome our biased assessments, particularly if we examine the impacts our tests might have on various groups beforehand.

From the book MAKE WORK FAIR: Data-Driven Design for Real Results by Iris Bohnet and Siri Chilazi Copyright 2025 by Iris Bohnet and Siri Chilazi. Reprinted by permission of HarperCollins Publishers.
