How we built an IT hiring process that curbs bias
Storj Labs VP of Engineering explains how changes like using pseudonyms and paying candidates for solutions to interview problems had a big impact on diversity
As the world has accelerated, there’s a growing emphasis on what – and who – has been left behind, and at what cost. While technology has provided a huge opportunity to some, it has also amplified inequitable situations for others.
Here at Storj Labs, we recently deployed a new hiring process that aims to accomplish two things: Build the most diverse, inclusive, and welcoming team we can. And build the strongest technical team we can.
Before I share the details, I want to stress that pursuing a diverse team is the right thing to do, full stop. It also happens to be the best choice a business can make in terms of benefits it can provide. Studies have repeatedly shown that diverse teams are more successful, perform better financially, and are more innovative. We all want that for our companies.
While diversity represents having multiple viewpoints in the same room, inclusion is having them want to be there. Inclusion is just as important as diversity, if not more so. While this blog post is specifically about hiring, it’s worth noting that this is not a complete solution; creating a hiring process that promotes diversity is worthless if your newly hired employees don’t feel welcome.
Building a new hiring process
The hiring process outlined here definitely has trade-offs, but we feel the downsides are offset by the benefits. Here are a few aspects of our hiring process that specifically help curb bias.
It’s not enough to simply say that you encourage candidates who might add diversity to your team to apply. You must find inputs to your recruiting funnel that represent greater diversity than the diversity in your field. It takes extra energy and effort.
Ultimately, we believe that the key ingredient to raising the bar is widening the net we’re casting. The more people we interview, the more options we have, and the better candidates we can find. It’s crazy not to open as many doors as possible or not to level the playing field for people who haven’t had a shot yet.
In addition to your traditional recruiting methods, which will recruit traditional candidates, find communities where underrepresented individuals are likely to be and promote your positions there. Include.io, Lesbians Who Tech, POCIT (People of Color In Tech), Tech Ladies, PowerToFly, and Women Who Tech are just a few to consider.
Name-blind homework problem
Our interview process happens in a few different stages, but I want to take a minute and dive into our most important stage: the name-blind homework problem. This part of our interview process is the biggest, most time consuming, and potentially the most controversial, precisely because of how many trade-off decisions it makes. So, before we jump into what the downsides are, and why we do it anyway, let me just outline what we do.
After initially screening a candidate for potential minor logistical issues (are they applying for the job they think they’re applying for, etc.), we invite candidates to an interview-specific Slack channel with a randomly chosen pseudonym. Interview candidates join the Slack channel to anonymously talk with the team.
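As a rough sketch of how pseudonym assignment could work (the name pool and helper here are hypothetical, not Storj's actual tooling), the key idea is random selection from a pool, excluding names already in use:

```python
import secrets

# Hypothetical pool of neutral pseudonyms; real tooling would track
# which names are already attached to active interviews.
PSEUDONYMS = ["Quasar", "Nimbus", "Saffron", "Lyric", "Onyx", "Juniper"]

def assign_pseudonym(taken: set[str]) -> str:
    """Pick a random pseudonym that is not already in use."""
    available = [name for name in PSEUDONYMS if name not in taken]
    if not available:
        raise RuntimeError("pseudonym pool exhausted; add more names")
    return secrets.choice(available)

taken = {"Onyx"}
alias = assign_pseudonym(taken)
print(alias)  # one of the remaining names, chosen at random
```

Using `secrets.choice` rather than `random.choice` avoids any predictable pattern in which candidate gets which alias.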
Then, the hiring manager posts a link to a homework problem in the channel. We try to select a problem that:
- Is clear and concise, but with deep complexity.
- Is fairly representative of day-to-day work.
- Is considerably challenging, but preferably not because it demands a lot of prior specialized knowledge.
- Requires design discussion and architectural considerations prior to implementation.
- Can be completed by our target candidates in less than eight hours.
We don’t set a deadline on the assignment because we want to be flexible with candidates’ schedules. We answer any questions the candidate has, discussing potential solutions and tradeoffs, and let the candidate get to work.
Finally, we run the homework submission against tests and evaluate the candidate's code against a checklist. We pay for problem-solution submissions (whether or not they work completely).
Why do we do it this way?
First, we want our interview to hinge on evaluating a candidate’s ability to do work in as real an environment as possible. We want a direct measurement of whether they can do the job in question, rather than correlated proxies such as whiteboard interviews or interview puzzles.
We want candidates to use their own IDE and programming language, to have access to documentation, and to work with as little time pressure as possible; we also want to see how they communicate remotely, because much of our own team is remote. We do our best to engage our candidates in a discussion about the problem; in fact, the Slack conversation itself influences our hiring decision. How the candidate works with the team to solve the problem and how they collaborate is a huge factor in what we’re looking for.
Second, we use pseudonyms to let the candidate’s work stand for itself. We want this stage of our interview process to simply select the best possible candidates, independent of as many other factors as possible. This part is precisely how we use our focus on diversity in recruiting to raise the bar — by including a wider pool of applicants, we can be even more selective at this name-blind stage.
Third, we use a hard problem that isn’t completely specified. Just like real tickets, the problem statement has assumptions that aren’t clear and require some level of additional requirements gathering and clarification. We want a problem that most people won’t ace. The greater the fidelity of the test, the more an excellent candidate will stand out. This, combined with our lack of specific experience criteria, is intended to give inexperienced but sharp candidates the ability to shine.
Fourth, we pay people. The major downside of our problem is it takes time, potentially time the candidate doesn’t have. This is something we’re pretty torn about. Unfortunately, a homework problem might eliminate people with busier home lives or who need to work elsewhere until they land the job with us, which is why we don’t set deadlines on assignments. It’s challenging to compress our hard problem into the span of an hour, but we’re hopeful that compensating our candidates will help.
Fifth, we try to grade as evenly and routinely as possible. We have a rubric and a test harness. Even though this stage of the interview is already name-blind, we still want to avoid any bias that might cause us to prefer anything besides what is explicitly stated as criteria in the homework problem description.
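A minimal sketch of that kind of uniform, weighted grading (the rubric items and weights below are invented for illustration; they are not Storj's actual criteria):

```python
# Hypothetical rubric: each criterion maps to a weight. Graders record a
# pass/fail per criterion so every submission is scored the same way.
RUBRIC = {
    "passes_test_harness": 5,
    "handles_unclear_requirements": 3,
    "code_readability": 2,
    "design_discussion_in_slack": 2,
}

def score_submission(results: dict[str, bool]) -> float:
    """Return the weighted fraction of rubric criteria the submission met."""
    total = sum(RUBRIC.values())
    earned = sum(weight for item, weight in RUBRIC.items()
                 if results.get(item, False))
    return earned / total

example = {
    "passes_test_harness": True,
    "handles_unclear_requirements": True,
    "code_readability": False,
    "design_discussion_in_slack": True,
}
score = score_submission(example)
print(round(score, 2))  # 10 of 12 weighted points → 0.83
```

Because every grader applies the same checklist with the same weights, two reviewers scoring the same anonymous submission should land on the same number.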
If the candidate passes this stage, our last phase is essentially a team meet-and-greet for bidirectional interviewing. In that final stage we look to make sure the candidate is a culture add (as opposed to a culture fit), and that there are no huge red flags.
Lessons learned about bias in IT hiring
Since implementing this new hiring process, we’ve hired some great candidates, we’ve diversified our candidate pool, and our company (albeit slowly) has become more diverse as well.
To give some specific numbers: of the 10 candidates currently in the final stages of our hiring process (homework, final interview, accepted an offer, or waiting to start), three identify as LGBTQ, three identify as women, and five are people of color. For our team of 50, that’s certainly a start. We have a long way to go, but our direction feels promising.
In addition, I think this approach has helped our entire team realize several issues with traditional hiring methods. If a team is made up of mostly white males, the questions often asked in the standard whiteboard interview process (as well as the answers often expected) are likely more suited for white males.
Using a direct work problem combined with Slack helps alleviate this tendency a little bit, as responses are addressed asynchronously, allowing a candidate’s work to shine on its own merits and giving candidates the opportunity to share their best answers, rather than their fastest answers.
I hope this overview helps others analyze their own hiring processes to see if there are ways they can eliminate bias. If you have feedback or suggestions on how we can improve this, please share your thoughts in the comments section below or on Twitter.