- asrobson
Challenging The Coding Interview

As long as I've worked in software, the interview process has hyper-focused on a candidate's ability to write software in an interview setting. There are many problems with how this plays out in practice. I hope a review of the biases and logical errors at play encourages folks involved in hiring strategy to revisit how software engineering candidates are evaluated.
Survivorship Bias
When was the last time you brought one of your software engineers into an unfamiliar environment with strangers and asked them to solve a production issue in 60 minutes or less, with the stipulation that their job depended on the quality and correctness of their solution in that moment? Whether you've managed to design a problem that closely resembles a typical coding task or you rely on well-defined problems with a best answer, the context and environment need to be considered.
There are software engineers who excel under pressure, but unless performing under pressure is an actual job requirement, you're basing a hiring signal on the incidental ability to solve problems under unrealistic circumstances. You will absolutely miss out on fantastic talent if you rule out candidates based on whether they can survive an arbitrarily difficult exercise.
Implicit Bias
A major issue at hand is the ongoing failure to build diverse workforces that are representative of the available talent. The evidence is clear: diverse teams produce better business outcomes. So why aren't we seeing more progress in diversity, equity, and inclusion in our software engineering organizations? Existing team members may have had the time and resources to study and master algorithmic and data-structure coding problems, but don't assume every qualified candidate has had the same access and opportunities. Capable candidates with an important and potentially missing skillset (to say nothing of new perspectives and experiences) will be disqualified if the test for competency selects for characteristics or interests that don't actually reflect the responsibilities of the job.
For the majority of software engineers working in application development, if there's a "best way" to solve a problem, it has already been productized or open-sourced. Memorizing solutions to these kinds of problems is a large investment for little practical professional benefit. Shouldn't the correct answer to these challenges be to reach for a well-tested solution in a standard or open source library?
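To illustrate the point, here's a sketch in Python of a classic interview staple, "find the k largest elements." The interview-approved answer is a hand-rolled heap or quickselect; the professional answer is one line delegating to the standard library (the function name and sample data are mine, chosen for illustration):

```python
import heapq

def k_largest(values, k):
    """Return the k largest items in descending order.

    Delegates to the well-tested standard library (heapq.nlargest)
    instead of a hand-rolled heap or quickselect implementation.
    """
    return heapq.nlargest(k, values)

print(k_largest([3, 1, 4, 1, 5, 9, 2, 6], 3))  # → [9, 6, 5]
```

The library version is shorter, already optimized, and covered by the interpreter's own test suite — exactly the trade-off a working engineer is expected to make on the job.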
Fundamental Attribution Error
If a candidate excels at reproducing algorithms over rudimentary data structures, can we conclude that they will be equally skilled at the kinds of tasks required to be a successful member of the team? Companies relying on this style of interview implicitly answer with an enthusiastic "yes!" They observe a candidate exhibit some behavior or ability and attribute it to a fundamental quality, without considering the nuance introduced by the context and situation in which the candidate was observed. A candidate who does well at solving popular interview problems, for which abundant guidance and study material exist, is not necessarily a good software engineer. A candidate who struggles with coding in an interview setting, on types of problems that are not representative of the skills required to succeed, is not necessarily a poor one.
Don't Miss Out
I've worked with enough organizations whose coding exercises failed to weed out poor software engineers yet regularly eliminated candidates with valuable experience, ability, and interpersonal skills.
More than once, I've ignored the feedback from a live coding exercise required by the process when all other signs pointed to a great candidate who would be a wonderful addition to the team. I've never regretted hiring those folks. Make hiring decisions based solely on predictive signals that represent the job requirements and your team's needs, and you'll build balanced, diverse teams that lead to better business outcomes.