The Eng Hiring Bar: What the hell is it?

Posted on March 31st, 2020.

Recursive Cactus has been working as a full-stack engineer at a well-known tech company for the past 5 years, but he’s now considering a career move.

Over the past 6 months, Recursive Cactus (that’s his anonymous handle on interviewing.io) has been preparing himself to succeed in future interviews, dedicating as much as 20-30 hours/week to plowing through LeetCode exercises, digesting algorithms textbooks, and, of course, practicing interviews on our platform to benchmark his progress.

Recursive Cactus’s typical weekday schedule

Time | Activity
6:30am – 7:00am | Wake up
7:00am – 7:30am | Meditate
7:30am – 9:30am | Practice algorithmic questions
9:30am – 10:00am | Commute to work
10:00am – 6:30pm | Work
6:30pm – 7:00pm | Commute from work
7:00pm – 7:30pm | Hang out with wife
7:30pm – 8:00pm | Meditate
8:00pm – 10:00pm | Practice algorithmic questions

Recursive Cactus’s typical weekend schedule

Time | Activity
8:00am – 10:00am | Practice algorithmic questions
10:00am – 12:00pm | Gym
12:00pm – 2:00pm | Free time
2:00pm – 4:00pm | Practice algorithmic questions
4:00pm – 7:00pm | Dinner with wife & friends
7:00pm – 9:00pm | Practice algorithmic questions

But this dedication to interview prep has been taking an emotional toll on him, his friends, and his family. Study time crowds out personal time, to the point where he basically has no life beyond work and interview prep.

“It keeps me up at night: what if I get zero offers? What if I spent all this time, and it was all for naught?”

We’ve all been through the job search, and many of us have found ourselves in a similar emotional state. But why is Recursive Cactus investing so much time, and what’s the source of this frustration?

He feels he can’t meet the engineer hiring bar (aka “The Bar”), that generally accepted minimum level of competency that all engineers must exhibit to get hired.

To meet “The Bar,” he’s chosen a specific tactic: to look like the engineer that people want, rather than just be the engineer that he is.

It seems silly to purposefully pretend to be someone you’re not. But if we want to understand why Recursive Cactus acts the way he does, it helps to know what “The Bar” actually is. And when you think a little more about it, you realize there’s not such a clear definition.

Defining “The Bar”

What if we look at how the FAANG companies (Facebook, Amazon, Apple, Netflix, Google) define “The Bar?” After all, it seems those companies receive the most attention from pretty much everybody, job seekers included.

Few of them disclose specific details about their hiring process. Apple doesn’t publicly share any information. Facebook describes the stages of their interview process, but not their assessment criteria. Netflix and Amazon both say they hire candidates that fit their work culture/leadership principles. Neither Netflix nor Amazon describes exactly how they assess against their respective principles. However, Amazon does share how interviews get conducted as well as software topics that could be discussed for software developer positions.

The most transparent of all the FAANGs, Google publicly discloses its interview process in the most detail, with Laszlo Bock’s book Work Rules! adding even more insider color about how that process came to be.

And speaking of tech titans and the recent past, Aline (our founder) mentioned the 2003 book How Would You Move Mount Fuji? in a prior blog post, which recounted Microsoft’s interview process when they were the pre-eminent tech behemoth of the time.

In order to get a few more data points about how companies assess candidates, I also looked at Gayle Laakmann McDowell’s “Cracking the Coding Interview”, which is effectively the Bible of interviewing for prospective candidates, as well as Joel Spolsky’s Guerilla Guide to Interviewing 3.0, written by an influential and well-known figure within tech circles over the past 20-30 years.

Definitions of “The Bar”

Source | Assessment Criteria
Apple | Not publicly shared
Amazon | Assessed against Amazon’s Leadership Principles
Facebook | Not publicly shared
Netflix | Not publicly shared
Google | 1. General cognitive ability; 2. Leadership; 3. “Googleyness”; 4. Role-related knowledge
Cracking the Coding Interview (Gayle Laakmann McDowell) | Analytical skills; coding skills; technical knowledge/computer science fundamentals; experience; culture fit
Joel Spolsky | Be smart; get things done
Microsoft (circa 2003) | “The goal of Microsoft’s interviews is to assess a general problem-solving ability rather than a specific competency”; “bandwidth, inventiveness, creative problem-solving ability, outside-the-box thinking”; “hire for what people can do rather than what they’ve done”; motivation

Defining “Intelligence”

It’s not surprising that coding and technical knowledge would be part of any company’s software developer criteria. After all, that is the job.

But beyond that, the most common criterion shared among all these entities is some concept of intelligence. Though they use different words and define the terms slightly differently, all point to some notion of what psychologists call “cognitive ability.”

Source | Definition of cognitive ability
Google | “General Cognitive Ability. Not surprisingly, we want smart people who can learn and adapt to new situations. Remember that this is about understanding how candidates have solved hard problems in real life and how they learn, not checking GPAs and SATs.”
Microsoft (circa 2003) | “The goal of Microsoft’s interviews is to assess a general problem-solving ability rather than a specific competency… It is rarely clear what type of reasoning is required or what the precise limits of the problem are. The solver must nonetheless persist until it is possible to bring the analysis to a timely and successful conclusion.”
Joel Spolsky | “For some reason most people seem to be born without the part of the brain that understands pointers.”
Gayle Laakmann McDowell | “If you’re able to work through several hard problems (with some help, perhaps), you’re probably pretty good at developing optimal algorithms. You’re smart.”

All these definitions of intelligence resemble early 20th-century psychologist Charles Spearman’s theory of intelligence, the most widely acknowledged framework for intelligence. After performing a series of cognitive tests on schoolchildren, Spearman found that those who did well on one type of test tended to also perform well on the others. This insight led Spearman to theorize that a single underlying general ability factor (called “g” or the “g factor”) influences all performance, separate from task-specific abilities (named “s”).

If you believe in the existence of “g” (many do, some do not… there exist different theories of intelligence), finding candidates with high measures of “g” aligns neatly with the intelligence criteria companies look for.

While criteria like leadership and culture fit matter to companies, “The Bar” is not usually defined in those terms. “The Bar” is defined as having technical skills but also (and perhaps more so) having general intelligence. After all, candidates aren’t typically coming to interviewing.io to specifically practice leadership and culture fit.

The question then becomes how you measure these two things. Measuring technical skills seems tough but doable, but how do you measure “g?”

Measuring general intelligence

Mentioned in Bock’s book, Frank Schmidt’s and John Hunter’s 1998 paper “The Validity and Utility of Selection Methods in Personnel Psychology” attempted to answer this question by analyzing a diverse set of 19 candidate selection criteria to see which predicted future job performance the best. The authors concluded general mental ability (GMA) best predicted job performance based on a statistic called “predictive validity.”
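
To make “predictive validity” concrete, here’s a minimal sketch of the statistic it refers to: the correlation between a selection-method score and a later measure of job performance. The numbers are invented for illustration and have nothing to do with the paper’s data.

```python
import numpy as np

# Hypothetical data: one selection-method score and a later job-performance
# rating for the same ten candidates (illustrative numbers only).
selection_scores = np.array([52, 61, 70, 48, 85, 66, 73, 58, 90, 64])
job_performance  = np.array([3.1, 3.4, 3.9, 2.8, 4.5, 3.5, 4.0, 3.2, 4.7, 3.6])

# Predictive validity is typically reported as the correlation between the
# selection score and the later performance measure.
validity = np.corrcoef(selection_scores, job_performance)[0, 1]
print(f"predictive validity (r) = {validity:.2f}")
```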

In this study, a GMA test referred to an IQ test. But for Microsoft circa 2003, puzzle questions like “How many piano tuners are there in the world?” appear to have taken the place of IQ tests for measuring “g”. Their reasoning:

“At Microsoft, and now at many other companies, it is believed that there are parallels between the reasoning used to solve puzzles and the thought processes involved in solving the real problems of innovation and a changing marketplace. Both the solver of a puzzle and a technical innovator must be able to identify essential elements in a situation that is initially ill-defined.”

– “How Would You Move Mount Fuji?” – page 20

Fast forward to today, Google denounces this practice, concluding that “performance on these kinds of questions is at best a discrete skill that can be improved through practice, eliminating their utility for assessing candidates.”

So here we have two companies who test for general intelligence, but who also fundamentally disagree on how to measure it.

Are we measuring specific or general intelligence?

But maybe, as Spolsky and McDowell have argued, the traditional algorithmic and computer science-based interview questions are themselves effective tests for general intelligence. Hunter & Schmidt’s study contains some data points that could support this line of reasoning. Among all single-criteria assessment tools, work sample tests possessed the highest predictive validity. Additionally, in the highest-validity regression result for a two-criteria assessment tool (a GMA test plus a work sample test), the standardized effect size on the work sample rating was larger than that on the GMA rating, suggesting a stronger relationship with future job performance.
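
For readers who want to see what comparing standardized effect sizes looks like mechanically, here’s a small sketch on synthetic data (the relationships below are made up, not Schmidt and Hunter’s): z-score the predictors and the outcome, fit a two-predictor regression, and compare the resulting betas.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic candidate data: a GMA score, a work-sample score, and later job
# performance. The coefficients below are invented for illustration.
gma = rng.normal(100, 15, n)
work_sample = 0.5 * gma + rng.normal(0, 10, n)
performance = 0.02 * gma + 0.05 * work_sample + rng.normal(0, 1, n)

def zscore(x):
    return (x - x.mean()) / x.std()

# Regress standardized performance on both standardized predictors; the
# fitted coefficients (standardized betas) are directly comparable effect sizes.
X = np.column_stack([zscore(gma), zscore(work_sample)])
y = zscore(performance)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"beta_gma = {betas[0]:.2f}, beta_work_sample = {betas[1]:.2f}")
```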

If you believe algorithmic exercises function as work sample tests in interviews, then the study suggests traditional algorithm-based interviews could predict future job performance, maybe even more than a GMA/IQ test.

Recursive Cactus doesn’t believe there’s a connection.

There’s little overlap between the knowledge acquired on the job and knowledge about solving algorithmic questions. Most engineers rarely work with graph algorithms or dynamic programming. In application programming, lists and dictionary-like objects are the most common data structures. However, interview questions involving those are often seen as trivial, hence the focus on other categories of problems.

– Recursive Cactus

In his view, algorithm questions are similar to Microsoft’s puzzle questions: you learn how to get good at solving interview problems, which to him never show up in actual day-to-day work. If that’s true, they wouldn’t function as the work samples the Schmidt & Hunter study describes.

Despite Recursive Cactus’s personal beliefs, interviewers like Spolsky still believe these skills are vital to being a productive programmer.

A lot of programmers that you might interview these days are apt to consider recursion, pointers, and even data structures to be a silly implementation detail which has been abstracted away by today’s many happy programming languages. “When was the last time you had to write a sorting algorithm?” they snicker.

Still, I don’t really care. I want my ER doctor to understand anatomy, even if all she has to do is put the computerized defibrillator nodes on my chest and push the big red button, and I want programmers to know programming down to the CPU level, even if Ruby on Rails does read your mind and build a complete Web 2.0 social collaborative networking site for you with three clicks of the mouse.

– Joel Spolsky

Spolsky seems to concede that traditional tech interview questions might not mimic actual work problems, and therefore wouldn’t act as work samples. Rather, it seems he’s testing for general computer science aptitude, which is general in a way, but specific in other ways. General intelligence within a specific domain, one might say.

That is, unless you believe intelligence in computer science is general intelligence. McDowell suggests this:

There’s another reason why data structure and algorithm knowledge comes up: because it’s hard to ask problem-solving questions that don’t involve them. It turns out that the vast majority of problem-solving questions involve some of these basics.

– Gayle Laakmann McDowell

This could be true assuming you view the world primarily through computer science lenses. Still, it seems pretty restrictive to suggest people who don’t speak the language of computer science would have more difficulty solving problems.

At this point, we’re not really talking about measuring general intelligence as Spearman originally defined it. Rather, we’re talking about a specific intelligence, defined and propagated by people trained in traditional computer science programs, and conflating it with general intelligence (Spolsky, McDowell, Microsoft’s Bill Gates, and 4 of 5 FAANG founders studied computer science at either an Ivy League university or Stanford).

Maybe when we’re talking about “The Bar,” we’re really talking about something subjective, based on whoever is doing the measuring, and whose definition might not be consistent from person-to-person.

Looking at candidate assessment behavior from interviewers on the interviewing.io platform, you can find some evidence that supports this hypothesis.

“The Bar” is subjective

On the interviewing.io platform, people can practice technical interviews online and anonymously, with interviewers from top companies on the other side. Interview questions on the platform tend to fall into the category of what you’d encounter at a phone screen for a back-end software engineering role, and interviewers typically come from companies like Google, Facebook, Dropbox, Airbnb, and more. Check out our interview showcase to see how this all looks and to watch people get interviewed. After every interview, interviewers rate interviewees on a few different dimensions: technical skills, communication skills, and problem-solving skills. Each dimension gets rated on a scale of 1 to 4, where 1 is “poor” and 4 is “amazing!”. You can see what our feedback form looks like below:

If you do well in practice, you can bypass applying online/getting referrals/talking to recruiters and instead immediately book real technical interviews directly with our partner companies (more on that in a moment).

When observing our most frequent practice interviewers, we noticed differences across interviewers in the percentage of candidates each one would hire, which we call the passthrough rate. Passthrough rates ranged anywhere between 30% and 60%. At first glance, certain interviewers seemed to be a lot stricter than others.

Because interviewees and interviewers are anonymized and matched randomly[1], we wouldn’t expect the quality of candidates to vary much across interviewers, and as a result, wouldn’t expect interviewee quality to explain the difference. Yet even after accounting for candidate attributes like experience level, differences in passthrough rates persist[2].
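
As a rough sketch of the kind of check involved (the table and column names below are illustrative, not our actual schema), you can compute each interviewer’s raw passthrough rate and then recompute it within experience levels to see whether candidate mix explains the gap.

```python
import pandas as pd

# Hypothetical interview log; the schema is made up for illustration.
interviews = pd.DataFrame({
    "interviewer_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 3],
    "candidate_experience": ["junior", "senior", "senior", "junior", "junior",
                             "senior", "junior", "senior", "junior", "senior"],
    "would_hire": [True, False, True, False, False, True, True, True, False, True],
})

# Raw passthrough rate per interviewer.
raw = interviews.groupby("interviewer_id")["would_hire"].mean()

# Passthrough rate per interviewer within each experience level, so a
# stricter-looking interviewer can't be explained by a weaker candidate mix.
adjusted = (interviews
            .groupby(["candidate_experience", "interviewer_id"])["would_hire"]
            .mean())

print(raw)
print(adjusted)
```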

Maybe some interviewers choose to be strict on purpose because their bar for quality is higher. While it’s true that candidates who practiced with stricter interviewers tended to receive lower ratings, they also tended to perform better on their next practice.

This result could be interpreted in a couple of ways:

  • Stricter interviewers might systematically underrate candidates
  • Candidates get so beat up by strict interviewers that they tended to improve more between practices, striving to meet their original interviewer’s higher bar

If the latter were true, you would expect that candidates who practiced with stricter interviewers would perform better in real company interviews. However, we did not find a correlation between interviewer strictness and future company interview passthrough rate, based on real company interviews conducted on our platform[3].
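
The check itself is simple in spirit: pair each candidate’s practice-interviewer strictness with whether they later passed a real company interview, then look for a relationship. A toy version with invented numbers:

```python
import pandas as pd

# Toy data: each row is one candidate. Strictness here is 1 minus the practice
# interviewer's long-run passthrough rate; the outcome is whether the candidate
# later passed a real company interview on the platform. Numbers are invented.
df = pd.DataFrame({
    "practice_interviewer_strictness": [0.40, 0.40, 0.45, 0.55, 0.60, 0.60, 0.65, 0.70],
    "passed_company_interview":        [1,    0,    1,    0,    1,    1,    0,    1],
})

# If strict practice made candidates better, strictness would correlate
# positively with later company-interview outcomes.
r = df["practice_interviewer_strictness"].corr(df["passed_company_interview"])
print(f"correlation between strictness and later passthrough: {r:.2f}")
```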

Interviewers on our platform represent the kinds of people a candidate would encounter in a real company interview, since those same people also conduct phone screens and onsites at the tech companies you’re all applying to today. And because we don’t dictate how interviewers conduct their interviews, these graphs could be describing the distribution of opinions about your interview performance once you hang up the phone or leave the building.

This suggests that, independent of your actual performance, whom you interview with could affect your chance of getting hired. In other words, “The Bar” is subjective.

This variability across interviewers led us to reconsider our own internal definition of “The Bar,” which determined which candidates were allowed to interview with our partner companies. Our definition strongly resembled Spolsky’s binary criteria (“be smart”), weighing an interviewer’s Would Hire opinion far more heavily than our other 3 criteria, leading to the bimodal, camel-humped distribution below.

While our existing scoring system correlated decently with future interview performance, we found that an interviewer’s Would Hire rating wasn’t as strongly associated with future performance as our other criteria were. We lessened the weight on the Would Hire rating, which in turn improved our predictive accuracy[4]. Just like in “Talladega Nights,” when Ricky Bobby learned there existed places other than first place and last place in a race, we learned that it was more beneficial to think beyond the binary construct of “hire” vs. “not hire,” or if you prefer, “smart” vs. “not smart.”
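
To illustrate what de-emphasizing the Would Hire rating can look like, here’s a minimal sketch. The dimensions mirror our feedback form, but the weights and the normalization are invented for illustration and are not our actual scoring model.

```python
# One candidate's (hypothetical) feedback ratings.
ratings = {"would_hire": 1.0,       # 0 = no, 1 = yes
           "technical": 3.0,        # 1-4 scale
           "communication": 3.5,    # 1-4 scale
           "problem_solving": 2.5}  # 1-4 scale

def normalize(ratings):
    # Put every dimension on a 0-1 scale so the weights are comparable.
    out = dict(ratings)
    for k in ("technical", "communication", "problem_solving"):
        out[k] = (ratings[k] - 1) / 3
    return out

def composite(ratings, weights):
    r = normalize(ratings)
    return sum(r[k] * w for k, w in weights.items())

# Made-up weights: a Would Hire-dominated score vs. a more balanced one.
old_weights = {"would_hire": 0.70, "technical": 0.10, "communication": 0.10, "problem_solving": 0.10}
new_weights = {"would_hire": 0.25, "technical": 0.25, "communication": 0.25, "problem_solving": 0.25}

print("old-style score:  ", round(composite(ratings, old_weights), 2))
print("re-weighted score:", round(composite(ratings, new_weights), 2))
```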

Of course, we didn’t get rid of all the subjectivity, since those other criteria were also chosen by the interviewer. And this is what makes assessment hard: an interviewer’s assessment is itself the measure of candidate ability.

If that measurement isn’t anchored to a standard definition (like we hope general intelligence would be), then the accuracy of any given measurement becomes less certain. It’s as if interviewers used measuring sticks of differing lengths, but all believed their own stick represented the same length, say 1 meter.

When we talked to our interviewers to understand how they assessed candidates, it became even more believable that different people might be using measuring sticks of differing lengths. Here are some example methods interviewers used to rate candidates (a toy sketch after the list shows how two of them can disagree about the same candidate):

  • Ask 2 questions. Pass if the candidate answers both
  • Ask questions of varying difficulty (easy, medium, hard). Pass if the candidate answers a medium
  • Speed of execution matters a lot; pass if the candidate answers “fast” (“fast” not clearly defined)
  • Speed doesn’t matter much; pass if the candidate has a working solution
  • Candidates start with full points; dock points as they make mistakes
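
Here’s that toy sketch: two of the rubrics above applied to the same invented interview performance, reaching opposite conclusions.

```python
# An invented summary of one interview performance.
performance = {
    "questions_answered": ["easy", "medium"],  # difficulty of the problems solved
    "minutes_to_solution": 40,
    "solution_works": True,
}

def rubric_difficulty(p):
    # Pass if the candidate answers at least a medium-difficulty question.
    return "medium" in p["questions_answered"] or "hard" in p["questions_answered"]

def rubric_speed(p, fast_threshold_minutes=30):
    # Pass only if a working solution arrived "fast" (threshold is made up).
    return p["solution_works"] and p["minutes_to_solution"] < fast_threshold_minutes

print("difficulty-based interviewer:", "hire" if rubric_difficulty(performance) else "no hire")
print("speed-based interviewer:     ", "hire" if rubric_speed(performance) else "no hire")
```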

Having different assessment criteria isn’t necessarily a bad thing (and actually seems totally normal). It just introduces more variance to our measurements, meaning our candidates’ assessments might not be totally accurate.

The problem is, when people talk about “The Bar,” that uncertainty around measurement usually gets ignored.

You’ll commonly see people advising you only to hire the highest quality people.

A good rule of thumb is to hire only people who are better than you. Do not compromise. Ever.

– Laszlo Bock

Don’t lower your standards no matter how hard it seems to find those great candidates.

– Joel Spolsky

In the Macintosh Division, we had a saying, “A players hire A players; B players hire C players”–meaning that great people hire great people.

– Guy Kawasaki

Every person hired should be better than 50 percent of those currently in similar roles – that’s raising the bar.

– Amazon Bar Raiser blog post

All of this is good advice, assuming “quality” can be measured reliably, which, as we’ve seen so far, isn’t necessarily the case.

Even when uncertainty does get mentioned, that variance gets attributed to the candidate’s ability, rather than the measurement process or the person doing the measuring.

[I]n the middle, you have a large number of “maybes” who seem like they might just be able to contribute something. The trick is telling the difference between the superstars and the maybes, because the secret is that you don’t want to hire any of the maybes. Ever.

If you’re having trouble deciding, there’s a very simple solution. NO HIRE. Just don’t hire people that you aren’t sure about.

– Joel Spolsky

Assessing candidates isn’t a fully deterministic process, yet we talk about it like it is.

Why “The Bar” is so high

“Compromising on quality” isn’t really about compromise; it’s actually about decision-making in the face of uncertainty. And as you see from the quotes above, the conventional strategy is to hire only when certain.

No matter what kind of measuring stick you use, this leads to “The Bar” being set really high. Being really certain about a candidate means minimizing the possibility of making a bad hire (aka “false positives”). And companies will do whatever they can to avoid that.

A bad candidate will cost a lot of money and effort and waste other people’s time fixing all their bugs. Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it.

– Joel Spolsky

Hunter and Schmidt quantified the cost of a bad hire: “The standard deviation… has been found to be at minimum 40% of the mean salary,” which in today’s terms would translate to $40,000 assuming a mean engineer salary of $100,000/year.

But if you set “The Bar” too high, chances are you’ll also miss out on some good candidates (aka “false negatives”). McDowell explains why companies don’t really mind a lot of false negatives:

“From the company’s perspective, it’s actually acceptable that some good candidates are rejected… They can accept that they miss out on some good people. They’d prefer not to, of course, as it raises their recruiting costs. It is an acceptable tradeoff, though, provided they can still hire enough good people.”

In other words, it’s worth holding out for a better candidate if the difference in their expected output is large, relative to the recruiting costs from continued searching. Additionally, the costs of HR or legal issues downstream from potentially problematic employees also tilt the calculation toward keeping “The Bar” high.

This is a very rational cost-benefit calculation. But has anyone ever done this calculation before? If you have done it, we’d love to hear from you. Otherwise, it seems difficult to do.

Given that nearly everyone is using hand-wavy math, if we do the same, maybe we can convince ourselves that “The Bar” doesn’t have to be set quite so high.
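
Here’s one deliberately hand-wavy version of that calculation. Every number below is an assumption pulled out of thin air, which is the point: small changes to the assumptions flip the answer.

```python
# Back-of-envelope: is holding out for a stronger hire worth the extra search?
salary = 100_000                   # assumed mean engineer salary
sd_of_output = 0.40 * salary       # Schmidt & Hunter: output SD is at least 40% of mean salary

output_gap = 1.0 * sd_of_output    # assume the stronger hire is +1 SD of output per year
years_retained = 2                 # assume the hire stays two years
extra_search_months = 3            # assume three extra months of searching
monthly_recruiting_cost = 20_000   # assumed cost of recruiter time, interviews, and an empty seat

benefit_of_waiting = output_gap * years_retained
cost_of_waiting = extra_search_months * monthly_recruiting_cost

print(f"benefit of holding out: ${benefit_of_waiting:,.0f}")
print(f"cost of holding out:    ${cost_of_waiting:,.0f}")
# Shrink the output gap, stretch the search, or raise the cost of the empty
# seat, and the conclusion flips.
```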

As mentioned before, the distribution of candidate ability might not be so binary, so Spolsky’s nightmare bad hire scenario wouldn’t necessarily happen with all “bad” hires, meaning the expected difference in output between “good” and “bad” employees might be lower than perceived.

Recruiting costs might be higher than perceived because finding and employing +1 standard deviation employees gets increasingly difficult. By definition, fewer of those people exist as your bar rises. Schmidt and Hunter’s “bad hire” calculation only compares candidates within an applicant pool. The study does not consider the relative cost of getting high-quality candidates into the applicant pool to begin with, which tends to be the more significant concern for many of today’s tech recruiting teams. And when you consider that other tech companies might be employing the same hiring strategy, competition would increase the average probability that offers get rejected, extending the time to fill a job opening.

Estimating the expected cost of HR involvement is also difficult. No one wants to find themselves interacting with HR. But then again, not all HR teams are as useless as Toby Flenderson.

Taken together, if the difference in expected output between “good” and “bad” candidates were smaller than perceived, and the recruiting costs were higher than perceived, it would make less sense to wait for a no-brainer hire, meaning “The Bar” might not have to be set so high.

Even if one does hire an underperformer, companies could adopt the tools of training and employee management to mitigate the negative effects from some disappointing hires. After all, people can and do become more productive over time as they acquire new skills and knowledge.

Employee development rarely gets mentioned in conjunction with hiring (Laszlo Bock makes a few connections here and there, but the topics are mostly discussed separately). But when you add employee development into the equation above, you start to see the relationship between hiring employees and developing employees. You can think of it as different methods for acquiring more company output from different kinds of people: paying to train existing employees versus paying to recruit new employees.

You can even think of it as a trade-off. Instead of developing employees in-house, why not outsource that development? Let others figure out how to develop the raw talent, and later pay recruiters to find them when they get good. Why shop the produce aisle at Whole Foods and cook at home when you can just pay Caviar to deliver pad thai to your doorstep? Why spend time managing and mentoring others when you can spend that time doing “real work” (i.e. engineering tasks)?

Perhaps “The Bar” is set high because companies don’t develop employees effectively, which puts more pressure on the hiring side of the company to yield productive employees.

Therefore, companies can lower their risk by shifting the burden of career development onto the candidates themselves. In response, candidates like Recursive Cactus have little choice but to train themselves.

Initially, I thought Recursive Cactus was a crazy outlier in terms of interview preparation. But apparently, he’s not alone.

Candidates are training themselves

Last year we surveyed our candidates about how many hours they spent preparing for interviews. Nearly half of the respondents reported spending 100 hours or more on interview preparation[5].

We wondered whether hiring managers and recruiters have the same expectations of the candidates they encounter, so Aline asked a similar question on Twitter. The results suggest they vastly underestimate the work and effort candidates put in before ever meeting with a company.

Decision makers clearly underestimate the amount of work candidates put into job hunt preparation. The discrepancy seems to reinforce the underlying and unstated message pervading all these choices around how we hire: If you’re not one of the smart ones (whatever that means), it’s not our problem. You’re on your own.

“The Bar” revisited

So this is what “The Bar” is. “The Bar” is a high standard set by companies in order to avoid false positives. It’s not clear whether companies have actually done the appropriate cost-benefit analysis when setting it, and it’s possible it can be explained by an aversion to investing in employee development.

“The Bar” is in large part meant to measure your general intelligence, but the actual instruments of measurement don’t necessarily follow the academic literature that underlies it. You can even quibble about the academic literature[6]. “The Bar” does measure specific intelligence in computer science, but that measurement might vary depending on who conducts your interview.

Despite the variance that exists across many aspects of the hiring process, we talk about “The Bar” as if it were deterministic. This allows hiring managers to make clear binary choices but discourages them from thinking critically about whether their team’s definition of “The Bar” could be improved.

And that helps us understand why Recursive Cactus spends so much time practicing. He’s training himself partially because his current company isn’t developing his skills. He’s preparing for the universe of possible questions and interviewers he might encounter, because hiring criteria vary widely and cover topics that won’t necessarily be used in his day-to-day work, all so he can resemble someone who’s part of the “smart” crowd.

That’s the system he’s working within. And because the system is the way it is, it’s had a significant impact on his personal life.

My wife’s said on more than one occasion that she misses me. I’ve got a rich happy life, but I don’t feel I can be competitive unless I put everything else on hold for months. No single mom can be doing what I’m doing right now.

– Recursive Cactus

This impacts his current co-workers too, whom he cares about a lot.

This process is sufficiently demanding that I’m no longer operating at 100% at work. I want to do the best job at work, but I don’t feel I can do the right thing for my future by practicing algorithms 4 hours a day and do my job well.

I don’t feel comfortable being bad at my job. I like my teammates. I feel a sense of responsibility. I know I won’t get fired if I mail it in, but I know that it’s them that pick up the slack.

– Recursive Cactus

It’s helpful to remember that all the micro decisions made around false positives, interview structure, brain teasers, hiring criteria, and employee development add up to define a system that, at the end of the day, impacts people’s personal lives. Not just the lives of the job hunters themselves, but also all the people that surround them.

Hiring is nowhere near a solved problem. Even if we do solve it somehow, it’s not clear we would ever eliminate all that uncertainty. After all, projecting a person’s future work output after spending an hour or two with them in an artificial work setting seems kinda hard. While we should definitely try to minimize uncertainty, it might be helpful to accept it as a natural part of the process.

This system can be improved. Doing so requires not only coming up with new ideas, but also revisiting decades-old ideas and assumptions, and expanding upon that prior work rather than anchoring ourselves to it.

We’re confident that all of you people in the tech industry will help make tech hiring better. We know you can do it, because after all, you’re smart.

[1] There does exist some potential for selection bias, particularly around the time frames when people choose to practice. Cursory analysis suggests there’s not much of a relationship, but we’re currently digging in deeper (hint hint: could be a future blog post). You can also choose between the traditional algorithmic interview vs. a systems design interview, but the vast majority opt for the traditional interview. The passthrough rates shown are for the traditional interview.
[2] You might be wondering about the relative quality of candidates on interviewing.io. While it’s hard to pin down the true distribution of quality (which is the underlying question of this blog post), our practice interviewers have told us that, on average, the quality of candidates on interviewing.io tends to be similar to the quality of candidates they encounter during their own company’s interview process, particularly during phone screens.
[3] This only includes candidates who have met our internal hiring bar and attended a company interview on our site. This does not represent the entire population of candidates who have interviewed with an interviewer.
[4] For those of you who have used interviewing.io before, you may remember that we had an algorithm that adjusted for interviewer strictness. Upon further inspection, we found this algorithm also introduced variance to candidate scores in really unexpected ways. Because of this, we no longer rely on this algorithm as heavily.
[5] Spikes at 100 and 200 hours occurred because of an error in the labeling and max value of the survey question. The 3 survey questions asked were the following: 1) In your most recent job search, how many hours did you spend preparing for interviews? 2) How many hours did you spend on interview preparation before signing up for interviewing.io? 3) How many hours did you spend on interview preparation after signing up for interviewing.io (not including time using interviewing.io itself)? Each question had a max value of 100 hours, but many respondents gave answers to 2) and 3) whose sum exceeded 100. The distribution here shows the sum of 2) and 3). The median from question 1) responses was 94, nearly identical to the median of the sum of 2) and 3), so we used the sum to observe the shape of the distribution beyond 100 hours. Key lessons: assume a larger max value than you’d expect, and double-check your survey.
[6] I found the study a little hard to reason about, mainly because I’m not a psychologist, so techniques like meta-analysis were a little foreign to me even if the underlying statistical tools were familiar. It’s not a question of whether the tools are valid; it’s that reasoning about the study’s underlying data was difficult. Similar to spaghetti code, the validation of the underlying datasets is spread across decades of prior academic papers, which makes it difficult to follow. It’s likely this is the nature of psychology, where useful data is harder to acquire, at least compared to the kinds of data we deal with in tech. Beyond that, I also had other questions about their methodology, which this article asks in far greater detail than I could.


No engineer has ever sued a company because of constructive post-interview feedback. So why don’t employers do it?

Posted on February 6th, 2020.

One of the things that sucks most about technical interviews is that they’re a black box—candidates (usually) get told whether they made it to the next round, but they’re rarely told why they got the outcome that they did. Lack of feedback, or feedback that doesn’t come right away, isn’t just frustrating to candidates. It’s bad for business. We did a whole study on this. It turns out that candidates chronically underrate and overrate their technical interview performance, like so:

Where this finding starts to get actionable is that there’s a statistically significant relationship between whether people think they did well in an interview and whether they’d want to work with you. In other words, in every interview cycle, some portion of interviewees are losing interest in joining your company just because they don’t think they did well, even when they actually did. It makes sense… when you suspect you might not have done well, you’re prone to embark on a painful bout of self-flagellation, and to make it stop, you’ll rationalize away the job by telling yourself that you totally didn’t want to work there anyway.

Practically speaking, giving instant feedback to successful candidates can do wonders for increasing your close rate.

In addition to making candidates you want today more likely to join your team, feedback is crucial for the candidates you might want down the road. Technical interview outcomes are highly non-deterministic. According to our data, only about 25% of candidates perform consistently from interview to interview. Why does this matter? If interview outcomes are erratic, it means that the same candidate you reject this time might be someone you want to hire in 6 months. It’s in your interest to forge a good relationship with them now and be cognizant of and humble about the flaws in your hiring process.

I thought this tweet captured my sentiments particularly well.

Danny Trinh's tweet

So, despite the benefits, why do most companies persist in giving slow feedback or none at all? I surveyed founders, hiring managers, recruiters and labor lawyers (and also put out some questions to the Twitterverse) to understand why anyone who’s ever gone through interviewer training has been told in no uncertain terms to not give feedback.

As it turns out, feedback is discouraged primarily because companies are scared of getting sued… and because interviewers fear defensive candidate backlash. In some cases, giving feedback is avoided just because companies view it as a no-upside hassle.

The sad truth is that hiring practices have not caught up with market realities. Many of the hiring practices we take for granted today originated in a world where there was a surplus of candidates and a shortage of jobs. This extends to everything from painfully long take-home assignments to poorly written job descriptions. And post-interview feedback is no exception. As Gayle Laakmann McDowell, author of Cracking the Coding Interview, explains on Quora:

“Companies are not trying to create the most perfect process for you. They are trying to hire—ideally efficiently, cheaper, and effectively. This is about their goals, not yours. Maybe when it’s easy they’ll help you too, but really this whole process is about them… Companies do not believe it helps them to give candidates feedback. Frankly, all they see is downside.”

Look, I’m guilty of this, too. Here’s a rejection email I wrote when I was head of technical recruiting at TrialPay. This email makes me want to go back in time and punch myself in the face and then wish myself the best in my future endeavors to not get punched in the face.

Bad rejection email

These kinds of form letter rejections (which I guess is better than just leaving the person hanging) make a lot of sense when you have a revolving door of disposable candidates. They are completely irrational in this brave new world where candidates have more leverage than companies. But, because HR is fundamentally a cost center tasked with risk mitigation (rather than a profit center tasked with, you know, making stuff better), and because engineers on the ground only have so many cognitive cycles to tackle hard stuff outside their job descriptions, we continue to march forward on autopilot, perpetuating outdated and harmful practices like this one.

In this hiring climate, companies should move toward practices that give candidates a better interview experience. Is fear of litigation and discomfort legit enough to keep companies from giving feedback? Does optimizing for fear and a few bad actors in lieu of candidate experience make sense in the midst of a severe engineering shortage? Let’s break it down.

Does the fear of getting sued even make sense?

While researching this piece, I spoke to a few labor lawyers and ran some Lexis Nexis searches to see just how often a company’s constructive feedback (i.e. not “durrrr we didn’t hire you because you’re a woman”) to a rejected eng candidate has resulted in litigation.

Hey, guess what? IT’S ZERO! THIS HAS NEVER HAPPENED. EVER.1

As some of my lawyer contacts pointed out, a lot of cases get settled out of court, and that data is much harder to get. But in this market, creating poor candidate experience to hedge against something that is highly unlikely seems… irrational at best and destructive at worst.

What about candidates getting defensive?

At some point, I stopped writing trite rejection emails like the one above, but I was still beholden to my employer’s rules about written feedback.2 As an experiment, I tried giving candidates verbal feedback over the phone.

For context, I had a unique, hybrid role at TrialPay. Though my title was Head of Technical Recruiting, which meant I was accountable for normal recruiter stuff like branding and sourcing and interview process logistics, my role had one unique component. Because I had previously been a software engineer, to take the heat off the long-suffering eng team, I was often the first line of defense for technical interviews and conducted something like 500 of them that year.

After doing a lot of interviews day in and day out, I became less shy about ending them early when it was clear that a candidate wasn’t qualified (e.g. they couldn’t get through the brute force solution to the problem, let alone optimize). Did ending interviews early cause candidates to fly off the handle or feel particularly awkward, as many people suspect?

Defensive candidates tweet

In my experience, cutting things off and saying nothing about why is a lot more awkward and leads to more defensiveness than letting candidates know what the deal is. Some candidates will get defensive (at which point you can politely end the call), but if you offer constructive feedback—let them know what went wrong, make some recommendations about books to read, point them to problem repositories like Leetcode3, etc.—most will be grateful. My personal experience with giving feedback has been overwhelmingly positive. I used to love mailing books to candidates, and I formed lasting relationships with many. Some became early interviewing.io users a few years later.

Anyway, the way to avoid negative reactions and defensiveness from candidates is to practice giving feedback in a way that’s constructive. We’ll cover this next.

So if giving feedback isn’t actually risky and has real upsides, how does one do it?

When I started interviewing.io, it was the culmination of what I had started experimenting with at TrialPay. It was clear to me that feedback is a Good Thing and that candidates liked it… which in this market means it’s also good for companies. But, we still had to grapple with prospective customers’ (pretty irrational) fears about the market being flooded with defensive candidates with a lawyer on speed dial.

For context, interviewing.io is a hiring marketplace. Before talking to companies, engineers can practice technical interviewing anonymously, and if things go well, unlock our jobs portal, where they can bypass the usual top-of-funnel cruft (applying online, talking to recruiters or “talent managers,” finding friends who can refer them) and book real technical interviews with companies like Microsoft, Twitter, Coinbase, Twitch, and many others… often as early as the next day.

The cool thing is that both practice and real interviews with companies take place within the interviewing.io ecosystem, and wrt feedback, you’ll see why this matters in a moment.

Before we started working with employers, we spent some time building out our practice platform and getting the mechanics right. For practice interviews, our post-interview feedback forms looked like this:

The feedback form that an interviewer fills out

After each practice interview, interviewers fill out the form above. Candidates fill out a similar form rating their interviewer. When both parties fill out their forms, they can see each other’s responses.

If you’re curious, you can watch people practicing and read real feedback that they got in our public showcase. Here’s a snapshot:

An interviewer's rubric after an interviewing.io interview

Check out our showcase. It’s cool.

When we started letting employers hire on our platform, we just recycled this post-interview feedback format, told them they should leave feedback to help us calibrate and because it’s good for candidate experience, and fervently hoped that they wouldn’t have an issue with it.

To our surprise and delight, employers were eminently willing to leave feedback. On our platform, candidates were able to see whether they passed or not and exactly how they did, just a few minutes after the interview was over, stopping the rising tide of post-interview anxiety and self-flagellation in its tracks, and, as we’ve said, increasing the likelihood that a great candidate will accept an offer.

A real, successful company interview on interviewing.io

And if a candidate failed an interview, they got to see exactly why they failed and what they needed to work on, probably for the first time ever in interview history.

A real, failed company interview on interviewing.io

Anonymity makes it easier to give feedback

On interviewing.io, interviews are anonymous: an employer knows literally nothing about the candidate before and during the interview (employers can even enable our real-time voice masking feature). Candidates’ identities are only revealed after a successful interview, i.e. after the employer has already submitted feedback.

We insist on anonymity because about 40% of our top-performing candidates are non-traditional, and we don’t want lack of pedigree or an unusual background to open them up to bias. Because interviews are anonymous, it’s often impossible to discriminate on the basis of age or gender or background. Therefore, feedback has to be constructive, by design, because the only info the interviewer has to go on is how well the candidate is performing in the interview. In addition to helping candidates get a fair shot, this anonymity provides something of a safety net for employers—it’s harder to build a discrimination case out of the feedback when the employer doesn’t know the identity of the candidate.

In many other contexts, anonymity can be destructive because of reduced accountability. But in the interview process, we’ve discovered over and over that anonymity sets free the better angels of our nature and creates a kinder, more inclusive interview experience for candidates and employers alike.

Building post-interview feedback into your eng hiring process

So, how can you fold these learnings into your process? Of course, the easiest way to get your feet wet is to start using interviewing.io. We’ll get you candidates you wouldn’t be sourcing without us, and we’ll empower you to give them the best interview experience you can.

But even if you don’t use us, based on how unlikely it is that you’re going to get sued or deal with angry candidates, we strongly recommend having your interviewers provide constructive feedback (like in the examples above) after every interview for all candidates, whether they pass or not, over email.

Here are a few strategies for delivering constructive feedback:

  1. Be clear that it’s a no-go. Ambiguity is psychologically difficult in a stressful situation. For instance: Thank you for interviewing with us. Unfortunately, you didn’t pass the interview.
  2. After you make it clear that it’s a no-go, tell them something nice. Find something about their performance—an answer they gave, or the way they thought through a problem, or how they asked the right questions—and share it with them. They’ll be more receptive to the rest of your feedback once they know that you’re on their side. For instance: Despite the fact that it didn’t work out this time, you did {x,y,z} really well, and I think that you can do much better in the future. Here are some suggestions for what you can work on.
  3. When you give suggestions, be specific and constructive. Don’t tell them that they royally screwed the whole operation and need to rethink their line of work. Instead, focus on specific things they can work on. Or, to put it another way, “Hey, familiarize yourself with big O notation. It’s not as scary as it sounds bc it comes up a lot in these kinds of interviews.”4 doesn’t say “you’re dumb and your work experience is dumb and you should feel bad” or “you seem like an asshole.” It says you should familiarize yourself with big O notation.
  4. Make recommendations. Is there a book they could read? If they’re promising but just lack knowledge, it’s a really nice gesture to ship said book to them.
  5. If you think the candidate is on their way to becoming a great engineer (especially if they take your recommendations and advice!), let them know that they can contact you again in a few months. You’ll build goodwill with someone who, even if they don’t work with you in the future, will talk about you to others. And when they do improve, you’ll be in a better position to bring them onto your team.
1 If you know of such a case, please tell me, and I’ll update this post/associated content accordingly ASAP.
2 This is a rule pretty much every company has, ever. It’s not just TrialPay, which was a great place to work and whose defensive HR policies, like every other company’s, were in no way indicative of their workplace culture.
3 Whether algorithmic problems of the type you’d find on Leetcode are the best way to interview is a question worth asking… and I’ve since come to feel pretty strongly that many of them are not. But that’s out of scope for this piece.
4 I recently discovered an amazing online book that makes big O approachable, practical, and not scary (all without talking down to the reader): Grokking Algorithms by Aditya Bhargava.


We ran the numbers, and there really is a pipeline problem in eng hiring.

Posted on December 3rd, 2019.

If you say the words “there’s a pipeline problem” to explain why we’ve failed to make meaningful progress toward gender parity in software engineering, you probably won’t make many friends (or many hires). The pipeline problem argument goes something like this: “There aren’t enough qualified women out there, so it’s not our fault if we don’t hire them.”

Many people don’t like this reductive line of thinking because it ignores the growing body of research that points to unwelcoming environments that drive underrepresented talent out of tech: STEM in early education being unfriendly to children from underrepresented backgrounds, lack of a level playing field and unequal access to quality STEM education (see this study on how few of California’s schools offer AP Computer Science for instance), hostile university culture in male-dominated CS programs, biased hiring practices, and ultimately non-inclusive work environments that force women and people of color to leave tech at disproportionately high rates.1

However, because systemic issues can be hard to fix (they can take years, concerted efforts across many big organizations, and even huge socioeconomic shifts), the argument against the pipeline problem tends to get reduced to “No, the candidates are there. We just need to fix the bias in our process.”

This kind of reductive thinking is also not great. For years, companies have been pumping money and resources into things like unconscious bias training (which has been shown not to work), anonymizing resumes, and all sorts of other initiatives, and the numbers have barely moved. It’s no wonder tech eventually succumbs to a “diversity fatigue” that comes from trying to make changes and not seeing results.

We ran the numbers and learned that there really IS a pipeline problem in hiring — there really aren’t enough women to meet demand… if we keep hiring the way we’re hiring. Namely, if we keep over-indexing on CS degrees from top schools, and even if we remove unconscious bias from the process entirely, we will not get to gender parity. And yes, there is a way to surface strong candidates without relying on proxies like a college degree. We’ll talk about that toward the end.

Our findings ARE NOT meant to diminish the systemic issues that make engineering feel unwelcome to underrepresented talent, nor to diminish our ability to work together as an industry to effect change — to enact policy changes like access to CS in public schools, for instance. Our findings ARE meant to empower those individuals already working very hard to make hiring better who find themselves frustrated because, despite their efforts, the numbers aren’t moving. To those people, we say, please don’t lose sight of the systemic problems, but in the short term, there are things you can do that will yield results. We hope that, over time, by addressing both systemic pipeline issues and biases, we will get to critical mass of women from all backgrounds in engineering positions, and that these women, in turn, will do a lot to change the pipeline problem by providing role models and advocates and by changing the culture within companies.

Lastly, an important disclaimer before we proceed. In this post, we chose to focus on gender (and not on race). This decision was mostly due to the dearth of publicly available data around race and intersectionality in CS programs/bootcamps/MOOCs.2 While this analysis does not examine race and intersectionality, it is important to note that we recognize: 1) Not all women have the same experience in their engineering journey, and 2) tech’s disparities by gender are no more important than the lack of representation of people of color in engineering. We will revisit these subjects in a future post.

The percentage of women engineers is low and likely worse than reported

It’s very hard to have a cogent public conversation about diversity when there is no standardization of what statistic means what. As this post is about engineering specifically, we needed to find a way to get at how many women engineers are actually in the market and work around two big limitations in how FAAMNG (Facebook, Amazon, Apple, Microsoft, Google, and Netflix) report their diversity numbers.

The first limitation is that FAAMNG’s numbers are global. Why does this matter? It turns out that other countries, especially those where big companies have their offshore development offices, tend to have a higher percentage of female developers.3 In India, for instance, about 35% of developers are women; in the U.S., it’s 16%. Why are these numbers reported globally? The cynic in me says that it’s likely because the U.S. numbers, on their own, are pretty dismal, and these companies know it.4 To account for this limitation and get at the U.S. estimate, we did some napkin math and conservatively cut each company’s female headcount by 20%.

The second limitation is that reported numbers are based on “technical roles,” which Facebook at least defines very broadly: “A position that requires specialization and knowledge needed to accomplish mathematical, engineering, or scientific related duties.” I expect the other giants use something similar. What are the implications of this fairly broad definition? Given that product management and UX design count as technical roles, we did some more napkin math and removed ~20% to correct for PMs and designers.4
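
One way to read those two corrections is as successive ~20% discounts on a reported headline share. The 23% starting figure below is a placeholder, not any specific company’s number.

```python
# Napkin math: apply the two ~20% corrections to an illustrative reported share.
reported_share_women_technical = 0.23   # placeholder "women in technical roles" figure

us_only_discount = 0.80         # cut female headcount ~20% to approximate US-only numbers
engineers_only_discount = 0.80  # remove ~20% to exclude PMs and designers

estimated_share = (reported_share_women_technical
                   * us_only_discount
                   * engineers_only_discount)
print(f"~{estimated_share:.0%} women among US software engineers")
```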

With these limitations in mind, below is a graph comparing the makeup of the U.S. population to its representation in tech at FAAMNG companies where said data was available, as well as an estimate of women in engineering specifically.

There's still a lot of work to do if we want to reach gender parity in engineering

 

If we want to reach gender parity in engineering, especially when we correct for women in the U.S. (and whether they’re actually software engineers), you can see that we have a long way to go.

Is it a pipeline problem?

So, are there just not enough qualified women in the hiring pool? It turns out that we’re actually hiring women at pretty much the same rate that women are graduating with CS degrees from four-year universities — out of the 71,420 students who graduated with a CS degree in 2017, 13,654, or ~20%, were women.5 So maybe we just need more women to get CS degrees?

Top tech companies and their non-profit arms have been using their diversity and inclusion budgets to bolster education initiatives, in the hopes that this will help them improve gender diversity in hiring. Diversity initiatives started taking off in earnest in 2014, and in 4 years, enrollment in CS programs grew by about 60%. It’s not anywhere near enough to get to gender parity.

And even if we could meaningfully increase the number of women enrolling in CS programs overall, top companies have historically tended to favor candidates from elite universities (based on some targeted LinkedIn Recruiter searches, 60% of software engineers at FAAMNG hold a degree from a top 20 school). You can see enrollment rates of women in 3 representative top computer science programs below. Note that while the numbers are mostly going up, growth is linear and not very fast.

Growth in women's undergraduate computer science enrollment at top schools

Sources: UC-Berkeley, MIT (1 and 2), and Stanford enrollment data

To see if it’s possible to reach gender parity if we remove unconscious bias but keep hiring primarily from top schools, let’s build a model. For the purposes of this model let’s focus solely on new jobs — if companies want to meet their diversity goals, at a minimum they need to achieve parity on any new jobs they’ve created. Based on the US BLS’s projections, the number of software engineering jobs is estimated to increase by 20% by 2028 (or about 1.8% annually). Today, the BLS estimates there are about 4 million computer-related jobs. This projects to about 70,000 new jobs created this year, increasing to 85,000 new jobs created in 2028.

If the goal is to hit gender parity in the workforce, our goal should be to have 50% of these new seats filled by women.

To see if this is possible, let’s project the growth of the incoming job pool over the same timeframe. Based on NCES’ 2017 survey, computer science graduates have grown annually anywhere between 7% and 11% this decade. Let’s optimistically assume this annual growth rate persists at 10%. Let’s also assume that the percentage of graduates who are women remains at 20%, which has been true for the last 15 years. But, there are some gotchas.

First, there’s no guarantee that the seats earmarked for women actually get filled by women, particularly in a world where male CS graduates will continue to outnumber female ones 4-to-1. Second, not all of these jobs will be entry-level, so some portion of them will be pulling from an already constrained pool of senior women candidates. Finally, there’s no guarantee that traditional 4-year colleges will be able to support the projected influx of computer science candidates, particularly at the top-tier universities that companies usually prefer. Below, we graph the net new seats we’d need to fill if women held half of software engineering jobs (blue line) vs. how many women are actually available to hire if we keep focusing largely on educational pedigree in our recruiting efforts (red line). As you can see, it’s not possible to hit our goals, whether or not we’re biased against women at any point in the hiring process.6
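For readers who want to poke at the numbers, here’s a rough sketch (in Python) of the projection behind that graph, using the approximate inputs quoted in this post; the precise inputs are listed in the appendix.

```python
# A rough sketch of the industry-wide model: seats women would need to fill for
# parity on new jobs vs. women graduating from the top-tier schools companies
# tend to favor. All constants are approximations taken from this post.
TOTAL_JOBS = 4_000_000   # ~4M computer-related jobs today (BLS)
JOB_GROWTH = 0.018       # ~1.8% annual growth in jobs
CS_GRADS = 71_420        # CS graduates in 2017 (NCES)
GRAD_GROWTH = 0.10       # optimistic ~10% annual growth in CS graduates
PCT_WOMEN = 0.20         # ~20% of CS graduates are women
PCT_TOP_TIER = 0.25      # rough share of graduates from top-tier schools (see appendix)

for year in range(10):
    seats_needed = TOTAL_JOBS * (1 + JOB_GROWTH) ** year * JOB_GROWTH * 0.5
    women_available = CS_GRADS * (1 + GRAD_GROWTH) ** (year + 1) * PCT_WOMEN * PCT_TOP_TIER
    print(f"year {year}: {seats_needed:,.0f} seats for parity, "
          f"{women_available:,.0f} women from top-tier programs")
```

Even with optimistic graduate growth, the top-tier pipeline supplies only a small fraction of the seats needed, which is the gap the blue and red lines illustrate.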

 

So if the pipeline is at least partially to blame, what can we do?

You saw above that enrollment in undergraduate computer science programs among women is growing linearly. Rising tuition costs coupled with 4-year universities’ inability to keep up with demand for computer science education have forced growing numbers of people to go outside the system to learn to code.

Below is a graph of the portion of developers who have a bachelor’s degree in computer science and the portion of developers who are at least partially self-taught, according to the Stack Overflow Developer Surveys from 2015 to 2019. As you can see, in 2015, the numbers were pretty close, and then, with the emergence of MOOCs, there was a serious spurt, with more likely to come.

Education sources for software engineers

Sources: Stack Overflow Developer Surveys for 2015, 2016, 2018, and 2019

The rate of change in alternative, more affordable education is rapidly outpacing university enrollment. Unlike enrollment in traditional four-year schools, enrollment in MOOCs and bootcamps is growing exponentially.

In 2015 alone, over 35 million people signed up for at least one MOOC course, and by 2018, MOOCs collectively had over 100M students. Of course, many people treat MOOCs as a supplement to their existing educational efforts or career rather than relying on MOOCs entirely to learn to code. This is something we factored into our model.

Despite their price tag (most charge on the order of $10-20K), bootcamps seem like a rational choice when compared to the price of top colleges. Since 2013, bootcamp enrollment has grown 9X, with a total of 20,316 grads in 2018. Though these numbers represent enrollment across all genders7 and the raw number of grads lags behind CS programs (for now), below you can see that the portion of women graduating from bootcamps is also on the rise and that graduation from online programs has actually reached gender parity (as compared to 20% in traditional CS programs).

Source: Course Report’s Data Dive: Gender in Coding Bootcamps

Source: Course Report’s Data Dive: Gender in Coding Bootcamps

Of course, one may rightfully question the quality of grads from alternative education programs. We factored in bootcamp placement rates in building our updated model below.

Outside of alternative education programs, the most obvious thing we can do to increase the supply of qualified women engineers is to expand our pipeline to include strong engineers who don’t hail from top schools or top companies.

In previous posts, we looked at the relationship between interview performance and traditional credentialing and found that participation in MOOCs mattered almost twice as much for interview performance as whether the candidate had worked at a top company, while a degree from a top school was the least predictive signal, sometimes carrying no signal at all. Some of my earlier research indicates that the most predictive attribute of a resume is the number of typos and grammatical errors (more is bad), rather than top school or top company; in that particular study, experience at a top company mattered a little, and a degree from a top school didn’t matter at all.

But, even if lower-tier schools and alternative programs have their fair share of talent, how do we surface the most qualified candidates? After all, employers have historically leaned so hard on 4-year degrees from top schools because they’re a decent-seeming proxy. Is there a better way?

But culling non-traditional talent is hard… that’s why we rely on pedigree and can’t change how we hire!

In this brave new world, where we have the technology to write code together remotely and to collect and reason about the resulting data, we can free ourselves from relying on proxies and look at each individual as a unique bundle of performance-based data points. At interviewing.io, we make it possible to move away from proxies by treating each interviewee as a collection of data points that tell a story, rather than as a largely signal-less document that a recruiter glances at for 10 seconds before making a largely arbitrary decision and moving on to the next candidate.

Of course, this post lives on our blog, so I’ll take a moment to plug what we do. In a world where there’s a growing credentialing gap and where it’s really hard to figure out how to separate a mediocre non-traditional candidate from a stellar one, we can help. interviewing.io helps companies find and hire engineers based on ability, not pedigree. We give out free mock interviews to engineers, and we use the data from these interviews to identify top performers, independently of how they look on paper. Those top performers then get to interview anonymously with employers on our platform (we’ve hired for Lyft, Uber, Dropbox, Quora, and many other great, high-bar companies). And this system works. Not only are our candidates’ conversion rates 3X the industry standard (about 70% of our candidates ace their phone screens, as compared to 20-25% in a typical, high-performing funnel), but about 40% of the hires made by top companies on our platform have also come from non-traditional backgrounds. Because of our completely anonymous, skills-first approach, we’ve seen an interesting phenomenon happen time and time again: when an engineer unmasks at the end of a successful interview, the company in question realizes that the engineer who just aced their phone screen was one whose resume was sitting at the bottom of the pile all along (we recently had someone get hired after having been rejected by that same company 3 times based on his resume!).

Frankly, think of how much time and money you’re wasting competing for only a small pool of superficially qualified candidates when you could be hiring overlooked talent that’s actually qualified. Your CFO will be happier, and so will your engineers. Look, whether you use us or something else, there’s a slew of tech-enabled solutions that are redefining credentialing in engineering, from asynchronous coding assessments like CodeSignal or HackerRank to solutions that vet candidates before sending them to you, like Triplebyte, to solutions that help you vet your inbound candidate pool, like Karat.

And using these new tools isn’t just paying lip service to a long-suffering D&I initiative. It gets you the candidates that everyone in the world isn’t chasing without compromising on quality, helps you make more hires faster, and just makes hiring fairer across the board. And, yes, it will also help you meet your diversity goals. Here’s another model.

How does changing your hiring practices improve the pipeline?

Above, you saw our take on status quo supply and demand of women engineers — basically how many engineers are available to hire using today’s practices versus how many we’d need to actually reach gender parity. Now, let’s see what it looks like when we include candidates without a traditional pedigree (yellow line).

 

As you can see, broadening your pipeline isn’t a magic pill, and as long as demand for software engineers continues to grow, it’s still going to be really hard, systemic changes to our society notwithstanding. If we do make these changes, however, the tech industry as a whole can accelerate its path toward gender parity and potentially get there within a decade.

What about specific companies? An interactive visualization.

So far we’ve talked about trends in the industry as a whole. But, how do these insights affect individual employers? Below is an interactive model where you can visualize when Google, Facebook, or your own company (plug in your own hiring numbers) will be able to hit its goals based on current hiring practices versus the more inclusive ones we advocate in this post. Unlike the industry-wide model, this visualization builds in the idea of candidate reach, as well as hire rates — one company can’t source and hire ALL the women (as much as it might want to). Of course, the stronger your brand, the higher your response rates will be.

We made some assumptions about response rates to sourcing outreach for both Google and Facebook. Specifically, we guessed a 60%-70% response rate for these giants based on the strength of their brand and their army of sourcers — when those companies reach out and tenaciously follow up, you’ll probably respond eventually.8 We also made some assumptions about their hire rates (5-10% of interviewed candidates). You can see both sets of assumptions below. And you can see that even with all the changes we propose, in our model, Google and Facebook will still not get to gender parity!

We also included a tab called “Your company” where you can play around with the sliders and see how long it would take your company to get to gender parity/whether it’s possible. There, we made much more conservative assumptions about response rates!

As you can see, for the giants, getting to gender parity is a tall order even with broadening your pipeline to include non-traditional candidates. And while it may be easier for smaller companies to get there without making drastic changes, when you’re small is exactly the right time to get fairer hiring into your DNA. It’s much harder to turn the ship around later on.

Conclusion

Regardless of whether you’re a giant or a small company, as long as hiring largely limits itself to top schools, the status quo will remain fundamentally inefficient, unmeritocratic, and elitist, and reaching gender parity will be impossible. Look, there are no easy fixes or band-aids when it comes to diversifying your workforce. Rather than continuing to look for hidden sources of women engineers (I promise, we’re not all hiding somewhere, just slightly out of reach) or trying to hire all the women from top schools, the data clearly shows that the only path forward is to improve hiring for everyone by going beyond top schools and hiring the best people for the job based on their ability, not how they look on paper.

I was recently in a pitch meeting where I got asked what interviewing.io’s mission is. I said that it’s to make hiring more efficient. The investors in the room were a bit surprised by this and asked, given that I care about hiring being fair, why that’s not the mission. First off, “fair” is hard to define and open to all manners of interpretation, whereas in an efficient hiring market, a qualified candidate, by definition, gets the job, with the least amount of pain and missteps. In other words, meritocracy is a logical consequence of efficiency. Secondly, and even more importantly, while I firmly believe that most people at companies want to do “the right thing”, it’s much easier to actually do the right thing in a big organization when it’s also cheaper, better, and faster.

All that’s to say that there are no shortcuts, and the most honorable (and most viable) path forward is to make hiring better for everyone and then hit your diversity goals in the process (or at least get closer to them). Software engineering is supposed to be this microcosm of the American dream — anyone can put in the work, learn to code, and level up, right? Until we own our very conscious biases about pedigree and change how we hire, that dream is a hollow pipe.

Appendix: Model Description and assumptions

To assess whether there exists a pipeline problem, we need to estimate the number of job openings that exist, as well as the number of recent female job market entrants that could feasibly fill those roles. If a pipeline problem does exist, the number of job openings would be greater than the number of female entrants.

For this analysis, we focused on new jobs created over the next 10 years and ignored openings from existing jobs due to attrition. Unfortunately, engineering does have a significantly higher attrition rate for women than other industries, so likely the numbers are worse than they appear in our models.9

That said, if a company wants to meet its diversity goals, it seems reasonable to expect them to do so with jobs that don’t yet exist, rather than on existing jobs whose pool of candidates we know are dominated by men.

Demand: Projected new jobs created

Tech industry net new jobs created = (# tech industry jobs prior year) x (annual growth rate)

Assumptions:

  • Current # of computer-related jobs: ~4 million (BLS)
  • Annual job growth rate: ~1.8%, i.e., the BLS projection of 20% growth by 2028

Supply: Projected new women in job pool from top tier universities

New women in job pool = (# CS graduates prior year) x (annual CS graduate growth rate) x (% CS graduates that are women) x (% CS graduates from top tier schools)

Assumptions:

  • Current # CS graduates: 71,420 as of 2017 (NCES)
  • Annual CS graduate growth rate: 10% (the optimistic end of NCES’ 7-11% range)
  • % CS graduates that are women: 20% (NCES)
  • % CS graduates from top tier schools: 25% (the complement of the 75% used in the next section)

Supply: Projected new women in job pool beyond top tier universities

This represents female bootcamp graduates plus female CS graduates not from top schools.

New women in job pool from beyond top schools =
(# bootcamp graduates prior year) x (annual bootcamp graduate growth rate) x (% bootcamp graduates that are women) x (bootcamp placement rate)
+ (# CS graduates prior year) x (annual CS graduate growth rate) x (% CS graduates that are women) x (% CS graduates not from top tier schools)

Bootcamp assumptions:

  • Current # bootcamp graduates: 20,000 (Course Report)
  • % bootcamp graduates that are women: 40% (Switchup)
  • Bootcamp graduates growth rate: 10% (Course Report)
  • Placement rate: 50% (Switchup)
  • % CS graduates not from top tier schools: 75% (see assumption from “Supply: Projected new women in job pool from top tier universities”)

Assumptions for CS graduates beyond top tier universities are the same as those found under “Supply: Projected new women in job pool from top tier universities”, but taking the remaining 75% of CS graduates excluded there.
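Here’s a minimal sketch of this expanded supply calculation in Python, plugging in the assumptions above; the placement rate discounts bootcamp graduates who don’t end up in engineering roles.

```python
# A rough sketch of the expanded supply formula: women from bootcamps plus women
# CS graduates from beyond top-tier schools, using the assumptions listed above.
CS_GRADS = 71_420             # CS graduates in 2017 (NCES)
GRAD_GROWTH = 0.10            # ~10% annual growth in CS graduates
PCT_WOMEN_CS = 0.20           # ~20% of CS graduates are women
PCT_NOT_TOP_TIER = 0.75       # share of CS graduates not from top-tier schools

BOOTCAMP_GRADS = 20_000       # current bootcamp graduates (Course Report)
BOOTCAMP_GROWTH = 0.10        # ~10% annual growth in bootcamp graduates
PCT_WOMEN_BOOTCAMP = 0.40     # ~40% of bootcamp graduates are women (Switchup)
PLACEMENT_RATE = 0.50         # ~50% placement rate (Switchup)

def new_women_beyond_top_schools(years_from_now: int) -> int:
    """Projected new women entering the pool in a given future year."""
    from_bootcamps = (BOOTCAMP_GRADS * (1 + BOOTCAMP_GROWTH) ** years_from_now
                      * PCT_WOMEN_BOOTCAMP * PLACEMENT_RATE)
    from_cs_programs = (CS_GRADS * (1 + GRAD_GROWTH) ** years_from_now
                        * PCT_WOMEN_CS * PCT_NOT_TOP_TIER)
    return round(from_bootcamps + from_cs_programs)

print(new_women_beyond_top_schools(1))  # ~16,000 next year
```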

Company-specific Demand: Projected number of candidates needed to source for job openings

Number of women to source = (# Engineers employed prior year) x (% annual growth rate) x (% diversity goal) x (1 / hire rate) x (1 / sourcing response rate)

In practice, companies typically need to contact many people for any single job opening, since there is plenty of inherent variability in the sourcing and interview process. This line describes how many people your company would have to reach to fill all new job openings created, based on assumptions about your company’s hiring practices.
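As a minimal sketch, here’s that sourcing calculation in Python. The example inputs are hypothetical, chosen to be roughly in line with the hire-rate and response-rate ranges mentioned earlier rather than any specific company’s real numbers.

```python
# A sketch of the company-specific sourcing formula above. Inputs are illustrative.
def women_to_source(engineers_last_year: float,
                    annual_growth_rate: float,
                    diversity_goal: float,
                    hire_rate: float,
                    response_rate: float) -> int:
    new_seats = engineers_last_year * annual_growth_rate
    seats_for_women = new_seats * diversity_goal
    # Work backwards through the hire rate and the sourcing response rate.
    return round(seats_for_women / (hire_rate * response_rate))

# e.g. 10,000 engineers, 10% headcount growth, a 50% parity goal,
# a 7% hire rate, and a 65% sourcing response rate:
print(women_to_source(10_000, 0.10, 0.50, 0.07, 0.65))  # ~11,000 women to contact
```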

1The Kapor Center has some great, data-rich reports on attrition in tech as well as systemic factors that contribute to the leaky pipeline. For a detailed study of attrition in tech, please see the Tech Leavers Study. For a comprehensive look at systemic factors that contribute to the leaky tech pipeline (and a series of long-term solutions), please see this comprehensive report. For a survey of the deplorable state of computer science education in California schools, please see this report.
2For instance, MIT keeps their race stats behind a login wall.
3According to a HackerRank study, “India, the United Arab Emirates, Romania, China, Sri Lanka, and Italy are the six countries with the highest percentage of women developers, [whereas the] U.S. came in at number 11.”
4We assumed a 1:7 ratio of PMs to engineers and a 1:7 ratio of designers to engineers on product teams. Removing PMs and designers from our numbers does not mean to imply that their representation in tech doesn’t matter but rather to scope this post specifically to software engineers.
5The National Center for Education Statistics doesn’t yet list graduation rates beyond 2017… the new numbers might be a bit higher, as you’ll see when you look at enrollment numbers for CS a bit further down in the post.
6This is not independent of the idea that deep, systemic issues within the fabric of our society (such as the ones we mention at the beginning of the post) are keeping women from entering engineering in droves. But, as I mentioned in the intro, laying the blame at the feet of these systemic issues entirely paralyzes us and prevents us from fixing the things we can fix.
7We couldn’t find a public record of women’s numbers by year and so are relying on graduation rates from Course Report as a proxy.
8These rates might seem high to recruiters reading this post. They might be high. We did try to correct for 2 things, both of which made our estimates higher: 1) this includes university and junior candidates, who tend to be way more responsive, and 2) this isn’t per sourcing attempt but over the lifetime of a candidate, so it includes followups, outreach to candidates who had previously been declined, and so on. However, if this still seems off, please write in and tell us!
9There are two good sources that look at attrition among women in tech. One is Women in Tech: The Facts, published by the National Center for Women & Information Technology. The other is the excellent Tech Leavers Study by the Kapor Center.

Thank you to the interviewing.io data & eng team for all the data modeling, projections, and visualizations, as well as everyone who proofread the myriad long drafts.

Featured

Uncategorized

3 exercises to craft the kind of employer brand that actually makes engineers want to work for you

Posted on May 14th, 2019.

If I’m honest, I’ve wanted to write something about employer brand for a long time. One of the things that really gets my goat is when companies build employer brand by over-indexing on banalities (“look we have a ping pong table!”, “look we’re a startup so you’ll have a huge impact”, etc.) instead of focusing on the narratives that make them special.

Hiring engineers is really hard. It’s hard for tech giants, and it’s hard for small companies… but it’s especially hard for small companies people haven’t quite heard of, and they can use all the help they can get because talking about impact and ping pong tables just doesn’t cut it anymore.

At interviewing.io, making small companies shine is core to our business, and I’ll share some of what we’ve learned about branding as a result. I’ll also walk you through 3 simple exercises you can do to help you craft and distill your employer brand… in a way that highlights what’s actually special about you and will make great engineers excited to learn more.

If I have my druthers, this will be one of three posts on brand. This first one will focus on how to craft your story, the second will show you how to use that story to write great job descriptions, and the final one will focus on how to take the story you’ve created and get it out into the world in front of the kinds of people you’d want to hire.

So, onto the first post!

What is employer brand, and why does it matter?

Companies like Google have built a great brand, and this brand is largely what makes it possible for them to hire thousands of good engineers every year — when you think of Google (or, more recently, Microsoft!), you might think of an army of competent, first-principles thinkers building products people love… or maybe a niche group of engineers working on moonshots that will change the world.

This is employer brand. Put simply, it’s what comes to mind when prospective candidates think about your company. Employer brand encompasses every aspect of your narrative: whether people use/like your product, the problem you’re trying to solve, your founding story, your mission, your culture, and generally what it’s like to work for you. Put another way, all of these attributes (and others besides!) coalesce into the visceral feeling people get when they imagine working for you.

Brand is the single most important thing for getting candidates in the door — even if you have a stellar personal network, in most cases, that’ll usually only last you until your first 30 hires or so — after that, your networks begin to sort of converge on one another. Even so, despite how important brand is for hiring, building it is one of the psychologically hardest things to do at the beginning of your journey because the opportunity cost of spending your time on ANYTHING is staggering, and it’s really hard to justify writing blog posts and hosting events and speaking at conferences when you have to build product and make individual hires and do 50 kabillion other things.

But, until you build a brand and get it out in the world, you’re going to be hacking through the jungle with a proverbial machete, making hires one by one, trying to charm each one by telling them your story. And once you have a brand, all of a sudden, sourcing is going to feel really, really different (just like it feels when you’ve found product market fit!).

Over time, if you continue to tell your own story, you, too, will see how much easier sourcing and hiring can be. So, let’s talk about how to craft the right narrative and then proudly shout it from the rooftops.

Why interviewing.io knows about branding

I mentioned earlier that a lot of what we do at interviewing.io is help our customers put their best foot forward and present themselves to engineers in a way that’s authentic and compelling. I’ll show you some examples of good copy in a moment, but here’s a bit of our flow to put it in context.
When engineers do really well in practice, they unlock our jobs portal, where they can see a list of companies and roles, like so:

As you can see, companies simply describe who they are and what they do, and top-performing engineers just book real interviews with one click. Because our goal is to remove obstacles from engineers talking to other engineers, we don’t have talent managers or talent advocates or, as they’re often called, recruiters, on staff to talk to our candidates and try to convince them to interview at a certain company. As a result, we often find ourselves coaching companies on how to present themselves, given limited time and space. We do work with quite a few companies whose brands are household names, but a good chunk of our customers are smaller, fast-growing companies. What’s interesting is that while, on our platform, a household name can have 7X the engagement of a company no one’s heard of, companies no one’s heard of that have exceptional brand narratives aren’t far behind burgeoning unicorns! (We define engagement as the frequency with which candidates who look at our jobs portal then choose to visit that employer’s page.)

Candidate engagement as a function of employer brand

And that’s why having a brand story matters… and why we’re equipped to talk about it at some depth.

What constitutes brand

Below are some attributes that can make up a brand story. As you look at the list, think about what each of these corresponds to in your company, and then, think about which of these are the most unique to you. For instance, every early-stage startup can say they have a culture characterized by autonomy and the potential for huge impact. It’s become a trope that doesn’t differentiate anyone in any way anymore and is therefore probably not worth emphasizing. On the other hand, if you are solving a problem that a lot of your candidates happen to have or if you use a really cool tech stack that attracts some niche community, that’s really special and worth emphasizing.

  • Your product and whether people have heard of it/like it
  • Your growth numbers if they’re impressive
  • Your tech stack and how flashy and cool it is
  • Your founding story and mission… are you working on a problem that people care about personally? If not, are you disrupting some outdated, inefficient way to do things?
  • How hard are the problems you’re solving? Both technically and otherwise?
  • How elite is your team?
  • What is it like to work for you, both with regard to overall culture and then eng culture specifically?1
    • Overall culture:
      • Are you known for kindness/work-life balance? Or grueling hours? (Either can be good depending on whom you want to attract.)
      • What portion of your employees have gone on to found startups?
      • Do you have a lot of organization/structure or are you chaos pirates?
    • Eng culture:
      • Are you more pragmatic/hacker-y vs. academic?
      • Do you subscribe to or actively reject any particular development methodologies (e.g. agile)?
      • Do you tend to roll your own solutions in house or do you try to use 3rd party tools wherever possible?
      • Do you open source parts of your code? Or regularly contribute to open source?

How to craft your story in 3 easy exercises!

There isn’t a single good formula for what to focus on or highlight when crafting this story, but there are a few exercises that we’ve seen be effective. You can do them in the order below, and by the end, you should have a concise, authentic, pithy narrative that you can shout proudly to the world.

We’ve found that it makes sense to do these exercises in the order below. First, you’re going to go with your gut and craft a high-level story. Then you’ll embellish it with details that make you unique and with anecdotes from your employees. And finally, you’ll edit it down into something crisp. As you work, you can use the list from the “What constitutes brand” section above as a reference. Note that, in general, your story plus details shouldn’t have more than 3 distinct ideas total or it’ll start to feel a bit all over the place.

Exercise 1 – The story itself

Imagine you have a smart, technical friend you respect but who doesn’t know anything about your company’s space. Quick, how would you describe what your company does and why it matters to them? Write it down (and target 5-6 sentences… but don’t worry too much about editing it yet… we’ll do that later).

If you’re feeling a bit stuck, here are some questions to get you started — think about how you might answer if your friend were asking each of these:

  • Why does the company exist/what does your product do, and why does that matter?
  • Why are you going to succeed where others have failed?
  • Why does the company matter to you personally?
  • What do you know that no one else does about your space?
  • What is your company doing that no one else is doing, and why does that matter?

As you do this exercise, note that when talking to your friend, you dispense with flowery language and explain things succinctly and clearly, in simple terms! And that’s the point — the audience you’re selling to is no different from your friend and probably shares the same cynicism about disingenuous branding.

Exercise 2 – The unique embellishments

Once you have the story you came up with above, which will likely be at a pretty high level, it’s time to drill down into the details that make you special. These details will likely be 2nd order, in the sense that they won’t be as broad or all-encompassing as the attributes that came to mind in the first exercise, but they might still be special and unique and worth noting.

Some examples of unique embellishments can be:

  • Your tech stack
    • Do you use any cutting-edge programming languages that one might not often see being used in production? If so, it might be a bit polarizing but attract the community around that language. More on the value of polarization when it comes to unique embellishments below.
  • Unfettered access to some type of user/a specific group you care about that your product impacts
    • Do you build products for Hollywood? Or for VCs? Or for schools? Some portion of your candidates, depending on their interests outside of work or their future career ambitions, are going to be really excited that they’ll get more direct access to users who operate in these worlds.
  • Unique lifestyle/work culture stuff like working remotely or pair programming
    • E.g. 37Signals and Pivotal respectively
  • Access to a ton of data/ability to work on massively distributed systems
    • E.g. even in its early days, Twitch had almost as much ingest as YouTube, and this was a meaningful selling point to candidates who wanted to work at scale but didn’t necessarily want to work at FAANG

The surprising value of polarization

Today’s job seeker is in equal parts jaded and savvy, and we’re currently in a labor market climate where talent has the power. The latter makes branding especially important, and by now, engineers have been told all manners of generalities about how much impact they’re going to have if they join a small startup and how whatever you’re working on is going to change the world… to the point where these things have become cliches that shows like Silicon Valley deride with impunity. To avoid cliches like this, think about what TRULY makes you special, and even if it’s a bit polarizing, own it. It’s your story, and the more honest you are about who you are and what you do, the more trust you’ll build with your audience and the more they’ll want to engage.

Another way to say this is that the most memorable stories might be shrouded in a bit of controversy. That’s not to say that you have to be controversial or contrive it when it isn’t there, but if you do operate in a space or have some aspect to your culture or tech stack that not everyone agrees with, you might find that the resulting self-selection among candidates can work to your advantage. Below are some examples of polarizing narratives.

  • Your work style. Some companies really value work-life balance, whereas others exalt burning the midnight oil. Some run on chaos and some take a more orderly approach. Some work by the book, and some choose more of a bulldoze your way to success and ask forgiveness rather than permission approach. An example of the latter is Uber — for a long time, their culture was known for a take-no-prisoners approach to getting things done, and this approach has a certain type of appeal for the right people.
  • Your tech stack. Certainly choosing your tech stack is, first and foremost, an engineering decision, but this decision has some implications for hiring, and choosing a fringe or niche stack or language can be a great way to attract the entire community around it. The more culty a language, the more fiercely passionate its acolytes will be about working for you (e.g. Rust… though by the time this guide comes out it might be something else!). Note that it doesn’t have to be thaaaat fringe of a language choice as long as there’s a tight-knit community around it, e.g. C#.
  • Your engineering culture. Do you subscribe to any particular type of methodology that might be controversial, e.g. are you super into TDD? Are you adamant about rolling all your own everything?

Note that there is no right or wrong here — to loosely paraphrase Tolstoy, every startup is broken in its own way, and one saying we’ve heard is that, especially during the early days, the only thing you can do wrong is not own who you are. If you misrepresent how you work or make decisions, you’ll find yourself in one of two regrettable positions: either your hires will leave well before their time, or you’ll have a bunch of people marching in different directions or completely paralyzed and unable to choose the right course of action on their own.

Exercise 3 – Your employees’ unique perspective

As your team grows beyond you, you will find that your employees’ reasons for working for you are likely different than the answers to the questions above. Talking to them (or, if you don’t want to put them on the spot, having another team member do so) can surface gold for your narrative. In particular, when I was a recruiter, one of the most useful exercises I did was asking my client to introduce me to a very specific handful of engineers. Specifically, I was looking for people who 1) didn’t come to the company through personal referrals and 2) had a lot of good offers during their last job search. Why this mix of people? Because they’re the ones who, despite no personal connection to the company and despite having other options, actively chose to work for you! You’d be surprised what stories I heard, and they’re rarely just about the company mission. For instance, one candidate I spoke to was really excited about the chance to closely interact with investors because he wanted to start his own company one day. Another was stoked at the chance to use Go in production.

Sometimes you’ll be surprised by what you’ll hear because the people working at your company might be there for very different reasons than you, but these anecdotes help flesh out your narrative and make it feel a bit more personal and real.

Once you have a few choice tidbits from employees, ask yourself whether each one is somehow charming or unusual and whether it’s a reason that a lot of people would find compelling about your company. If it’s all of these things, it should likely make it in your narrative. If it’s not particularly original (e.g. short commute) it may not be worth calling out in your primary narrative, but it’s well worth repeating and telling once you actually interact with candidates.

The finished product

So, what should the finished product look like? At a minimum, it’ll be some concise, compelling copy that you can use in your job descriptions. Hopefully, though, it’s more than that. Hopefully it becomes a consistent refrain you and your team use during calls, interviews, maybe investor pitches… a way to highlight all the things you’re most proud of about your company and the things that make you special… without having to reinvent the wheel every time.

Is brand the be-all and end-all of hiring? Not quite.

In closing, I’d like to leave you with a word or two of encouragement. Sure, as you saw in this post, brand matters. Having a great story will get you somewhere, but it won’t get you everywhere with candidates, and the truth is that the more established you are, the more candidates will come to you. But… there’s one piece of data we found in our interviewing and hiring adventures that flies in the face of brand completely dictating your hiring destiny.

When we looked at how often candidates wanted to work at companies after interviewing there as a function of brand strength, its impact was not statistically significant. In other words, we found that brand strength didn’t matter at all when it came to either whether the candidate wanted to move forward or how excited the candidate was to work at the company. This was a bit surprising, so I decided to dig deeper. Maybe brand strength doesn’t matter overall but matters when the interviewer or the questions they asked aren’t highly rated? In other words, can brand buttress less-than-stellar interviewers? Not so, according to our data. Brand didn’t matter even when you corrected for interviewer quality. In fact, of the top 10 best-rated companies on our platform, half have no brand to speak of, 3 are mid-sized YC companies that command respect in Bay Area circles but are definitely not universally recognizable, and only 2 have anything approaching household name status.

So, what’s the takeaway here? Maybe the most realistic thing we can say is that while brand likely matters a lot for getting candidates in the door, once they’re in, no matter how well-branded you are, they’re yours to lose.

So, take heart.

Portions of this post will also appear in part in an upcoming, comprehensive Guide to Technical Recruiting and Hiring published by Holloway (where you can sign up if you’d like to read, review, or contribute to it).

1The 2019 Stack Overflow Developer Survey recently came out, and it turns out that in the US the most important thing for engineers is office/company culture… which realistically refers to the eng team culture because that’s where engineers will spend most of their time. Anything you can do to call yours out (assuming, well, that it’s good) is going to be a win.

Featured

Uncategorized

You probably don’t factor in engineering time when calculating cost per hire. Here’s why you really should.

Posted on April 24th, 2019.

Whether you’re a recruiter yourself or an engineer who’s involved in hiring, you’ve probably heard of the following two recruiting-related metrics: time to hire and cost per hire. Indeed, these are THE two metrics that any self-respecting recruiting team will track. Time to hire is important because it lets you plan — if a given role has historically taken 3 months to fill, you’re going to act differently when you need to fill it again than if it takes 2 weeks. And, traditionally, cost per hire has been a planning tool as well — if you’re setting recruiting budgets for next year and have a headcount in mind, seeing what recruiting spent last year is super helpful.

But, with cost per hire (or CPH, as I’ll refer to it from now on in this post) in particular, there’s a problem. CPH is typically blended across ALL your hiring channels and is confined to recruiting spend alone. Computing one holistic CPH and confining it to just the recruiting team’s spend hides problems with your funnel and doesn’t help compare the quality of all your various candidate sources. And, most importantly, it completely overlooks arguably the most important thing of all — how much time your team is actually spending on hiring. Drilling down further, engineering time, specifically, despite being one of the most expensive resources, isn’t usually measured as part of the overall cost per hire. Rather, it’s generally written off as part of the cost of doing business. The irony, of course, is that a typical interview process puts the recruiter call at the very beginning of the process precisely to save eng time, but if we don’t measure and quantify the eng time spent, then we can’t really save it.

For what it’s worth, the Twitterverse (my followers are something like 50/50 engineers and recruiters) seems to agree. Here are the results (and some associated comments) of a poll I conducted on this very issue:

And yet, most of us don’t do it. Why? Is it because it doesn’t measure the things recruiters care about? Or is it because it’s hard? Or is it because we can’t change anything, so why bother? After all, engineers need to do interviews, both phone screens and onsites, and we already try to shield them as much as possible by having candidates chat with recruiters or do coding challenges first, so what else can you do?

If you’d like to skip straight to how to compute a better, more inclusive CPH, you can skip down to our handy spreadsheet. Otherwise read on!

I’ve worked as both an engineer and an in-house recruiter before founding interviewing.io, so I have the good fortune of having seen the limitations of measuring CPH, from both sides of the table. As such, in this post, I’ll throw out two ways that we can make the cost per hire calculation more useful — by including eng time and by breaking it out by candidate source — and try to quantify exactly why these improvements are impactful… while building better rapport between recruiting and eng (where, real talk, relationships can be somewhat strained). But first, let’s talk about how CPH is typically calculated.

How is CPH typically calculated, and why does it omit eng time?

As I called out above, the primary purpose of calculating cost per hire is to plan the recruiting department’s budget for the next cycle. With that in mind, below is the formula that you’ll find if you google how to calculate cost per hire (pulled from Workable):

To figure out your CPH, you add up all the external and internal costs incurred during a recruiting cycle and divide by the number of hires.

“External” refers to any money paid out to third parties. Examples include job boards, tools (e.g. sourcing, assessment, your ATS), agency fees, candidate travel and lodging, and recruiting events/career fairs.

“Internal” refers to any money you spend within your company: recruiting team salaries, as well as any employee referral bonuses paid out over the course of the last cycle.

Note that internal costs don’t include eng salaries, as engineering and recruiting teams typically draw from different budgets. Hiring stuff is the domain of the recruiting team, and they pay for it out of their pockets… and engineers pay for… engineering stuff.
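In code, this standard calculation is just a sum and a division. Here’s a minimal sketch with made-up figures:

```python
# A sketch of the standard cost-per-hire formula described above (made-up figures).
external_costs = 120_000   # job boards, tools, agency fees, candidate travel, events
internal_costs = 300_000   # recruiting team salaries and referral bonuses
hires = 30

cph = (external_costs + internal_costs) / hires
print(f"${cph:,.0f} per hire")  # $14,000 per hire
```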

What’s problematic is that, while it’s called “cost per hire,” this metric actually tells us what recruiting spends rather than what’s actually being spent as a whole. While tracking recruiting spend makes sense for budget planning, this metric, because of its increasingly inaccurate name, often gets pulled into something it ironically wasn’t intended for: figuring out how much the company is actually spending to make hires.

Why does factoring in engineering time matter?

As you saw above, the way we typically compute CPH is inaccurate because it doesn’t factor in any time or resource expenditure outside the recruiting team (with eng being the biggest one). But does engineering time really matter?

Yes, it matters a lot, for the following three reasons:

  1. Way more eng time than recruiting time goes into hiring (as you’ll see in this post!)
  2. Eng time is more expensive
  3. Eng time expenditure can vary wildly by channel

To establish that these things are (probably) true, let’s look at a typical eng hiring funnel.1 For the purposes of this exercise, we’ll start the funnel at the recruiter screen and assume that the costs of sourcing candidates are fixed.2

The green arrows are conversion rates between each step (e.g. 50% of people who get offers accept and get hired). The small gray text at the bottom of each box is how long that step takes for an engineer or recruiter (or both, in the case of an onsite). And the black number is how many times that needs to happen to ultimately make 1 hire, based on the green-arrow conversion rates.

So, with that in mind, to make one hire, let’s see how much time both eng and recruiting need to spend to make 1 hire and how much that time costs. Note that I’m assuming $100/hour is a decent approximation for recruiting comp and $150/hour is a decent approximation for eng comp.

Is eng time spent on recruiting really that costly?

Based on the funnel above, here’s the breakdown of time spent by both engineering and recruiting to make 1 hire. The parentheticals next to each line of time spent are based on how long that step takes times the number of times it needs to happen.

RECRUITING – 15 total hours
10 hours of recruiter screens (20 screens needed * 30 min per screen)
4 hours of onsites (4 onsites needed * 1 hour per onsite)
1 hour of offers (2 offer calls needed * 30 min per offer call)

To make 1 hire, it takes 15 recruiting hours or $1500.

ENGINEERING – 40 total hours
16 hours of phone screens (16 screens needed * 1 hour per screen)
24 hours of onsites (4 onsites needed * 6 hours per onsite)

For 1 hire, that’s a total of 40 eng hours, and on the face of it, it’s $6,000 of engineering time, but there is one more subtle multiplier on eng time that doesn’t apply to recruiting time that we need to factor in. Every time you interrupt an engineer from their primary job, which is solving problems with code, it takes time to refocus and get back into it. If you’re an engineer, you know this deep in your bones. And if you’re not, interruptions are very likely something you’ve heard your engineering friends decry… because they’re so painful and detrimental to continued productivity. Back when I was writing code on a regular basis, it would take me 15 minutes of staring at my IDE (or, if I’m honest, occasionally reading Hacker News or Reddit) to let my brain ease back into doing work after coming back from an interview. And it would take me 15 minutes before an interview to read a candidate’s resume and get in the mindset of whatever coding or design question I was going to ask. I expect my time windows are pretty typical, so it basically ends up being a half hour of ramping up and back down for every hour spent interviewing.

Therefore, with ramp-up and ramp-down time in mind, it’s more like $9,000 in eng hours.3

Ultimately, for one hire, we’re paying a total of $10,500, but eng incurs 6X the cost that recruiting does during the hiring process.
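Here’s a minimal sketch of that funnel math in Python, using the conversion rates, step durations, hourly rates, and ramp-up assumption from this example (the function name and parameters are ours, for illustration). The last two calls preview the two scenarios discussed next.

```python
# A sketch of time-based cost per hire for the example funnel above.
RECRUITER_RATE = 100   # $/hour, rough approximation
ENG_RATE = 150         # $/hour, rough approximation
ENG_RAMP = 0.5         # extra ramp-up/down hours per hour of eng interviewing

def cost_per_hire_from_funnel(tps_to_onsite: float = 0.25,
                              offer_accept: float = 0.50) -> float:
    # Work backwards from 1 hire through the funnel's conversion rates.
    offers = 1 / offer_accept                # offer calls per hire
    onsites = offers / 0.50                  # 50% of onsites convert to offers
    tech_screens = onsites / tps_to_onsite   # TPS-to-onsite conversion
    recruiter_screens = tech_screens / 0.80  # 80% of recruiter screens convert to TPS

    recruiting_hours = recruiter_screens * 0.5 + onsites * 1 + offers * 0.5
    eng_hours = tech_screens * 1 + onsites * 6
    eng_hours_with_ramp = eng_hours * (1 + ENG_RAMP)

    return recruiting_hours * RECRUITER_RATE + eng_hours_with_ramp * ENG_RATE

print(f"${cost_per_hire_from_funnel():,.0f}")                    # baseline: $10,500
print(f"${cost_per_hire_from_funnel(tps_to_onsite=0.50):,.0f}")  # doubled TPS-to-onsite: $8,200
print(f"${cost_per_hire_from_funnel(offer_accept=0.25):,.0f}")   # halved offer acceptance: $21,000
```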

Why does breaking out cost per hire by source matter?

So, hopefully, I’ve convinced you that engineering time spent on hiring matters and that it’s the biggest cost you incur. But, if there’s nothing we can do to change it, and it’s just the cost of doing business, then why factor it in to CPH calculations? It turns out that eng time spent IS a lever you can pull, and its impact becomes clear when you think about cost per hire by candidate source.

To make that more concrete, let’s take a look at 2 examples. In both cases, we’ll pretend that one of our candidate sources converts differently than the overall rate at some step in the funnel. Then we’ll try to guess what the financial implications are… and then actually calculate them. You might be surprised by the results.

What happens when you increase TPS to onsite conversion to 50%?

As you can see in the funnel above, a decent TPS to onsite conversion rate is 25%. Let’s say one of your sources could double that to 50% (by doing more extensive top-of-funnel filtering, let’s say). What do you think this will do to cost per hire?

In this model, we’re spending a total of 10 recruiting hours (worth $1000) and 32 eng hours (worth $7200).4 Unlike in the first example, we’re now paying a total of $8200 to make a hire.

In this case, you’ve reduced your recruiting time spent by 30% and your eng time spent by 20%, ultimately saving $2300 per hire. If one of your sources can get you this kind of efficiency gain, you probably want to invest more resources into it. And though doubling conversion from tech screen to onsite sounds great and perhaps something you would have known already about your source, without computing the cost per hire for this channel, it’s not intuitively clear just how much money a funnel improvement can save you, end to end.

What happens when you cut your offer acceptance rate in half?

Another possibility is that one of your sources does pretty well when it comes to candidate quality all the way to offer, but for some reason, those candidates are twice as hard to close. In this scenario, you double both the eng and recruiting time expenditure (from $10,500 to $21,000 in total), ultimately paying an extra $10,500 per hire for this source, one you’ll likely want to deallocate resources from going forward.5

In either of the examples above, until you break out CPH by source and see exactly what each is costing you, it’s a lot harder to figure out how to optimize your spend.

How to actually measure cost per hire (and include eng time of course!)

The usual way to calculate cost per hire is definitely useful for setting recruiting budget, as we discussed above, but if you want to figure out how much your whole company is actually spending on hiring, you need to factor in the most expensive piece — engineering time.

To do this, we propose a different metric, one that’s based on time spent by your team rather than overall salaries and fixed costs. Let’s call it “cost per hire prime” or CPH prime.

CPH prime doesn’t factor in fixed costs like salaries or events, which you can still do using the formula above… but it is going to be instrumental in helping you get a handle on what your spend actually looks like and will help you compare different channels.

To make your life easier, we’ve created a handy spreadsheet for you to copy and then fill in your numbers, like so:

As you can see, once you fill in the highlighted cells with your own conversion numbers (and optionally your hourly wages if yours differ much from our guesses), we’ll compute CPH prime for you.

And because we’re a business and want you to hire through us, we’ve included the average savings for companies hiring through our platform. We provide two big value-adds: we can pretty drastically improve your TPS to onsite conversion — about 65% of our candidates pass the tech screen at companies on average. From there, they get offers and accept them at the same rate as you’d see in your regular funnel.

Closing thoughts on building bridges between eng and recruiting

So, why does being cognizant of eng time in your CPH calculations matter? I’ve already kind of beaten it into the ground that it’s the biggest cost sink. However, there’s another, more noble reason to care about eng time. In my career, having sat on all different sides of the table, I’ve noticed one unfortunate, inalienable truth: engineering and recruiting teams are simply not aligned.

Engineers tend to harbor some resentment toward recruiters because recruiters are the arbiters of how eng spends its time on hiring, without a set of clear metrics or goals to help protect that time.

Recruiters often feel some amount of resentment toward engineers who tend to be resistant to interruptions, toward putting in the time to provide meaningful feedback about candidates so that recruiting can get better, and toward changes in the process.

In our humble opinion, much of the resentment on both sides could be cured by incorporating recruiting and engineering costs together in a specific, actionable way that will reduce the misalignment we’re seeing. Recruiters tend to hold the cards when it comes to hiring practices, so we’d love to see them take the lead to reach across the aisle by proactively factoring in eng time spent during hiring and ultimately incorporating recruiting and eng costs together in one metric that matters. Once that’s in place, recruiting can use the data they gather to make better decisions about how to use eng time, and in the process, rebuild much of the rapport and love that’s lost between the two departments.

1We’re basing these numbers on a mix of ATS reporting (Lever’s recruiting metrics report in particular) and what we’ve heard from our customers.

2We’re assuming sourcing costs are fixed for purposes of simplicity and because this post is largely about the importance of eng time factored in to the funnel. Of course, if you have channels that reduce sourcing time significantly, you’ll want to weigh that when deciding its efficacy.

3Really though, the value of an hour of work for an engineer is intangible and much higher than an hourly wage. There ARE inefficiencies and overhead to having a larger staff, not every hour is effective, and most likely it’s your best people who are conducting interviews. The reality is that the money spent on salaries is probably only a fraction of the true cost to the company, particularly for engineers (as opposed to recruiters).

4Here’s us showing our work in figuring out how much recruiting and eng time it takes to make a hire when your TPS to onsite conversion rate is 50%:
RECRUITING – 10 total hours or $1000
5 hours of recruiter screens (10 screens needed * 30 min per screen)
4 hours of onsites (4 onsites needed * 1 hour per onsite)
1 hour of offers (2 offer calls needed * 30 min per offer call)
ENGINEERING – 32 total hours or $7200
8 hours of phone screens (8 screens needed * 1 hour per screen)
24 hours of onsites (4 onsites needed * 6 hours per onsite)

5Here’s us showing our work in figuring out how much recruiting and eng time it takes to make a hire when you cut your offer acceptance rate in half:
RECRUITING – 30 total hours or $3000
20 hours of recruiter screens (40 screens needed * 30 min per screen)
8 hours of onsites (8 onsites needed * 1 hour per onsite)
2 hours of offers (4 offer calls needed * 30 min per offer call)
ENGINEERING – 80 total hours or $18,000
32 hours of phone screens (32 screens needed * 1 hour per screen)
48 hours of onsites (8 onsites needed * 6 hours per onsite)