We ran the numbers, and there really is a pipeline problem in eng hiring.

Posted on December 3rd, 2019.

If you say the words “there’s a pipeline problem” to explain why we’ve failed to make meaningful progress toward gender parity in software engineering, you probably won’t make many friends (or many hires). The pipeline problem argument goes something like this: “There aren’t enough qualified women out there, so it’s not our fault if we don’t hire them.”

Many people don’t like this reductive line of thinking because it ignores the growing body of research that points to unwelcoming environments that drive underrepresented talent out of tech: STEM in early education being unfriendly to children from underrepresented backgrounds, lack of a level playing field and unequal access to quality STEM education (see this study on how few of California’s schools offer AP Computer Science for instance), hostile university culture in male-dominated CS programs, biased hiring practices, and ultimately non-inclusive work environments that force women and people of color to leave tech at disproportionately high rates.1

However, because systemic issues can be hard to fix (they can take years, concerted efforts across many big organizations, and even huge socioeconomic shifts), the argument against the pipeline problem tends to get reduced to “No, the candidates are there. We just need to fix the bias in our process.”

This kind of reductive thinking is also not great. For years, companies have been pumping money and resources into things like unconscious bias training (which has been shown not to work), anonymizing resumes, and all sorts of other initiatives, and the numbers have barely moved. It’s no wonder tech eventually succumbs to a “diversity fatigue” that comes from trying to make changes and not seeing results.

We ran the numbers and learned that there really IS a pipeline problem in hiring — there really aren’t enough women to meet demand… if we keep hiring the way we’re hiring. Namely, if we keep over-indexing on CS degrees from top schools, and even if we remove unconscious bias from the process entirely, we will not get to gender parity. And yes, there is a way to surface strong candidates without relying on proxies like a college degree. We’ll talk about that toward the end.

Our findings ARE NOT meant to diminish the systemic issues that make engineering feel unwelcome to underrepresented talent, nor to diminish our ability to work together as an industry to effect change — to enact policy changes like access to CS in public schools, for instance. Our findings ARE meant to empower those individuals already working very hard to make hiring better who find themselves frustrated because, despite their efforts, the numbers aren’t moving. To those people, we say, please don’t lose sight of the systemic problems, but in the short term, there are things you can do that will yield results. We hope that, over time, by addressing both systemic pipeline issues and biases, we will get to critical mass of women from all backgrounds in engineering positions, and that these women, in turn, will do a lot to change the pipeline problem by providing role models and advocates and by changing the culture within companies.

Lastly, an important disclaimer before we proceed. In this post, we chose to focus on gender (and not on race). This decision was mostly due to the dearth of publicly available data around race and intersectionality in CS programs/bootcamps/MOOCs.2 While this analysis does not examine race and intersectionality, it is important to note that we recognize: 1) not all women have the same experience in their engineering journey, and 2) tech’s gender disparity is no more important than the lack of representation of people of color in engineering. We will revisit these subjects in a future post.

The percentage of women engineers is low and likely worse than reported

It’s very hard to have a cogent public conversation about diversity when there is no standardization around which statistic means what. As this post is about engineering specifically, we needed to find a way to get at how many women engineers are actually in the market and work around two big limitations in how FAAMNG (Facebook, Amazon, Apple, Microsoft, Netflix, and Google) report their diversity numbers.

The first limitation is that FAAMNG’s numbers are global. Why does this matter? It turns out that other countries, especially those where big companies have their offshore development offices, tend to have a higher percentage of female developers.3 In India, for instance, about 35% of developers are women; in the U.S., it’s 16%. Why are these numbers reported globally? The cynic in me says that it’s likely because the U.S. numbers, on their own, are pretty dismal, and these companies know it.4 To account for this limitation and get at the U.S. estimate, we did some napkin math and conservatively cut each company’s female headcount by 20%.

The second limitation is that reported numbers are based on “technical roles,” which Facebook at least defines very broadly: “A position that requires specialization and knowledge needed to accomplish mathematical, engineering, or scientific related duties.” I expect the other giants use something similar. What are the implications of this fairly broad definition? Given that product management and UX design count as technical roles, we did some more napkin math and removed ~20% to correct for PMs and designers.4
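To make the napkin math concrete, here’s a minimal sketch in Python. It treats both corrections as straight multipliers on the reported share of women in technical roles, which is a simplification of what we actually did, and the 23% input is a made-up example rather than any specific company’s reported figure.

def estimate_us_women_engineers(reported_women_pct):
    """Adjust a global 'technical roles' figure down to U.S. software engineers."""
    us_correction = 0.80    # cut female headcount ~20% to account for offshore offices
    role_correction = 0.80  # remove ~20% to strip out PMs and designers
    return reported_women_pct * us_correction * role_correction

print(estimate_us_women_engineers(23.0))  # a hypothetical 23% shrinks to ~14.7%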

With these limitations in mind, below is a graph comparing the makeup of the U.S. population to its representation in tech at FAAMNG companies where said data was available, as well as an estimate of women in engineering specifically.

There's still a lot of work to do if we want to reach gender parity in engineering

 

As you can see, once we correct for U.S.-only numbers (and for whether those women are actually software engineers), we have a long way to go if we want to reach gender parity in engineering.

Is it a pipeline problem?

So, are there just not enough qualified women in the hiring pool? It turns out that we’re actually hiring women at pretty much the same rate that women are graduating with CS degrees from four-year universities — out of the 71,420 students who graduated with a CS degree in 2017, 13,654, or ~20%, were women.5 So maybe we just need more women to get CS degrees?

Top tech companies and their non-profit arms have been using their diversity and inclusion budgets to bolster education initiatives, in the hopes that this will help them improve gender diversity in hiring. Diversity initiatives started taking off in earnest in 2014, and in the 4 years since, enrollment in CS programs grew by about 60%. That’s not anywhere near enough to get to gender parity.

And even if we could meaningfully increase the number of women enrolling in CS programs overall, top companies have historically tended to favor candidates from elite universities (based on some targeted LinkedIn Recruiter searches, 60% of software engineers at FAAMNG hold a degree from a top 20 school). You can see enrollment rates of women in 3 representative top computer science programs below. Note that while the numbers are mostly going up, growth is linear and not very fast.

Growth in women's undergraduate computer science enrollment at top schools

Sources: UC-Berkeley, MIT (1 and 2), and Stanford enrollment data

To see if it’s possible to reach gender parity if we remove unconscious bias but keep hiring primarily from top schools, let’s build a model. For the purposes of this model let’s focus solely on new jobs — if companies want to meet their diversity goals, at a minimum they need to achieve parity on any new jobs they’ve created. Based on the US BLS’s projections, the number of software engineering jobs is estimated to increase by 20% by 2028 (or about 1.8% annually). Today, the BLS estimates there are about 4 million computer-related jobs. This projects to about 70,000 new jobs created this year, increasing to 85,000 new jobs created in 2028.

If the goal is to hit gender parity in the workforce, our goal should be to have 50% of these new seats filled by women.

To see if this is possible, let’s project the growth of the incoming job pool over the same timeframe. Based on NCES’ 2017 survey, computer science graduates have grown annually anywhere between 7% and 11% this decade. Let’s optimistically assume this annual growth rate persists at 10%. Let’s also assume that the percentage of graduates who are women remains at 20%, which has been true for the last 15 years. But, there are some gotchas.

First, there’s no guarantee that the seats earmarked for women actually get filled by women, particularly in a world where male CS graduates will continue to outnumber female ones 4-to-1. Second, not all of these jobs will be entry-level, so some portion of these jobs will be pulling from an already female-constrained pool of senior candidates. Finally, there’s no guarantee that traditional 4-year colleges will be able to support the projected influx of computer science candidates, particularly from the top-tier universities that companies usually prefer. Below, we graph the net new seats we’d need to fill if women held half of software engineering jobs (blue line) vs. how many women are actually available to hire if we keep focusing largely on educational pedigree in our recruiting efforts (red line). As you can see, it’s not possible to hit our goals, whether or not we’re biased against women at any point in the hiring process.6

 
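If you’d like to play with the model yourself, here’s a minimal sketch of it in Python. The inputs all come from the numbers cited above and in the appendix (about 4 million computer-related jobs growing ~1.8% annually, 71,420 CS grads in 2017 growing 10% annually, 20% of grads being women, and an assumed ~25% of grads coming from top-tier schools); the year-by-year loop is our simplification of the full model.

JOBS_2019 = 4_000_000   # BLS estimate of computer-related jobs today
JOB_GROWTH = 0.018      # ~1.8% annual growth through 2028 (BLS projection)
GRADS_2017 = 71_420     # CS graduates in 2017 (NCES)
GRAD_GROWTH = 0.10      # optimistic annual growth in CS graduates
PCT_WOMEN = 0.20        # share of CS grads who are women
PCT_TOP_TIER = 0.25     # assumed share of CS grads from top-tier schools

jobs = JOBS_2019
grads = GRADS_2017 * (1 + GRAD_GROWTH) ** 2  # project 2017 grads out to 2019
for year in range(2019, 2029):
    seats_for_women = jobs * JOB_GROWTH * 0.5      # parity on net new jobs
    women_top_tier = grads * PCT_WOMEN * PCT_TOP_TIER
    print(f"{year}: need {seats_for_women:>7,.0f} women, "
          f"top-tier schools produce {women_top_tier:>6,.0f}")
    jobs *= 1 + JOB_GROWTH
    grads *= 1 + GRAD_GROWTH

Even in this rosy version, the supply line never comes close to the demand line, which is the red-vs-blue gap in the chart above.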

So if the pipeline is at least partially to blame, what can we do?

You saw above that enrollment in undergraduate computer science programs among women is growing linearly. Rising tuition costs coupled with 4-year universities’ inability to keep up with demand for computer science education have forced growing numbers of people to go outside the system to learn to code.

Below is a graph of the portion of developers who have a bachelor’s degree in computer science and the portion of developers who are at least partially self-taught, according to the Stack Overflow Developer Surveys from 2015 to 2019. As you can see, in 2015, the numbers were pretty close, and then, with the emergence of MOOCs, there was a serious spurt, with more likely to come.

Education sources for software engineers

Sources: Stack Overflow Developer Surveys for 2015, 2016, 2018, and 2019

The rate of change in alternative, more affordable education is rapidly outpacing university enrollment. Unlike enrollment in traditional four-year schools, enrollment in MOOCs and bootcamps is growing exponentially.

In 2015 alone, over 35 million people signed up for at least one MOOC course, and by 2018, MOOCs collectively had over 100M students. Of course, many people treat MOOCs as a supplement to their existing educational efforts or career rather than relying on MOOCs entirely to learn to code. This is something we factored into our model.

Despite their price tag (most charge on the order of $10-20K), bootcamps seem like a rational choice when compared to the price of top colleges. Since 2013, bootcamp enrollment has grown 9X, with a total of 20,316 grads in 2018. Though these numbers represent enrollment across all genders7 and the raw number of grads lags behind CS programs (for now), below you can see that the portion of women graduating from bootcamps is also on the rise and that graduation from online programs has actually reached gender parity (as compared to 20% in traditional CS programs).

Source: Course Report’s Data Dive: Gender in Coding Bootcamps

Source: Course Report’s Data Dive: Gender in Coding Bootcamps

Of course, one may rightfully question the quality of grads from alternative education programs. We factored in bootcamp placement rates in building our updated model below.

Outside of alternative education programs, the most obvious thing we can do to increase the supply of qualified women engineers is to expand our pipeline to include strong engineers who don’t hail from top schools or top companies.

In previous posts, we looked at the relationship between interview performance and traditional credentialing and found that participation in MOOCs mattered almost twice as much for interview performance as whether the candidate had worked at a top company, while attendance at a top school was the least predictive attribute and sometimes not predictive at all. And some of my earlier research indicates that the most predictive attribute of a resume is the number of typos and grammatical errors (more is bad), rather than top school or top company. In that particular study, experience at a top company mattered a little, and a degree from a top school didn’t matter at all.

But, even if lower-tier schools and alternative programs have their fair share of talent, how do we surface the most qualified candidates? After all, employers have historically leaned so hard on 4-year degrees from top schools because they’re a decent-seeming proxy. Is there a better way?

But culling non-traditional talent is hard… that’s why we rely on pedigree and can’t change how we hire!

In this brave new world, where we can write code together remotely and where we can collect data and reason about it, technology has the power to free us from relying on proxies, so that we can look at each individual as a unique, indicative bundle of performance-based data points. At interviewing.io, we make it possible to move away from proxies by looking at each interviewee as a collection of data points that tell a story, rather than as a largely signal-less document a recruiter looks at for 10 seconds before making a largely arbitrary decision and moving on to the next candidate.

Of course, this post lives on our blog, so I’ll take a moment to plug what we do. In a world where there’s a growing credentialing gap and where it’s really hard to separate a mediocre non-traditional candidate from a stellar one, we can help. interviewing.io helps companies find and hire engineers based on ability, not pedigree. We give out free mock interviews to engineers, and we use the data from these interviews to identify top performers, independently of how they look on paper. Those top performers then get to interview anonymously with employers on our platform (we’ve hired for Lyft, Uber, Dropbox, Quora, and many other great, high-bar companies).

And this system works. Not only are our candidates’ conversion rates 3X the industry standard (about 70% of our candidates ace their phone screens, as compared to 20-25% in a typical, high-performing funnel), but about 40% of the hires made by top companies on our platform have come from non-traditional backgrounds. Because of our completely anonymous, skills-first approach, we’ve seen an interesting phenomenon happen time and time again: when an engineer unmasks at the end of a successful interview, the company in question realizes that the candidate who just aced their phone screen was one whose resume was sitting at the bottom of the pile all along (we recently had someone get hired after having been rejected by that same company 3 times based on his resume!).

Frankly, think of how much time and money you’re wasting competing for only a small pool of superficially qualified candidates when you could be hiring overlooked talent that’s actually qualified. Your CFO will be happier, and so will your engineers. Look, whether you use us or something else, there’s a slew of tech-enabled solutions that are redefining credentialing in engineering, from asynchronous coding assessments like CodeSignal or HackerRank to solutions that vet candidates before sending them to you, like Triplebyte, to solutions that help you vet your inbound candidate pool, like Karat.

And using these new tools isn’t just paying lip service to a long-suffering D&I initiative. It gets you the candidates that everyone in the world isn’t chasing without compromising on quality, helps you make more hires faster, and just makes hiring fairer across the board. And, yes, it will also help you meet your diversity goals. Here’s another model.

How does changing your hiring practices improve the pipeline?

Above, you saw our take on status quo supply and demand of women engineers — basically how many engineers are available to hire using today’s practices versus how many we’d need to actually reach gender parity. Now, let’s see what it looks like when we include candidates without a traditional pedigree (yellow line).

 

As you can see, broadening your pipeline isn’t a magic pill, and as long as demand for software engineers continues to grow, it’s still going to be really hard, systemic changes to our society notwithstanding. If we do make these changes, however, the tech industry as a whole can accelerate its path toward gender parity and potentially get there within a decade.

What about specific companies? An interactive visualization.

So far we’ve talked about trends in the industry as a whole. But how do these insights affect individual employers? Below is an interactive model where you can visualize when Google, Facebook, or your company (you can plug in your own hiring numbers) will be able to hit their goals based on current hiring practices versus the more inclusive ones we advocate in this post. Unlike the industry-wide model, this visualization builds in the idea of candidate reach, as well as hire rates — one company can’t source and hire ALL the women (as much as they might want to). Of course, the stronger your brand, the higher your response rates will be.

We made some assumptions about response rates to sourcing outreach for both Google and Facebook. Specifically, we guessed a 60%-70% response rate for these giants based on the strength of their brand and their army of sourcers — when those companies reach out and tenaciously follow up, you’ll probably respond eventually.8 We also made some assumptions about their hire rates (5-10% of interviewed candidates). You can see both sets of assumptions below. And you can see that even with all the changes we propose, in our model, Google and Facebook will still not get to gender parity!

We also included a tab called “Your company” where you can play around with the sliders and see how long it would take your company to get to gender parity/whether it’s possible. There, we made much more conservative assumptions about response rates!

As you can see, for the giants, getting to gender parity is a tall order even with broadening your pipeline to include non-traditional candidates. And while it may be easier for smaller companies to get there without making drastic changes, when you’re small is exactly the right time to get fairer hiring into your DNA. It’s much harder to turn the ship around later on.

Conclusion

Regardless of whether you’re a giant or a small company, as long as hiring practices largely limit themselves to top schools, the status quo will continue to be fundamentally inefficient, unmeritocratic, and elitist, and reaching gender parity will remain impossible. Look, there are no easy fixes or band-aids when it comes to diversifying your workforce. Rather than continuing to look for hidden sources of women engineers (I promise, we’re not all hiding somewhere, just slightly out of reach) or trying to hire all the women from top schools, the data clearly shows that the only path forward is to improve hiring for everyone by going beyond top schools and hiring the best people for the job based on their ability, not how they look on paper.

I was recently in a pitch meeting where I got asked what interviewing.io’s mission is. I said that it’s to make hiring more efficient. The investors in the room were a bit surprised by this and asked, given that I care about hiring being fair, why that’s not the mission. First off, “fair” is hard to define and open to all manner of interpretation, whereas in an efficient hiring market, a qualified candidate, by definition, gets the job with the least amount of pain and missteps. In other words, meritocracy is a logical consequence of efficiency. Secondly, and even more importantly, while I firmly believe that most people at companies want to do “the right thing”, it’s much easier to actually do the right thing in a big organization when it’s also cheaper, better, and faster.

All that’s to say that there are no shortcuts, and the most honorable (and most viable) path forward is to make hiring better for everyone and then hit your diversity goals in the process (or at least get closer to them). Software engineering is supposed to be this microcosm of the American dream — anyone can put in the work, learn to code, and level up, right? Until we own our very conscious biases about pedigree and change how we hire, that dream is a hollow pipe.

Appendix: Model description and assumptions

To assess whether there exists a pipeline problem, we need to estimate the number of job openings that exist, as well as the number of recent female job market entrants that could feasibly fill those roles. If a pipeline problem does exist, the number of job openings would be greater than the number of female entrants.

For this analysis, we focused on new jobs created over the next 10 years and ignored openings in existing jobs due to attrition. Unfortunately, engineering has a significantly higher attrition rate for women than other industries, so the numbers are likely worse than they appear in our models.9

That said, if a company wants to meet its diversity goals, it seems reasonable to expect it to do so with jobs that don’t yet exist, rather than with existing jobs whose candidate pools we know are dominated by men.

Demand: Projected new jobs created

Tech industry net new jobs created = (# tech industry jobs prior year) x (annual growth rate)

Assumptions:

  • Current # tech industry jobs: ~4 million (BLS)
  • Annual growth rate: ~1.8% (BLS projects 20% growth by 2028)

Supply: Projected new women in job pool from top tier universities

New women in job pool = (# CS graduates prior year) x (annual CS graduate growth rate) x (% CS graduates that are women) x (% CS graduates from top tier schools)

Assumptions:

  • Current # CS graduates: 71,420 (NCES, 2017)
  • Annual CS graduate growth rate: 10% (NCES reports 7-11% annual growth this decade)
  • % CS graduates that are women: 20% (NCES)
  • % CS graduates from top tier schools: 25%

Supply: Projected new women in job pool beyond top tier universities

This represents female bootcamp graduates plus female CS graduates not from top schools.

New women in job pool from beyond top schools =
(# bootcamp graduates prior year) x (% bootcamp graduates that are women) x (annual bootcamp graduate growth rate)
+ (# CS graduates prior year) x (annual CS graduate growth rate) x (% CS graduates that are women) x (% CS graduates not from top tier schools)

Bootcamp assumptions:

  • Current # bootcamp graduates: 20,000 (Course Report)
  • % bootcamp graduates that are women: 40% (Switchup)
  • Bootcamp graduates growth rate: 10% (Course Report)
  • Placement rate: 50% (Switchup)
  • % CS graduates not from top tier schools: 75% (see assumption from “Supply: Projected new women in job pool from top tier universities”)

Assumptions for CS graduates beyond top tier universities are the same as those found under “Supply: Projected new women in job pool from top tier universities”, but taking the remaining 75% of CS graduates excluded there.

Company-specific Demand: Projected number of candidates needed to source for job openings

Number of women to source = (# Engineers employed prior year) x (% annual growth rate) x (% diversity goal) x (1 / hire rate) x (1 / sourcing response rate)

In practice, companies typically need to contact many people for any single job opening, since there is plenty of inherent variability in the sourcing and interview process. This line describes how many people your company would have to reach to fill all new job openings created, based on assumptions about your company’s hiring practices.
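Here’s the same formula as a minimal Python sketch. The example inputs are illustrative placeholders, not real company data (recall that for Google and Facebook we assumed a 60-70% sourcing response rate and a 5-10% hire rate).

def women_to_source(engineers_prior_year, annual_growth_rate, diversity_goal,
                    hire_rate, response_rate):
    """Women a company must reach out to in order to fill its new seats."""
    new_seats_for_women = engineers_prior_year * annual_growth_rate * diversity_goal
    return new_seats_for_women / hire_rate / response_rate

# e.g. 10,000 engineers growing 10%/year, parity goal on new seats,
# a 7.5% hire rate, and a 65% response rate -> ~10,256 women to contact
print(round(women_to_source(10_000, 0.10, 0.50, 0.075, 0.65)))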

1The Kapor Center has some great, data-rich reports on attrition in tech as well as systemic factors that contribute to the leaky pipeline. For a detailed study of attrition in tech, please see the Tech Leavers Study. For a comprehensive look at systemic factors that contribute to the leaky tech pipeline (and a series of long-term solutions), please see this comprehensive report. For a survey of the deplorable state of computer science education in California schools, please see this report.
2For instance, MIT keeps their race stats behind a login wall.
3According to a HackerRank study, “India, the United Arab Emirates, Romania, China, Sri Lanka, and Italy are the six countries with the highest percentage of women developers, [whereas the] U.S. came in at number 11.”
4We assumed a 1:7 ratio of PMs to engineers and a 1:7 ratio of designers to engineers on product teams. Removing PMs and designers from our numbers is not meant to imply that their representation in tech doesn’t matter but rather to scope this post specifically to software engineers.
5The National Center for Education Statistics doesn’t yet list graduation rates beyond 2017… the new numbers might be a bit higher, as you’ll see when you look at enrollment numbers for CS a bit further down in the post.
6This is not independent of the idea that deep, systemic issues within the fabric of our society (such as the ones we mention at the beginning of the post) are keeping women from entering engineering in droves. But, as I mentioned in the intro, laying the blame at the feet of these systemic issues entirely paralyzes us and prevents us from fixing the things we can fix.
7We couldn’t find a public record of women’s numbers by year and so are relying on graduation rates from Course Report as a proxy.
8These rates might seem high to recruiters reading this post. They might be high. We did try to correct for 2 things, both of which made our estimates higher: 1) this includes university and junior candidates, who tend to be way more responsive, and 2) this isn’t per sourcing attempt but over the lifetime of a candidate, so it includes followups, outreach to candidates who had previously been declined, and so on. However, if this still seems off, please write in and tell us!
9There are two good sources that look at attrition among women in tech. One is Women in Tech: The Facts, published by the National Center for Women & Information Technology (NCWIT). The other is the excellent Tech Leavers Study by the Kapor Center.

Thank you to the interviewing.io data & eng team for all the data modeling, projections, and visualizations, as well as everyone who proofread the myriad long drafts.


3 exercises to craft the kind of employer brand that actually makes engineers want to work for you

Posted on May 14th, 2019.

If I’m honest, I’ve wanted to write something about employer brand for a long time. One of the things that really gets my goat is when companies build employer brand by over-indexing on banalities (“look we have a ping pong table!”, “look we’re a startup so you’ll have a huge impact”, etc.) instead of focusing on the narratives that make them special.

Hiring engineers is really hard. It’s hard for tech giants, and it’s hard for small companies… but it’s especially hard for small companies people haven’t quite heard of, and they can use all the help they can get because talking about impact and ping pong tables just doesn’t cut it anymore.

At interviewing.io, making small companies shine is core to our business, and I’ll share some of what we’ve learned about branding as a result. I’ll also walk you through 3 simple exercises you can do to help you craft and distill your employer brand… in a way that highlights what’s actually special about you and will make great engineers excited to learn more.

If I have my druthers, this will be one of three posts on brand. This first one will focus on how to craft your story, the second will show you how to use that story to write great job descriptions, and the final one will focus on how to take the story you’ve created and get it out into the world in front of the kinds of people you’d want to hire.

So, onto the first post!

What is employer brand, and why does it matter?

Companies like Google have built a great brand, and this brand is largely what makes it possible for them to hire thousands of good engineers every year — when you think of Google (or, more recently, Microsoft!), you might think of an army of competent, first-principles thinkers building products people love… or maybe a niche group of engineers working on moonshots that will change the world.

This is employer brand. Put simply, it’s what comes to mind when prospective candidates think about your company. Employer brand encompasses every aspect of your narrative: whether people use/like your product, the problem you’re trying to solve, your founding story, your mission, your culture, and, generally, what it’s like to work for you. Put another way, all of these attributes (and others besides!) coalesce into the visceral feeling people get when they imagine working for you.

Brand is the single most important thing for getting candidates in the door — even if you have a stellar personal network, in most cases, that’ll only last you until your first 30 hires or so — after that, your networks begin to sort of converge on one another. Even so, despite how important brand is for hiring, building it is one of the psychologically hardest things to do at the beginning of your journey because the opportunity cost of spending your time on ANYTHING is staggering, and it’s really hard to justify writing blog posts and hosting events and speaking at conferences when you have to build product and make individual hires and do 50 kabillion other things.

But, until you build a brand and get it out in the world, you’re going to be hacking through the jungle with a proverbial machete, making hires one by one, trying to charm each one by telling them your story. And once you have a brand, all of a sudden, sourcing is going to feel really, really different (just like it feels when you’ve found product market fit!).

Over time, if you continue to tell your own story, you, too, will see how much easier sourcing and hiring can be. So, let’s talk about how to craft the right narrative and then proudly shout it from the rooftops.

Why interviewing.io knows about branding

I mentioned earlier that a lot of what we do at interviewing.io is help our customers put their best foot forward and present themselves to engineers in a way that’s authentic and compelling. I’ll show you some examples of good copy in a moment, but here’s a bit of our flow to put it in context.
When engineers do really well in practice, they unlock our jobs portal, where they can see a list of companies and roles, like so:

As you can see, companies simply describe who they are and what they do, and top-performing engineers just book real interviews with one click. Because our goal is to remove obstacles from engineers talking to other engineers, we don’t have talent managers or talent advocates or, as they’re often called, recruiters, on staff to talk to our candidates and try to convince them to interview at a certain company. As a result, we often find ourselves coaching companies on how to present themselves, given limited time and space. We do work with quite a few companies whose brands are household names, but a good chunk of our customers are smaller, fast-growing companies. What’s interesting is that while, on our platform, a household name can have 7X the engagement of a company no one’s heard of, companies no one’s heard of that have exceptional brand narratives aren’t far behind burgeoning unicorns! (We define engagement as the frequency with which candidates who look at our jobs portal then choose to visit that employer’s page.)

Candidate engagement as a function of employer brand

And that’s why having a brand story matters… and why we’re equipped to talk about it at some depth.

What constitutes brand

Below are some attributes that can make up a brand story. As you look at the list, think about what each of these corresponds to in your company, and then, think about which of these are the most unique to you. For instance, every early-stage startup can say they have a culture characterized by autonomy and the potential for huge impact. It’s become a trope that doesn’t differentiate anyone in any way anymore and is therefore probably not worth emphasizing. On the other hand, if you are solving a problem that a lot of your candidates happen to have or if you use a really cool tech stack that attracts some niche community, that’s really special and worth emphasizing.

  • Your product and whether people have heard of it/like it
  • Your growth numbers if they’re impressive
  • Your tech stack and how flashy and cool it is
  • Your founding story and mission… are you working on a problem that people care about personally? If not, are you disrupting some outdated, inefficient way to do things?
  • How hard are the problems you’re solving? Both technically and otherwise?
  • How elite is your team?
  • What is it like to work for you, both with regard to overall culture and then eng culture specifically?1
    • Overall culture:
      • Are you known for kindness/work-life balance? Or grueling hours? (Either can be good depending on whom you want to attract.)
      • What portion of your employees have gone on to found startups?
      • Do you have a lot of organization/structure or are you chaos pirates?
    • Eng culture:
      • Are you more pragmatic/hacker-y vs. academic?
      • Do you subscribe to or actively reject any particular development methodologies (e.g. agile)?
      • Do you tend to roll your own solutions in house or do you try to use 3rd party tools wherever possible?
      • Do you open source parts of your code? Or regularly contribute to open source?

How to craft your story in 3 easy exercises!

There isn’t a single good formula for what to focus on or highlight when crafting this story, but there are a few exercises that we’ve seen be effective. You can do them in the order below, and by the end, you should have a concise, authentic, pithy narrative that you can shout proudly to the world.

We’ve found that it makes sense to do these exercises in the order below. First, you’re going to go with your gut and craft a high-level story. Then you’ll embellish it with details that make you unique and with anecdotes from your employees. And finally, you’ll edit it down into something crisp. As you work, you can use the list from the “What constitutes brand” section above as a reference. Note that, in general, your story plus details shouldn’t have more than 3 distinct ideas total or it’ll start to feel a bit all over the place.

Exercise 1 – The story itself

Imagine you have a smart, technical friend you respect but who doesn’t know anything about your company’s space. Quick, how would you describe what your company does and why it matters to them? Write it down (and target 5-6 sentences… but don’t worry too much about editing it yet… we’ll do that later).

If you’re feeling a bit stuck, here are some questions to get you started — think about how you might answer if your friend were asking each of these:

  • Why does the company exist/what does your product do, and why does that matter?
  • Why are you going to succeed where others have failed?
  • Why does the company matter to you personally?
  • What do you know that no one else does about your space?
  • What is your company doing that no one else is doing, and why does that matter?

As you do this exercise, note that when talking to your friend, you dispense with flowery language and explain things succinctly and clearly, in simple terms! And that’s the point — the audience you’re selling to is no different from your friend and probably shares the same cynicism about disingenuous branding.

Exercise 2 – The unique embellishments

Once you have the story you came up with above, which will likely be at a pretty high level, it’s time to drill down into the details that make you special. These details will likely be 2nd order, in the sense that they won’t be as broad or all-encompassing as the attributes that came to mind in the first exercise, but they might still be special and unique and worth noting.

Some examples of unique embellishments can be:

  • Your tech stack
    • Do you use any cutting-edge programming languages that one might not often see being used in production? If so, it might be a bit polarizing but attract the community around that language. More on the value of polarization when it comes to unique embellishments below.
  • Unfettered access to some type of user/a specific group you care about that your product impacts
    • Do you build products for Hollywood? Or for VCs? Or for schools? Some portion of your candidates, depending on their interests outside of work or their future career ambitions, are going to be really excited that they’ll get more direct access to users who operate in these worlds.
  • Unique lifestyle/work culture stuff like working remotely or pair programming
    • E.g. 37Signals and Pivotal respectively
  • Access to a ton of data/ability to work on massively distributed systems
    • E.g. even in its early days, Twitch had almost as much ingest as YouTube, and this was a meaningful selling point to candidates who wanted to work at scale but didn’t necessarily want to work at FAANG

The surprising value of polarization

Today’s job seeker is in equal parts jaded and savvy, and we’re currently in a labor market climate where talent has the power, which makes branding especially important. By now, engineers have been told all manner of generalities about how much impact they’re going to have if they join a small startup and how whatever you’re working on is going to change the world… to the point where these things have become cliches that shows like Silicon Valley deride with impunity. To avoid cliches like this, think about what TRULY makes you special, and even if it’s a bit polarizing, own it. It’s your story, and the more honest you are about who you are and what you do, the more trust you’ll build with your audience and the more they’ll want to engage.

Another way to say this is that the most memorable stories might be shrouded in a bit of controversy. That’s not to say that you have to be controversial or contrive it when it isn’t there, but if you do operate in a space or have some aspect to your culture or tech stack that not everyone agrees with, you might find that the resulting self-selection among candidates can work to your advantage. Below are some examples of polarizing narratives.

  • Your work style. Some companies really value work-life balance, whereas others exalt burning the midnight oil. Some run on chaos and some take a more orderly approach. Some work by the book, and some choose more of a “bulldoze your way to success and ask forgiveness rather than permission” approach. An example of the latter is Uber — for a long time, their culture was known for a take-no-prisoners approach to getting things done, and this approach has a certain type of appeal for the right people.
  • Your tech stack. Certainly choosing your tech stack is, first and foremost, an engineering decision, but this decision has some implications for hiring, and choosing a fringe or niche stack or language can be a great way to attract the entire community around it. The more culty a language, the more fiercely passionate its acolytes will be about working for you (e.g. Rust… though by the time this guide comes out it might be something else!). Note that it doesn’t have to be thaaaat fringe of a language choice as long as there’s a tight-knit community around it, e.g. C#.
  • Your engineering culture. Do you subscribe to any particular type of methodology that might be controversial, e.g. are you super into TDD? Are you adamant about rolling all your own everything?

Note that there is no right or wrong here — to loosely paraphrase Tolstoy, every startup is broken in its own way, and one saying we’ve heard is that, especially during the early days, the only thing you can do wrong is not own who you are — if you misrepresent how you work or make decisions, you’ll find yourself in one of two regrettable positions: either your hires will leave well before their time, or you’ll have a bunch of people marching in different directions or completely paralyzed and unable to choose the right course of action on their own.

Exercise 3 – Your employees’ unique perspective

As your team grows beyond you, you will find that your employees’ reasons for working for you are likely different than the answers to the questions above. Talking to them (or, if you don’t want to put them on the spot, having another team member do so) can surface gold for your narrative. When I was a recruiter, one of the most useful exercises I did was asking my client to introduce me to a very specific handful of engineers. In particular, I was looking for people who 1) didn’t come to the company through personal referrals and 2) had a lot of good offers during their last job search. Why this mix of people? Because they’re the ones who, despite no personal connection to the company and despite having other options, actively chose to work for you! You’d be surprised what stories I heard, and they’re rarely just about the company mission. For instance, one candidate I spoke to was really excited about the chance to closely interact with investors because he wanted to start his own company one day. Another was stoked at the chance to use Go in production.

Sometimes you’ll be surprised by what you’ll hear because the people working at your company might be there for very different reasons than you, but these anecdotes help flesh out your narrative and make it feel a bit more personal and real.

Once you have a few choice tidbits from employees, ask yourself whether each one is somehow charming or unusual and whether it’s a reason that a lot of people would find compelling about your company. If it’s all of these things, it should likely make it into your narrative. If it’s not particularly original (e.g. a short commute), it may not be worth calling out in your primary narrative, but it’s well worth repeating once you actually interact with candidates.

The finished product

So, what should the finished product look like? At a minimum, it’ll be some concise, compelling copy that you can use in your job descriptions. Hopefully, though, it’s more than that. Hopefully it becomes a consistent refrain you and your team use during calls, interviews, maybe investor pitches… a way to highlight all the things you’re most proud of about your company and the things that make you special… without having to reinvent the wheel every time.

Is brand the be-all and end-all of hiring? Not quite.

In closing, I’d like to leave you with a word or two of encouragement. Sure, as you saw in this post, brand matters. Having a great story will get you somewhere, but it won’t get you everywhere with candidates, and the truth is that the more established you are, the more candidates will come to you. But… there’s one piece of data we found in our interviewing and hiring adventures that flies in the face of the idea that brand completely dictates your hiring destiny.

When we looked at how often candidates wanted to work at companies after interviewing there as a function of brand strength, its impact was not statistically significant. In other words, we found that brand strength didn’t matter at all when it came to either whether the candidate wanted to move forward or how excited the candidate was to work at the company. This was a bit surprising, so I decided to dig deeper. Maybe brand strength doesn’t matter overall but matters when the interviewer or the questions they asked aren’t highly rated? In other words, can brand buttress less-than-stellar interviewers? Not so, according to our data. Brand didn’t matter even when you corrected for interviewer quality. In fact, of the top 10 best-rated companies on our platform, half have no brand to speak of, 3 are mid-sized YC companies that command respect in Bay Area circles but are definitely not universally recognizable, and only 2 have anything approaching household name status.

So, what’s the takeaway here? Maybe the most realistic thing we can say is that while brand likely matters a lot for getting candidates in the door, once they’re in, no matter how well-branded you are, they’re yours to lose.

So, take heart.

Portions of this post will also appear in part in an upcoming, comprehensive Guide to Technical Recruiting and Hiring published by Holloway (where you can sign up if you’d like to read, review, or contribute to it).

1The 2019 Stack Overflow Developer Survey recently came out, and it turns out that in the US the most important thing for engineers is office/company culture… which realistically refers to the eng team culture because that’s where engineers will spend most of their time. Anything you can do to call yours out (assuming, well, that it’s good) is going to be a win.


You probably don’t factor in engineering time when calculating cost per hire. Here’s why you really should.

Posted on April 24th, 2019.

Whether you’re a recruiter yourself or an engineer who’s involved in hiring, you’ve probably heard of the following two recruiting-related metrics: time to hire and cost per hire. Indeed, these are THE two metrics that any self-respecting recruiting team will track. Time to hire is important because it lets you plan — if a given role has historically taken 3 months to fill, you’re going to act differently when you need to fill it again than if it takes 2 weeks. And, traditionally, cost per hire has been a planning tool as well — if you’re setting recruiting budgets for next year and have a headcount in mind, seeing what recruiting spent last year is super helpful.

But, with cost per hire (or CPH, as I’ll refer to it from now on in this post) in particular, there’s a problem. CPH is typically blended across ALL your hiring channels and is confined to recruiting spend alone. Computing one holistic CPH and confining it to just the recruiting team’s spend hides problems with your funnel and doesn’t help compare the quality of all your various candidate sources. And, most importantly, it completely overlooks arguably the most important thing of all — how much time your team is actually spending on hiring. Drilling down further, engineering time, specifically, despite being one of the most expensive resources, isn’t usually measured as part of the overall cost per hire. Rather, it’s generally written off as part of the cost of doing business. The irony, of course, is that a typical interview process puts the recruiter call at the very beginning of the process precisely to save eng time, but if we don’t measure and quantify the eng time spent, then we can’t really save it.

For what it’s worth, the Twitterverse (my followers are something like 50/50 engineers and recruiters) seems to agree. Here are the results (and some associated comments) of a poll I conducted on this very issue:

And yet, most of us don’t do it. Why? Is it because it doesn’t measure the things recruiters care about? Or is it because it’s hard? Or is it because we can’t change anything, so why bother? After all, engineers need to do interviews, both phone screens and onsites, and we already try to shield them as much as possible by having candidates chat with recruiters or do coding challenges first, so what else can you do?

If you’d like to skip straight to how to compute a better, more inclusive CPH, you can skip down to our handy spreadsheet. Otherwise read on!

I’ve worked as both an engineer and an in-house recruiter before founding interviewing.io, so I have the good fortune of having seen the limitations of measuring CPH, from both sides of the table. As such, in this post, I’ll throw out two ways that we can make the cost per hire calculation more useful — by including eng time and by breaking it out by candidate source — and try to quantify exactly why these improvements are impactful… while building better rapport between recruiting and eng (where, real talk, relationships can be somewhat strained). But first, let’s talk about how CPH is typically calculated.

How is CPH typically calculated, and why does it omit eng time?

As I called out above, the primary purpose of calculating cost per hire is to plan the recruiting department’s budget for the next cycle. With that in mind, below is the formula that you’ll find if you google how to calculate cost per hire (pulled from Workable):

To figure out your CPH, you add up all the external and internal costs incurred during a recruiting cycle and divide by the number of hires.
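In formula form:

Cost per hire = (total external costs + total internal costs) / (# hires made that cycle)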

“External” refers to any money paid out to third parties. Examples include job boards, tools (e.g. sourcing, assessment, your ATS), agency fees, candidate travel and lodging, and recruiting events/career fairs.

“Internal” refers to any money you spend within your company: recruiting team salaries, as well as any employee referral bonuses paid out over the course of the last cycle.

Note that internal costs don’t include eng salaries, as engineering and recruiting teams typically draw from different budgets. Hiring stuff is the domain of the recruiting team, and they pay for it out of their pockets… and engineers pay for… engineering stuff.

What’s problematic is that, while being called “cost per hire,” this metric actually tells us what recruiting spends rather than what’s actually being spent as a whole. While tracking recruiting spend makes sense for budget planning, this metric, because of its increasingly inaccurate name, often gets pulled into something it ironically wasn’t intended for: figuring out how much the company is actually spending to make hires.

Why does factoring in engineering time matter?

As you saw above, the way we typically compute CPH is inaccurate because it doesn’t factor in any time or resource expenditure outside the recruiting team (with eng being the biggest one). But does engineering time really matter?

Yes, it matters a lot, for the following three reasons:

  1. Way more eng time than recruiting time goes into hiring (as you’ll see in this post!)
  2. Eng time is more expensive
  3. Eng time expenditure can vary wildly by channel

To establish that these things are (probably) true, let’s look at a typical eng hiring funnel.1 For the purposes of this exercise, we’ll start the funnel at the recruiter screen and assume that the costs of sourcing candidates are fixed.2

The green arrows are conversion rates between each step (e.g. 50% of people who get offers accept and get hired). The small gray text at the bottom of each box is how long that step takes for an engineer or recruiter (or both, in the case of an onsite). And the black number is how many times that needs to happen to ultimately make 1 hire, based on the green-arrow conversion rates.

So, with that in mind, let’s see how much time both eng and recruiting need to spend to make 1 hire and how much that time costs. Note that I’m assuming $100/hour is a decent approximation for recruiting comp and $150/hour is a decent approximation for eng comp.

Is eng time spent on recruiting really that costly?

Based on the funnel above, here’s the breakdown of time spent by both engineering and recruiting to make 1 hire. The parentheticals next to each line of time spent are based on how long that step takes times the number of times it needs to happen.

RECRUITING – 15 total hours
10 hours of recruiter screens (20 screens needed * 30 min per screen)
4 hours of onsites (4 onsites needed * 1 hour per onsite)
1 hour of offers (2 offer calls needed * 30 min per offer call)

To make 1 hire, it takes 15 recruiting hours or $1500.

ENGINEERING – 40 total hours
16 hours of phone screens (16 screens needed * 1 hour per screen)
24 hours of onsites (4 onsites needed * 6 hours per onsite)

For 1 hire, that’s a total of 40 eng hours, and on the face of it, it’s $6,000 of engineering time, but there is one more subtle multiplier on eng time that doesn’t apply to recruiting time that we need to factor in. Every time you interrupt an engineer from their primary job, which is solving problems with code, it takes time to refocus and get back into it. If you’re an engineer, you know this deep in your bones. And if you’re not, interruptions are very likely something you’ve heard your engineering friends decry… because they’re so painful and detrimental to continued productivity. Back when I was writing code on a regular basis, it would take me 15 minutes of staring at my IDE (or, if I’m honest, occasionally reading Hacker News or Reddit) to let my brain ease back into doing work after coming back from an interview. And it would take me 15 minutes before an interview to read a candidate’s resume and get in the mindset of whatever coding or design question I was going to ask. I expect my time windows are pretty typical, so it basically ends up being a half hour of ramping up and back down for every hour spent interviewing.

Therefore, with ramp-up and ramp-down time in mind, it’s more like $9,000 in eng hours.3

Ultimately, for one hire, we’re paying a total of $10,500, but eng incurs 6X the cost that recruiting does during the hiring process.
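If you want to check our work or plug in your own numbers, here’s a minimal Python sketch of the funnel math above. It uses the example funnel’s conversion rates (recruiter screen to TPS 80%, TPS to onsite 25%, onsite to offer 50%, offer to hire 50%) along with the wage and interruption-overhead approximations from this post; treat it as a sketch, not a definitive calculator.

RECRUITER_RATE = 100  # $/hour approximation for recruiting comp
ENG_RATE = 150        # $/hour approximation for eng comp
ENG_OVERHEAD = 1.5    # +30 min of ramp-up/down per eng interview hour

def cost_per_hire(tps_to_onsite=0.25, offer_accept=0.50,
                  screen_to_tps=0.80, onsite_to_offer=0.50):
    offers = 1 / offer_accept            # offer calls per hire
    onsites = offers / onsite_to_offer   # onsites per hire
    tps = onsites / tps_to_onsite        # tech phone screens per hire
    screens = tps / screen_to_tps        # recruiter screens per hire

    recruiting_hours = screens * 0.5 + onsites * 1 + offers * 0.5
    eng_hours = tps * 1 + onsites * 6
    total_cost = (recruiting_hours * RECRUITER_RATE
                  + eng_hours * ENG_OVERHEAD * ENG_RATE)
    return recruiting_hours, eng_hours, total_cost

print(cost_per_hire())  # -> (15.0, 40.0, 10500.0), matching the numbers above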

Why does breaking out cost per hire by source matter?

So, hopefully, I’ve convinced you that engineering time spent on hiring matters and that it’s the biggest cost you incur. But, if there’s nothing we can do to change it, and it’s just the cost of doing business, then why factor it into CPH calculations? It turns out that eng time spent IS a lever you can pull, and its impact becomes clear when you think about cost per hire by candidate source.

To make that more concrete, let’s take a look at 2 examples. In both cases, we’ll pretend that one of our candidate sources has a different conversion rate than the overall rate at some step in the funnel. Then we’ll change the conversion rate at that step, try to guess what the financial implications are… and then actually calculate them. You might be surprised by the results.

What happens when you increase TPS to onsite conversion to 50%?

As you can see in the funnel above, a decent TPS (technical phone screen) to onsite conversion rate is 25%. Let’s say one of your sources could double that to 50% (by doing more extensive top-of-funnel filtering, let’s say). What do you think this will do to cost per hire?

In this model, we’re spending a total of 10 recruiting hours (worth $1000) and 32 eng hours (worth $7200).4 Unlike in the first example, we’re now paying a total of $8200 to make a hire.

In this case, you’ve reduced your recruiting time spent by 30% and your eng time spent by 20%, ultimately saving $2300 per hire. If one of your sources can get you this kind of efficiency gain, you probably want to invest more resources into it. And though doubling conversion from tech screen to onsite sounds great and is perhaps something you already knew about your source, without computing the cost per hire for this channel, it’s not intuitively clear just how much money a funnel improvement can save you, end to end.
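Plugging the doubled conversion rate into the sketch from the previous section reproduces these numbers:

print(cost_per_hire(tps_to_onsite=0.50))  # -> (10.0, 32.0, 8200.0)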

What happens when you cut your offer acceptance rate in half?

Another possibility is that one of your sources does pretty well when it comes to candidate quality all the way to offer, but for some reason, those candidates are twice as hard to close. In this scenario, you double both the eng and recruiting time expenditure and ultimately pay an extra $7500 per hire for this source (one you’ll likely want to deallocate resources from here on out).5

In either of the examples above, until you break out CPH by source and see exactly what each is costing you, it’s a lot harder to figure out how to optimize your spend.

How to actually measure cost per hire (and include eng time of course!)

The usual way to calculate cost per hire is definitely useful for setting recruiting budget, as we discussed above, but if you want to figure out how much your whole company is actually spending on hiring, you need to factor in the most expensive piece — engineering time.

To do this, we propose a different metric, one that’s based on time spent by your team rather than overall salaries and fixed costs. Let’s call it “cost per hire prime” or CPH prime.

CPH prime doesn’t factor in fixed costs like salaries or events (you can still track those using the formula above)… but it is going to be instrumental in helping you get a handle on what your spend actually looks like and will help you compare different channels.

To make your life easier, we’ve created a handy spreadsheet for you to copy and then fill in your numbers, like so:

As you can see, once you fill in the highlighted cells with your own conversion numbers (and, optionally, your hourly wages if they differ much from our guesses), we’ll compute CPH prime for you.

And because we’re a business and want you to hire through us, we’ve included the average savings for companies hiring through our platform. We provide two big value-adds: we can pretty drastically improve your TPS to onsite conversion (about 65% of our candidates pass the tech screen at companies, on average), and from there, our candidates get offers and accept them at the same rate as you’d see in your regular funnel.

Closing thoughts on building bridges between eng and recruiting

So, why does being cognizant of eng time in your CPH calculations matter? I’ve already kind of beaten into the ground that it’s the biggest cost sink. However, there’s another, more noble reason to care about eng time. In my career, having sat on all different sides of the table, I’ve noticed one unfortunate, inescapable truth: engineering and recruiting teams are simply not aligned.

Engineers tend to harbor some resentment toward recruiters because recruiters are the arbiters of how eng spends its time when it comes to hiring, often without a set of clear metrics or goals that help protect that time.

Recruiters, in turn, often feel some amount of resentment toward engineers, who tend to resist interruptions, resist putting in the time to provide meaningful feedback about candidates so that recruiting can get better, and resist changes to the process.

In our humble opinion, much of the resentment on both sides could be cured by measuring recruiting and engineering costs together in a specific, actionable way. Recruiters tend to hold the cards when it comes to hiring practices, so we’d love to see them take the lead and reach across the aisle by proactively factoring in eng time spent during hiring, ultimately incorporating recruiting and eng costs into one metric that matters. Once that’s in place, recruiting can use the data they gather to make better decisions about how to use eng time and, in the process, rebuild much of the rapport and love that’s been lost between the two departments.

1We’re basing these numbers on a mix of ATS reporting (Lever’s recruiting metrics report in particular) and what we’ve heard from our customers.

2We’re assuming sourcing costs are fixed for purposes of simplicity and because this post is largely about the importance of eng time factored into the funnel. Of course, if you have channels that reduce sourcing time significantly, you’ll want to weigh that when deciding their efficacy.

3Really though, the value of an hour of work for an engineer is intangible and much higher than an hourly wage. There ARE inefficiencies and overhead to having a larger staff, not every hour is effective, and most likely it’s your best people who are conducting interviews. The reality is that the money spent on salaries is probably only a fraction of the true cost to the company, particularly for engineers (as opposed to recruiters).

4Here’s us showing our work in figuring out how much recruiting and eng time it takes to make a hire when your TPS to onsite conversion rate is 50%:
RECRUITING – 10 total hours or $1000
5 hours of recruiter screens (10 screens needed * 30 min per screen)
4 hours of onsites (4 onsites needed * 1 hour per onsite)
1 hour of offers (2 offer calls needed * 30 min per offer call)
ENGINEERING – 32 total hours or $7200
8 hours of phone screens (8 screens needed * 1 hour per screen)
24 hours of onsites (4 onsites needed * 6 hours per onsite)

5Here’s us showing our work in figuring out how much recruiting and eng time it takes to make a hire when you cut your offer acceptance rate in half:
RECRUITING – 30 total hours or $3000
20 hours of recruiter screens (40 screens needed * 30 min per screen)
8 hours of onsites (8 onsites needed * 1 hour per onsite)
2 hours of offers (4 offer calls needed * 30 min per offer call)
ENGINEERING – 80 total hours or $18,000
32 hours of phone screens (32 screens needed * 1 hour per screen)
48 hours of onsites (8 onsites needed * 6 hours per onsite)


Can fake names create bias? An exploration into interviewing.io’s random name generator

Posted on March 7th, 2019.

Hello everyone, my name is Atomic Artichoke, and I’m the newest employee of the interviewing.io team, having joined a couple months ago as a Data Scientist.

Atomic Artichoke isn’t my real name, of course. That’s the pseudonym the interviewing.io platform gave me, right before I took my final interview with the company. If you’ve never used interviewing.io before (and hey, if you haven’t already, why not sign up now?), it’s a platform where you can practice technical interviewing anonymously with experienced engineers (and do real job interviews anonymously too).

On signup, interviewing.io creates an anonymous handle for you

When it’s time to interview, you and your partner meet in a collaborative coding environment with voice, text chat, and a whiteboard (check out recordings of real interviews to see this process in action). During interviews, instead of your name, your partner will see your pseudonym, like so:

During interviews, you are identified by your anonymized pseudonym

In my opinion, “Atomic Artichoke” is a pretty cool name. It sounds like a Teenage Mutant Ninja Turtles villain, and alliterative phrases are always cool. However, I had some reservations about the handle, because the pseudonym represented me in ways I didn’t identify with. I don’t know how to eat or cook an artichoke, I never really understood atoms much, and I possess no mutant superpowers.

But I wondered, how did the interviewer perceive me? Did this person think “Atomic Artichoke” was a cool name? If so, did that name influence his or her perception of me in any way? More importantly, did my pseudonym have any influence in me getting hired? If I had a different, less cool name, would I have gotten this job?

I know, it’s a silly question. I’d like to think I was hired because of my skills, but who really knows? I was curious, so I invested a few days to investigate.

What we already know about names in the hiring process

You might be asking, “Why does interviewing.io have pseudonyms, anyway?” Anonymity. We want candidates to be assessed on their actual skills, not on proxies of skill like the colleges they’ve attended, the notoriety of their social circles, or prior companies they’ve worked at. If a hiring manager knows a person’s name and knows how to use the Internet, it’s easy to find this information.

I’m not the first to wonder about names and hiring. Plenty of academic literature exists exploring the impact of name choice on various life outcomes. I’ll briefly touch on a handful of those perspectives.

As you can see, academic opinions differ. However, in the case that name-based bias actually exists, maybe we can implement a cheap-enough solution to eliminate the bias completely. Randomly generated pseudonyms fit that bill nicely.

But as I wondered before, maybe the pseudonym generator creates a different kind of bias, leaving us in much the same place that using real names leaves us. I first needed to understand how pseudonyms get generated, so I dug into some code.

Exploring code

After dusting off what little JavaScript knowledge I acquired 6 years ago, I found the 13 lines of code that generate pseudonyms. Mechanics-wise, it’s simple: there are two lists, one containing adjectives and one containing nouns. The generator randomly pulls one word from each list and mashes them together, adjective first, with a space in between (there’s a rough sketch of this logic after the examples below). The generator outputs some sweet-sounding pseudonyms like:

  • Serpentine Gyroscope
  • Moldy Parallelogram
  • Frumious Slide Rule
  • Supersonic Llama

But they can also come up with less memorable, more commonplace, and more boring phrases like:

  • Ice Snow
  • Warm Wind
  • Red Egg1
  • Infinite Avalanche
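The real generator is JavaScript, but the mechanics are easy to re-create in a few lines of Python. The word lists here are short stand-ins for illustration, not our actual lists:

    import random

    # Stand-in word lists. The real lists are much longer and, as discussed
    # below, reflect the interests of the people who wrote them.
    ADJECTIVES = ["Atomic", "Serpentine", "Moldy", "Frumious", "Supersonic", "Warm"]
    NOUNS = ["Artichoke", "Gyroscope", "Parallelogram", "Llama", "Wind", "Egg"]

    def generate_pseudonym():
        """Pull one word from each list and mash them together, adjective first."""
        return random.choice(ADJECTIVES) + " " + random.choice(NOUNS)

    print(generate_pseudonym())  # e.g. "Supersonic Artichoke"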

After running through a few example pseudonyms, anecdotally I felt the first list was more attractive to me than the second. It sparked more joy in me, one could say. I just couldn’t articulate why.

That’s when I noticed that certain themes kept recurring. For example, there were multiple Alice in Wonderland references, a bunch of animals, and many types of foods listed. At first glance the chosen words seemed odd. But after getting to know my co-workers better, the list of words began to make a lot more sense.

The co-worker sitting across from me is a huge Alice in Wonderland fan. Our founders seem to love animals, since they bring their dogs to work most days. Finally, food and restaurant discussions fuel most lunchtime arguments. Just in my first month, I had heard more discussion about chicken mole and Olive Garden than I ever had in my life.

While it’s true the pseudonym generator chooses words randomly, the choice of which words get onto the lists isn’t necessarily random. If anything, the words reflect the interests of the people who built the application. Might it be possible that the first list appealed to me because those names reference math concepts, and I happen to like math-y things?

The hypothesis

This insight helped me craft my hypothesis more concretely: all else equal, do some candidates receive better interview ratings because interviewers happen to associate positively with users whose pseudonyms reference the interviewers’ personal interests?

This hypothesis rests upon the assumption that people are drawn to stuff that’s similar to themselves. This seems intuitive: when individuals share common interests or backgrounds with others, chances are they’ll like each other. Therefore, is it possible that interviewers like certain candidates more because they find commonality with them, even though we manufactured that commonality? And did that likability translate to better interview ratings?

To test this, I categorized users into one of the following 6 categories based on the noun part of their pseudonym, which I’ll call the Noun Category going forward:

  • Animal
  • Fantasy
  • Food
  • History
  • Object
  • Science

These broad categories aimed to differentiate among interest areas that might appeal differently to different interviewers. Among these 6 groups, I wanted to observe differences in interview performance. And since the pseudonym generator assigns names randomly, we would expect to find no difference.

To proxy for interview performance, I used the interviewer’s “Would You Hire” response about the interviewee, which is the first item on the interviewer’s post-interview questionnaire.

An interviewer’s rubric after an interviewing.io interview

These two pieces of data led to a clear, testable null hypothesis: there should exist no relationship between Noun Category and the Would You Hire response. If we reject this null hypothesis, we would have evidence suggesting our pseudonyms can impact hiring decisions.

Data analysis and interpretation

I pulled data on a sample of a few thousand interviewing.io candidates’ first interview on our platform, and performed a Chi-Squared test against the observed frequencies of the 6 “Noun Categories” and 2 “Would You Hire” interviewer responses. Each cell of the 6 x 2 matrix contained at least 40 observations.
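If you want to run the same kind of test on your own data, it’s a single SciPy call. The contingency table below is made up to match the shape of ours (6 Noun Categories by 2 Would You Hire responses, with at least 40 observations per cell), not our actual counts:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical (No, Yes) "Would You Hire" counts per Noun Category.
    observed = np.array([
        [120,  80],  # Animal
        [100,  95],  # Fantasy
        [115,  85],  # Food
        [130,  70],  # History
        [118,  82],  # Object
        [110,  90],  # Science
    ])

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    # p < 0.05 -> reject the null hypothesis that Noun Category and
    # "Would You Hire" are independent.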

Below are the percentages of candidates who received a Yes from their interviewer, broken out by Noun Category. While most of the categories clumped around a similar pass rate, the History group seemed to under-perform while the Fantasy group over-performed.

“Would You Hire” pass rate, by Noun Category

The Chi-Square test rejected the null hypothesis at a 5% significance level.

These results suggest a relationship might exist between Noun Category and an interviewer’s Would You Hire response. This, again, should not occur, because a candidate’s Noun Category was randomly assigned!2

What next?

While this analysis doesn’t predict outcomes for specific individuals, the result suggests it isn’t totally crazy to believe I may have gotten lucky on my interview. Maybe I don’t suffer from imposter syndrome; maybe I am an imposter. How depressing.

So what now? Fortunately (or unfortunately) for my new company, I can suggest potential next steps if we want to eliminate this bias.

One solution might be to pander to an interviewer’s interests. We could randomly generate a new pseudonym for candidates every time they meet a different interviewer, ensuring that pseudonym creates positive associations with the interviewer. Similarly, we could generate more pseudonyms referencing Lord of the Rings and Warcraft, if we know our interviewer pool tends to be fantasy-inclined.

An alternative solution might be to give candidates pseudonyms with no meaning at all. For example, we could generate random strings, similar to what password managers generate for you. This would eliminate any real world associations, but we’d lose some whimsy and human readability that the current pseudonyms provide.

Yet another alternative solution could be to do more analysis before acting. The analysis didn’t quantify the magnitude of the bias, so we could construct a new sample to test a more specific hypothesis about bias size. It’s possible the practical impact of the bias isn’t huge, and we should focus our energy elsewhere.

Zooming out

On the face of it, this pseudonym bias seems trivial, and in the universe of all biases that could exist, that’s probably true. However, it makes me wonder how many other hidden biases might exist elsewhere in life.

I think that’s why I was hired. I’m obsessed with bias. Though I’ll be doing normal business-y Data Scientist stuff, my more interesting responsibilities will be poking at all aspects of the hiring market and examining the myriad of factors, mechanisms, and individuals that make the hiring market function, and perhaps not function effectively for some people.

Going a step further than identifying hiring biases, I’d like to shift discussions toward action. It’s great that the tech industry talks about diversity more, but I think we can facilitate more discussions around which concrete actions are being taken, and whether those actions actually achieve our goals, whatever those goals may be.

I think it all starts with being introspective about ourselves, and investigating whether something as innocuous as a randomly generated phrase could ever matter.

Atomic Artichoke
(Ken Pascual)

1This is the shortest pseudonym possible on interviewing.io.

2This is not entirely true. Users can re-generate a random pseudonym as often as they want, meaning a user can effectively choose their name if they re-generate enough times. However, there’s no evidence this happens often: we found no significant difference between the observed distribution of Noun Categories and the theoretical randomized one.


There is a real connection between technical interview performance and salary. Here’s the data.

Posted on February 26th, 2019.

At the end of the day, money is a huge driver for the decisions we make about what jobs to go after. In the past, we’ve written about how to negotiate your salary, and there are a lot of labor statistics and reports out there looking at salaries in the tech industry as a whole. But as with many things in eng hiring, there’s very little concrete data on whether technical interview performance plays a role in compensation offers.

So we set out to gather the data and asked our users who had gone on to successfully get jobs after using our platform to share their salary info. With our unique dataset of real coding interviews, we could ask questions like:

  • Does interview performance matter when it comes to compensation packages?
  • Do engineers who prioritize other parts of a role over compensation (e.g. values alignment) end up with lower salaries?
  • What else seems to matter in getting a higher salary?

To be clear, this is an exploration of past average interview performance and its connection with current salary, versus looking at how someone did in an interview and then what salary they got when they took that specific job. In other words, we haven’t paired job interviews with the salary for that same job. We believe that looking at these more general measures is more informative than trying to match single interviews and job offers, given how volatile individual interview performance can be. But our interviewing platform allowed us to look at performance across multiple interviews for respondents, which gave us more stability and more data.

The setup

On the interviewing.io platform, people can practice technical interviews online and anonymously, with real engineers on the other side.

When an interviewer and an interviewee match on our platform, they meet in a collaborative coding environment with voice, text chat, and a whiteboard and jump right into a technical question. Check out our recordings page to see this process in action.

Interview questions on the platform tend to fall into the category of what you’d encounter at a phone screen for a back-end software engineering role, and interviewers typically come from top companies like Google, Facebook, Dropbox, Airbnb, and more.

After every interview, interviewers rate interviewees on a few different dimensions: technical skills, communication skills, and problem solving skills. These each get rated on a scale of 1 to 4, where 1 is “poor” and 4 is “amazing!”. On our platform, a score of 3 or above has generally meant that the person was good enough to move forward. You can see what our feedback form looks like below:

With this in mind, we surveyed interviewing.io users about their current roles, including salary, bonuses, and how satisfied they felt in their job, and then tied their comp back to how they did in interviews on our platform, to see whether performance matters, and if so, how much. We ended up with responses from 494 engineers1, and because compensation packages are so complex and vary from company to company, we analyzed the data in several different ways, looking at annual salary numbers, bonuses, and equity.

The results

We looked at the relationships between interview performance (technical skills, communication ability, and problem solving ability) and the following: base salary, bonuses, and equity. In all cases, we corrected for location (being in the Bay Area means a higher salary) and experience (senior engineers make senior salaries), and where we could, we corrected for company size (bigger companies can generally pay bigger salaries).
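For the statistically inclined, “correcting for” here means running regressions with the confounds included as covariates, alongside plain correlations. Here’s a sketch of what that looks like; the dataframe and column names are hypothetical, for illustration only:

    import pandas as pd
    from scipy.stats import pearsonr
    import statsmodels.formula.api as smf

    # Hypothetical file: survey responses joined to interview ratings.
    df = pd.read_csv("survey_responses.csv")

    # Plain correlation between technical score and base salary.
    r, p = pearsonr(df["technical_score"], df["salary"])
    print(f"r = {r:.2f}, p = {p:.4f}")

    # Regression correcting for seniority, location, and company size.
    model = smf.ols(
        "salary ~ technical_score + years_experience + C(location) + C(company_size)",
        data=df,
    ).fit()
    print(model.summary())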

The mean yearly salary for all survey participants was around $130k, and 57% of them reported a yearly bonus. For that group, the average yearly bonus was $20k. For people who reported a dollar amount for equity, the average was $54k. Below is the distribution of experience level/seniority of survey respondents.

Distribution of experience level/seniority of survey respondents

Here’s what we found.

Better technical skills correlate with higher compensation

As is probably no surprise, people who score higher on technical skills during interviews do make more money. First, let’s look at base salary.2

Salary vs. Technical Interview Performance

Bonuses, too, correlate with technical skills, with an additional point in performance potentially worth about $10k:3

Bonus vs. Technical Interview Performance

The relationship between compensation and other interviewing skills

We also looked at the two other ratings that interviewers give after interviews: communication and problem solving. Better communication scores had a small but statistically significant correlation with salaries (r = .15, p < .01), but we found no significant relationship for problem solving scores in isolation:

Salary vs. Communication and Problem Solving Interview Performance

We also didn’t see a relationship between bonuses and either communication ability or problem solving ability.

The non-relationships didn’t surprise us too much, to be honest, because with a relatively small sample size, it’s notoriously difficult to get subcomponents of ratings to show a relationship to something as distal and complicated as salary. It’s very possible these relationships do exist; salary has many determinants besides actual interview performance, like seniority and market norms, and we’d like to repeat this survey at a bigger scale to inform this question.

What else?

We asked engineers whether they felt satisfied with their role, and found that engineers who felt satisfied earned an average of $14k more than engineers who felt dissatisfied.4

We also looked at people’s perceptions of their own performance. In a previous post, we explored how people rated their own technical performance after an interview compared to how the interviewer rated them and found that even experienced engineers aren’t great at guessing how they did. For this project, we were curious about whether overconfident engineers might net higher salaries (perhaps they negotiate harder!). So we also looked at people who rated their performance higher than their actual interview score — but found no difference in their compensation packages.

Another thing we were curious about was whether people who valued money over other factors while making a job decision would have higher salaries. So we asked people to rank the most important variables in their job decisions. 32% of respondents said that the compensation package was the most important part of their decision; the next most common response was “matches my interests and values”. But these questions didn’t have any predictive value for the actual salary amount: people who said money mattered most didn’t have significantly different salaries from people who said it mattered least. It’s possible that, with salaries being impacted by so many outside factors, like location and role type, candidates don’t truly have a lot of negotiating power over that salary number.

We looked at equity as well; the average reported equity package was $54k. We did not find any significant association between interview performance and reported equity packages. That said, enormous amounts of research have documented salary gaps based on gender, race, and other important sociocultural and demographic factors, and we hope to repeat this analysis when we have more data.

What do these findings mean for you?
Interview performance doesn’t just get you in the door or not: it can have a demonstrable connection to your eventual compensation. For instance, doing just a point better in your technical interview could be worth $10k or more in salary, and with bonus, it could add $20k to your annual comp.

Given how much technical interview performance matters, we’d be remiss if we didn’t suggest signing up for free, anonymous mock interviews on our platform. So, please go do that.

And, if you’re curious about what our salary survey looked like or want to participate and contribute to v2 of this post, please do so too!

1Our salary survey ended up with 494 respondents, but because some people filled out our survey but had not yet done an interview on our platform, only a subset of folks had both salary data and interview data: N = 234, or 47% of our salary sample. Therefore in all the analyses where we compare salaries to interview performance, it’s only for this subgroup.

2We ran both a correlation between these factors and a regression to correct for confounding factors like seniority and location. For the correlation, r = .22 and p < 0.001. For the regression, F = 16.06 and p < .001.

3As with base salary above, we ran both a correlation between these factors and a regression to correct for confounding factors like seniority and location. For the correlation, r = .17 and p < 0.05. For the regression, F = 1.63 and p < .05.

4Looking at it as a binary, satisfied engineers earned significantly more than nonsatisfied engineers in a predictive test, F = 5.2398, p < 0.02, and satisfaction and salary amount are also positively correlated.