You probably don’t factor in engineering time when calculating cost per hire. Here’s why you really should.

Posted on April 24th, 2019.

Whether you’re a recruiter yourself or an engineer who’s involved in hiring, you’ve probably heard of the following two recruiting-related metrics: time to hire and cost per hire. Indeed, these are THE two metrics that any self-respecting recruiting team will track. Time to hire is important because it lets you plan — if a given role has historically taken 3 months to fill, you’re going to act differently when you need to fill it again than if it takes 2 weeks. And, traditionally, cost per hire has been a planning tool as well — if you’re setting recruiting budgets for next year and have a headcount in mind, seeing what recruiting spent last year is super helpful.

But, with cost per hire (or CPH, as I’ll refer to it from now on in this post) in particular, there’s a problem. CPH is typically blended across ALL your hiring channels and is confined to recruiting spend alone. Computing one holistic CPH and confining it to just the recruiting team’s spend hides problems with your funnel and doesn’t help you compare the quality of your various candidate sources. And, most importantly, it completely overlooks arguably the most important thing of all — how much time your team is actually spending on hiring. Drilling down further, engineering time specifically, despite being one of the most expensive resources, isn’t usually measured as part of the overall cost per hire. Rather, it’s generally written off as part of the cost of doing business. The irony, of course, is that a typical interview process puts the recruiter call at the very beginning precisely to save eng time, but if we don’t measure and quantify the eng time we spend, we can’t really save it.

For what it’s worth, the Twitterverse (my followers are something like 50/50 engineers and recruiters) seems to agree. Here are the results (and some associated comments) of a poll I conducted on this very issue:

And yet, most of us don’t do it. Why? Is it because it doesn’t measure the things recruiters care about? Or is it because it’s hard? Or is it because we can’t change anything, so why bother? After all, engineers need to do interviews, both phone screens and onsites, and we already try to shield them as much as possible by having candidates chat with recruiters or do coding challenges first, so what else can you do?

If you’d like to skip straight to how to compute a better, more inclusive CPH, you can jump down to our handy spreadsheet. Otherwise, read on!

I’ve worked as both an engineer and an in-house recruiter before founding interviewing.io, so I have the good fortune of having seen the limitations of measuring CPH, from both sides of the table. As such, in this post, I’ll throw out two ways that we can make the cost per hire calculation more useful — by including eng time and by breaking it out by candidate source — and try to quantify exactly why these improvements are impactful… while building better rapport between recruiting and eng (where, real talk, relationships can be somewhat strained). But first, let’s talk about how CPH is typically calculated.

How is CPH typically calculated, and why does it omit eng time?

As I called out above, the primary purpose of calculating cost per hire is to plan the recruiting department’s budget for the next cycle. With that in mind, below is the formula that you’ll find if you google how to calculate cost per hire (pulled from Workable):

To figure out your CPH, you add up all the external and internal costs incurred during a recruiting cycle and divide by the number of hires made in that cycle: CPH = (total external costs + total internal costs) / number of hires.

“External” refers to any money paid out to third parties. Examples include job boards, tools (e.g. sourcing, assessment, your ATS), agency fees, candidate travel and lodging, and recruiting events/career fairs.

“Internal” refers to any money you spend within your company: recruiting team salaries, as well as any employee referral bonuses paid out over the course of the last cycle.

Note that internal costs don’t include eng salaries, as engineering and recruiting teams typically draw from different budgets. Hiring stuff is the domain of the recruiting team, and they pay for it out of their pockets… and engineers pay for… engineering stuff.
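
To make the arithmetic concrete, here’s a minimal sketch of that formula in code. Every dollar amount below is made up purely for illustration:

```python
# A minimal sketch of the standard CPH formula, with made-up numbers.
external_costs = {
    "job_boards": 20_000,
    "sourcing_and_assessment_tools": 15_000,
    "agency_fees": 50_000,
    "candidate_travel_and_lodging": 10_000,
    "recruiting_events": 5_000,
}
internal_costs = {
    "recruiting_team_salaries": 300_000,
    "referral_bonuses": 20_000,
}
hires = 42

cph = (sum(external_costs.values()) + sum(internal_costs.values())) / hires
print(f"Standard CPH: ${cph:,.0f}")  # -> Standard CPH: $10,000
```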

What’s problematic is that, while it’s called “cost per hire,” this metric actually tells us what recruiting spends rather than what’s being spent as a whole. While tracking recruiting spend makes sense for budget planning, this metric, because of its misleading name, often gets pulled into something it ironically wasn’t intended for: figuring out how much the company is actually spending to make hires.

Why does factoring in engineering time matter?

As you saw above, the way we compute CPH is inaccurate because it doesn’t factor in any time or resource expenditure outside the recruiting team (with eng being the biggest one). But does engineering time really matter?

Yes, it matters a lot, for the following three reasons:

  1. Way more eng time than recruiting time goes into hiring (as you’ll see in this post!)
  2. Eng time is more expensive
  3. Eng time expenditure can vary wildly by channel

To establish that these things are (probably) true, let’s look at a typical eng hiring funnel.1 For the purposes of this exercise, we’ll start the funnel at the recruiter screen and assume that the costs of sourcing candidates are fixed.2

The green arrows are conversion rates between each step (e.g. 50% of people who get offers accept and get hired). The small gray text at the bottom of each box is how long that step takes for an engineer or recruiter (or both, in the case of an onsite). And the black number is how many times that needs to happen to ultimately make 1 hire, based on the green-arrow conversion rates.

So, with that in mind, let’s see how much time both eng and recruiting need to spend to make 1 hire and how much that time costs. Note that I’m assuming $100/hour is a decent approximation for recruiting comp and $150/hour is a decent approximation for eng comp.

Is eng time spent on recruiting really that costly?

Based on the funnel above, here’s the breakdown of time spent by both engineering and recruiting to make 1 hire. The parentheticals next to each line of time spent are based on how long that step takes times the number of times it needs to happen.

RECRUITING – 15 total hours
10 hours of recruiter screens (20 screens needed * 30 min per screen)
4 hours of onsites (4 onsites needed * 1 hour per onsite)
1 hour of offers (2 offer calls needed * 30 min per offer call)

To make 1 hire, it takes 15 recruiting hours or $1500.

ENGINEERING – 40 total hours
16 hours of phone screens (16 screens needed * 1 hour per screen)
24 hours of onsites (4 onsites needed * 6 hours per onsite)

For 1 hire, that’s a total of 40 eng hours, and on the face of it, it’s $6,000 of engineering time, but there is one more subtle multiplier on eng time that doesn’t apply to recruiting time that we need to factor in. Every time you interrupt an engineer from their primary job, which is solving problems with code, it takes time to refocus and get back into it. If you’re an engineer, you know this deep in your bones. And if you’re not, interruptions are very likely something you’ve heard your engineering friends decry… because they’re so painful and detrimental to continued productivity. Back when I was writing code on a regular basis, it would take me 15 minutes of staring at my IDE (or, if I’m honest, occasionally reading Hacker News or Reddit) to let my brain ease back into doing work after coming back from an interview. And it would take me 15 minutes before an interview to read a candidate’s resume and get in the mindset of whatever coding or design question I was going to ask. I expect my time windows are pretty typical, so it basically ends up being a half hour of ramping up and back down for every hour spent interviewing.

Therefore, with ramp-up and ramp-down time in mind, it’s more like $9,000 in eng hours.3

Ultimately, for one hire, we’re paying a total of $10,500, but eng incurs 6X the cost that recruiting does during the hiring process.
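
If you’d rather see that arithmetic as code, here’s a minimal sketch of the funnel model above. The conversion rates, step durations, and hourly rates are the ones implied by the numbers in this post; your funnel will, of course, differ:

```python
# Cost to make 1 hire, working backward from the funnel's conversion rates.
RECRUITER_RATE, ENG_RATE = 100, 150  # $/hour, per the assumptions above
ENG_OVERHEAD = 1.5                   # 30 min of ramp-up/down per eng interview hour

def cost_per_hire(recruiter_to_tps=0.80, tps_to_onsite=0.25,
                  onsite_to_offer=0.50, offer_to_hire=0.50):
    # How many times each step must happen to yield 1 hire
    offers = 1 / offer_to_hire
    onsites = offers / onsite_to_offer
    tech_screens = onsites / tps_to_onsite
    recruiter_screens = tech_screens / recruiter_to_tps

    # Durations: recruiter screens and offer calls are 30 min; an onsite is
    # 1 recruiting hour plus 6 eng hours; a tech screen is 1 eng hour
    recruiting_hours = recruiter_screens * 0.5 + onsites * 1 + offers * 0.5
    eng_hours = tech_screens * 1 + onsites * 6

    recruiting_cost = recruiting_hours * RECRUITER_RATE
    eng_cost = eng_hours * ENG_OVERHEAD * ENG_RATE
    return recruiting_cost, eng_cost

recruiting, eng = cost_per_hire()
print(recruiting, eng, recruiting + eng)  # 1500.0 9000.0 10500.0
```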

Why does breaking out cost per hire by source matter?

So, hopefully, I’ve convinced you that engineering time spent on hiring matters and that it’s the biggest cost you incur. But, if there’s nothing we can do to change it, and it’s just the cost of doing business, then why factor it in to CPH calculations? It turns out that eng time spent IS a lever you can pull, and its impact becomes clear when you think about cost per hire by candidate source.

To make that more concrete, let’s take a look at 2 examples. In both cases, we’ll pretend that one of our candidate sources converts at a different rate than the overall funnel at some step. We’ll try to guess what the financial implications of that are… and then actually calculate them. You might be surprised by the results.

What happens when you increase TPS to onsite conversion to 50%?

As you can see in the funnel above, a decent TPS to onsite conversion rate is 25%. Let’s say one of your sources could double that to 50% (by doing more extensive top-of-funnel filtering, let’s say). What do you think this will do to cost per hire?

In this model, we’re spending a total of 10 recruiting hours (worth $1000) and 32 eng hours (worth $7200).4 Compared to the $10,500 base case above, we’re now paying a total of $8200 to make a hire.

In this case, you’ve reduced your recruiting time spent by a third and your eng time spent by 20%, ultimately saving $2300 per hire. If one of your sources can get you this kind of efficiency gain, you probably want to invest more resources into it. And though doubling conversion from tech screen to onsite sounds great (and is perhaps something you already knew about your source), without computing the cost per hire for this channel, it’s not intuitively clear just how much money a funnel improvement can save you, end to end.

What happens when you cut your offer acceptance rate in half?

Another possibility is that one of your sources does pretty well when it comes to candidate quality all the way to offer, but for some reason, those candidates are twice as hard to close. In this scenario, you double both the eng and recruiting time expenditure and ultimately pay an extra $10,500 per hire for this source (one you’ll likely want to deallocate resources from going forward).5
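
Plugging both scenarios into the cost_per_hire sketch from earlier reproduces these numbers (and the ones in footnotes 4 and 5):

```python
# Example 1: TPS -> onsite conversion doubles from 25% to 50%
recruiting, eng = cost_per_hire(tps_to_onsite=0.50)
print(recruiting + eng)  # 8200.0 (10 recruiting hours, 32 eng hours)

# Example 2: offer acceptance rate halves from 50% to 25%
recruiting, eng = cost_per_hire(offer_to_hire=0.25)
print(recruiting + eng)  # 21000.0 (30 recruiting hours, 80 eng hours)
```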

In either of the examples above, until you break out CPH by source and see exactly what each is costing you, it’s a lot harder to figure out how to optimize your spend.

How to actually measure cost per hire (and include eng time of course!)

The usual way to calculate cost per hire is definitely useful for setting recruiting budget, as we discussed above, but if you want to figure out how much your whole company is actually spending on hiring, you need to factor in the most expensive piece — engineering time.

To do this, we propose a different metric, one that’s based on time spent by your team rather than overall salaries and fixed costs. Let’s call it “cost per hire prime” or CPH prime.

CPH prime doesn’t factor in fixed costs like salaries or events, which you can still do using the formula above… but it is going to be instrumental in helping you get a handle on what your spend actually looks like and will help you compare different channels.

To make your life easier, we’ve created a handy spreadsheet for you to copy and then fill in your numbers, like so:

As you can see, once you fill in the highlighted cells with your own conversion numbers (and optionally your hourly wages, if yours differ much from our guesses), we’ll compute CPH prime for you.

And because we’re a business and want you to hire through us, we’ve included the average savings for companies hiring through our platform. We provide two big value-adds: we can pretty drastically improve your TPS to onsite conversion — about 65% of our candidates pass the tech screen at companies on average. From there, they get offers and accept them at the same rate as you’d see in your regular funnel.

Closing thoughts on building bridges between eng and recruiting

So, why does being cognizant of eng time in your CPH calculations matter? I’ve already kind of beaten it into the ground that it’s the biggest cost sink. However, there’s another, more noble reason to care about eng time. In my career, having sat on all different sides of the table, I’ve noticed one unfortunate truth: engineering and recruiting teams are simply not aligned.

Engineers tend to harbor some resentment toward recruiters because recruiters are the arbiters of how eng spends its time when it comes to hiring, without a set of clear metrics or goals that help protect that time.

Recruiters often feel some amount of resentment toward engineers who tend to be resistant to interruptions, toward putting in the time to provide meaningful feedback about candidates so that recruiting can get better, and toward changes in the process.

In our humble opinion, much of the resentment on both sides could be cured by incorporating recruiting and engineering costs together in a specific, actionable way that will reduce the misalignment we’re seeing. Recruiters tend to hold the cards when it comes to hiring practices, so we’d love to see them take the lead to reach across the aisle by proactively factoring in eng time spent during hiring and ultimately incorporating recruiting and eng costs together in one metric that matters. Once that’s in place, recruiting can use the data they gather to make better decisions about how to use eng time, and in the process, rebuild much of the rapport and love that’s lost between the two departments.

1We’re basing these numbers on a mix of ATS reporting (Lever’s recruiting metrics report in particular) and what we’ve heard from our customers.

2We’re assuming sourcing costs are fixed for purposes of simplicity and because this post is largely about the importance of eng time factored into the funnel. Of course, if you have channels that reduce sourcing time significantly, you’ll want to weigh that when evaluating their efficacy.

3Really though, the value of an hour of work for an engineer is intangible and much higher than an hourly wage. There ARE inefficiencies and overhead to having a larger staff, not every hour is effective, and most likely it’s your best people who are conducting interviews. The reality is that the money spent on salaries is probably only a fraction of the true cost to the company, particularly for engineers (as opposed to recruiters).

4Here’s us showing our work in figuring out how much recruiting and eng time it takes to make a hire when your TPS to onsite conversion rate is 50%:
RECRUITING – 10 total hours or $1000
5 hours of recruiter screens (10 screens needed * 30 min per screen)
4 hours of onsites (4 onsites needed * 1 hour per onsite)
1 hour of offers (2 offer calls needed * 30 min per offer call)
ENGINEERING – 32 total hours or $7200
8 hours of phone screens (8 screens needed * 1 hour per screen)
24 hours of onsites (4 onsites needed * 6 hours per onsite)

5Here’s us showing our work in figuring out how much recruiting and eng time it takes to make a hire when you cut your offer acceptance rate in half:
RECRUITING – 30 total hours or $3000
20 hours of recruiter screens (40 screens needed * 30 min per screen)
8 hours of onsites (8 onsites needed * 1 hour per onsite)
2 hours of offers (4 offer calls needed * 30 min per offer call)
ENGINEERING – 80 total hours or $18,000
32 hours of phone screens (32 screens needed * 1 hour per screen)
48 hours of onsites (8 onsites needed * 6 hours per onsite)


Can fake names create bias? An exploration into interviewing.io’s random name generator

Posted on March 7th, 2019.

Hello everyone, my name is Atomic Artichoke, and I’m the newest employee of the interviewing.io team, having joined a couple months ago as a Data Scientist.

Atomic Artichoke isn’t my real name, of course. That’s the pseudonym the interviewing.io platform gave me, right before I took my final interview with the company. If you’ve never used interviewing.io before (and hey, if you haven’t already, why not sign up now?), it’s a platform where you can practice technical interviewing anonymously with experienced engineers (and do real job interviews anonymously too).

On signup, interviewing.io creates an anonymous handle for you

When it’s time to interview, you and your partner meet in a collaborative coding environment with voice, text chat, and a whiteboard (check out recordings of real interviews to see this process in action). During interviews, instead of your name, your partner will see your pseudonym, like so:

During interviews, you are identified by your anonymized pseudonym

In my opinion, “Atomic Artichoke” is a pretty cool name. It sounds like a Teenage Mutant Ninja Turtles villain, and alliterative phrases are always cool. However, I had some reservations about that handle, because I feel like the pseudonym represented me in ways with which I didn’t identify. I don’t know how to eat or cook an artichoke, I never really understood atoms much, and I possess no mutant superpowers.

But I wondered, how did the interviewer perceive me? Did this person think “Atomic Artichoke” was a cool name? If so, did that name influence his or her perception of me in any way? More importantly, did my pseudonym have any influence in me getting hired? If I had a different, less cool name, would I have gotten this job?

I know, it’s a silly question. I’d like to think I was hired because of my skills, but who really knows? I was curious, so I invested (okay, wasted) a few days to investigate.

What we already know about names in the hiring process

You might be asking, “Why does interviewing.io have pseudonyms, anyway?” Anonymity. We want candidates to be assessed on their actual skills, not on proxies of skill like the colleges they’ve attended, the notoriety of their social circles, or prior companies they’ve worked at. If a hiring manager knows a person’s name and knows how to use the Internet, it’s easy to find this information.

I’m not the first to wonder about names and hiring. Plenty of academic literature exists exploring the impact of name choice on various life outcomes. I’ll briefly touch on a handful of those perspectives.

As you can see, academic opinions differ. However, if name-based bias actually exists, maybe we can implement a cheap-enough solution to eliminate it completely. Randomly-generated pseudonyms fit that bill nicely.

But as I wondered before, maybe the pseudonym generator creates a different kind of bias, leaving us in much the same biased place that using real people’s names would. I first needed to understand how pseudonyms get generated, so I dug into some code.

Exploring code

After dusting off what little Javascript knowledge I acquired 6 years ago, I found the 13 lines of code that generate pseudonyms. Mechanics-wise it’s simple: there are two lists, one containing adjectives and one containing nouns. The pseudonym generator randomly pulls one word from each list and mashes them together, adjective first, with a space in between (I’ll sketch the logic in code below). The generator outputs some sweet-sounding pseudonyms like:

  • Serpentine Gyroscope
  • Moldy Parallelogram
  • Frumious Slide Rule
  • Supersonic Llama

But it can also come up with less memorable, more commonplace, and more boring phrases like:

  • Ice Snow
  • Warm Wind
  • Red Egg1
  • Infinite Avalanche
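
For illustration, here’s the gist of that logic as a Python sketch (the real generator is those 13 lines of JavaScript, and the word lists below are just examples pulled from this post, not our actual lists):

```python
import random

# Example word lists; the real (much longer) lists live in the app code.
ADJECTIVES = ["Atomic", "Serpentine", "Moldy", "Frumious", "Supersonic", "Infinite"]
NOUNS = ["Artichoke", "Gyroscope", "Parallelogram", "Slide Rule", "Llama", "Avalanche"]

def generate_pseudonym():
    # One random adjective + one random noun, adjective first, space in between
    return f"{random.choice(ADJECTIVES)} {random.choice(NOUNS)}"

print(generate_pseudonym())  # e.g. "Frumious Llama"
```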

After running through a few example pseudonyms, anecdotally I felt the first list was more attractive to me than the second. It sparked more joy in me, one could say. I just couldn’t articulate why.

That’s when I noticed that certain themes kept recurring. For example, there were multiple Alice in Wonderland references, a bunch of animals, and many types of foods listed. At first glance the chosen words seemed odd. But after getting to know my co-workers better, the list of words began to make a lot more sense.

The co-worker sitting across from me is a huge Alice in Wonderland fan. Our founders seem to love animals, since they bring their dogs to work most days. Finally, food and restaurant discussions fuel most lunchtime arguments. Just in my first month, I had heard more discussion about chicken mole and Olive Garden than I ever had in my life.

While it’s true the pseudonym generator chooses words randomly, the choice of which words get onto the lists isn’t necessarily random. If anything, the choice of words reflects the interests of the people who built the application. Might it be possible that the first list appealed to me because those names reference math concepts, and I happen to like math-y things?

The hypothesis

This insight helped me craft my hypothesis more concretely: all else equal, do some candidates receive better ratings on interviews, because interviewers happen to associate positively with users whose pseudonyms reference the interviewers’ personal interests?

This hypothesis rests upon the assumption that people are drawn to stuff that’s similar to themselves. This seems intuitive: when individuals share common interests or backgrounds with others, chances are they’ll like each other. Therefore, is it possible that interviewers like certain candidates more because they find commonality with them, even though we manufactured that commonality? And did that likability translate to better interview ratings?

To test this, I categorized users into one of the following 6 categories based on the noun part of their pseudonym, which I’ll call Noun Category going forward:

  • Animal
  • Fantasy
  • Food
  • History
  • Object
  • Science

These broad categories aimed to differentiate among interest areas that might appeal differently to different interviewers. Among these 6 groups, I wanted to observe differences in interview performance. And knowing the pseudonym generator assigns names randomly, we would not expect to find a difference.

To proxy for interview performance, I used the “Would You Hire” response from the interviewer on the interviewee, which is the first item on the interviewer’s post-interview questionnaire.

An interviewer's rubric after an interviewing.io interview

An interviewer’s rubric after an interviewing.io interview

These two pieces of data led to a clear, testable null hypothesis: there should exist no relationship between Noun Category and the Would You Hire response. If we reject this null hypothesis, we would have evidence suggesting our pseudonyms can impact hiring decisions.

Data analysis and interpretation

I pulled data on a sample of a few thousand interviewing.io candidates’ first interview on our platform, and performed a Chi-Squared test against the observed frequencies of the 6 “Noun Categories” and 2 “Would You Hire” interviewer responses. Each cell of the 6 x 2 matrix contained at least 40 observations.
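
For the statistically inclined, the test boils down to something like this (a sketch using scipy, with placeholder counts rather than our real data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Noun Category; columns: "Would You Hire" (yes, no).
# Placeholder counts -- each cell of our real 6 x 2 matrix had >= 40 observations.
observed = np.array([
    [220, 310],  # Animal
    [180, 230],  # Fantasy
    [150, 240],  # Food
    [90,  200],  # History
    [160, 250],  # Object
    [140, 210],  # Science
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: Noun Category and hire response look related.")
```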

Below are the mean percentages of candidates who received a Yes from their interviewer, broken out by Noun Category. While most of the categories clumped around a similar pass rate, the History group seemed to under-perform while the Fantasy group over-performed.

“Would You Hire” pass rate, by Noun Category

The Chi-Square test rejected the null hypothesis at a 5% significance level.

These results suggest a relationship might exist between Noun Category and an interviewer’s Would You Hire response. This, again, shouldn’t happen, because a candidate’s Noun Category was randomly assigned!2

What next?

While this analysis doesn’t predict outcomes for specific individuals, the result suggests it isn’t totally crazy to believe I may have gotten lucky on my interview. Maybe I don’t suffer from imposter syndrome; maybe I am an imposter. How depressing.

So what now? Fortunately (or unfortunately) for my new company, I can suggest some potential next steps if we want to eliminate this bias.

One solution might be to pander to an interviewer’s interests. We could randomly generate a new pseudonym for candidates every time they meet a different interviewer, ensuring that pseudonym creates positive associations with the interviewer. Similarly, we could generate more pseudonyms referencing Lord of the Rings and Warcraft, if we know our interviewer pool tends to be fantasy-inclined.

An alternative solution might be to give candidates pseudonyms with no meaning at all. For example, we could generate random strings, similar to what password managers generate for you. This would eliminate any real world associations, but we’d lose some whimsy and human readability that the current pseudonyms provide.

Yet another alternative solution could be to do more analysis before acting. The analysis didn’t quantify the magnitude of the bias, so we could construct a new sample to test a more specific hypothesis about bias size. It’s possible the practical impact of the bias isn’t huge, and we should focus our energy elsewhere.

Zooming out

On the face of it, this pseudonym bias seems trivial, and in the universe of all biases that could exist, that’s probably true. However, it makes me wonder how many other hidden biases might exist elsewhere in life.

I think that’s why I was hired. I’m obsessed with bias. Though I’ll be doing normal business-y Data Scientist stuff, my more interesting responsibilities will be poking at all aspects of the hiring market and examining the myriad of factors, mechanisms, and individuals that make the hiring market function, and perhaps not function effectively for some people.

Going a step further than identifying hiring biases, I’d like to shift discussions toward action. It’s great that the tech industry talks about diversity more, but I think we can facilitate more discussions around which concrete actions are being taken, and whether those actions actually achieve our goals, whatever those goals may be.

I think it all starts with being introspective about ourselves, and investigating whether something as innocuous as a randomly generated phrase could ever matter.

Atomic Artichoke
(Ken Pascual)

1This is the shortest pseudonym possible on interviewing.io.

2This is not entirely true. Users can re-generate a random pseudonym as often as they want, meaning a user can effectively choose their name by re-generating a lot. However, there’s no evidence this happens often, because we found no significant difference between the observed and theoretical randomized distributions of Noun Categories.


There is a real connection between technical interview performance and salary. Here’s the data.

Posted on February 26th, 2019.

At the end of the day, money is a huge driver for the decisions we make about what jobs to go after. In the past, we’ve written about how to negotiate your salary, and there are a lot of labor statistics and reports out there looking at salaries in the tech industry as a whole. But as with many things in eng hiring, there’s very little concrete data on whether technical interview performance plays a role in compensation offers.

So we set out to gather the data and asked our users who had gone on to successfully get jobs after using our platform to share their salary info. With our unique dataset of real coding interviews, we could ask questions like:

  • Does interview performance matter when it comes to compensation packages?
  • Do engineers who prioritize other parts of a role over compensation (e.g. values alignment) end up with lower salaries?
  • What else seems to matter in getting a higher salary?

To be clear, this is an exploration of past average interview performance and its connection with current salary, versus looking at how someone did in an interview and then what salary they got when they took that specific job. In other words, we haven’t paired job interviews with the salary for that same job. We believe that looking at these more general measures is more informative than trying to match single interviews and job offers, given how volatile individual interview performance can be. But our interviewing platform allowed us to look at performance across multiple interviews for respondents, which gave us more stability and more data.

The setup

On the interviewing.io platform, people can practice technical interviews online and anonymously, with real engineers on the other side.

When an interviewer and an interviewee match on our platform, they meet in a collaborative coding environment with voice, text chat, and a whiteboard and jump right into a technical question. Check out our recordings page to see this process in action.

Interview questions on the platform tend to fall into the category of what you’d encounter at a phone screen for a back-end software engineering role, and interviewers typically come from top companies like Google, Facebook, Dropbox, Airbnb, and more.

After every interview, interviewers rate interviewees on a few different dimensions: technical skills, communication skills, and problem solving skills. These each get rated on a scale of 1 to 4, where 1 is “poor” and 4 is “amazing!”. On our platform, a score of 3 or above has generally meant that the person was good enough to move forward. You can see what our feedback form looks like below:

With this in mind, we surveyed interviewing.io users about their current roles, including salary, bonuses, and how satisfied they felt in their job and then tied their comp back to how they did in interviews on our platform. We ended up with responses from 494 engineers1, and because compensation packages are so complex and vary from company to company, we analyzed the data in several different ways, looking at annual salary numbers, bonuses, and equity. Then we tied compensation data to performance in technical interviews to see whether it matters, and if so, how much.

The results

We looked at the relationships between interview performance (technical skills, communication ability, and problem solving ability) and the following: base salary, bonuses, and equity. In all cases, we corrected for location (being in the Bay Area means a higher salary) and experience (senior engineers make senior salaries), and where we could, we corrected for company size (bigger companies can generally pay bigger salaries).
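
As a sketch of what “correcting for” means here, the models we fit look roughly like this (the column names are illustrative, not our actual schema):

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per survey respondent; column names here are hypothetical.
df = pd.read_csv("salary_survey.csv")

# Regress salary on the interview score while controlling for location,
# experience, and company size, so the score's coefficient isn't just
# picking up "senior Bay Area engineer at a big company" effects.
model = smf.ols(
    "salary ~ technical_score + C(location) + years_experience + company_size",
    data=df,
).fit()
print(model.summary())
```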

The mean yearly salary for all survey participants was around $130k, and 57% of them reported a yearly bonus. For that group, the average yearly bonus was $20k. For people who reported a dollar amount for equity, the average was $54k. Below is the distribution of experience level/seniority of survey respondents.


Here’s what we found.

Better technical skills correlate with higher compensation

It probably comes as no surprise that people who score higher on technical skills during interviews make more money. First, let’s look at base salary.2

Salary vs. Technical Interview Performance

Bonuses, too, correlate with technical skills, with an additional point in performance potentially worth about $10k:3

Bonus vs. Technical Interview Performance

The relationship between compensation and other interviewing skills

We also looked at the two other ratings that interviewers give after interviews: communication and problem solving. Better communication scores had a small but statistically significant correlation with salaries (r = .15, p < .01), but we found no significant relationship for problem solving scores in isolation:

Salary vs. Communication and Problem Solving scores

We also didn’t see a relationship between bonuses and either communication ability or problem solving ability.

The non-relationships didn’t surprise us too much, to be honest, because with a relatively small sample size it’s notoriously difficult to get subcomponents of ratings to show a relationship to something distal and complicated like salaries. It’s very possible these relationships do exist; salaries have many determinants besides actual interview performance, like seniority and market salary norms, so we’d like to repeat this survey at a bigger scale to inform this question.

What else?

We asked engineers whether they felt satisfied with their role, and found that engineers who felt satisfied earned an average $14k more than engineers who felt dissatisfied.4

We also looked at people’s perceptions of their own performance. In a previous post, we explored how people rated their own technical performance after an interview compared to how the interviewer rated them and found that even experienced engineers aren’t great at guessing how they did. For this project, we were curious about whether overconfident engineers might net higher salaries (perhaps they negotiate harder!). So we also looked at people who rated their performance higher than their actual interview score — but found no difference in their compensation packages.

Another thing we were curious about was whether people who valued money over other factors while making a job decision would have higher salaries. So we asked people to rank the most important variables in their job decisions. 32% of respondents said that the compensation package was the most important part of their decision; the next most common response was “matches my interests and values”. But this question didn’t have any predictive value for the actual salary amount: people who said money matters the most didn’t have significantly different salaries from people who said money matters the least. It’s possible that with salaries being impacted by so many outside factors, like location and role type, candidates don’t truly have a lot of negotiating power over that salary number.

We looked at equity as well; the average reported equity package was $54k. We did not find any significant association between interview performance and reported equity packages. That said, enormous amounts of research have documented various salary gaps based on gender, race, and other important sociocultural and demographic factors, and we hope to repeat this analysis when we have more data.

What do these findings mean for you?

Interview performance doesn’t just get you in the door or not: it can have a demonstrable connection to your eventual compensation. For instance, doing just a point better in your technical interview could be worth $10k or more, and with bonus, it could add $20k to your annual comp.

Given how much technical interview performance matters, we’d be remiss if we didn’t suggest signing up for free, anonymous mock interviews on our platform. So, please go do that.

And, if you’re curious about what our salary survey looked like or want to participate and contribute to v2 of this post, please do so too!

1Our salary survey ended up with 494 respondents, but because some people filled out our survey but had not yet done an interview on our platform, only a subset of folks had both salary data and interview data: N = 234, or 47% of our salary sample. Therefore in all the analyses where we compare salaries to interview performance, it’s only for this subgroup.

2We ran both a correlation between these factors and a regression to correct for confounding factors like seniority and location. For the correlation, r = .22 and p < 0.001. For the regression, F = 16.06 and p < .001.

3As with base salary above, we ran both a correlation between these factors and a regression to correct for confounding factors like seniority and location. For the correlation, r = .17 and p < 0.05. For the regression, F = 1.63 and p < .05.

4Looking at it as a binary, satisfied engineers earned significantly more than nonsatisfied engineers in a predictive test, F = 5.2398, p < 0.02, and satisfaction and salary amount are also positively correlated.


Impostor syndrome strikes men just as hard as women… and other findings from thousands of technical interviews

Posted on October 30th, 2018.

The modern technical interview is a rite of passage for software engineers and (hopefully!) the precursor to a great job. But it’s also a huge source of stress and endless questions for new candidates. Just searching “how do I prepare for a technical interview” turns up millions of Medium posts, coding bootcamp blogs, Quora discussions, and entire books.

Despite all this conversation, people struggle to know how they’re even doing in interviews. In a previous post, we found that a surprisingly large number of interviewing.io’s users consistently underestimate their performance, making them more likely to drop out of the process and ultimately harder to hire. Now, and with considerably more data (over 10k interviews led by real software engineers!), we wanted to go deeper: what seems to make candidates worse at gauging their own performance?

We know some general facts that make accuracy a challenge: people aren’t always great at assessing or even remembering their performance on difficult cognitive tasks like writing code.1 Technical interviews can be particularly hard to judge if candidates don’t have much experience with questions with no single right answer. Since many companies don’t share any kind of detailed post-interview feedback (beyond a yes/no) with candidates for liability reasons, many folks never get any sense of how they did, what they did well, or what could have been better.2, 3 Indeed, pulling back the curtain on interviewing, across the industry, was one of the primary motivators for building interviewing.io!

But to our knowledge there’s little data out there looking specifically at how people feel after real interviews on this scale, across different companies — so we gathered it, giving us the ability to test interesting industry assumptions about engineers and coding confidence.

One big factor we were interested in was impostor syndrome. Impostor syndrome resonates with a lot of engineers,4 indicating that many wonder whether they truly match up to colleagues and discount even strong evidence of competence as a fluke. Impostor syndrome can make us wonder whether we can count on the positive performance feedback that we’re getting, and how much our opportunities have come from our own effort, versus luck. Of particular interest to us was whether this would show up for women on our platform. There’s a lot of research evidence that candidates from underrepresented backgrounds experience a greater lack of belonging that feeds impostor syndrome,5 and this could show up as inaccuracy about judging your own interview performance.

The setup

interviewing.io is a platform where people can practice technical interviewing anonymously, and if things go well, get jobs at top companies in the process. We started it because resumes suck and because we believe that anyone, regardless of how they look on paper, should have the opportunity to prove their mettle.

When an interviewer and an interviewee match on interviewing.io, they meet in a collaborative coding environment with voice, text chat, and a whiteboard and jump right into a technical question (feel free to watch this process in action on our interview recordings page).  After each interview, people leave one another feedback, and each party can see what the other person said about them once they both submit their reviews.

Here’s an example of an interviewer feedback form:

Feedback form for interviewers

Immediately after the interview, candidates answered a question about how well they thought they’d done on the same 1-4 scale:

Feedback form for interviewees

For this post, we looked at over 10k technical interviews led by real software engineers from top companies. In each interview, a candidate was rated by an interviewer on their problem-solving ability, technical ability, and communication skills, as well as whether the interviewer would advance them to the next round. This gave us a measure of how different someone’s self-rating was from the rating that the interviewer actually gave them, and in which direction. In other words, how skewed was their estimation from their true performance?
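
Concretely, the skew measure is just the signed difference between the two ratings. A sketch (the column names are illustrative):

```python
import pandas as pd

# One row per interview; column names here are hypothetical.
interviews = pd.read_csv("interviews.csv")

# Signed skew: 0 = matched the interviewer, negative = underestimated,
# positive = overestimated (ratings are on the 1-4 scale)
interviews["skew"] = interviews["self_rating"] - interviews["interviewer_rating"]

# Bin into the three groups discussed below
accuracy = pd.cut(interviews["skew"], bins=[-4, -0.5, 0.5, 4],
                  labels=["underestimated", "accurate", "overestimated"])
print(accuracy.value_counts(normalize=True))
```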

Going in, we had some hunches about what might matter:

  • Gender. Would women be harder on their coding performance than men?
  • Having been an interviewer before. It seems reasonable that having been on the other side will pull back the curtain on interviews.
  • Being employed at a top company. Similar to above.
  • Being a top-performing interviewee on interviewing.io — people who are better interviewees overall might have more confidence and awareness of when they’ve gotten things right (or wrong!)
  • Being in the Bay Area or not. Since tech is still so geographically centered on the Bay Area, we considered that folks who live in a more engineering-saturated culture could have greater familiarity with professional norms around interviews.
  • Within the interview itself, question quality and interviewer quality. Presumably, a better interviewer is also a better communicator, whereas a confusing interviewer might throw off a candidate’s entire assessment of their performance. We also looked at whether it was a practice interview, or for a specific company role.
  • For some candidates, we could also look at a few measures of their personal brand within the industry, like their number of GitHub and Twitter followers. Maybe people with a strong online presence are more sure of themselves when they interview?

So what did we find?

Women are just as accurate as men at assessing their technical ability

Contrary to expectations around gender and confidence, we didn’t find a reliable statistically significant gender difference in accuracy. At first, it looked like female candidates were more likely to underestimate their performance, but when we controlled for other variables, like experience and rated technical ability, it turned out the key differentiator was experience. More experienced engineers are more accurate about their interview performance, and men are more likely to be experienced engineers, but experienced female engineers are just as accurate about their technical ability.

Based on previous research, we hypothesized that impostor syndrome and a greater lack of belonging could result in female candidates penalizing their interview performance, but we didn’t find that pattern.6 However, our finding echoes a research project from the Stanford Clayman Institute for Gender Research, which looked at 1,795 mid-level tech workers from high tech companies. They found that women in tech aren’t necessarily less accurate when assessing their own abilities, but do have significantly different ideas about what success requires (e.g., long working hours and risk-taking). In other words, women in tech may not doubt their own abilities but might have different ideas about what’s expected. And a survey from Harvard Business Review  asking over a thousand professionals about their job application decisions also made this point. Their results emphasized that gender gaps in evaluation scenarios could be more about different expectations for how scenarios like interviews are judged.

That said, we did find one interesting difference: women went through fewer practice interviews overall than men did. The difference was small but statistically significant, and harkens back to our earlier finding that women leave interviewing.io roughly 7 times as often as men do, after a bad interview.

But in that same earlier post, we also found that masking voices didn’t impact interview outcomes. This whole cluster of findings affirms what we suspected and what the folks doing in-depth studies of gender in tech have found: it’s complicated. Women’s lack of persistence in interviews can’t be explained only by impostor syndrome about their own abilities, but it’s still likely that they’re interpreting negative feedback more severely and making different assumptions about interviews.

Here’s the distribution of accuracy distance for both female and male candidates on our platform (zero indicates a rating that matches the interviewer’s score, negative values indicate an underestimate, and positive values an overestimate). The two groups look pretty much identical:

Accuracy by gender

What else didn’t matter?

Another surprise: having been an interviewer didn’t help. Even people who had been interviewers themselves don’t seem to get an accuracy boost from that. Personal brand was another non-finding. People with more GitHub followers weren’t more accurate than people with few to no GitHub followers. Nor did interviewer rating matter (i.e. how well an interviewer was reviewed by their candidates), although to be fair, interviewers are generally rated quite highly on the site.

So what did provide a statistically significant boost to accurate judgments of interview performance? Mostly, experience.

Experienced engineers have a better sense of how well they did in interviews, compared with engineers earlier in their careers.7 It doesn’t seem to be simply that you’re better at gauging your interview performance because you’re better at writing code, although there is a small lift from that, with higher-rated engineers being more accurate. Even top-performing junior candidates struggled to accurately assess their performance.8

experienced versus juniors

Our data mirrors a trend seen in Stack Overflow’s 2018 Developer survey. They asked respondents several questions about confidence and competition with other developers, and noted that more experienced engineers feel less competitive and more confident.9 This isn’t necessarily surprising: experience is correlated with skill level, after all, and highly skilled people are likely to be more confident. But our analysis let us control for performance and code skill within career groups, and we still found that experienced engineers were better at predicting their interview scores. There are probably multiple factors here: experienced engineers have been through more interviews, have led interviews themselves, and have a stronger sense of belonging, all of which may combat impostor syndrome.

Insider knowledge and context also seem to help: being in the Bay Area and being at a top company both made people more accurate, with small but statistically significant lifts from each factor. However, the lift from working at a top company seems to mostly reflect overall technical ability: being at a top company is essentially a proxy for being a more experienced, higher-quality engineer.

Finally, as you get better at interviewing and move into company interviews, you do get more accurate. People were more accurate about their performance in company interviews compared to practice interviews. Overall site ranking predicted improved accuracy, too: interviewing.io gives users an overall ranking based on their performance over multiple interviews, weighted toward more recent ones, and people who scored in the top 25% were more likely to be accurate about their interview performance.

In general, how good are people at gauging their interview performance? We’ve looked at this before, with roughly a thousand interviews, and now, with ten thousand, the finding continues to hold up. Candidates were accurate about how they did in only 46% of interviews and underestimated themselves in 35% of interviews (the remaining 19%, of course, are the overestimators). Still, candidates are generally on the right track — it’s not like people who score a 4 are always giving themselves a 1.10 Self-ratings are statistically significantly predictive of actual interview scores (and positively correlated), but that relationship is noisy.

The implications

Accurately judging your own interview performance is a skill in its own right and one that engineers need to learn from experience and context in the tech industry. But we’ve also learned that many of the assumptions we made about performance accuracy didn’t hold up to scrutiny — female engineers had just as accurate a view of their own skills as male ones, and engineers who had led more interviews or were well known on GitHub weren’t particularly better at gauging their performance.

What does this mean for the industry as a whole? First off, impostor syndrome appears to be the bleary-eyed monster that attacks regardless of gender, ability, location, or fame. Seniority does help mitigate some of the pain, but impostor syndrome affects everyone, regardless of who they are or where they’re from. So, maybe it’s time for a kinder, more empathetic interviewing culture. And a culture that’s kinder to everyone, because though marginalized groups who haven’t been socialized in technical interviewing are hit the hardest by shortcomings in the interview process, no one is immune to self-doubt.

We’ve previously discussed what makes someone a good interviewer, and empathy plays a disproportionately large role. And we’ve seen that providing immediate post-interview feedback is really important for keeping candidates from dropping out. So, whether you’re motivated by kindness and ideology or cold, hard pragmatism, a bit more kindness and understanding toward your candidates is in order.

Cat Hicks, the author of this guest post, is a researcher and data scientist with a focus on learning. She’s published empirical research on learning environments, and led research on the cognitive work of engineering teams at Google and Travr.se. She holds a PhD in Psychology from UC San Diego.

1Self-assessment has been explored in a number of domains, and often used to measure learning. One important criticism is that it’s highly impacted by people’s motivation and emotional state at the time of asking. See: Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. N. (2010). Self-assessment of knowledge: A cognitive learning or affective measure?. Academy of Management Learning & Education, 9(2), 169-191.

2Designing a good technical interview is no small task on the interviewer side. For an informal discussion of this, see this post.

3For some anecdotal conversation about interview self-assessment, see this one

4E.g., this article and this one.

5Some examples of further reading in social science research:
Good, C., Rattan, A., & Dweck, C. S. (2012). Why do women opt out? Sense of belonging and women’s representation in mathematics. Journal of personality and social psychology, 102(4), 700.
Master, A., Cheryan, S., & Meltzoff, A. N. (2016). Computing whether she belongs: Stereotypes undermine girls’ interest and sense of belonging in computer science. Journal of Educational Psychology, 108(3), 424.

6One complication for our dataset is the representation of experienced female engineers: we simply didn’t have very many, which is true to the demographics of the tech industry, but also means that selection biases in the small group of experienced female engineers we do have are more likely to be present, and this isn’t the be-all and end-all of exploring for group differences. We’d like to continue looking at interviews with female participants to explore this fully.

7These effects and the previous non-findings were all explored in a linear mixed model. Significant results for the individual effects are all p < .05.

8Experienced engineers have an average skew of -.14; Junior engineers have an average skew of -.22, New Grads have an average skew of -.25.

9See also: https://insights.dice.com/2018/03/19/imposter-syndrome-tech-pros-age/

10Another wrinkle with the design behind this data is that there’s a floor and a ceiling on the scale: people who always score a 4, for example, can’t ever overrate themselves, because they’re already at the top of the scale. We dealt with this a couple of ways: by excluding people at the floor and ceiling and re-running analyses on the middle subset, and by binning skew into either accurate or not and looking at that. The findings hold up across this.


Exactly what to say when recruiters ask you to name the first number… and other negotiation word-for-words

Posted on August 16th, 2018.

There are a lot of resources out there that talk about salary negotiation, but many tend to skew a bit theoretical. In my experience, one of the hardest things about negotiating your salary is knowing what to say in tough, ambiguous situations where the power balance isn’t in your favor. What’s OK? What’s rude? What are the social norms? And so on.

Before I started interviewing.io, I worked as a software engineer, an in-house recruiter, and an agency recruiter, so I’ve literally been on all sides of the negotiating table. For the last few years, I’ve been guest-lecturing MIT’s 6.UAT, a class about technical communication for computer science majors. Every semester, negotiation is one of the most-requested topics from students. In this post, I’m sharing the content of that lecture, which is targeted toward students but has served seasoned industry folks just as well. You’re never too young or too old to advocate for yourself.

Btw, if you don’t like reading and prefer long, rambly diatribes in front of an unsightly glass wall, I covered most of this material (and other stuff) in a webinar I did with the fine people at Udacity (where I used to run hiring) a few months ago. So, pick your poison.

Why negotiate at all, especially if I’m junior?

If you’re early in your career, you might say that negotiation isn’t worth the hassle — after all, junior roles have pretty narrow salary bands. There are a few reasons this view is short-sighted and wrong. First, though it’s pretty unlikely in the grand scheme of things, if you’re applying to a startup, there might come a magical day when your equity is worth something. This is especially true if you’re an early employee — with a good exit, a delta of a few tenths of a percent might end up being worth a down payment on a home in San Francisco.

But, let’s get real, your equity is likely worthless (except interviewing.io’s equity… that’s totes gonna be worth something), so let me give you a better, more immediate reason to learn to haggle early in your career, precisely because that’s when the stakes are low. Humans are frighteningly adaptable creatures. Scared of public speaking? Give 3 talks. The first one will be gut-wrenchingly horrific, the stuff of nightmares. Your voice will crack, you’ll mumble, and the whole time, you’ll want to vomit. The next one will be nerve-wracking. The last one will mostly be OK. And after that, you’ll be just fine. Same thing applies to approach anxiety, mathematical proofs, sex, and, you guessed it, salary negotiation!

So, make all the awkward, teeth-cringing mistakes now, while the stakes are low and failure will only cost you $5K or $10K a year. Because the further along you get in your career, the bigger the upside will be… and the bigger the downside will be for not negotiating. Not only will the salary bands be wider for senior roles, but as you get more senior, more of your comp comes from equity, and equity has an even wider range for negotiating. Negotiating your stock well can make 6-figure differences and beyond (especially if you apply some of these same skills to negotiating with investors over term sheets, should you ever start your own company)… so learn these skills (and fail) while you’re young, because the older you get, the harder it’s going to be to start and the more high-stakes it’s going to be.

So, below, as promised, I’ll give you a few archetypal, stress-inducing situations and what to say, word-for-word in each one. But first, let me address the elephant in the room…

Will my offer be rescinded if I try to negotiate?

As I mentioned earlier, this blog post is coming out of a lecture I give at MIT. Every semester, I start the negotiation portion of the lecture with the unshakeable refrain that no one will ever rescind your offer for negotiating. Last semester was different, though. I was just starting to feel my oats and get into my talk (the negotiation piece comes about halfway through) and smugly recited the bit about offers never being rescinded, followed by my usual caveat… “unless you act like a douche while negotiating.” Then, a hand shot up in the back of the room. Ah ha, I thought to myself, one of the non-believers. Ready to placate him, I called on the gentleman in the back.

“My offer got rescinded for negotiation.”

The class broke out into uproarious laughter. I laughed too. It was kind of funny… but it was also unnerving, and I wanted to get to the bottom of it.

“Were you a giant jerk when you negotiated?”

“Nope.” Shit, OK, what else can I come up with…

“Were you applying at a really small company with maybe one open role?” I asked, hoping against hope that he’d say yes.

“Yes.”

“Thank god.”

So, there’s the one exception I’ve found so far to my blanket statement. Back when I was still a recruiter, after working with hundreds and hundreds of candidates, I had never heard of or seen an offer being rescinded (and none of my candidates acted like douches while negotiating, thank god)… until then. So, if you’re talking to a super small company with a single role that closes as soon as they find someone, then yes, they might rescind your offer if you negotiate.

But, to be honest, and I’m not just saying this because I was wrong in front of hundreds of bloodthirsty undergrads, an early startup punishing a prospective employee for being entrepreneurial is a huge red flag to me.

OK, so, now on to the bit where I tell you exactly what to say.1

What to say when asked to name the first number

There will come a time in every job search when a recruiter asks you about your compensation expectations. It will likely happen very early in said search, maybe even during the first call you ever have with the company.

I think this is a heinous, illaudable practice fraught with information asymmetry. Companies know their salary ranges and roughly what you’re worth to them before they ever talk to you (barring phenomenal interview performance that kicks you into a different band). And they know the cost of living in your area. So they already have all the info they need, while you have none about them, the role, or even your market value. Sure, there are extenuating circumstances where you really are too expensive (e.g. you’re an L6 at Google talking to an early-stage startup that can only afford to pay you $100K a year in base), but honestly, even in that situation, if the job is cool enough and you have the savings, you might take it anyway.

So, basically, telling them a number will only ever hurt you, never help you. Don’t do it. Now, here’s exactly what to say when you’re asked to name the first number.

At this point, I don’t feel equipped to throw out a number because I’d like to find out more about the opportunity first – right now, I simply don’t have the data to be able to say something concrete. If you end up making me an offer, I would be more than happy to iterate on it if needed and figure out something that works. I also promise not to accept other offers until I have a chance to discuss them with you.

TADA!

What to say when you’re handed an exploding offer

Exploding offers, in my book, are the last bastion of the incompetent. The idea goes something like this… if we give a candidate an aggressive deadline, they’ll have less of a chance to talk to other companies. Game theory for the insipid.

Having been on the other side of the table, I know just how arbitrary offer deadlines often are. Deadlines make sense when there’s a limited number of positions and applicants all come in at the same time (e.g. internships). They make no sense in this market, where companies are perpetually hiring; the deadline is an entirely artificial construct. Joel Spolsky, co-founder of Stack Overflow and Trello, had something particularly biting to say about exploding offers many years ago (the full post, Exploding Offer Season, is really good):

“Here’s what you’re thinking. You’re thinking, well, that’s a good company, not my first choice, but still a good offer, and I’d hate to lose this opportunity. And you don’t know for sure if your number one choice would even hire you. So you accept the offer at your second-choice company and never go to any other interviews. And now, you lost out. You’re going to spend several years of your life in some cold dark cubicle with a crazy boss who couldn’t program a twenty out of an ATM, while some recruiter somewhere gets a $1000 bonus because she was better at negotiating than you were.”

Even in the case of internships, offer deadlines need not be as aggressive as they often are, and I’m happy to report that many college career centers have taken stands against exploding offers. Nevertheless, if you’re not a student or if your school hasn’t outlawed this vile practice, here’s exactly what to say if it ever happens to you.

I would very much appreciate having a bit more time. I’m very excited about Company X. At the same time, choosing where I work is extremely important to me. Of course, I will not drag things out, and I will continue to keep you in the loop, but I hope you can understand my desire to make as informed of a decision as possible. How about I make a decision by…?

The reverse used car salesman… or what to say to always get more

At the end of the day, the best way to get more money is to have other offers. I know, I know, interviewing sucks and is a giant gauntlet-slog, but in many cases, having just one other offer (so, I don’t know, spending a few extra days of your time spread over a few weeks) can get you at least $10K extra. It’s a pretty rational, clear-cut argument for biting the slog-bullet and doing a few more interviews.

One anecdote I’ll share on the subject goes like this. A few years ago, a close friend of mine, who’s notoriously bad at negotiation and hates it with a passion, was interviewing at one of the big 4 companies. I was trying to talk him into getting out there just a little bit, for the love of god, and talking to at least one more company. I ended up introducing him to a mid-sized startup, where he quickly got an onsite interview. Just mentioning that onsite to his recruiter at the bigco got him an extra $5K in his signing bonus.

Offers are, of course, better than onsites, but in a pinch, even onsites will do… because every onsite increases your odds of not accepting the offer from the company you’re negotiating with. So, let’s say you do have some offers. Do you reveal the details?

The answer is that it depends. If the cash component of your other offers is worth more than the cash in the offer you’re negotiating, reveal the details. If they’re worth more in total but less in cash, it’s a bit dicier, because equity at smaller companies is close to worthless on paper… you can still use it as leverage if you tell the story that that equity is worth more to YOU, but that takes a bit more finesse, so if you’ve never negotiated before, you might want to hold off.

If the cash component of your other offers isn’t worth more, it’s sufficient to say that you have offers, and when pressed, you can simply decline to share the details (it’s OK not to share them).
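If it helps to see that branching spelled out, here’s a minimal sketch of the decision above. The function name, parameters, and structure are just my framing of the advice, not a real formula:

```python
def should_reveal(their_cash: float, other_cash: float,
                  other_total: float, experienced_negotiator: bool) -> str:
    """Sketch of the reveal-or-not logic. `their_cash` is the cash component
    of the offer you're negotiating; `other_cash`/`other_total` describe your
    best competing offer. Purely illustrative."""
    if other_cash > their_cash:
        # Competing cash beats theirs: revealing can only help.
        return "reveal the details"
    if other_total > their_cash and experienced_negotiator:
        # Bigger on paper but lighter in cash: takes finesse to sell.
        return "reveal, but sell the story that the equity matters to YOU"
    # Otherwise, leverage the existence of offers without the specifics.
    return "say you have offers, but decline to share the details"
```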

But whether you reveal details or not, here’s the basic formula for getting more. See why I call it the reverse used car salesman?

I have the following onsites/offers, and I’m still interviewing at Company X and Company Y, but I’m really excited about this opportunity and will drop my other stuff and SIGN TODAY if…

So, “if” what? I propose listing 3 things you want, which will typically be:

  • Equity
  • Salary
  • Signing/relocation bonus

The reason I list 3 things isn’t that I expect you to get all 3; it’s that this way, you’re giving the person you’re negotiating with some options. In my experience, you’ll likely get 2 out of the 3.

So, what amounts should you ask for when executing the reverse used car salesman? It’s usually easier to get more equity and a bigger bonus than a higher salary (equity is taxed differently from the company’s perspective, and a signing bonus is a one-off rather than a cost that repeats every year). Therefore, it’s not crazy to ask for 1.5X-2X the equity and an extra 10-15% in salary. For the bonus portion, a lot depends on the size of the company, but if you’re talking to a company that’s beyond seed stage, you can safely ask for at least 20% of your base salary as a signing bonus.2
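To make those ranges concrete, here’s what the ask looks like on a hypothetical offer. The starting numbers are invented; the multipliers are the ones above:

```python
# A hypothetical starting offer; every number here is made up.
offer = {"salary": 150_000, "equity_shares": 10_000, "signing_bonus": 0}

ask = {
    "salary": round(offer["salary"] * 1.10),               # +10% (up to +15%)
    "equity_shares": round(offer["equity_shares"] * 1.5),  # 1.5x (up to 2x)
    "signing_bonus": round(offer["salary"] * 0.20),        # ~20% of base
}

print(ask)
# {'salary': 165000, 'equity_shares': 15000, 'signing_bonus': 30000}
```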

What if the company says no to all or most of these and is a big enough brand that you don’t have much of a leg to stand on? You can still get creative. One of our users told me about a sweet deal he came up with: he said he’d sign that day if he got to choose which team he joined (he had a specific team in mind).

Other resources

As I mentioned at the beginning of this post, there are plenty of blog posts and resources on the internets about negotiation, so I’ll just mention two of my favorites. The first is a riveting, first-hand account of negotiation adventures from one of my favorite writers in this space, Haseeb Qureshi. In his post, Haseeb talks about how he negotiated a $250K (total package) offer from Airbnb and what he learned along the way. It’s one of the most honest and thoughtful accounts of the negotiation process I’ve ever read.

The second post I’ll recommend is a seminal work in salary negotiation by Patrick McKenzie (patio11 on Hacker News, in case that’s more recognizable). I read it back when I was still an engineer, and it was one of those things that indelibly changed how I looked at the world. I still madly link anyone and everyone who asks me about negotiation to this piece of writing, and it’s still bookmarked in my browser.

If you’re an interviewing.io user and have a job offer or five that you’re weighing and want to know exactly what to say when negotiating in your own nuanced, unique situation, please email me, and I’ll whisper sweet, fiscal nothings in your ear like a modern-day Cyrano de Bergerac wooing the sweet mistress that is capitalism.3

1If you’re interviewing at interviewing.io, USE THESE ON ME. IT’LL BE GREAT.

2Some of the larger tech companies offer huge signing bonuses to new grads (~$100K). Obviously, this advice is not for that situation.

3An increasing number of our customers pay us on subscription, so we don’t make more money if you negotiate a higher salary.4 And for the ones who don’t, salaries and recruiting fees typically come out of different budgets.

4In the early days of interviewing.io, we tried to charge a flat per-hire fee in lieu of a percentage of salary, precisely for this reason: we wanted to set ourselves up as an entirely impartial platform where alignment with our candidates’ best interests was codified into our incentive structure. Companies were pretty weirded out by the flat fee, so we went back to doing percentages, but these days we’re moving as many of our customers as possible over to subscription. It’s cheaper for them, better for candidates, and, I won’t lie, I like seeing that recurring revenue.