

I’ve conducted over 600 technical interviews. Here are 5 common problem areas I’ve seen.

Posted on October 15th, 2020.

Hey, Aline (founder) here. This is the second post in our Guest Author series. The first post talked about red flags you might encounter while interviewing with companies. Complementarily, this post, authored by one of our prolific, long-time interviewers, explores common missteps that interviewees make.

One of the things I’m most excited about with the Guest Author series is the diversity of opinions it’s bringing to our blog. Technical interviewing and hiring is fraught with controversy, and not everything these posts contain will be in line with my opinions or the official opinions of the company. But that’s what’s great about it. After over a decade in this business, I still don’t think there’s a right way to conduct interviews, and I think hiring is always going to be a bit of a mess because it’s a fundamentally human process. Even if we don’t always agree, I do promise that the content we put forth will be curated, high quality, and written by smart people who are passionate about this space.

If you have strong opinions about interviewing or hiring that you’ve been itching to write about, we’d love to hear from you. Please email me to get started.

William Ian Douglas goes by “Ian”, and uses he/him pronouns. He lives in the Denver, Colorado region and graduated from a Computer Engineering program in 1996. His career spans back-end systems, API architecture, DevOps/DBA duties, and security; he has been a team lead managing small teams and a Director of Engineering. Ian branched out into professional technical interview coaching in 2014, and in 2017 pivoted his entire career to teaching software development for the Turing School of Software & Design in the Denver area. He joined the platform as a contract interviewer in the summer of 2017 and is a big fan of the data analytics blog posts that IIO produces to help expose and eliminate bias in our tech industry interviews. Ian writes technical coaching information online, and you can reach him on Twitter, LinkedIn, and GitHub.

I recently conducted my 600th interview on the platform (IIO). I’d like to share lessons learned, explain why I approach interviews the way that I do, and shed some light on common problem areas I see in technical interviews. Every interviewer on the platform is different, so your results may vary. We have some excellent folks helping out on the platform, and a wonderful community working to better ourselves.

The Mock Interview

During our interviews on IIO, we rate people on three 4-point scales. A score of 1 means they did extremely poorly, and a 4 means they did extremely well in that category. I typically start my interviews with everyone at 3 out of 4 points in each category; candidates then earn or lose points as the interview goes on.

Every interviewer on the platform will have some aspect that they favor over others. My own bias as an interviewer tends to be around communication and problem solving, which I’ll point out below.

Technical Proficiency

In this category, I grade candidates on how proficient they seem in their language of choice, whether they had significant problems coding an algorithm of a particular style, and whether I needed to give a lot of hints during coding.

Problem Solving

Here, I grade candidates on how well they break the problem into smaller pieces, come up with a strategy for solving those smaller problems, and debug issues along the way. The ability to think through problems while debugging is just as important as writing the code in the first place. Are they stumped when a problem happens, or are they able to find the root cause on their own?


Communication

Interviewers really want to hear your decision-making process. This is also very important when debugging code. I tend to hire folks who would fit in well on smaller teams or clusters of developers. With that in mind, collaboration and easy communication are a good way to win me over.

Common Problem Areas I See in Interviews

Here are the top problem areas I see in interviews, not just on IIO, but in general. I hope you find this advice helpful.

Common Problem Area 1: Jumping into code too soon

I see this in developers of all types and levels, but mostly in the “intermediate” level of 2-5 years of experience. They hear a problem, talk about a high-level design for 30 seconds or less, and are eager to get coding. They feel like they’re on a timer. They want to rush to get things finished. It’s a race to the finish line. First one across the finish line is the winner.



Please, slow down. Plan your work. And share your thought process along the way.

People who take time to think out a mid-level design, whether that’s pseudocode or just writing out notes on their approach, tend to spend less time debugging their code later. Folks who jump right into coding fall into what I call “design-as-you-go” problems, where you spend lots of time refactoring your code because you need to change a parameter or a return value, or wait, that loop is in the wrong place, etc. This is very easy to spot as an interviewer.
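To make the contrast concrete, here is a small, hypothetical sketch (the problem, names, and plan are my own invention, not from any actual interview): the mid-level plan is written out as comments first, and the code then just fills it in. If a flaw turns up in the plan, you fix a comment, not a pile of code.

```python
def most_frequent_word(text):
    # Mid-level plan, written before any real code:
    #   1. Normalize: lowercase the text and split it on whitespace.
    #   2. Tally: count occurrences of each word in a dict.
    #   3. Select: return the word with the highest count.
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return max(counts, key=counts.get)
```

The plan doubles as the narration you share with your interviewer before you type a line of code.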

Spending some time on mid-level design doesn’t guarantee success, but thinking through your plan a little more deeply can save you time in the long run, and the time you save can be used to fix problems later.

Also, as an interviewer, I want to see you succeed, especially if you’re “on-site” (in person, or remote nowadays), because an on-site interview process costs the company a lot more money. While I need to be fair to all candidates in the amount of help I can give, if I can see your design ahead of time and spot a flaw in that design, I can ask leading questions to guide you to the problem and correct your approach earlier.

If you jump straight into code, I have no idea if your implementation is even going to work, and that’s not a great place to put your interviewer. It’s much harder for me to correct a design when you have 100 lines of Java code written before I really understand what’s going on in your code.

I saw this lack of planning backfire in a horrible way in a real interview in 2012. The candidate was brought to my interview room by someone in HR, asked if they would like a bottle of water, and promised the water would arrive shortly. We introduced ourselves and got down to the technical challenge. The candidate shared no details, no design, barely talked about a high-level approach, wrote nothing down, and started writing code on a whiteboard. (This was the second-to-last whiteboard interview I ever conducted; I hate whiteboard interviews!) HR showed up a few minutes later, knocking loudly on the door, offering the bottle of water and leaving. The candidate, grateful for a drink, uncapped the bottle and started to take a sip when this awful, draining look came over their face. The distraction of delivering a bottle of water made them completely lose their train of thought, and I couldn’t help them recover because they hadn’t shared any details with me about their approach. They spent several minutes re-thinking the problem and starting over.

On the “other side” of this coin, however, you can spend “too long” on the design stage and run out of time to implement your genius plan. I’ve seen candidates talk through a mid-level design, then write notes, then manually walk through an example with those notes to really make sure their plan is a good one, and now they only have a few minutes left to actually implement the work. Extra points on communication, maybe, but we need to see some working code, too.

So what’s the best approach here?

I typically recommend practicing until you spend about 5 minutes thinking through high-level design choices, 5 minutes to plan and prove the mid-level design, and then get to work on code. The good news here is that “practice makes better” — the more you practice this design break-down and problem solving, the better you’ll get. More on this later.

Common Problem Area 2: Communicating “Half-thoughts”

This is a term I’ve coined over the years, where you start to say a thought out loud, finish the thought in your head, and then change something about your code. It usually sounds something like this:

“Hmm, I wonder if I could … … … no, never mind, I’ll just do this instead.”

Back to my bias for communication.

Interviewers want to know what’s going on in your thought process. It’s important that they know how you’re making decisions. How are you qualifying or disqualifying ideas? Why are you choosing to implement something in a particular way? Did you spot a potential problem in your code? What was it?

This missing information is a hidden treasure for your interviewer. It takes mere seconds to change your communication to something more like this:

“I wonder if … hmm … well, I was thinking about implementing this as a depth-first-search, but given a constraint around ___ I think a better approach might be ___, what do you think?”

That took maybe 2 or 3 extra seconds, and you’ve asked for my opinion or buy-in, we can consider possibilities together, and now we’re collaborating on the process. You already feel like my future coworker!

Common Problem Area 3: Not asking clarifying questions

An interview challenge I often ask as a warm-up question goes something like this: 

You have a grouping of integer numbers. Write a method that finds two numbers that add up to a given target value, stops immediately, and reports those numbers. Return two ‘null’ values if nothing is found.

This is a great question that shows me how you think about algorithms and the kinds of assumptions you make when you hear a problem.

I’ve been coding for a pretty long time. Since 1982, actually. There’s no data structure called “a grouping” in any language I’ve ever used. So what assumptions are you going to make about the problem?

Most candidates immediately assume the “grouping” of numbers is in an array. You can successfully solve this problem by using an array to store your numbers. Your algorithm will likely be O(n^2) (n-squared) because you’ll be iterating over the data in a quadratic way: for each value, iterate through the rest of the values. There’s a more efficient way to solve this in O(n) time by choosing a different data structure.
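As a sketch of that more efficient approach (one common way to do it, not the only one), you can trade memory for speed by remembering values you’ve already seen in a set; the function name and return convention here are my own:

```python
def find_pair(numbers, target):
    # One pass: for each value, check whether its complement
    # (target - value) has already been seen. O(n) time, O(n) space.
    seen = set()
    for n in numbers:
        complement = target - n
        if complement in seen:
            return complement, n  # stop immediately and report the pair
        seen.add(n)
    return None, None  # nothing found: two 'null' values
```

Set membership checks are constant time on average, which is exactly what the nested loop in the array version lacks.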

Go ahead and ask your interviewer questions about the problem. If they tell you to make your own assumptions, that’s different, but state your assumptions and ask if they’re good ones. Ask if there are alternate data sets that you’ll be using as test cases which could impact your algorithm.

Common Problem Area 4: Assuming your interviewer sets all the rules

Wait, what?

Yeah, you read that right.

Yes, you’re there for the interview, but you’re there to show them how you’ll work on the team, and teams work best when there is clear, open communication and a sense of collaboration. Spend the first few minutes of the interview setting expectations, especially around communication and work process.

There’s nothing wrong with having this kind of chat with your interviewer: “My typical work process in a technical challenge like this is to spend a minute or two thinking quietly about the problem and writing down notes, I’ll share those thoughts with you in a moment to get your input. Then, while I code, I tend to work quietly as well, but I’ll be sure to pause now and then to share my thought process as I go, and then walk you through the code more thoroughly before we run it the first time. Would that be okay with you, or do you have different expectations of how you’d like me to communicate or work through the problem?”

I promise you’ll blow their mind. Most interviewers won’t be ready for you to take their expectations into consideration like this. It shows that you’ll work well on a team. You’re setting the environment where you’re advocating for yourself, but also being considerate of others. You’re stating your intentions up front, and giving them the opportunity to collaborate on the process.

Common Problem Area 5: Not asking for help sooner

As your interviewer, I have a small amount of help that I’m able to provide during a technical challenge. I can’t coach you through everything, obviously, but I’d rather give you a hint, deduct a point on a rubric, and see you ultimately succeed at the problem than watch you struggle silently, spin in circles, and make us both feel like the interview is a waste of time.

As a professional interviewer and an instructor at a software school, I’ve become pretty good at asking leading questions to guide you to a realization or answer without me giving you the solution.

It’s okay to admit when you’re stuck. It doesn’t make you a failure; it makes you human. Let your interviewer know what you’re thinking and where you’re having problems. Listen very carefully to their response; they might be offering a clue to the problem, or might give you more thorough advice on how to proceed.

My Favorite Resources to Share

When our interviews at IIO are over, I like to dive into a lot of feedback on the candidate’s process and where I think they could use extra practice to improve. Generally, I spend 10 to 20 minutes, sometimes going way beyond my one-hour expected time slot, answering questions and going into more detail on things. I LOVE to help people on IIO.

Here are a few common areas of advice I offer to folks.


There’s nothing worse than listening to your own recorded voice. But all IIO interviews are recorded, and I often tell folks in my feedback, and in the review notes I type up afterward, to listen to the last few minutes of the interview recording to review the feedback I give them. You can also pause those recordings and grab a copy of your code at any time. (These recordings are, of course, private to you and your interviewer.)

During the playback, listen to your own thought process and how you communicate your ideas. As you work through other challenges, find a way to record yourself talking through the problem out loud if possible, and play that back for yourself. You’ll get better at articulating full and complete thoughts.

Problem Solving and Mid-Level Design

The more common practice sites like HackerRank, CodeWars, LeetCode, etc., are great for writing a coded algorithm, but they don’t give you any way to exercise your design process.

I send my students to Project Euler. Euler was a mathematician, so the problems on the website are generally pretty math-heavy, but you can change the problems to be whatever you’re comfortable building. If you don’t know how to calculate a prime number, that’s fine; swap that out for checking whether a number is evenly divisible by 17 or something instead.

I like Project Euler because the challenges there are just word problems. You have to think of everything: the algorithm, which data structure(s) to use, and especially how to break the problem into smaller pieces.

One of my favorite problems is #19 in their archive: counting how many months between January 1901 and December 2000 began on a Sunday. They give you the number of days in each calendar month, tell you that January 1st, 1900 was a Monday, and explain how to calculate a leap year. The rest is up to you.
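For reference, a straightforward sketch in Python can lean on the standard library’s datetime module. Hand-rolling the calendar arithmetic from the givens is the part the problem actually wants you to practice, so treat something like this only as a way to check your own work (the function name is mine; the published problem’s range runs through December 2000):

```python
from datetime import date

def sundays_on_first(start_year, end_year):
    # Count months whose first day is a Sunday.
    # In Python, date.weekday() returns Monday=0 ... Sunday=6.
    return sum(
        1
        for year in range(start_year, end_year + 1)
        for month in range(1, 13)
        if date(year, month, 1).weekday() == 6
    )
```

The interesting design work in the real exercise is deciding how to track the running day-of-week month by month, using only the month lengths and the leap-year rule the problem gives you.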

The more you expose yourself to different types of problems, the better you’ll get at spotting patterns.

Practice, Practice, Practice

One piece of advice we give our students is to practice each technical challenge several times. Our executive director, Jeff Casimir, tells students to practice something 10 times. That feels like a big effort. I aim more for 3 to 4 times, and here’s my reasoning:

The first time you solve a problem, all you’ve done is solve the problem. You might have struggled through certain parts, but your only real achievement here is finishing.

If you erase your work and start it a second time, you might think of a different approach to solving the problem, maybe a more efficient solution. Maybe not, but at least you’re getting practice with this kind of problem.

Now erase your work and do it a third time. Then a fourth time. These are the times when you will start to actively build a memory of the strategy it takes to solve this particular problem. This “muscle memory” will help you when you see other technical challenges, where you’ll start to spot similarities. “Oh, this looks like the knapsack problem” — and because you’ve solved that several times, the time you take on high-level and mid-level design shortens quite a lot.

One of my favorite technical challenges can be solved using a handful of different algorithms (DFS, BFS, DP, etc). If you think you can solve a problem in a similar fashion, solve it 3 or 4 times with each of those algorithms as well. You’ll get REALLY good at spotting similarities, and have a great collection of strategies to approach other technical problems.

Shameless Self-Promotion

I’ve been writing up notes for aspiring new developers. It’s not complete, but I have a lot of my own thoughts on preparing for technical interviews, networking and outreach, resumes and cover letters, and so on. I still have a few chapters to write about negotiation tactics and graceful resignations, but I’m happy to take feedback from others on the content.

I also have a daily email series covering several kinds of interview questions, but not from the perspective of how to answer the questions perfectly; there are plenty of resources out there for that. Instead, I examine questions from an interviewer’s perspective — what am I really asking, what do I hope you’ll tell me, what do I hope you won’t say, and so on. A small preview, for example: when you’re asked “Tell me about yourself”, they’re not really asking for your life story. They’re really asking, “Tell me a summary of things about you that will make you a valuable employee here”.



6 red flags I saw while doing 60+ technical interviews in 30 days

Posted on September 15th, 2020.

Hey, Aline (founder) here. We’re trying something new. Up till now, all posts on this blog have been written by employees or contractors. Why? Frankly, it’s hard to find great content in the recruiting space. There’s so much fluff and bad advice out there, and we didn’t want any part of that.

The other day though, I was reading Hacker News and saw an article by Uduak Obong-Eren about how he did over 60 technical interviews in 30 days and what he learned from that gauntlet of an experience. I thought it was honest, vulnerable, well-written, and brimming with actionable advice. So, I reached out to him to see if he’d want to write something else. Fortunately, he did, and the article below is the inaugural post in what I hope will become our Guest Author series. You can read more about Uduak in the bio below.

A quick note because this is the first time we’re doing this. One of the things I’m most excited about with this new Guest Author series is the diversity of opinions it will bring to our blog. Technical interviewing and hiring is fraught with controversy, and not everything these posts contain will be in line with my opinions or the official opinions of the company. But that’s what’s great about it. After over a decade in this business, I *still* don’t think there’s a right way to conduct interviews, and I think hiring is always going to be a bit of a mess because it’s a fundamentally human process. Even if we don’t always agree, I do promise that the content we put forth will be curated, high quality, and written by smart people who are passionate about this space.

Now, off we go!

Uduak Obong-Eren is a Software Engineer based in the San Francisco Bay Area who is passionate about architecting and building scalable software systems. He has about five years of industry experience and holds a Masters in Software Engineering from Carnegie Mellon University. He is also an open source enthusiast and writes technical articles. He especially enjoys conducting free mock technical interviews to help folks get better at technical interviewing. You can follow him on Twitter @meekg33k.

What is the one thing you would look out for if you had to join a company?

Sometime between January and February 2020, I wanted to change jobs and was looking to join a new company. This, among other reasons, led me to embark on a marathon of technical interviews: 60+ technical interviews in 30 days.

Doing that many interviews in such a short time meant I had an interesting mix of experiences from the various companies I interviewed with, each with a unique culture and values that often reflected, intentionally or not, in the way their interviews were conducted.

In this article, I will be sharing some of the red flags I observed while I was on this marathon of technical interviews. I will not be mentioning names of any companies because that’s not the intent behind this article. 

The goal of this article is also not to make you paranoid and on the hunt for red flags in your next interview; far from it. Rather, the goal is to equip you with the knowledge to identify the same or similar red flags in your next interview, and hopefully identifying them will set you up to handle them better.

Even though the stories I’ll be sharing come from my marathon of technical interviews, these red flags do not apply only to technical interviews. They apply to all kinds of interviews and so there’s a lot to learn here for everyone.

The Red Flags

Your interviewer is only open to solving the problem ONE way

In the world of computing and in life generally, for any given problem, there is typically more than one way to solve that problem. For example, given a sorting problem, you could solve it using a merge-sort algorithm or a heap sort algorithm. 

Having this rich set of techniques to solve a problem makes things even more interesting, and the general expectation in technical interviews is that you have the flexibility to solve a problem using your preferred technique.

I had an interview where the interviewer asked me to solve an algorithmic problem. I had started solving the problem using a specific technique when the interviewer stopped me in my tracks and asked that I use another technique. 

When I probed a bit further to know why, it appeared that the reason he asked me to use the second technique wasn’t to test my knowledge of it; it was because he was more ‘comfortable’ with that approach.

It is different if the interviewer wants to test your knowledge of something very specific. For example, given a problem that can be solved using iteration and recursion, the interviewer may want to test your knowledge of recursion and can ask you to solve the problem recursively. That wasn’t the case here.

I ended up using both techniques and discussed the trade-offs, but frankly that experience left a bad taste in my mouth, especially because that interview was with the hiring manager — my would-be manager, someone who can significantly influence your career growth and trajectory.

Undue pressure to accept an offer letter

It’s very exciting and fulfilling when you go through all preliminary stages of a technical interview, through to the onsite interview (remote or in-person) and then you receive that “Congratulations <insert name here>, we are pleased to offer you…” email.

However, that excitement often becomes short-lived when there is some form of pressure from your soon-to-be employer to accept the offer. It’s a bit more manageable when the pressure comes from someone in HR or the recruiter, but when it’s from the hiring manager, that can be harder to manage.

That was the case for me when I interviewed with a startup based in Palo Alto. They were a small company in terms of staff strength. My onsite interview with them had gone quite well. I had a good conversation with the hiring manager and an even better conversation with the VP of Engineering, so much so that I could tell I was going to be extended an offer. I asked how long I would have to accept the offer letter, and I was told seventy-two hours.

The offer letter arrived later that evening, and it looked great — a six-figure offer definitely didn’t seem like a bad start. I was also at the final stage of the interview process with other companies too and thankfully, I had enough time to negotiate and accept the offer, or so I thought. 

Then the pressure started: incessant calls from the hiring manager and the VP of Engineering, back-to-back emails, all within the allotted time. The pressure was so intense that it got to a point where I wasn’t sure I wanted to negotiate the offer anymore. I turned down the offer.

I turned down the offer because the experience got me thinking about the company’s work culture. Were the methods employed by the company to get me to accept the offer indicative of their work culture? If they needed to get something done, how far would they go? 

Now don’t get me wrong: yes, the company wants to employ you, and yes, the recruiting team wants to ‘close the deal’. However, it’s very important to pay attention to how the company does this. Do they remain professional about it?

A company’s values go beyond what it says; they show in what it does and how it does it.

Not enough clarity about your role

Among the many reasons why you would join a company is your desire to be involved in valuable work. I had the opportunity to join a US company based in Boulder, Colorado. They had contracted a recruiting agency to help them find someone to fill a Software Engineer position in their firm. 

The hiring process started with an exploratory interview with the recruiting agency, closely followed by a second interview with a recruiter from the company. In both interviews, I couldn’t get a clear sense of what my specific role would be — what team I would be on, what kinds of projects I’d be working on, what the career growth pathway was, etc.

I understand that sometimes companies can be going through restructuring, but that didn’t seem to be the case here. It seemed more like the company was focused on completing their headcount. Even though there’s nothing wrong with completing a headcount, I think there is everything wrong with not having a clear purpose for a role for a couple of reasons:

  • It means the role may not be critical to the company’s core business.
  • If the role isn’t that important, your position may be impacted when a layoff comes.

On a more personal note, I don’t want to be just a number. I want to work at a place where I have the opportunity to contribute in an impactful way, and I like to believe you would too. So it’s important to get clarity about your role, for where you are today and for future career growth.

Consistent lack of interest or low morale from interviewers

When looking to join a company, one of the things you simply must care about is the team you will end up working on. At least 25% of your waking hours will be spent interacting with that team whether in-person or virtually.

Interviews offer you an opportunity to experience firsthand what it will look like to work with your prospective teammates, especially since, unless you’re interviewing at a huge company, your interviewers are likely to become your teammates.

If, through all the different stages of the interview process, you experience a consistent lack of interest or low morale from your interviewers, you might want to pay attention.

When I experienced that during one of my interviews, I couldn’t exactly tell what the cause was, but I knew something just wasn’t right. After some internal tussle, I decided to trust my gut feelings and ended the interview process with the company. 

Fast forward two months: two of my interviewers (would-be teammates) had left the company and joined another one (no, I wasn’t stalking; I just checked LinkedIn).

Now I’m not saying that during the interview process there won’t be one or two people who, because of their busy schedules, would prefer to be doing something else rather than interviewing. But when none of the interviewers want to be there, you certainly want to pay attention to that.

A lack of interest or low morale could point to any combination of the following:

  • Your prospective team-mates may be experiencing burnout.
  • Some internal dissatisfaction with the company — culture, policies, something, anything.
  • The team isn’t that interested in you (hard pill to swallow?), maybe they don’t see you as a long-term hire.

Or it could be for reasons that I have not included here, but I implore you to not ignore this red flag if you see it in your next interview.

Your interviewers aren’t prepared for the interview

Have you been in an interview before where the interviewer doesn’t seem to have any questions to ask you? Trust me it can get really awkward.

That was my experience during a technical phone-screen interview with an educational technology company based in California. The interviewer wasn’t prepared for our interview and didn’t have any questions at hand. He wasn’t even sure who he was interviewing or what role I was interviewing for. It wasn’t a pleasant experience.

I understand that there are a myriad of reasons why interviewers may not be prepared for an interview, including:

  • Lack of proper planning by the HR/recruiting team.
  • Last-minute changes on the interviewee.
  • Busy schedules for the interviewer.
  • The interviewer just wasn’t prepared.

I typically won’t act on this red flag in isolation. I’ll look for other red flags, trying to form a cluster of patterns, before making any decision.

Lack of a clear direction on where the company is headed

It’s fulfilling to be part of a company that is involved in meaningful work that creates value for its users. Joining such a company means you want to contribute to helping the organization meet its goals. This invariably means the organization must have some goals, right?

I was contacted by a startup based in San Francisco via AngelList. I had a first introductory call with a recruiter from the company, closely followed by a phone screen technical interview. 

In both interviews, even though the interviewers shared some details about the company, there was a lot of vagueness about the company’s direction and where it was headed.

I particularly remember that one question I asked at the time was about how the company would deal with its growing competition. Sadly, the answers I got didn’t seem convincing, and the company was later acquired by that competition.

When you are interviewing to join a company, you are selling more than just your skills; you are also selling yourself and your unique experience. While it’s important to do that, I think it’s equally important that the company be able to sell you on its vision and what it hopes to achieve.

When I think of joining a company, I picture myself in that company for the next 2–5 years. If my vision for where I want to be in my career doesn’t align with the company’s vision, that is a mismatch that shouldn’t be ignored.


We sometimes focus more on securing the job, and even though that is very important, staying fulfilled on the job matters even more. For me, fulfillment meant joining a company that had a clear vision of where it was headed, working in a role that was critical to the company’s business, and being equipped with a lot of growth opportunities.

Hopefully, these red flags I have shared will equip you to make better decisions on what companies you choose to grow your career with. I would generally not advise making a decision based on one or two red flags, but if you see a cluster of red flags, you shouldn’t ignore them. I wish you the best in your career journey.

If you ever need someone to do a mock interview with you, feel free to schedule one here or you can reach out directly to me on Twitter @meekg33k.

And if you’d like a list of things to ask companies while you’re interviewing that may help you identify these red flags (and others!) sooner, take a look at this one.

If you have something to say about your adventures in interviewing or hiring, write a guest post on our blog! Please email me at to get started.



Announcing the Technical Interview Practice Fellowship

Posted on July 23rd, 2020.

I started because I was frustrated with how inefficient and unfair hiring was and how much emphasis employers placed on resumes.

But the problem is bigger than resumes. We’ve come to learn that interview practice matters just as much. The resume gets you in the door, and your interview performance is what gets you the offer. But, even though technical interviews are hard and scary for everyone — many of our users are senior engineers from FAANG who are terrified of getting back out there and coding up the kinds of problems they don’t usually see at work while someone breathes down their neck — interview prep isn’t equitably distributed.

This inequity never really sat right with me (that’s why exists), but when we started charging for interview practice post-COVID, it really didn’t sit right with me.

As you may have read, if you follow the news, COVID-19 turned our world upside down. In its wake, the pandemic left a deluge of hiring slowdowns and freezes. For a recruiting marketplace, this was an existential nightmare — in a matter of weeks, we went from 7-figure revenue to literally nothing. Companies didn’t really want or need to pay for hiring anymore, and we were screwed.

Then, we pivoted and started charging our users, who had previously been able to practice on our platform completely for free (albeit with some strings, more on that in a moment). While this pivot was the right thing to do — without it, we would have had to shut down the company, unable to provide any practice at all — charging people, especially those from underrepresented backgrounds, didn’t sit right with us, and in our last post announcing our model, we made the following promises:

  • We’d ALWAYS have a free tier 
  • We’d immediately start working on a fellowship for engineers from underrepresented backgrounds or in a visa crisis/experiencing financial hardship ← That’s what this post is about!
  • We’d find a way to let people defer their payments

We launched with a free tier, and it’s still there and going strong. We’re still working on deferred payments and are in the thick of user research and price modeling.

But, the rest of this post is about the 2nd promise. To wit, I’m so proud to tell you that we’ve officially launched the first (pilot) cohort of the Technical Interview Practice Fellowship. This cohort will be focused on engineers from backgrounds that are underrepresented in tech. We are acutely aware, of course, that our first cohort couldn’t capture everyone who’s underrepresented, that gender and race aren’t enough, and that we need to do more for our users who can’t afford our price tags, regardless of who they are or where they come from.

Our hope is to expand this Fellowship to anyone who needs it.

We’re also working on the much harder problem of how to navigate the visa situation we’re in right now (different than when we wrote the first post, sadly… but especially important to me, given that I’m an immigrant myself).

What is the Fellowship, and why does it exist?

Before we tell you a little bit about the Fellows in our inaugural cohort and what the Fellowship entails, a quick word about why this matters.

In order to get a job as a software engineer, it’s not enough to have a degree in the field from a top school. However you learned your coding skills, you also have to pass a series of rigorous technical interviews, focusing on analytical problem solving, algorithms, and data structures.

This interview style is controversial, in part because it doesn’t closely resemble the work software engineers do every day, but also because 1) like standardized testing, it’s a learned skill and 2) unlike standardized testing, interview results are not consistent or repeatable — the same candidate can do well in one interview and fail another one on the same day. According to our data, only about 25% of candidates are consistent in their performance from interview to interview, and women quit 7X more often than men after a poor performance.

To account for both of these limitations, the best strategy to maximize your chances of success is to practice a lot so you can 1) get better and 2) accept that the results of a single interview are not the be-all and end-all of your future aptitude as a software engineer and that it’s ok to keep trying.

The main problem created by modern interview techniques is that, despite interview practice being such a critical prerequisite to success in this field, access to practice isn’t equitably distributed. We want to fix this, and we’re well equipped to do so. Based on our data, engineers are twice as likely to pass a real interview after they’ve done 3-5 practice sessions on our platform.

Our Fellows will get these practice sessions completely for free. These will be 1:1 hour-long sessions with senior engineers from a top company who have graciously volunteered their time and expertise. Huge thank you and a big shout-out to them all.

After each session, Fellows will get actionable feedback that will help them in their upcoming job search, and we will be helping Fellows connect with top companies as well.

Note: We’d like to be able to offer even more support – and are actively seeking more partners to do so. Please see the How you can help section below if you or your organization would like to get involved!

Why now?

The world seems to be in a place, now more than ever, to have a conversation about race, gender, socioeconomic, and other kinds of equity in hiring. This is our small part of that conversation.

Who are the Fellows?

After opening up our application process, we received close to 1,000 submissions in a week, and (though it was really, really hard) we culled those down to 56 Fellows.

Our first cohort is:

  • 82% Black, Latinx, and/or Indigenous
  • 53% women
  • 55% senior (4+ years of experience) & 45% junior (0-3 years of experience)

Here are some of their (anonymized) stories. There were a lot of stories like these.

My goal is to keep pressing as well as to share and give to underrepresented communities because the journey in tech can be isolating. Often I am the only one. It is critical that there are more people that look like me that are engineers *and* ascend the leadership ladder.

My parents immigrated from [redacted] to The Bronx without a formal education. I’m the first individual in my household to graduate from college and I’m the only Software Engineer in my family. I grew up in a poor neighborhood where many individuals had limited economic and educational opportunity. I aim to make the path to become a Software Engineer easier for those who were in my situation.

My journey to becoming a software engineer almost never happened. Throughout my undergraduate studies I was faced with having to drop out multiple times, due to the immigration status of my parents…. I was tasked with assisting in my family’s living situation and paying for school. I worked full time and started my own construction company in order to take care of my family and studies. It was always tough having to work 8-10 hours a day and then going to class or doing homework… Becoming a software engineer was always a goal of mine, and realizing that goal was well worth the struggle, given the struggle my parents went through to bring us here in the first place.

I spent 5 years in public education working directly with marginalized communities in the struggle for equity. My journey through software engineering is a continuation of this spirit of advocacy and changemaking. Software engineering is a tool to be put at the service of advocacy.

What can I do to help?

There are a number of ways you can help and get involved!

Help sponsor future Fellowship cohorts & create scholarships for underrepresented engineers!

Every Fellow in this first cohort represents at least 100 more engineers we couldn’t include. We have the tech to scale the hell out of this program; all we need is backing and resources from people or organizations who recognize there’s a need (donations are tax-deductible). Please email if you’d like to get involved or want more information.

Hire through us!

Despite mounting evidence that resumes are poor predictors of aptitude, companies remain obsessed with where people went to school and worked previously. On, software engineers, no matter where they come from or where they’re starting, can book anonymous mock interviews with senior interviewers from top companies. We use data from these interviews to identify top performers much more reliably than a resume, and fast-track them to real job interviews with employers on our platform through the same anonymous, fair process. Because we use data, not resumes, our candidates end up getting hired consistently by companies like Facebook, Uber, Twitch, Lyft, Dropbox, and many others, and 40% of the hires we’ve made to date have been candidates from non-traditional backgrounds. Many of our candidates have literally been rejected based on their resumes by the same employer who later hired them when they came through our anonymous platform (one notable candidate was rejected 3 times from a top-tier public company based on his resume before he got hired at that same company through our anonymous interview format).

Please email to get rolling.

Buy an individual practice session for someone who can’t afford it

If you know individual engineers who need interview practice but can’t afford it, use our handy interview gifting feature. Interviews are $100 each. They’re not cheap, but we have to price them that way to pay for interviewer time (interviewers are senior FAANG engineers) and cover our costs. Sadly that means practice interviews are not affordable to everyone. Even if you can’t get involved to help us fund interviews at scale, if you know someone who needs practice but can’t afford it, you can buy them an anonymous mock interview or two individually. It’s the best gift you can give to an engineer who’s starting their job search.


is finally out of beta. Anonymous technical interview practice for all!

Posted on June 4th, 2020.

First, for the brevity-minded, a TL;DR:

  • is now open to all engineers in North America and the UK, regardless of seniority level
  • We have both free and premium (paid) mock interviews
  • No more limits on how many practice interviews you can do
  • You can now give (or receive!) practice interviews as gifts

I started 5 years ago. After working as both an engineer and a recruiter, my frustration with the inefficiency and unfairness of hiring had reached a boiling point. What made me especially angry was that despite mounting evidence that resumes are poor predictors of aptitude, employers were obsessed with where people had gone to school and worked previously. In my mind, any great engineer, regardless of how they look on paper, should have the opportunity to get their foot in the door wherever they choose.

So, we set out to build a better system. On, software engineers can book anonymous mock interviews with senior engineers from companies like Facebook, Google, and others, and if they do well in practice, get fast-tracked with top employers regardless of how they look on paper. Fast-tracking means that you bypass resume screens, scheduling emails, and recruiter calls, and go straight to the technical interview (which, by the way, is still anonymous1) at companies of your choice. Because we use interview data, not resumes, our candidates end up getting hired consistently by companies like Facebook, Uber, Twitch, Lyft, Dropbox, and many others, and 40% of the hires we’ve made to date have been candidates from non-traditional backgrounds. What’s nuts is that many of our candidates have literally been rejected based on their resumes by the same employer who later hired them when they came through One notable candidate was rejected three times from a top-tier public company based on his resume before he got hired at that same company through us.

Over the past 5 years, we’ve hosted over 50,000 technical interviews (both practice and real) on our platform. Our YouTube channel, where you can watch other people interview, has gotten over 3.5M views, and, most importantly, we have helped thousands of engineers get great jobs.

All practice interviews are completely anonymous and include actionable, high-fidelity feedback

Despite all that, for our entire multi-year existence, we’ve been in beta. Over the past year or so, this increasingly inaccurately named “beta” became something of a smoke screen. Our product was stable and we had plenty of interviewers, but sadly, we couldn’t serve many of the people who needed us most. Because we made money by charging companies for hires, despite a growing waitlist of over 180,000 engineers, we could only serve the ones we had a shot at placing, i.e. engineers who 1) were located in a city where we had customers and 2) had 4 or more years of experience⁠—sadly, despite our best efforts, employers across the board were not willing to pay for junior hires.

Then, COVID-19 happened and with it, a deluge of hiring slowdowns and freezes. In a matter of weeks, we found ourselves down from 7-figure revenue to literally nothing. Companies didn’t really want or need to pay for hiring anymore.

But these hiring slowdowns and freezes weren’t just affecting us. In parallel, we saw a growing sea of layoffs, and we realized that, soon, more and more candidates would be vying for a shrinking number of jobs. On top of that, because a disproportionate number of layoffs targeted recruiters, overworked in-house skeleton recruiting teams would go back to relying on resumes and other old-school proxies, unintentionally marginalizing non-traditional candidates once again. We also realized that many of the folks getting laid off would be here on visas, which meant they’d have a paltry 60 days to find their next job or risk deportation.

So, we made a hard call. You may know that historically, we’ve offered completely free mock interviews. What you may not know is that we pay our professional interviewers, as it’s the only way to ensure that we have seasoned, experienced engineers delivering a consistent, realistic, and high-quality experience. This is often our largest expense.

Since we previously funded practice by charging companies for hires, we had to find another revenue stream to continue. Faced with either shutting down the company, unable to provide any practice at all, or beginning to charge for premium practice interviews, we chose to launch a paid tier. But, because charging for practice felt anathema to our mission, we knew we needed some ground rules in place. I started this company to fix hiring, after all, and that’s why the people who work at are here, too.

After 50,000 interviews, our data shows that where someone went to school bears no relationship to their interview performance. Despite that, we are all too aware that while aptitude is uniformly distributed across populations, resources are not. We understand that paying for interviews will be prohibitive for many of the people who need help most, so our ground rules and goals for this pivot were as follows:

  • We’d ALWAYS have a free tier
  • We’d immediately start working on a fellowship for engineers from underrepresented backgrounds or in a visa crisis experiencing financial hardship
  • We’d find a way to let people defer their payments

There is an upside to our new model though. Now that we’re no longer strictly beholden to employers, we’re able to open up to engineers of all seniority levels and many more locations. And because our revenue isn’t coming from placements but directly from practice, we don’t have to constrain our users to a limited number of practice interviews.

So, as of today, is open to engineers of all experience levels in North America and the UK. Engineers who sign up can book mock interviews 24 hours out, and top performers will still get fast-tracked for great jobs.2

 You can book either free (peer practice with other users) or premium interviews with experienced interviewers from FAANG, as early as 24 hours out

Here’s how it works:

  • If you’re able to pay, you can now book unlimited interviews with professional interviewers from FAANG and other top companies. We’re no longer constraining you to 2 or 3, and we have the interviewer supply to make this work. Interviews cost between $100 and $225.
  • If you can’t pay, there’s a free tier where you can do peer practice with other users.

What about the goals and ground rules above? We’ve already made some headway against these. The free tier was there from day one. Moreover, a number of our professional interviewers have stepped up and volunteered their time to help the people who need it most prepare for free (if you’d like to volunteer your time to help engineers from underrepresented groups practice interviewing, please email with the subject line ‘Volunteer’), and a formal fellowship is in the works. Lastly, we’re working on a deferred payment plan, where users who buy premium interviews will not have to pay us for them until they find their next job.

COVID-19 has changed a lot of things for our team (that’s us above doing the remote work thing). But not how much we want to fix hiring.

Look, whether you got laid off, are a student reeling from a withdrawn internship, lost your visa, are fortunate enough to still be employed but worried about job stability, or just want to help make hiring fairer, we hope you’ll take us for a spin. Technical interviews are hard, and hiring is broken. And whether you’re new to interviewing or just rusty after being off the market, and whether you can pay or not, we have your back.

The coming months are going to be hard. We know that in the current climate more people than ever are feeling helpless, and the world feels like it’s burning. And it might not even seem like technical interviews matter that much. But they do to us… because this is our way of creating a world where access to opportunity isn’t determined by who you are or where you come from but by what you can do.

P.S. If you don’t need practice for yourself but you or your organization want to help others get awesome at technical interviews, check out our new gifting feature.



The Eng Hiring Bar: What the hell is it?

Posted on March 31st, 2020.

Recursive Cactus has been working as a full-stack engineer at a well-known tech company for the past 5 years, but he’s now considering a career move.

Over the past 6 months, Recursive Cactus (that’s his anonymous handle on our platform) has been preparing himself to succeed in future interviews, dedicating as much as 20-30 hours a week to plowing through LeetCode exercises, digesting algorithms textbooks, and, of course, practicing interviews on our platform to benchmark his progress.

Recursive Cactus’s typical weekday schedule

6:30am – 7:00am: Wake up
7:00am – 7:30am: Meditate
7:30am – 9:30am: Practice algorithmic questions
9:30am – 10:00am: Commute to work
10:00am – 6:30pm: Work
6:30pm – 7:00pm: Commute from work
7:00pm – 7:30pm: Hang out with wife
7:30pm – 8:00pm: Meditate
8:00pm – 10:00pm: Practice algorithmic questions

Recursive Cactus’s typical weekend schedule

8:00am – 10:00am: Practice algorithmic questions
10:00am – 12:00pm: Gym
12:00pm – 2:00pm: Free time
2:00pm – 4:00pm: Practice algorithmic questions
4:00pm – 7:00pm: Dinner with wife & friends
7:00pm – 9:00pm: Practice algorithmic questions

But this dedication to interview prep has been taking an emotional toll on him, his friends, and his family. Study time crowds out personal time, to the point where he basically has no life beyond work and interview prep.

“It keeps me up at night: what if I get zero offers? What if I spent all this time, and it was all for naught?”

We’ve all been through the job search, and many of us have found ourselves in a similar emotional state. But why is Recursive Cactus investing so much time, and what’s the source of this frustration?

He feels he can’t meet the engineer hiring bar (aka “The Bar”), that generally accepted minimum level of competency that all engineers must exhibit to get hired.

To meet “The Bar,” he’s chosen a specific tactic: to look like the engineer that people want, rather than just be the engineer that he is.

It seems silly to purposefully pretend to be someone you’re not. But if we want to understand why Recursive Cactus acts the way he does, it helps to know what “The Bar” actually is. And when you think a little more about it, you realize there’s not such a clear definition.

Defining “The Bar”

What if we look at how the FAANG companies (Facebook, Amazon, Apple, Netflix, Google) define “The Bar?” After all, it seems those companies receive the most attention from pretty much everybody, job seekers included.

Few of them disclose specific details about their hiring process. Apple doesn’t publicly share any information. Facebook describes the stages of their interview process, but not their assessment criteria. Netflix and Amazon both say they hire candidates that fit their work culture/leadership principles. Neither Netflix nor Amazon describes exactly how they assess against their respective principles. However, Amazon does share how interviews get conducted as well as software topics that could be discussed for software developer positions.

Google, the most transparent of the FAANGs, publicly discloses its interview process in the most detail, with Laszlo Bock’s book Work Rules! adding even more insider color about how that process came to be.

And speaking of tech titans and the recent past, Aline (our founder) mentioned the 2003 book How Would You Move Mount Fuji? in a prior blog post, which recounted Microsoft’s interview process when they were the pre-eminent tech behemoth of the time.

In order to get a few more data points about how companies assess candidates, I also looked at Gayle Laakmann McDowell’s “Cracking the Coding Interview”, which is effectively the Bible of interviewing for prospective candidates, as well as Joel Spolsky’s Guerilla Guide to Interviewing 3.0, written by an influential and well-known figure within tech circles over the past 20-30 years.

Definitions of “The Bar”

Apple: Not publicly shared
Amazon: Assessed against Amazon’s Leadership Principles
Facebook: Not publicly shared
Netflix: Not publicly shared
Google: 1) General cognitive ability, 2) Leadership, 3) “Googleyness”, 4) Role-related knowledge
Cracking the Coding Interview (Gayle Laakmann McDowell): Analytical skills; coding skills; technical knowledge/computer science fundamentals; experience; culture fit
Joel Spolsky: Be smart; get things done
Microsoft (circa 2003): “The goal of Microsoft’s interviews is to assess a general problem-solving ability rather than a specific competency”; “bandwidth, inventiveness, creative problem-solving ability, outside-the-box thinking”; “hire for what people can do rather than what they’ve done”; motivation
Defining “Intelligence”

It’s not surprising that coding and technical knowledge would be part of any company’s software developer criteria. After all, that is the job.

But beyond that, the most common criterion shared across all these sources is some concept of intelligence. Though they use different words and define the terms slightly differently, all point to some notion of what psychologists call “cognitive ability.”

Google: “General Cognitive Ability. Not surprisingly, we want smart people who can learn and adapt to new situations. Remember that this is about understanding how candidates have solved hard problems in real life and how they learn, not checking GPAs and SATs.”
Microsoft (circa 2003): “The goal of Microsoft’s interviews is to assess a general problem-solving ability rather than a specific competency… It is rarely clear what type of reasoning is required or what the precise limits of the problem are. The solver must nonetheless persist until it is possible to bring the analysis to a timely and successful conclusion.”
Joel Spolsky: “For some reason most people seem to be born without the part of the brain that understands pointers.”
Gayle Laakmann McDowell: “If you’re able to work through several hard problems (with some help, perhaps), you’re probably pretty good at developing optimal algorithms. You’re smart.”

All these definitions of intelligence resemble early 20th-century psychologist Charles Spearman’s theory of intelligence, the most widely acknowledged framework for intelligence. After administering a series of cognitive tests to schoolchildren, Spearman found that those who did well on one type of test tended to perform well on the others too. This insight led Spearman to theorize that a single underlying general ability factor (called “g” or the “g factor”) influences all performance, alongside task-specific abilities (named “s”).

If you believe in the existence of “g” (many do, some do not… there exist different theories of intelligence), finding candidates with high measures of “g” aligns neatly with the intelligence criteria companies look for.
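Spearman’s “positive manifold” is easy to illustrate numerically: if every test score reflects a shared factor plus independent test-specific noise, all pairwise correlations come out positive and one eigenvalue of the correlation matrix dominates. Here’s a minimal sketch with synthetic data (Python and numpy assumed; the loadings are invented for illustration, not Spearman’s):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # simulated test-takers

# Each of 5 test scores = shared "g" factor + independent test-specific "s" noise
g = rng.normal(size=n)
tests = np.column_stack([0.7 * g + 0.7 * rng.normal(size=n) for _ in range(5)])

corr = np.corrcoef(tests, rowvar=False)           # 5x5 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, largest first

# All pairwise correlations are positive (the "positive manifold")...
assert (corr[np.triu_indices(5, k=1)] > 0).all()
# ...and the first eigenvalue dominates, consistent with one common factor
print(eigvals[0] / eigvals.sum())  # roughly 0.6 for these loadings
```

If the tests were instead driven only by independent abilities, the off-diagonal correlations would hover around zero and no single eigenvalue would stand out.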

While criteria like leadership and culture fit matter to companies, “The Bar” is not usually defined in those terms. “The Bar” is defined as having technical skills but also (and perhaps more so) having general intelligence. After all, candidates aren’t typically coming to to specifically practice leadership and culture fit.

The question then becomes how you measure these two things. Measuring technical skills seems tough but doable, but how do you measure “g?”

Measuring general intelligence

Mentioned in Bock’s book, Frank Schmidt and John Hunter’s 1998 paper “The Validity and Utility of Selection Methods in Personnel Psychology” attempted to answer this question by analyzing a diverse set of 19 candidate selection criteria to see which best predicted future job performance. The authors concluded that general mental ability (GMA) best predicted job performance, based on a statistic called “predictive validity.”
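“Predictive validity” here is just the correlation between scores on a selection method and later job-performance ratings: 1.0 would mean the method ranks future performance perfectly, 0.0 would mean it tells you nothing. A toy illustration (the numbers are invented, not from the Schmidt and Hunter paper):

```python
import numpy as np

# Hypothetical selection-test scores and later job-performance ratings
# for the same eight candidates (invented numbers for illustration)
test_scores = np.array([52, 61, 58, 75, 80, 66, 90, 71])
performance = np.array([2.1, 2.8, 2.5, 3.1, 3.6, 2.9, 3.8, 3.0])

# Predictive validity = Pearson correlation between the two series
validity = np.corrcoef(test_scores, performance)[0, 1]
print(round(validity, 2))  # close to 1 here, since the toy data is nearly linear
```

Real validities are far lower: a correlation computed across thousands of hires, where performance is measured months or years after the selection decision.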

In this study, a GMA test referred to an IQ test. But for Microsoft circa 2003, puzzle questions like “How many piano tuners are there in the world?” appear to have taken the place of IQ tests for measuring “g”. Their reasoning:

“At Microsoft, and now at many other companies, it is believed that there are parallels between the reasoning used to solve puzzles and the thought processes involved in solving the real problems of innovation and a changing marketplace. Both the solver of a puzzle and a technical innovator must be able to identify essential elements in a situation that is initially ill-defined.”

– “How Would You Move Mount Fuji?” – page 20

Fast forward to today, Google denounces this practice, concluding that “performance on these kinds of questions is at best a discrete skill that can be improved through practice, eliminating their utility for assessing candidates.”

So here we have two companies who test for general intelligence, but who also fundamentally disagree on how to measure it.

Are we measuring specific or general intelligence?

But maybe, as Spolsky and McDowell have argued, the traditional algorithmic and computer science-based interview questions are themselves effective tests for general intelligence. Schmidt and Hunter’s study contains some data points that could support this line of reasoning. Among all single-criteria assessment tools, work sample tests possessed the highest predictive validity. Additionally, in the highest-validity regression result for two-criteria assessment tools (a GMA test plus a work sample test), the standardized effect size on the work sample rating was larger than that of the GMA rating, suggesting a stronger relationship with future job performance.

If you believe algorithmic exercises function as work sample tests in interviews, then the study suggests traditional algorithm-based interviews could predict future job performance, maybe even more than a GMA/IQ test.

Recursive Cactus doesn’t believe there’s a connection.

There’s little overlap between the knowledge acquired on the job and knowledge about solving algorithmic questions. Most engineers rarely work with graph algorithms or dynamic programming. In application programming, lists and dictionary-like objects are the most common data structures. However, interview questions involving those are often seen as trivial, hence the focus on other categories of problems.

– Recursive Cactus

In his view, algorithms questions are similar to Microsoft’s puzzle questions: you learn how to get good at solving interview problems, which to him don’t ever show up in actual day-to-day work, which, if true, wouldn’t actually fit with the Schmidt & Hunter study.

Despite Recursive Cactus’s personal beliefs, interviewers like Spolsky still believe these skills are vital to being a productive programmer.

A lot of programmers that you might interview these days are apt to consider recursion, pointers, and even data structures to be a silly implementation detail which has been abstracted away by today’s many happy programming languages. “When was the last time you had to write a sorting algorithm?” they snicker.

Still, I don’t really care. I want my ER doctor to understand anatomy, even if all she has to do is put the computerized defibrillator nodes on my chest and push the big red button, and I want programmers to know programming down to the CPU level, even if Ruby on Rails does read your mind and build a complete Web 2.0 social collaborative networking site for you with three clicks of the mouse.

– Joel Spolsky

Spolsky seems to concede that traditional tech interview questions might not mimic actual work problems, and therefore wouldn’t act as work samples. Rather, it seems he’s testing for general computer science aptitude, which is general in a way, but specific in other ways. General intelligence within a specific domain, one might say.

That is, unless you believe intelligence in computer science is general intelligence. McDowell suggests this:

There’s another reason why data structure and algorithm knowledge comes up: because it’s hard to ask problem-solving questions that don’t involve them. It turns out that the vast majority of problem-solving questions involve some of these basics.

– Gayle Laakmann McDowell

This could be true assuming you view the world primarily through computer science lenses. Still, it seems pretty restrictive to suggest people who don’t speak the language of computer science would have more difficulty solving problems.

At this point, we’re not really talking about measuring general intelligence as Spearman originally defined it. Rather, it seems we’re talking about specific intelligence, defined and propagated by people steeped in traditional computer science programs, and conflating that with general intelligence (Spolsky, McDowell, Microsoft’s Bill Gates, and 4 of 5 FAANG founders studied computer science at either an Ivy League university or Stanford).

Maybe when we’re talking about “The Bar,” we’re really talking about something subjective, dependent on whoever is doing the measuring, whose definition might not be consistent from person to person.

Looking at candidate assessment behavior from interviewers on the platform, you can find some evidence that supports this hypothesis.

“The Bar” is subjective

On the platform, people can practice technical interviews online and anonymously, with interviewers from top companies on the other side. Interview questions on the platform tend to fall into the category of what you’d encounter at a phone screen for a back-end software engineering role, and interviewers typically come from companies like Google, Facebook, Dropbox, Airbnb, and more. Check out our interview showcase to see how this all looks and to watch people get interviewed.

After every interview, interviewers rate interviewees on a few different dimensions: technical skills, communication skills, and problem-solving skills. Each dimension gets rated on a scale of 1 to 4, where 1 is “poor” and 4 is “amazing!”. You can see what our feedback form looks like below:

If you do well in practice, you can bypass applying online/getting referrals/talking to recruiters and instead immediately book real technical interviews directly with our partner companies (more on that in a moment).

When observing our most frequent practice interviewers, we noticed differences across interviewers in the percentage of candidates each one would hire, which we call the passthrough rate. Passthrough rates ranged anywhere between 30% and 60%. At first glance, certain interviewers seemed to be a lot stricter than others.
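To make the metric concrete, here's a minimal sketch of how a per-interviewer passthrough rate could be computed. The session records are invented for illustration; this is not our actual pipeline.

```python
from collections import defaultdict

# Hypothetical session records: (interviewer_id, would_hire verdict)
sessions = [
    ("A", True), ("A", False), ("A", True), ("A", False), ("A", False),
    ("B", True), ("B", True), ("B", True), ("B", False), ("B", True),
]

def passthrough_rates(sessions):
    """Fraction of each interviewer's sessions that ended in a Would Hire."""
    totals, hires = defaultdict(int), defaultdict(int)
    for interviewer, would_hire in sessions:
        totals[interviewer] += 1
        hires[interviewer] += would_hire  # True counts as 1
    return {i: hires[i] / totals[i] for i in totals}

print(passthrough_rates(sessions))  # {'A': 0.4, 'B': 0.8}
```

With enough sessions per interviewer, spreads like the 30%–60% range above start to look meaningful rather than like small-sample noise.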

Because interviewees and interviewers are anonymized and matched randomly[1], we wouldn’t expect the quality of candidates to vary much across interviewers, and as a result, wouldn’t expect interviewee quality to explain the difference. Yet even after accounting for candidate attributes like experience level, differences in passthrough rates persist[2].

Maybe some interviewers choose to be strict on purpose because their bar for quality is higher. While it’s true that candidates who practiced with stricter interviewers tended to receive lower ratings, they also tended to perform better on their next practice.

This result could be interpreted in a couple of ways:

  • Stricter interviewers might systematically underrate candidates
  • Candidates get so beat up by strict interviewers that they tend to improve more between practices, striving to meet their original interviewer's higher bar

If the latter were true, you would expect that candidates who practiced with stricter interviewers would perform better in real company interviews. However, we did not find a correlation between interviewer strictness and future company interview passthrough rate, based on real company interviews conducted on our platform[3].
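The check described above boils down to a correlation between interviewer strictness and their candidates' later success. Here's a hedged sketch with a hand-rolled Pearson coefficient; all the per-interviewer numbers are invented for illustration, not our real data.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-interviewer numbers: practice passthrough rate vs. how
# often that interviewer's candidates later passed real company interviews.
practice_rate = [0.30, 0.40, 0.45, 0.55, 0.60]
later_success = [0.50, 0.48, 0.52, 0.49, 0.51]

r = pearson(practice_rate, later_success)
print(round(r, 2))  # weak correlation in this toy data
```

A near-zero `r` on data like this is what "no correlation between strictness and future company passthrough" looks like numerically.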

Interviewers on our platform represent the kinds of people a candidate would encounter in a real company interview, since those same people also conduct phone screens and onsites at the tech companies you’re all applying to today. And because we don’t dictate how interviewers conduct their interviews, these graphs could be describing the distribution of opinions about your interview performance once you hang up the phone or leave the building.

This suggests that, independent of your actual performance, whom you interview with could affect your chance of getting hired. In other words, “The Bar” is subjective.

This variability across interviewers led us to reconsider our own internal definition of “The Bar,” which determined which candidates were allowed to interview with our partner companies. Our definition strongly resembled Spolsky’s binary criterion (“be smart”), weighing an interviewer’s Would Hire opinion far more heavily than our other 3 criteria, which led to the bimodal, camel-humped distribution below.

While our existing scoring system correlated decently with future interview performance, we found that an interviewer’s Would Hire rating wasn’t as strongly associated with future performance as our other criteria were. We lessened the weight on the Would Hire rating, which in turn improved our predictive accuracy[4]. Just like in “Talladega Nights,” when Ricky Bobby learned there existed places other than first and last in a race, we learned that it was more beneficial to think beyond the binary construct of “hire” vs. “not hire,” or if you prefer, “smart” vs. “not smart.”

Of course, we didn’t get rid of all the subjectivity, since those other criteria were also chosen by the interviewer. And this is what makes assessment hard: an interviewer’s assessment is itself the measure of candidate ability.

If that measurement isn’t anchored to a standard definition (like we hope general intelligence would be), then the accuracy of any given measurement becomes less certain. It’s as if interviewers used measuring sticks of differing lengths, but all believed their own stick represented the same length, say 1 meter.

When we talked to our interviewers to understand how they assessed candidates, it became even more believable that different people might be using measuring sticks of differing lengths. Here are some example methods of how interviewers rated candidates:

  • Ask 2 questions. Pass if the candidate answers both
  • Ask questions of varying difficulty (easy, medium, hard). Pass if the candidate answers a medium
  • Speed of execution matters a lot; pass if the candidate answers “fast” (“fast” not clearly defined)
  • Speed doesn’t matter much; pass if the candidate has a working solution
  • Candidates start with full points; dock points as they make mistakes

Having different assessment criteria isn’t necessarily a bad thing (and actually seems totally normal). It just introduces more variance to our measurements, meaning our candidates’ assessments might not be totally accurate.
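One way to see how much variance differing rubrics introduce is a toy simulation: the same pool of candidates, graded against different (entirely hypothetical) pass thresholds. Every number here is invented.

```python
import random

random.seed(0)
# Toy model: each candidate has a latent ability in [0, 1], and each
# interviewer applies a different pass threshold to the same pool.
candidates = [random.random() for _ in range(1000)]
thresholds = {"lenient": 0.4, "middling": 0.5, "strict": 0.7}

for name, cutoff in thresholds.items():
    rate = sum(c >= cutoff for c in candidates) / len(candidates)
    print(f"{name}: {rate:.0%} passthrough")
```

Identical candidates, passthrough rates spanning roughly 30% to 60%, purely from the choice of measuring stick; this is consistent with the spread we observed across real interviewers.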

The problem is, when people talk about “The Bar,” that uncertainty around measurement usually gets ignored.

You’ll commonly see people advising you only to hire the highest quality people.

A good rule of thumb is to hire only people who are better than you. Do not compromise. Ever.

– Laszlo Bock

Don’t lower your standards no matter how hard it seems to find those great candidates.

– Joel Spolsky

In the Macintosh Division, we had a saying, “A players hire A players; B players hire C players”–meaning that great people hire great people.

– Guy Kawasaki

Every person hired should be better than 50 percent of those currently in similar roles – that’s raising the bar.

– Amazon Bar Raiser blog post

All of this is good advice, assuming “quality” can be measured reliably, which, as we’ve seen so far, isn’t necessarily the case.

Even when uncertainty does get mentioned, that variance gets attributed to the candidate’s ability, rather than the measurement process or the person doing the measuring.

[I]n the middle, you have a large number of “maybes” who seem like they might just be able to contribute something. The trick is telling the difference between the superstars and the maybes, because the secret is that you don’t want to hire any of the maybes. Ever.

If you’re having trouble deciding, there’s a very simple solution. NO HIRE. Just don’t hire people that you aren’t sure about.

– Joel Spolsky

Assessing candidates isn’t a fully deterministic process, yet we talk about it like it is.

Why “The Bar” is so high

“Compromising on quality” isn’t really about compromise, it’s actually about decision-making in the face of uncertainty. And as you see from the quotes above, the conventional strategy is to only hire when certain.

No matter what kind of measuring stick you use, this leads to “The Bar” being set really high. Being really certain about a candidate means minimizing the possibility of making a bad hire (aka “false positives”). And companies will do whatever they can to avoid that.

A bad candidate will cost a lot of money and effort and waste other people’s time fixing all their bugs. Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it.

– Joel Spolsky

Schmidt and Hunter quantified the cost of a bad hire: “The standard deviation… has been found to be at minimum 40% of the mean salary,” which in today’s terms would translate to $40,000 assuming a mean engineer salary of $100,000/year.

But if you set “The Bar” too high, chances are you’ll also miss out on some good candidates (aka “false negatives”). McDowell explains why companies don’t really mind a lot of false negatives:

“From the company’s perspective, it’s actually acceptable that some good candidates are rejected… They can accept that they miss out on some good people. They’d prefer not to, of course, as it raises their recruiting costs. It is an acceptable tradeoff, though, provided they can still hire enough good people.”

In other words, it’s worth holding out for a better candidate if the difference in their expected output is large, relative to the recruiting costs from continued searching. Additionally, the costs of HR or legal issues downstream from potentially problematic employees also tilt the calculation toward keeping “The Bar” high.

This is a very rational cost-benefit calculation. But has anyone ever done this calculation before? If you have done it, we’d love to hear from you. Otherwise, it seems difficult to do.
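For what it's worth, here is one hedged, back-of-envelope version of that calculation. Every input is an assumption made up for illustration, apart from the 40%-of-salary bad-hire figure quoted above.

```python
# Assumptions (all illustrative):
mean_salary = 100_000                 # mean engineer salary, $/year
bad_hire_cost = 0.40 * mean_salary    # ~40% of salary, per the quote above
recruiting_cost_per_month = 20_000    # recruiter time + lost output of an empty seat
p_bad_if_hire_now = 0.3               # chance the current "maybe" underperforms
extra_months_if_wait = 3              # extra search time to find a "sure thing"

# Expected cost of hiring the "maybe" now vs. holding out for certainty.
cost_hire_now = p_bad_if_hire_now * bad_hire_cost
cost_hold_out = extra_months_if_wait * recruiting_cost_per_month

print(cost_hire_now)  # 12000.0
print(cost_hold_out)  # 60000
```

Under these particular inputs, holding out costs more than the risk of the “maybe” — but the point is precisely that the answer flips depending on inputs few teams actually bother to estimate.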

Given that nearly everyone is using hand-wavy math, if we do the same, maybe we can convince ourselves that “The Bar” doesn’t have to be set quite so high.

As mentioned before, the distribution of candidate ability might not be so binary, so Spolsky’s nightmare bad hire scenario wouldn’t necessarily happen with all “bad” hires, meaning the expected difference in output between “good” and “bad” employees might be lower than perceived.

Recruiting costs might be higher than perceived because finding and employing +1 standard deviation employees gets increasingly difficult. By definition, fewer of those people exist as your bar rises. Schmidt and Hunter’s “bad hire” calculation only compares candidates within an applicant pool. The study does not consider the relative cost of getting high-quality candidates into the applicant pool to begin with, which tends to be the more significant concern for many of today’s tech recruiting teams. And when you consider that other tech companies might be employing the same hiring strategy, competition would increase the average probability that offers get rejected, extending the time to fill a job opening.

Estimating the expected cost of HR involvement is also difficult. No one wants to find themselves interacting with HR. But then again, not all HR teams are as useless as Toby Flenderson.

Taken together, if the difference in output between “good” and “bad” candidates were smaller than perceived, and recruiting costs higher than perceived, it would make less sense to wait for a no-brainer hire, meaning “The Bar” might not have to be set so high.

Even if one does hire an underperformer, companies could adopt the tools of training and employee management to mitigate the negative effects from some disappointing hires. After all, people can and do become more productive over time as they acquire new skills and knowledge.

Employee development seems to rarely get mentioned in conjunction with hiring (Laszlo Bock makes a few connections here and there, but the topics are mostly discussed separately). But when you add employee development into the equation above, you start to see the relationship between hiring employees and developing employees. You can think of it as different methods for acquiring more company output from different kinds of people: paying to train existing employees versus paying to recruit new employees.

You can even think of it as a trade-off. Instead of developing employees in-house, why not outsource that development? Let others figure out how to develop the raw talent, and later pay recruiters to find them when they get good. Why shop the produce aisle at Whole Foods and cook at home when you can just pay Caviar to deliver pad thai to your doorstep? Why spend time managing and mentoring others when you can spend that time doing “real work” (i.e. engineering tasks)?

Perhaps “The Bar” is set high because companies don’t develop employees effectively, which puts more pressure on the hiring side of the company to yield productive employees.

Instead of investing in development, companies can lower their risk by shifting the burden of career development onto the candidates themselves. In response, candidates like Recursive Cactus have little choice but to train themselves.

Initially, I thought Recursive Cactus was a crazy outlier in terms of interview preparation. But apparently, he’s not alone.

Candidates are training themselves

Last year we surveyed our candidates about how many hours they spent preparing for interviews. Nearly half of the respondents reported spending 100 hours or more on interview preparation[5].

We wondered whether hiring managers and recruiters had the same expectations for the candidates they encounter. Aline asked a similar question on Twitter, and the results suggest they vastly underestimate the work and effort candidates put in before ever meeting with a company.

Decision makers clearly underestimate the amount of work candidates put into job hunt preparation. The discrepancy seems to reinforce the underlying and unstated message pervading all these choices around how we hire: If you’re not one of the smart ones (whatever that means), it’s not our problem. You’re on your own.

“The Bar” revisited

So this is what “The Bar” is. “The Bar” is a high standard set by companies in order to avoid false positives. It’s not clear whether companies have actually done the appropriate cost-benefit analysis when setting it, and it’s possible it can be explained by an aversion to investing in employee development.

“The Bar” is in large part meant to measure your general intelligence, but the actual instruments of measurement don’t necessarily follow the academic literature that underlies it. You can even quibble about the academic literature[6]. “The Bar” does measure specific intelligence in computer science, but that measurement might vary depending on who conducts your interview.

Despite the variance that exists across many aspects of the hiring process, we talk about “The Bar” as if it were deterministic. This allows hiring managers to make clear binary choices but discourages them from thinking critically about whether their team’s definition of “The Bar” could be improved.

And that helps us understand why Recursive Cactus spends so much time practicing. He’s training himself partly because his current company isn’t developing his skills. He’s preparing for the universe of possible questions and interviewers he might encounter, because hiring criteria vary widely and cover topics that won’t necessarily come up in his day-to-day work, all so he can resemble someone who’s part of the “smart” crowd.

That’s the system he’s working within. And because the system is the way it is, it has had a significant impact on his personal life.

My wife’s said on more than one occasion that she misses me. I’ve got a rich happy life, but I don’t feel I can be competitive unless I put everything else on hold for months. No single mom can be doing what I’m doing right now.

– Recursive Cactus

This impacts his current co-workers too, whom he cares about a lot.

This process is sufficiently demanding that I’m no longer operating at 100% at work. I want to do the best job at work, but I don’t feel I can do the right thing for my future by practicing algorithms 4 hours a day and do my job well.

I don’t feel comfortable being bad at my job. I like my teammates. I feel a sense of responsibility. I know I won’t get fired if I mail it in, but I know that it’s them that pick up the slack.

– Recursive Cactus

It’s helpful to remember that all the micro decisions made around false positives, interview structure, brain teasers, hiring criteria, and employee development add up to define a system that, at the end of the day, impacts people’s personal lives. Not just the lives of the job hunters themselves, but also all the people that surround them.

Hiring is nowhere near a solved problem. Even if we do solve it somehow, it’s not clear we would ever eliminate all that uncertainty. After all, projecting a person’s future work output after spending an hour or two with them in an artificial work setting seems kinda hard. While we should definitely try to minimize uncertainty, it might be helpful to accept it as a natural part of the process.

This system can be improved. Doing so requires not only coming up with new ideas, but also revisiting decades-old ideas and assumptions, and expanding upon that prior work rather than anchoring ourselves to it.

We’re confident that all of you people in the tech industry will help make tech hiring better. We know you can do it, because after all, you’re smart.

[1] There is some potential for selection bias, particularly around the time frames when people choose to practice. Cursory analysis suggests there’s not much of a relationship, but we’re currently digging in deeper (hint hint: this could be a future blog post). You can also choose between the traditional algorithmic interview and a systems design interview, but the vast majority opt for the traditional interview. The passthrough rates shown are for the traditional interview.
[2] You might be wondering about the relative quality of candidates on the platform. While it’s hard to pin down the true distribution of quality (which is the underlying question of this blog post), on average our practice interviewers have told us that the quality of candidates on the platform tends to be similar to the quality of candidates they encounter during their own company’s interview process, particularly during phone screens.
[3] This only includes candidates who have met our internal hiring bar and attended a company interview on our site. This does not represent the entire population of candidates who have interviewed with an interviewer.
[4] For those of you that have used the platform before, you may remember that we had an algorithm that adjusted for interviewer strictness. Upon further inspection, we found this algorithm also introduced variance to candidate scores in really unexpected ways. Because of this, we no longer rely on this algorithm as heavily.
[5] Spikes at 100 and 200 hours occurred because of an error in the labeling and max value of the survey question. The 3 survey questions asked were the following: 1) In your most recent job search, how many hours did you spend preparing for interviews? 2) How many hours did you spend on interview preparation before signing up for the platform? 3) How many hours did you spend on interview preparation after signing up (not including time spent on the platform itself)? Each question had a max value of 100 hours, but many respondents’ answers to 2) and 3) summed to more than 100. The distribution here shows the sum of 2) and 3). The median of question 1) responses was 94, nearly identical to the median of the sum of 2) and 3), so we used the sum to observe the shape of the distribution beyond 100 hours. Key lessons: assume a larger max value than you’d expect, and double-check your survey.
[6] I found the study a little hard to reason about, mainly because I’m not a psychologist, so techniques like meta-analysis were a little foreign to me even if the underlying statistical tools were familiar. It’s not a question of whether the tools are valid; it’s that reasoning about the study’s underlying data was difficult. Similar to spaghetti code, the validation of the underlying datasets is spread across decades of prior academic papers, which makes it difficult to follow. This is likely just the nature of psychology, where useful data is harder to acquire, at least compared to the kinds of data we deal with in tech. Beyond that, I also had other questions about their methodology, which this article asks in far greater detail than I could.