LinkedIn endorsements are dumb. Here’s the data.

If you’re an engineer who’s been endorsed on LinkedIn for any number of languages/frameworks/skills, you’ve probably noticed that something isn’t quite right. Maybe they’re frameworks you’ve never touched or languages you haven’t used since freshman year of college. No matter the specifics, you’re probably at least a bit wary of the value of the LinkedIn endorsements feature. The internets, too, don’t disappoint in enumerating some absurd potential endorsements or in bemoaning the lack of relevance of said endorsements, even when they’re given in earnest.

Having a gut feeling for this is one thing, but we were curious about whether we could actually come up with some numbers that showed how useless endorsements can be, and we weren’t disappointed. If you want graphs and numbers, scroll down to the “Here’s the data” section below. Otherwise, humor me and read my completely speculative take on why endorsements exist in the first place.

LinkedIn endorsements are just noisy crowdsourced tagging

Pretend for a moment that you’re a recruiter who’s been tasked with filling an engineering role. You’re one of many people who pay LinkedIn ~$9K/year for a recruiter seat on the platform[1]. That hefty price tag broadens your search radius (which is otherwise artificially constrained) and lets you search the entire system. Let’s say you have to find a strong back-end engineer. How do you begin?

Unfortunately, LinkedIn’s faceted search (pictured below) doesn’t come with a “can code” filter[2].

So, instead of searching for what you really want, you have to rely on proxies. Some obvious proxies, even though they’re not that great, might be where someone went to school or where they’ve worked before. However, if you need to look for engineering ability, you’re going to have to get more specific. If you’re like most recruiters, you’ll first look for the main programming language your company uses (despite knowledge of a specific language not being a good indicator of programming ability and despite most hiring managers not caring which languages their engineers know) and then go from there.

Now pretend you’re LinkedIn. You have no data about how good people are at coding, and though you do have a lot of resume/biographical data, that doesn’t tell the whole story. You can try relying on engineers filling in their own profiles with languages they know, but given that engineers tend to be pretty skittish about filling in their LinkedIn profile with a bunch of buzzwords, what do you do?

You build a crowdsourced tagger, of course! Then, all of a sudden, your users will do your work for you. Why do I think this is the case? Well, if LinkedIn cared about true endorsements rather than perpetuating the skills-based myth that keeps recruiters in their ecosystem, they could have written a weighted endorsement system by now, at the very least. That way, an endorsement from someone with expertise in some field might mean more than an endorsement from your mom (unless, of course, she’s an expert in the field).

But they don’t do that, or at least they don’t surface it in candidate search. It’s not worth it. Because the point of endorsements isn’t to get at the truth. It’s to keep recruiters feeling like they’re getting value out of the faceted search they’re paying almost $10K per seat for. In other words, improving the fidelity of endorsements would likely cannibalize LinkedIn’s revenue.

You could make the counterargument that despite the noise, LinkedIn endorsements still carry enough signal to be a useful first-pass filter and that having them is more useful than not having them. This is the question I was curious about, so I decided to cross-reference our users’ interview data with their LinkedIn endorsements.

The setup

So, what data do we have? First, for context, interviewing.io is a platform where people can practice technical interviewing anonymously with interviewers from top companies and, in the process, find jobs. Do well in practice, and you get guaranteed (and anonymous!) technical interviews at companies like Uber, Twitch, Lyft, and more. Over the course of our existence, we’ve amassed performance data from close to 5,000 real and practice interviews.

When an interviewer and an interviewee match on our platform, they meet in a collaborative coding environment with voice, text chat, and a whiteboard and jump right into a technical question. Interview questions on the platform tend to fall into the category of what you’d encounter at a phone screen for a back-end software engineering role. Some examples of these interviews can be found on our public recordings page.

After every interview, interviewers rate interviewees on a few different dimensions, including technical ability. Technical ability gets rated on a scale of 1 to 4, where 1 is “poor” and 4 is “amazing!”. On our platform, a score of 3 or above has generally meant that the person was good enough to move forward. You can see what our feedback form looks like below:

[Screenshot: the post-interview feedback form, with the technical ability rating circled]

As promised, I cross-referenced our data with our users’ LinkedIn profiles and found some interesting, albeit not that surprising, stuff.

Endorsements vs. what languages people actually program in

The first thing I looked at was whether the programming language people interviewed in most frequently had any relationship to the programming language for which they were most endorsed. It was nice that, across the board, people tended to prefer one language for their interviews, so we didn’t really have a lot of edge cases to contend with.

It turns out that people’s interview language of choice matched their most endorsed language on LinkedIn just under 50% of the time.

Of course, just because you’ve been endorsed a lot for a specific language doesn’t mean that you’re not good at the other languages you’ve been endorsed for. To dig deeper, I took a look at whether our users had been endorsed for their interview language of choice at all. It turns out that people were endorsed for their language of choice 72% of the time. This isn’t a particularly powerful statement, though, because most people on our platform have been endorsed for at least 5 programming languages.
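The matching logic behind both numbers is simple enough to sketch. Below is a toy version (not our actual analysis code, and with entirely made-up data) that computes them from records pairing each user’s interview language of choice with their per-language endorsement counts:

```python
# Toy records: each user's interview language of choice alongside their
# LinkedIn endorsement counts per language. Hypothetical data only.
users = [
    {"interview_lang": "Python", "endorsements": {"Python": 12, "Java": 3}},
    {"interview_lang": "Java",   "endorsements": {"Python": 9, "Java": 2}},
    {"interview_lang": "C++",    "endorsements": {"Java": 5}},
]

def most_endorsed(endorsements):
    # The language with the highest endorsement count.
    return max(endorsements, key=endorsements.get)

# Analogue of the "just under 50%" figure: interview language of choice
# matches the most-endorsed language.
match_rate = sum(
    u["interview_lang"] == most_endorsed(u["endorsements"]) for u in users
) / len(users)

# Analogue of the 72% figure: endorsed for the interview language at all.
endorsed_rate = sum(
    u["interview_lang"] in u["endorsements"] for u in users
) / len(users)
```

On real profiles the endorsement dicts are wider (again, most of our users had 5+ endorsed languages), which is exactly why the second number is so much less meaningful than it looks.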

That said, even when an engineer had been endorsed for their interview language of choice, that language appeared in their “featured skills” section only 31% of the time. This means that most of the time, recruiters would have to click “View more” (see below) to see the language that people prefer to code in, if it’s even listed in the first place.

So, how often were people endorsed for their language of choice? Quantifying endorsements[3] is a bit fuzzy, but to answer this meaningfully, I looked at how often people were endorsed for that language relative to how often they were endorsed for their most-endorsed language, in the cases when the two languages weren’t the same (recall that this happened about half the time). Perhaps if these numbers were close to 1 most of the time, then endorsements might carry some signal. As you can see in the histogram below, this was not the case at all.

The x-axis above is how often people were endorsed for their interview language of choice relative to their most-endorsed language. The bars on the left are cases where someone was barely endorsed for their language of choice; all the way to the right are cases where people were endorsed for both languages equally often. All told, the distribution is actually pretty uniform, making for more noise than signal.
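For concreteness, the per-user quantity on the x-axis can be computed like this (a sketch with hypothetical numbers, not the original analysis code):

```python
def endorsement_ratio(endorsements, interview_lang):
    """Endorsements for the interview language of choice, relative to the
    most-endorsed language. 0.0 means barely or never endorsed for it;
    1.0 means endorsed for both languages equally often."""
    top = max(endorsements.values())
    return endorsements.get(interview_lang, 0) / top

# e.g. 3 endorsements for Java vs. 12 for the most-endorsed language:
ratio = endorsement_ratio({"Python": 12, "Java": 3}, "Java")  # 0.25
```

A roughly uniform histogram of this ratio is what you’d expect if endorsement counts were assigned more or less at random with respect to what people actually code in.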

Endorsements vs. interview performance

The next thing I looked at was whether there was any correspondence between how heavily endorsed someone was on LinkedIn and their interview performance. This time, to quantify the strength of someone’s endorsements[4], I looked at how many times someone was endorsed for their most-endorsed language and correlated that to their average technical score in interviews on interviewing.io.

Below, you can see a scatter plot of technical ability vs. LinkedIn endorsements, as well as my attempt to fit a line through it. As you can see, the R^2 is piss-poor, meaning there’s no relationship to speak of between how heavily endorsed someone is and their technical ability.
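If you want to reproduce this kind of check on your own data, here’s a minimal, dependency-free way to get the R^2 of a least-squares line (our plot was produced with different tooling; this is just a sketch of the same computation):

```python
from statistics import mean

def r_squared(xs, ys):
    # Ordinary least-squares fit, then the coefficient of determination.
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

A perfect linear relationship gives 1.0; endorsement counts vs. technical scores gave us something much closer to 0.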

Endorsements vs. no endorsements… and closing thoughts

Lastly, I took a look at whether having any endorsements in the first place mattered with respect to interview performance. If I’m honest, I was hoping there’d be a negative correlation, i.e. if you don’t have endorsements, you’re a better coder. After running some significance testing, though, it became clear that having any endorsements at all (or not) doesn’t matter.
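The post doesn’t pin down which significance test I ran, but one simple, assumption-light option for comparing the two groups is a permutation test on the difference in mean technical scores. A sketch, with made-up scores:

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=5000, seed=0):
    """Two-sided permutation test on the difference in group means.
    Returns an approximate p-value: the fraction of random relabelings
    that produce a difference at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical technical scores (1-4) for users with and without any
# endorsements. A large p-value means no detectable difference.
with_endorsements = [3, 2, 4, 3, 2, 3, 1, 4]
without_endorsements = [2, 3, 3, 4, 2, 3, 3, 4]
p_value = permutation_test(with_endorsements, without_endorsements)
```

With groups this similar, the p-value comes out nowhere near significance, which matches what we saw on the real data.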

So, where does this leave us? As long as there’s money to be made in peddling low-signal proxies, endorsements won’t go away and probably won’t get much better. It is my hope, though, that any recruiters reading this will take a second look at the candidates they’re sourcing and try to, where possible, look at each candidate as more than the sum of their buzzword parts.

Thanks to Liz Graves for her help with the data annotation for this post.

[1] Roughly 60% of LinkedIn’s revenue comes from recruiting, so you can see why this stuff matters.

[2] You know what does come with a “can code” filter? interviewing.io! We see how people perform in rigorous, live technical interviews, which, in turn, lets us reliably predict how well they’ll do in future interviews. Roughly 60% of our candidates pass technical phone screens and make it onsite. Want to use us to hire?

[3] There are a lot of possible approaches to comparing endorsements, both to each other and to other signals. In this post, I decided to mimic, as much as possible, how a recruiter might think about a candidate’s endorsements when looking at their profile. Recruiters are busy (I know; I used to be one) and get paid to make quick judgments. Given that LinkedIn doesn’t normalize endorsements for you, a recruiter who wanted normalized numbers would have to add up all of someone’s endorsements and then do a bunch of pairwise division. That isn’t sustainable; it’s much easier and faster to look at the absolute numbers. For this exact reason, when comparing the endorsements for two languages, I chose to normalize them relative to each other rather than relative to all other endorsements. And when trying to quantify the strength of someone’s programming endorsements as a whole, I opted to just count the number of endorsements for someone’s most-endorsed language.

[4] See footnote 3 above; I used the same rationale.

22 thoughts on “LinkedIn endorsements are dumb. Here’s the data.”

  1. I hope you didn’t spend much time on this. ‘Everybody’ ‘must’ ‘know’ that LinkedIn endorsements are a fraud — I am regularly endorsed by people I have never met, let alone worked with (perhaps these days that should be the other way around), for things I have never done.

  2. `Well, if LinkedIn cared about true endorsements rather than perpetuating the skills-based myth that keeps recruiters in their ecosystem, they could have written a weighted endorsement system by now, at the very least. That way, an endorsement from someone with expertise in some field might mean more than an endorsement from your mom (unless, of course, she’s an expert in the field).`

    I don’t know if you’ve used the updated LinkedIn web app, or the mobile app, but this is already a feature. Your research is lacking.



  5. “people’s interview language of choice matched their most endorsed language on LinkedIn just under 50% of the time, so, you know, just slightly worse than flipping a coin”

    ummmm… everyone uses exactly two programming languages? You know, someone does not understand probability.

  6. Self-reporting is ~20% accurate. Second-party reporting (endorsements) on LinkedIn is usually a courtesy click, and may not be first-hand knowledge but rather an observation of the individual; it speaks more to soft skills within the category being endorsed. Long-hand endorsements are usually (personal) knowledge that speaks more to character than to specific skill levels. LinkedIn is a professional social site, not a job-matching platform.

  7. So if a recruiter in your platform did not recruit but marked him/her as good anyways, this intelligent system will be able to figure it out???


  9. Culture also matters. Take, as an example, my friend from the Nordics. He has 500+ connections from Nordic countries and maybe 30-40 Americans (he studied for a year in the US). More than half of his endorsements come from those 30-40 Americans, although they have no way of knowing how good he is at his job nowadays…


  11. I disagree. Endorsements maybe don’t tell you the last language someone used, but surely they can tell you what someone is familiar with, and in the same way they can tell you what they’ve touched (even a long time ago). In my case (and surely the case of many others), I’m not a Java developer, but I spent a couple of years coding in Java a long time ago. Now I can’t program anything serious in Java; should those endorsements be removed from my profile? In that case, is my story only the last year or two? Is that all I am? I disagree. My career is longer than that. Endorsements also show my evolution as a coder, and show the stuff I’m familiar with as well.

  12. I agree fully with David; I have so many endorsements from people I have never met, but I believe Caspar hit the nail on the head. It is a retention feature driven by regular, random requests to endorse people in your network. Personally, I do not endorse people unless I can validate the endorsement.
    What LinkedIn should be pushing is more use of recommendations, which come from people who actually know you.

  13. This was great, thanks for sharing. Glad to have a rebuttal against the notifications on my profile about “People with endorsements get XX more offers!”

    I had a bit of a hard time understanding this sentence, is there a left/right mix-up perhaps?

    “The bars on the left are cases when someone was barely endorsed for their language of choice, and all the way to left are cases when people were endorsed for both languages equally as often.”


  15. Annoying criticism from an economist: the “Technical Ability vs. Number of LinkedIn Endorsements” would make more sense as a model of ordered categorical variables, like ordered logit. The type of data you have violates the assumptions of linear regression.

    It sounds like it sometimes makes a difference and sometimes doesn’t. See https://www.researchgate.net/post/Will_the_results_of_an_ordinal_logit_model_be_different_from_OLS_regression_with_discrete_dependent_variables.
