
OK, what’s going on with LinkedIn’s algo?

One day in November, a product strategist we’ll call Michelle (not her real name) logged into her LinkedIn account and changed her gender to male. She also changed her name to Michael, she told TechCrunch.

She took part in an experiment called #WearthePants, in which women tested the hypothesis that LinkedIn’s new algorithm was biased against women.

For months, heavy LinkedIn users have complained about a decline in engagement and impressions on the career-focused social network. This came after the company’s vice president of engineering, Tim Jurka, said in August that the platform had “more recently” implemented LLMs to help display content useful to users.

Michelle (whose identity is known to TechCrunch) was suspicious of the changes because she has more than 10,000 followers and ghostwrites posts for her husband, who only has about 2,000. Still, she and her husband tend to get about the same number of post impressions, she said, despite her larger following.

“The only significant variable was gender,” she said.

Marilynn Joyner, a founder, also changed the gender of her profile. She has been posting consistently on LinkedIn for two years and has noticed a decline in the visibility of her posts in recent months. “I changed my gender on my profile from female to male, and my impressions increased 238% in a day,” she told TechCrunch.

Megan Cornish reported similar results, as did Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson and others.


LinkedIn said that its “algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed” and that “a side-by-side snapshot of your own feed updates that are not perfectly representative or equal in reach does not automatically imply unfair treatment or bias” within the Feed.

Social algorithm experts agree that explicit sexism may not have been the cause, although implicit biases may be at work.

Platforms are “an intricate symphony of algorithms that simultaneously and continuously use specific mathematical and social levers,” data ethics consultant Brandeis Marshall told TechCrunch.

“Changing someone’s profile picture and name is just one of those levers,” she said, adding that the algorithm is also affected by things like how a user has interacted, and is currently interacting, with different content.


“What we don’t know are all the other levers that cause this algorithm to prioritize one person’s content over another person’s. This is a more complicated problem than people assume,” Marshall said.

Bro-coded

The #WearthePants experiment started with two entrepreneurs: Cindy Gallop and Jane Evans.

They asked two men to create and post the same content as them, curious if gender was the reason so many women felt a dip in engagement. Gallop and Evans both have a significant following – more than 150,000 combined, compared to the two men who had about 9,400 at the time.

Gallop reported that her post reached only 801 people, while the man who posted the exact same content reached 10,408 people, more than 100% of his followers. Other women then joined in. Some, like Joyner, who uses LinkedIn to market her company, were concerned.

“I would really like to see LinkedIn take responsibility for any biases within the algorithm,” Joyner said.

But LinkedIn, like other LLM-reliant search and social media platforms, provides few details about how content selection models were trained.

Marshall said most of these platforms “naturally have a white, male, Western-centric viewpoint embedded” because of who trained the models. Researchers have found evidence of human prejudices such as sexism and racism in popular LLMs because the models are trained on human-generated content, and people are often directly involved in post-training or reinforcement learning.

Yet the way each individual company implements its AI systems is shrouded in the secrecy of the algorithmic black box.

LinkedIn says the #WearthePants experiment failed to demonstrate gender bias against women. Jurka’s August statement said – and LinkedIn’s head of Responsible AI and Governance, Sakshi Jain, repeated in another post in November – that its systems do not use demographic information as a signal for visibility.

Instead, LinkedIn told TechCrunch it is testing millions of messages to connect users with opportunities. It says demographic data is used only in fairness tests, such as checking whether posts “from different creators compete on an equal footing and whether the scrolling experience, what you see in the feed, is consistent across the audience,” the company told TechCrunch.

LinkedIn is known to research and adjust its algorithm in an effort to give users a less biased feed experience.


It’s the unknown variables, Marshall said, that likely explain why some women had more impressions after changing their profile gender to male. For example, participating in a viral trend can lead to a boost in engagement; some accounts posted for the first time in a long time, and the algorithm might have rewarded them for that.

Tone and writing style can also play a role. For example, Michelle said that the week she posted as “Michael,” she adjusted her tone slightly and wrote in a more simplistic, direct style, as she does for her husband. Then she said impressions increased by 200% and engagements increased by 27%.

She concluded that the system was not “explicitly sexist” but seemed to view communication styles often associated with women as “a measure of lower value.”

Stereotypically male writing styles are thought to be more concise, while stereotypically female writing styles are thought to be softer and more emotional. If an LLM is trained to reward writing that conforms to male stereotypes, that is a subtle, implicit bias. And as we previously reported, researchers have found that most LLMs are riddled with such biases.
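To illustrate how style can act as a proxy for gender, here is a toy sketch (not LinkedIn’s system; every feature and weight is invented) of an engagement predictor that never sees a gender field, yet scores hedged, softer phrasing lower than blunt phrasing:

```python
# Toy illustration of proxy bias: a model that predicts engagement from
# writing style can disadvantage styles stereotypically associated with
# women, even though gender itself is never an input.
# All features and weights here are invented for illustration.

HEDGES = {"maybe", "perhaps", "just", "sorry", "might"}

def style_score(text: str) -> float:
    """Higher score for shorter, hedge-free posts under this invented model."""
    words = text.lower().split()
    hedge_rate = sum(w.strip(".,!?") in HEDGES for w in words) / len(words)
    return max(0.0, 1.0 - hedge_rate * 2 - len(words) / 100)

direct = "Ship the feature. Metrics improved. Here is the data."
soft = "I just think maybe we might want to consider shipping, sorry if obvious."
print(style_score(direct) > style_score(soft))  # True: same intent, lower score
```

The point of the sketch is that no demographic field is needed for a biased outcome: if the training data rewarded one style, the model penalizes the other, whoever wrote it.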

Sarah Dean, an assistant professor of computer science at Cornell, said platforms like LinkedIn often use entire profiles in addition to user behavior when determining what content to promote. That includes job postings on a user’s profile and the type of content they typically interact with.

“A person’s demographics can influence ‘both sides’ of the algorithm: what they see and who sees what they post,” Dean said.

LinkedIn told TechCrunch that its AI systems look at hundreds of signals to determine what to push to a user, including insights from a person’s profile, network and activity.
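As a rough illustration of LinkedIn’s stated claim (this is a hypothetical sketch, not its actual system; all signal names and weights are invented), a ranker can combine many profile and activity signals while ignoring demographic fields entirely:

```python
# Hypothetical feed-ranking sketch. Signal names and weights are invented
# for illustration and are NOT LinkedIn's actual system.

WEIGHTS = {
    "topic_match": 0.5,      # overlap between post topics and viewer interests
    "author_affinity": 0.3,  # how often the viewer engaged with this author
    "recency": 0.2,          # newer posts score higher
}

def rank_score(signals: dict) -> float:
    """Combine whitelisted signals into one score; any field not in
    WEIGHTS (e.g. gender, age, race) is ignored even if present."""
    return sum(WEIGHTS[k] * v for k, v in signals.items() if k in WEIGHTS)

post = {"topic_match": 0.8, "author_affinity": 0.5, "recency": 0.9, "gender": 1.0}
print(round(rank_score(post), 2))  # prints 0.73; "gender" is never consulted
```

Even in a system like this, the experts’ caveat applies: signals such as writing style or network composition can correlate with demographics, so excluding the fields themselves does not rule out implicit bias.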

“We continually test to understand what helps people find the most relevant, timely content for their careers,” the spokesperson said. “Member behavior also determines the feed, what people click, save and interact with every day, and what formats they like or don’t like. Of course, this behavior also determines what appears in feeds in addition to any updates from us.”

Chad Johnson, a sales expert active on LinkedIn, described the changes as deprioritizing likes, comments and reposts. The LLM system “no longer cares how often you post or what time of day you post,” Johnson wrote in a message. “It matters whether your writing demonstrates understanding, clarity, and value.”


All of this makes it difficult to determine the true cause of any #WearthePants results.

People just don’t like the algo

Nevertheless, it seems that many people, regardless of gender, don’t like or understand LinkedIn’s new algorithm – whatever it is.

Shailvi Wakhulu, a data scientist, told TechCrunch that she averaged at least one post per day for five years and saw thousands of impressions. Now she and her husband are lucky to see a few hundred. “It’s demotivating for content creators with large, loyal followings,” she said.

One man told TechCrunch that he saw a 50% drop in engagement in recent months. Yet another man said he has seen post impressions and reach increase by more than 100% over a similar period. “This is largely because I write about specific topics for a specific audience, and that’s what the new algorithm rewards,” he told TechCrunch, adding that his clients are seeing a similar increase.

But in Marshall’s own experience, she, who is Black, believes that posts about her specific expertise perform worse than posts related to her race. “If Black women only get interaction when they talk about Black women, but not when they talk about their specific expertise, then that’s a bias,” she said.

Dean, the Cornell researcher, believes the algorithm simply amplifies “the signals that are already there.” It could reward certain posts not because of the writer’s demographics, but because they received more responses across the platform. While Marshall may have encountered another form of implicit bias, her anecdotal evidence is not enough to determine that with certainty.

LinkedIn offered some insights into what’s working well now. The company said its user base has grown and, as a result, the number of posts has increased 15% year-over-year, while the number of comments has increased 24% year-over-year. “This means more competition in feed,” the company said. Posts about professional insights and career lessons, industry news and analysis, and education or informational content about work, business and the economy are all doing well, it said.

If anything, people are just confused. “I want transparency,” Michelle said.

However, since businesses have always closely guarded their content selection algorithms as trade secrets, and since transparency can lead to those algorithms being gamed, that’s a big ask, and one that will probably never be satisfied.
