Sympathy, and Job Offers, for Twitter’s Misinformation Experts

by SITKI KOVALI

In the weeks since Elon Musk took over Twitter, dozens of people responsible for keeping dangerous or inaccurate material in check on the service have posted on LinkedIn that they resigned or lost their jobs. Their statements have drawn a flood of condolences — and attempts to recruit them.

Overtures arrived from rival tech services, retailers, consulting firms, government contractors and other organizations that hope to enlist the former Twitter employees — and those recently let go by Meta and the payments platform Stripe — to track and combat false and toxic information on the internet.

Ania Smith, the chief executive of TaskRabbit, the Ikea-owned marketplace for gig workers, commented this month on a former Twitter employee’s post, suggesting that he apply for a product director role that would work in part on trust and safety tools.

“The war for talent has really been exceptional in the last 24 months in tech,” Ms. Smith said in an interview. “So when we see layoffs happening, whether it’s at Twitter or Meta or other companies, it’s definitely an opportunity to go after some of the very high-caliber talent we know they hire.”

She added that making users feel safe on the TaskRabbit platform was a key component of her company’s success.

“We can’t really continue growing without investing in a trust and safety team,” she said.

The threats posed by conspiracy theories, misleadingly manipulated media, hate speech, child abuse, fraud and other online harms have been studied for years by academic researchers, think tanks and government analysts. But increasingly, companies in and outside the tech industry see those harms as a potentially expensive liability, especially as more work is conducted online and regulators and clients push for stronger guardrails.

On LinkedIn, under posts eulogizing Twitter’s work on elections and content moderation, comments promoted openings at TikTok (threat researcher), DoorDash (community policy manager) and Twitch (trust and safety incident manager). Managers at other companies solicited suggestions for names to add to recruiting databases. Google, Reddit, Microsoft, Discord and ActiveFence — a four-year-old company that said last year that it had raised $100 million and that it could scan more than three million sources of malicious chatter in every language — also have job postings.

Alethea founder Lisa Kaplan is trying to recruit former Twitter employees. Credit: Carolina Andrade for The New York Times

The trust and safety field barely existed a decade ago, and the talent pool is still small, said Lisa Kaplan, the founder of Alethea, a company that uses early-detection technology to help clients protect against disinformation campaigns. The three-year-old company has 35 employees; Ms. Kaplan said she hoped to add 23 more by mid-2023 and was trying to recruit former Twitter employees.

Disinformation, she said, is like “the new malware” — a “digital reality that is ultimately going to impact every company.” Clients that once employed armed guards to stand outside data rooms, and then built online firewalls to block hackers, are now calling firms like Alethea for backup when, for example, coordinated influence campaigns target public perception of their brand and threaten their stock price, Ms. Kaplan said.

“Anyone can do this — it’s fast, cheap and easy,” she said. “As more actors get into the practice of weaponizing information, either for financial, reputational, political or ideological gain, you’re going to see more targets. This market is emerging because the threat has risen and the consequences have become more real.”

Disinformation became widely recognized as a significant problem in 2016, said John Kelly, who was an academic researcher at Columbia, Harvard and Oxford before founding Graphika, a social media analysis firm, in 2013. The company’s employees are known as “the cartographers of the internet age” for their work building detailed maps of social media for clients such as Pinterest and Meta.

Graphika’s focus, initially on mining digital marketing insights, has steadily shifted toward topics such as disinformation campaigns coordinated by foreign actors, extremist narratives and climate misinformation. The transition, which began in 2016 with the discovery of Russian influence operations targeting the U.S. presidential election, intensified with the onslaught of Covid-19 conspiracy theories during the pandemic, Mr. Kelly said.

“The problems have spilled out of the political arena and become a Fortune 500 problem,” he said. “The range of online harms has expanded, and the range of people doing the online harm has expanded.”

Graphika CEO John Kelly said his company shifted its focus with the 2016 election. Credit: Michelle V. Agins/The New York Times

Efforts to tackle misinformation and disinformation have included research initiatives from top-tier universities and policy institutes, media literacy campaigns and initiatives to repopulate news deserts with local journalism outfits.

Many social media platforms have set up internal teams to address the problem or have outsourced content moderation work to large companies such as Accenture, according to a July report from the German Marshall Fund, a geopolitical think tank. In September, Google completed its $5.4 billion acquisition of Mandiant, an 18-year-old company that tracks online influence activities and offers other cybersecurity services.

A growing group of start-ups, many of which rely on artificial intelligence to root out and decode online narratives, conducts similar work, often for clients in corporate America.

Alethea wrapped up a $10 million fund-raising round in October. Also last month, Spotify said it bought the five-year-old Irish company Kinzen, citing its grasp of “the complexity of analyzing audio content in hundreds of languages and dialects, and the challenges in effectively evaluating the nuance and intent of that content.” (Months earlier, Spotify found itself trying to quell an uproar over accusations that its star podcast host, Joe Rogan, was spreading vaccine misinformation.) Amazon’s Alexa Fund participated in a $24 million funding round last winter for five-year-old Logically, which uses artificial intelligence to identify misinformation and disinformation on topics such as climate change and Covid-19.

“Along with all the fantastic aspects of the web come new problems like bias, misinformation and offensive content to name a few,” Biz Stone, a Twitter co-founder, wrote on a crowdfunding page last year for Factmata, another A.I.-fueled disinformation defense operation. “It can be confusing and difficult to cut through to the trusted, truthful information.”

Google just acquired the cybersecurity firm Mandiant. Credit: Annegret Hilse/Reuters

The businesses are hiring across a broad spectrum of trust and safety roles despite a host of recent layoff announcements.

Companies have courted experts at recognizing content posted by child abusers or human traffickers, as well as former military counterterrorism agents with advanced degrees in law, political science and engineering. Moderators, many of whom work as contractors, are also in demand.

Mounir Ibrahim, the vice president of public affairs and impact for Truepic, a tech company specializing in image and digital content authenticity, said many early clients were banks and insurance companies that relied more and more on digital transactions.

“We are at an inflection point of the modern internet right now,” he said. “We are facing a tsunami of generative and synthetic material that is going to hit our computer screens very, very soon — not just images and videos, but text, code, audio, everything under the sun. And this is going to have tremendous effects on not just disinformation but brand integrity, the financial tech world, on the insurance world and across nearly every vertical that is now digitally transforming on the heels of Covid.”

Truepic was featured alongside companies such as Zignal Labs and Memetica in the German Marshall Fund’s report on disinformation-defense start-ups. Anya Schiffrin, the report’s lead author and a senior lecturer at Columbia’s School of International and Public Affairs, said future regulation of disinformation and other malicious content could lead to more jobs in the trust and safety space.

She said regulators around the European Union were already hiring people to help carry out the new Digital Services Act, which requires internet platforms to combat misinformation and restrict certain online ads.

“I’m really tired of these really rich companies saying that it’s too expensive — it’s a cost of doing business, not an extra, add-on luxury,” Ms. Schiffrin said. “If you can’t provide accurate, quality information to your customers, then you’re not a going concern.”
