Facebook is now enmeshed in several investigations into Russia’s interference in the 2016 election. Last week, the company agreed to give Congress 3,000 political ads linked to Russian actors that it sold and ran during the 2016 election cycle; it had previously handed that information to special counsel Robert Mueller. But the details of how the social-networking giant found itself at the center of all of this, and, crucially, what that could mean for President Trump, can easily get lost amid competing headlines about healthcare, hurricanes, and a steadily escalating nuclear standoff with North Korea.
To help, we’re here to walk you through everything we know—and don’t know—about Facebook’s role in the 2016 election, and the subsequent investigations. We’ll update this list of questions and answers as we learn more.
What did Facebook give Mueller?
In early September, Facebook said it had identified $150,000 of political ads purchased by fake accounts linked to Russia. It attributed about $100,000 of that total, or 3,000 ads, to 470 accounts related to a Russian propaganda group called the Internet Research Agency. It found another roughly 2,200 ads worth $50,000 by searching for ads purchased through US internet addresses whose accounts were set to the Russian language. The ads touched on hot-button social issues such as immigration and LGBT rights and, according to a report from The Washington Post, included content aimed at stoking racial resentment against blacks and Muslims. About 25 percent of the ads were geographically targeted to certain regions of the United States. The majority of these ads ran in 2015.
After suspending the accounts and writing a vague blog post on the subject, Facebook remained largely silent about what the ads contained, who they reached, or how they were discovered. But on Sept. 21, Facebook confirmed it had shared the ads with Mueller’s team and would do the same with Congressional investigators. Facebook has not yet agreed to meet with Congress for further questioning.
How did Facebook find these ads?
The only detail Facebook has shared publicly is that it looked for American IP addresses set to the Russian language, then “fanned out” from there, as a Facebook spokesperson put it. That makes it impossible to know whether Facebook has identified all suspect ads, or just those the Russians were laziest about hiding. The spokesperson declined to comment on whether Mueller’s team has access to the company’s investigative process.
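For the technically curious, here is a minimal sketch of what that heuristic might look like. The data model and field names are hypothetical; Facebook has not published its actual investigative tooling.

```python
# Hypothetical sketch of the filter Facebook described: flag ad accounts whose
# interface language is Russian but whose ad purchases came from US IP
# addresses. All names and fields here are invented for illustration.

from dataclasses import dataclass

@dataclass
class AdAccount:
    account_id: str
    interface_language: str  # e.g. "ru", "en"
    purchase_country: str    # country geolocated from the purchase IP address

def flag_suspect_accounts(accounts: list) -> list:
    """Apply the stated heuristic: Russian-language settings plus US IPs."""
    return [
        a for a in accounts
        if a.interface_language == "ru" and a.purchase_country == "US"
    ]
```

“Fanning out” would then presumably mean taking each flagged account and pulling in whatever it shares with other accounts, such as payment methods, pages, and IP addresses. Any buyer who avoided those obvious overlaps would slip through such a filter, which is exactly the concern.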
It’s likely, however, that Facebook’s search has not covered everything. On Sept. 21, during a Facebook Live address, CEO Mark Zuckerberg admitted as much, saying, “We may find more, and if we do, we will continue to work with the government.” We know, for instance, that Internet Research Agency, the propaganda group, has officially shut down. But similar firms, including one called Glavset, operate with the same people at the same addresses. The Facebook spokesperson would not discuss whether its investigation would have caught these other shell companies.
What changes did Zuckerberg announce, and will they make a difference?
During his Facebook Live address, Zuckerberg outlined how the company plans to overhaul its election-integrity processes. For starters, it will require political advertisers to disclose, on the ads themselves, who paid for them. It will also require political advertisers to publicly catalog all of the variations of ads that they target to different Facebook audiences. The goal here is to make it easier for the public to see when politicians send different messages to different groups of people. President Trump has been criticized for using so-called “dark posts” to send messages about the border wall to core supporters that conflict with his more public statements. That kind of targeted advertising is par for the course in the internet age, but now Facebook says it will ensure that when it’s used in politics, the public has more visibility into those messages. Facebook also said it would add 250 people to its election-integrity team to more thoroughly vet who’s buying political ads.
And yet, the question remains: What constitutes a political ad? Are campaigns and Super PACs the only ones subject to this disclosure on Facebook? Or will anyone who wants to advertise about a political issue be subject to the same scrutiny? And what about fake news publishers that pay to boost their own articles? Facebook isn’t providing much detail about how it will implement its plan, but answers to those questions are critical to understanding how effective this self-regulation will be.
Could Russians have placed other ads that Facebook hasn’t yet identified?
Absolutely. In the case of the $150,000 in ads, one digital breadcrumb led to the next, until Facebook uncovered a cohesive effort by the Internet Research Agency to spread misleading information to American voters. It’s easier to spot such a coordinated campaign than it is to find every ally of Vladimir Putin who might have spent a few thousand dollars to give a fake news story some extra exposure. Facebook sold $27 billion in ads in 2016. Combing through that pile of cash for signs of Russian dirty work is a tremendously complex, if not impossible, task.
Is there anything the government can do about this?
Facebook has never been particularly welcoming of government intervention. In 2011, it asked the Federal Election Commission to exempt it from rules requiring political advertisers to disclose in an ad who paid for that ad. Facebook argued its ads should be regulated as “small items,” like campaign buttons. The FEC failed to reach a decision on the issue, so Facebook and other platforms have been allowed to host political ads with no disclosures.
Now, some members of Congress are looking to change that. Democratic Senators Mark Warner and Amy Klobuchar are working on a bill that would require those disclosures and also require tech platforms with more than 1 million users to publicly track all “electioneering communication” purchased by anyone spending at least $10,000 on the platform. The FEC defines electioneering communication as ads “that refer to a federal candidate, are targeted to voters and appear within 30 days of a primary or 60 days of a general election.” For now, the term applies only to broadcast ads.
The FEC is also re-opening comments on rules related to online political ads—the same rules the FEC failed to clarify back in 2011. Last week, 20 members of Congress sent a letter to the FEC urging the agency to develop guidelines for platforms like Facebook.
Were these Facebook ads the only way that foreigners tried to influence the 2016 election?
Hardly. Earlier this year, WIRED investigated a wave of fake news sites that emerged in Macedonia last year. The fake news creators wrote phony blogs about Hillary Clinton’s health or the Pope endorsing Trump, and then posted them in key Facebook groups to attract attention. Once the posts drew sufficient traffic, their creators placed Google Ads on their sites to make some extra money. These mostly teenage hoaxsters never needed to touch Facebook’s advertising tools.
Did the Russians use other platforms as well?
Yes. The Alliance for Securing Democracy tracks 600 Russia-linked Twitter accounts and analyzes the role they play in promoting certain hashtags. When Twitter meets with Congressional investigators, the use of bots by foreign agents will be central to the conversation. Google, meanwhile, has said it found no evidence of Russians buying ads. But Facebook told WIRED the same thing earlier this summer, before its recent disclosure.
How did the Russians decide which Americans to target with the Facebook ads?
The short answer is we don’t know. There are suspicions that the Russians might have had help from the Trump campaign or its allies. But the Russians may not have needed more than the targeting tools Facebook offers to every advertiser.
Facebook allows any advertiser to upload lists of names or email addresses that it would like to target. In most states, voter files are publicly available for free or for purchase. Advertisers can then design so-called lookalike audiences that have lots in common with the original list. They can target ads based on geography, profession, and interest. Facebook knows the news you read, the posts you like, and what you shop for, along with a million other things about you. The company stitches this information together to make educated guesses about what kind of person you are.
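To make that workflow concrete, here is a rough sketch of the list-upload step. One detail is public: Facebook matches uploaded lists on hashed identifiers, normalizing and SHA-256 hashing email addresses before they are sent. The upload and lookalike calls shown in the comments are hypothetical stand-ins, not Facebook’s real API.

```python
# Sketch of the custom-audience workflow: hash a list of emails (e.g. from a
# public voter file), upload it as an audience, then ask the platform for a
# "lookalike" audience of similar users. Only the hashing step is shown as
# working code; the upload/lookalike functions are hypothetical placeholders.

import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase, trim, and SHA-256 hash an email for audience matching."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

voter_emails = ["voter1@example.com", "voter2@example.com"]
hashed_list = [normalize_and_hash(e) for e in voter_emails]

# Hypothetical stand-ins for the real upload and expansion endpoints:
# audience = create_custom_audience(hashed_list)
# lookalike = create_lookalike_audience(audience, country="US", ratio=0.01)
```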
Armed with so much data, a Russian operative would hardly need to call in help. That doesn’t mean they didn’t. It just means we have no evidence so far that they did.
What kind of evidence would there be?
One way to find out if the Trump campaign helped Internet Research Agency would be to compare the targeting criteria the campaign used on Facebook to the targeting criteria the Russian propagandists used. If both groups targeted the same audience, that’s worth looking into. Investigators could do the same with any further suspicious accounts Facebook unearths.
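As a toy illustration of that comparison, investigators could treat each buyer’s targeting parameters as a set and measure how much the sets overlap. A real analysis would use far richer signals, including timing and exact audience definitions; the criteria below are invented.

```python
# Compare two advertisers' targeting criteria with simple set overlap
# (Jaccard similarity). All targeting values here are made up.

def jaccard(a: set, b: set) -> float:
    """Return overlap between two sets, from 0 (disjoint) to 1 (identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

campaign_targets = {"age:45-65", "state:MI", "interest:immigration"}
suspect_targets = {"age:45-65", "state:MI", "interest:border security"}

print(f"Targeting overlap: {jaccard(campaign_targets, suspect_targets):.2f}")
```

A high overlap score alone would prove nothing; it would simply mark audience pairs worth a closer look.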
What about Cambridge Analytica? Could it have been involved?
Cambridge Analytica was President Trump’s data-mining firm during the 2016 election. The Trump team, led by digital director Brad Parscale, worked with Cambridge, as well as the Republican National Committee, to analyze data about the American electorate to guide decisions about where and how to advertise on television and online. That’s not unusual. Hillary Clinton’s campaign tapped similar analyses from a data-analytics firm called BlueLabs, as well as the Democratic National Committee.
What is unusual about Cambridge Analytica is its backstory. The company, which is an American spinoff of the UK-based firm SCL Elections, is financially backed by billionaire financier Robert Mercer, who spends liberally to advance his fiercely conservative views.
Cambridge has also been accused of amassing data from Facebook users—such as what they like on the site and who their friends are—via silly personality quizzes. (Facebook has since closed this privacy gap.) Cambridge combined those results with data from elsewhere to sort people into categories based on their personality types, so advertisers could send them specially tailored messages. Cambridge calls this approach psychographic targeting, as opposed to demographic targeting.
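Here is a deliberately simplified illustration of psychographic bucketing. Cambridge Analytica’s actual models are proprietary; the traits, thresholds, and message labels below are invented.

```python
# Toy psychographic targeting: map personality-trait scores (e.g. derived
# from quiz answers) to a message style. Everything here is illustrative.

def pick_message_style(profile: dict) -> str:
    """profile maps Big Five traits to scores between 0 and 1."""
    if profile.get("neuroticism", 0.0) > 0.7:
        return "security-focused message"
    if profile.get("openness", 0.0) > 0.7:
        return "change-focused message"
    return "default message"

print(pick_message_style({"neuroticism": 0.8, "openness": 0.3}))
# -> security-focused message
```

Put simply, demographic targeting asks who you are; psychographic targeting, at least as Cambridge pitches it, asks what kind of appeal you will respond to.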
During the election cycle, some Republican operatives outside the Trump campaign accused the company of overselling its technical wizardry. Now, Cambridge’s approach is viewed by some, including Hillary Clinton, as a form of ugly psychological warfare that was unleashed on the American electorate.
Its parent company, SCL, has been known to use questionable methods in other countries’ elections. In Trinidad, it reportedly staged graffiti to give voters the impression that SCL’s client had the support of Trinidadian youth. And Cambridge is currently being investigated in the UK for the role it may have played in swaying voters to support Brexit. It’s worth noting, though, that the UK has stricter laws around how citizens’ data can be used near elections. The US does not have the same protections.
Is Cambridge involved with these Russian ads on Facebook?
Not as far as we know. While Cambridge helped the Trump campaign target its own advertisements, there’s no evidence so far that Cambridge did the same for any Russians. Whether any connection exists, of course, is a key question both Mueller’s team and Congress will continue to investigate.
In a recent BBC interview, Theresa Hong, the former digital content director for the Trump campaign, said Facebook, Google, and Twitter had offices inside the Trump campaign headquarters during the campaign. Is that normal?
Tech companies regularly assign dedicated staffers to political campaigns that advertise on their platforms. Clinton’s campaign also worked closely with Facebook and other tech companies, if not physically side-by-side.
Still, perhaps the least secretive part of the whole affair is the outsized role digital advertising played in the Trump campaign’s strategy. Shortly after the election, Parscale told WIRED, “Facebook and Twitter were the reason we won this thing. Twitter for Mr. Trump. And Facebook for fundraising.” The Trump campaign ran as many as 50,000 variants of its ads each day on Facebook, tweaking the look and messaging to see which got the most traction. Days after the election, Andrew Bleeker, who ran digital advertising for the Clinton campaign, acknowledged that the Trump team used digital platforms “extremely well.” He said the Trump campaign “spent a higher percentage of their spending on digital than we did.”
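The mechanics of running tens of thousands of variants are straightforward in principle: measure engagement per variant, keep the winners, and spin off new tweaks from them. The sketch below uses invented numbers purely to illustrate the loop.

```python
# Toy version of iterate-and-reallocate ad testing: rank variants by
# click-through rate (CTR) and concentrate spend on the top performers.

variants = {
    "headline_A/image_1": {"impressions": 10_000, "clicks": 180},
    "headline_A/image_2": {"impressions": 10_000, "clicks": 95},
    "headline_B/image_1": {"impressions": 10_000, "clicks": 240},
}

def ctr(stats: dict) -> float:
    return stats["clicks"] / stats["impressions"]

ranked = sorted(variants.items(), key=lambda kv: ctr(kv[1]), reverse=True)
for name, stats in ranked:
    print(f"{name}: {ctr(stats):.2%} CTR")
# The next day's variants would be mutations of today's best performers.
```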
Could Facebook have prevented this?
That’s complicated. Obviously, Facebook was bluffing when it told the FEC in 2011 that disclosing on an ad who paid for it would be impractical; that is precisely what Zuckerberg recently announced the company will do.
Still, it’s unclear if those steps would have prevented Russia from spreading misinformation on Facebook. For starters, while the ads Internet Research Agency purchased were about election issues, they weren’t explicitly about the 2016 election. It’s not clear those would have been considered election ads under FEC guidelines. And even if they were, the Supreme Court has given nonprofit groups wide latitude to raise money to influence elections both online and offline without revealing their donors. That’s why it’s called dark money.
Senator Warner recently said, “[Facebook] took down 50,000 accounts in France. I find it hard to believe they’ve only been able to identify 470 accounts in America.” What did he mean, and does he have a point?
Yes and no. In April, Facebook disclosed that it suspended 30,000 accounts, not 50,000, that were spreading fake news in France ahead of elections there. It did not explicitly tie those accounts to Russian actors. Instead, it identified those accounts after updating its tools for identifying fake accounts, adding flags on accounts that, for instance, repeatedly post the same content or suddenly produce a spike in activity.
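Those two flags are easy to caricature in code. The thresholds below are arbitrary; Facebook has not published its real ones.

```python
# Sketch of the two fake-account signals Facebook described: repeatedly
# posting identical content, and sudden spikes in activity.

from collections import Counter

def repeats_same_content(posts: list, threshold: int = 5) -> bool:
    """Flag accounts that post any single piece of content threshold+ times."""
    return any(n >= threshold for n in Counter(posts).values())

def has_activity_spike(daily_post_counts: list, multiplier: int = 10) -> bool:
    """Flag accounts whose daily volume jumps to multiplier x their average."""
    if not daily_post_counts:
        return False
    avg = sum(daily_post_counts) / len(daily_post_counts)
    return any(n > multiplier * avg for n in daily_post_counts)
```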
That means the French example is not directly comparable to the election ads purchased by accounts that Facebook connected to Russia. Facebook is not asserting that those 470 accounts represent the totality of fake accounts on the platform. They’re merely the accounts Facebook has so far linked to Russia. That said, Warner’s point is well taken: without more information on how Facebook found those accounts, it’s impossible to know what the company may have missed.
ProPublica recently found that it’s possible to target ads on Facebook to categories of people who identify as “Jew haters” and other anti-Semitic terms. How does that relate to this?
These are distinct issues, but there is some overlap. ProPublica recently reported that it had purchased $30 of ads targeted at users Facebook thought might be interested in terms like “Jew hater,” “how to burn Jews,” and “why Jews ruin the world.” Facebook’s advertising tool had scraped these terms from users’ profiles and turned them into categories advertisers could target. Those categories covered only a tiny fraction of Facebook’s 2 billion users, but ProPublica showed that it could assemble such a cohort and send its members targeted ads in 15 minutes. Facebook temporarily changed its ad tool to prevent these user-generated terms from being turned into advertising categories.
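The mechanism ProPublica exposed is simple to sketch: free-text profile fields get aggregated, and any value typed by enough users becomes a targetable category. This is inferred from ProPublica’s reporting, not from Facebook’s actual systems.

```python
# Sketch of user-generated targeting categories: count what users type into
# free-text profile fields and expose popular values as audiences. Without
# human review, offensive entries become targetable too.

from collections import Counter

profile_fields = ["physics", "nursing", "<offensive term>", "physics"]
counts = Counter(profile_fields)

# Any value typed by at least min_users people becomes a category:
min_users = 2
targetable_categories = {term: n for term, n in counts.items() if n >= min_users}
print(targetable_categories)  # {'physics': 2}
```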
The company views this as a separate issue from Russian ads. And yet, both incidents point to a lack of oversight of Facebook’s advertising platform. The reason Russians could easily buy political ads to sway American voters is the same reason anyone can target ads to neo-Nazis: Facebook’s advertising systems are largely automated, and anyone can set up an ad campaign with little human oversight from Facebook.
Last week, Facebook Chief Operating Officer Sheryl Sandberg issued a statement saying Facebook had restored the ability of advertisers to target user-generated terms, but had taken measures to weed out the bad ones. It’s also adding additional human oversight to the process of selling ads, and is setting up a system through which anyone can report abuses of the ad tool. Something tells us they’re in for an onslaught.