ProPublica discovered last Thursday that Facebook’s ad tools could target racists and anti-Semites using the very information those users self-report. That initial report kicked off a series of experiments by news organizations, which found that Google’s search engine would not only let you place ads next to search results for hateful rhetoric, but that its automated processes would even suggest similar, equally hateful search terms to sell ads against. Twitter was also caught up in the controversy when its filtering mechanisms failed to prevent advertisers from targeting terms like “Nazi” and the n-word, an issue the company inexplicably attributed to “a bug we have now fixed.” This week, Instagram converted a journalist’s post about a violent threat she received into an ad that it then served to her contacts.
In the most thorough response to the ongoing debacle, Facebook COO Sheryl Sandberg said Tuesday that the issue was the result of a failure on the company’s part. “We never intended or anticipated this functionality being used this way — and that is on us,” Sandberg wrote. “And we did not find it ourselves — and that is also on us.” Sandberg said that, as someone who is Jewish, she was “disgusted and disappointed” by the ability to target ads based on an affinity for Hitler. In an attempt to rectify that oversight, Facebook is now increasing human moderation of its automated processes; improving enforcement of its ad guidelines to prevent targeting that uses attacks on race, ethnicity, gender, and religious affiliation; and creating a more robust user reporting mechanism to cut down on abuses.
But the outrage and indignation that an executive like Sandberg displays, though likely genuine, also feel superficial. Her company’s success was built on its propensity, even eagerness, to perform exactly the kind of targeting revealed last week. In other words, the embarrassment isn’t a sign that the platform’s ad system is broken, but the exact opposite: it’s evidence that the system is working as designed. Facebook, Google, and others have built automated systems that blindly vacuum up, and then monetize, such a wealth of data that the events of last week were almost inevitable.
“These kinds of controversies will keep happening because the scale and expectations around how many employees are needed to oversee the content or ad programs is teeny compared to the number of ads being served,” says Kendra Albert, a lawyer and fellow at Harvard Law School’s Cyberlaw Clinic. “I think it’s true that often these companies could not have reached the scale that they reached without automating things that traditionally had a human in the loop.”
“Let’s be clear: what Facebook is doing now won’t have any effect on your ability to target anti-Semites on Facebook. You just won’t be able to type in ‘anti-Semite’ and do it that way,” says Eli Pariser, author of The Filter Bubble and founder of the viral media site Upworthy. “The inference-based targeting, which is how most targeting works, makes it almost impossible to stop those groups from doing so.” You can still, of course, target visitors of certain Facebook pages, as well as any number of subtle signifiers that speak volumes about a user’s politics, tastes, and attitudes.
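Pariser’s point is easier to see with a concrete example. Below is a minimal, purely hypothetical sketch in Python of how inference-based targeting can rebuild an audience from proxy signals alone; none of the page names, weights, or thresholds comes from Facebook, and nothing here reflects its actual system. All of it is invented for illustration.

```python
# Purely illustrative: NOT Facebook's actual system. All page names, weights,
# and the threshold below are invented for this sketch.

# Hypothetical proxy signals: pages whose audiences are assumed to correlate
# with a banned interest category, each with a made-up correlation weight.
PROXY_SIGNALS = {
    "page:revisionist_history_group": 0.9,
    "page:extremist_meme_page": 0.8,
    "page:fringe_news_outlet": 0.5,
}


def audience_score(user_likes: set) -> float:
    """Sum the weights of the proxy signals a user exhibits."""
    return sum(weight for page, weight in PROXY_SIGNALS.items() if page in user_likes)


def build_audience(users: dict, threshold: float = 1.0) -> list:
    """Select users whose combined proxy score crosses the threshold,
    without ever naming the banned category directly, which is why
    keyword-level bans don't reach this kind of targeting."""
    return [uid for uid, likes in users.items() if audience_score(likes) >= threshold]


if __name__ == "__main__":
    users = {
        "user_a": {"page:revisionist_history_group", "page:fringe_news_outlet"},
        "user_b": {"page:cooking_fans"},
    }
    print(build_audience(users))  # ['user_a'] is flagged by proxy signals alone
```

The point of the sketch: the banned term never appears anywhere in the targeting criteria, so removing it from a search box changes nothing about the underlying audience.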
Pariser thinks this controversy is a useful public education moment, because it offers a vivid demonstration of how Facebook’s system operates. But, as part of the larger conversation around the social network’s role in society and its responsibilities to regulate user behavior, we’re still largely in uncharted territory. “I don’t envy them,” Pariser says of Facebook’s role.
Neither does Albert, who says the company is stuck between a rock and a hard place. “Unless companies are thinking really proactively about how their platforms are going to be abused, you’re going to keep seeing instances where organizations will find ways that these targeting mechanisms will be used in ways that the company didn’t intend or that has really negative results,” Albert says.

Sandberg said as much in her response, noting how Facebook “never intended or anticipated” ads that targeted “Hitler did nothing wrong,” even as it effectively gave anyone the tools to do so. But it seems that platform companies like Facebook, Google, and Twitter keep finding themselves in these positions — be it for hosting ISIS propaganda or accidentally demonetizing inoffensive YouTube videos or censoring historic war photography — because it’s easier to build and deploy a piece of technology before, not after, thinking through all its implications.
Albert says that when new technology arrives on the scene, society is often forced to rethink previously unregulated behavior. This change often occurs after the fact, when we discover something is amiss. “The speed at which this tech is rolled out to the public can make it hard for society to keep up,” Albert adds. “When you’re trying to build as big as possible or as fast as possible, it’s easy for folks who are more skeptical or concerned to have the issues they’re raising left by the wayside, not out of maliciousness but because, ‘Oh, we have to meet this ship date.’”
Revelations like last week’s are bound to recur, and there are likely few, if any, concrete solutions that would weed them out in a way that makes everyone happy. But the onus is on tech companies like Facebook and Google to improve. Both grew at an astronomical pace through a novel combination of unprecedented reach and data collection, cemented by market dominance and the low overhead of a largely automated system. The failure to anticipate these edge cases is a symptom of their insatiable quest for growth, mixed with a lack of meaningful human oversight.
CEO Mark Zuckerberg’s outlook has shifted from last November, when he denied that Facebook had any influence on the US election. Now, with evidence that a pro-Russian propaganda group bought thousands of dollars of political Facebook ads, and a growing realization regarding Facebook’s unprecedented role in society, Zuckerberg can no longer ignore the situation. In a detailed Facebook Live video on Thursday, the Facebook chief said the company will improve transparency around political advertising and plans to put more resources toward protecting election integrity. While distinct from the issue of hateful ad targeting, it’s another acknowledgment that Facebook has failed its users by designing a platform that fosters, instead of prevents, such manipulation.

Reckoning with its ad sales model is just one of the hard facts Facebook is waking up to. (Google, whose search engine is less of a lightning rod for controversy, has remained more tight-lipped.) The broader issue, the one this ad controversy illustrates, is Facebook’s inability to grapple with the power and influence it has amassed, and how vulnerable that influence is to bad actors eager to exploit it.
“You wake up one morning and you’re mayor of a city, and maybe you never wanted to be a mayor and people are asking, ‘Why does the water run here and not there,’ and ‘What are we going to do about trash pickup,’” Pariser says. “I don’t know that Facebook set out to have that role, but by virtue of being the place where the city was built, it’s now got some responsibility to sort those things out.”