The web’s biggest social media companies have been targeting ads to racists. Facebook, Google and Twitter have allowed advertisers to target groups expressing interest in topics such as “Jew hater,” “How to burn jews” and “why jews ruin the world.” Facebook’s advertising algorithm went even further, suggesting other racist and hateful interests to advertisers, including search terms like “Hitler did nothing wrong.” After being notified by ProPublica, Facebook removed the antisemitic categories.
But that did not solve the problem entirely. The online magazine Slate used Facebook’s ad targeting to create ads aimed at those interested in “Ku-Klux-Klan” and other white nationalist interests.
ProPublica first attempted to purchase three ads, or “promoted posts,” using Facebook’s targeting tool. At first the ads were rejected, but not for the reason you might think: the placement was refused because the number of Facebook users matching the racist terms fell beneath a pre-programmed minimum audience size. ProPublica then added a larger category to “Jew hater” and the others, and Facebook’s ad tool declared the selected audience “great!” Fifteen minutes later the company’s ad system had approved all three ads.
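What this episode suggests is that the only automated gate the ads hit was a size check, not a content check. A minimal sketch of that kind of review logic, with hypothetical names and numbers rather than Facebook’s actual code, might look like this:

```python
# A minimal sketch (hypothetical names and numbers, not Facebook's code)
# of an ad review gate that checks audience size but never content.

MIN_AUDIENCE_SIZE = 1_000  # hypothetical pre-programmed floor

def review_ad(targeted_categories, audience_sizes):
    """Approve the ad if the combined audience is large enough; the
    category names themselves are never inspected."""
    total = sum(audience_sizes.get(c, 0) for c in targeted_categories)
    if total < MIN_AUDIENCE_SIZE:
        return "rejected: audience too small"
    return "approved"

# Adding one broad, innocuous category is enough to clear the floor.
sizes = {"jew hater": 120, "some larger interest": 500_000}
print(review_ad({"jew hater"}, sizes))                          # rejected
print(review_ad({"jew hater", "some larger interest"}, sizes))  # approved
```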
Facebook is not alone. BuzzFeed discovered that Google, the world’s largest advertising platform, had similar problems with its ad targeting, which allowed advertisers to target people searching for phrases such as “black people ruin everything” and “jews control the media.”
Like Facebook’s, Google’s algorithm auto-suggested similar phrases, such as “the evil jew.” To confirm its findings, BuzzFeed ran the ads and verified that they did indeed appear on the web.
After BuzzFeed’s report, Google disabled the keywords used in the ad buy. However, according to BuzzFeed, the search term “blacks destroy everything” remained.
Sridhar Ramaswamy, Google’s senior vice president of advertising, said in an email: “We’ve already turned off these suggestions and any ads that made it through, and will work harder to stop this from happening again.”
Twitter, too, was found to have algorithms that played into racism. According to the Daily Beast, Twitter allowed the targeting of ads to racists. Its algorithm let advertisers target millions of users searching terms like “wetback” and “nigger.” The Daily Beast was also able to successfully place the ads online. And, like Facebook and Google, Twitter generated suggestions in response to racist terms.
According to Twitter, it has fixed the algorithm that permitted marketers to target racists.
The Daily Beast reported that Twitter Ads returned 26.3 million users who may respond to the term “wetback,” 18.6 million to “Nazi,” and 14.5 million to “nigger.”
Facebook appears to be dealing with a recurring problem: racist ad targeting. In 2016, AACR reported that advertisers were using the platform’s ad targeting to exclude people of color from seeing ads for housing and other services. Facebook uses the term “affinity marketing,” but the practice is also known as “redlining”: denying certain groups access to homes, jobs and other services based on race.
Are these companies undercover racists, or is something else happening here? Technology is supposed to be color-blind, but these issues keep popping up, especially for Facebook.
According to Facebook, its algorithm builds categories based on what users list as their employment or education. People have entered terms such as “Jew Hater” as their jobs, listed their employer as “Jew Killing Weekly Magazine,” or given their education as “Threesome Rape.” As a result, Facebook’s algorithm, which is not designed to understand the meaning of these terms, creates target market categories from them.
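In other words, the categories are built by counting strings, not by understanding them. A minimal sketch of that failure mode, hypothetical and not Facebook’s actual code, could look like this:

```python
# A minimal sketch (hypothetical, not Facebook's actual code) of
# targeting categories built from self-reported profile fields.
# Nothing in this pipeline understands what the strings mean.

from collections import Counter

MIN_CATEGORY_SIZE = 20  # hypothetical threshold

def build_targeting_categories(profiles):
    """Count user-supplied 'field of study' and 'employer' strings;
    any string shared by enough users becomes a targetable category."""
    counts = Counter()
    for profile in profiles:
        for field in ("field_of_study", "employer"):
            value = profile.get(field, "").strip().lower()
            if value:
                counts[value] += 1
    return {term for term, n in counts.items() if n >= MIN_CATEGORY_SIZE}
```

Whether the string is “Nursing” or “Jew Hater” makes no difference to such a pipeline; only the count does.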
Some experts believe that many decision-making algorithms are trained on data sets that do not include a diverse range of people.
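To see why the training data matters, here is a minimal, self-contained sketch, using synthetic data and scikit-learn rather than any company’s real pipeline: a classifier trained on a mix in which one group is heavily under-represented ends up measurably less accurate on that group.

```python
# A minimal sketch (synthetic data, hypothetical groups): training on a
# 95%/5% mix of two populations yields a visible accuracy gap on the
# under-represented group, without any racist intent in the code.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Toy features whose distribution differs by group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > shift * 5).astype(int)
    return X, y

# 95% of the training examples come from group A, 5% from group B.
Xa, ya = make_group(9_500, shift=0.0)
Xb, yb = make_group(500, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on balanced held-out sets, the under-represented group fares worse.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(2_000, shift)
    print(f"{name} accuracy: {model.score(Xt, yt):.2f}")
```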
Graphic designer Johanna Burai created the World White Web project after she searched for an image of human hands. Her Google search returned images of hands that were almost exclusively white.
Google responded by saying its image search results are “a reflection of content from across the web, including the frequency with which types of images appear and the way they’re described online” and are not connected to its “values”.
Joy Buolamwini, a postgraduate student at the Massachusetts Institute of Technology, launched The Algorithmic Justice League (AJL) in November 2016.
Buolamwini, a dark-skinned African-American, was attempting to use facial recognition software for a project, but the program could not process her face.
“I found that wearing a white mask, because I have very dark skin, made it easier for the system to work. It was the reduction of a face to a model that a computer could more easily read.”
It was not the first time Buolamwini had experienced the problem. Once before, she had to ask a lighter-skinned roommate to help her.
“I had mixed feelings. I was frustrated because this was a problem I’d seen five years earlier was still persisting,” she said. “And I was amused that the white mask worked so well.”
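For context, off-the-shelf face detection looks roughly like the sketch below, which uses OpenCV’s stock Haar-cascade detector and a hypothetical image file. When a detector’s training data under-represents darker skin, the failure is silent: it simply returns zero faces, which is the behavior Buolamwini describes.

```python
# A minimal sketch of a stock face-detection pipeline using OpenCV's
# bundled Haar cascade. 'portrait.jpg' is a hypothetical file name.

import cv2  # pip install opencv-python

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(image_path):
    """Return how many faces the detector finds, or None if the image
    cannot be read."""
    image = cv2.imread(image_path)
    if image is None:
        return None
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# A count of 0 is the silent failure described above: the system
# simply does not "see" a face.
print(count_faces("portrait.jpg"))
```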
But are technology and these algorithms really racist? Maybe not as much as some would have us believe. Algorithms are programs, and programs are created by people. So it is not unthinkable that the racism and biases of those people creep into the software.