Online targeted advertising algorithms can have serious reputational consequences for brands
When, in late May, women’s groups fired off some 60,000 tweets and 5,000 emails to advertisers, protesting against the appearance of their ads on Facebook pages glorifying violence against women, the giant social networking site did more than acknowledge the complaints. It took action – though only after the likes of American Express and Nissan suspended their marketing campaigns.
Organisers of the online protest declared victory, with Laura Bates of the Everyday Sexism Project writing in the Financial Times of the considerable risks posed by a new form of “targeted” advertising now present on most social media.
“The response of several companies, such as Dove, who said they would ask for their ads to be removed from the pages in question, but would not pull them from Facebook altogether, showed a fundamental lack of understanding of how this advertising works,” wrote Bates.
Targeted advertising identifies that a person is likely to buy a particular product, and then automatically places ads for that product on whatever page he or she visits. It’s why Facebook users see advertisements on their profile page that are tailored to their gender, music taste, or location. But because those ads follow users wherever they go, the reputational risk is considerable.
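The mechanism can be sketched in a few lines of code. This is a hedged, purely illustrative model – the names (`Ad`, `UserProfile`, `select_ad`) are hypothetical and bear no relation to Facebook’s actual ad-serving system – but it shows why a targeted ad follows the user rather than the page:

```python
from dataclasses import dataclass

# Illustrative sketch only: ads are matched to the user's profile,
# not to the page being viewed. Names are invented for this example.

@dataclass
class Ad:
    brand: str
    target_interests: set

@dataclass
class UserProfile:
    user_id: str
    interests: set

def select_ad(user: UserProfile, inventory: list) -> Ad:
    """Pick the ad whose targeting overlaps most with the user's interests."""
    best, best_score = None, 0
    for ad in inventory:
        score = len(ad.target_interests & user.interests)
        if score > best_score:
            best, best_score = ad, score
    return best

def render_page(page_name: str, user: UserProfile, inventory: list) -> str:
    # The same ad is served whatever page the user visits -- which is why
    # a brand's ad can end up next to content it never chose.
    ad = select_ad(user, inventory)
    return f"[{page_name}] ad: {ad.brand if ad else 'none'}"

user = UserProfile("u1", {"beauty", "music"})
inventory = [Ad("Dove", {"beauty"}), Ad("Nissan", {"cars"})]
print(render_page("Music Fans", user, inventory))        # ad: Dove
print(render_page("Offensive Page X", user, inventory))  # ad: Dove (same ad)
```

Note that the page name never enters the selection logic at all: the placement is driven entirely by the user profile, which is the reputational gap the campaigners identified.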
In one screenshot posted by the #FBRape campaign, a user reported an image of a woman shot in the head with the caption “I like her for her brains”. Below, an automated response from Facebook read: “We reviewed the photo you reported, but found it doesn’t violate Facebook’s community standard.”
Facebook first contended that some of the content was humorous in nature and therefore not actionable hate speech. Later it responded by announcing that it would update its guidelines to ensure its employees were accurately identifying prohibited content. Most notable was a pledge to prevent users from posting such content anonymously, a step applauded by social media experts.
Some called the exchange between Facebook and campaigning groups a watershed moment. Really?
For one thing, no easy technological fix yet exists: an algorithmic solution is not at hand, and no increase in human resources could keep pace manually with what now amounts to more than a billion active users on the site.
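A toy example makes clear why naive automated moderation falls short. The sketch below – a simple keyword blocklist, invented here for illustration and not any real system – both flags benign text and misses harmful text that avoids the listed words:

```python
# Illustrative keyword filter. BLOCKLIST and flags() are invented
# for this example; they do not describe Facebook's moderation.

BLOCKLIST = {"violence", "abuse"}

def flags(post: str) -> bool:
    """Flag a post if any of its words appears on the blocklist."""
    words = set(post.lower().split())
    return bool(words & BLOCKLIST)

print(flags("Report domestic violence hotlines here"))  # True  (false positive)
print(flags("I like her for her brains"))               # False (false negative:
                                                        # caption on a violent image)
```

Context, not keywords, determines whether content is abusive – which is precisely what simple algorithms cannot judge and what human reviewers cannot judge at Facebook’s scale.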
“It’s not necessarily something we have rules in place to regulate. We are concerned with the content of the ads, not where they appear,” says Matt Wilson, a spokesman for the UK Advertising Standards Authority. “That’s a conversation for the advertiser and platform provider.”
Meanwhile, social media sites are ever-more dependent on targeted advertising as a lucrative revenue source. In the first three months of this year, Facebook generated revenues of $1.46bn, a 38% rise over the same period a year earlier, as it rolled out new tools allowing advertisers to target individual users.
Indeed, there’s an opportunity to “sell ads all over the world,” as Facebook chief operating officer Sheryl Sandberg recently said in a conference call with analysts. That would include markets in developing countries such as India, where recently women’s groups professed “shock” at finding community pages produced in Kolkata promoting prostitution with graphic photographs of prostitutes engaged in sexual acts.
“Does Facebook need every single problematic image or site to be broadcast by major social media players for them to take action?” asked one campaigner involved in the effort to have the site removed.
“Brands will want to put their ads next to as many things as they can,” says social media strategist Andrew Grill, “so I think the problem will only get worse.”
Big advertisers have a responsibility to ensure online media owners are doing everything they can to moderate the content they host, Grill adds.
“Brands have to be quick off the mark. They have to be seen as doing the right thing.”