AI and advertising

Most prominent web publishers, such as Facebook, Twitter, WordPress, Reddit and news sites, generate revenue by placing banner, sidebar and in-text adverts on their pages. The ads are tailored to you, and may differ from what others see on the same pages.

Programmatic advertising determines which ads you see on web pages and social media, aiming to show you only products and brands you might plausibly buy. Infectious Media is one company specialising in such targeted advertising. They explain how ads can “be trafficked and targeted to specific users.”

Ad auctions

Every time I visit a webpage with ads, potential advertisers (or their agents) for that site harvest information about where I am located and the type of computer I am using. If the advertisers’ cookies are already stored in my browser, then the advertisers may have a record of my browsing history. They may even have information about my purchasing habits from online retailers, loyalty schemes and other sources.

That data then feeds an algorithm that bids to present an advertisement to me on the page I am visiting. Different advertisers compete to place their ads, and the highest bidder wins. The web user is of course unaware of the process, which completes in milliseconds. A BBC article describes programmatic advertising as an auction between advertisers.
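The bidding step above can be sketched as a toy second-price auction, a mechanism commonly used in real-time bidding. This is a minimal illustration, not any platform’s actual implementation, and the bidder names and bid values are invented:

```python
# Toy sketch of a programmatic ad auction (second-price style).
# All bidders and bid values are invented for illustration; real
# systems derive bids from user data in milliseconds.

def run_auction(bids):
    """bids: dict of advertiser -> bid amount. Returns (winner, price paid)."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # In a second-price auction the winner pays the runner-up's bid.
    price = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, price

bids = {"TravelCo": 0.42, "ShoeBrand": 0.35, "Insurer": 0.50}
winner, price = run_auction(bids)
print(winner, price)  # Insurer wins, but pays only 0.42
```

The second-price rule is one reason the process can run unattended: each bidder can simply bid what an impression is worth to them, and the market price emerges from the runner-up.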

Misplaced ads

Such automation carries risks for advertisers. The data and algorithms these systems use constitute a kind of artificial intelligence, drawing mainly on advanced pattern-detection techniques. But over millions of web visits, at least a few ads are bound to appear misplaced.

Automated ad placement cannot always take full account of the content of the pages on which ads appear, and advertisers are not indifferent to content. The problem is particularly acute on YouTube, Facebook, Reddit and blog sites, where content is largely unmoderated.

Ads that appear next to news items can also seem incongruous, even amusing. The magazine Private Eye features a regular segment called “Malgorithms” that highlights inappropriate juxtapositions of ads and content: an ad for luggage appears next to a news report about body parts found in a suitcase; an ad for ovens appears next to a story about Nazi concentration camps. Advertisers don’t want to offend, nor do they want to tarnish their brand image. Google “inappropriate ad placement” for more examples.
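One reason such juxtapositions slip through is that placement systems rarely compare ad and article content directly. A deliberately naive brand-safety check might scan page text for blocklisted terms before placing an ad from a given category; the categories and terms below are invented for illustration, and real systems use far richer models than keyword matching:

```python
# Naive brand-safety filter: block an ad category when the page text
# contains terms from that category's blocklist. Categories and terms
# are invented examples, not any vendor's actual lists.

BLOCKLISTS = {
    "luggage": {"suitcase", "body parts"},
    "ovens": {"concentration camp"},
}

def is_safe_placement(ad_category, page_text):
    """Return True if no blocklisted term for this category appears in the text."""
    text = page_text.lower()
    return not any(term in text for term in BLOCKLISTS.get(ad_category, set()))

print(is_safe_placement("luggage", "Body parts found in a suitcase"))  # False
print(is_safe_placement("luggage", "Ten best city breaks this autumn"))  # True
```

Even this toy version shows the trade-off: blocklists that are too narrow let malgorithms through, while lists that are too broad starve legitimate news pages of ad revenue.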

Presumably the reverse concern also applies: ads that are inappropriate to the ethos of the content, or of the spaces in which they appear, though advertisers seem less concerned about that.

Protect your brand

So a new industry has emerged to protect brands. Crisp, for example, is a UK company that uses big data, AI techniques and human expertise to detect “illegal and toxic web content.” They pitch their operations as solving “the biggest threat to online advertisers and social platforms,” and seek to “identify the highest risk and most shareable content within minutes, reporting it directly to the social networks teams to action or even automate complete take down.”

Identifying malignant web content has obvious social value, but the strategy also aims to help brands preserve their reputations. No respectable brand wants its ads to accidentally endorse terrorist propaganda, violence, hate speech, or incitements to join extremist organisations.

The brand-protection industry covers inappropriate comments, gore videos, scams and profanities, and extends to a company’s wider problems: a PR crisis, a celebrity or executive scandal, negative sentiment about a brand, customer complaints, reports of product failures, fake reviews and IP infringements. See www.crispthinking.com/social-media-risk-management.

What starts as a means of reducing “extremist and illegal content” ends in the protection of advertisers who wish to immunise themselves against criticism, abetted by automation and AI. Does that service extend to shielding from public criticism anyone who can pay: politicians, celebrities, rogue companies and others?

Also see Big Data: a non-theory about everything, Is the high street ruining the Internet? and posts tagged AI.

About Richard Coyne

The cultural, social and spatial implications of computers and pervasive digital media spark my interest. I enjoy architecture, writing, designing, philosophy, coding and media mashups.
