Companies like AT&T and Johnson & Johnson said they would pull their ads from YouTube, as well as Google’s
display advertising business, until they could get assurances that such placement would not happen again.
With more than a billion videos on YouTube, 400 hours of new content being uploaded every minute
and three million ad-supported channels on the platform, Mr. Schindler said it was impossible to guarantee that Google could eradicate the problem completely.
But the issue has gained urgency in recent weeks, as The Times of London and other outlets have written about brands
that inadvertently fund extremists through automated advertising — a byproduct of a system in which YouTube shares a portion of ad sales with the creators of the content those ads appear against.
Google said it had already flagged five times as many videos as inappropriate for advertising,
although it declined to provide absolute numbers on how many videos that entailed.
As other brands started fleeing YouTube, Unilever discovered three instances in which its brands had appeared on objectionable YouTube channels.
But Keith Weed, Unilever's chief marketing officer, decided not to withdraw the company's ads
because the number appearing alongside objectionable content was proportionally small.
Now, teaching computers to understand what humans can readily grasp may be the key to calming fears among big-spending advertisers
that their ads have been appearing alongside videos from extremist groups and other offensive content.