- Developers are turning to machines that use artificial intelligence to spot fake news stories on the internet.
- AdVerif.ai currently works with content platforms and advertising networks in the United States and Europe that don’t want to be associated with false or potentially offensive stories.
- But AdVerif.ai isn’t the only startup that sees an opportunity in providing an AI-powered truth serum for online content.
It may have been the first bit of fake news in the history
of the Internet: in 1984, someone posted on Usenet that the
Soviet Union was joining the network. It was a
harmless April Fools’ Day prank, a far cry from today’s weaponized disinformation campaigns and unscrupulous fabrications designed to turn a quick profit.
In 2017, misleading and maliciously false online content is
so prolific that we humans have little hope of digging ourselves
out of the mire. Instead, it looks increasingly likely that the
machines will have to save us.
One algorithm meant to shine a light in the darkness
is AdVerif.ai, which is
run by a startup of the same name. The artificially intelligent
software is built to detect phony stories, nudity, malware, and a
host of other types of problematic content.
AdVerif.ai, which launched a beta version in November, currently
works with content platforms and advertising networks in the
United States and Europe that don’t want to be associated with
false or potentially offensive stories.
The company saw an opportunity in focusing on a product for
companies as opposed to something for an average user, according
to Or Levi, AdVerif.ai’s founder.
While individual consumers might not worry about the veracity of each story they click on, advertisers and content platforms have something to lose by hosting or advertising bad content. And if they make changes to their services, they can be effective
in cutting off revenue streams for people who earn money creating
fake news. “It would be a big step in fighting this type of
content,” Levi says.
AdVerif.ai scans content to spot telltale signs that
something is amiss—like headlines not matching the body, for
example, or too many capital letters in a headline. It also
cross-checks each story with its database of thousands of
legitimate and fake stories, which is updated weekly.
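The surface checks described above could look something like the following minimal sketch. It is purely illustrative: AdVerif.ai’s actual features, thresholds, and scoring are not public, and the function name and cutoff here are assumptions.

```python
import re

# Hypothetical sketch of the kind of surface heuristics described in the
# article (mismatched headlines, shouty capitalization). The 0.3 cutoff
# is an illustrative assumption, not a known product parameter.
def headline_signals(headline: str, body: str) -> dict:
    words = re.findall(r"[A-Za-z']+", headline)
    caps = [w for w in words if len(w) > 1 and w.isupper()]
    # Signal 1: a headline shouting in capital letters.
    too_many_caps = len(caps) > 0.3 * len(words)
    # Signal 2: headline vocabulary barely overlapping with the body,
    # a crude proxy for "headline doesn't match the story".
    head_vocab = {w.lower() for w in words}
    body_vocab = {w.lower() for w in re.findall(r"[A-Za-z']+", body)}
    overlap = len(head_vocab & body_vocab) / max(len(head_vocab), 1)
    return {"too_many_caps": too_many_caps,
            "headline_body_overlap": overlap}
```

A production system would combine many such signals into a score rather than flag on any one of them.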
Clients see a report for each piece the system has considered, with scores assessing the likelihood that something is fake news, carries malware, or contains anything else they’ve asked the system to look out for, like nudity. Eventually, Levi
says he plans to add the ability to spot manipulated images and
have a browser plugin.
Testing a demo version of AdVerif.ai, the AI recognized
the Onion as satire
(which has fooled many
people in the past). Breitbart stories were classified as “unreliable, right, political, bias.”
It could also tell when a Twitter account was using a brand’s logo even though its links weren’t associated with the brand it was impersonating.
AdVerif.ai not only found that a story on Natural News with the
headline “Evidence points to Bitcoin being an NSA-engineered
psyop to roll out one-world digital currency” was from a
blacklisted site, but identified it as a fake news story popping
up on other blacklisted sites without any references in
legitimate news organizations.
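The cross-referencing behavior described here could be sketched as follows. The lists, domain names, and labels are hypothetical placeholders, not AdVerif.ai’s actual data or logic.

```python
# Hypothetical sketch of the blacklist cross-check described above:
# a story from a blacklisted site that circulates only on other
# blacklisted sites, with no pickup by legitimate outlets, is a
# strong fake-news signal.
BLACKLIST = {"naturalnews.example", "actionnews3.example"}  # illustrative
LEGITIMATE = {"reuters.example", "apnews.example"}          # illustrative

def classify_story(source_domain: str, seen_on: set) -> str:
    if source_domain in BLACKLIST:
        if seen_on & BLACKLIST and not (seen_on & LEGITIMATE):
            return "likely fake"
        return "blacklisted source"
    return "unknown"
```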
Some dubious stories still get through. On a site called Action
News 3, a post headlined “NFL Player Photographed Burning an
American Flag in Locker Room!” wasn’t caught, though it’s
been proved to be a
fabrication. To help the system learn as it goes, its blacklist of fake stories can be updated manually.
AdVerif.ai isn’t the only startup that sees an opportunity in
providing an AI-powered truth serum for online content. Cybersecurity firms in particular have been quick to add bot- and fake-news-spotting operations to their repertoire, pointing out how similar a lot of the methods look to hacking.
Facebook is tweaking its
algorithms to deemphasize fake news in its
news feed, and Google has partnered with fact-checking organizations, so far with uneven results.
The Fake News Challenge, a competition run by volunteers in the
AI community, launched at the end of last year with the goal of
encouraging the development of tools that could help combat fake news.
Delip Rao, one of its organizers and the founder of Joostware, a
company that creates machine-learning systems, said spotting fake
news has so many facets that the challenge is actually going to
be done in multiple steps.
The first step is “stance detection,” or taking one story and
figuring out what other news sites have to say about the topic.
This would allow human fact checkers to rely on stories to validate other stories, and spend less time checking individual claims.
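A first cut at stance detection is simply deciding whether another article is about the same claim at all. The sketch below does that with a bag-of-words cosine similarity; the threshold and labels are illustrative assumptions, not the Fake News Challenge’s actual method (which also distinguishes agree/disagree/discuss among related articles and requires a trained model).

```python
import math
import re
from collections import Counter

def _vec(text: str) -> Counter:
    # Bag-of-words term counts over lowercased tokens.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def stance(headline: str, article: str, threshold: float = 0.2) -> str:
    # Related vs. unrelated is the easiest first split; the 0.2
    # threshold is an illustrative assumption.
    sim = cosine(_vec(headline), _vec(article))
    return "related" if sim >= threshold else "unrelated"
```

Filtering out “unrelated” coverage this way is what lets a fact checker focus only on articles that actually bear on the claim.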
The Fake News Challenge released data sets for teams to use, with
50 teams submitting entries. Talos Intelligence, a cybersecurity
division of Cisco, won the challenge with an algorithm that got
more than 80 percent correct—not quite ready for prime time, but
still an encouraging result.
The next challenge might take on images with overlay text (think memes, but with fake news), a format that is often promoted on social media and is harder for algorithms to break down and understand.
“We want to basically build the best tools for the fact checkers
so they can work very quickly,” Rao said.
Even if a system is developed that is effective in beating back
the tide of fake content, though, it’s unlikely to be the end of
the story. Artificial-intelligence systems are already able to
create fake text, as well as incredibly convincing images and
video (see “Real or Fake? AI Is Making It Very Hard to Know”).
Perhaps because of this, a
study predicted that by 2022, the majority
of people in advanced economies will see more false than true
information. The same report found that even before that happens,
faked content will outpace AI’s ability to detect it, changing
how we trust digital information.
What AdVerif.ai and others represent, then, looks less like the
final word in the war on fake content than the opening round of
an arms race, in which fake content creators get their own AI
that can outmaneuver the “good” AIs (see “AI Could Set Us
Back 100 Years When It Comes to How We Consume News”).
As a society, we may yet have to reevaluate how we get our information.
Author: World Economic Forum