It’s
a common psychological phenomenon: repeat any word enough times, and it
eventually loses all meaning, disintegrating like soggy tissue into
phonetic nothingness. For many of us, the phrase “artificial
intelligence” fell apart in this way a long time ago. AI is everywhere
in tech right now, said to be powering everything from your TV to your
toothbrush, but never have the words themselves meant less.
It shouldn’t be this way.
While the phrase “artificial intelligence” is
unquestionably misused, the technology is doing more than
ever — for both good and bad. It’s being deployed in
health care and
warfare; it’s helping people
make music and write
books; it’s scrutinizing your
resume, judging your
creditworthiness, and tweaking the photos you take on your
phone. In short, it’s making decisions that affect your life whether you like it or not.
This can be difficult to square with the hype and bluster
with which AI is discussed by tech companies and advertisers. Take, for
example, Oral-B’s Genius X toothbrush, one of the many devices unveiled
at CES this year that touted supposed “AI”
abilities.
But dig past the top line of the press release, and all this means is
that it gives pretty simple feedback about whether you’re brushing your
teeth for the right amount of time and in the right places. There are
some clever sensors involved to work out where in your mouth the brush
is, but calling it artificial intelligence is gibberish, nothing more.
When there’s not hype involved, there’s misunderstanding. Press coverage can exaggerate research, sticking a
picture of a Terminator on any vaguely AI-related story. Often this comes down to confusion about what artificial intelligence even
is.
It can be a tricky subject for non-experts, and people often mistakenly
conflate contemporary AI with the version they’re most familiar with: a
sci-fi vision of a conscious computer many times smarter than a human.
Experts refer to this specific instance of AI as artificial
general intelligence, and if we do ever create something like this, it’s likely to be a
long way in the future. Until then, no one is helped by exaggerating the intelligence or capabilities of AI systems.
It’s better, then, to talk about “machine learning” rather
than AI. This is a subfield of artificial intelligence, and one that
encompasses pretty much all the methods having the biggest impact on the
world right now (including what’s called
deep learning). As a phrase, it doesn’t have the mystique of “AI,” but it’s more helpful in explaining what the technology does.
How does machine learning work? Over the past few years,
I’ve read and watched dozens of explanations, and the distinction I’ve
found most useful is right there in the name: machine learning is all
about enabling computers to learn on their own. But what that means is a much bigger question.
Let’s start with a problem. Say you want to create a program that can recognize cats. (It’s
always cats
for some reason.) You could try to do this the old-fashioned way by
programming in explicit rules like “cats have pointy ears” and “cats are
furry.” But what would the program do when you show it a picture of a
tiger? Programming in every rule needed would be time-consuming, and
you’d have to define all sorts of difficult concepts along the way, like
“furriness” and “pointiness.” Better to let the machine teach itself.
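To see why that approach runs out of road, here is a rough sketch of what hand-written rules end up looking like in code. The feature names are hypothetical, invented purely for illustration:

```python
def looks_like_a_cat(animal: dict) -> bool:
    # Every rule here has to be thought up, defined, and typed in by a person.
    return (
        animal.get("has_pointy_ears", False)
        and animal.get("is_furry", False)
        and animal.get("has_whiskers", False)
    )

# A tiger ticks every box, so the hand-coded rules wave it straight through.
tiger = {"has_pointy_ears": True, "is_furry": True, "has_whiskers": True}
print(looks_like_a_cat(tiger))  # True
```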
So you give it a huge collection of cat photos, and it looks through
those to find its own patterns in what it sees. It connects the dots,
pretty much randomly at first, but you test it over and over, keeping
the best versions. And in time, it gets pretty good at saying what is
and isn’t a cat.
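That loop of guessing, testing, and keeping the best versions can be made concrete in a toy sketch. This is not how real systems are trained (they use vast photo collections and methods like gradient descent rather than random tweaking), and every number below is invented, but the shape of the process is the same: start random, score against labeled examples, keep whatever does better.

```python
import random

# Each "photo" is reduced to two made-up numbers (hypothetical features,
# purely for illustration), labeled 1 for cat and 0 for not-a-cat.
examples = [
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.7), 1),   # cats
    ((0.2, 0.3), 0), ((0.1, 0.5), 0), ((0.4, 0.1), 0),   # not cats
]

def predicts_cat(weights, features):
    # The model is just a weighted sum and a threshold; no one has told it
    # anything about ears or fur.
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0.5 else 0

def accuracy(weights):
    correct = sum(predicts_cat(weights, f) == label for f, label in examples)
    return correct / len(examples)

# Connect the dots pretty much randomly at first...
best = [random.uniform(-1, 1), random.uniform(-1, 1)]

# ...then test it over and over, keeping the best version each time.
for _ in range(1000):
    candidate = [w + random.uniform(-0.1, 0.1) for w in best]
    if accuracy(candidate) >= accuracy(best):
        best = candidate

print("accuracy on the toy examples:", accuracy(best))
```

Notice that nothing in that loop mentions cats at all; the only cat-specific thing is the data.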
So far, so predictable. In fact, you’ve probably read an
explanation like this before, and I’m sorry for it. But what’s important
is not reading the gloss but really thinking about what that gloss
implies. What are the side effects of having a decision-making system
learn like this?
Well, the biggest advantage of this method is the most
obvious: you never have to actually program it. Sure, you do a hell of a
lot of tinkering, improving how the system processes the data and
coming up with smarter ways of ingesting that information, but you’re
not telling it what to look for. That means it can spot patterns that
humans might miss or never think of in the first place. And because all
the program needs is data — 1s and 0s — there are so many jobs you can
train it on because the modern world is just stuffed full of data. With a
machine learning hammer in your hand, the digital world is full of
nails ready to be bashed into place.
But then think about the disadvantages, too. If you’re not explicitly
teaching the computer, how do you know how it’s making its decisions?
Machine learning systems
can’t explain their thinking,
and that means your algorithm could be performing well for the wrong
reasons. Similarly, because all the computer knows is the data you feed
it, it might pick up a
biased view of the world, or it might only be good at
narrow tasks
that look similar to the data it’s seen before. It doesn’t have the
common sense you’d expect from a human. You could build the best
cat-recognizer program in the world and it would never tell you that
kittens shouldn’t drive motorbikes or that a cat is more likely to be
called “Tiddles” than “Megalorth the Undying.”
Teaching computers to learn for themselves is a brilliant
shortcut. And like all shortcuts, it involves cutting corners. There’s
intelligence in AI systems, if you want to call it that. But it’s not
organic intelligence, and it doesn’t play by the same rules humans do.
You may as well ask: how clever is a book? What expertise is encoded in a
frying pan?
So where do we stand now with artificial intelligence?
After years of headlines announcing the next big breakthrough (which,
well, they haven’t quite stopped yet), some experts think we’ve reached
something of a
plateau.
But that’s not really an impediment to progress. On the research side,
there are huge numbers of avenues to explore within our existing
knowledge, and on the product side, we’ve only seen the tip of the
algorithmic iceberg.
Kai-Fu Lee, a venture capitalist and former AI researcher,
describes
the current moment as the “age of implementation” — one where the
technology starts “spilling out of the lab and into the world.” Benedict
Evans, another VC strategist,
compares
machine learning to relational databases, a type of enterprise software
that made fortunes in the ’90s and revolutionized whole industries, but
that’s so mundane your eyes probably glazed over just reading those two
words. The point both these people are making is that we’re now at the
point where AI is going to get normal fast. “Eventually, pretty much
everything will have [machine learning] somewhere inside and no-one will
care,” says Evans.
He’s right, but we’re not there yet.
In the here and now, artificial
intelligence — machine learning — is still something new that often goes
unexplained or under-examined. So in this week’s special issue of The Verge,
AI Week, we’re going to show you how it’s all happening right now, how
this technology is being used to change things. Because in the future,
it’ll be so normal you won’t even notice.