Is A.I. Just Marketing Hype?

Early today, Slate pointed out that breakthrough technologies always seem to be “five to 10 years away,” citing numerous tech forecasts (energy sources, transportation, medical/body-related technologies, etc.) containing that exact phrase.

The piece also included some quotes predicting breakthroughs in “Robots/A.I.” in “five to 10 years,” but the earliest was from 2006 and the rest were from the past two years. The lack of older quotes is probably because with A.I., the big breakthrough (the “singularity” that approximates human intelligence) has a fuzzier threshold.

Here are some highlights in the history of A.I. predictions:

  • 1950: Alan Turing predicts a computer will emulate human intelligence (it will be impossible to tell whether you’re texting with a human or a computer) “by the end of the century.”
  • 1970: Life Magazine quotes several distinguished computer scientists saying that “we will have a machine with the general intelligence of a human being” within three to fifteen years.
  • 1983: The huge bestseller The Fifth Generation predicts that Japan will create intelligent machines within ten years.
  • 2002: MIT scientist Rodney Brooks predicts machines will have “emotions, desires, fears, loves, and pride” in 20 years.

Similarly, for at least two decades the futurist Ray Kurzweil has been predicting that the “singularity” is about 20 years away. His current forecast is that it will happen by 2029. Or maybe 2045. (Apparently he made both predictions at the same conference.)

Meanwhile, we’ve got Elon Musk and Vladimir Putin warning about A.I. Armageddon and invasions of killer robots, and yet… have you noticed that when it comes to actual achievements in A.I., there seems to be far more hype than substance?

Perhaps this is because A.I.–as it exists today–is very old technology. The three techniques for implementing A.I. used today–rule-based machine learning, neural networks and pattern recognition–were invented decades ago.

While those techniques have been refined, and “big data” has been added to increase accuracy (as in predicting the next word you’ll type), the results aren’t particularly spectacular, because there have been no real breakthroughs.
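
To see how little magic is involved, here is a minimal sketch (in Python, using a made-up toy corpus, not any vendor’s actual code) of the kind of next-word prediction mentioned above: count which word most often follows each word in a pile of text, then suggest the most frequent follower. More data sharpens the counts, but the underlying technique is simple statistics.

```python
from collections import Counter, defaultdict

# A toy corpus; a real system would use vastly more text ("big data"),
# but the technique is the same: count, then look up.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word (a bigram model).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Suggest the word that most often followed `word` in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # -> 'cat' (seen twice, vs. 'mat'/'sofa' once each)
print(predict_next("sat"))   # -> 'on'
```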

For example, voice recognition is marginally more accurate than 20 years ago in identifying individual spoken words but still lacks any sense of context, which is why, when you’re dictating, inappropriate words always intrude. It’s also why the voice recognition inside voice mail systems is still limited to letters, numbers and a few simple words.

Apple’s Siri is another example. While it’s cleverly programmed to seem as if it’s holding a conversation, it’s easily fooled and often inaccurate, as evidenced by the wealth of Siri “fail” videos on YouTube.

Another area where A.I. is supposed to have made big advances is in strategy games. For years, humans consistently beat computers at the Chinese game of Go. No longer. And computers have long been able to defeat human chess champions.

However, while the ability to play a complex game effectively seems like intelligence, such programs are actually quite stupid. For example, consider a photo of three chess pieces:

The piece on the left is a Knight (obviously) and the piece in the middle is a Queen (again obviously). The piece on the right is called a “Zaraffa” and it’s used in a Turkish variation of chess. If you look at the Zaraffa carefully and you know how to play regular chess, you immediately know its legal moves.

Deep Blue, or any other chess program, could scan that photo for eternity and not “get” it, much less incorporate a “knight plus queen” way of moving into its gameplay. Game-playing programs can’t make mental adjustments that any novice chess player would grasp in a second. They would need to be completely reprogrammed.
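
A minimal sketch (hypothetical Python, not Deep Blue’s actual code) of how game engines typically encode piece movement makes the point: the legal moves are spelled out by the programmer, piece by piece. Nothing in the program can look at an unfamiliar piece and infer its moves; a human has to add the rule by hand.

```python
# Each piece's movement is hard-coded as explicit offsets or directions.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
QUEEN_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1),
                    (1, 1), (1, -1), (-1, 1), (-1, -1)]

def on_board(x, y):
    return 0 <= x < 8 and 0 <= y < 8

def knight_moves(x, y):
    # Every jump is an offset a programmer wrote down.
    return [(x + dx, y + dy) for dx, dy in KNIGHT_OFFSETS if on_board(x + dx, y + dy)]

def queen_moves(x, y):
    # Slide in each direction to the edge of the board (blocking pieces omitted).
    moves = []
    for dx, dy in QUEEN_DIRECTIONS:
        nx, ny = x + dx, y + dy
        while on_board(nx, ny):
            moves.append((nx, ny))
            nx, ny = nx + dx, ny + dy
    return moves

# Supporting a knight-plus-queen "Zaraffa" means a human writing this function;
# the program cannot work it out from a photo of the piece.
def zaraffa_moves(x, y):
    return knight_moves(x, y) + queen_moves(x, y)

print(zaraffa_moves(0, 0))
```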

Similarly, self-driving cars are also frequently cited as a (potentially job-killing) triumph of A.I. However, the technologies they use–object avoidance, pattern recognition, various forms of radar, etc.–are again decades old.

What’s more, even the most ambitious production implementations of self-driving cars are likely to be limited to freeway driving, the most repetitive and predictable of all driving situations. (While it’s possible self-driving cars may eventually cause fewer accidents than human drivers, that’s because human drivers are so awful.)

The same is true of facial recognition. The face recognition in Apple’s iPhone X is being touted in the press as a huge breakthrough; in fact, the basic technology has been around for decades, and what’s new is miniaturizing it so it fits on a phone.

But what about all those “algorithms” we keep hearing about? Aren’t those A.I.? Well, not really. The dictionary definition of algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations.”

In other words, an algorithm is just a fancy name for the logic inside a computer program; it’s a reflection of the intent of the programmer. Despite all the Sturm-und-Drang brouhaha about computers replacing humans, there’s not the slightest indication that any computer program has created, or ever will create, something original.
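
As an illustration (a made-up example, not any company’s actual code), the “algorithm” behind something like a product-recommendation feature is often just a handful of explicit rules a programmer wrote down:

```python
def recommend(customer):
    """A made-up 'recommendation algorithm': every branch below was
    chosen by a programmer, not discovered by the machine."""
    if customer["age"] < 25 and "gaming" in customer["interests"]:
        return "gaming headset"
    if customer["last_purchase"] == "running shoes":
        return "running socks"
    return "gift card"  # even the fallback is a human decision

print(recommend({"age": 22, "interests": ["gaming"], "last_purchase": "laptop"}))
# -> 'gaming headset'
```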

IBM’s Watson supercomputer is a case in point. It was originally touted as an A.I. implementation superior to human doctors at diagnosing cancer and prescribing treatment, but it has since become clear that it does nothing of the kind. As STAT recently pointed out:

“Three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer.”

What’s more, some of Watson’s capabilities are of the “pay no attention to the man behind the curtain” variety. Again from STAT:

“At its heart, Watson for Oncology uses the cloud-based supercomputer to digest massive amounts of data — from doctor’s notes to medical studies to clinical guidelines. But its treatment recommendations are not based on its own insights from these data. Instead, they are based exclusively on training by human overseers, who laboriously feed Watson information about how patients with specific characteristics should be treated.”

Watson, like everything else under the A.I. rubric, doesn’t live up to the hype. But maybe that’s because the point of A.I. isn’t about breakthroughs. It’s about the hype.

Every ten years or so, pundits dust off the “A.I.” buzzword and try to convince the public that there’s something new and worthy of attention in the current implementation of these well-established technologies.

Marketers start attaching the buzzword to their projects to give them a patina of higher-than-thou tech. Indeed, I did so myself in the mid-1980s by positioning an automated text processing system I had built as “A.I.” because it used “rule-based programming.” Nobody objected. Quite the contrary; my paper on the subject was published by the Association for Computing Machinery (ACM).
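
“Rule-based programming” in that sense is nothing more exotic than a list of if-you-see-this-then-do-that patterns. A minimal sketch (hypothetical, not the actual system described above) of a rule-based text processor might look like this:

```python
import re

# Each rule is just a pattern and a replacement a programmer wrote down.
RULES = [
    (re.compile(r"\bASAP\b"), "as soon as possible"),
    (re.compile(r"\bFYI\b"), "for your information"),
    (re.compile(r"\s{2,}"), " "),  # collapse repeated whitespace
]

def process(text):
    """Apply each rewrite rule in order; the 'intelligence' is the rule list."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

print(process("FYI  please reply ASAP."))
# -> 'for your information please reply as soon as possible.'
```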

The periodic return of the A.I. buzzword is always accompanied by bold predictions (like Musk’s killer robots and Kurzweil’s singularity) that never quite come to pass. Machines that can think forever remain “20 years in the future.” Meanwhile, all we get is Siri and a fancier version of cruise control. And a boatload of overwrought hand-wringing.

A.I. tools came out of the lab in 2016

You shouldn’t anthropomorphize computers: They don’t like it.

That joke is at least as old as Deep Blue’s 1997 victory over then world chess champion Garry Kasparov, but even with the great strides made in the field of artificial intelligence over that time, we’re still not much closer to having to worry about computers’ feelings.

Computers can analyze the sentiments we express in social media, and project expressions on the face of robots to make us believe they are happy or angry, but no one seriously believes, yet, that they “have” feelings, that they can experience them.

Other areas of A.I., on the other hand, have seen some impressive advances in both hardware and software in just the last 12 months.

2016’s top trends in enterprise computing: Containers, bots, A.I. and more

It’s been a year of change in the enterprise software market. SaaS providers are fighting to compete with one another, machine learning is becoming a reality for businesses at a larger scale, and containers are growing in popularity.

Here are some of the top trends from 2016 that we’ll likely still be talking about next year.

Everybody’s a frenemy

As more companies adopt software-as-a-service products like Office 365, Slack and Box, there is increasing pressure for companies that compete with one another to collaborate. After all, nobody wants to be stuck using a service that doesn’t work with the other critical systems they have.

Scientists look at how A.I. will change our lives by 2030

By the year 2030, artificial intelligence (A.I.) will have changed the way we travel to work and to parties, how we take care of our health and how our kids are educated.

That’s the consensus from a panel of academic and technology experts taking part in Stanford University’s One Hundred Year Study on Artificial Intelligence.

Focused on trying to foresee the advances coming to A.I., as well as the ethical challenges they’ll bring, the panel yesterday released its first study.

The 28,000-word report, “Artificial Intelligence and Life in 2030,” looks at eight categories — from employment to healthcare, security, entertainment, education, service robots, transportation and poor communities — and tries to predict how smart technologies will affect urban life.
