You’ve seen the headlines: “AI can read your mind,” “AI is brewing your next whisky,” “AI is better than doctors at something or other,” “This AI sounds so convincing it’s too dangerous to release into the wild,” and on and on.
And, of course, the ubiquitous "AI is destroying jobs."
Most people can agree that the coverage of the phenomenon of artificial intelligence in the popular media is bad. AI researchers know it, some reporters know it, and probably the average consumer of media suspects it.
The headlines are mostly filled with urgent appeals to panic, and the substance of articles is vague, obscure, and anthropomorphized, leading to terrible presumptions of sentience.
Fifty years ago, Drew McDermott at the MIT AI Lab had a great term for such misleading characterizations: "artificial intelligence meets natural stupidity." Back then, McDermott was addressing his peers in the AI field and their unreasonable anthropomorphizing. These days, it seems, natural stupidity is alive and well in journalism.
Why is that the case? Because a lot of writing about AI is not about AI, per se; it is writing around AI, avoiding what AI actually is.
What’s missing in AI reporting is the machine. AI doesn’t operate in a mysterious ether. It is not a glowing brain, as depicted in so many AI stock photos. It is an operation of computers. It is bound up with what computers have been and what they’re becoming. Many seem to forget this.
Computers are and always have been function machines: They take input and transform it into output, such as, for example, transforming keystrokes into symbols on a screen.
Machine learning is also about functions. Machine learning operations of a computer take input data and transform it in various ways, depending on the nature of the function. For decades, that function had to be engineered by hand. As data and computing became cheap, some aspects of the function could be shaped by the data itself rather than being explicitly engineered.
Machine learning is just a function machine, some of whose properties are shaped by data.
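The contrast between an engineered function and a data-shaped one can be sketched in a few lines of Python. This is a minimal illustration with invented toy data (a Celsius-to-Fahrenheit rule): the first function's rule is written by the programmer, while the second function's two parameters are fit from example pairs via ordinary least squares.

```python
# A hand-engineered function: the programmer specifies the rule explicitly.
def engineered(celsius):
    return celsius * 9 / 5 + 32

# A "learned" function: same input-output shape, but its two parameters
# are shaped by example data instead of being written by hand.
def fit_line(xs, ys):
    # Closed-form ordinary least squares for y = a*x + b (no libraries).
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return lambda x: a * x + b

# Toy "training" data: input/output pairs the machine has observed.
xs = [0.0, 10.0, 20.0, 30.0, 40.0]
ys = [engineered(x) for x in xs]

learned = fit_line(xs, ys)
print(learned(100.0))  # recovers the engineered rule: 212.0
```

Nothing here "figured out" temperature conversion; a fixed fitting procedure transformed example data into two numbers, which is the sense in which data shapes the function.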
Anytime “an AI” does something, it means someone has engineered a function machine whose transformation of input into output is allowed a certain degree of freedom, a flexibility beyond the explicit programming.
That is not the same as producing a human-like consciousness that is "learning" in the sense that we think about it. Such machines aren’t "figuring stuff out," another anthropomorphic trope misapplied to them. Rather, their functions are being changed by the data so that input is converted into output in surprising ways, with results that we almost seem to recognize as being like human thought.
That machine truth about AI is obscured in the coverage of AI.
One reason the machine is left out of the discussion is that reporters are generally intellectually lazy. They don’t like to explore difficult ideas. That means they won’t do the work required to understand AI as a product of computing technology, in its various forms. They won’t crack a book, say, or read the research to learn the language of the discipline. Their ignorance about everything they didn’t bother to learn about computing is being compounded now by everything they’re not bothering to learn about AI.
How about those headlines, though? Headlines are often written by editors, not by reporters. No matter what the story is like, the headline can end up being clickbait. AI is a term with sizzle. It makes for good search-engine optimization, or SEO, to drive traffic to online articles.
The other reason AI reporting ends up being terrible is that many parties that are actually doing AI don’t explain what they’re doing. Academics and scientists may have some incentive to do so, out of a respect for understanding in general. But it’s often not clear what the actual benefit is to their work when reporters haven’t even tried to meet those researchers halfway, by doing the intellectual work required to gain some basic understanding.
For-profit enterprises, such as technology companies, are actively inclined to maintain obscurity. Some may want to preserve the secrecy of intellectual property. Others just want to exploit the imprimatur of “AI” while actually not engaging in AI, per se.
A lot of software being developed may involve fairly mundane statistical approaches that bear no resemblance to AI. Therefore, it’s not in the interest of a corporation to let the cat out of the bag and reveal how mundane the technology is.
If you ask these enterprises what kind of neural network they might be using, such as, say, a convolutional neural network or a long short-term memory network, they’ll change the subject or mumble something vague. The reporter who takes the trouble to gain some basic understanding of machine learning will generally run up against stonewalling from such entities.
Divorcing AI from the entire history of computing, detaching it from the material details that make up machine learning, not only leads to poor articles; it also confuses the discussion of ethics in AI.
If AI is a function machine whose nature is in some part determined by the data, then the responsibility for all the bad things that can happen with AI rests not solely with AI. Some of it rests with other aspects of computing that predate this era. Automation has been an effect of machines for a long time. Computers that automate tasks can have an impact on jobs, and although the effects can be amplified by AI, the issue is not strictly an AI issue; it is a computing issue, a machine issue, an automation issue, and, ultimately, an issue of societal values.
A society that doesn’t know what computing is, and how it relates to AI, and therefore doesn’t really understand what AI is, can’t possibly have a good debate about the ethics of AI.
Hopefully, efforts such as the one recently proposed by MIT scientists will bring greater understanding. The MIT researchers argue for a new science akin to ethology, in which computers would be studied in a broader fashion that takes into account all the ways they are designed, and all the ways they are used in society, not simply the narrow cases of machines that seem to mimic human behavior. Their term, "machine behavior," may help to put the computer back into the picture.