When imagining neural networks and artificial intelligence, it’s hard not to picture the sentient algorithms that start wars in The Terminator or The Matrix. But despite their popularity, these concepts do not reflect the current state of neural networks or artificial intelligence.
On February 9, Ilya Sutskever, chief scientist at the research group OpenAI, tweeted that “it may be that today’s large neural networks are slightly conscious.” But despite the alarmist headlines that followed (and tweets that someone may or may not have sent while upon a certain porcelain throne), neural networks are not conscious.
To understand why these networks aren’t conscious, The Debrief spoke with Emily M. Bender, a professor of linguistics at the University of Washington, who studies ethical issues with large language processing algorithms. Neural networks themselves are much less impressive than their name suggests.
“These systems are developed by creating an initial system with random weights on connections between its components, and then training the system by showing it inputs, comparing its output to some ‘gold standard’ or ‘ground truth’ output, and then adjusting weights within the system to bring its answers slightly more in accordance with the ‘gold standard,’” Bender explained.
Think of a computer program that takes a picture of a cat as input and outputs the breed. Training a neural network adjusts its weights until the system assigns each cat the correct breed from its picture, matching a “gold standard” dataset.
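The loop Bender describes can be sketched in a few lines of Python. This is a minimal, illustrative toy (a single-layer classifier, not a real breed recognizer); the feature values, labels, and learning rate are all invented for the example, and real neural networks use many layers and gradient-based updates rather than this simple correction rule.

```python
import random

random.seed(0)

# Toy data: each input is a small feature vector, each label is the
# "gold standard" answer (0 or 1) we want the system to reproduce.
data = [([1.0, 0.0], 0), ([0.9, 0.1], 0),
        ([0.1, 1.0], 1), ([0.0, 0.9], 1)]

# Step 1: create an initial system with random weights.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
lr = 0.5  # how far each correction nudges the weights

def predict(x):
    # Weighted sum of the inputs, thresholded to a 0/1 answer.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Steps 2-3: show the system inputs, compare its output to the gold
# standard, and adjust the weights slightly toward the right answer.
for _ in range(20):
    for x, gold in data:
        error = gold - predict(x)  # 0 when the answer already matches
        for i in range(len(weights)):
            weights[i] += lr * error * x[i]
        bias += lr * error

print([predict(x) for x, _ in data])
```

After training, the predictions match the gold-standard labels. Nothing in this loop resembles understanding or consciousness; it is repeated numerical adjustment toward a fixed answer key.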
“I think that some of this traces to a cultural tendency within the field of computer science to ‘sell’ research by making very grand claims,” Bender said. “The standard research practice in machine learning (so-called ‘AI’) is to use standard datasets called ‘benchmarks’ to evaluate different systems and measure progress on the tasks that the benchmarks are meant to represent.”
However, performance on these specific benchmarks is then misrepresented as evidence of larger capabilities. “We see claims like ‘AI has surpassed humans at general language understanding’ which are in fact completely unsupported,” Bender said, adding that we tend to anthropomorphize the abilities of these networks.
Neural networks, as well as other machine learning algorithms, are already deployed for applications that are not well-founded and that carry the potential for harm. “This includes systems that claim to be able to ‘predict’ such things as ‘criminality’ from images of people’s faces, and systems that purport to monitor students taking online exams for cheating,” Bender said. “The more the public has the idea that computers can do seemingly impossible things (like ‘achieving consciousness’), the more likely we are to end up with harmful AI snake oil deployed in sensitive situations, and the less likely we are to be able to create sensible regulation.”
Deborah Raji, a fellow at the Mozilla Foundation and a Ph.D. student at UC Berkeley, tweeted: “Once we characterize AI as a person, we heap ethical expectations we would normally have of people – to be fair, to explain themselves, etc – unto the artifact, and then try to build those characteristics into the artifact, rather than holding the many humans involved accountable.”
It is hard enough to agree on a definition of consciousness, let alone determine how it emerges. Sutskever’s claims are presented without evidence or further explanation, leaving many scientists in the space skeptical. Further, many researchers point out that the discourse about consciousness distracts from more important issues in the field, placing accountability on the algorithms rather than on their creators and the systems within which they operate.
OpenAI’s own natural language processing algorithm, GPT-3, generates strings of somewhat coherent text but has a tendency toward racist and sexist stereotypes. This issue, rather than the purported consciousness of algorithms, should be front and center.
Simon Spichak is a science communicator and journalist with an MSc in neuroscience. His work has been featured in outlets that include Massive Science, Being Patient, Futurism, and Time magazine. You can follow his work at his website, http://simonspichak.com, or on Twitter @SpichakSimon.