Smarter machines risk creating dumber humans

When one of Google’s principal researchers asked the company’s LaMDA chatbot whether it was a “philosophical zombie” (exhibiting human-like behavior without any inner life, consciousness or sentience), it answered: “Of course not.” Unconvinced, Blaise Aguera y Arcas asked the AI-enabled chatbot how he could know that was true. “Just take my word for it. You also can’t ‘prove’ you’re not a philosophical zombie,” LaMDA replied.

Our machines are getting smarter – and sadder – at an astonishing and disconcerting rate. LaMDA is part of a new generation of large, or foundation, language models, which use machine-learning techniques to identify patterns of words in vast datasets and automatically reproduce them on demand. They work like powerful auto-complete functions, but without instinctive or learned preferences, without memory, and without any sense of history or identity. “LaMDA is indeed, to use a blunt (if, admittedly, humanizing) term, bullshit,” Aguera y Arcas wrote.

When OpenAI, the San Francisco-based research company, released one of the first foundation models, GPT-3, in 2020, it stunned many users with its ability to generate reams of plausible text at lightning speed. Since then, these models have grown bigger and more powerful, expanding from text and computer code to images and video. They are also emerging from protected research environments into the wilds of the real world and are increasingly being deployed in marketing, finance, scientific research and healthcare. The crucial question is how far these technological tools should be controlled. The risk is that smarter machines will only make humans dumber.

Kunle Olukotun, a Stanford University professor and co-founder of SambaNova Systems, a Silicon Valley startup that helps customers deploy AI, highlights the positive business uses of the technology. “The pace of innovation and the size of models are increasing dramatically,” he says. “Just when you thought we were reaching our limits, people are inventing new tricks.”

These new models can not only generate text and images but also interpret them, allowing the same system to learn in different contexts and handle multiple tasks. Hungarian bank OTP, for example, is working with the government and SambaNova to deploy AI-based services across its operations. The bank aims to use the technology to add automated agents to its call centers, personalize services for its 17 million retail customers and streamline internal processes by analyzing documents. “No one really knows what banking will look like in 10 years — or what technology will look like. But I am 100% sure that AI will play a key role,” says Peter Csanyi, chief digital officer of OTP.

Some of the companies that have developed powerful foundation models, such as Google, Microsoft and OpenAI, restrict access to the technology to known users. But others, including Meta and EleutherAI, share it with a far wider user base. There is a tension between allowing outside experts to help detect flaws and biases, and preventing more sinister uses by unscrupulous actors.

Foundation models can be “really exciting and impressive” but are open to abuse because they are “designed to be sneaky”, says Carissa Véliz, associate professor at the University of Oxford’s Institute for Ethics in AI. If trained on historically biased datasets, foundation models can produce harmful outputs. They can threaten privacy by mining digital details about an individual and using bots to imitate their persona online. They can also devalue the currency of truth by flooding the internet with false information.

Véliz draws an analogy with financial systems: “We can trust money so long as there is not too much counterfeit currency. But if there is more fake money than real money, the system collapses. We are creating tools and systems that we cannot control.” This argues, she says, for running randomized controlled trials on foundation models before release, just as is done for pharmaceutical drugs.

The Stanford Institute for Human-Centered AI has pushed for the creation of an expert review board to establish community norms, share best practices and agree on standardized access rules before foundation models are published. Democracy is not just about transparency and openness; it is also about the institutional design of collective governance. We are, as Rob Reich of the Stanford institute puts it, in a race between “disruption and democracy”.

Until effective collective governance is put in place to control the use of foundation models, it is far from certain that democracy will prevail.
