
Column: Afraid of AI? The startups selling it want you to be

OpenAI Chief Executive Sam Altman speaks to the media Feb. 7 about the integration of the Microsoft Bing search engine and Edge browser with OpenAI. (Bloomberg via Getty Images)

You’ve probably heard by now: AI is coming, it’s about to change everything, and humanity is not ready.

Artificial intelligence is passing bar exams, plagiarizing term papers and creating deepfakes realistic enough to fool the masses, and the robot apocalypse is nigh. The government isn’t prepared. Neither are you.

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and hundreds of AI researchers signed an open letter this week urging a pause on AI development before it gets too powerful. “A.I. could rapidly eat the whole of human culture,” three tech ethicists wrote in a New York Times op-ed. A cottage industry of AI hustlers has taken to Twitter, Substack and YouTube to demonstrate the formidable power of AI, racking up millions of views and shares.


The doomscroll goes on. A New York Times columnist had a series of conversations with Bing and wound up afraid for humanity. A Goldman Sachs report estimates that AI could expose the equivalent of 300 million full-time jobs to automation.

The concern has made its way into the halls of power too. On Monday, Sen. Christopher S. Murphy (D-Conn.) tweeted, “ChatGPT taught itself to do advanced chemistry. It wasn’t built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked.”

“Something is coming. We aren’t ready.”

Nothing of the sort has happened, of course, but it’s hard to blame the senator. AI doomsaying is absolutely everywhere right now. Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake — or unmake — the world, wants it.



OpenAI is behind the buzziest and most popular AI service, the text generator ChatGPT, and its technology currently powers Microsoft’s new AI-infused Bing search engine, the product of a deal reportedly worth $10 billion. ChatGPT is free to use, a premium tier that guarantees more stable access costs $20 a month, and there’s a whole portfolio of services available for purchase to meet any enterprise’s text- or image-generation needs.

Sam Altman, the chief executive of OpenAI, declared that he was “a little bit scared” of the technology that he is helping to build and aiming to disseminate, for profit, as widely as possible. OpenAI’s chief scientist, Ilya Sutskever, said last week, “At some point it will be quite easy, if one wanted, to cause a great deal of harm” with the models they are making available to anyone willing to pay. And a new report produced and released by the company proclaims that its technology will put “most” jobs at some degree of risk of disruption.

Let’s consider the logic behind these statements for a second: Why would you, a CEO or executive at a high-profile technology company, repeatedly return to the public stage to proclaim how worried you are about the product you are building and selling?


Answer: Because apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy.

AI, like other, more basic forms of automation, isn’t a traditional business. Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.

OpenAI has worked for years to carefully cultivate an image of itself as a team of hype-proof humanitarian scientists, pursuing AI for the good of all — which meant that when its moment arrived, the public would be well-primed to receive its apocalyptic AI proclamations credulously, as scary but impossible-to-ignore truths about the state of technology.

OpenAI was founded as a research nonprofit in 2015, backed by a large grant from Musk, a noted AI doomer, with the aim of “democratizing” AI. The company has long cultivated an air of dignified restraint in its AI endeavors; its stated aim was to research and develop the technology in a way that was responsible and transparent. The blog post announcing OpenAI declared, “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”


For years, this led the media and AI scientists to treat the organization as if it were a research institution, which in turn allowed it to command greater respect in the media and the academic community — and to draw less scrutiny. It garnered good graces by sharing examples of how powerful its tools were becoming — OpenAI’s bots winning an esports championship, early examples of entire articles written by its GPT-2 model — while stressing the need for caution and keeping its models secret and out of the hands of bad actors.

In 2019, the company transitioned to a “capped” for-profit company, while continuing to insist its “primary fiduciary duty is to humanity.” This month, however, OpenAI announced that it was taking private the formerly open-source code that made its bots possible. The rationale: Its product (which is available for purchase) was simply too powerful to risk falling into the wrong hands.


OpenAI’s nonprofit background nonetheless imbued it with a halo of respectability when the company released a working paper with researchers from the University of Pennsylvania last week. The research, which, again, was carried out by OpenAI itself, concluded that “most occupations” now “exhibit some degree of exposure” to large language models, or LLMs, such as the one underlying ChatGPT. Higher-wage occupations have more tasks with high exposure, and “approximately 19% of jobs” will see at least half of their tasks exposed to LLMs.


These findings were covered dutifully in the media, while critics, including Dan Greene, an assistant professor at the University of Maryland’s College of Information Studies, pointed out that this was less a scientific assessment than a self-fulfilling prophecy. “You use the new tool to tell its own fortune,” he said. “The point is not to be ‘correct’ but to mark down a boundary for public debate.”

Whether or not OpenAI set out to become a for-profit company in the first place, the end result is the same: the unleashing of a science fiction-infused marketing frenzy unlike anything in recent memory.

Now, the benefits of this apocalyptic AI marketing are twofold. First, it encourages users to try the “scary” service in question — what better way to generate a buzz than to insist, with a certain presumed credibility, that your new technology is so potent it might unravel the world as we know it?

The second is more mundane: The bulk of OpenAI’s income is unlikely to come from average users paying for premium-tier access. The business case for a rando paying monthly fees to access a chatbot that is marginally more interesting and useful than, say, Google Search is highly unproven.

OpenAI knows this. It’s almost certainly betting its longer-term future on more partnerships like the one with Microsoft and on enterprise deals serving large companies. That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.

Enterprise deals have always been where automation technology has thrived — sure, a handful of consumers might be interested in streamlining their daily routine or automating tasks here and there, but the core sales target for productivity software or automated kiosks or robotics is management.


And a big driver in motivating companies to buy into automation technology is, and always has been, fear. The historian of technology David Noble demonstrated in his studies of industrial automation that the wave of workplace and factory-floor automation that swept the 1970s and ’80s was largely spurred by managers succumbing to a pervasive phenomenon that today we recognize as FOMO. If companies believe a labor-saving technology is so powerful or efficient that their competitors are sure to adopt it, they don’t want to miss out — regardless of its ultimate utility.

The great promise of OpenAI’s suite of AI services is, at root, that companies and individuals will save on labor costs — that they can generate ad copy, art, slide decks, marketing emails and data entry fast and cheap.

This is not to suggest that OpenAI’s image and text generators are not capable of interesting, amazing or even unsettling things. But the conflicted-genius schtick that Altman and his OpenAI coterie are putting on is wearing thin. If you are genuinely concerned about the safety of your product, if you seriously want to be a responsible steward in the development of an artificially intelligent tool you believe to be ultra-powerful, you don’t slap it onto a search engine where it can be accessed by billions of people; you don’t open the floodgates.

Altman argues that the technology needs to be released, at this relatively early stage, so that his team can make mistakes and address potential abuses “while the stakes are fairly low.” Implicit in this argument, however, is the notion that we should simply trust him and his newly cloistered company to decide how best to do so, even as they work to meet revenue projections of $1 billion next year.

I’m not saying don’t be nervous about the onslaught of AI services — but I am saying be nervous for the right reasons. There’s plenty to be wary about, especially the prospect that companies most certainly will find the sales pitch alluring, and that whether or not it works, a lot of copywriters, coders and artists are suddenly going to find their work not necessarily replaced but devalued by the ubiquitous and much cheaper AI services on offer. (There’s a reason artists have already launched a class-action lawsuit alleging AI systems were trained on their work.)

But the hand-wringing over an all-powerful “artificial general intelligence” and the incendiary hype tend to obscure those nearer-term concerns. AI ethicists and researchers such as Timnit Gebru and Meredith Whittaker have been shouting into the void that an abstract fear of an imminent Skynet misses the forest for the trees.


“One of the biggest harms of large language models is caused by claiming that LLMs have ‘human-competitive intelligence,’” Gebru said.

There’s a real and legitimate danger that this stuff will produce biased or even discriminatory results, help misinformation proliferate, steamroll over artists’ intellectual property and more — especially because a lot of Big Tech companies just happen to have fired their AI ethics teams.

It’s perfectly legitimate to be afraid of the power of a new technology. Just know that OpenAI — and all of the other AI companies that stand to cash in on the hype — very much want you to be.
