
FEBRUARY 2023

A.I. Yai-Yai

Tech Commentary 

by James B. Meigs

Fans of 2001 and the Terminator movies aren’t the only people worried about artificial intelligence. In 2014, Elon Musk (no enemy of technology) told the Guardian he thought artificial intelligence might be “our biggest existential threat.” Researchers developing AI systems could be “summoning the demon,” he said. Well, now the demon is here, and it wants to say hi. On November 30, 2022, the conversational AI interface ChatGPT was made available to the public. Today, anyone can ask it to write poems, explain quantum physics, compose letters, write computer code, or do their homework. Is this a good thing?

Throughout history, technological innovations—from the mechanical loom to the automobile to the microchip—have disrupted societies even as they brought enormous advantages.

Conservatives generally look askance at sweeping changes that upset the social order. At the same time, they tend to support advances that empower the individual. So should conservatives celebrate the democratization of this world-changing breakthrough? Or should they be standing athwart history yelling stop?

Here’s my take:

Artificial intelligence has the potential to bring numerous benefits to society, and conservatives should embrace it as a valuable tool for improving economic growth, national security, social welfare, and personal freedom. One reason conservatives should embrace AI is its ability to create new industries and job opportunities, and to enhance the performance of existing businesses. Another is its potential to enhance national security. AI can be used to improve situational awareness and decision-making in a variety of contexts, such as detecting and responding to potential threats. Finally, AI can be used to automate tasks and make our lives easier, freeing up time and resources for other pursuits. This can enhance personal freedom and allow individuals to pursue their own interests and passions.

Does this take on AI strike you as somewhat rote? A bit mechanical, even? Well, congratulate yourself. You have just demonstrated a skill we will all need to cherish from this day forward: the ability to tell when the person talking to you is actually a machine. Because I did not write the paragraph above. Those serviceable if pedestrian sentiments were generated by ChatGPT after I typed in the prompt: “Please explain why conservatives should embrace AI.” (I condensed it slightly.)

You’ve probably been hearing a lot about ChatGPT in the past few weeks. The brainy chatbot is just the latest platform made public by the AI research group OpenAI. Spurred in part by worries about the risks of artificial intelligence, Musk and the young tech-startup guru Sam Altman co-founded the firm as a nonprofit in 2015. The founders hoped that OpenAI’s research would promote a kind of non-demonic “friendly AI.” Musk left the company in 2018, leaving Altman as CEO. Investment money poured in from Microsoft and others, and today OpenAI is an amalgam of nonprofit and profit-making divisions. But the organization’s stated mission remains “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.”

The artificial general intelligence that OpenAI describes in its mission statement doesn’t actually exist today. AI pioneer IBM defines it as “a theoretical form of AI where a machine would have an intelligence equal to humans; it would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future.” This is the kind of capability, often called “Strong AI,” that scares the bejeezus out of most of us. Do we really want our digital servants to be self-aware and planning for the future? What future? Are there humans in it? ChatGPT and other current AI systems are nowhere near that level of competence. Still, they can get close enough to be a little creepy.

It’s important to understand that even basic AI systems are much more than super-powerful computers; they are super-powerful computers that can learn. ChatGPT and similar systems employ a partially self-directed approach known as deep learning. If you want an AI system to learn the difference between cats and dogs, for example, you feed it thousands of pictures labeled “cat” and thousands of pictures tagged “dog” and let the system figure out the features that distinguish the two groups. Then you show it more pictures of cats and dogs and have a human operator tell the computer whether it identified them correctly. With enough cycles through this process, AI systems can get incredibly good at tasks like, say, identifying a tree from the image of a single leaf.
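(For the technically curious, that cat-versus-dog training cycle can be sketched in a few lines of code. The snippet below is a purely illustrative Python example using the PyTorch library; it is not drawn from OpenAI’s systems, and the random tensors merely stand in for the thousands of labeled photos described above.)

# A minimal sketch of the supervised deep-learning loop described in the text.
# Assumes PyTorch is installed; the data here is random and purely illustrative.
import torch
import torch.nn as nn

# Stand-ins for labeled pictures: random 64x64 RGB "images" with 0 = cat, 1 = dog.
images = torch.randn(200, 3, 64, 64)
labels = torch.randint(0, 2, (200,))

# A small convolutional network that learns features distinguishing the two classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two outputs: "cat" and "dog"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass is one cycle of the process the article describes: the model guesses,
# the labels say whether it was right, and the weights are nudged accordingly.
for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    accuracy = (logits.argmax(dim=1) == labels).float().mean()
    print(f"epoch {epoch}: loss={loss.item():.3f} accuracy={accuracy.item():.2f}")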

For an AI system designed to communicate with humans, the best way to learn is by interacting with as many people as possible. That’s why it makes sense for OpenAI to make ChatGPT available to the public. AI systems have been around for years, used in everything from credit card fraud detection to the Google Lens image-identification app. But OpenAI’s platforms are among the first to give nonprofessionals the ability to create content using AI tools. In less than a week, ChatGPT was attracting a million users a day. And with each interaction, the system gets smarter. For now, the platform is available for free. But perhaps not for long. “We will have to monetize it somehow at some point,” Altman recently warned, noting that the cost of crunching all that data is “eye-watering.”

ChatGPT has already begun disrupting established fields. The system is especially good at writing the kind of workaday prose that constitutes most written communication: business memos, simple news stories, student reports. “We are witnessing the end of the college essay in real time,” Google strategist Cory Wang writes. Feed ChatGPT a typical essay question, Wang shows, and it will crank out “solid A- work in 10 seconds.” The chatbot’s responses aren’t beautifully written or strikingly original, but they aren’t meant to be. Prior to being released to the public, the system digested 300 billion words from textbooks, newspapers, academic papers, and other supposedly reliable sources. Its job is not to come up with fresh new insights, but to produce seamless simulacra of that material. A great deal of high school and college academic work involves those sorts of dutiful restatements of the conventional wisdom.

The libertarian-leaning author Virginia Postrel predicts that AI platforms will drive huge productivity gains in fields that rely on written communication. “Instead of writing boilerplate memos, managers will soon assign them to bots,” she writes. The opportunities to automate routine communications are limitless. On Twitter, one doctor demonstrates how he uses ChatGPT to compose letters to insurance companies asking them to cover certain tests his patients need. In seconds, the chatbot generates a flawless—and persuasive—email, complete with citations to the relevant medical literature. Imagine similar efficiencies when it comes to tasks such as writing legal briefs, short news articles, or financial reports. Expect to see the same sorts of gains in other labor-intensive fields, including animation, video games, and computer software.

Historically, we’ve seen new technologies replace low-skill jobs, especially those requiring human brawn. Farm tractors put field hands out of work, and diesel engines eliminated jobs for coal shovelers on trains and ships. The AI revolution will instead target relatively high-skill jobs. For example, it takes over a decade of training to become a licensed radiologist. But in a recent study, an AI system outperformed human doctors in reading mammograms, reducing both false-negative and false-positive findings. Jobs that entail skilled but repetitive work probably face the biggest challenges. A software developer recently produced a YouTube video showing how ChatGPT can be used to write basic computer code. “I’m really scared,” another coder responded in the comments. “I’m working out how to handle my future job as a mover or farmer.”

Any free-market economist will tell you this is all for the good. Allowing AI systems to take over repetitive intellectual chores will simply make professionals in those fields more productive. A good computer programmer will work many times faster with the help of code-writing bots. A doctor who spends less time writing emails to insurance companies will have more time to see patients. Lawyers will be able to serve more clients. And so on. Moreover, the automation of routine tasks will allow these professionals to wring more value out of their hard-earned expertise. People will be paid for the unique talent and ideas they bring to the table, not for their hours of grunt work.

That all sounds promising. After all, historically, jobs eliminated by new technologies have been replaced by better jobs. Not too many of us wish we could go back to the days when most people worked in fields or factories. But the changes wrought by AI are going to come at a ferocious pace. It took over a quarter century for the automobile to replace both the horse and the huge workforce employed in tending to those horses. AI will transform many knowledge industries in a matter of years, even months. A few top law firms might benefit from a huge boost in productivity. The general-practice attorney who makes a decent living handling wills and real-estate contracts probably won’t. Many of the jobs disrupted will be in fields where workers are accustomed to social status and political influence. Remember: The same AI system that can write college papers will be able to grade college papers. Brace yourself for howls of anguish across elite institutions.

On the other hand, the AI revolution might help some groups who tend to get left behind by technological progress. It takes years to learn how to write standard business prose. Many Americans never do, and that holds them back. But now the small-business owner who never went to college (or whose English is spotty) will be able to send perfectly composed emails to customers. The unjustly sentenced prisoner will be able to petition the court with well-reasoned legal arguments. For many such people, AI will be a great equalizer.

However, one group’s power will be massively enhanced by the rise of artificial intelligence: the people who control the leading AI platforms. OpenAI says its mission is to produce a “safe and beneficial” version of AI. But whose definition of “safe and beneficial” does it employ? Prod ChatGPT with prompts and it doesn’t take long to discern the hidden value system embedded in the software. If you ask it to create a conspiracy theory about some topic, it will refuse. Ask it to make up a joke about “Americans,” and it will make a weak attempt at humor. Ask it to do the same about “Mexicans,” and the bot won’t respond. Clearly, the programmers want to keep their platform free of what they see as bias, conspiracy kookiness, and other moral contagions. Their concern is understandable (an AI system that churned out and constantly refined conspiracy theories would be scary indeed). But, like a car with a faulty alignment, ChatGPT’s algorithms seem to constantly steer it to the left.

