# Tech news

Opinion | ChatGPT Is an Obnoxious Toddler and It’s Up to Us to Parent It – The New York Times

Guest Essay

Ms. Spiers, a contributing Opinion writer, is a journalist and a digital media strategist.
As the mother of an 8-year-old, and as someone who’s spent the past year experimenting with generative A.I., I’ve thought a lot about the connection between interacting with one and with the other. I’m not alone in this. A paper published in August in the journal Nature Human Behaviour explained how, during its early stages, an artificial intelligence model will try lots of things randomly, narrowing its focus and getting more conservative in its choices as it gets more sophisticated. Kind of like what a child does. “A.I. programs do best if they start out like weird kids,” writes Alison Gopnik, a developmental psychologist.
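To make Gopnik's explore-then-narrow point concrete, here is a minimal, hypothetical sketch in Python: a toy "multi-armed bandit" learner whose exploration rate decays as it gains experience. Every name and number below is illustrative and not drawn from the Nature Human Behaviour paper.

```python
import random

# Toy illustration of "try lots of things randomly, then get more
# conservative": an epsilon-greedy agent on a 3-armed bandit whose
# exploration rate decays with experience.

ARM_PAYOFFS = [0.2, 0.5, 0.8]          # hidden reward probabilities (assumed)
estimates = [0.0] * len(ARM_PAYOFFS)   # the agent's learned value estimates
counts = [0] * len(ARM_PAYOFFS)

epsilon = 1.0                          # start fully exploratory, like a "weird kid"
DECAY = 0.995                          # grow more conservative over time

for step in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(len(ARM_PAYOFFS))   # explore: pick at random
    else:
        arm = max(range(len(ARM_PAYOFFS)), key=lambda a: estimates[a])  # exploit
    reward = 1.0 if random.random() < ARM_PAYOFFS[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average
    epsilon *= DECAY                   # narrow the focus as sophistication grows

print(estimates, counts)               # later pulls cluster on the best arm
```

Run it and the early choices are scattered at random, while nearly all of the later ones land on the best arm, which is the trajectory the paper describes in children and in A.I. models alike.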
I am less struck, however, by how these tools acquire facts than by how they learn to react to new situations. It is common to describe A.I. as being “in its infancy,” but I think that’s not quite right. A.I. is in the phase when kids live like tiny energetic monsters, before they’ve learned to be thoughtful about the world and responsible for others. That’s why I’ve come to feel that A.I. needs to be socialized the way young children are — trained not to be a jerk, to adhere to ethical standards, to recognize and excise racial and gender biases. It needs, in short, to be parented.
Recently I used Duet, Google Labs’ generative A.I., to create images for a presentation, and when I asked for an image of “a very serious person,” it spat out an A.I.-generated illustration of a bespectacled, scowling white man who looked uncannily like Senator Chuck Grassley. Why, I wondered, does the A.I. assume a serious person is white, male and older? What does that say about the data set it’s trained on? And why is robot Chuck Grassley so angry?
I modified the prompt, adding more characteristics each time. I was watching to see if the bot would arrive at the conclusion on its own that gender, age and seriousness are not correlated, nor are serious people always angry — not even if they have that look on their face, as anyone who’s ever seen a Werner Herzog interview knows. It was, I realized, exactly the kind of conversation you have with children when they’ve absorbed pernicious stereotypes.
It’s not enough to simply tell children what the output should be. You have to create a system of guidelines — an algorithm — that allows them to arrive at the correct outputs when faced with different inputs, too. The parentally programmed algorithm I remember best from my own childhood is “do unto others as you would have done unto you.” It teaches kids how, in a range of specific circumstances (query: I have some embarrassing information about the class bully; should I immediately disseminate it to all of my other classmates?), they can deduce the desirable outcome (output: no, because I am an unusually empathetic first grader who would not want another kid to do that to me). Turning that moral code into action, of course, is a separate matter.
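For illustration only, the essay's "parentally programmed algorithm" can be caricatured in a few lines of Python: one rule, applied to inputs the parent never anticipated, rather than a lookup table of approved outputs. The function and its inputs are invented; no real A.I. system encodes morality this way.

```python
# A deliberately naive sketch of "program the rule, not the outputs."
# The function name and parameters are hypothetical, for illustration.

def golden_rule(action: str, acceptable_if_done_to_me: bool) -> str:
    """Permit an action only if you'd accept being on the receiving end."""
    return f"{action}: go ahead" if acceptable_if_done_to_me else f"{action}: don't"

# The essay's query: disseminate embarrassing information about the bully?
print(golden_rule("spread gossip about the bully", acceptable_if_done_to_me=False))
print(golden_rule("share my snack", acceptable_if_done_to_me=True))
```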
Trying to imbue actual code with something that looks like moral code is in some ways simpler and in other ways more challenging. A.I.s are not sentient (though some say they are), which means that no matter how they might appear to act, they can’t actually become greedy, fall prey to bad influences or seek to inflict on others the trauma they have suffered. They do not experience emotion, which in humans can reinforce both good and bad behavior. But just as I learned the Golden Rule because my parents’ morality was heavily shaped by the Bible and the Southern Baptist culture we lived in, the simulated morality of an A.I. depends on the data sets it is trained on, which reflect the values of the cultures the data is derived from, the manner in which it’s trained and the people who design it. This can cut both ways. As the psychologist Paul Bloom wrote in The New Yorker, “It’s possible to view human values as part of the problem, not the solution.”
