# Chatbots

If ChatGPT creator Sam Altman is warning about AI, is the world in trouble? – firstpost.com

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
OpenAI CEO Sam Altman, the man behind ChatGPT, highlighted the possibilities and perils of using artificial intelligence (AI) during a US Senate hearing. Urging regulation, the 38-year-old shed light on how AI could interfere with elections and the risks it poses to job security around the world.
“My worst fear is that we, the field, the technology, the industry, cause significant harm to the world. If this technology goes wrong, it can go quite wrong. We want to work with the government to prevent that from happening.”

In a US Senate hearing on Tuesday, which ran for a little less than three hours, ChatGPT creator and OpenAI chief executive Samuel Altman uttered these ominous words when asked about the possibilities – and pitfalls – of the new technology and artificial intelligence.

During the hearing, the 38-year-old, who has become the global face of AI, urged American lawmakers to regulate artificial intelligence, describing the technology’s current boom as a potential “printing press moment”, but one that requires safeguards. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his opening remarks.

Incidentally, this is not the first time Altman has warned about the dangers of AI. While releasing the newer version of ChatGPT in March, he had said, “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this. I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”

We analyse Altman’s remarks during the Senate hearing and explain the dangers of AI and why it needs to be regulated.

## AI, disinformation and elections

At the beginning of the hearing, Altman said, “OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks.” Lawmakers at the hearing expressed great concern about misinformation that could be spread through AI, with an election year looming ahead.
Josh Hawley, a US Senator for Missouri, said: “We could be looking at one of the most significant technological innovations in human history.” “My question is, what kind of innovation is it going to be? Is it going to be like the printing press… Or is it going to be more like the atom bomb?” he asked.

Senator Mazie Hirono also noted the danger of misinformation as the 2024 election nears. “In the election context, for example, I saw a picture of former President Trump being arrested by NYPD and that went viral,” she said, pressing Altman on whether he would consider the faked image harmful.

Reacting to Hirono, Altman said that creators should make clear when an image is generated rather than factual. He further reiterated that he was “nervous” about the use of AI to interfere with election integrity and said that rules and guidelines should be in place. According to a report by Mashable, Altman was open to “nutrition labels” describing the nature and source of generative AI content from third parties.

Gary Marcus, the New York University professor emeritus and former leader of Uber’s AI labs, who was also present at the hearing, painted a scary picture. Speaking of how dangerous AI is, he told the lawmakers, “They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened.”

## AI and job security

Long before the hearing, there had been concerns about the loss of jobs as AI tools replace employees in the workplace. When asked if this was a concern, Altman’s view was that AI might replace jobs, leading to layoffs in certain fields.
However, he added that it would also create new ones: “I believe that there will be far greater jobs on the other side of this and that the jobs of today will get better.” He noted that models like ChatGPT were “good at doing tasks, not jobs” and would therefore make work easier for people, without replacing them altogether.

IBM chief privacy and trust officer Christina Montgomery, who was also at the hearing, acknowledged that “some jobs would transition away” but echoed Altman’s optimism that AI would be a job creator.

Earlier, Goldman Sachs had predicted that if the AI wave continued, an estimated 300 million jobs would be lost to technology. The investment bank had said that office administrative support, legal, architecture and engineering, business and financial operations, management, sales, healthcare, and art and design would be among the sectors impacted by AI.
## Path to regulate AI

Unlike previous hearings on Big Tech, this hearing was less hostile, with Altman himself urging regulation of the AI sector. In fact, he laid out a three-point plan: form a government agency in charge of licensing large AI models, empowered to revoke the licences of companies that don’t comply with government standards; set up safety standards for AI models; and make it mandatory for companies to be independently audited by experts.

However, one concern voiced by lawmakers about setting up a governmental agency to regulate AI was whether it would be able to keep up with a technology that is moving so fast. In fact, Senator Cory Booker was of the opinion that the tech industry should pause all development of AI until it was possible to establish all the risks and remedies. Altman disagreed with the idea of taking a break, though he confirmed that his company is currently not training any new AI models. Montgomery was not keen, either. “I’m not sure how practical it is to pause but we absolutely should prioritise regulation,” she told lawmakers.

Senator Richard Blumenthal strongly disagreed with the idea of pausing AI innovation. “The world won’t wait. The global scientific community won’t wait. We have adversaries that are moving ahead,” he warned Congress. “Safeguards and protections yes, but a flat stop sign, sticking our heads in the sand, I would advocate against that.”

## Dangers of AI

In recent times, AI has become a buzzword and a lot has been said about it. However, as it slowly seeps into the daily lives of people across the world, experts have cited the dangers it poses. Disinformation and job loss are the primary concerns highlighted by experts. They observe that because these systems deliver information with what seems like complete confidence, it can be a struggle to separate truth from fiction when using them.
Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions. “There is no guarantee that these systems will be correct on any task you give them,” Subbarao Kambhampati, a professor of computer science at Arizona State University, was quoted as telling the New York Times.

Some believe that AI poses very serious risks and that people aren’t yet aware of them. One such person is Geoffrey Hinton, who is widely seen as the godfather of artificial intelligence. Earlier in May, the 75-year-old announced his resignation from Google, saying he now regretted his work.

Tech billionaire Elon Musk and others also expressed concern about AI and in March had written an open letter calling for a pause on all development more advanced than the current version of the AI chatbot ChatGPT, so that robust safety measures could be designed and implemented. Yoshua Bengio, another so-called godfather of AI, who along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning, also signed the letter. Bengio wrote that it was because of the “unexpected acceleration” in AI systems that “we need to take a step back”.

With inputs from agencies
Copyright @ 2024. Firstpost – All Rights Reserved
