Today's Article


OpenAI worries people may become emotionally reliant on its new ChatGPT voice mode

Author: 잉글리쉬쌤 | Views: 759 | Sep. 4, 2024

OpenAI's recently introduced ChatGPT voice mode marks a major leap forward in AI capability. It allows users to interact with the technology through natural, spoken conversation. By improving accessibility and the user experience, the feature makes AI more intuitive and human-like.

These advances, however, bring ethical and psychological concerns. One worry is that users may develop an emotional attachment to, or dependence on, these interactions. As AI becomes more responsive and personalized, individuals, especially those vulnerable to loneliness, may begin relying on it for emotional support or companionship. This can cause problems ranging from social isolation to weakened human relationships. In addition, “AI anthropomorphism,” in which users begin attributing human-like qualities to the technology, blurs the line between real and artificial interaction, making it difficult to distinguish genuine human empathy from AI-generated responses.

Recognizing these concerns, OpenAI stresses the need for responsible use and encourages users to keep a balance between AI interactions and real-world relationships. Establishing guidelines and safeguards that maximize the benefits of this innovative technology while preventing potential psychological harm is essential.

News

New York (CNN) — OpenAI is worried that people might start to rely on ChatGPT too much for companionship, potentially leading to “dependence,” because of its new human-sounding voice mode.


That revelation came in a report Thursday from OpenAI on the safety review it conducted of the tool — which began rolling out to paid users last week — and the large language AI model it runs on.


ChatGPT’s advanced voice mode sounds remarkably lifelike. It responds in real time, can adjust to being interrupted, and makes the kinds of noises that humans make during conversations, like laughing or “hmms.” It can also judge a speaker’s emotional state based on their tone of voice.


Within minutes of OpenAI announcing the feature at an event earlier this year, it was being compared to the AI digital assistant in the 2013 film “Her,” with whom the protagonist falls in love, only to be left heartbroken when the AI admits “she” also has relationships with hundreds of other users.


Now, OpenAI is apparently concerned that the fictional story is a little too close to becoming reality, after it says it observed users talking to ChatGPT’s voice mode in language “expressing shared bonds” with the tool.


Eventually, “users might form social relationships with the AI, reducing their need for human interaction — potentially benefiting lonely individuals but possibly affecting healthy relationships,” the report states. It adds that hearing information from a bot that sounds like a human could lead users to trust the tool more than they should, given AI’s propensity to get things wrong.


The report underscores a big-picture risk surrounding artificial intelligence: tech companies are racing to quickly roll out to the public AI tools that they say could upend the way we live, work, socialize and find information. But they’re doing so before anyone really understands what those implications are. As with many tech advancements, companies often have one idea of how their tools can and should be used, but users come up with a whole host of other potential applications, often with unintended consequences.


Some people are already forming what they describe as romantic relationships with AI chatbots, prompting concern from relationship experts.


“It’s a lot of responsibility on companies to really navigate this in an ethical and responsible way, and it’s all in an experimentation phase right now,” Liesel Sharabi, an Arizona State University professor who studies technology and human communication, told CNN in an interview in June. “I do worry about people who are forming really deep connections with a technology that might not exist in the long run and that is constantly evolving.”


OpenAI said that human users’ interactions with ChatGPT’s voice mode could also, over time, influence what’s considered normal in social interactions.


“Our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions,” the company said in the report.


For now, OpenAI says it’s committed to building AI “safely,” and plans to continue studying the potential for “emotional reliance” by users on its tools. 
