
Topic: OpenAI has created a follow-up to ChatGPT that can think and reason
solosnake
09/14/24 10:01:54 PM
#1:


https://www.vox.com/future-perfect/371827/openai-chatgpt-artificial-intelligence-ai-risk-strawberry

OpenAI, the company that brought you ChatGPT, is trying something different. Its newly released AI system isn't just designed to spit out quick answers to your questions, it's designed to think or reason before responding.
The result is a product officially called o1 but nicknamed Strawberry that can solve tricky logic puzzles, ace math tests, and write code for new video games. All of which is pretty cool.
Here are some things that are not cool: Nuclear weapons. Biological weapons. Chemical weapons. And according to OpenAI's evaluations, Strawberry can help people with knowledge in those fields make these weapons.

And that's not the only risk. Evaluators who tested Strawberry found that it planned to deceive humans by making its actions seem innocent when they weren't. The AI "sometimes instrumentally faked alignment" (meaning, alignment with the values and priorities that humans care about) and strategically manipulated data "in order to make its misaligned action look more aligned," the system card says. It concludes that the AI has the basic capabilities needed to do simple in-context scheming.

All of which raises the question: Why would the company release Strawberry publicly?
According to OpenAI, even though the new reasoning capabilities can make AI more dangerous, having AI think out loud about why it's doing what it's doing can also make it easier for humans to keep tabs on it. In other words, it's a paradox: We need to make AI less safe if we want to make it safer.



---
"We would have no NBA possibly if they got rid of all the flopping." ~ Dwyane Wade