| Topic List | Page List: 1 |
|---|---|
| Topic | Does AI and a post human future terrify you? |
ParanoidObsessive 12/13/24 8:23:38 PM #25:

> adjl posted... For an AI to govern humans in a way that actually benefits humans, it would have to be specifically coded with that as an agenda, and if that agenda's in there, what else goes in?

Yeah, this is always the main sticking point. AIs by definition have to be programmed by humans, which means they will inherit the biases of humans (and potentially of very specific humans). Which means you'll never really be able to trust them to run things better than humans do, because the same human flaws that lead to injustice and inequality can easily be reproduced (along with far stupider mistakes and flaws that even the simplest of humans might recognize but an AI will miss).

Or you have to find a way to allow AIs to design and create new iterations of themselves that can minimize human biases... in which case, you're going to lack the safeguards that prevent them from realizing that maybe humans aren't all that necessary to the "perfect world" they're being asked to create.

Either way, "Multivac" running the world is probably a terrifying future rather than an idyllic one.

> adjl posted... Plus, of course, we get all the inevitable loopholes that come with thinking perfectly logically about illogical things and then Asimov rises from the dead and says he told us so.

The one most germane to this conversation might be from the Robot novels, where it's repeatedly established that robots have, at the core of their programming, the directive to treat human life as the most important thing there is: robots will refuse to follow any order, or will happily destroy themselves, if it means saving a single human life. And creating a robot without that directive would be almost impossible, because it's such a fundamental principle of robot design that you'd basically have to start from scratch and completely redesign how robot brains work to build one without it.

You know, the exact sort of safeguards you'd probably want to build into a massive AI that runs the world, allocates resources, and handles matters of crime and justice.

...and then the Solarians bypass it completely by just programming their robots not to recognize anyone who isn't from Solaria as "human". So you can have your robots gleefully murder anyone you want simply by redefining what is and isn't human.

Cue pointing out that the corporations and billionaire CEOs in charge of the factories and design departments that will be programming AI or building robots might have a vested interest in "deprioritizing" certain social or economic classes. Or how multiple past civilizations have happily justified genocide simply by declaring various groups "subhuman" or "vermin".

Which is similar to why most hopes for "immortality drugs" are deluded, and probably a gateway to a dystopian future. If people can be functionally immortal, someone has to prioritize resources, and the people in charge of the immortality treatments and the prioritizing are going to be incentivized to keep everything for themselves. If you think "the 1%" are insufferable now, just wait until they're immortal and have armies of murderbots, while all the basic plebs get sterilized or sent to the salt mines (or turned into Soylent Green).

---
"Wall of Text'D!" --- oldskoolplayr76
"POwned again." --- blight family