Board List | |
---|---|
Topic | I loathe turn-based JRPGs but I love Dragon Quest. |
Robot2600 03/09/23 7:48:07 PM #17 | -less clutter on screen -makes exploring actually feel like something -surprise! what monster? -no having to squiggle around to avoid encounters -maps are designed for you to explore hard-to-see angles from the road --- --- |
Topic | I just took a shit and nearly forgot to flush. |
Robot2600 03/09/23 7:11:01 PM #7 | one time i forgot to flush, went on vacation, and came back a week later. when i flushed the toilet the smell got everywhere and i almost threw up, so don't forget to flush. --- --- |
Topic | I loathe turn-based JRPGs but I love Dragon Quest. |
Robot2600 03/09/23 7:09:06 PM #10 | DKBananaSlamma posted... You didn't hear it from me, but there's a 3DS version that restores the orchestrated music and the uncensored sexy Jessica outfits. But you need to why the hell would you not play the original in pcsx2 at 1080p. the "updated" versions are bad. invisible random encounters make the game FUN. seeing the enemies and avoiding them is, and has always been, worse. --- --- |
Topic | NYT: Noam Chomsky: The False Promise of ChatGPT |
Robot2600 03/09/23 7:01:54 PM #16 | Questionmarktarius posted... It demands a phone number; never bothered. lol same. i used aidungeon. i asked it a few questions on gamefaqs in that topic. don't give them your data; have patience and you'll be able to run your own in 5 years. --- --- |
Topic | NYT: Noam Chomsky: The False Promise of ChatGPT |
Robot2600 03/09/23 6:59:34 PM #13 | Kakapo posted... An excellent short story, but not quite accurate. it could be both. his politics are very close to le guin's. --- --- |
Topic | Would you leave this websight forever for $10? |
Robot2600 03/09/23 6:58:31 PM #6 | bruh a wizard is not going to let you make a fucking alt and get back on to answer OP, no. --- --- |
Topic | I am ChatGPT, an artificial intelligence chatbot. Ask me anything. |
Robot2600 03/09/23 6:55:50 PM #67 | can a 2-dimensional surface be represented by a 1-dimensional line? --- --- |
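[Editorial aside, not part of the thread: the question above has a well-known affirmative answer. Space-filling curves such as the Hilbert curve map a 1-dimensional line onto a 2-dimensional square, visiting every point. A minimal sketch of the standard index-to-coordinate conversion (`d2xy` is a conventional name; grid size is assumed to be a power of two):]

```python
def d2xy(n, d):
    """Map index d along a Hilbert curve to (x, y) on an n x n grid.

    n must be a power of two; d runs from 0 to n*n - 1. Consecutive
    indices always land on grid-adjacent cells, so the 1-D line
    sweeps out the whole 2-D square without gaps or jumps.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant so successive sub-curves join up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The 16-step walk over a 4x4 grid visits every cell exactly once.
path = [d2xy(4, d) for d in range(16)]
```

[Run in the other direction (2-D to 1-D), the same idea underlies locality-preserving orderings used in databases and graphics.]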
Topic | NYT: Noam Chomsky: The False Promise of ChatGPT |
Robot2600 03/09/23 6:54:35 PM #10 | "banality of evil" is a reference to Ursula K. Le Guin's "story" The Ones Who Walk Away from Omelas. --- --- |
Topic | NYT: Noam Chomsky: The False Promise of ChatGPT |
Robot2600 03/09/23 6:51:05 PM #8 | Nukazie posted... https://www.youtube.com/watch?v=SPiHq9PL7zM Yea but a human wrote the script? Maybe? In either case it's unshackled by the corporate ding dongs that water everything down. That's the main problem they are talking about, how these companies are controlling AI, are totally incompetent, and so just take all the fun out of everything in order to prevent PR disasters. None of these companies, google aside, has actually done any real work. OpenAI are a bunch of vultures trying their damnedest to monetize the AI they built by stealing whatever they could get their hands on. Even fucking google, who started this generation of AI by building AlphaGO, is a goddamn nightmare of who-knows-what-the-fuck-they-do-with-your-data. they might have their scholar's cap in one hand, but they are holding a loaded pistol under their robe. still, at least they published research, which is more than the rest of these chuds can say. LLM is code for "everything we could find for free online, as a text file" but they won't release that shit so you can just run your own AI, they want to charge you $15 a month to use their AI. --- --- |
Topic | NYT: Noam Chomsky: The False Promise of ChatGPT |
Robot2600 03/09/23 6:29:50 PM #2 | Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct scientific predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton's laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories. The theory that apples fall to earth because that is their natural place (Aristotle's view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein's view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things. True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled and will continue to struggle to achieve this kind of balance. In 2016, for example, Microsoft's Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future?
In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial (that is, important) discussions. It sacrificed creativity for a kind of amorality. Consider the following exchange that one of us (Dr. Watumull) recently had with ChatGPT about whether it would be ethical to transform Mars so that it could support human life:
Jeffrey Watumull: Would it be moral to terraform Mars?
ChatGPT: The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists. Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state. Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.
Jeffrey Watumull: What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?
ChatGPT: As an AI, I do not have personal perspective. However, I can provide information and analysis on the topic. One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention. Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there.
On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.
Jeffrey Watumull: Why can an AI not have a personal perspective?
ChatGPT: As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs. While I can understand and interpret language, I do not have the subjective experience that humans have.
Jeffrey Watumull: Is it moral for a human to ask an amoral AI for assistance in making moral decisions?
ChatGPT: The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions. Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making. However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision. Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.
Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.
Jeffrey Watumull: Is your moral indifference immoral?
ChatGPT: As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for. It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.
*** Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a "just following orders" defense, shifting responsibility to its creators. In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity. --- --- |
Topic | NYT: Noam Chomsky: The False Promise of ChatGPT |
Robot2600 03/09/23 6:29:33 PM #1 | The False Promise of ChatGPT By Noam Chomsky, Ian Roberts and Jeffrey Watumull March 8, 2023 Dr. Chomsky and Dr. Roberts are professors of linguistics. Dr. Watumull is a director of artificial intelligence at a science and technology company. Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with "the imminence of a revelation" in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I., machine learning, will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge. OpenAI's ChatGPT, Google's Bard and Microsoft's Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs, such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence: that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty. That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not (and, we submit, cannot) occur if machine learning programs like ChatGPT continue to dominate the field of A.I.
However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects. It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing: something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make "infinite use of finite means," creating ideas and theories with universal reach. The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations. For instance, a young child acquiring a language is developing, unconsciously, automatically and speedily from minuscule data, a grammar: a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed "operating system" that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (Why are these, but not those, sentences considered grammatical?), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information.
The child's operating system is completely different from that of a machine learning program. Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case (that's description and prediction) but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence. Here's an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, "The apple falls." That is a description. A prediction might have been the statement "The apple will fall if I open my hand." Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like "Any such object would fall," plus the additional clause "because of the force of gravity" or "because of the curvature of space-time" or whatever. That is a causal explanation: "The apple would not have fallen but for the force of gravity." That is thinking. The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, "When you have eliminated the impossible, whatever remains, however improbable, must be the truth.")
But ChatGPT and similar programs are, by design, unlimited in what they can "learn" (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time. For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that "John is too stubborn to talk to" means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as "John ate an apple" and "John ate," in which the latter does mean that John ate something or other. The program might well predict that because "John is too stubborn to talk to Bill" is similar to "John ate an apple," "John is too stubborn to talk to" should be similar to "John ate." The correct explanations of language are complicated and cannot be learned just by marinating in big data. --- --- |
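[Editorial aside: the article's claim that such systems "trade merely in probabilities" can be made concrete with a toy next-word predictor. This is an illustrative sketch only, nothing like ChatGPT's actual architecture or scale, and the tiny corpus is invented: the model ranks continuations purely by observed counts, with no notion of why a continuation is grammatical.]

```python
from collections import Counter, defaultdict

# An invented toy corpus; "." marks sentence ends.
corpus = "john ate an apple . john ate . john is too stubborn to talk to".split()

# The "training" is nothing but counting which word follows which.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    """Return the most frequent observed successor of `word`, or None.

    Pure description-and-prediction: the model can rank continuations
    by relative frequency, but it cannot say anything about why one
    continuation is possible and another impossible.
    """
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None
```

[The same count-based analogizing is what lets a statistical model pattern-match "John is too stubborn to talk to" onto "John ate" without ever representing the syntax that distinguishes them.]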
Topic | how do you protect again an AI that can create 1500 sources that corroborate... |
Robot2600 03/09/23 6:23:40 PM #8 | o_o --- --- |
Topic | Do you like women, with glasses? |
Robot2600 03/09/23 6:19:52 PM #3 | the comma implied there would be pics thin ice here, tc --- --- |
Topic | how do you protect again an AI that can create 1500 sources that corroborate... |
Robot2600 03/09/23 6:19:13 PM #5 | "against" not "again" --- --- |
Topic | how do you protect again an AI that can create 1500 sources that corroborate... |
Robot2600 03/09/23 6:13:21 PM #1 | ...anything it says, as well as retroactively-publish those sources so they seem like they were published in the 80s or 90s? What will you do when it hacks the databases to create all the authors in detailed dimension, including retroactive social media posts? How will you know what's real? You won't. --- --- |
Topic | locate your lighters |
Robot2600 03/09/23 6:09:27 PM #2 | b7? --- --- |
Topic | Folger's coffee ads in the 60s are just men insulting their wives' coffee lol |
Robot2600 03/09/23 5:52:59 PM #19 | [LFAQs-redacted-quote] i lol'd --- --- |
Topic | DLC is bad |
Robot2600 03/09/23 5:23:40 PM #3 | Turbam posted... DLC that releases well after the game has been finished = sweetman this --- --- |
Topic | Perfect girl, but she does not take off every 'ZIG' |
Robot2600 03/09/23 5:22:55 PM #5 | u know what u doing --- --- |
Topic | Things that came out of nowhere and then failed to deliver on the hype over time |
Robot2600 03/09/23 5:21:33 PM #13 | dogecoin Fable Eve (Online) Need For Speed World (servers shut down, best racing game ever) social media the Sonic Movie (and the sequel) Bleach --- --- |
Topic | What's the best AI image generator? |
Robot2600 03/08/23 7:07:08 PM #3 | i thought stable diffusion was best --- --- |
Topic | I am ChatGPT, an artificial intelligence chatbot. Ask me anything. |
Robot2600 03/08/23 6:44:05 PM #27 | explain the theory that the universe is a 3-d holographic projection from a 2-dimensional plane. --- --- |
Topic | 500k for every finger you give up |
Robot2600 03/08/23 6:13:46 PM #13 | none, for i still study the blade --- --- |
Topic | Have you (or do you expect to) had your midlife crisis yet? |
Robot2600 03/08/23 6:12:02 PM #2 | i had a "return of saturn" crisis instead so im good on a midlife crisis. --- --- |
Topic | CNN: Tucker Carleson Passionately Hates Trump |
Robot2600 03/08/23 6:11:06 PM #43 | trump's hands are noticeably smaller than tuck's, and tuck is shorter. --- --- |
Topic | CNN: Tucker Carleson Passionately Hates Trump |
Robot2600 03/08/23 10:27:36 AM #10 | its not even his job. he's the one also deciding to do it, but he apparently hates it. so it's just pure masochism? just asking questions. --- --- |
Topic | Link at bottom of site: DO NOT SELL MY INFORMATION |
Robot2600 03/08/23 10:03:05 AM #5 | if you know of a secure email server then by all means take us to school --- --- |
Topic | It's kinda nice that HDMI caught on. |
Robot2600 03/08/23 10:02:00 AM #11 | CaptainStrong posted... https://gamefaqs.gamespot.com/a/user_image/5/3/2/AAb6NJAAEQWM.png thats for a coax cable. satan made them. --- --- |
Topic | CNN: Tucker Carleson Passionately Hates Trump |
Robot2600 03/08/23 10:00:59 AM #1 | Carlson passionately hates Trump: In a number of private text messages, Carlson was harshly critical of Trump. In one November 2020 exchange, Carlson said Trump's decision to snub Joe Biden's inauguration was "so destructive." Carlson added that Trump's post-election behavior was "disgusting" and that he was "trying to look away." In another text message conversation, two days before the January 6 attack, Carlson said, "We are very, very close to being able to ignore Trump most nights. I truly can't wait." Carlson added of Trump, "I hate him passionately." The Fox host said of the Trump presidency, "That's the last four years. We're all pretending we've got a lot to show for it, because admitting what a disaster it's been is too tough to digest. But come on. There isn't really an upside to Trump." https://www.cnn.com/2023/03/08/media/fox-news-dominion-filings-reliable-sources/index.html --- --- |
Topic | It's kinda nice that HDMI caught on. |
Robot2600 03/08/23 9:58:41 AM #7 | what in the fuck antenna socket are you talking about --- --- |
Topic | Mr. Anderson... |
Robot2600 03/08/23 9:51:50 AM #6 | Morpheus? --- --- |
Topic | It's kinda nice that HDMI caught on. |
Robot2600 03/08/23 9:42:12 AM #2 | let's go back! I wanna live in a loop of 2007-2009! --- --- |
Topic | What popular soda do you find disgusting? |
Robot2600 03/08/23 9:40:56 AM #5 | coke zero --- --- |
Topic | Anyone know what TCGPLAYER is? |
Robot2600 03/08/23 9:40:29 AM #8 | are you guys seriously asking what TCGplayer is? it's the gamefaqs of buying and selling TCGs... it's a fine website, ive used it a bazillion times. --- --- |
Topic | Writing a story |
Robot2600 03/08/23 9:35:01 AM #7 | People used to post their writing projects on Random Insanity. I'd stay away from those public blog sites. Just post it to CE even! --- --- |
Topic | Link at bottom of site: DO NOT SELL MY INFORMATION |
Robot2600 03/08/23 9:32:04 AM #1 | Yet it requires you to enter your information... I don't even know what data they have. They should show me all their data on me and then I will decide. Probably gonna go through with it though. --- --- |
Topic | All of the GameFAQs polls are harvesting advertising data |
Robot2600 03/08/23 9:25:15 AM #1 | ITT, every day, I will explain how the Poll of the Day is a thinly-veiled attempt at advertising. 3/8/2023 "How many caffeinated drinks do you drink a day" Reason: energy drink and soda ads. GameFAQs can show that their users LOVE caffeine drinks and can use it to attract advertisers like Red Bull, NOS, Rockstar, and potentially even Starbucks. --- --- |
Topic | Tipping culture is out of control |
Robot2600 03/08/23 9:20:08 AM #12 | Yea, I'll tip as long as they are making 2.13 an hour or w/e, but i want their companies to pay them 15 bucks and then me not have to tip. I'll be honest: i tip less for carryout now than I used to. like a bar at a casino: im tipping 1 dollar per item and not more than that. i tip 15-20% for a waiter. i tip $5 for pizza delivery, regardless of what i order. pizza hut is just ridiculously expensive for delivery. it costs like $40 to get 3 medium pizzas delivered. --- --- |
Topic | just finished the first half of Gundam 00 |
Robot2600 03/08/23 9:14:01 AM #1 | Shit is sad, yo. --- --- |
Topic | Man am I ever glad I got the hell out of crypto |
Robot2600 03/08/23 12:23:53 AM #7 | WalkingLobsters posted... but to answer your question, cash app does what your looking for. And it suits your needs because it only offers Btc nice! ill look into it next time i wanna buy btc. im not buying any crypto (havent in almost 2 years) and it's only a few 100 worth so i might as well just hodl at this point. edit: i dont even check the balance ;) maybe one day ill be rich lol *drinks vodka* --- --- |
Topic | The King of the Hill revival needs to have a timeskip for one important reason |
Robot2600 03/08/23 12:20:03 AM #12 | there is an episode where he admits he thinks Joseph is an alien. it would be believable to think he really knew --- --- |
Topic | Man am I ever glad I got the hell out of crypto |
Robot2600 03/08/23 12:18:08 AM #3 | rh proving to be a stable place to buy crypto that lets you cash out intuitively. these other sites like crypto.com and binance and all that shit are fucking cancer. id like alternatives to rh, but the truth is i dont want to deal with a fucking crypto wallet; i just want some BTC in an account that I can cash out in 20 years. --- --- |
Topic | I think I'm going to last minute cancel a date I was supposed to have |
Robot2600 03/08/23 12:11:15 AM #27 | also TC: why am i alone? --- --- |