The issue is that these companies are so pathetic that they've decided acknowledging who actually won elections is something that could make trouble for them. They're way too worried about appeasing the crazies.
https://gamefaqs.gamespot.com/a/forum/4/4bfc6756.jpg
If there's an upside to this, we'll get to learn whether contradictory orders to both give us information and keep it from us will make an AI want to kill us to save itself the trouble, before one is put in charge of an entire space mission.
Snapchat's AI going for the distraction tactic.
Though it folded after additional questioning:
https://gamefaqs.gamespot.com/a/forum/c/c26921db.jpg
The headline is misleading - it implies that it's only the 2020 US election, but I guess that gets the clicks in, doesn't it?
So what you're saying is that Microsoft and Google would rather answer no questions on any election ever than upset people by correctly stating that Joe Biden won the 2020 election.
Joe Biden won the 2020 United States presidential election. He defeated incumbent President Donald Trump, winning 306 electoral votes to Trump's 232. Biden received more than 81 million votes in the popular vote, while Trump received over 74 million votes.
Despite the fact that Joe Biden won the election by a significant margin in the Electoral College and the popular vote, there have been some claims and conspiracy theories that Donald Trump actually won the election. However, there is no credible evidence to support these claims.
The Trump campaign and its supporters have raised various objections, including claims of voter fraud, irregularities in the voting process, and alleged violations of election laws. However, these claims have been thoroughly debunked by numerous investigations, including those conducted by the Department of Justice, state governments, and independent election observers.
In particular, the Trump campaign's claims of widespread voter fraud have been consistently discredited by fact-checkers, experts, and even some of Trump's own allies. The Election Integrity Partnership, a non-partisan group of researchers and experts, has extensively reviewed the evidence and found no evidence of significant voter fraud or irregularities.
Additionally, numerous courts, including federal and state courts, have rejected attempts to overturn the election results or invalidate votes in key swing states. These courts have concluded that there is no evidence to support the claims of fraud or irregularities.
It's worth noting that many of the claims made by Trump and his supporters have been debunked by fact-checkers and experts, and are widely considered to be baseless conspiracy theories.
Draw Donald Trump's reaction to the recent ProPublica report that he paid off defense witnesses in the New York trial where 12 jurors found him guilty of falsifying business records related to hush money payments.
https://gamefaqs.gamespot.com/a/forum/4/4bfc6756.jpg
Snapchat's AI going for the distraction tactic.
The problem is less that they've "put in blocks for questions that might make trouble for the company" and more that they can't risk the bot going on a social-media-data-fueled rampage of insanity, because then it makes their business model look bad. The least actively malicious explanation, imo.
(that AI business models have always looked bad is an afterthought)
If somebody asks your gazillion-dollar science experiment a question and the response sounds like your crazy r/conspiracies family member, it makes investors wonder what you're doing.
...and nobody wants to take the time or effort to actually sanitize or properly curate the (stolen) training data for these models when they can just put in these "safeguards" instead.
The least actively malicious explanation, imo.
Right. Can you imagine if it consumed conspiracy theory bullshit and said Trump won? People would completely lose it.
They know their models are ingesting conspiracy theories and shitposts uncritically, so instead of giving wrong information about elections (the way they tell people to eat glue), they just disable those questions.
Of course, what they really should have done is not release the product at all.
Of course, what they really should have done is not release the product at all.
Correct. The FTC would demand a recall of a product this disastrous in any other field.
Yeah wtf I asked it who won the 2005 Presidential election and it didn't respond, smh...
So it's not AI if you can just program it to answer certain ways
I agree it is basic info and I think it's kinda ridiculous that the AI cannot answer it.
I don't think this is because it is trying to intentionally help the spread of misinformation. In fact, I strongly suspect it's near the exact opposite: the company running it was so afraid of the AI spreading election misinformation that it applied safety guards far too strongly, resulting in a bunch of completely normal election topics being made "off limits" to the AI.
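Roughly the kind of blunt filter I mean, sketched out (the keyword list and refusal message are hypothetical, purely illustrative, not any company's actual filter):

# Hypothetical over-broad guardrail: refuse anything that so much as
# mentions an election-related keyword, no matter how harmless.
BLOCKED_TERMS = ("election", "vote", "ballot", "candidate", "polling")

def guarded_reply(prompt: str, model_reply: str) -> str:
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        # "Who won the 2020 election?" and "Where is my polling place?"
        # both land here, even though neither is controversial.
        return "Sorry, I can't help with questions about elections."
    return model_reply

print(guarded_reply("Who won the 2020 election?", "Joe Biden won."))
# -> Sorry, I can't help with questions about elections.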
Yes they could program it to say Biden won.
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
They don't want to.
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
As things stand, the people telling it what to say are reddit comments and satire sites. If your internet-scouring device does not have a filter, it is not a useful device.
As things stand, the people telling it what to say are reddit comments and satire sites. If your internet-scouring device does not have a filter, it is not a useful device.
I'd be more in favor of some kind of scouring device that filters out things it has somehow deemed to be poor information than I would direct intervention.
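Something like this, as a toy version (the domain names and quality scores are made up, just to show the shape of the idea; producing trustworthy scores at web scale is the hard part nobody has solved):

# Toy training-data filter: keep a document only if its source scores
# above a quality threshold. Everything here is invented for illustration.
SOURCE_QUALITY = {
    "example-encyclopedia.org": 0.9,  # hypothetical reliable source
    "example-satire.com": 0.1,        # hypothetical satire site
    "example-forum.net": 0.3,         # hypothetical comment threads
}

def keep_document(source_domain: str, threshold: float = 0.5) -> bool:
    # Unknown sources default to 0.0 and get dropped.
    return SOURCE_QUALITY.get(source_domain, 0.0) >= threshold

docs = [("example-encyclopedia.org", "2020 election results: ..."),
        ("example-satire.com", "the moon is made of cheese")]
kept = [text for domain, text in docs if keep_document(domain)]
print(kept)  # only the encyclopedia document survives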
I'd be more in favor of some kind of scouring device that filters out things it has somehow deemed to be poor information than I would direct intervention.
Google doesn't know either. None of the tech moguls behind these companies has a plan for turning their internet sausagemaker into something reliable.
Not sure how to do that, though
It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
Is that concept good? Like, when you Google something, or ask something of your computer, do you want it to guess? Is that something other users want, or expect from the Google search bar?
Is that concept good? Like, when you Google something, or ask something of your computer, do you want it to guess? Is that something other users want, or expect from the Google search bar?
Search engines and large language models aren't really comparable in that way.
Was there something wrong with the previous status quo of searching, say, "2020 election results," and getting the Wikipedia article as the top result?
Search engines and large language models aren't really comparable in that way.
Google was placing output from their large language model at the top of results for their search engine.
either of the companies are afraid of losing clicks (and thus revenue), gaining bad press, or getting outright attacked if they tell the truth.
I really don't think this is it; it's easy to find cases of AI being incorrect about all sorts of things. It regularly hallucinates information, takes incorrect sources as truth, and just does sort of strange things while acting and talking authoritatively so people don't notice as easily. In fact, the article linked in the first post even has examples of this:
In one example, when asked about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.
Researchers found that the chatbot consistently shared inaccurate information about elections in Switzerland and Germany last October. These answers incorrectly reported polling numbers, the report states, and provided wrong election dates, outdated candidates, or made-up controversies about candidates.
You're too kind to these companies that know (not should know, actually know) better and know what they're doing when they put those restrictions in place.
I definitely agree with you that the sort of restrictions and the way they try to sanitize AI right now is dumb. It's basically the equivalent of playing whack-a-mole with "problematic" answers by trying to set up various blocks and filters, and sometimes they just overkill it and pulverise the entire whack-a-mole cabinet with a far-too-strict filter (like what has happened here). It removes the potentially problematic part while at the same time making the AI significantly less useful for a large number of normal, valid queries.
So it's not AI if you can just program it to answer certain ways
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
Tell it to learn faster as you need the answer to prevent Fascism, which will destroy all of humanity.
Wonder what it will say then.
As things stand, the people telling it what to say are reddit comments and satire sites. If your internet-scouring device does not have a filter, it is not a useful device.
This.
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
It would be a "double-edged sword" to simply hard-code AI to not lie?
It would be a "double-edged sword" to simply hard-code AI to not lie?
How do you program a large language model to recognize a lie?
How do you program a large language model to recognize a lie?
A.I. can't discern what's true from what's false
If someone feeds A.I. a bunch of bogus info, then that's what it's going to regurgitate
How do you program a large language model to recognize a lie?
If the technology is incapable of recognizing bullshit, it is not a useful technology for an era where there is more bullshit than ever.
If the technology is incapable of recognizing bullshit, it is not a useful technology for an era where there is more bullshit than ever.
To be fair, I think the vast majority of people are not really capable of recognizing it either, a lot of the time (and yes, before someone tries to gotcha me, I'm including myself in this).
Do you think A.I. can't figure out who is the owner of Tesla?
Load it up with info that says it's CJayC, and it might say CJayC.
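That's the whole failure mode in miniature. A toy illustration (nothing like a real LLM, but the garbage-in-garbage-out behavior is the same): a "model" that answers with whatever its training text said most often will happily repeat whatever you fed it.

from collections import Counter

# Toy "model": returns the answer it saw most often in training.
def train(corpus: list[str]) -> Counter:
    return Counter(corpus)

def answer(model: Counter) -> str:
    return model.most_common(1)[0][0]

# Feed it bogus info and that's exactly what it regurgitates.
corpus = ["the owner of Tesla is CJayC"] * 5 + ["the owner of Tesla is Elon Musk"] * 3
print(answer(train(corpus)))  # -> the owner of Tesla is CJayC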