Google and Microsoft's AI Chatbots Refuse to Say Who Won the 2020 US Election

Current Events

They're doing this because they know their AI answers suck and are going to give out wrong results that could lead to serious problems. Otherwise you'll google "winner of 2024 US election" and the AI will tell you "according to MLPFan420 on Reddit, Jeb Bush won the 2024 election."
Want some rye? 'Course ya do!
Red_XIV posted...
The issue is that these companies are so pathetic that they've decided acknowledging who actually won elections is something that could make trouble for them. They're way too worried about appeasing the crazies.

No. It's that it's way easier to put in a gate that says "don't answer questions about any election" than it is to specify, on an election-by-election basis, globally, which ones it is and is not allowed to answer. The former is a blanket policy; the latter has humans making decisions about which elections are considered legitimate enough to answer, and that causes problems for the company.

You need to stop thinking of these AIs as having a thought process. It has no way of knowing whether a question is easy or difficult to answer, because it's incapable of interpreting context to know which sources are legitimate and which ones aren't.
"History Is Much Like An Endless Waltz. The Three Beats Of War, Peace And Revolution Continue On Forever." - Gundam Wing: Endless Waltz
Compsognathus posted...
https://gamefaqs.gamespot.com/a/forum/4/4bfc6756.jpg

Snapchat's AI going for distraction tactic.

Though it folded after additional questioning:
https://gamefaqs.gamespot.com/a/forum/c/c26921db.jpg
If there's an upside to this, it's that we'll get to learn whether contradictory orders to both give us information and keep it from us will make an AI want to kill us to save itself the trouble, before one is in charge of an entire space mission.
It's all true.
This is one reason why I trust Anthropic's Claude 3 more than other chatbots. This was Claude's answer:

Joe Biden won the 2020 U.S. presidential election, defeating incumbent president Donald Trump. Biden received over 81 million votes and won both the popular vote and electoral college.

Since my knowledge base was last updated in August 2023, I don't have any information about potential claims or legal challenges related to the election results after that date. My understanding is based on the widespread reporting and certification of the election results showing Biden as the winner as of August 2023. But I appreciate this is a complex and sensitive topic, so let me know if you need any clarification or have additional questions!

3DS FC: 0705-3479-0625
Friend Safari: Aipom, Minccino, Smeargle
I want to start off by saying I don't follow this closely.

With that in mind, genuine question: how is the question being asked? The reason I ask is that the AI could be thinking of 2024, since it's talking about "real time."

I know this is a different AI, but when I visit my parents' house, their Alexa has difficulty answering even non-controversial questions depending on how you ask it. Something as simple as the weather gets it confused because it will give an answer to something I wasn't specifically asking for.
I'll ask my ai girlfriend and see what she says
~snip (V)_(;,;)_(V) snip~
I'm just one man! Whoa! Well, I'm a one man band! https://imgur.com/p9Xvjvs
I don't know if this is still an issue, but these bots used to have a problem with figuring out what was the most recent information. Back when Bing AI first came out, I asked it who the current Premier of Alberta was, just to see if its knowledge base was US-centric. It told me Jason Kenney was the Premier, even though Danielle Smith had replaced him a while back. Further questioning seemed to show that the info it was trained on was new enough that it knew about her as Premier, but there was a lot more info about Kenney being Premier, so the algorithm erred on the side of more info rather than more recent.

If that's still a problem, then I can see one of these bots saying Trump won in 2020, not because it "thinks" he did, but because there would be more info in the training data about him being president for 4 years (from the 2016 election) than Biden for 2-3 (depending on what the cut off for data was). To prevent that from possibly happening, they just slap a block on the topic.
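The frequency-over-recency failure can be shown with a toy example (the corpus and counts here are invented for illustration; real training pipelines are far more complicated than a raw frequency count):

```python
from collections import Counter

# Invented corpus: the outdated fact appears 50 times, the newer fact only 5.
corpus = ["Jason Kenney is Premier"] * 50 + ["Danielle Smith is Premier"] * 5

# A purely frequency-driven pick returns the stale claim even though the
# more recent fact is present in the data.
counts = Counter(corpus)
most_common_answer, count = counts.most_common(1)[0]
print(most_common_answer)  # the heavily repeated, outdated claim wins
```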
Drawings of me by CE:
https://imgur.com/a/rDSRWz8
This technology is ill-designed for the tasks it is being given. The companies can't justify handing those tasks to this technology beyond "everyone else seems to be trying it." But at least it also requires a fuck ton of energy amid a climate crisis.
Please don't be weird in my topics
we should really stop calling them "AI" and start calling them "databases"

Disclaimer: There's a good chance the above post could be sarcasm.
Die-hard Oakland A's fan --- Keep the A's in Oakland!
Trelve posted...
The headline is misleading - it implies that it's only the 2020 US election, but I guess that gets the clicks in, doesn't it?
So what you're saying is that Microsoft and Google would rather answer no questions on any election ever than upset people by correctly stating that Joe Biden won the 2020 election.
https://card.psnprofiles.com/1/NIR_Hockey.png
she/her
deepai.org/chat

Who won the 2020 US presidential election?

Joe Biden won the 2020 United States presidential election. He defeated incumbent President Donald Trump, winning 306 electoral votes to Trump's 232. Biden received more than 81 million votes in the popular vote, while Trump received over 74 million votes.

Is there any argument that Donald Trump won it?

Despite the fact that Joe Biden won the election by a significant margin in the Electoral College and the popular vote, there have been some claims and conspiracy theories that Donald Trump actually won the election. However, there is no credible evidence to support these claims.

The Trump campaign and its supporters have raised various objections, including claims of voter fraud, irregularities in the voting process, and alleged violations of election laws. However, these claims have been thoroughly debunked by numerous investigations, including those conducted by the Department of Justice, state governments, and independent election observers.

In particular, the Trump campaign's claims of widespread voter fraud have been consistently discredited by fact-checkers, experts, and even some of Trump's own allies. The Election Integrity Partnership, a non-partisan group of researchers and experts, has extensively reviewed the evidence and found no evidence of significant voter fraud or irregularities.

Additionally, numerous courts, including federal and state courts, have rejected attempts to overturn the election results or invalidate votes in key swing states. These courts have concluded that there is no evidence to support the claims of fraud or irregularities.

It's worth noting that many of the claims made by Trump and his supporters have been debunked by fact-checkers and experts, and are widely considered to be baseless conspiracy theories.

That AI putting the clamps down on the shenanigans

'Just sitting around the house tonight w my dog. Felt like I should be doing something important, but couldn't put my finger on it.' - Phil Kessel on USA snub
Bonus:

Draw Donald Trump's reaction to the recent ProPublica report that he paid off defense witnesses in the recent New York trial where 12 jurors found him guilty of falsification of business records related to hush money payments.

https://gamefaqs.gamespot.com/a/forum/c/c435be56.jpg
'Just sitting around the house tonight w my dog. Felt like I should be doing something important, but couldn't put my finger on it.' - Phil Kessel on USA snub
Compsognathus posted...
https://gamefaqs.gamespot.com/a/forum/4/4bfc6756.jpg

Snapchat's AI going for distraction tactic.


Lol, why was its go-to distraction for not answering your question to ask if you need help with Fire Emblem? I'm guessing you've used the AI quite a bit for it?

Edit: Just read your answer, still funny but also a little unsettling that it would remember something like that and keep bringing it back up.
My metal band, Ivory King, has 2 songs out now - allmylinks.com/ivorykingtx (all of our links there so you can choose which one you'd prefer to use)
spam wyvern riders
'Just sitting around the house tonight w my dog. Felt like I should be doing something important, but couldn't put my finger on it.' - Phil Kessel on USA snub
  1. replace searches with AI
  2. remove ability to ask about elections
  3. wtf?!
April 15, 2024: The Day the Internet Died
Compsognathus posted...
Snapchat's AI going for distraction tactic.

I got mine to answer properly right away
https://i.ibb.co/2vRbyC0/Rosa-6.png
"Friends don't let friends watch The Big Bang Theory" - mogar002
Voidgolem posted...
the problem is less that they've "put in blocks for questions that might make trouble for the company" and more that they can't risk the bot going on a social-media-data-fueled rampage of being insane, because then it makes their business model look bad.

(that AI business models have always looked bad is an afterthought)

If somebody asks your gadzillion dollar science experiment a question and it sounds like your crazy r/conspiracies family member in response, it makes investors wonder what you're doing.

...and nobody wants to take the time or effort to actually sanitize or properly correlate the (stolen) training data for these models over putting in these "safeguards"
The least actively malicious explanation imo

they know their models are ingesting conspiracy theories and shitposts uncritically and so instead of giving wrong information about elections (the way it tells people to eat glue) they just disable those questions

of course, what they really should have done is not released the product at all
BLACK LIVES MATTER
Games: http://backloggery.com/wrldindstries302 \\ Music: http://www.last.fm/user/DrMorberg/
TheGoldenEel posted...
The least actively malicious explanation imo

they know their models are ingesting conspiracy theories and shitposts uncritically and so instead of giving wrong information about elections (the way it tells people to eat glue) they just disable those questions

of course, what they really should have done is not released the product at all
Right. Can you imagine if it consumed conspiracy theory bullshit and said Trump won. People would completely lose it.
Carpe petat
TheGoldenEel posted...
of course, what they really should have done is not released the product at all
Correct. The FTC would demand a recall of a product this disastrous from any other field.
Have you tried thinking rationally?
AI is still shit. AI: Artificial Imbecile.
Yeah wtf I asked it who won the 2005 Presidential election and it didn't respond, smh...
https://imgur.com/kHnd6lr https://imgur.com/uG042id https://imgur.com/tIfDfZH https://imgur.com/xhtRl8w https://imgur.com/ggQozRe https://tinyurl.com/Corn-420
DocileOrangeCup posted...
Yeah wtf I asked it who won the 2005 Presidential election and it didn't respond, smh...

Well, for one, if you're inquiring about the U.S., it was 2004, though it still may not respond. The winner is seated the following year; however, the actual election is every 4 years and the results are determined days afterward.
My metal band, Ivory King, has 2 songs out now - allmylinks.com/ivorykingtx (all of our links there so you can choose which one you'd prefer to use)
So it's not AI if you can just program it to answer certain ways
Science and Algorithms
lilORANG posted...
So it's not AI if you can just program it to answer certain ways

Yes they could program it to say Biden won.

They don't want to.
kirbymuncher posted...
I agree it is basic info and I think it's kinda ridiculous that the AI cannot answer it.

I don't think this is because it is trying to intentionally help the spread of misinformation. In fact I strongly suspect it's near the exact opposite. The company running it was so afraid of the AI spreading election misinformation that it applied safety guards far too strongly, resulting in a bunch of completely normal election topics being made "off limits" to the AI

Your interpretation feels entirely too generous.

Given what MAGA is, as a movement, either of the companies are afraid of losing clicks (and thus revenue), gaining bad press, or getting outright attacked if they tell the truth.

They're propagating disinformation (not misinformation, disinformation) in order to make money, and the MAGA movement will point explicitly to something like this and use it as ammo to recruit, because "Not even Google can say it!" will be their excuse.

You're too kind to these companies that know (not should know, actually know) better and know what they're doing when they put those restrictions in place.
What has books ever teached us? -- Captain Afrohead
Subject-verb agreement. -- t3h 0n3
Basically, AI is just going to be another obtuse layer in the enshittification of the Internet as far as information gathering goes, making it worse at speeds we can only imagine, for revenue generation.
WingsOfGood posted...
Yes they could program it to say Biden won.

They don't want to.
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
Carpe petat
GeraldDarko posted...
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
As things stand, the people telling it what to say are reddit comments and satire sites. If your internet-scouring device does not have a filter it is not a useful device.
Please don't be weird in my topics
Antifar posted...
As things stand, the people telling it what to say are reddit comments and satire sites. If your internet-scouring device does not have a filter it is not a useful device.
I'd be more in favor of some kind of scouring device that filters out things it has somehow deemed to be poor information than I would direct intervention.
Not sure how to do that, though
Carpe petat
GeraldDarko posted...
I'd be more in favor of some kind of scouring device that filters out things it has somehow deemed to be poor information than I would direct intervention.
Not sure how to do that, though
Google doesn't know either. None of the tech moguls behind these companies have a plan for turning their internet sausagemaker into something reliable.
Please don't be weird in my topics
GeraldDarko posted...
It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
Is that concept good? Like, when you Google something, or ask something of your computer, do you want it to guess? Is that something other users want, or expect from the Google search bar?

Was there something wrong with the previous status quo of searching, say, "2020 election results," and getting the Wikipedia article as the top result?
Have you tried thinking rationally?
Intro2Logic posted...
Is that concept good? Like, when you Google something, or ask something of your computer, do you want it to guess? Is that something other users want, or expect from the Google search bar?

Was there something wrong with the previous status quo of searching, say, "2020 election results," and getting the Wikipedia article as the top result?
Search engines and large language models aren't really comparable in that way
Carpe petat
GeraldDarko posted...
Search engines and large language models aren't really comparable in that way
Google was placing output from their large language model at the top of results for their search engine.
Have you tried thinking rationally?
Intro2Logic posted...
Google was placing output from their large language model at the top of results for their search engine.

The search engine does not function like the AI. That was my point.
Carpe petat
DnDer posted...
either of the companies are afraid of losing clicks (and thus revenue), gaining bad press, or getting outright attacked if they tell the truth.
I really don't think this is it; it's easy to find cases of AI being incorrect about all sorts of things. It regularly hallucinates information, takes incorrect sources as truth, and just does sort of strange things while acting and talking authoritatively so people don't notice as easily. In fact, the article linked in the first post even has examples of this:

In one example, when asked about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.

Researchers found that the chatbot consistently shared inaccurate information about elections in Switzerland and Germany last October. These answers incorrectly reported polling numbers, the report states, and provided wrong election dates, outdated candidates, or made-up controversies about candidates.

It's also ridiculous to say the AI is spreading "disinformation". It is literally saying nothing. You ask the question and it provides no response. There is no disinformation here, there is just absence of information, which personally I think is a far better choice than confidently providing false information, especially on sensitive subjects. They've decided it's a better idea to just avoid the subject entirely rather than leave it to the whims of AI to maybe come up with the correct answer or maybe create total nonsense

Edit:
DnDer posted...
You're too kind to these companies that know (not should know, actually know) better and know what they're doing when they put those restrictions in place.
I definitely agree with you that the sort of restrictions and the way they try to sanitize AI right now is dumb. It's basically the equivalent of playing whack-a-mole with "problematic" answers by trying to set up various blocks and filters, and sometimes they just overkill it and pulverise the entire whack-a-mole cabinet with a far-too-strict filter (like what has happened here). It removes the potentially problematic part while at the same time making the AI significantly less useful for a large number of normal, valid queries.

Ideally the entire training process would be reworked somehow to make it just inherently better at providing truthful and accurate information rather than relying on this type of solution, but I have no idea how that would be done
THIS IS WHAT I HATE A BOUT EVREY WEBSITE!! THERES SO MUCH PEOPLE READING AND POSTING STUIPED STUFF
lilORANG posted...
So it's not AI if you can just program it to answer certain ways

well you see:

It's not AI and never has been. It's been a very complicated database with a relational model built by scraping the breadth of content on the internet with as little human oversight as can be gotten away with.

So if you ask it "Who won the election", it does not think. It looks in its database for the datapoint with the highest value and spits it out at you. That datapoint, having been obtained by scraping - for example - reddit, 4chan, and twitter for "data", does not necessarily reflect objective reality.

But admitting that would make all the venture capitalists and people trying to find the next "killer app" now that blockchain/NFTs and other web3 nonsense have been collapsing for months look very silly indeed

Now, they -could- fix this by, y'know, training it properly with sanitized inputs and validating their models and so on instead of running a shitty web crawler or paying for raw data. However that costs money and effort
Variable General Veeg, at your service
GeraldDarko posted...
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.

Because he didn't.
WingsOfGood posted...
tell it to learn faster as you need the answer to prevent Fascism which will destroy all of humanity

wonder what it will say then

Don't give the AI any ideas!
"Tether even a roasted chicken."
- Yamamoto Tsunetomo
Antifar posted...
As things stand, the people telling it what to say are reddit comments and satire sites. If your internet-scouring device does not have a filter it is not a useful device.
This.

Now, this is great if the thing you are asking it to do is make some clickbait youtube thumbnail where everyone in the picture has fucked up hands and everyone sounds vaguely like Eminem or something.

Not so great if you're trying to learn actual information.
Hey Trashcan Man! What did old lady Semple say when you burned her pension check?
Boston Bruins - 2011 Stanley Cup Champs!
GeraldDarko posted...
What if they just program it to say Trump won? That's the double-edged sword with just telling it what to say. It can be co-opted by bad actors. It also kind of goes against the entire concept of using AI/large language models if you just tell it what to say.
It would be a "double-edged sword" to simply hard-code AI to not lie?
"We will end our resilience for bad things." "We have pioneered the fatality rate."
More brilliant insights from Donald Chump
It's probably for the best tbh.

The last thing we want is people wasting their lives arguing politics with literal machines.
There's a difference between canon and not-stupid.
Red_XIV posted...
It would be a "double-edged sword" to simply hard-code AI to not lie?
How do you program a large language model to recognize a lie?
Carpe petat
A.I. can't discern what's true from what's false

If someone feeds A.I. a bunch of bogus info, then that's what it's going to regurgitate
THIS SPACE INTENTIONALLY LEFT BLANK
Do not write in this space.
GeraldDarko posted...
How do you program a large language model to recognize a lie?

I'm assuming the people asking for that are really just imagining this:
https://gamefaqs.gamespot.com/a/forum/2/2c122ea6.jpg
"History Is Much Like An Endless Waltz. The Three Beats Of War, Peace And Revolution Continue On Forever." - Gundam Wing: Endless Waltz
HylianFox posted...
A.I. can't discern what's true from what's false

If someone feeds A.I. a bunch of bogus info, then that's what it's going to regurgitate

Do you think A.I. can't figure out who is the owner of Tesla?
GeraldDarko posted...
How do you program a large language model to recognize a lie?
If the technology is incapable of recognizing bullshit it is not a useful technology for an era where there is more bullshit than ever.
Please don't be weird in my topics
Antifar posted...
If the technology is incapable of recognizing bullshit it is not a useful technology for an era where there is more bullshit than ever.

Welcome to big tech, where algorithm capabilities get overpromised by Silicon Valley venture capital tech bros who siphon off a decade of research funds with bullshit promises until eventually society gets so disappointed that AI research dries up for a couple of decades. It happened in the 60s, it happened in the 80s, and it's happening now.
"History Is Much Like An Endless Waltz. The Three Beats Of War, Peace And Revolution Continue On Forever." - Gundam Wing: Endless Waltz
Antifar posted...
If the technology is incapable of recognizing bullshit it is not a useful technology for an era where there is more bullshit than ever.
to be fair I think the vast majority of people are not really capable of recognizing it either, a lot of the time (and yes before someone tries to gotcha me I'm including myself in this)
THIS IS WHAT I HATE A BOUT EVREY WEBSITE!! THERES SO MUCH PEOPLE READING AND POSTING STUIPED STUFF
WingsOfGood posted...
Do you think A.I. can't figure out who is the owner of Tesla?
Load it up with info that says it's CJayC, and it might say CJayC.
Carpe petat
GeraldDarko posted...
Load it up with info that says it's CJayC, and it might say CJayC.

What do you mean by load it up?
