LogFAQs > #954912102

Topic List
Page List: 1
Topic: What happens if Facebook, reddit, Google one day get bought over by the Chinese?
joe40001
06/12/21 5:19:09 AM
#35:


Guide posted...
At this point, we are more on the same page, and are arguing semantics. But as opposed to people using semantics as code for "I'm bored now", I argue semantics with intent. It is my bread.

I agree that it profits off of dumb people, I just wouldn't say it's making us dumb. It's taking advantage of people susceptible to a host of manipulations, "dumb people", but if the susceptibility doesn't exist, I don't believe it can turn just anyone into a likely candidate for foil hats. Absolutely not denying that signal boosting these people to each other is an issue.

Silicon Valley people VERY often don't let their kids use their software. Even people who are experts on what is happening and how it happens are able to get sucked in. The dopamine center of the brain is part of who we are. Awareness can help, but it isn't something that will keep you safe.

It's like how a doctor who is an expert in the mechanisms of how drugs work wouldn't be immune from the addictive properties of heroin. So too is everybody to some extent vulnerable to the "dumbing": some more so than others, for sure, but it negatively affects nearly everybody who engages with it.

I feel like maybe you caught this as you wrote it out, but someone has to decide what the better things are, in order to optimize towards those things. It's still, fundamentally, "knowing better".

It's literally not though. I've made some ML classifiers and understand the algorithms. I can train an image classifier on cats vs not cats without having any idea what makes something "cat like".
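To make that point concrete, here's a toy sketch (synthetic data and a made-up "cat signal", not anyone's actual classifier): a plain logistic regression learns to separate "cat" from "not cat" vectors, and the training loop never encodes what "cat-like" means. The hidden rule lives only in the data generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 16-pixel vectors. A hidden rule (never seen by the
# training code below) makes "cat" images brighter in the first 8 pixels.
def make_batch(n):
    X = rng.normal(0.0, 1.0, size=(n, 16))
    y = rng.integers(0, 2, size=n)
    X[y == 1, :8] += 2.0  # the latent "cat-like" signal
    return X, y

X_train, y_train = make_batch(500)

# Logistic regression by gradient descent: the optimizer just fits weights
# to labeled examples; nothing here says what makes something "cat like".
w = np.zeros(16)
b = 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

X_test, y_test = make_batch(200)
preds = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
accuracy = np.mean(preds == y_test)
```

The learned weights end up concentrated on the signal pixels purely because that's what minimizes the loss, which is the poster's point: the algorithm discovers the boundary without being told the concept.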

I explain it a bit in a video I made a while back: (timestamp 38s)
https://www.youtube.com/watch?v=Dc4NsbixvFY&t=38s

The point is you don't have to have a clear understanding of scientific truth to optimize an algorithm against misinformation. You simply have to switch the parameter from "engagement" to "value". And I know what you'll say: "how could somebody possibly quantify value?" And it's literally no more difficult than engagement. You get certain data/meta-data parameters and then you do A/B testing off control.
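A minimal sketch of the "swap the parameter" idea, with every field name made up for illustration: the feed-ranking scaffolding is identical in both arms, and only the score each post is ranked by changes. The two orderings are exactly what an A/B test would then serve to different user buckets and compare against control.

```python
# Hypothetical scored posts -- "predicted_engagement" and "predicted_value"
# are invented names standing in for whatever models produce those scores.
posts = [
    {"id": 1, "predicted_engagement": 0.9, "predicted_value": 0.2},
    {"id": 2, "predicted_engagement": 0.4, "predicted_value": 0.8},
    {"id": 3, "predicted_engagement": 0.6, "predicted_value": 0.5},
]

def rank_feed(posts, objective):
    # Same ranking code for either objective; only the parameter differs.
    return sorted(posts, key=lambda p: p[objective], reverse=True)

feed_a = rank_feed(posts, "predicted_engagement")  # control arm
feed_b = rank_feed(posts, "predicted_value")       # treatment arm

order_a = [p["id"] for p in feed_a]  # [1, 3, 2]
order_b = [p["id"] for p in feed_b]  # [2, 3, 1]
```

The hard part is defining and measuring "value", not the ranking machinery; this sketch only shows that the machinery itself is objective-agnostic, which is the claim being made.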

They are already hacking our brains like crazy, and like I could fucking write an algorithm that does an ok job of optimizing this shit in like a week. You are legit telling me that with all the money and brainpower they couldn't possibly do it?

But somehow they definitely can come to scientific conclusions they have no qualifications to conclude?

The truth is hard, ML is fucking easy. It's like harry potter shit.

---
"joe is attractive and quite the brilliant poster" - Seiichi Omori
https://imgur.com/TheGsZ9