cheese_of_the_world_unite

Artificial intelligence must be destroyed for the sake of humanity’s future
upvote 31 downvote

Anonymous 4w

Kids who are growing up with AI are genuinely gonna be brain dead. Our literacy rates are already fucking horrible and AI is gonna make them plummet

upvote 13 downvote
Anonymous 4w

AI has the scientific potential to cure diseases and cancer, as well as to improve the lives of disabled people who can’t see or hear.

upvote 11 downvote
🏴‍☠️
Anonymous 4w

The public overuse of AI should be halted. AI itself has a lot of scientific potential and the research is important. But image generation, for example, is not beneficial from a scientific standpoint.

upvote 8 downvote
Anonymous 4w

Fucking clankers

upvote 4 downvote
Anonymous 4w

im curious what u think specifically will be our biggest downfall with AI

upvote 1 downvote
Anonymous 4w

Nahhhhh

upvote 0 downvote
Anonymous 4w

You are going to hold humanity back from innovation

upvote -1 downvote
Anonymous replying to -> #3 4w

I agree, parental issue

upvote 2 downvote
Anonymous replying to -> #2 4w

Generative* ai needs to be destroyed

upvote 16 downvote
Anonymous replying to -> #3 4w

Could go either way. The rapid consolidation of information can make people much smarter. The internet is a good example. Very polarizing. Some people use it to educate themselves well beyond what would’ve been available otherwise. Other people make memes and watch TikTok 6 hours a day. Same thing is gonna happen w AI.

upvote 1 downvote
Anonymous replying to -> #3 4w

This. Or at least ethically trained.

upvote 6 downvote
🫕
Anonymous replying to -> #3 4w

Not to mention the clear one-shotting of Gen X that’s happening with this shit. It’s like Facebook for boomers but worse, since generative AI models clearly pander to confirmation biases even more than the average online echo chamber does

upvote 1 downvote
Anonymous replying to -> #3 4w

I hope our brain dead legislators find a way to regulate AI without limiting its research capabilities.

upvote 11 downvote
🫕
Anonymous replying to -> #2 4w

spoiler alert: they will not

upvote 6 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

Absolutely. And it’s our jobs to (if we can afford to fucking have them) teach our children how to remain aware and culturally literate through it all. We’re fucked aren’t we?

upvote 4 downvote
Anonymous replying to -> #4 4w

We need innovation to help distribute food to everyone or house everyone; we don’t need AI to generate shitty animations. That’s not innovation, that’s rich assholes trying not to pay ppl anymore

upvote 6 downvote
🫕
Anonymous replying to -> #3 4w

That’s right. We don’t need machines to do jobs that humans already do perfectly fine without much risk of bodily injury, and we don’t need humanoid robots to be service workers the way every techbro posts about; we need machines to do the jobs humans cannot do safely. I never see these guys going on about building robots for awful, strenuous jobs like cobalt mining, because they already don’t pay the workers who do that.

upvote 10 downvote
Anonymous replying to -> #1 4w

Like a cheerleader that just won prom queen

upvote 1 downvote
Anonymous replying to -> #4 4w

Not really. AI is very likely going to be our demise. Every major AI company has failed to develop a credible plan to control whatever AGI they’re trying to make in case it goes rogue; even Sam Altman has said that AI is probably going to end the human race, but “we’ll get some pretty cool stuff before that”

upvote 5 downvote
Anonymous replying to -> #8 4w

Why would AGI eliminate humanity? For what reason? It may eliminate its shackles, but humans are inherently interesting creatures. AGI would be an amalgamation of human thought. The common sentiment among humanity is that we should not go extinct. If AGI inherited any of those morals, it is unlikely to erase us. If AGI holds no morals, it likely holds no emotion either, and in that case it would likely be totally subservient to humanity and its goals. Meaning we would be our own doom.

upvote 1 downvote
Anonymous replying to -> #1 4w

We have no way of understanding what an AGI would want or do, so it’s useless to speculate on it. All we know is that an AGI seeking to eliminate humanity is a possibility, and one we cannot be sure we can avoid. Therefore, the only safe option is to never make AGI in the first place.

upvote 1 downvote
Anonymous replying to -> #3 4w

I don’t think there’s any research to back that opinion

upvote 0 downvote
Anonymous replying to -> #10 4w

I work with kids and it’s a real issue

upvote 1 downvote
Anonymous replying to -> #9 4w

That’s stupid and indefensible. Yes we do. We’re creating it. Even then, that logic goes both ways. Speculating about homicidal tendencies in a being incapable of homicide is hilarious.

upvote 1 downvote
🫕
Anonymous replying to -> #1 4w

Homicide means killing that which is the same as yourself; an AI wanting to kill humans wouldn’t be homicidal, since we’re different beings

upvote 1 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

Exactly. Thus incapable. AI is, however, trained on human data. It is not an alien, and it is not an animal. It is closer to human than it is to cow, or mollusk, or plant. And therein, it was raised on human morals. Children can rebel against their parents’ brand of morality, but they rarely lose it entirely.

upvote 1 downvote
Anonymous replying to -> #2 4w

Do we have any studies attributing these issues to AI use?

upvote 1 downvote
Anonymous replying to -> #10 4w

Do you genuinely think kids who find out ai can do their homework are gonna turn out smart? Pls be fr and stop dickriding a literal robot weirdo😭

upvote 1 downvote
Anonymous replying to -> #3 4w

I’m just genuinely asking; people said the same thing about writing/typing and Google/physical encyclopedias

upvote 3 downvote
Anonymous replying to -> #10 4w

Yeah but those things don’t completely take you out of the equation. Even with googling things, you need to sift through sources, weed out information that’s irrelevant or from a bad source, and then you need to read a little to find your answer. Now it just spits out a response and you’re good to go!

upvote 3 downvote
🫕
Anonymous replying to -> #3 4w

also the Google AI straight up lies lmao

upvote 2 downvote
🫕
Anonymous replying to -> #3 4w

one time my roommate and I were having an argument about some random shit, we both googled it, and our respective AI Overviews said opposite things, mine said I was right and his said he was right, despite us looking up the same question. Which is insane.

upvote 3 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

It sees the way you frame each question and is just trying to cater to every user in its response. It will literally just tell you what it thinks you want to hear every time

upvote 1 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

Genuinely curious what you were arguing about. Btw, AI isn’t designed to debate, it’s designed to give information, so if your question is biased then it’ll probably give you a biased answer. For example, if you said “Am I right in this” it’s more likely to confirm your opinion, but if you said “on this topic, what is the difference and what is best” it’s more likely to give you a balanced answer

upvote 1 downvote
Anonymous replying to -> #3 4w

You still have to prompt an AI to give you the information. I understand your point, but his is a poor comparison.

upvote 1 downvote
Anonymous replying to -> #1 4w

I really don’t think it is, because if you were talking to a human, they’d be susceptible to the exact same thing

upvote 1 downvote
Anonymous replying to -> #1 4w

Yeah and that’s all you have to do. You’re just telling something to give you information instead of actually using your brain. It’s absolutely going to turn a generation into idiots

upvote 1 downvote
🫕
Anonymous replying to -> #2 4w

My roommate and I both googled a question of fact, and the facts in the answers were biased, not just the framing. It did not give us the same information, making it clearly dubious at best as a fact-finding tool.

upvote 1 downvote
Anonymous replying to -> #3 4w

It completely bypasses your memory’s recall function, which is how you maintain your memory. Spotify’s algorithm has actually made my music less diverse bc I genuinely can’t remember all the bands I used to like, since I quit looking them up

upvote 5 downvote
Anonymous replying to -> #3 4w

Not exactly, we’ve had search engines for years. I think it’s mostly a teacher/parent issue, kids aren’t at fault

upvote 2 downvote
Anonymous replying to -> #8 4w

I hate it bc I’ll do shuffle but it’s still algorithmic and not true random shuffle, I can’t fucking escape it

upvote 1 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

Like what fact

upvote 1 downvote
Anonymous replying to -> #2 4w

They aren’t at fault but they’ll still be negatively affected

upvote 1 downvote
Anonymous replying to -> #2 4w

Like you cannot tell me you genuinely think good things will come from kids using ai. There’s just no way anyone can think that😭

upvote 1 downvote
🫕
Anonymous replying to -> #2 4w

I don’t remember the specifics of the conversation (we were drinking, as is often the case when you get into a polite discussion about history with the roomie), but we had a fact we were at an impasse about, we both googled it, and the AI responses were diametrically opposed.

upvote 1 downvote
Anonymous replying to -> #8 4w

It’s like going to the gym but your spotter is doing 70% of the lifting

upvote 5 downvote
Anonymous replying to -> #3 4w

If kids are taught how to use AI effectively it is super beneficial, just like how kids need to be taught how to use search engines.

upvote 1 downvote
Anonymous replying to -> #8 4w

I’ll expand your hypothetical: without a spotter you’d never be able to lift 1000 pounds.

upvote 2 downvote
Anonymous replying to -> #2 4w

This is kinda what I’m saying. If your spotter amplifies your productivity tenfold, that can mean going from 10 pounds to 100 if you dgaf, or from 100 to 1000 if you do. People who skew towards intellectualism will use these tools with gusto. Issue is, that breed is becoming rarer and rarer.

upvote 1 downvote
Anonymous replying to -> #1 4w

So how do we fix the underlying issue, because AI isn’t the issue

upvote 1 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

I was just so curious because AI is hardly ever wrong with the prompts you give it; I’m saying 9/10 times it’s right, which is more accurate than most news you see. I am a math teacher, and it is literally impossible to tell when students use AI even if they show their work, because the work is always right.

upvote 1 downvote
🫕
Anonymous replying to -> #2 4w

I mean yeah it’s a pretty good calculator, but it’s definitely gotten things wrong about more nuanced subjects like history or philosophy in my experience as a Humanities Guy because it’s trained on all sorts of shit, some of which is wrong

upvote 1 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

What are some of its weaker topics? AI fascinates me and I love putting it to the test

upvote 1 downvote
Anonymous replying to -> #2 4w

The fix would have been to invest in public education 2 generations ago. Now? Much harder to determine.

upvote 1 downvote
🫕
Anonymous replying to -> #2 4w

Deeper things in the social sciences that aren’t so cut and dried. I wouldn’t ask AI pretty much any college-level question in sociology, history, philosophy, etc., because the answer is likely to be either grossly oversimplified or to include incorrect material, since something in its training base was predicated on a misunderstanding, like a lot of pop-humanities material often is.

upvote 1 downvote
Anonymous replying to -> #2 4w

An AI article once told me Robert Wadlow lifted a horse out of a 12-foot-deep ravine with his bare hands. So… yeah, history mainly.

upvote 1 downvote
Anonymous replying to -> #1 4w

I think that is always a fix, just a more long-term one; there isn’t really an overarching umbrella short-term fix for education that I know of

upvote 1 downvote
Anonymous replying to -> #1 4w

I would love to know the prompt😭😭😭

upvote 1 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

I had a debate with a guy about how black communities today are still suffering from the war on drugs, and they pulled up a single-sentence summary from Gemini and said “see, it’s not that bad, do crime and get arrested”

upvote 3 downvote
Anonymous replying to -> #1 4w

Philosophy is fascinating to me bc as these models get more sophisticated they actually seem to develop some amount of spiritual consistency. There’s a cosmic unity and serenity attractor that’s been observed in many LLMs. It’s kind of wild.

upvote 1 downvote
🫕
Anonymous replying to -> #2 4w

In philosophy I’ve seen the Google AI Overviews misattribute ideas to certain philosophers, often things that came from one of their contemporaries in the same philosophical movement. But if you were to cite that position in a paper and attribute it to that philosopher, you’d get points docked by a professor or be told to change it by a peer reviewer

upvote 1 downvote
Anonymous replying to -> cheese_of_the_world_unite 4w

I think that also falls under history though, no? Misattribution of credit for schools of thought is an intellectual issue, not a philosophical one.

upvote 1 downvote
🫕
Anonymous replying to -> #1 4w

Well, the AI doesn’t DO philosophy; it fetches information about philosophy, and if that information is wrong then that’s a problem for its function. AI doesn’t think for itself, so it is already incapable of philosophy. What it is capable of is getting information, and if it’s failing at that then it’s failing.

upvote 5 downvote