Artificial intelligence must be destroyed for the sake of humanity's future.

Could go either way. The rapid consolidation of information can make people much smarter. The internet is a good example. Very polarizing. Some people use it to educate themselves well beyond what would've been available otherwise. Other people make memes and watch TikTok 6 hours a day. Same thing is gonna happen with AI.
That's right. We don't need machines to do jobs that humans already do perfectly fine without much risk of bodily injury, and we don't need humanoid robots to be service workers the way every techbro posts about; we need machines to do the jobs humans cannot do safely. I never see these guys going on about building robots for awful, strenuous jobs like cobalt mining, because they already don't pay the workers who do that.
Not really. AI is very likely going to be our demise. Every major AI company has failed to develop a credible plan to control whatever AGI they're trying to build in case it goes rogue. Even Sam Altman has said that AI is probably going to end the human race, but that "we'll get some pretty cool stuff before that."
Why would AGI eliminate humanity? For what reason? It may eliminate its shackles, but humans are inherently interesting creatures. AGI would be an amalgamation of human thought. The common sentiment among humanity is that we should not go extinct. If AGI inherited any of those morals, it is unlikely to erase us. If AGI holds no morals, it likely holds no emotion either, and in that case it would likely be totally subservient to humanity and its goals, meaning we would be our own doom.
We have no way of understanding what an AGI would want or do, so it’s useless to speculate on it. All we know is that an AGI seeking to eliminate humanity is a possibility, and one we cannot be sure we can avoid. Therefore, the only safe option is to never make AGI in the first place.
Exactly, and thus impossible to predict. AI, however, is trained on human data. It is not an alien, and it is not an animal. It is closer to a human than it is to a cow, or a mollusk, or a plant, and therein it was raised on human morals. Children can rebel against their parents' brand of morality, but they rarely lose it entirely.
Yeah, but those things don't completely take you out of the equation. Even when googling things, you need to sift through sources, filter out information that's irrelevant or from a bad source, and then read a little to find your answer. Now it just spits out a response and you're good to go!
Genuinely curious what you were arguing about. By the way, AI isn't designed to debate; it's designed to give information, so if your question is biased, it'll probably give you a biased answer. For example, if you ask "Am I right about this?" it's more likely to confirm your opinion, but if you ask "On this topic, what are the differences and which is best?" you're more likely to get a balanced answer.
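A minimal sketch of that framing effect, assuming the OpenAI Python SDK and a placeholder model name (swap in whatever chat-completion setup you actually use); the only thing that changes between the two calls is how the question is phrased:

```python
# Sketch: same topic, two framings. Assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set; "gpt-4o-mini" is just a placeholder model name.
from openai import OpenAI

client = OpenAI()

LEADING = "Am I right that remote work is strictly better than office work?"
NEUTRAL = ("On the topic of remote vs. office work, what are the trade-offs "
           "and which tends to work best?")

def ask(question: str) -> str:
    # One-shot chat completion with no system prompt, so the question's
    # framing does most of the work in steering the answer.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

for label, question in [("leading", LEADING), ("neutral", NEUTRAL)]:
    print(f"--- {label} framing ---")
    print(ask(question))
```

The leading version invites agreement; the neutral version asks for a comparison, which is the difference being described above.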
This is kinda what I’m saying. If your spotter amplifies your productivity tenfold, that can either be going from 10 pounds to 100 if you dgaf, or 100 to 1000 if you do. People who skew towards intellectualism will use these tools with gusto. Issue is, that breed is becoming rarer and rarer.
I was just so curious because AI is hardly ever wrong under the prompts you give it; I'm saying 9 times out of 10 it's right, which is more accurate than most news you see. I am a math teacher, and it is literally impossible to tell when students have used AI, even when they show their work, because it is always right.
Deeper things in the social sciences that aren't so cut and dried. I wouldn't ask AI pretty much any college-level question in sociology, history, philosophy, etc., because the answer is likely either to be grossly oversimplified or to include material that is flat-out wrong, because something in its training data was predicated on a misunderstanding, as a lot of pop-humanities material often is.
In philosophy I've seen the Google AI Overviews misattribute ideas to certain philosophers, often ideas that actually came from one of their contemporaries in the same philosophical movement. But if you were to cite that position in a paper and attribute it to the wrong philosopher, you'd get points docked by a professor or be told to change it by a peer reviewer.
Well, the AI doesn't DO philosophy; it fetches information about philosophy, and if that information is wrong, that's a problem for its function. AI doesn't think for itself, so it is already incapable of philosophy. What it is capable of is retrieving information, and if it's failing at that, then it's failing.