
it’s layered. LLMs are pretty much worthless; more specialized applications are a much more worthwhile thing. task-specific ML in particular has a lot of potential, and in general it’s subject to much more direct supervision. the datasets used are often more heavily curated and tend to be much better about not violating IP rights. depending on how a model is trained, the environmental impact can also be reduced: much of the time it can be trained once and is ready to ship, vs being continuously retrained the way a lot of the big LLMs are. with a pre-trained model, it’s a lot more like running any other piece of software than like running an LLM, and there’s a lot more potential for actual, practical applications. there’s still an environmental impact that can’t be ignored, but at the same time, massive data centers/server farms aren’t anything new. to a certain extent all computers are evil. also “no ethical consumption under capitalism” and all that. still doesn’t hold a candle to the beef industry
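the “train once, then it’s just software” point can be sketched concretely. this is a minimal illustration, assuming scikit-learn and the built-in iris dataset (my choices, not from the original): the model is fit one time, serialized to disk, and from then on running it is ordinary file I/O plus a function call, with no ongoing training loop.

```python
import pickle
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# train once, on a small, curated, supervised dataset
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# "ship" the model: serialize the finished artifact to disk
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# later, deployment is just loading a file and calling a function;
# the model's weights never change at inference time
with open("model.pkl", "rb") as f:
    deployed = pickle.load(f)

print(deployed.predict(X[:3]))  # inference only, no further training
```

the serialized artifact behaves exactly like the in-memory model, which is what makes it shippable like any other program.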