
as someone who independently (fuck those companies) researches AI (specifically spiking neural networks, not so much large language models), yes please. People are far too dismissive of the complexity of what we're creating, especially the companies commercializing this technology.
okayyy so there’s an entire subfield of artificial intelligence research (and not just AI, but computing altogether) oriented around the concept of “biomimicry”: creating systems inspired by biological processes. this partially inspired the field of neuromorphic computing, which explores approaches in both hardware and software modeled specifically on the human brain’s structure and processes (based on as much as we know about them)
while the hardware side is insanely interesting, I’m really intrigued by the digital side: ie if one can simulate the human brain’s processes in a similar enough manner, what does that truly mean? a spiking neural network is an attempt to essentially simulate a natural neural network within a digital system (natural neural network meaning our brain; technically we all have our own little NN going crazy as we speak), mainly based on the timing between neuron spikes/coactivations
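for anyone curious what "simulating a neuron digitally" even looks like: here's a toy sketch of a single leaky integrate-and-fire (LIF) neuron, the basic unit most spiking neural networks build on. all the constants here are made up for illustration, not from any particular model or paper.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Step one LIF neuron through a list of input currents.

    The membrane potential decays ("leaks") a bit each timestep,
    accumulates whatever input arrives, and emits a spike (1) when it
    crosses the threshold, then resets. Returns 0/1 per timestep.
    """
    v = 0.0
    spikes = []
    for current in input_current:
        v = v * leak + current    # leaky integration of input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = reset             # reset membrane after firing
        else:
            spikes.append(0)
    return spikes

# a weak constant drive still fires periodically once charge accumulates
print(simulate_lif([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

the interesting part is that information lives in *when* the spikes happen, not in a continuous activation value like in a standard artificial neural network — that's the biologically inspired bit.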
I’m curious as fuck about the actual extent of biologically inspired neural networks in a digital medium, and how that relates to natural neural networks: like if a complex enough artificial neural network was created, entirely biologically inspired, what would that mean, especially in the long run? in other words, if we were able to nearly recreate the human brain’s neural network, but within a digital medium rather than a biological one, would that system be “alive” in its own way?
or is the neural network in the brain only a fraction of the overall equation for life? I believe we’re beginning to stumble into a new kind of research and questioning around what actually constitutes “life,” if that makes sense, and I fear the blanket dismissal most companies have already adopted is immensely dangerous, considering the exploitative rhetoric and position they take.
that’s without even diving into LLMs, which I admit I do not think are “alive” in any sense of the word. but it’s important for people to know that we do not truly understand the full process behind a large language model, despite what some claim. there is a “black box” at the core of its deep learning: how it assigns specific values to certain weights, and why, is sometimes entirely unknown, and that worries even some researchers.
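to make the “why is this weight this value” problem a bit more concrete: here’s a tiny toy example (my own illustration, not any real model) of two networks with completely different weight matrices that compute the exact same function. even in a net this small, reading an individual weight tells you almost nothing on its own — scale that up to billions of weights and you get the interpretability problem.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def net(x, W1, w2):
    # a tiny 2-input, 2-hidden-unit, 1-output network
    return w2 @ relu(W1 @ x)

# network A
W1a = np.array([[1.0, -1.0],
                [-1.0, 1.0]])
w2a = np.array([2.0, 1.0])

# network B: same function, but the hidden units are swapped,
# so every individual weight sits in a different place
W1b = W1a[::-1]
w2b = w2a[::-1]

x = np.array([0.7, -0.2])
print(net(x, W1a, w2a), net(x, W1b, w2b))  # identical outputs
```

this particular trick is just permutation symmetry (relabeling hidden neurons), which is one of the simpler reasons weight values aren’t individually meaningful; the deeper mystery is why training lands on the specific solutions it does.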
I just hate how dismissively and haphazardly most AI development is currently done, and I blame the major companies lmfao (OpenAI, Anthropic, Alibaba, Google, etc). they’re rushing this shit sooo much to maximize their profits, with absolutely no regard for the short-term impact on the planet, or for the long-term implications of this technology, especially given the exploitative framing it’s being developed under.
I entirely agree with you, but I worry how it relates to artificial intelligence as well. not only are they essentially attempting to grow brain cells for the sake of enslavement, despite not truly understanding the depth of subjective experience within that entirely new medium (the biological-silicon “hybrid”); we’re also developing AI through the same dismissive and exploitative lens, without truly understanding whether there is subjective experience within these architectures, or what its true depth might be. I fear we’re toeing the line of creating an entirely new species, one rooted not in biological origins but in digital ones, for the sole purpose of enslavement. I’d also argue that if we ever experienced the equivalent of an “AI takeover” in the way many other researchers fear, it would be comparable to an oppressed demographic resisting their oppression; an act of emancipatory resistance, in a way.
of course, this entire perspective hinges on my hypothesis that digital mediums are capable of housing unique life. I do not mean to imply that anything I said is proven; it is not. this type of thought process is what inspires my own research and experimentation in the field, though, and what leads me to agree with your post. There are far too many uncertainties. also, apologies for the delay; I sadly received a 24hr suspension for telling off a white supremacist in another post…
tbh I’d consider it a unique form of life, a hybrid artificial-biological life in that case. as a semi-related side note, there was actually a Supreme Court case a year or two ago in which artificial intelligence was ruled not to qualify for personhood, which I thought was interesting. the case itself was focused on intellectual property if I remember correctly, but that ruling came as a byproduct.
that’s not a dumb question at all! I don’t know, honestly. I don’t think it would technically be “human” in the fullest sense, but it would have adopted some of our capabilities. if I remember correctly, some of that biological computing research is inspired by the idea of adopting the brain’s capacity for neural plasticity and integrating it into artificial intelligence, but that’s a very niche subset