Introduction
One of my great pet peeves with our time is its weird relationship with computers. These machines have become so ubiquitous in our lives that we seem to humanize them to a worrying extent – worrying, because it degrades and twists our own self-perception. This tendency has become so egregious and commonplace with the rise of so-called “artificial intelligences” that it is starting to look like a new strain of pseudoreligiosity, complete with its own bizarre theology and eschatology. People are eagerly awaiting the day when their computer “becomes aware” like others await the Second Coming!
I feel it is high time that some sobriety is given to the topic, and that the worst of these attitudes meet some well-needed opposition. For that reason, I am writing this essay on what I term computer fetishism – that is, the modern tendency to treat computers as objects of spiritual or religious significance. I am not here to discuss the technicalities of AI as such, but rather to critique the emerging religious attitudes that otherwise atheistic people are starting to hold in regards to computers. I intend to show why these attitudes are ultimately misplaced, and how they are the result of a suppressed need for meaning in a world that has been degraded and denuded by materialism.
It should go without saying that I will be taking a deeply philosophical approach to these questions, and I mention this only because I know how the computer cultists tend to despise philosophy. The simple truth is that the questions we will be tackling, such as the nature of mind and how it relates to reality, are philosophical questions at their heart. There is no scientific or engineering approach to be taken here. I am unapologetic about working from a perspective steeped in Platonism and Vedanta, since these are the schools of philosophy that deal most profoundly with these questions.
With this introduction out of the way, let us begin.
The Brain Is Not A Computer
The constant presence of computers in our lives has had the unfortunate effect of making us think of ourselves in computer terms. As far as I can tell, this phenomenon has its roots in the use of the mind as a metaphor for the functioning of a computer. Before the Information Age, it wasn’t entirely clear to the uninitiated what it was that computers did, and so the mind was used as a metaphor to describe it. A computer stored information in its “memory”, when it processed information it “thought”, and so on.
But somewhere along the way our culture became far more comfortable with digital software than it was with human psychology, and so it began using the computer as a metaphor for the human mind. What caused this I can’t say, but it seems to me that the hegemonic materialism of our age led to an abandonment of psychology in favor of mere human ethology. In addition to this, we became more used to spending time with our computers than with other people. Somewhere along the way, we began to understand computers better than we understand ourselves.
And so we started to envision the brain as a “biological computer”. But this is an entirely faulty and misguided way of viewing the brain. Even a cursory look at the differences between these two will reveal just how vastly different and incomparable they are. Any comparison of the brain with a computer is superficial and metaphorical at best, and certainly not profound.
The most obvious difference between the two is that a computer uses silicon-based transistors while the brain is composed of neurons. This is in no way a minor difference of material – transistors and neurons function in qualitatively different ways. Transistors, for instance, are static and do not change. Neurons, on the other hand, are plastic. They are living things capable of rearranging themselves, creating new connections, forming new pathways and even regenerating themselves. They operate using transmitter chemicals which change the interactions between neurons, a functionality that has no counterpart in transistors. A computer has no motivations because it has no dopamine reward system! These are just a few examples.
What this means in practice is that the kinds of processes that a transistor-based system is capable of are fundamentally different from those of a neuron-based system. A computer works with binary code, but a brain does not. So when a computer performs arithmetic operations, it does so by converting the numbers into binary and performing the operations on the binary representations. When we do arithmetic, we never use binary, which is awkward and unintuitive to us. Instead, we manipulate symbols (2 has a different meaning than 4), visualize (such as counting on our fingers) and verbalize (“One, two, three...”). It is a representational approach.
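The point about binary arithmetic can be made concrete with a toy sketch (in Python, purely for illustration – a simplified model of a binary adder, not a description of how any particular CPU is built). The machine never handles the decimal symbols we think in, only bit patterns manipulated through bitwise operations:

```python
# The decimal numbers we write are stored by the machine as bit patterns.
a, b = 2, 4
print(bin(a), bin(b))  # 0b10 0b100

# A toy model of binary addition: repeatedly combine the "sum without
# carries" (XOR) with the carries (AND, shifted left) until no carry remains.
def add_binary(x, y):
    while y != 0:
        carry = x & y    # positions where both bits are 1 produce a carry
        x = x ^ y        # sum of the bits, ignoring carries
        y = carry << 1   # carries move one binary place to the left
    return x

print(add_binary(2, 4))  # 6
```

The result is the same as our “two plus four”, but the route to it is a cascade of bitwise operations with no symbolic or intuitive content whatsoever – which is the contrast the paragraph above is drawing.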
A common fallacy among computer fetishists is the belief that computers would be like the human brain if they were just faster or more powerful. This is clearly not the case. The differences between the brain and a computer are qualitative and not quantitative. Despite the fact that a brain uses a fraction of the power that your gaming PC uses, it still has far superior graphics and its intelligence is far more versatile and adaptable. This is not a problem of processing power but of processing methodology. Your brain isn’t doing things faster than your computer – it does them differently.
This is why your computer can perform rote mathematical operations far better than you can but it cannot solve a mathematical problem analytically. It’s why a computer can run through logic gates at blinding speed but it can’t do lateral problem solving. This is also why it is completely wrong to think of your brain in computer terms.
You aren’t “processing information”. You are thinking. You aren’t “retrieving information”. You are remembering. You aren’t “simulating”. You are imagining. You don’t “download updates”. You learn. Your memories aren’t just data points in your head, they are experiences that you’ve been through. When you learn something, that’s not data, it’s knowledge. What’s the difference? The same as the difference between memorizing a paragraph in Latin and knowing what it means!
The Nature Of The Mind
So far, I’ve approached this topic from a materialist framework, but this is merely to show that even using such a framework there is no basis from which to even compare a brain to a computer. The truly important points arise first when we consider the mind, and to consider the mind we must move beyond mere materialism.
If I had to offer a definition of what the mind is, I would say that it is the internal field or space where experiences occur. I say this because this is exactly how we experience the mind subjectively. The mind is not reducible to intelligence, since even unintelligent people have minds. Nor is it reducible to thoughts or emotions or sensations. The mind is where these things occur. It is where thoughts and emotions arise, where our sense perceptions happen. This understanding has some significant repercussions.
The first is that the mind is not a material object. The mind isn’t made of anything material, nor are the contents of the mind made of anything material. When I see a red ball, it is not possible to describe my subjective experience of the redness and the roundness of the ball in material terms. Attempts have been made at this by referring to these attributes as “qualia”, but this merely kicks the can further down the road. There is no scientific evidence for qualia, nor can there be, since these are subjective phenomena. No one has any clue how or from what they arise, or what they’re made of. And yet there is nothing we can know with such certainty as the existence of qualia, precisely because we are experiencing them directly. When dealing with the mind, materialism fails us as an explanatory model.
Because of this, we can’t approach the mind as a thing that can be conveniently reverse engineered. This leads us to the second conclusion – the mind is not the sum total of its contents or processes. The mind contains a great deal of varied experiences, and it performs a staggering number of actions and processes, but none of these things taken either by themselves or in their totality can account for the mind itself. If this were not the case, then we should be able to “increase” the amount of mind we have by solving difficult problems while we feel strong emotions in a very noisy environment.
But this isn’t how we experience the mind. On the contrary, we find such an environment to be completely overwhelming. The mind is less like a sack that expands the more you put into it, and more like the surface of a lake that gets more turbulent the more you throw into it. The mind is the “place” where mental contents arise and where mental processes occur, but it is neither the contents nor the processes.
This leads to the third conclusion, namely that the mind cannot be created merely by adding in more processes. This is the fundamental fallacy of “artificial general intelligence”, or AGI. The idea is that you want to create a computer that can create its own concepts, ideas, mental models and so on to tackle any problem you throw at it. This would require a mind, since the mind is where such things arise and where they have their existence. But you aren’t going to create a mind by increasing the number of processes that your computer can do. Even if you were to make a complex AI containing dozens of processes, this would be more like a Swiss army knife than a proper mind.
Some would perhaps like to argue that the mind as I have described it would be akin to an operating system, but this is flawed reasoning. Operating systems manage hardware and software and allow the running of programs by providing the necessary background processes, but this is not what a mind does. The mind does not run your body – you cannot directly control your digestion or your heart rate with your mind. The body runs itself – it is a living thing and possesses within itself all the physiological processes needed to maintain itself. The mind does not control these processes, but what it does is allow them to be experienced.
This provision of experience is also what allows conscious choice over those processes which the mind does have control over, such as our thoughts. The mind can bring forth mental processes and content, including external, bodily, conscious and subconscious content. It can also dissipate the same back into itself. This movement of the mind is what makes up our whole conscious experience.
The Mind Is Not A Computer Program
The points made so far on the nature of the mind may seem very abstract, but they are necessary for establishing what the central difference really is between mind and software. Just like we previously established that the brain is different from a computer in that it is a dynamic, living thing while a computer is a static, dead thing, a similar distinction exists between mind and software.
The mind provides the experiential bridge between the external world and the conscious subject within us. It is the internal space where we experience the external world of the body and of the sense objects. It is likewise the bridge which allows this conscious subject to act on the external world through the body. When the mind is not active, such as during anesthesia, we are neither aware of the external world, nor of the internal world, nor can we consciously control the body.
A computer program, by comparison, has no experiential component inherent to it. Whereas the mind is directed inward towards the acting subject, the computer is directed outward towards its user. A computer has no internal world and so everything it does is an externalization.
Aside from the obvious implications of this as it relates to consciousness, this does have a functional implication as well. People build computers and write computer programs. The computer is entirely a product of the human mind trying to externalize some of its processes. In other words, a computer program does not function like human mental processes do, but as the human mind has conceptualized these mental processes. Computers do not have mental processes, but mimic mental processes in ways that are meaningful to human minds, allowing the mind to outsource some of its work.
We see, therefore, that the mind is as different from the software of a computer as a brain is from the hardware of a computer. They are entirely different entities that work in entirely different ways and for different reasons.
A very important consequence of this difference is that a computer can never do something that a human mind has not conceptualized with some level of rigor. Any process within the mind which is opaque to the mind itself can therefore never be simulated using a computer. This brings us to another central question.
The Problem Of Consciousness
Can a computer become conscious? The answer, as is evident from what has been said so far, is that if consciousness is opaque to the mind then it cannot be simulated using a computer. If something cannot be rigorously conceptualized, objectified and externalized then we cannot create a computer program to do it, since computer programs are precisely such externalizations of the human mind. So can consciousness be objectified? This seems to be the question.
To answer this we need to first understand what consciousness is. This is not quite as simple as it may seem. We all know what it’s like to be conscious, but is it clear to everyone that consciousness is precisely knowing what things are like? Or, to put it differently, consciousness is the very activity of being subjectively aware of things. If the mind is the field of experience, consciousness is the experiencer. In a world of objects, consciousness is the subject. It is the true I.
Consciousness can therefore never be simulated by a computer because it is impossible to objectivize the subject so that it still remains the subject. What do I mean by this? Suppose I asked you who you are. To answer this question, you would need to form a concept of yourself and then communicate that to me. Our conversation would then look something like this.
Who are you?
“I’m John.”
No, that’s your name. Who are you?
“Well, I’m an American man.”
No, that’s your sex and nationality. Who are you?
“Uh, well, I work as a doctor…”
That’s your profession. Who are you?
“I, uh…”
What you are doing here in this example of mine is trying to point to an object and say “that’s me”. But none of these objects are really you, are they? And by trying to answer my question in this way, you would inevitably reach a point where you no longer have any more objects to point at. You would run out of words, and reach a threshold within that cannot be pointed to. This is your subjectivity, your consciousness, the experiencer that experiences first-hand what it’s like to be all those things that you associate with yourself.
This subject is the limit of experience. You cannot go behind it to experience it as an object. Any attempt at doing so would miss the point, because it isn’t a thing to experience but the one that experiences. And because it is impossible to view this subject from outside experience, it is also impossible to conceptualize it rigorously enough for a computer to be able to simulate it.
A computer can therefore never be made conscious.
This may seem like a very abstract discussion, but it lies at the heart of one of the key problems of AI, namely the hard problem of consciousness. This problem essentially deals with the following question: how does matter give rise to subjective experience? And the answer to this question should not surprise anyone at this point: it doesn’t.
The Reality Of Consciousness And Mind
Just like the material world cannot be reduced to a lump of matter, the mind cannot be reduced to its contents or mental processes, and consciousness is irreducibility itself. What does it mean that a thing is irreducible? That its being transcends the parts that seemingly make it up, and that by affecting the parts you do not affect the thing, that the thing is independent and has its own reality.
What this implies is that consciousness and mind, qualia and the inner world, experience and experiencer, etc. are not phenomena that emerge from matter, but are rather inherent to reality itself, to existence as such. This further implies that the ordered structure of reality – the intelligence and intelligibility of reality – is itself something inherent to reality.
What this means is that it is not possible to create an artificial intelligence or an artificial mind in the same way that it isn’t possible to create artificial natural laws, or artificial geometry, or artificial reality. These things cannot be reverse engineered from material objects, and can therefore not be constructed from material objects either. There is an order to the universe that cannot be rewritten, and likewise there is an order to the universe that governs in what circumstances mind appears to consciousness and the external world of matter becomes the internal world of experience.
A computer cannot create reality, but can simulate physical processes. It cannot create mind, but can mimic mental processes. And it certainly cannot mimic consciousness, because consciousness just is. These things are not mere byproducts of matter, but essential elements of reality itself. A computer, by comparison, can only offer useful mimicries.
It is one of the peculiarities of existence that the living brain is the physical structure through which the Universal Mind expresses itself as individual minds, and that the individual mind is the subtle structure through which the Universal Subject expresses itself as individual persons. Individual mental reality and personhood only occurs within living brains, and if we want to create minds and persons we must create living brains. A computer, therefore, can never have a mind nor be a person.
Rejecting The Inner Life
So we can see clearly why sentient computers are not possible, and why the things that AI enthusiasts search for in computers can never be found there. But the implications both of what I’ve said so far and of what it is that the computer fetishists truly believe about the world do not end there. The problem of the new computer religion is precisely that, a problem of religion. It touches the core of our beliefs about the world and of ourselves.
If you, dear readers, were to examine the conclusions I have drawn for you so far you would find something remarkable in them. I am denying that what I am is merely a byproduct of material processes. I am vehemently denying that I am a “biological computer” that just so happens to have some randomly generated software installed on it. I am not a thing, I am a person, and this quality of mine is so fundamental to all that I am that I can never be separated from it.
Furthermore, this whole internal world of experiences that I am aware of is not merely noise from chemical processes in my brain. It is the very opposite of noise – it is a direct, moment-to-moment awareness of order and meaning. That the world is intelligible is evidence of its intelligence; that it can be experienced is evidence of its sentience. The whole richness of my internal world is a testament to the fullness of existence itself, for I am existence experiencing itself.
Reality is not just a collection of things. It isn’t even just a thing. Reality is not something, but someone. And you and me both, and all living beings, and indeed the whole ensemble of the cosmos itself, we all participate in various ways within the inner life of that Someone. That is what it means to exist! If you take anything with you from this screed of mine, take this: that we are so much more than we think we are. All of this, all of life, is so much more, and we would be mad to try to reduce it to something so much smaller.
The brain and the living body are not just machines made out of meat, but the very way in which this Universal Subject experiences what it is like to live. They are not arbitrary constructs, but are themselves the product of order and meaning. That it is the brain and the living body that manifest individual mind and personhood is therefore no coincidence. As such, merely assembling pieces of technology to superficially resemble the brain is not enough to give rise to the richness of living, breathing experience.
What the computer religion amounts to is a degradation of the human being in favor of the computer. This is most clearly expressed in the perverse glee that computer fetishists display when they talk about AI “replacing humans”. But this glorification of the computer requires, by necessity, that the whole inner life of man be rejected. It requires that we deny any meaningful existence to the human experience, to the inner life of conscious beings, to consciousness itself. These things, as I have demonstrated, can never be simulated by a computer, and so if we take the computer to be the be-all and end-all, we must deny their existence.
The Desire For Subhumanity
So why would someone feel compelled to deny their inner life? I don’t think it’s just cold, overly rationalistic skepticism. The vehemence with which the computer religion denies the inner life is born from a sense of certainty and not of skepticism. I think the truth is rather more likely to be found in this perverse glee which I mentioned above. There is a desire today to degrade, on a conceptual level, the human being and to see it become something much less than it is capable of being. It is a hatred of anything that makes man more dignified, worthy, beautiful or excellent. Indeed, it is often a hatred of the very idea that these qualities even can pertain to us, or a downright denial that they exist at all.
To this new computer worldview, computers are rational while people are irrational. Computers are efficient while people are inefficient. Computers can therefore “do things better” than humans because they are better according to these criteria. Computers will replace humans because humans are messy and difficult and computers aren’t. But these are the denuded values of a factory assembly line, not the values that are conducive to life at the apex of human experience.
Computers aren’t rational, since they cannot reason. What is meant is that they are predictable and controllable. The irrational side of man – his emotions, desires and instincts – is not dross that lies in the way of some algorithmic machine “rationality”, but rather a distinct way of knowing. There are aspects of life that can only be known and understood through this irrational side. It is true that this side of man makes him unpredictable and difficult to control, but that is because man has agency. In contrast to a computer, man acts. He does so not from some lofty position of objectivity but from within the particularities of his life. This is not a fault of man, but a requirement for him to become something unique and distinct, for him to become what he is.
When we are asked to pretend that these mindless probability calculators are “superhuman intelligences”, what does that say about our self-perception? It says that the highest thing we can aspire to be is a mindless worker drone that thoughtlessly repeats whatever it has been programmed to repeat. What the computer religion fetishizes is the golem, a mere automaton. There is a desire here to render people down into the subhuman. That this would appeal to corpos and technocrats is hardly surprising, but though such people may be the high priests of the computer religion, they are not the laity. The laity are those who rapturously believe that they will be replaced by these computers.
So there is more than a desire to turn others into subhumans for your own benefit. There is also a desire to become subhuman yourself. It is here we find the reason for the hatred I mentioned earlier. It is difficult to make something of oneself, and our time does not offer much encouragement on the path to actualizing oneself as a being of dignity and excellence. Our culture rejects as heinously evil the very notion of truth, goodness and beauty. Higher values are seen as offensive and reactionary, the politically suspect leavings of an age that oppressed “marginalized communities” and opposed “equality”. Perverts and midwits engage in “deconstruction” of anything that would make life meaningful and worthwhile in an effort to justify their own perversity and narcissism. And to top it off, it is becoming increasingly difficult to create a life for yourself that isn’t just empty consumption and work.
It is no wonder, then, that people would grow cynical and dismissive of their own nature and inner life. It is too painful to believe in, and too difficult to try. Why bother facing the emptiness you feel inside when you’re just a poor imitation of a real computer? Why struggle for excellence when you can just give up and let a robot replace you? You are just a data processor anyway, it’s not like your life matters. It is better and more believable to gift the world to a dead box of wires than to think there was any point in you being here at all.
And so AI promises a release of sorts. The intelligent computer will be the final proof that you never mattered, and that you have no culpability for throwing yourself on the dust pile of history.
Concluding Remarks
But why be so dramatic? Isn’t AI just another tool? It is, but it may be the first tool that has man convinced that neither the toolmaker nor the craftsman nor even the craft itself matters. With the coming of ChatGPT and AI art generators, the adherents of the computer religion have been quick to proclaim that the end of man is nigh. Art and writing are now obsolete, they say, but I find it difficult to believe. There is no shortage of uninspired, mediocre writing – will automating the process really be that revolutionary? And there is certainly no shortage of bad art. Do we really need more of it?
And so this is what I think is the real danger of AI. Not that we will be hunted to extinction by Skynet, but that we will further degrade ourselves into the role of castrated consumer-serfs because of a faulty understanding of what we really are. As I have shown, a computer can never be anything more than a mimicry of human mental processes. Sentience, true intelligence, agency, virtue, excellence, the richness of living and knowing and experiencing – these are ours alone, and they are not trivial. The fruits that we can cultivate out of our own being will never be outshone by computers, which are themselves one such fruit.
I can understand the excitement that this new technology is creating. It is one of the more interesting developments in our otherwise stagnant culture, and it evokes the wonder of science fiction. I don’t hold it against AI enthusiasts that they are enthusiastic, nor do I think it is a wasted effort to learn how to use this technology as a tool. Creation and use of AI can itself be a form of excellence. But we must not sell ourselves short simply because we have grown bored of our time and of ourselves.
Materialism is an abortive worldview. If life feels empty to you, it is because you are dismissing the mysteries of life rather than seriously considering them. A computer-based pseudoreligiosity will not fulfill you, and degrading yourself before the altar of the computer will not make you feel any less alienated. Buying into the fantasies of the computer religion will only serve to make you susceptible to manipulation by the technocrat priesthood that peddles these fantasies.
But perhaps that is a topic for another essay…
"The differences between the brain and a computer are qualitative and not quantitative. " Came to this conclusion myself not too long ago as I wrote up an essay using Grammarly (guessing you have heard of it). My writing style is unique to me, but according to Grammarly, it needs to be written in a different style. This takes the humanity and uniqueness out of writing, and can only lead to disadvantages in the long run. The computer has many advantages, but one must not depend on it for everything.
> Computers aren’t rational, since they cannot reason. What is meant is that they are predictable and controllable.
> Not that we will be hunted to extinction by Skynet, but that we will further degrade ourselves into the role of castrated consumer-serfs because of a faulty understanding of what we really are.
This is an important point, and I think it's worth discussing because I don't think people are well acquainted with 'control' as the source of this self-imposed degradation.
Right now, the dominant paradigm for modern humans is control (and its child, management). This is the inverse of the notion of genuine relationship that gives rise to 'ownership'. Control can only ever approximate ownership, and the more it tries to replace ownership, the more it perverts itself.
I mention ownership as something beyond agency, because ownership involves using agency to take another's interest as one's own. A parent takes ownership of a child. A gardener takes ownership of his plants. This is what relationship is – positive ownership.
From my perception, the more humans try to *use* technology (as a slave), the more it returns the favor by making us into things to be used and enslaved. We're not willing to *relate* to technology, especially computers, and in turn let it expand this capacity of Relation.
A trained samurai knows that his blade is an extension of his being, and the blade responds by providing the necessary extension to Being. This is one of the many truths offered by 'Om Tat Tvam Asi', as you well know.