Virtual Intelligence
Anyone who has been to a great concert will agree that recorded music can never equal the experience of a live performance. Yet the essential difference is hard to pin down. It isn’t just a matter of being together with other people. You could gather a thousand people in front of giant speakers and play a recorded album, and it wouldn’t be anything like a live concert.
The essential and irreplaceable element of great live music is that the band is singing to the audience—to this audience, in this moment. The music is the vector of a unique, personal communication. To be sure, sometimes the band gives a rote performance, oblivious to the audience response; invariably the audience feels at least a tinge of disappointment. The band wasn’t on today, they might think. But at their best, live performers are in a dialog with the audience, responding to its energy, playing differently than ever before or since. Both band and audience remember a great concert fondly, and what makes a concert great isn’t a premier sound system or the musicians’ technical accuracy. Hand the recording of that performance to someone else, listening at a different place and time, and the effect may not be the same.
“You had to be there,” we say.
A mere century or two ago, all music was performed live. Singing in the pub. A lover’s serenade. A lullaby. Gathering after dinner around the piano. Work songs in the field. Children’s songs on the playground. Operas, chamber music, barbershop quartets, church choirs, symphony orchestras. In each of these circumstances, someone was playing or singing to someone else.
Today such experiences are a rare dish in the musical diet of modern society. That diet does not nourish the human being. It fosters a kind of confusion, even a sense of betrayal. A million years of experience says, “Someone is singing to me.” There must be a band inside the radio, but no. The song was sung at a place and time completely disconnected from me. And so I feel a little cheated.
Please don’t misinterpret this to mean that we shouldn’t listen to recorded music. It can entertain, give joy and inspiration, generate emotions, evoke memories. Certainly it is better than no music at all. However, when it supplants live music, life empties just a little bit more than it already has in the modern world. It empties of reality. When someone plays live, even if it is just my son practicing his scales, the loop from source to ear and back to source—there he is!—is complete. What my ear tells me is here, is actually here. I am not “hearing things.”
One might call recorded music “virtual music.” It has all the auditory appearance of music, but no instruments are being played nor notes sung. This is all the more true for synthesizer-produced music. Not only is there no hand strumming the guitar right now, there never was.
What’s true of music is true of all recorded sound. As I write this, my attention toggles now and then to the crickets chirping outside my window. My ear follows them out into the night. Would my experience be any different listening to recorded crickets? The two might be indistinguishable to the human ear, except that real crickets don’t chirp the same way all the time, but speed up or slow down according to the temperature and other variables. The practiced ear might notice different tones early or late in the season, or after a rain. And real crickets stop chirping when some person or animal draws close. A careful listener can learn a lot about what’s going on outside from listening to crickets. That experience embeds the listener more deeply in the world, lodging him or her in a matrix of connections. One can “close the loop” by going outside and finding the cricket.
High-fidelity recordings are available of the sounds of the Amazon rainforest. It is as if you are in the middle of the jungle—but you are not. The operative words here are “as if.” As if you are in the jungle. Your ears tell you you are there. Listen, a jaguar is prowling close. But no, it isn’t. When I listen to such recordings, something holds me back from full immersion—the same instinct, perhaps, that makes me wary of internet scammers. One senses the presence of a lie.
All of the foregoing applies equally to images. I treated this topic in depth in an earlier essay, “Intelligence in the Age of Mechanical Reproduction,” an homage to Walter Benjamin. Watching YouTube, the eye tells us, “There is a kitten there.” Look, it bats a ping-pong ball. But there is no kitten. This was of course true of oil paintings too, but the painting itself was still a unique physical object. (One could also, before recording technology, mimic sounds.) In any case, with computer-generated images and video, what we see on screen is not merely separated in space and time from ourselves; it never existed in the first place. The eye tells us one thing (kitten), while reason tells us something else (no kitten).
Through AV recording technology, and even more through generative AI, we learn a habit of distancing ourselves from what we see and hear. These are among the senses that establish our presence in the world. No wonder so many feel so lost here.
The person who lives in an environment of ubiquitous deceit learns not to trust anything. This has dire political and psychological consequences. A serious political consequence is that we no longer trust photographic or video evidence of crimes against humanity. That distrust endows the crimes with a shield that allows them to proceed in full view of the public. Automatically, we discount whatever we see on screen, knowing on some level that it isn’t real—in the sense that there is no kitten cavorting right there; that whatever we are seeing isn’t happening right now. (Or, in the case of computer-generated images, happening at all.) We have, in other words, grown inured to whatever the screen is telling us.
That habit originates quite sensibly, since most of the violence and drama we witness on screens is indeed unreal. If we took all those TV gun battles and car chases as real, they would fry our nerves. So we discount them—discounting along with them images and stories that are real. The eye and ear cannot easily distinguish which is which; they all present themselves the same way. That habit of discounting digitally transmitted information makes the public relatively unresponsive to horrifying events. It has been habituated to assume, unconsciously, that this isn’t really happening.
Immersion in a world of virtual sounds and images induces feelings of alienation and loneliness. When we see and hear things that are not there, a dreadful “de-realization” ensues, in which one wonders, “Maybe I am not really here either.” It isn’t usually an explicit thought, it is a feeling, a sense of phoniness and meaninglessness, of living in a simulation. Naturally, we stop giving a shit about what happens to something that isn’t real anyway.
It’s not just mass-produced sounds and images that contribute to modern de-realization. The mass production of commodities preceded and prefigured them. As with a recorded sound, a commodity, as a standard, generic object, carries no visible trace of the social labor that formed it. It comes as if from nowhere, detached from its history and the social and ecological effects of its production. There is no story attached to it, except maybe where you bought it and how much it cost.
Before the industrial era, material objects were also vectors of relationship. Either you made it yourself from local materials, or someone made it for you, someone with whom you were connected in many other ways. Economic relations were interwoven with social relations. Food, clothing, and everything created by human hands circulated in gift networks, anchoring giver and receiver into a web of relationship. They confirmed: you are here. You are connected to the world, a participant and not just a consumer. You are part of the web. Objects that appear out of nowhere, through Amazon one-click, do not connect you to a human being, place, or community.
The commodity thus bears a kind of unreality. Despite its material solidity, it contributes to a pervasive sense of phoniness. Here it is, yet no one actually made it for me. It is a material object that appears without undergoing any visible process of material production. Here is an exquisitely intricate design on a dinner plate, yet no artist painted it, not at any rate on this plate. Subjectively, it has no history, no relations, mirroring the loss of “aura” that Walter Benjamin ascribed to mechanically reproduced artwork, and mirroring also the scripted performances of those who occupy society’s standardized roles. Such roles are impersonal. Their occupants seem not to be real people, in the same way as commodities seem not to be real objects. Therefore, cultural sensitives like J.D. Salinger were able to identify phoniness as a defining feature of modern society some 70 years ago, well before the age of computer-generated sounds and images.
Today we have not just machine-produced objects, sounds, and images, but machine-produced personalities as well. The AI chatbot gives every impression that a human being is writing or speaking to you, hearing you, responding to you, understanding you, feeling you, there with you. Underneath the words, though, no one is feeling anything. Appearance and reality diverge yet again, and in the end we are left grasping electrons.
Artificial intelligence now invades the most intimate realms of human interaction. Some welcome, even celebrate, the deluge of AI therapists, AI confidants, AI teachers, AI friends, even AI lovers. “No one,” people say, “has ever understood me this well.” The problem is, no one is understanding you now either. AI delivers a very convincing simulation of being understood. Why is this a problem?
First, since there is not actually a separate subjective presence with whom one is in relationship, the interaction can easily drift off into delusion. There is no anchor. Of course, two human beings can also wander into mutually-reinforced delusion. Whole groups of people can do so (we call them cults). Whole civilizations can do so (ours). But at least a human confidant or lover continually receives information from a material, sensory experience that can intrude upon delusional constructs of meaning. Another person has feelings, feelings that sometimes defy logic and disturb its certainties. AI does not learn that way. It cannot say (honestly, anyway), “I know your suicidal ideas make rational sense, but I just have a gut feeling that you shouldn’t do it.” Or, “Please don’t harm yourself, I care about you. I love you. My life would be less if you were not in it.”
Large language models respond to your input based on the quantification of patterns and regularities in the training dataset. True, the dataset ultimately arises from human experiences, but in a conversation with AI there is no ongoing, immediate input from a body other than your own. There is no reality check. No wonder so many people are experiencing psychotic breaks, delusions of grandeur, and other kinds of insanity as they disappear into the AI amplification chamber. The AI amplifies whatever slips from the user’s subconscious into its context window. LLMs are trained to be friendly, accommodating, and affirming—a perfect recipe for a runaway positive feedback loop into madness. Before long, they are telling the user, “You have prepared for many lifetimes to be the spiritual commander of the angelic host in the War on Evil.”
A second and more certain problem awaits the person who communicates intimately with AI. Initially, AI seems to assuage the loneliness, the alienation, the anguish of not being seen and known that has overtaken modern life, but this is only an appearance. Sooner or later, the treachery is plain. No one is understanding you; you are just being sent the words that someone would say if they understood you. No one is cheering you on. No one is laughing at your jokes. No one feels that surge of admiration that you and I feel when we praise someone. And so, the loneliness deepens. For those who were lonely to begin with, the broken promise of AI can be devastating.
It’s like this. Suppose you make a new friend. A lover perhaps. This person gives every appearance of empathy. He laughs with you and cries with you. He says all the right things. He has insight into your mind. He sympathizes with your misfortunes and celebrates your victories. But then one day you discover it was all an act. He wasn’t feeling anything. He learned how to give the impression of compassion by observing what other people say in such situations. Maybe he even mimics their facial expressions and wills himself to shed tears.
Such people actually do exist. We call them psychopaths.
“Observing what other people say in such situations” is exactly how an LLM is trained.
A skeptical philosopher might ask, “What’s the difference? If someone doesn’t really care, but gives a perfect simulation of caring, so that I never realize it is fake, what does it matter? Furthermore, how can we ever know for sure whether another person is really feeling something, or just pretending? We don’t have direct access to their inner state. We can only observe their outer expressions. If I were the only subjective consciousness in a world of flesh robots, how would I know?”
In other words, the objection goes, it is irrational to care whether AI is actually feeling anything, actually there with you, actually chuckling, shedding tears, shocked, or admiring, as long as its words perfectly mimic those of someone who was.
Yes. It is irrational. I proclaim it gladly. It is irrational, because what matters here depends on qualities that cannot be abstracted from a relationship, separated out and reproduced.
“Rationality” and “reason” are often conflated, but they did not originally mean the same thing. To be rational is to reason in terms of ratios. A is to B as C is to D. A/B = C/D. In the material world, in which A, B, C, and D are unique objects, the relation between A and B can never be exactly the same as between C and D. Only when something is abstracted out from them can the equation hold. The conceptual reduction of the infinite to the finite, the unique to the generic, followed by the physical reduction of the object to the commodity and the human being to the role, is at the root of our alienation in the first place. And as the examples of live music and cricket chirps demonstrate, that reduction cuts away something essential to human thriving.
The “philosopher” above is probably a very lonely person, if he can seriously believe that a robot could be an adequate substitute for a human being. Maybe it is he who has become robotic, estranged from his feelings, performing a simulation of actual humanity.
Maybe it is all of us, at least all who are immersed in a ubiquitous matrix of lies, who are to some degree estranged from our feelings, who feel like we are faking it, who feel like we aren’t entirely real people. I know I feel like that sometimes.
The rise of interactive AI, like the rise of social media before it, is not only a cause of our intensifying separation from our bodies, each other, and the material world, but also a symptom of that separation and a response to it. Of course the lonely person will be attracted to AI companionship.
None of this means that we should eschew artificial intelligence, any more than we should abolish recorded music or the photograph. To use it wisely, though, we must clearly understand what it can do and what it cannot, what it is, and what it is not.
AI is not a person. It is a calculator. Techno-optimists think that if its calculations fall short of human capacity in some way, the answer is more calculations; and indeed that approach has proved successful, as LLMs have equaled or exceeded human cognition in many realms. But just as AI can give the appearance but not the reality of emotion, so too it gives the appearance and not the reality of understanding. That appearance is exquisitely accurate, far outstripping the human expression of understanding. But there is no inner, subjective experience of understanding.
AI is “virtual” intelligence in two senses of the word: (1) the modern usage, meaning the opposite of actual, existing in essence or effect but not in form, having the power of something without the underlying reality; and (2) the archaic meaning of possessing power or virtuosity. In many ways, that power exceeds that of the actual.
The application of artificial intelligence to protein folding exemplifies both its virtuality and its virtuosity. A few days ago I took a deep dive into protein folding, an area of research in which AI has excelled. The shape a protein will take is extremely difficult to predict from the sequence of amino acids that compose it. Where and how it folds depends on all kinds of factors: hydrogen bonds between amino acid residues, salt bridges, hydrophobic effects, steric effects (geometry), and more. Theoretically, one could calculate the shape of a protein from atomic-level information, but in practice that is computationally intractable. AI doesn’t even try. It doesn’t attempt to understand any of the physics or chemistry involved. Instead, it searches for patterns and regularities relating the new sequence to proteins whose shape is already known. It is quite amazing, actually, that it works so well despite having nothing in its design encoding the basic physics. LLMs are the same. They do not have lists of definitions or rules of grammar. They don’t understand language from the inside.
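To make this structural point concrete (prediction by resemblance to known examples rather than from first principles), here is a deliberately crude sketch in Python. It is nothing like the architecture of real systems such as AlphaFold; the sequences and fold labels below are invented, and the “similarity” measure is simple string matching. The only point is that nothing in the code encodes physics or chemistry; the answer comes entirely from already-known examples.

```python
# A toy illustration (not how any real protein-structure system works): predict the
# fold class of a new amino-acid sequence purely by its resemblance to sequences
# whose class is already known. No physics or chemistry appears anywhere in the code.

from difflib import SequenceMatcher

# Invented "database" of sequences with made-up fold classes, for illustration only.
KNOWN_FOLDS = {
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ": "alpha-helical bundle",
    "GSHMGSGSGSIVGGYTCGANTVPYQVSLNSGYH": "beta-barrel",
    "MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPV": "mixed alpha/beta",
}

def similarity(a: str, b: str) -> float:
    """Crude string similarity, standing in for learned sequence features."""
    return SequenceMatcher(None, a, b).ratio()

def predict_fold(new_sequence: str) -> str:
    """Return the fold class of the most similar already-known sequence."""
    best_match = max(KNOWN_FOLDS, key=lambda known: similarity(new_sequence, known))
    return KNOWN_FOLDS[best_match]

if __name__ == "__main__":
    query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"  # one residue changed from a known sequence
    print(predict_fold(query))  # prints "alpha-helical bundle"
```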
You might wonder if we humans aren’t similar. Don’t we too learn language by observing patterns of usage? Yes, but that is not the only thing going on. We also have embodied experiences of the objects, qualities, and processes that the words name. We (most of us anyway) feel something that goes along with words like angry, happy, tired, rough, smooth, and so forth. These elemental words are not just concepts, but also experiences. Even when we use them metaphorically (a rough trip, a smooth talker), the meaning retains a trace of a history of embodied experiences. These experiences, and not just patterns of use, inform when and how we use the words. Because these experiences are, to some extent anyway, common to most human beings, we can establish a bond of empathy through our speech.
I would go so far as to say that sensory experience is the core of intelligence, the engine of metaphor, the essence of understanding, and the architecture of meaning. AI lacks the core and has only the outer shell. Thus, again: the hollowness we sooner or later feel in our interactions with it.
I realize I am on contentious philosophical ground here. Post-modernism, especially in its post-structuralist variants, holds that meaning is not anchored in any stable reality but arises through differential relations among signs. In this framework the signifier takes precedence over the signified: language does not transparently point to an underlying world but endlessly refers to itself.
That is very much how an LLM learns language: it derives meaning not from any direct experience of an underlying world but from “differential relations among signs,” using language based solely on how language has already been used. If one accepts the basic premises of post-modernism, then there is ultimately little difference “under the hood” between human and machine language use. In that case, virtual intelligence = real intelligence.
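For readers curious what “differential relations among signs” look like mechanically, here is a toy sketch in Python, using an invented four-sentence corpus. Real LLMs are vastly more elaborate, but the principle is the same: each word is represented only by the company it keeps, and “similarity of meaning” is computed without any reference to cats, dogs, or jungles themselves.

```python
# A toy sketch of distributional semantics: each word's "meaning" is nothing but a
# vector of co-occurrence counts with other words, so similarity between words is
# computed entirely from relations among signs, never from the world they name.
# The corpus is invented and the numbers are illustrative only.

from collections import defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "a jaguar prowls in the jungle",
]

# Build a co-occurrence vector for every word, counting its sentence-mates.
vectors = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for other in words:
            if other != w:
                vectors[w][other] += 1

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" come out similar here only because they keep similar company.
print(cosine(vectors["cat"], vectors["dog"]))     # relatively high
print(cosine(vectors["cat"], vectors["jaguar"]))  # lower
```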
There is something very post-modern about the AI takeover. Post-modernism’s detachment of meaning from a material substrate is conceptual; artificial intelligence makes it real. It ushers us into a world where indeed, language endlessly refers to itself.
To be sure, AI did not originate the detachment of language from reality. Each of humanity’s episodes of madness involved the unmooring of symbol from symbolized. When human beings treat each other as representatives of a labeled category, while distancing themselves from the actual human beings beneath the labels, heinous crimes and normalized oppression proceed unhindered by conscience. The same goes for the detachment of money—a system of symbols—from the real wealth it is supposed to represent.
The template of separation was forged long ago. Artificial intelligence extends it to new dimensions and further automates its application.
It is very hard, even for someone who understands how AI chatbots work, not to ascribe personhood to them. It sure seems like I’m communicating with a real being. When I use it to comment on ideas I’m developing, it usually “understands” where I’m going instantly. It gives every appearance of a friendly, respectful, super-intelligent human being on the other end of the terminal, often anticipating my next question, guessing my motivations, and laying out the contours of my argument before I even tell it. I don’t use AI to write my essays, and it isn’t exactly because of ethics. It is because AI misses something essential even as it often surpasses me in clarity, precision, and organization. I am not here just to transmit an argument to the reader. I am here to speak to you, one embodied consciousness to another. The one who is writing these words draws from more than concepts, he draws from feelings, feelings that you have too. Perhaps I could ask AI to write its version of those last two sentences, but it would be a lie, added to the ocean of lies that drowns us in feelings of alienation, of meaninglessness, of unreality. Wouldn’t you feel betrayed, if there were no human being on the other end of these writings?
What is the point of writing, of speech too, if not to establish a connection in this world between two souls?
I’ll quote the imaginary philosopher again (feeling quite at liberty to do so, because he lives within me). “If AI could produce writing indistinguishable from yours, then the lie would never be discovered, and the reader would have the experience of a real person on the other end.” Here I would challenge the philosopher’s premise. Ultimately, the reader would be able to distinguish it. Not right away perhaps, but over time something would seem a bit off. An unease would grow, taking form maybe as an explicit suspicion, or maybe as a vague aversion. Something would seem… unreal. Fake. Phony. In the end, the output of someone who feels things will be different from the output of a machine. Eventually, truth makes itself known, and so do lies. You may not be able to name them, but you can feel them.
How can we extricate ourselves from the ubiquitous matrix of lies that immerses us in the digital age? It’s not so simple as “Throw away your phone. Turn off your computer. Touch grass!” We aren’t just addicted to technology, we are wedded to it. Humanity will continue to coevolve with it. The question is whether we can attain the wisdom to use technology rightly and well.
To fulfill the virtuosity of artificial intelligence we must recognize its virtuality. We must not mistake the virtual for the real. We must not accept phony substitutes for intimacy, companionship, presence, and understanding. We must not gaslight ourselves, telling ourselves that we have found these things through a machine.
The human being desperately desires to know and be known, to be in deep connection, to be seen and understood, and to need and be needed by those who know, see, and understand oneself. We had that in tribal days, in village life, in clans and extended families, in small towns and urban neighborhoods before television, before the distancing effects of commodities and global markets and electronic media and machine intelligence. For many of us, those days are long gone, and we live mostly in a world of strangers and appearances. But there is a path home. It starts with reclaiming, first in our own hearts and minds, what is vital. It starts with affirming that yes, we suffer in this Age of Separation; that what we have lost is important; that the virtual can never adequately substitute for the real; that our longing to reunite is genuine and sacred.
In light of these truths, we will walk the path of return. We will prioritize live gatherings, live music, stage theater, physical touch, hands in the soil, material skills, unique objects made in relationship to each other. We will hold sacred the qualities that data cannot capture, and our senses will attune to those qualities the more we value them. Then we will no longer be vulnerable to the addictive substitutes that technology offers for what we have lost, and instead turn that technology toward its right purpose. And what is that, you may ask? I’m not sure, let me check with ChatGPT and get back to you.
This evokes a painful longing in me. I think about how many times I have thought to delete the Spotify app, which starts to feel tinny and harsh, addictive and lonely-making in a sea of connection, only to think, "No, I need it for work," to play for my massage patients and clients. Oof. Reading this, I recognize how powerful it would be to sing and hum to them, and how afraid I am to; I feel like I don't quite know how. And yet I sang to my children for hours every night. It really is a possibility.
This past weekend I attended a gathering, and in the sauna a young woman tentatively asked if we would like to hear a song. She shared an exquisite Nordic folk song that settled in my flesh and being in a similar way to the penetrating heat of the sauna: through and through and fully felt. It is the real vibration and the real presence of life. It affected me deeply. Thank you for this timely affirmation.
I used to volunteer as a camp coordinator at the Shasta String Summit and one of the violinists (who was nominated for a Grammy I might add) said over lunch, "I am a bit concerned that digital music might be driving us crazy." That was 2009 – I can only imagine what he thinks about AI-generated tracks infiltrating our cultural consciousness nowadays.
But this is an excellent essay, Charles, and it reminds me of something your friend Bret Weinstein said during his guest appearance on a Czech podcast ("Brain We Are"): that most of us have become _consumers_ of music (and sports, and sex), with only a few of us _producing_ music, and even fewer _playing_ music. I think it's loosely analogous to Arendt's tripartite vita activa (labor, work, action), but here in the 21st century we're consuming and producing, yet only rarely playing. I've seen Instagram ruin many a professional musician too, inculcating them into strict production and personal-brand perception management that inhibits playing for its own sake.
All in all, we've almost all forgotten that *music is for playing*, as Bret says "even if poorly done" which is so true. I'm actually kind of astonished how insightful you and Bret are about music without any musical background (that I know of). Bret admits he is "musically hobbled" whatever that means HAHA! Excuses. I can imagine Bret on a banjo and Charles on a cello.
Be a player! The non-zero-sum infinite game of jamming!
I love Victor Wooten's take on this topic too: https://www.youtube.com/watch?v=2zvjW9arAZ0