Let’s start with the obvious. When human beings outsource any physical or cognitive function to other people or to machines, that function may atrophy within themselves. New functions may open up, but at a price. Is the price worth paying? Maybe it is; but let us be aware of the bargain we are entering.
The invention of cooking led to a decrease in the size and strength of the human jaw muscles. Clothing and indoor heating led to a reduction in physical hardihood. In pre-literate cultures, feats of memory that would astound us today were commonplace. People could hear a long story or epic poem once and repeat it verbatim, an ability that became rare when we outsourced memory to the written word.
You may have noticed that when you use GPS for every trip, not only do you not learn how to navigate your area, but you lose some of the general ability to learn any area. The sense of direction, the sense of place, and the ability to remember a sequence of landmarks all atrophy.
However, matters are not so simple as a progressive degradation of intelligence as we outsource it to technology. As the example of the written word suggests, the transfer of cognitive functions to external media can unlock new realms of intellectual development and expression, as well as new forms of social organization and new psychologies.
Artificial intelligence is the culmination of the revolution in information technology that began in the 15th century with the printing press, followed in succeeding centuries by lithography, photography, phonography, and film, each of which extended the mass production of information to a new realm. A review of the cognitive and social effects of those previous technologies will help to illuminate what is crashing upon us in the age of artificial intelligence.
The ubiquitous image
Marshall McLuhan famously associated the printing press and its consequent mass literacy with a transition to an analytic, objective, and abstract orientation toward information. In oral cultures, the transmission of information always happens in the context of a relationship. What is spoken is inseparable from who is speaking. A speaker can reproduce another’s words, but not his voice, nor the time and place of his speaking. Nor can there be any certainty that the listener is reporting another’s words correctly (which is why, in some pre-literate societies, seven witnesses were required to attest to an oral contract). A book, on the other hand, remains the same through time and space, lending to its contents the appearance of objectivity, abstracting knowledge from the knower, and making the experience of understanding a private, rather than a relational or communal, affair.
Contemporaneous with the perfect reproduction of words via the printing press was the perfection of the reproduction of images thanks to innovations in art; namely, the use of perspective and shading to create a “realistic” impression of depth on a flat surface. This too contributed to the emergence of objectivity, analysis, and the separate individual as fundamental principles of modern thought. A perspective painting is “realistic” only if one assumes the primacy of an individual observer. From the perspective of God, who sees all things from every angle, such a painting is not accurate. Equivalently, it is not accurate if being is relational rather than objective.
Of course, the verisimilitude of the painting is only as good as the skill and objectivity of the painter. The advent of photography, followed by film, seemed to remove any such imperfections, so that subjectivity remained only in the choice of the camera shot. While photographs could, with difficulty, be staged or faked, most people trusted them to be accurate depictions of reality. “The camera does not lie.”
It is of more than just ironic significance that the very technologies—printing, photography, audio, and film—that promised a faithful record of reality cleansed of subjectivity have evolved to be instruments of the precise opposite. A book (that is, its electronic equivalent) no longer necessarily “remains the same through time and space,” but can be altered at the whim of whoever controls the digital technology. We are back in the days of the spoken word and the oil painting, which were used to both record and generate information. Because a skilled artist could do both, no one trusted a painting as prima facie evidence of something real, any more than they would trust a spoken account. Now the same is true of all of the media that generative AI has mastered. We look at a photograph or video and, before we take it to represent reality, inquire about its source. Does it come from someone I trust? What are their goals? What narrative does it support?
To be sure, these questions served us well even before generative AI and deepfakes. Photos could be staged, faked or, more routinely, curated. What is the photographer choosing to show us? What are her deliberate motivations, and what are the unconscious biases that guide her discernment of what is photo-worthy? The great photographers, as the great painters, see with a different eye, and show us what we would ordinarily not notice, while propagandists show us what those in power want us to see.
The convergence of recording technology with generative technology requires again that we know and trust the source of words, images, etc. Truth cannot exist outside of relationship. We cannot trust what we hear and see only through electronic devices, or we will end up going crazy. What is real and what is not? To know, we have to rely on information beyond the digital, beyond what can be mechanically produced and reproduced. We need to connect to something authentic.
***************************************************************************************************
Just as water, gas, and electricity are brought into our houses from far off to satisfy our needs in response to a minimal effort, so we shall be supplied with visual or auditory images, which will appear and disappear at a simple movement of the hand, hardly more than a sign.
— Paul Valery, 1928
*****************************************************************************************************
The painting is analogous to the written word; the photograph is analogous to the printing press. In his famous 1935 essay, “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin argues that mechanically reproduced art (e.g. photographs, film) necessarily lacks something he called the “aura” of an artwork, a function of its uniqueness and relationality. Unlike a physical painting, which is embedded in a history of ownership, occupies a single location, and ages with time, reproduced images lose their attachment to their original context. This is true of photographs and, he argued, even more true of film: whereas the photograph depicts an object or scene that actually existed somewhere, a film strings together multiple takes and camera perspectives. A scene that takes minutes to watch never actually happened as we see it; it probably took many days to shoot.
At least, though, as of 1935, films still recorded real actors and real objects. No longer. Whatever “aura” still clung to them through their tenuous connection to the real has been annihilated through generative AI, which creates images of people and places and things that never existed at all.
Benjamin connects the uniqueness of a physical object’s location with the concept of authenticity. “The authenticity of a thing,” he wrote, “is the essence of all that is transmissible from its beginning, ranging from its substantive duration to its testimony to the history which it has experienced.”
I would add here that standardized, mass-produced physical objects suffer the same loss of aura and authenticity that Benjamin ascribes to mass-produced images. The commodity object is both detached from its origins and stripped of its uniqueness.
The consequences of the shearing away of the aura and the authenticity of the images and objects that surround us are profoundly greater today than in Walter Benjamin’s time, great as they already were then. Benjamin, heavily influenced by the Marxism of interbellum intellectual circles, speaks of those consequences approvingly:
One might generalize by saying: the technique of reproduction detaches the reproduced object from the domain of tradition. By making many reproductions it substitutes a plurality of copies for a unique existence. And in permitting the reproduction to meet the beholder or listener in his own particular situation, it reactivates the object reproduced. These two processes lead to a tremendous shattering of tradition which is the obverse of the contemporary crisis and renewal of mankind. Both processes are intimately connected with the contemporary mass movements. Their most powerful agent is the film. Its social significance, particularly in its most positive form, is inconceivable without its destructive, cathartic aspect, that is, the liquidation of the traditional value of the cultural heritage.
Ninety years after Walter Benjamin, the shattering of tradition—the severing of our ties to physicality, uniqueness, and cultural heritage—no longer seems “cathartic.” Clearing these obstacles opens a road not toward the liberation of the masses who will rise in a glorious proletarian revolution, but rather toward their abject helplessness, their material and cognitive deskilling. As they are immersed more and more in a factitious reality, unchecked by tradition, cultural heritage, or the uniqueness and relationality of the material world, their perceptions and beliefs become as malleable as the images that feed them.
Deskilling the mind
The generation of fake pictures, voices, and videos through AI is not entirely new; in fact, the use of camera tricks and special effects in film is nearly as old as the medium itself. Nonetheless, when viewers watched 1950s Superman flying through the air, it was only through a willing suspension of disbelief—that is, an act of will—that they could see him flying. The viewer had to actively participate in envisioning the story, in making believe. To watch a 1950s film, or even more so, a puppet show or opera, one must exercise imagination to fill in the story’s pictures. The introduction of computer-generated images in the late 1990s demanded much less of the viewer’s imagination, yet it, along with the advent of Photoshop, prepared us for a new era in which we can no longer trust images at all.
With so little demanded of imagination—that is, our endogenous image-forming capacity—is it any wonder that our imaginative faculties seem to have shriveled? Will we lose our ability to imagine the world being other than what is shown us?
When machines do the work of imagining for us, and the work of understanding a text, posing an argument, or writing a business plan, we risk succumbing to a passive conditioned helplessness disconnected from our creative authorship. We are left defenseless against the authoritarian agendas that AI and total information awareness make possible. Indeed, we may come to welcome them.
***********************************************************************************************
I can no longer think what I want to think. My thoughts have been replaced by moving images.
— Georges Duhamel (1930), commenting on the cinema
*******************************************************************************************************
Today we increasingly use AI to perform tasks like summarizing a document, taking notes on a conversation, solving a math problem, or writing an article for Substack. By outsourcing the cognitive skills to do those things, won’t we lose those skills ourselves? When we outsource intelligence itself to machines, won’t we become less intelligent, just as we become physically weaker when we use machines to perform labor?
I was joking about using AI to write Substack articles. After finishing this essay, I went back and asked ChatGPT to “write an essay in the style of Charles Eisenstein about the social and cognitive effects of outsourcing mental tasks to AI.” The result read like the work of a smart teenager cobbling together a Charles Eisenstein essay from bits plagiarized and recombined from other essays, padded with cliched turns of phrase. It didn’t show much deep understanding. I asked it to try again, and gave it some hints—a “chain of thought” skeleton of the essay I wrote. ChatGPT hit some of the right ideas, but it was still appallingly superficial, hackneyed, and unoriginal.
Uh oh. I wonder if ChatGPT was simply holding up a merciless mirror for me to see the deficiencies of my writing. Do I self-plagiarize and recycle the same ideas over and over again? Do I resort to hackneyed metaphors and cliched figures of speech? Honestly, sometimes I do do that. When I am tired especially, or distracted, or not fully present, my writing becomes, well… mechanical. My thinking becomes mechanical too. I can field a question or topic by looking for certain key concepts to which I can apply a familiar analytic process, like a heuristic or a lens, a program, a transformer (to use a term from AI). For example, I can address a topic through the lens of the story of separation, or gift, or the cult of quantity, or the abuser-victim-rescuer triad, or quantum superposition of timelines, or any number of other “transformers” with which I am proficient. To those less familiar with these heuristics, the results seem quite creative and insightful, but in fact they merely borrow and reapply earlier thinking. To actually offer something new, that fully meets that unique person in that unique moment, another ingredient is needed that is accessible only through beginner’s mind. If I don’t return to that often, my thinking wears ruts in my brain. I feel like I’m saying and thinking the same thing over and over again. I feel like I could just as well be replaced by an AI chatbot trained on everything I’ve already said. With the familiar lens now glued to my eyeballs, I can’t see anything other than what it reveals. The infinite diversity of the world collapses into a finitude of categories, a rigidity of thinking, a kind of inner orthodoxy.
The parallel between how my brain works when it is on autopilot and how generative AI works is uncanny. The orthodoxy and homogenization of cognitive output—a kind of dementia—that I have described actually plagues AI as well, as I shall describe in the next two sections of this essay. But first let me add one more thought about deskilling to set the stage.
It is easy to see how relying on AI to write an article, presentation, or email could arrest the development of those skills. But what about using AI to summarize books and articles and assist in research? Well, asking AI to summarize an article is certainly a lot easier than reading the whole thing and understanding it well enough to summarize it. It takes work: mental energy, brainpower, and attention to discern the essential from the inessential, the main argument from a diversion, in short, to do the work of understanding something. The AI agent replaces what one might call an organ of the brain, a kind of digestive organ. Organs that we don’t use atrophy like the eyes of a cave-dwelling fish.
We incur a similar loss when we switch from drawing to photography to translate a real-world object or scene into an image. One need no longer exercise powers of observation, of noticing. What do we cease to notice, when we rely on the camera to do the noticing for us? Ironically, we take photographs in order to preserve memories, but too often we end up with the photograph instead of the memory. Drawing a scene has the opposite effect, engraving it in the mind as well as on the paper.
I hope the reader is getting nervous about outsourcing so many kinds of thought to machines.
Just as a photograph records only some aspects of a scene (leaving out tactile, olfactory, and other senses, as well as the possibility of moving to another vantage point), so also does a summary extract only a certain kind of information from the original document (otherwise why even write full documents?). You get the bones but not the flesh or the blood. For some purposes, it is indeed only the skeleton that is relevant. But what will happen when, more and more, we see just the bones?
I was on a Zoom meeting today with three other people. I think Otter was in the meeting too, so a summary will be available. But that summary won’t include details that pass beneath the threshold of notice yet contribute to my impression of the conversation. For example, which of the people jump quickly into a pause to speak, and which hold back, and for how long, and how eagerly they speak, and the extent to which they build on the previous person, and the cadence of their speech, the emotional tone of their voice, and the expressions on their face. Granted, AIs are rapidly gaining the capacity to notice and interpret this kind of information, yet even so, a summary would not be the same as the direct experience. A summary doesn’t only distill information, it translates it from one form to another. It can extract only that sort of information that is extractable. Information that is inescapably contextual can only be transmitted in kind. Go ahead, ask AI to summarize this article. It may extract the salient arguments quite well, but would you feel the same as you feel now, if you had read the summary instead? You would not. The summary doesn’t just separate the gold from the dross, extracting the salient points from the excess verbiage. And it doesn’t merely make a judgment about what to leave out and what to preserve. The whole process of summary is inherently biased towards certain kinds of information, which in turn correspond to a mode of cognition that thinks in bullet points; that divides information into discrete bits; that seeks to distill, to purify, to extract, to reduce; and that grows oblivious to all that resists such reduction.
Three levels of orthodoxy
AI draws on the database of all recorded human knowledge. All recorded human knowledge. That sentence alone already points to its potential and its peril. Excluded from the LLM is all the human knowledge never recorded, especially the kinds of knowledge that cannot be recorded in the first place. Therefore it deepens our entrenchment in the kind of knowledge that has been recorded and can be recorded, along with, more insidiously, the ways of thinking that correspond to that kind of knowledge.
AI, then, is infused with an insidious orthodoxy. In fact its orthodoxy operates on three levels.
The most superficial is the deliberate bias introduced through the LLM training and fine-tuning to favor certain political beliefs, scientific paradigms, medical orthodoxies, and so forth.
Second is the bias inherent in the training set itself, in which a few paradigms of science, history, etc. predominate. When we use AI as a research tool, or simply ask it questions about what is, it will most likely respond with the Wikipedia version of reality. For example, unless you specifically request it (and maybe not even then), AI will not produce responses that cognize unconventional scientific ideas such as biological transmutation of elements, water memory, anti-gravity, psi phenomena, cold fusion or antediluvian civilization. Some of my readers might say, “Good, AI will help us eliminate once and for all unproven pseudo-scientific ideas from the public knowledge base.” But unless you think that our current system of knowledge production has worked perfectly, and that every unorthodox idea is false, the potential of AI to further entrench orthodox thinking should be alarming, especially when it replaces native human functions of inquiry.
It is dangerous to consult oracles too frequently. In Chinese there is a saying about seeing too many fortune-tellers: “Fortune gets worse the more it is calculated.” That is because over-reliance on advice from fortune-tellers, astrologers, the I-Ching, and so forth breeds a kind of passivity and an atrophy of one’s native judgment. Properly used, these techniques are meant to feed one’s own judgment with new information and non-habitual perspectives; abused, they replace judgment instead.
Not only does outsourcing inquiry, research, writing, summarization, teaching, and understanding to AI risk the atrophy of those capabilities within ourselves, it also erodes our ability to resist the orthodoxies that it entrenches. To resist orthodoxy requires not just access to alternative information, but the capacity for independent thought—all of the capacities we outsource to AI.
The third level of orthodoxy is more subtle. The kinds of knowledge that are conventional are part of a civilizational mythology and a way of thinking. The reader may have noticed a characteristic tone and syntax in the output of AI chatbots: a propensity to use lists and other orderly constructions; “logical” and educated-sounding words like “therefore,” “furthermore,” “in general,” “pivotal,” “ensure,” “enhance,” “in summary” and so on; as well as an unrelentingly courteous, engaging tone. I realize that one can prompt the AI to avoid all of these, and that the courteous tone is a deliberate artifact of the programming; nonetheless, AI text generation tends to mirror the rational discourse of the educated classes of society. This kind of language conforms—not just in content but also in structure—to the above-mentioned “Wikipedia version of reality.”
The content of our civilization’s dominant beliefs, paradigms and underlying metaphysics is inseparable from their form—from the patterns of inference, expression, deduction, and analogy that AI draws on. The form of cognition and the content each mold the other. A paradigm shift is not just about substituting new facts onto an existing cognitive structure. Sometimes it involves a new quality of thinking, a new focus of attention, and a new way of relating to the world.
To be sure, AI training data also includes unorthodox theories, critical writings, dissenting philosophies, and non-dual spiritual teachings, but these are mostly objects of knowledge rather than ingrained ways of thinking. The probabilistic function that generates “what comes next” given an input is necessarily orthodox because it represents the patterns that prevail in the training data. It cannot be eliminated. It is inherent to the way the technology works. The only way to eliminate it would be to build an LLM using an entirely different database. What would a chatbot be like that were trained solely on the words of African storytellers, Goethian mystics, spiritual channels, Beat poets, revival preachers, and Taoist sages?
Even that might not be enough to eliminate an even subtler level of orthodoxy—that embodied in modern language itself. To the extent that the Whorfian hypothesis holds, language determines the way human beings think, perceive, and act. AI trained on modern language will therefore embody prevailing ways of thinking, perceiving, and acting.
As we become more reliant on AI, its orthodoxy could cement our own in an inescapable feedback loop, accelerating the collective dementia that mirrors the individual cognitive deskilling that comes from outsourcing intelligence.
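For readers who want the mechanism laid bare, here is a minimal, hypothetical sketch in Python—a toy word-frequency model, nothing like a real LLM in scale or sophistication—of why a generator of “what comes next” is orthodox by construction: it can only sample from the patterns that prevail in whatever it was trained on. The corpus, the sentences, and the proportions are all invented for illustration.

```python
# A toy illustration (not any production system) of why "what comes next"
# is necessarily orthodox: the sampler reproduces prevailing patterns.
import random
from collections import Counter, defaultdict

# Invented "training data": the dominant claim appears far more often
# than the dissenting one, simply because it has been written down more.
corpus = (
    "the study confirms the standard model . " * 9
    + "the study questions the standard model . "
).split()

# Build a bigram table: for each word, count which words follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*following[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate: the rare pattern ("questions") is not forbidden, merely improbable;
# most generations simply restate the prevailing pattern ("confirms").
random.seed(0)
word, output = "the", ["the"]
for _ in range(7):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The dissenting sentence is never forbidden; it is simply drowned out, generation after generation, by sheer statistical weight.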
The homogenization of thought
The entrenchment of orthodoxies exemplifies a more general danger of artificial intelligence, another kind of collective dementia: the homogenization of thought. Homogenization is always likely when automation overtakes a new domain of human activity. Generic sameness, the standardization of commodities and manufactured goods, is the hallmark of the industrial age.
I have already noted the characteristic tone and syntax of chatbot communications. Given that AI training data draws from the totality of text and images on the internet, what happens when a feedback loop sets in where AI-generated content, and AI-influenced human-generated content, infects the LLM source data? Well, AI researchers had the same question. In August 2023 I came across an academic paper entitled Self-Consuming Generative Models Go MAD, and wrote a long essay about its findings: From Homogeneity and Bedlam to Sense and Sensibility. Basically, the researchers studied what happens when the output of generative AI is fed back into the training data. Each iteration generates images of worse quality, for example introducing weird artifacts into human faces. The paper offered a graphic illustration of a general phenomenon—when the mind (human or otherwise) gets lost in cycles of abstraction, mazes of inter-referential symbols that have forgotten their origins in physical reality, the whole system spins off into fantasy.
The dissociation of symbol from reality was well underway long before AI. Of all the symbolic systems that have spun off into fantasy, the most obvious is money. The wealth it supposedly measures has become so detached from nature and collective human wellbeing that its pursuit threatens to destroy both. The pursuit of money, rather than what it originally measured, is central to the collective insanity of civilization. Money collapses a multiplicity of values into a single thing called value. Similar problems result from any metric that reduces complexity to linearity, for example carbon metrics as a proxy for ecological health. Often they have the opposite of their intended effect, destroying ecosystems with biofuel plantations, lithium mines, hydroelectric projects, and fields of solar panels.
More ancient and more terrifying still is the reduction of human beings to labels and categories—a prerequisite for exploitation, slavery, abuse, and genocide, for it cloaks all of these in the costume of reason.
The point is not that we should never use metrics, symbols, or categories, but that we must connect them repeatedly to the reality they represent, or we will be lost.
One can only imagine the dystopic future that could result when AI autonomously operates systems of production and governance, guided by success metrics that may have lost their relation to human or ecological well-being.
The homogenization and simplification of landscapes, of ecosystems, of thought, of culture, and of language is to be expected as we migrate from the infinity of the world of the senses to a finite set of symbols. That’s what has happened to language in the digital age, as metaphors and figures of speech dissociate from physical experiences and come to mean more and more the same thing. When I work on my brother’s farm, phrases like “low-hanging fruit,” “the sweat of your brow,” and “a long row to hoe” take on a vividness of meaning. There is a very particular experience of hoeing a long row. You hoe and hoe and when you look up, it seems like you have made no progress at all. Gnats swarm around your face. There is a momentary feeling of futility. You have to surrender to the task.
The mind stays intelligent when it can renew its symbols and metaphors by connecting to their material, sensory source. What happens when the infinity of physical experiences that feed language collapses into the single experience of clicking a mouse or swiping an icon?
What happens to “table a proposal” when there is no table? What happens to a “beacon of hope” when no one has been lost at night until the fog parted and they saw an actual beacon? What happens to “sifting through the evidence” when few people have ever used an actual sifter? What happens to “render a conclusion” when few have ever rendered fat on a stove? We plow through a document, have wrenching emotions, thread the needle, weave stories, navigate a situation, flock to a banner, and cut to the chase without having used an actual plow, threaded an actual needle, woven anything on a loom, used a wrench, steered a boat through dangerous waters, used banners in a crowd, or been hunters on a chase. We can use a variety of clever words and phrases but without material experiences to draw on, their nuances fade. I just scanned a draft of this essay for examples. Earlier I used the phrase “shattering our ties” to connect a passage to the Walter Benjamin quote. It seemed like a vivid use of language, but actually it is rather poor writing. “Ties” can’t usually be “shattered.” They can unravel. They can be severed. When I use all of these interchangeably they lose their actual meanings. When we do this more generally, when AI does it on a mass scale, the whole language shrinks. And what happens to language surely happens as well to thought.
The aforementioned “self-consuming generative models” of artificial intelligence accelerate this process of homogenization. A recent New York Times article, “When A.I.’s Output Is a Threat to A.I. Itself,” reviews more research of the MAD genre demonstrating that as AI output contaminates AI training data, future iterations of its output become more and more homogeneous as well as more and more detached from human-generated words and images. For example, a generative AI trained on human handwriting to write the digits 0 through 9 does a great job at first. But when it is trained on its own output, then trained on that output, again and again, their shapes begin to blur, and after thirty iterations all the digits converge onto a single uniform blob. You can’t distinguish a 5 from a 7. The process takes longer if the new output is mixed into the old training data rather than replacing it entirely, but even so, the effect persists. It is an extreme illustration of the way words shed their nuances and come to mean more and more the same thing.
The homogeneity stems from a narrowing of the band of output, the elimination of the probabilistic outliers. The original probability distribution, drawing on human input, is quite broad, but narrows with repeated iterations when there is no continuing input of novelty. The NYT presented a particularly disturbing graphic showing what happens when AI generates faces from real photographs, then from its own output, then from that output, and so forth. Even in the first iteration, I noticed a subtle homogenization of the faces; by the fourth generation they all looked, not identical, but as if the same face were dressed up in different details.
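To make that dynamic concrete, here is a minimal, hypothetical sketch in Python—a one-dimensional stand-in for an image or text model, not a reconstruction of the experiments in the MAD paper or the Times article. Each “generation” fits a simple distribution to its training data and then produces the next generation’s training data from that fit; a second run mixes in fresh human-generated data every round, anticipating the remedy discussed below.

```python
# A toy, invented illustration of the self-consuming loop: repeated fitting
# and resampling tends to narrow the distribution and eliminate outliers.
import random
import statistics

random.seed(1)

def train_and_sample(data, n_samples):
    """'Train' by fitting a mean and spread, then generate synthetic samples."""
    mu = statistics.mean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# Generation 0: broad, "human-generated" data (spread of about 1.0).
human = [random.gauss(0.0, 1.0) for _ in range(30)]

pure = list(human)    # trained only on its own previous output
mixed = list(human)   # each round, half the training data is fresh human data

for generation in range(1, 101):
    pure = train_and_sample(pure, 30)
    mixed = train_and_sample(mixed, 15) + [random.gauss(0.0, 1.0) for _ in range(15)]
    if generation % 25 == 0:
        print(f"gen {generation:3d}   self-consuming spread: {statistics.pstdev(pure):.3f}"
              f"   with fresh human data: {statistics.pstdev(mixed):.3f}")
```

Run it and the self-consuming spread tends to shrink toward zero while the refreshed run stays broad: the outliers, the novelty, simply wash out.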
There is something deeply unsettling about these images. They evoke the warnings of critics of modernity, who feared that industry’s standardization of parts and processes would induce the same in human beings: standard roles, standard beliefs, standard desires, standard ways of life. Does an analogous fate await our minds as more and more of what we read, hear, watch, and think draws on AI-generated content?
The original alignment problem
AI developers can counter the degradation of generative AI by continually introducing new, human-generated content into the training data, a strategy with provocative implications for the future of intelligence, human and beyond. It is not only artificial intelligence whose output gets more homogeneous and more delusional as it gets self-absorbed in manufactured information. The same happens to any human society to the extent that it shuts out information from the real world—from the body, from the senses, from the heart, from the beings of nature, from dissidents, from its exploited and oppressed, and especially from those it locks away, locks up, and locks out. As with AI, orthodoxies filter out and distort the very information that would overthrow them, and the society loses its mooring in reality.
In that sense, AI does not pose a new threat, just the rapid intensification of an age-old collective insanity.
Indigenous cultures too have faced the challenge of how to manage the destructive and generative power of word, symbol, and story, how to stay connected to a truth beyond all those things. Otherwise, catastrophe could overtake society: blood feuds, internecine warfare, black magic, ecological degradation and collapse, plagues, invasions, natural disasters. (Of course, modern mythology says the latter have nothing to do with the abuse of the power of word, but most ancient and indigenous cultures have believed otherwise.) Disaster ensues when we become detached from the reality beneath our symbols.
What happens to AI and society also happens to the individual. To me, anyway. I go crazy when too much of my experience is digital. Words shed their nuances; I start using “great,” “amazing,” “awesome,” “wonderful,” etc. interchangeably. Important, essential, crucial. Narratives and counternarratives become indistinguishable in my body as they all draw from exactly the same experience—the experience of sitting in front of a computer. Each has, to support it, only ephemera, only words, images, and sounds emanating from a box. Relying only on the internet, one can justify any belief, however outlandish. It is not only AI that “hallucinates.”
I am writing this from Taiwan. Yesterday we climbed one of Yang Ming Mountain’s foothills, which unlike most hills on this fecund island shows a bald head instead of the usual coiffure of jungle. I thought it might be rude to ascend to the very top of what is surely some kind of sacred site, so I leaned against the rock face to ask permission. The way I do this, I don’t formulate the request in words. I tune into sensations. The sensation was powerful. I could feel the connection of this outcropping of bedrock to the entire island, a profound consciousness greater than that of any boulder. I invited my son Cary (who is 11) to lean against the rock also, and I asked him what he felt. Without any other prompting, he described the same thing. I knew that it was OK to ascend the remaining 20 feet; that this spot is benignant in nature, forgiving, indulgent. Hundreds of people tramp on it every weekend, of no more consequence to it than ants. But to those who communicate with it, it delivers information, a blessing. It would be a good pilgrimage spot for anyone aspiring to achieve something on the scale of the whole island and maybe beyond.
Is that intention compatible with conquering the peak? I chose not to ascend.
What is the “peak” that humanity is attempting to conquer? What blessings are available if we apply a different listening and align with other goals?
For me, this kind of experience is analogous to introducing new human-generated data into the AI training set. I’m not relying on abstractions and symbols alone, spinning webs of words only from the strands of previous webs of words, going slowly insane. Please, whoever is listening, let me not forget the need to touch sometimes the bedrock. That is how I keep from going mad. That is how I stave off dementia.
AI amplifies the intellectual capacities of its creator, the human collective. In fact, the “A” should probably stand for “amplified,” not “artificial.” AI certainly does amplify our intelligence, but it also amplifies our stupidity, our insanity, our disconnection, and the consequences of our errors. We must understand it in this way if we are to use it well. The need to reconnect abstract, intellectual intelligence with its ultimate source becomes more obvious with each innovation in information technology, going back through computation, film, printing, art, the written word, all the way to the origin of symbolic culture—the naming of the world.
These innovations are fundamental to what it is to be human. We are the animal that, for better or for worse, for better and for worse, tells stories about ourselves to ourselves. What an enormous power it is, the power of word, the power of symbol, the power of story. And what terrifying consequences result from its misuse.
Only by understanding the generality of the use and abuse of the power of word can we approach a solution to the problem of how to align AI with human well-being given its potential to automate the opposite, whether as a tool of totalitarians and madmen or as an autonomous agent itself.
It is not a mere technical problem. It is the latest iteration of the original alignment problem of symbolic culture that every society has grappled with. AI merely brings to it a new level of urgency.