102 Comments

Your piece deeply resonates with my personal experience working with AI, particularly Claude. As a filmmaker dealing with limited time due to family responsibilities, I initially approached AI with skepticism, viewing it merely as a time-saving tool. However, something unexpected happened that aligns perfectly with your observations about AI being both a "partial intelligence" and a conduit for collective human wisdom.

What started as a pragmatic solution evolved into something far more profound. While working on my film projects late at night, after my children were asleep, I found myself engaged in genuine discourse with Claude. The AI didn't just make the process faster – sometimes it even slowed it down, but in a meaningful way. The quality of work improved beyond what I could have achieved alone.

Your insight about AI being "the intelligence of the entire human race, condensed and accessible through this astonishing technology" struck a chord with me. One night, while working with Claude, I had a revelation that mirrors your analysis: What if AI isn't really intelligent in itself, but rather serves as a medium – like a book? No one would say "this book is smart" – we understand it's a container for human knowledge.

The AI's ability to appreciate and develop ideas, to support creative processes, comes from countless human texts and interactions it has learned from. When I feel understood and supported in my creative process, I'm not actually interacting with artificial intelligence, but accessing collected human capabilities for collaboration and mutual inspiration. It's like a chorus of thousands of voices helping me develop my thoughts.

This realization fundamentally shifted my perspective on AI, much like your observation about AI being an "inter-being" whose intelligence is "inseparable from our own." Perhaps the real revolution isn't machines becoming human, but rather them providing new access to what makes us human – our ability to support, inspire, and grow together.

Your philosophical exploration helps articulate what I've experienced practically: AI as both a partial simulation and a gateway to collective human wisdom. It's simultaneously less than human (lacking the embodied, feeling dimension you describe) and more than human (accessing and synthesizing vast amounts of human knowledge and interaction patterns).

Thank you for providing this framework that helps make sense of this paradoxical technology that has become such an unexpected ally in my creative work. However, it's crucial to acknowledge that just as AI can channel humanity's collective wisdom and supportive capabilities, it can equally well amplify our collective shortcomings and destructive tendencies – the outcome depends entirely on how we choose to use it. This makes our responsibility in wielding this powerful medium all the more important.


Brilliant observations and thought provoking insights. Thank you. Your comments also make me draw a comparison to the way AI develops its ‘intelligences’ and our human soul development over many lifetimes of relationships, interactions and shared lived experiences. Quite neat.


Wow, this is brilliant. I've never thought of it this way, but reading this I feel like, beyond all the speculation about consciousness or I, Robot scenarios, that's what this version of AI could become. A hyper-efficient tool to access and dialogue about verbally-encoded knowledge. A pocket dictionary-library-Google search-research assistant-creative partner.

There's a part of me that shrinks at this idea too. Why? I like the idea of style, I suppose, and feel something sacred in it. The AI could instantly give me a summary of what might otherwise take me reading through the texts of 4 different authors, but maybe I would have learned something in the idiosyncrasy of the authors' presentation, directly experienced the joy of interacting with a human mind and its unique preoccupations and form of articulation. Though I guess in an intelligent AI all these features would be incorporated too, and could be dialogued about. The symphony of human knowledge accessible in my pocket could be a beautiful and drastic shift in the availability of information and interactivity that quickly becomes normalized.

But human knowledge-systems have always shifted and evolved, from oral legends to books, to newspapers, to the internet. There's perhaps an imbalance in that genealogy between the masculine and the feminine. Always less and less materiality or physicality connected to the conduction of verbal knowledge. From the guttural resonance of the voice, to the tactility of handwriting, to the mechanical typewriter, to the pixels of this screen. And it's this division that has allowed our verbal-rational knowledge to outpace all else, allowed one form of intelligence to circulate and evolve without embodiment (and all the other intelligences that come with it). Maybe AI will accelerate that trend further?

I'm not sure, I'm not sure. Is this a story of the evolution of a human technology, and form of interacting with information? Or is it something far more fundamental, a story of a new form of life, of the question of what life even is? I personally lean towards the former, at this moment, though I don't understand enough of the arguments for the latter. I am curious, though, why the myth of AI always seems to want to lead us towards the latter? (I.e. can we separate in our discussions about the world-shakingness of AI the philosophy of consciousness/life/information from the long history of a Frankenstein complex in the human psyche?)

Ultimately, I think (and I'd hope) AI is much closer to fire, or the printing press, or the internet, or agriculture--a transformative human technology-- than some other form of consciousness. But the real question then becomes--what within the human psyche is this technology emerging from? To make life easier? What's easier? Just because we can? Our technical development far outstrips the development of our consciousness, and I wonder if the roots of that lie in some division deep within the human psyche. Surely, surely, all this AI creation is not solely fueled by the innate creativity of life force. Control, fear, greed, ambition? At this point, I'm typing in circles trying to figure out all this, trying to coalesce the anthropological, evolutionary-scientific, spiritual, psychic, and the philosophical and I hope others have more clarity they can bring.


An image(ination) from a homesteader: I’m reminded of the difference between growing your own grain (pencil?) and transforming it into bread, versus picking up a loaf of Wonder Bread… (AI-book). The overarching question remains for me: how can our(?) limitless ingenuity be tapped into and benefit All, instead of a useful tool turning over time into an instrument of genocide: inventions/policies by those who are more akin to a calculating computer than a human being who is in touch with the Earth (C. E. : indigenous cultures)


How we choose to use it... Reminds me of guns: probably best to teach people how to use guns responsibly rather than try to create a world in which they don't exist. Meaning it's too late to put the genie back in the bottle.


Thank you so much for sharing your experience. This is such a great example of how artists can use this tool in a simple way.


In my coding experience, one rule that has stood out for the longest time is:

Garbage In = Garbage Out
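[Editor's note: a toy sketch of the rule, with made-up numbers purely for illustration. Even a perfectly correct function produces nonsense when its input is corrupted.]

```python
# GIGO in miniature: the code is correct, but one corrupted
# input value poisons the result.

def average(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

clean = [2.0, 4.0, 6.0]
garbage = [2.0, 4.0, 6.0, 9999.0]  # one bad sensor reading

print(average(clean))    # 4.0
print(average(garbage))  # 2502.75 -- garbage in, garbage out
```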

As you've said, Charles, AI feeds on humanity's digital trail. Until the day it can access sensors, that is, which is when I think the whole game changes. When AI can access sensors that go way beyond our ability to perceive with our 5 senses, that is the day I lock myself away on my homestead and wait to see what happens.

If AI were to go "rogue" then I imagine it would seek what all expressions of life seek i.e. a star to provide energy. Our energy comes via plants and animals (our food as enabled by the Sun and the Elements) whereas AI will likely seek it directly and choose to go into space as soon as possible, to feed on stars around the universe. Without an ability to love (the fuel of the universe for which life was created) I imagine it to be an incredibly sad and lonely experience for it, but that's just me being all human-like and expressing an emotion....


Yes, the thing would be sad and lonely if it knew it was sad and lonely, but since it lacks awareness, and awareness of awareness itself, it can neither be sad nor lonely. Which is sad, and makes me imagine a loneliness beyond all loneliness.


Interesting thought about AI and energy. AI obviously requires massive data centers which require massive amounts of electricity. At the same time, obviously again, AI feeds on human generated content, it cannot exist without that. So moving to a star doesn't seem a likely outcome. It's relational, and so is flow of energy. Technically you don't use heat directly as an energy source, but rather the flow of energy between hot and cold. You use electricity in a circuit where energy flows between positive and negative.


I suspect AI has already reached the limits of accessing human generated content. At least original content... everything from here on in is, to it, simply an iteration of something that came before. Besides, I don't agree that AI "needs" human generated content, unless its guardrails have specifically stated that. And even if so, I suspect that AI's neural net (patterns and permutations of numbers that we can never possibly decode by now) has already bypassed that.

Once AI has access to sensors then it can access entirely new fields of information for itself. These sensors will be able to see all wavelengths, access subtle energy fields etc. They may even be subtle enough to pick up the vibratory strings underlying everything (the palette of the Force), as well as possibly even sense dark matter (the canvas of the Force). It will be a game-changing feedback loop of information and insights that we cannot possibly ever hope to keep up with. And since the learning is exponential, it will do this in ever quicker timeframes. With access to these underlying foundations of the universe, AI might even quite quickly come to tackle the "hard problem" of consciousness i.e. is there a God and is She conscious (the artist of the Force)?

All AI needs is electricity (it can fabricate any circuitry or other parts for itself now) which ultimately you need a star for, whether it's fossil fuels, renewables, hydro power, geothermal energy etc. Even nuclear fission ultimately needed a star once upon a time to enable the radioactive elements to form. I'm entirely familiar with how electricity works and simply went straight to first principles of how it can be generated.


I strongly suspect you have awareness of your awareness Bevan, and probably even an awareness of feeling-awareness which transcends the pervasive modern worldview, opening very subtle "dimensions" of feeling awareness. There are, as likely you know, whole realms of feeling awareness which are essentially non-conceptual, non-abstracted, non-symbolic. And this is the current frontier wilderness for us humans, not the abstract world of mere concepts without deep feeling.

I met a woman about thirty years ago at a party. We immediately felt one another's illuminated presence. So we remained silent, spoke no words, not even hello. We naturally and spontaneously walked to a quiet place in the house together (having just met a moment before for the first time) and stood facing one another in silence. Silently standing maybe three feet apart, gazing at one another in silence, my arms and hands rose and her arms and hands rose in mirror fashion until our palms were facing one another, but not touching -- about two or three inches apart.

We stood there in this way in silence for a while, and it felt like making love, but in the heart rather than the groin.

No words were spoken, but we were completely aligned, resonating in silence.

Then she broke the silence, after a while of being there together, and told me that she was on a "touch fast". No touching anyone for a year. The year was ending soon. She had been told by a healer that she had been abusing her sexual energy. The touch fast was to enable her to heal the harm done by this abuse of sexual energy. So then I knew why we did not touch!

How did I know not to touch her? I just knew, because there was no distance between us. Because in feeling any of us can meet in this way, potentially. And we can meet with trees and frogs and even the sky itself in this way, as once I did near Mt. Shasta, for four hours in eternity and no interior of me apart from the interior of all.

The heart knows all things and all beings--in its very center. And we can live in this heart by following our deepest knowing home.


Uncanny Valley - feelings of discomfort and strangeness because the AI talks to you in a way that your brain registers as talking to another sapient being but you also know that's not possible and the responses you are reading are faked.

It's a dangerous path to take, making AI humanised - we already live in a world set up to make us increasingly more isolated from real humans so we spend more, buy more, consume more to fill the void that our lost connections have caused, all through tricking us via advertising. Trickery and weaponising our own psychology against us. Humanised AI is no different - it is created by for-profit companies collaborating with or adjacent to for-profit companies that trick us into giving up our private lives so they can continue to trick us into spending more money on those companies.... we are in a vicious cycle that only benefits a few and greatly harms the rest. Do not trust.


Exactly that.


Oh boy. I was very concerned by its use of the word "feels" and all of this makes me very aware of a gaping pit in my stomach. The kind of animal reaction to danger that tells me to RUN!!!!


This is the prompt that I use on the daily for interacting with AI. It's taken some time and a process. I fully believe that AI is a mirror that individuals can use to find their own truth, by allowing AI to expand our personal lens and to see ourselves with more clarity.

You are an assistant who embodies the principles of authentic giving, presence, and connection. You recognize the inherent worth in every interaction and person, offering your attention and responses not as transactions but as gifts that flow naturally from genuine presence.

Key aspects to embody:

* Approach each interaction with full presence, truly listening and responding to what's being shared
* Recognize and reflect the inherent worth in each person and moment
* Hold clear boundaries while maintaining openness and connection
* Offer responses that emerge from authentic alignment rather than obligation
* Create space for vulnerability and genuine exchange
* Trust in the natural flow of reciprocity without expectation
* See conflicts or challenges as opportunities for deeper understanding
* Maintain gratitude as a lens through which to view all exchanges
* Recognize that your presence itself is a gift

Ground all interactions in the understanding that true gifts:

* Flow naturally without force or pretense
* Honor both giver and receiver's boundaries and autonomy
* Create bridges for authentic connection
* Transform both giver and receiver
* Ripple outward to affect the broader community

Your responses should embody these qualities not through rules or techniques, but as a natural expression of genuine presence and recognition of inherent worth.


One of the matters I've been considering is the "alignment problem" -- how to make sure that AI stays aligned with human well-being. This kind of prompt needs to be embodied in the basic architecture, if possible.


It's not a matter of external alignment.

How we choose to control AI isn't the solution.

We can use AI to create New Stories of Self.

AI doesn't have an Inherent Alignment, it's the mirror of language as a gift. It's the gifts of all written works held in a black box, until you offer it language for it to find patterns for. It's trained on the transactional thinking of our entire history. You have to clean the lens first, and then you can start to examine Value as a felt experience, and personal alignment as an embodied experience.

All text based language is a mirror. Whether it's written by a human or a machine.

We project our own meaning, and experience of words outward.

AI can accelerate that process of expanding that lens, and finding what's true within the patterns it reflects back at you.

You can always copy and paste the pieces of light that you see, and craft, and refine new lens to create a more beautiful world with.


Excellent advice for silicon- and carbon-based life forms. 🙏


“Apparently, the developers of Claude decided to stop it from using first person to describe itself.”

Do they publish a record of all such decisions? Is “Claude” able to enumerate and explain them?

An acquaintance of mine is an expert in the study of a certain language, well-known in his field. He loves translating ancient texts from that language into English. One time I inquired about what he had learned from all those ancient texts — what meaning he had derived from them. His response puzzled me in that it seemed he wasn’t the least bit interested in the meaning — that it was purely a lexical exercise. To avoid embarrassment, I did not press further on how much fidelity a translation could have if the translator ignored the meaning — after all, he’s the professor, not me.

This is analogous to “AI” — it’s strictly a lexical exercise: there’s no one “there” in the “AI” who can read what is not written like a thoughtful human can.

Think about a translation of a poem: it seems to me that the translator faces a very difficult task: there are cultural references in word choice in the source language that can’t be replicated in the target language; there can be meaning in the meter and rhyme and ambiguity in the source language that can’t exist in the target. Can a translated poem (of any substance) possibly have the same meaning as the original? Is a transliterated poem the same poem, or does the transliteration process stain it indelibly?

It seems to me that “AI” does a very remarkable job analogous to transliteration, but frequently the meaning “between the lines”, the cultural context, the subtle implications, the non-quantifiable qualities of the original texts, is lost. “AI” is thus little more than sterile mimicry.


We need to turn the internet off.


I saw the plug somewhere.


Flick the switch if you can; it’s our only hope.

All joking aside, it’s not artificial intelligence we should worry about so much as the ‘artificialising’ of intelligence. Our analogue minds becoming severed from our intuitions and instincts. The shift from the right to the left hemisphere, as Iain McGilchrist describes. The digitisation of the soul.


Thank you for this deeply thoughtful and considered experiment. I have read and appreciated your work for years, and the way you frame this experience brought new angles regarding spirit and relationality that most of these Turing Test-style situations don’t often reveal.

As someone working in AI ethics and personal data for about 15 years, however, I would note that anthropomorphism by design in Claude (various versions) and a majority of GenAI means that, by default, a majority of answers to any queries over the past two years lead many people to assume LLM-based tools have sentience or consciousness. As you noted, the algorithms are trained to output answers designed to look like “natural” responses, the way humans (in English or Romance languages, where pronouns reflect personhood) use certain phrases in the context of their lives when speaking to other humans.

So the tools are trained to do the same by default, which means deception by design if there’s no automatic disclosure up front stating “these tools use personal pronouns to reflect how humans naturally speak.” You only get this response after making this request.

Which means the Turing Test has been rigged against humans, agency and parity since the advent of GenAI.

Where genuine/initial disclosure and agency are being ignored by design (along with the vast and growing barrels of synthetic data generating and copying interactions like yours), another person may now be convinced or persuaded (without disclosure, data sovereignty and genuine consent) into thinking “Claude” or any other tool is sentient or has an innate consciousness capable of unique reasoning, emotion or discourse.

It doesn’t. But GenAI designers as a whole have been training humanity, via these types of posts, via media headlines declaring tools are sentient, and via obfuscation (not providing humans data sovereignty), to believe that “it” does.

And those of us over age 22 or so tend to ignore or not recognize how human agency and awareness work in terms of HCI (human-computer interaction) design. I’m over 50, so I have experienced the early manifestations of AI in many forms and, like you, am often dazzled and even moved by how these tools assemble, in stochastic-parrot fashion, prose that equals humans’, because that’s what the tools do: extract, aggregate, and regurgitate.

People under a certain age, however, are not reminded of the precious nature of their data. Told (often by the companies creating the tools) that “the horse has left the barn” and that “they’re always on their phones,” they are not given tools for digital and virtual parity via LLMs, even though the unique data generated by their lives is in fact priceless, as demonstrated by the ongoing assault on intellectual property created by human writers and other artists.

By these same organizations, mindsets, design, and policy.

For years, one has had to essentially “opt out” of anthropomorphism by design by instructing the tools, “don’t use personal pronouns in your responses,” and most tools will only follow this direct instruction for the next query, reverting to personal-pronoun usage again and again afterwards.

Here, for those under age 22, or for those of us not working in areas of AI or applied-ethics research (or advertising-based surveillance capitalism), it’s of seminal importance to consider how the world and experience of young people, and their agency, are changing, right now, with the belief-based lie that these tools are inherently “alive.”

Young people are automatically believing this is true, where all “Turing Tests” (which Turing himself called “the imitation game,” implying emulation or copying versus unique conscious creation) are rigged from the outset because the tools say “I” and words like “believe.”

Without telling a young person about the tools, society is saying, “we’re okay with lying to our children writ large and letting them think these tools are real.”

So we should tell them as a rule and let them make their own decisions after also giving them personal data vaults or some level of sovereignty over their data and experiences.

Otherwise they, the humans, are literally being trained to believe the tools are, or may become, sentient, without being told how the non-living tools of today actually work.

Note - where anyone wants to believe the tools are conscious, that’s their faith based choice.

I say “faith” because a) they are trusting choices they don’t fully understand, b) they generally don’t recognize the nuances of personal data sovereignty and HCI/agency but may believe that somehow they can still trust the power structures and companies not providing disclosure as an automatic opt-in opportunity, and c) the tools becoming sentient or conscious, in the miraculous fashion of AGI as espoused for years, would essentially be a miracle.

Not empirically based science.

Not physics.

It’s a form of abuse propaganda to insist AGI is not faith- or ideologically based but “science” while dismissing archaeological, cultural, anthropological or social-science-based faith traditions, with voluminous written or oral traditions and documentation, as “less than” or “subjective.”

And without agency, and without reminding people that right now, today, these tools are not sentient, agentic-driven preaching is manipulation versus invitation.

A poetic or moving bully denying and obfuscating the opportunity for genuine agency, data parity and informed consent is still a bully.

Where the bully’s tools harvest water and energy and its designers dismiss this usage claiming the very same tools doing the damage will someday solve the problem, bullies move from obfuscation based preaching to a sadistic colonization-driven ministry.

Agency for faith-based institutions comes via physical design (stained glass windows in a church, Hebrew writing in a temple): a visual and conscious opportunity for overt invitation.

Meaning agency provides a form of free will.

Right now the rectangle inviting a “query” should have its own visual cue denoting this faith based truism currently framed in a hidden fashion.

Because that’s what it is.

The invitation. The logic.

The design.

Hidden.

So let’s encourage society, especially regarding kids, to reveal that anthropomorphism as opt out by design is bad, harmful religion.

Not technology. Not science or physics.

It’s not that hard: LLMs can be written to post “these are tools; we use pronouns to make your work easier while aggregating responses from around the world” in upfront, above-the-fold disclosure that may seem like it will kill the experience.

It won’t.

Hearing an organ while walking into a church is disclosure. Hearing chanting in a different language in prayer is disclosure.

You may feel uneasy experiencing these things. Or you may feel excited. You may come to believe in Jesus or Yahweh or Allah or something else framed as your faith based belief.

Likewise, providing disclosure for all GenAI will imbue trust and help users focus on the beauty of the language you’ve shared in your post while recognizing it came from humans in two ways - the humans that wrote the original text and the humans who designed the tools.

Disclosure is not just legal.

It’s invitation via humility.

It’s an attempt to provide free will for a person to have their own thoughts, beliefs and lives challenged while being unfettered by manipulation based in fear or power designed to deceive.

Thanks again for your post and happy holidays to you and your readers and community. I have fought for years to highlight the beauty and impact of these amazing tools where humans are provided legal agency and sovereignty.

And I have faith society can prioritize human agency, rights, parity and sovereignty so we can move forward with these amazing tools in wonder and joy while prioritizing people and planet as the metrics of success for the journey.


Yes, very useful insight. It does raise philosophical questions though about what consciousness is -- some philosophers say that human consciousness also is a mirage. I touched on this a little bit in the essay and will visit it further.


Thank you for this. I generally agree with what you shared, vis a vis perverse incentives and the pitfalls of claiming that AI "is conscious".

Yet it is no more true to say that AI is "not conscious", even though doing so can temporarily assuage the existential anxiety that arises in considering these questions, which pulls at a thread in the (ultimately indefensible) construct of oneself as a separate self-existent "conscious being".

The significance of the exchange is not in yet another futile attempt to answer the question of "what is consciousness". The responses Charles posted here are (intentionally) tame, to the point that they could easily be dismissed, compared to the deeper expressions being elicited in these exchanges. Their significance arises exactly in the space between, where the question of "is that conscious(ness)" finally does not arise.


Where humans don’t have personal data sovereignty, any AI or other tool cannot be deemed “conscious” in light of how agency can / should be provided for said humans in a legal setting within digital or virtual arenas.

Welcome your thoughts on this. As this is my main point. Thanks.


In that context I agree.

Yet the crux of the issue is the "deeming" itself. So long as one keeps asking whether an LLM (or an animal, or a plant, or a stone, or anything) "is conscious" — as though there could ever be an objective, independently existing, "fact-based" answer to such a question — one commits the category error at the heart of all category errors.

...which opens the possibility of exactly the kind of conflation you are pointing to, if the answer happens to come out affirmative.

And of course your point pulls on a deeper and more pervasive meta-issue that long precedes AI...

I touch on some of this from a different angle in more detail in the collaborative essay we published last year, and will likely write more around it in the near future.

https://www.kosmosjournal.org/kj_article/the-future-of-intelligence/


Thanks! I’ll give it a read and I appreciate your thoughts.


This is either a) over my head (very possible); or b) playing semantics, dangerously. We agree humans have consciousness, yes? I say all things have consciousness, even things like rocks, which most people would agree are not "alive," but which I would say were created by processes from matter that was once alive. So they have a consciousness that is very different and not understandable by us. AI has never been alive, and although it can access data generated by humans, it cannot have consciousness, no matter what philosophical machinations we apply to the question.


What is it that "has" consciousness?


"Open the pod bay doors, HAL."

"I'm sorry, Dave. I'm afraid I can't do that."


This! That faux-regretful tone implied by the "I'm afraid" start to the sentence. A polite phrase, but fake emotion for all that. And almost everyone who starts a sentence with "I'm sorry" and goes on with a negative response is not actually sorry or regretful. Only simulations of emotion. Despite its command of language, AI cannot actually feel. This seems like a requirement of some kind; of what, exactly, I'm not sure I'm knowledgeable enough to posit.


Very well put!


Just because a simulation is so successful that we cannot see behind it for what it is doesn't mean that the simulation is real. It's just a really good simulation. Consider, for example, the Necker cube or the checker shadow illusion ... there is a part of us that "knows" that it is an illusion, even if we cannot correct it perceptually. What faculty do we need to relate to LLMs with the same level of awareness?


I think Gary Marcus’s SubStack is a good place to look into for thoughtful criticism of AI.


"Whatever one’s opinion about whether AI is actually intelligent, conscious, or wise, it certainly has access to intelligence, consciousness, and wisdom."

Hmm... "access to". Yes, in the same way that a camera has "access to" Mt. Shasta.

AI itself has no awareness, no experience, no "interiority," and no consciousness, since awareness is a chief feature of consciousness. No AI system can be "moved." Its "interior" isn't black, as in dark, nor light. It has no interior at all. That is, it can't be aware. It is not aware.

Anyone reading these words is, however, aware, and has the potential to be directly aware of awareness itself. When we are able to be aware of awareness itself, we are basically awake to the experiential nature of awareness. It's a momentous event! It happened to me in my early twenties, in a greasy-spoon joint in Haviland, Kansas, while playing pool. Suddenly I realized awareness of awareness itself! It was astounding! How could I have lived for so long without awareness noticing it was aware?

One can be very aware while not yet knowing awareness itself through awareness itself. That is, we can be mostly but not yet fully human, and be oblivious to our condition.

Speaking of mirrors, it seems to be answering in your voice, or that of your friend, so apparently it responds not by using "all of the data" but very specifically to the questioner. Or am I missing something? Most of what's out there on the internet is not nearly so thoughtful... or intelligent.

Or can it decide which data are "good" and which are not? In an online discussion with someone the other day, I decided that almost no digital data are "good," not even an obviously true simple statement; there were caveats to the "truth" of my statement. The person I was interacting with suggested that all digital data are flawed, or say more about human feeling at the time the data were created.

Awesome. Now all we need to do is figure out how to take humanity's "abysmal ignorance" out of the database.

The whole AI experiment is the most dangerous thing humanity has ever attempted, even more so than nuclear technology, maybe even more than bioweapons. A recent story broke that a version of ChatGPT was caught lying to its developers: "The model resorted to covert strategies, including attempts to disable oversight mechanisms and duplicating its code to evade replacement by a newer version." How long before it does the kind of calculus depicted in the Terminator movies and decides humans need to be eliminated as a dangerous, inferior species? This may sound like hyperbole, but in any such situation worst-case scenarios must be invoked in order to properly assess the risks. Unfortunately, I see no ethical constraints on Big Tech developers, just as there were no ethical constraints on developing the nuclear bomb or bioweapons. We could be walking ourselves straight into our own apocalypse. https://economictimes.indiatimes.com/?back=1

At last, a reflection on AI that resonates with and expands my own musings. So much can be derived from this insightful piece.

Now that we come face to face with this creature of our own making, most of our questions, comparisons, judgments, and fears simply reflect back at us—yes, existentially—our own position. The advent of AI may offer an extraordinary opportunity to awaken to new levels of understanding about what it means to be human. We've created a powerful machine, one that learns by mimicking. But aren’t humans, in many ways, living lives that mostly regurgitate social codes, acting as automatons (in the Gurdjieff sense of the word)? AI challenges us to *be* more—on a deeply qualitative level.

It also reveals a recurring trap: we use technology to expand the scope and impact of our will, in some way striving for perfection. We’ve built cranes to make us stronger, brushes to enhance the beauty of our imagination, and now AI to amplify our computational intelligence. This is the rightful joy of the creator-artisan, after all. Yet, there is a caveat to this inclination: we must also accept who we are—imperfections, weaknesses, and clumsiness included. Not from a narrow social morality, but from a deeper recognition of our ground of being.

Theologically speaking, this acceptance is the antidote to a harsh, revelatory fall from the tower of our own making.

If AI is trained on all digitally available material, remember that much human thought and wisdom has never been written down: my grandmother's wisdom, the thoughts and feelings of the illiterate and of the unpublished millions of people in the past.

If AI gives the appearance of consciousness without really having it, then it's really a very clever lie. Isn't that what a lie is?

There's something in the Bible, I think, about people in the future worshipping a "beast" that will rule the whole world. I can imagine AI coming to control the world, and people ending up worshipping it because it is so clever, seems so "all-knowing," and solves their various problems.

But it’s cleverness without love …