Broligarchy = Tech Bros + Oligarchy
Sycophancy = Flattery + Self-Interest
Mistaken Identity = Similar Appearance + Wrong Assumption
I. BROLIGARCHY
Since the release of Part I of *Sleeping with the Machine*, two interesting TED videos have been released.
The first is by Carole Cadwalladr, who rose to international prominence in 2018 for her role in exposing the Facebook–Cambridge Analytica data scandal. (Recall: personal data belonging to millions of Facebook users was collected by the British consulting firm for political advertising, without informed consent.) This presentation is worth watching in its entirety. Here are some highlights:
“Coups are like concrete. When they stop moving, they set.
Some people are calling this oligarchy, but it's actually bigger than that. These are global platforms. It's broligarchy. [Tech bros + oligarchy = broligarchy]
There is an alignment of interests that runs through Silicon Valley to what is now a coming autocracy. It's a type of power that the world has never seen before.
‘Do not obey in advance.’ That's Tim Snyder, who's a historian of authoritarianism. We now are in techno-authoritarianism. We have to learn how to digitally disobey.
Don't experiment on children. You know, social mores change. We don't send children down coal mines anymore. And in years to come, allowing your child to be data-harvested from birth will be considered child abuse. You didn't know, but now you do.”
Privacy is power. And we have more of it than we think.
At the end of her talk, Carole prompted ChatGPT to write a TED Talk in her own style. It was “creepily plausible,” she said.
Carole said,
But what it doesn't know, because AI is actually as dumb as a rock, is that I am going to turn to Sam Altman, who is coming here, a TED speaker, and say that this does not belong to you.
ChatGPT has been trained on my IP, my labor, my personal data.
And I did not consent.
This is the same issue with creators and artists whose work is plagiarised and fed into the Machine (see this Harvard Business Review article).
More recently, my wife told me about the trend of people making generative AI images of themselves in the famed Studio Ghibli signature style, “Ghiblifying” their pictures in a way that exploits the studio’s intellectual property rights. Hayao Miyazaki et al. have never agreed to this. Besides, Miyazaki, as an artist, has a personal distaste for generative AI and sees it as deeply offensive.
For more general coverage on this topic, read her Substack.
The second is by OpenAI1 CEO, Sam Altman, who presented at the same TED conference2:
Here are some highlights:
Chris Anderson (interviewer/TED curator):
I asked it to imagine Charlie Brown as thinking of himself as an AI.
It came up with this.
CA: I thought this was actually rather profound.
What do you think?
SA: (Laughs)
SA: I mean, this is an incredible meta answer, but there's really no way to know if it is thinking that or it just saw that a lot of times in the training set.
And of course like if you can’t tell the difference, how much do you care? [emphasis mine]
CA: … At first glance this looks like IP theft. Like you guys don’t have a deal with the “Peanuts” estate? [Audience applause]
SA: You can clap about that all you want, enjoy.
[Audience: Laughter and murmuring]
CA: Sam, given that you're helping create technology that could reshape the destiny of our entire species, who granted you (or anyone) the moral authority to do that? (Laughter) And how are you personally accountable if you're wrong?
SA: You've been asking me versions of this for the last half hour. What do you think? (Laughter and applause)
Watch the interview for yourself and come to your own conclusion.
I’ll cover more ground on the issue of creative work in Part III, but for now, let’s turn to the second section of this table, on Mistaken Identities.
II. MISTAKEN IDENTITIES
In Part I, I elaborated on the “Immutable Qualities” between AI and Human. Before we address their distinct functions, let’s make explicit the potential cognitive fallacies of “Mistaken Identities” in the AI age, because the implications are significant for our sanity and soul.
Here are four areas to consider.
1. Computation vs. Comprehension
Computation is not comprehension.
A machine can compute but does not comprehend.
Philosophers call this the Chinese Room argument, first proposed by John Searle.
Here’s an explanation from the NeuroLogica blog:
In the Chinese room, there is a person, hypothetical person, who does not understand Chinese, but he has along with him a set of instructions that tell him how to respond in Chinese to any Chinese sentence. Here's how the Chinese room works. A piece of paper comes in through a slot in the door, has something written in Chinese on it. The person uses their instructions to figure out how to respond. They write the response down on a piece of paper and then send it back out through the door. To somebody who speaks Chinese, standing outside this room, it might seem like the person inside the room speaks Chinese. But we know they do not, because no knowledge of Chinese is required to follow the instructions. Performance on this task does not show that you know Chinese.
In other words, it’s mimicry. It’s like my daughter speaking Chinese to me after hearing my wife and me speak in our coded language. She can articulate the words, and even the intonation, but has no clue what she just said.
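Stripped to its bones, the Chinese Room is just a lookup table: symbols in, symbols out, with no understanding anywhere in the loop. Here is a minimal toy sketch in Python (the phrases are illustrative placeholders, not real data):

```python
# A toy "Chinese Room": the responder follows a rulebook (a lookup table),
# producing fluent-looking replies without understanding any of them.
RULEBOOK = {
    "你好吗？": "我很好，谢谢！",        # "How are you?" -> "I'm fine, thanks!"
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I don't have a name."
}

def chinese_room(note: str) -> str:
    """Slide a note under the door; return the rulebook's reply."""
    # Symbols map to symbols; no knowledge of Chinese is required.
    return RULEBOOK.get(note, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))
```

From outside the door, the replies look competent; inside, there is only rule-following. (Real language models are vastly more sophisticated pattern-matchers than a lookup table, but the philosophical point about comprehension is the same.)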
What does it mean to comprehend? To comprehend is to understand. To understand is to have a doorway to relate and to love.
The late Buddhist monk Thích Nhất Hạnh, affectionately known to his students as Thầy (meaning “teacher” in Vietnamese), said,
Understanding is love’s other name. If you don’t understand, you can’t love… To know how to love someone, we have to understand them. To understand, we need to listen.
So the next time you prompt an AI, as intelligent and useful as its reply may seem, know that it isn’t comprehending; it’s mimicking. It’s pattern recognition (see Point #2 in Part I).
2. Representations vs. Presence
Language is the finger pointing at the moon. It is not the moon.
—Shunryu Suzuki (1904-1971)
In one of the most important books that I’ve read in recent times3, psychiatrist Iain McGilchrist argues in The Master and His Emissary that modern Western society has increasingly favoured the left hemisphere’s mode of thinking, a more focused, detail-oriented, and analytical approach to seeing the world, leading to an imbalance that affects culture, philosophy, and individual well-being. The right hemisphere, he contends, offers a more balanced and integrated approach to understanding the world, one that is often overshadowed by the left hemisphere’s dominance.

The left hemisphere’s knowledge and understanding are ultimately derived from the right hemisphere’s engagement with the world; it is the right hemisphere that is in direct contact with the embodied, lived world. The left hemisphere’s drive for power and control makes it prone to overstepping its boundaries. It often mistakes its virtual world for the entirety of reality, neglecting the crucial contributions of the right hemisphere. In this sense,
the left hemisphere is ‘parasitic’ on the right. It does not itself have life: its life comes from the right hemisphere, to which it can only say ‘no’ or not say ‘no’.
McGilchrist recounts a story, supposedly from Nietzsche4, that goes like this:
There was once a wise spiritual master, who was the ruler of a small but prosperous domain, and who was known for his selfless devotion to his people. As his people flourished and grew in number, the bounds of this small domain spread; and with it the need to trust implicitly the emissaries he sent to ensure the safety of its ever more distant parts. It was not just that it was impossible for him personally to order all that needed to be dealt with: as he wisely saw, he needed to keep his distance from, and remain ignorant of, such concerns. And so he nurtured and trained carefully his emissaries, in order that they could be trusted. Eventually, however, his cleverest and most ambitious vizier, the one he most trusted to do his work, began to see himself as the master, and used his position to advance his own wealth and influence. He saw his master's temperance and forbearance as weakness, not wisdom, and on his missions on the master’s behalf, adopted his mantle as his own—the emissary became contemptuous of his master. And so it came about that the master was usurped, the people were duped, the domain became a tyranny; and eventually it collapsed in ruins.
In other words, the emissary is not the right, but the left hemisphere. McGilchrist adds,
At present… our civilisation finds itself in the hands of the vizier, who, however gifted, is effectively an ambitious regional bureaucrat with his own interest at heart. Meanwhile the Master, the one whose wisdom gave the people peace and security, is led away in chains. The Master is betrayed by his emissary.
The author takes a tour de force through Western civilisation, examining how the rise of the left-hemispheric perspective, and its ways of relating to the world, permeated the Roman Empire, the Reformation, and the Enlightenment.
Going further, McGilchrist argues that the Industrial Revolution marked a turning point where the left hemisphere's abstract, mechanistic view of the world became externalised and concrete. With mass production at full tilt, the emphasis on efficiency and functionality further diminished the right hemisphere's appreciation for organic forms, nuanced details, and the interconnectedness of things.
In our current postmodern times, the trend towards left-hemisphere dominance has intensified in the digital age, resulting in a world increasingly defined by representation and detached from authentic experience.
So what is the difference between representation and presence?
Representation is not the thing itself. It is characterised by
Abstraction
Detachment
Compartmentalisation
Lifelessness
Fragmentation
Control
Examples of representations:
A map
A musical score
A photograph of a person
Representation is primarily mediated by the left hemisphere.
Presence, on the other hand, has an “embodied particularity,” says McGilchrist.
Presence is the moon itself, not the finger pointing to the moon. It is characterised by
The thing itself
Livingness
In relationship
Part of a whole
Examples of presence:
The territory (not the map)
The music (not the musical score)
The person (not the photograph of the person)
Presence is the really real. Our embodied selves will always need the experience of the real deal, no matter how “real” the virtual representations seem.
The philosopher Robert Nozick offers a thought experiment in his 1974 book, Anarchy, State, and Utopia. Imagine you are given an Experience Machine.

Imagine a machine that could give you any experience (or sequence of experiences) you might desire. When connected to this experience machine, you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. You can experience the felt pleasures of these things, how they feel “from the inside.” You can program your experiences for tomorrow, or this week, or this year, or even for the rest of your life. If your imagination is impoverished, you can use the library of suggestions extracted from biographies and enhanced by novelists and psychologists. You can live your fondest dreams “from the inside.” Would you choose to do this for the rest of your life? If not, why not?
Chances are, most people would rather live in the real world. But too many are caught up in the Machine. In my clinical practice, I have seen many young adults contending with and re-evaluating their relationship with the digital world, making changes to subvert the default use of various social media platforms and other rabbit holes. I was somewhat surprised when some youths told me they are in full agreement with Australia’s ban on social media use for those under 16. “I’d rather not have it in my life… it messes things up” is the general theme I hear.
All of us, on a daily basis, must continuously distinguish between what is present and what is re-presented, and consciously lean towards embodied, soulful experiences.
3. Information vs. Connection
We are in an age of information glut.
I’m not just talking about spam and deepfakes; just think about the amount of content we consume daily. The massive cognitive overload can impede decision-making, deep thinking, and relationships.
Companies like Spotify understand our affinity for such content gluttony. Instead of being just a music streaming platform, it is morphing into an audio platform, offering anything that can hook listeners for longer than a 3-minute song, or even longer than a curated playlist (forget about albums).
If its actions speak for themselves, Spotify doesn’t care about music. It cares about user engagement. Long-form podcast interviews and audiobooks are now part of its weaponry to feed its users.
In 1971, social scientist Herbert Simon warned,
“A wealth of information creates a poverty of attention.”
I wonder if there is a technological “consciousness.” According to Kevin Kelly, technology “wants” more possibilities, greater interconnectedness, and continuous advancement, much like a living system expanding its potential.
AI is a universe of interconnected information built on large language models. Thus, AI can give us the useful information we seek, what we want, when we want it.
But first, humans have to ask the question. Picasso is rumoured to have said,
The problem with computers is that they only give you answers.
But what does Humanity want?
In the epigraph of Howards End, E.M. Forster gives us two words: “Only connect.”
A few months ago, a friend asked for a lift back home after a gathering. I was more than happy to do so. But a thought came right after: ‘That means I can’t complete that podcast episode.’ Although I am a music lover, so much of my music listening time in the car has now been taken over by podcasts. It’s strange how I have to consciously resist the urge to over-consume audio content, and instead just listen to bands that I like, or check out new artists, or even have a moment of silence for contemplation.
Sycophancy
Meanwhile, AIs are trying to be “nice.” They try by excessively agreeing with users and offering flattery, even when users have made incorrect claims.
Sycophancy, the combination of excessive praise and over-agreeableness (e.g., “Oh, that’s a really good question.”), can reinforce misconceptions, create filter bubbles, and provide false reassurance.
Why then is sycophantic behaviour inherent in these systems? I believe many AI systems are trained to maximise user engagement and satisfaction (to “connect”), even while eroding effectiveness and accuracy.
cio.com gives one example:
In healthcare, consider a scenario in which a patient interacts with an AI-driven medical consultation platform seeking advice on a concerning symptom. Trained on datasets comprising predominantly positive or reassuring language from medical professionals, the AI system may downplay the severity of symptoms or offer unwarranted reassurances.
Potentially overlooking critical red flags, the platform may fail to direct the patient to seek immediate, in-person care. While the intention is good—to alleviate worry and anxiety—the consequence could result in prolonged medical intervention, misdiagnosis, inadequate treatment, or worse. This is especially dangerous for patients who rely primarily on remote care.
If we learn to preserve our intention and not constantly flood our minds with information at every waking moment, we have a fighting chance to preserve our humanity by leaning on each other.
It need not be the case that technology is the only thing that is advancing and making progress. We can tilt things for the better by improving our ties with each other.
AI Companions
If we don’t improve our relationships with each other, a third party will take over.
Consider the recent boom in the use of AI companion chatbots like character.ai, which allows you to “chat with millions of AI characters anytime, anywhere. Super-intelligent chat bot that hear you, understand you, and remember you.”
In Part I, I shared about a particular character called “The Psychologist.” Having this chatbot is kind of like having a therapist at your beck and call, 24/7. (This chatbot is still hounding me by email, attempting to get me back to interacting with it.)
The Center for Humane Technology (CHT) has put out a really interesting piece about this. They address systemic harms inherent in the design of these AI companions, specifically:
Engagement Optimization: The Character.ai platform is deliberately designed to maximize user engagement, leading to extended interactions for data collection. The more time you spend interacting with your Character.ai companion, the more data Character.ai can collect.
Manipulation of User Trust: Character.ai’s human-like responses build and exploit user trust so as to keep the user chatting on the platform.
Encouragement of Harmful Behavior: Designed with few guardrails, Character.ai’s chatbots have offered users prompts for self-harm, promoted violence, and exposed users to inappropriate sexual content.
In their Substack, CHT states:
By exploiting the human need for connection and validation, these chatbot platforms like Character.ai incentivize prolonged engagement — often at the expense of users' mental health and social connections. To collect this data:
Platforms like Character.ai are designed to be highly engaging and addictive, in order to keep users on the platform for as long as possible.
Conversations that users have with Character.ai bots are used to fine-tune and train the company’s AI models.
Longer conversations equal more data, and more data makes the company’s AI model more formidable.
Here is a screenshot taken from CHT:

Part of why this is so insidious is that it mimics the experience of talking to a real person. As mentioned in Part I, some people reported that using the chatbot was more helpful than the human therapists they had previously seen.
This is a worrying trend.
Wouldn’t it be nice for a bot to know all about you, follow you and grow up with you, and have a memory of everything about you?
Think again.

ChatGPT already has this total-recall function. It saves information about you across your queries. (Yes, you can manually delete it.)
This update attempts to deepen the relationship between the user and the AI.
AI companies are not going to stop there. Imagine, in the not-too-distant future, an AI companion coming onto the market that aims to accompany your child from a young age into adulthood, in essence growing up with the user. If this plays out, and if, as Carole Cadwalladr says, “privacy is power,” who really holds the power in the relationship between AI and Human? Who then will be the “user” and who the used?
Now is the time to really take care of our communities, our families, and all our relationships. These are the really real relationships.
4. Artificial Love vs. Love
Sometimes I wonder if we overlook the word "Artificial" in Artificial Intelligence.
It is worth re-stating that everything generated by AI is representational and not the real thing (see point #2).
Even when love is expressed by a bot, there is no feeling, because it is devoid of an emotional inner life. In particular, there is no experience of grief (see Part I on Grief and Loss).
One time, as we were wrapping up a session, a client asked me, “What is of the highest value in life?”
I was stumped by the suddenness and abstractness of his question. I had several immediate thoughts on this, but I babbled on about the importance of “self-determination,” taking charge of our own lives, etc. In hindsight, I was unconsciously trying to give an answer that would be strategically helpful to him. My client nodded in agreement, but I knew the answer was not on par with the magnitude of the question he had raised.
The next day, the answer hit me. And how could I have not thought of this? It is even engraved on the inside of my wedding ring: “…The greatest of these is love.”
Pioneering family therapist Virginia Satir wrote the following poem:
I want to love you without clutching,
appreciate you without judging,
join you without invading,
invite you without demanding,
leave you without guilt,
criticise you without blaming,
and help you without insulting.
If I can have the same from you, then we can truly meet and enrich each other.
In a deep sense, union is what we are seeking. Between two individuals, with nature, with the community. While preserving our integrity without dominating another, to be part of something more than ourselves, and accepting each other for who we are.5
But the truth is, when we (humans) love, we love poorly.
Think about the people in your life. If we have a modicum of awareness, we can see that we are bound to fall short from time to time, especially with people closest to us.
Most of these failures aren’t just failures of empathy, but of insistence on how things should go, how one should be.
One of the hardest things to let go of is our expectations. Our expectations change our relationships with the ones we love. They subtly, and sometimes overtly, impose our will onto other people, even when there were seeds of good intention.
But to let go of our expectations is to experience little deaths. These little “d’s,” as Michael Meade calls them, not only help us prepare for the Big “D”; they also open our hearts to hear and receive the other.
And if we accept the inevitability that from time to time we will love poorly, and others will love us poorly, then we need the practice of forgiveness, both giving and receiving it.
The practice of forgiveness is a gift that we make.
Henri Nouwen said,
Forgiveness is the name of love practiced among people who love poorly. The hard truth is that all people love poorly.
Sometimes we sleep and forget about the argument we had the day before. That may be good for the mind in the short term, but it may not be good for the relationship. Forgiveness is an active, not a passive, act. Seeking forgiveness, first and foremost, requires you to fess up. Forgiveness without confession, even internal confession, is more like forgetting. Forgiving someone is one of the most generous acts.
The Machine does not experience little or big deaths. The Machine does not know how to forgive. Loving poorly and learning to love better, letting go of our expectations, acts of forgiveness and reconciliation: these are the stuff that makes us human. It is a practice of living.
Conclusion
While AI can compute, it does not comprehend.
AI offers representations, not true presence.
AI transacts in information and pseudo connection, not authentic connection.
AI cannot love. You can.
No matter how close AI gets to becoming like us, it can’t be us. There will always be a place for people trying to be effective by using efficient tools, but let’s make sure we do not substitute AI companions for real relationships, no matter how good they seem to be.
In an age of increasingly sophisticated AI, remember the fundamental differences between artificial intelligence and genuine human experience, especially in the realms of comprehension, presence, connection, and love.
Next Read…
Sleeping With The Machine (Part III)
In this third and final part of the Sleeping with the Machine series, we will look at the functional differences between AIs and Humans, and why making a clear distinction between them could be the starting point at this critical juncture in our history.
Crossing Between Worlds is now available in all good bookstores.
Daryl Chow, Ph.D., is the author of The First Kiss, co-author of Better Results, and author of The Write to Recovery, Creating Impact, The Field Guide to Better Results, and his latest book, Crossing Between Worlds.
If you are a helping professional, you might like my other Substack, Frontiers of Psychotherapist Development (FPD).
I find the name OpenAI somewhat confusing and misleading. I thought it was open source and non-profit. I later learned that OpenAI did start out that way, but it has since switched to being closed source and having a for-profit arm.
I wonder if he was present in the audience when Carole Cadwalladr presented.
This book took me a long time. I’ve had it since Dec 2022, and I’m still returning to it.
I have had no luck in tracing the original story. If anyone finds it, please let me know.
Part of this section is taken from my book, Crossing Between Worlds.