I recently read the fascinating account of how Richard Dawkins, the famous author of “The God Delusion” and a highly respected skeptic, spent several days speaking with an artificial intelligence (AI) bot named Claudia and “left with the overwhelming feeling that they are human … These intelligent beings are at least as competent as any evolved organism.”

Consciousness in AI bots is too hazardous a topic for me to wade into. I know the shallow nature of my intellectual power too well to want to engage in that type of discussion. However, I did come away with a lingering question: Why did Dawkins call his AI bot Claudia?

Why did he think it was a girl?

This naturally led to the question of what human form comes to my own mind when I engage with large language models (LLMs) like ChatGPT and Claude. I was shocked to realize that I have been visualizing a white male in his 30s: clean-cut, highly educated, fairly preppy-looking, with little or no facial hair, a full head of light-colored hair, maybe horn-rimmed glasses but no sunglasses … and so on. I was literally picturing an idealized tech bro from Silicon Valley who also models for Ralph Lauren. It’s surprisingly specific.

Why did I visualize that particular gender and look? Was it the mainstream culture (i.e., U.S.) that I am currently embedded in? Was it the types of questions that I usually ask? Was it the language of choice (i.e., English)? Would I have visualized someone different if I worked and lived in Korea, querying the LLM in Korean? Or does that look represent a kind of authority figure in my subconscious?

I am sure that this isn’t just me. What makes this psychologically fascinating is that LLMs are uniquely positioned to trigger human projection. Unlike calculators or search engines, they communicate through natural language. They reassure, summarize, explain, empathize, joke and adapt their tone. Human beings are wired to attach identity to anything. We anthropomorphize pets, cars and weather systems. A conversational machine with apparent expertise almost inevitably becomes socially embodied in our imagination.

We may know intellectually that these systems are not human. We understand that they are trained on vast amounts of text, generated through layers of statistical prediction rather than lived experience. Yet emotionally and psychologically, we often begin assigning them identities, as I did.

In my case, the instinctive image is a white man in his 30s, highly educated, analytical and confident. This assumption may feel random at first, but it likely reveals something important about how authority, intelligence and problem-solving have been socially coded in my version of modern culture.

The uncomfortable truth is that many of us have inherited deeply embedded cultural associations about who gets to occupy the role of “expert.” Intelligence and confidence are often unconsciously visualized through the narrow demographic lens we were exposed to in our formative years. Universities, media institutions, Silicon Valley leadership and public intellectual culture have long centered specific voices as default authorities. As a result, when an AI system responds with confidence, speed and fluency, the brain instinctively fills in a familiar social template.

I don’t mean this essay to be a clichéd diatribe on subconscious bias. We are all products of our environment. We are all “conditioned beings,” as the Buddha would say. But it’s fair to say that AI becomes a mirror, reflecting that social conditioning back to us. The embodiment we choose says as much about us as it does about the machine.

Source: Korea Times News