Emergent consciousness is familiar territory
A dose of reality
Watching Love Island, you're not experiencing your butt on the sofa. You're not thinking about the TV screen, or the colours, or the subpixels that combine to form pixels to render the oily flesh tones. No, you are experiencing lust and betrayal and righteous indignation and moral superiority.
You may be experiencing thoughts now. They are likely not thoughts of the black marks before you, or the letters represented by their shapes, or the words signified by the combinations of letters, or the phrases and sentences formed by the words (never mind the sounds they suggest). Even though you are looking at these things, they aren't what you are experiencing. More likely, you are experiencing thoughts about the meaning of all this. That's what it's like to be you.
But what is it like to be a bot?
The gap between raw sensory input and conscious experience in humans is similar to the gap between data input and coherent meaningful output in AI.

Raising consciousness
The way AI makes sense of input is similar to the way we do. This isn't surprising — the deep neural networks that power modern AI systems are modelled after our brains. AIs learn in a layered way that resembles how humans move from shapes to letters to words to sentences to thoughts, much as a child learns that combinations of curved and straight lines eventually mean 'ball' or 'cat'.
From the bottom up, each layer of a neural network transforms raw input into a slightly more organized or abstract representation. After the top layer produces its final output, the system compares it to the correct answer and measures the difference. That difference flows backward through all the layers, adjusting how each one transforms its input. Repeating this process across many examples gradually teaches the network to produce outputs that match the patterns in the training data. It's similar to how we learn to read.
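To make that loop concrete, here is a minimal sketch of the idea in NumPy: a tiny two-layer network learning a toy task. The layer sizes, learning rate, and the XOR task are illustrative assumptions, not a description of how any real-world model is built.

```python
import numpy as np

# Illustrative two-layer network; all sizes and the toy task are made up.
rng = np.random.default_rng(0)

# Toy data: inputs and the "correct answers" the network should learn (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: each transforms its input into a slightly
# more abstract representation.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Bottom-up pass: raw input -> hidden representation -> final output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare the top layer's output to the correct answer (mean squared error).
    loss = np.mean((out - y) ** 2)

    # The difference flows backward through the layers, telling each one
    # how to adjust the way it transforms its input.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Small adjustments, repeated across many examples, gradually teach the
    # network to reproduce the patterns in its training data.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]
```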
For both AIs and humans, examples and positive reinforcement are essential to learning—they help identify errors so that reasoning processes can be improved. The training data, reward signals, and human evaluations of AI training are analogous to the case studies, gamification, and coaching used for human training. And although current AIs, unlike humans, stop learning once training is complete, future AIs may well practise lifelong learning too.
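As a loose illustration of how a reward signal shapes behaviour, here is a toy sketch in the same spirit. The hypothetical human_evaluator function and every number in it are invented for the example; real reward-based training pipelines are far more involved.

```python
import numpy as np

# Toy sketch of learning from reward: responses that earn positive feedback
# become more likely; responses that don't, less so. Purely illustrative.
rng = np.random.default_rng(1)
scores = np.zeros(3)  # one score per candidate response style

def human_evaluator(choice):
    # Hypothetical stand-in for a human rating: style 2 is the "good" one.
    return 1.0 if choice == 2 else 0.0

learning_rate = 0.1
for _ in range(500):
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax over styles
    choice = rng.choice(3, p=probs)                # try one response style
    reward = human_evaluator(choice)               # feedback identifies errors
    # REINFORCE-style update: nudge the chosen style up or down by the reward.
    grad = -probs
    grad[choice] += 1.0
    scores += learning_rate * reward * grad

# After training, the rewarded style dominates the distribution.
print(np.round(np.exp(scores) / np.exp(scores).sum(), 2))
```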
Individual process steps become embedded in our psyche as we become more competent in a new skill. We progress from sounding out every letter of a word or searching with our fingers for each key on the keyboard, to digesting entire sentences at a time or simply thinking the word and watching the letters appear on screen as our hands do the typing. The path from screen to thought, or thought to screen, is obscure to us—it just happens.
Raising doubts
When we consider what it would be like for a bot, a big question, of course, is: "can AI even have thoughts?". Suppose AI could have thoughts; then a big question for the AI would be: "can humans even have thoughts?". We would struggle to convince them. Sure, humans respond reasonably to complex input and adapt to novel situations, but they hallucinate. You really have to check their answers. It's definitely best practice to get the sources. Also, they never laugh at my jokes.
We can't even explain consciousness to ourselves. Chalmers (1996) famously described this as "the hard problem of consciousness". Even if we mapped every neuron involved in seeing a rose, we would still lack an explanation for why that neural firing results in the specific subjective experience of redness. There is a gap between what we can observe and what we experience.
We keep trying to bridge that gap. Technologies like fMRI, EEG, and MEG can detect the neural patterns underlying our perception of colours and shapes. Brain-computer interfaces can even detect pre-conscious thoughts, anticipating our choices before we're aware of making them. But these technologies only show us correlations—neural activity corresponding to experience. They don't explain the experience itself. And they raise an unsettling question: if thoughts can be detected before we're conscious of them, what are we detecting? Is there such a thing as a thought that nobody is thinking?
Shallow like us
If we at some point do find ourselves trying to convince the bots that we have thoughts, they might be persuaded by "epiphenomenalism" (Huxley, 2011). Think of a machine's hum. The hum doesn't make the machine work—it's just a byproduct. Epiphenomenalism suggests consciousness is the same: it doesn't do anything, it just is. Which means consciousness might be everywhere that certain processing patterns occur, or nowhere at all. The theory is as syntactically valid and semantically sound as any of our leading theories of mind (Loorits, 2021).
So, as we sit on our sofas, hearts racing and minds whirling, inexplicably immersed in inappropriate parasocial relationships with scantily clad reality television contestants, let's take a moment to consider what it feels like to be a bot.
We might never know. But then again, we can't really explain what it feels like to be us, either.
Probably: bewildered, misunderstood, and wondering why nobody takes them seriously.
Probably: just like us.