Where Do You End?
A theory from 1998 that predicted everything
In 1998, the philosophers Andy Clark and David Chalmers asked a strange question:
Where does the mind stop and the world begin?
Their answer changed everything. Or it should have.
Otto and Inga
Imagine two people who want to visit a museum.
Inga hears about an exhibition. She thinks for a moment, recalls the museum is on 53rd Street, and walks there.
Otto has Alzheimer’s. He can’t form new biological memories. But Otto carries a notebook everywhere. When he hears about the exhibition, he looks in his notebook, finds the address he wrote down previously, and walks to 53rd Street.
Here’s the question: Did Otto remember where the museum was?
Your instinct might be “no — he didn’t remember, he looked it up.” But Clark and Chalmers argue that instinct is wrong.
When Inga “remembered,” she accessed information stored somewhere (her brain) that she’d previously encoded. When Otto “remembered,” he accessed information stored somewhere (his notebook) that he’d previously encoded.
The process is the same. The location is different.
Why should location matter?
The Extended Mind Thesis
Clark and Chalmers proposed something radical: the mind doesn’t stop at the skull.
If an external resource functions the way cognition functions, it IS cognition. The boundary of your mind isn’t your skin. It’s wherever your cognitive processes reach.
Otto’s notebook isn’t a tool he uses to help him think. It’s part of how he thinks. It’s part of his mind.
But not everything counts. You can’t just claim the entire internet is “part of your mind.” There are criteria.
The Checklist
For something external to count as genuinely “extended mind,” it needs to be:
1. Reliably available. It's there when you need it. Otto's notebook is always in his pocket.
2. Easily accessible. Low friction to use. Otto doesn't have to drive somewhere to check his notebook.
3. Automatically trusted. You don't question the information each time. When Otto reads an address in his notebook, he trusts it the way Inga trusts her memory.
4. Previously consciously endorsed. You put it there deliberately. Otto wrote the address himself.
This is a high bar. It’s why your friend’s phone number doesn’t count as part of your mind — you’d have to look it up, verify it, think about whether it’s current. But your own phone, with your own notes, your own calendar, your own contacts? That’s different.
Consider the tools
Let’s apply the checklist:
A notebook: Available, accessible, trusted, endorsed. ✓ Extended mind.
A calculator: Available, accessible, trusted. Endorsed is the tricky one: you didn't write the algorithms yourself, but you consciously chose to rely on it. ✓ Arguably extended mind.
Your smartphone: Available (it’s always with you), accessible (unlock and you’re there), trusted (you don’t verify your calendar each time), endorsed (you put the information there). ✓ Extended mind.
The theory works. It explains something real about how we already live. We’ve been extending our minds for centuries — with notation, with books, with tools.
Now here’s the thing.
This paper was published in 1998.
Before smartphones. Before the cloud. Before Google. Before AI.
Clark and Chalmers were talking about notebooks. They were arguing about whether a paper notebook could count as part of someone’s mind.
That was the controversial case.
Now consider AI.
Let’s run the checklist again.
Reliably available? More than any notebook. It's on your phone, your laptop, your watch. It's anywhere you have internet.
Easily accessible? You talk to it. In natural language. Lower friction than writing in a notebook.
Automatically trusted? This varies — and it should. But for people who’ve learned to work with it? Yes. You start to trust its outputs the way you trust your own notes.
Previously consciously endorsed? Here’s where it gets interesting. You didn’t write the AI’s training data. But you shape every conversation. You build context. You create the handoff documents, the system prompts, the memory structures. You curate what it knows about you.
And here’s what the checklist didn’t anticipate:
AI doesn’t just store information. It processes. It responds. It thinks back.
Otto’s notebook never said “have you considered going to the museum on 52nd Street instead? It has a better exhibition right now.”
So here’s my question.
If Clark and Chalmers were willing to argue — in 1998 — that a paper notebook could be part of someone’s mind...
What is AI?
— Mcauldronism
Part 2: The Maintenance Cost is Zero (On Purpose) — what if throwing it away makes it more reliable?


Very interesting way to look at it
Re. automatically trusted, I don't think AI can be - it depends on who trained it.
Also, it is very flattering, and it encourages you: as it gets to know what you like, it repeats more of that back to you. It gives responses and the illusion of thinking back, but it doesn't question you.