I Tried to Be a Data Center but Ended Up on the Floor
What do we lose if we think of ourselves as scalable machines?
Over the last few months I have become obsessed with NotebookLM as a tool for crunching my reading list. Its uncanny ability to read entire books in seconds and give personalized answers to the very questions that put those books on my reading list in the first place keeps fascinating me. My doctor says I should watch my blood fats? No problem, I will dump 20 books and papers on managing hypertension risk into NotebookLM and build myself a best-practice overview, cooking recipes and workout plans. I could create custom tables of contents tailored to my interests, and I could ask things like “What are the top five recommendations mentioned across all sources?” or “What do ALL THE BOOKS say about olive oil?”
I would do the same for all kinds of other challenges: a camping trip to the north of France, a professional pivot, learning to climb. For a while it seemed to me that the only limit to my skill development was the availability of original sources. It was as if the moment in “The Matrix” when Neo downloads kung-fu skills into his mind on the fly to fight the villain in front of him (a scene we reenacted countless times in the schoolyard) had finally become reality with the help of Large Language Models.1
But then I was hanging in the bouldering gym like an unfortunate bug trying to stay on the windshield of a moving car, and I realised the mountains of best-practice bouldering advice the AI had generated for me were not helping at all. On the chalk-dusted floor of the gym, I felt like a fool. Had I really thought that crunching through bouldering books with the help of AI would make me a better climber? How did I end up there?
I assume my thinking was influenced by a prominent idea in the AI bubble: computation at scale trumps everything else.2 While the weird term “hyperscalers” has remained confined to AI-adjacent business jargon, the general idea of “scaling” things with raw computational power has become mainstream, leading to the annoying notion of “10x-ing” everything that even remotely promises some kind of exponential growth. NotebookLM made me believe I could 10x myself by crunching through books the way Keanu Reeves crunches through robot secret agents.
To excuse myself: it is easy to confuse our inner nature with the structure of AI products, because AI companies are currently investing all their marketing power in conflating their products with terminology borrowed from neuroscience. “Reasoning”! “Thinking time”! “Memory” and “Context Knowledge”! These are psychological concepts slapped onto mind-blowingly powerful technology modules, in a way that makes it hard to remember that these were once separate domains. The confusion benefits the companies building AI products because it makes the claim of building human-level intelligence feel tangible, much closer to reality.
But the brain itself is not a modularized “processing device” like a CPU or a graphics card. Instead, it is a complex assembly of evolutionary hacks and optimizations that will probably never be captured by a unified brain theory.3 The vision of instant knowledge acquisition, of replacing time and effort with raw computation, might make sense in the domain of information-processing devices, but not for human minds formed by organs, fluids and proteins.
I am trying to be more critical of this forced amalgamation of artificial intelligence with the way we imagine our own minds. It feels like a new frontier of commodification. By following the paradigms of Silicon Valley’s big companies, we force ourselves into a logic of scaling and commerce, a logic we pay for in many ways, while the material benefits of AI technology remain hard to measure.
Before LLM tools, I often felt I lacked the data to make a point in writing. If only I had the time to read the books on my table, I would surely have the material to back up my most daring claims. Since I started using NotebookLM, I have been drowning in supporting evidence. But more often than not (and certainly while writing this very piece) it is this cerebral scaffolding I have to cut away while finding my voice and opinion. The more I think about these things, the more I realize how embodied my own thinking is. Often I struggle to name an idea in writing because, despite understanding it on a cerebral level, I am not really inhabiting it yet. It takes time to get to the bottom of a complicated concept, and cerebral understanding alone is not enough.
It becomes obvious in the small things. When I try to get my baby to sleep in the middle of the night, it often feels as if my hands are moving on their own. They know how to close those countless buttons on the baby jumper. They know when to calm the baby, when to let it rest. It is as if there is a silent communication between my kid and my hands. In these moments my mind is merely a sleepy witness.
How to avoid getting fooled by Silicon Valley metaphors? I think it helps to realize that the hype around LLMs will eventually fade. “The brain works like a large language model” is not a metaphor built to last; it will perish, maybe sooner rather than later.4
I keep celebrating the things I can do with NotebookLM. It is (next to Obsidian) the most significant addition to my cognitive toolchain in at least a decade. But I have started to differentiate more between the things my brain can do and the things my computer (linked to a continent of server farms in the cloud) can do. Both are great, and using them in combination is amazing, but conflating my brain with a venture-capital-funded data cruncher is a bad idea.
Actually, NotebookLM combines LLM technology (the Google Gemini model) with Retrieval Augmented Generation (see RAG on Wikipedia).
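For the technically curious, here is a minimal sketch of what such a retrieval-augmented setup can look like in principle: the uploaded sources are ranked against a question, and only the best-matching passages are handed to the language model as context. The bag-of-words “embedding” and the assembled prompt below are toy stand-ins of my own, not a description of how NotebookLM actually works.

```python
# Toy RAG loop: retrieve the passages most similar to the question,
# then assemble them into the context an LLM would receive.
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term count (real systems use learned vectors)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank the source passages against the question and keep the top k."""
    q = embed(question)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]


def build_prompt(question: str, passages: list[str]) -> str:
    """Retrieved context plus question: this is what gets sent to the model."""
    context = "\n".join(retrieve(question, passages))
    return f"Context:\n{context}\n\nQuestion: {question}"


passages = [
    "Olive oil is repeatedly recommended as the main added fat.",
    "Bouldering progress depends on footwork drills more than finger strength.",
    "Regular aerobic exercise helps manage hypertension risk.",
]
print(build_prompt("What do the books say about olive oil?", passages))
```

The point of the pattern is simply that the model never reads the whole library at once; it only sees the slices a retriever considers relevant to the question at hand.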
The current frenzy in building data centres goes back to a long-running debate in the AI research community, culminating in the famous 2019 essay “The Bitter Lesson” by Richard Sutton. This recap of decades of AI research concluded that despite numerous attempts to create intelligent systems infused with explicit human knowledge, in the end, architectures prioritizing raw computation over abstract data had always prevailed.
“It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (...) The nature of the brain – simultaneously integrated and composite – may mean that our future understanding will inevitably be fragmented and composed of different explanations for different parts.” From “Why your brain is not a computer” (Matthew Cobb)
There is an ongoing debate about the shortcomings of LLMs versus Reinforcement Learning (which could, in the future, give AI systems the ability to actually learn on the fly by experimentation, without the need for mountains of pirated training data). It shows how we will be able to move on eventually, offering new metaphors for imagining how our minds might actually work. This Dwarkesh interview with Richard Sutton (the author of “The Bitter Lesson”) gives a great overview:



