The Problem: Education Serves Two Masters
Modern education does two things at once, and does neither particularly well.
The first function is learning — absorbing knowledge, building mental models, developing skills. The second is selection — sorting people into hierarchies of perceived competence. Grades, standardized tests, admissions processes, degrees — these are filters. They exist so that employers, institutions, and society at large can make quick judgments about who is “qualified.”
The critical insight here is that these two functions have never been properly separated. Selection uses learning ability as its metric, so the two remain entangled. The system technically works — it produces doctors, engineers, scientists — so there’s no urgent institutional pressure to untangle them. But “working” is not the same as “optimal.” A horse-drawn carriage works. That doesn’t mean you shouldn’t build a car.
The learning function, considered on its own terms, is shockingly inefficient. The amount of genuine understanding a person gains through years of formal schooling is dwarfed by what they can learn through a few months of self-directed, project-based work. This isn’t an argument against structure. It’s an observation that the structure we have was designed around the selection function, not the learning one. Lectures exist because one teacher must address thirty students. Semesters exist because institutions need administrative cycles. Grades exist because selection requires ranking. Almost nothing about the traditional format was designed by asking: what is the fastest, deepest way for this individual to understand this concept?
The Analogy: Transportation as a Societal Layer
Consider transportation. Beneath the surface of cars, trains, planes, and highways lies a vast infrastructure — mechanical engineering, energy systems, routing networks, traffic management, urban planning. Taken together, these form a layer of society whose purpose is to accelerate the physical relocation of people and goods.
This layer didn’t appear overnight. It evolved over centuries. Horses gave way to steam, steam to combustion, combustion to electric. Each leap didn’t just make travel faster — it restructured cities, economies, and daily life.
Information has its own layer. The printing press, the postal system, the telephone, radio, television, the internet — each expanded the bandwidth of human communication. But here’s the argument: the current information layer is still primitive relative to what’s possible. The internet gave us access. It solved the problem of availability. What it did not solve is the problem of transfer efficiency — how quickly and completely a piece of knowledge can move from where it exists to where it’s needed, in a form the recipient can actually absorb.
The Core Thesis: Information Transfer Is an Unsolved Problem
We treat communication as a solved problem. We talk, we write, we read. It feels natural. But “natural” doesn’t mean “optimized.”
Think about what actually happens when you try to learn something from a textbook. The author had a mental model. They encoded it into language. You decode that language and attempt to reconstruct their mental model in your own mind. At every stage — encoding, transmission, decoding — there is enormous information loss. The author couldn’t fully articulate what they understood. The medium (text) has limited bandwidth. Your prior knowledge may not have the right scaffolding to receive what’s being transmitted.
This loss is so pervasive that we don’t even notice it. It’s like asking a fish about water. But the loss is real, and it’s massive. Consider how long it takes a student to truly understand calculus through textbooks versus how quickly they could understand it if the explanation were perfectly tailored to their existing knowledge, delivered at their exact pace, using exactly the metaphors and examples that would click for them. The gap between those two scenarios represents the inefficiency we currently accept as normal.
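The compounding nature of that loss can be made concrete with a toy model: treat each stage (encoding, medium, decoding) as passing through only a fraction of what it receives. The fidelity numbers below are purely illustrative assumptions, not measurements.

```python
# Toy model: knowledge transfer as a chain of lossy stages.
# Each stage's fidelity is an illustrative assumption, not measured data.

def transfer_fidelity(stages):
    """Fraction of the original mental model surviving the whole chain."""
    result = 1.0
    for _name, fidelity in stages:
        result *= fidelity
    return result

stages = [
    ("encoding (author -> text)", 0.7),  # author can't fully articulate the model
    ("medium (text bandwidth)",   0.8),  # text flattens pacing, emphasis, diagrams
    ("decoding (reader -> mind)", 0.6),  # reader may lack the right scaffolding
]

print(f"end-to-end fidelity: {transfer_fidelity(stages):.2f}")
```

The point of the sketch is that moderately lossy stages multiply: three stages that each feel "mostly fine" leave only about a third of the original understanding intact.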
The provocative question at the heart of this vision: what if there are deep patterns in language, in how meaning is structured and transmitted, that we haven’t yet discovered or exploited? We’ve been vectorizing text, training neural networks on it, building search engines and recommendation systems around it. But training AI on language is just one approach to handling information. What if there are fundamentally different ways to process, compress, and deliver meaning that we haven’t conceived of yet?
The Vision: An Information Processing Infrastructure
The proposal is not a product. It’s an infrastructure — a new societal layer dedicated to the efficient handling of information, analogous to what the transportation network does for physical movement.
This layer would have three core functions:
Intake. It finds and connects to every channel where information exists — textbooks, research papers, conversations, institutional knowledge, experiential data. Not just digitized text, but the full spectrum of human knowledge in all the forms it currently takes.
Processing. It organizes, cross-references, compresses, and restructures that information. Not just indexing (we already have search engines) but genuine semantic processing — understanding what the information means, how concepts relate, what prerequisite knowledge is required, where contradictions exist.
Output. It delivers processed information to individuals in the form that maximizes their absorption. This is where personalization becomes critical. The same concept needs to be explained differently to a visual learner versus a verbal one, to someone with a physics background versus someone with a humanities background, to a ten-year-old versus a graduate student.
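The three functions above can be sketched as a minimal pipeline. Every name here (Document, Concept, LearnerProfile, intake, process, output) is a hypothetical illustration of the shape of the layer, not an existing system; the processing stage in particular is stubbed out, since genuine semantic processing is the hard, unsolved part.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str

@dataclass
class Concept:
    name: str
    prerequisites: list  # names of concepts that must be learned first

@dataclass
class LearnerProfile:
    background: str
    known_concepts: set

def intake(raw_sources):
    """Intake: normalize every channel into a common document form."""
    return [Document(source=s, text=t) for s, t in raw_sources]

def process(documents):
    """Processing: extract concepts and their prerequisite structure.
    (Stub: real semantic processing would cross-reference and compress;
    here each document simply becomes one concept.)"""
    return [Concept(name=d.source, prerequisites=[]) for d in documents]

def output(concepts, learner):
    """Output: deliver only what this learner doesn't already know."""
    return [c for c in concepts if c.name not in learner.known_concepts]
```

Even this skeleton makes the division of labor visible: intake is a plumbing problem, output is a personalization problem, and everything genuinely novel lives in the processing stage.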
The First Step: The AI Tutor
Every large infrastructure starts with a single application. The interstate highway system was justified by military logistics. The internet began as a research network connecting universities. The information layer starts with education — specifically, an AI tutor.
Why education? Because it’s the domain where the inefficiency of current information transfer is most visible and most costly. Every year, millions of students sit through explanations that don’t match their level, pace, or learning style. The waste is staggering — not just in time, but in human potential that never gets unlocked because the knowledge delivery system failed.
An AI tutor is not a chatbot that answers homework questions. It’s the first implementation of the broader information layer, applied to the specific problem of moving knowledge from where it exists (the full corpus of human understanding) into individual minds as efficiently as possible. It’s a proof of concept for the larger thesis: that information transfer can be radically improved.
The Prediction: The Coming Structural Shift in Education
Within the next 10–15 years, the two functions of education will finally separate.
Universities will survive as research centers and networking environments. The value of putting smart, ambitious people in physical proximity to collaborate on hard problems is real and won’t be replicated digitally anytime soon. The social and professional networks formed at university remain valuable.
K-12 and the gatekeeping functions of higher education will be fundamentally disrupted. When an AI-powered information layer can teach any individual any concept faster and more effectively than a human teacher with thirty other students to manage, the justification for the current system collapses. The selection function will need to find new mechanisms — perhaps portfolio-based assessment, project evaluation, or entirely new metrics we haven’t invented yet.
The end state: people learn continuously, at their own pace, from their own level, through a responsive information layer that adapts to them. Not on a semester schedule. Not through a one-size-fits-all curriculum. Not gated by geography or tuition. Knowledge becomes something you interact with daily, much as you use transportation — it’s just there, it works, and it takes you where you need to go.
The Deeper Question
This vision goes beyond education. Education is the entry point, but the underlying thesis applies to all information transfer in society. Every conversation, every meeting, every report, every news broadcast, every scientific paper — all of it involves encoding meaning into some medium and hoping it arrives intact on the other side. It almost never does.
What if we built infrastructure specifically designed to minimize that loss? Not just faster pipes (we have fiber optic cables), but smarter processing of what flows through them? What if the bottleneck isn’t bandwidth but comprehension — and comprehension can be engineered?
This is not an AI wrapper project. It’s not another EdTech startup. It’s a bet that information transfer is one of the most important unsolved infrastructure problems in human civilization, and that we’re only now developing the tools — AI among them, but not limited to AI — to seriously address it.
The information layer won’t just make learning faster. If it works, it will make thinking faster. And a society that thinks faster changes everything.