
After 30 years in IT, I can finally say: computers can compute
I was eight years old when I first met a computer that could talk back, and it wasn’t Teddy Ruxpin. It was 1979, and the Radio Shack TRS-80 sitting on our living room table communicated only in green phosphorescent text. But after eight days of transcribing code from 101 Computer Programs You Can Write Yourself, something extraordinary happened: I had created life.
Well, digital life. A blinking dot navigating ski gates on a monochrome screen. When I demonstrated my creation to friends, expecting wonder, I got adolescent dismissal: “This sucks. Let’s go throw a football.” But I was transfixed. For the first time, I had made a machine do something it hadn’t been explicitly programmed to do by someone else. The seed of possibility was planted.
That seed would germinate slowly. By the time Atari and Nintendo arrived, I was already moving away from games toward something that felt more substantial: the infrastructure of digital communication. I became what we called an “IT guy,” though the term barely existed then. My job was to make machines (Macs, PCs, and Unix boxes) talk to each other, stringing Ethernet cables through ceiling tiles, configuring NetWare servers, teaching Windows to find printers. It was thrilling work, genuinely revolutionary, but always with the nagging sense that we were building elaborate workarounds for fundamentally stupid machines.
I wanted C-3PO. I got file servers and a RAID
For thirty years, this was the central frustration of working with computers: they could process information with incredible speed and accuracy, but they couldn’t truly compute. They were digital assembly lines, endlessly sophisticated but ultimately mechanical. Ask a computer to find every document containing “Q3 P&L” and it would search filename strings and document text. Ask it to find “everything related to our disappointing fall performance,” and it would stare back with the blank incomprehension of a very fast filing cabinet (No results found…).
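Here’s a toy sketch of what I mean by a very fast filing cabinet. The filenames, contents, and queries below are invented for illustration, but the behavior is the one every IT guy of my generation knows by heart:

```python
# A toy illustration of literal-minded search: it matches exact strings,
# not meaning. (The documents and queries below are made up.)

documents = {
    "q3_pnl.xlsx": "Q3 P&L summary: revenue down 12% against forecast",
    "fall_recap.docx": "Fall campaign underperformed across every region",
    "memo_oct.txt": "October memo on missed targets and next steps",
}

def keyword_search(query: str) -> list[str]:
    """Return documents whose name or text contains the query, verbatim."""
    q = query.lower()
    return [name for name, text in documents.items()
            if q in name.lower() or q in text.lower()]

print(keyword_search("Q3 P&L"))
# -> ['q3_pnl.xlsx']   (the exact string appears in the document text)

print(keyword_search("disappointing fall performance"))
# -> []                (no results found, even though two of the three
#                       files are about exactly that)
```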
The science fiction I’d grown up with (The Jetsons, Asimov’s robots, Clarke’s HAL, Lucas’s droids) had promised something different: machines that could understand context, make connections, reason through problems. Machines that could, in the deepest sense of the word, think. What we got instead were faster ways to do the same digital paperwork.
Then, in 2021, something shifted
The early AI systems were clunky, often comically wrong, but they exhibited a quality I’d never seen in software before: they seemed to understand what I was asking for, not just what I was searching for. When I typed “Write a memo about Q3 performance that sounds optimistic but acknowledges the challenges,” the system didn’t search for keywords. It composed. It understood tone, context, and intent in ways that felt genuinely alien after decades of literal-minded computing.
For the first time since that TRS-80, I felt I was glimpsing the future I’d been promised as a child
What’s remarkable isn’t that AI can write or code or analyze, though it can do all those things with startling competence as well as incompetence. What’s remarkable is that it represents the first fundamental shift in what computers can do since the graphical interface. For fifty years, we’ve been incrementally improving the same basic paradigm: humans give explicit instructions, machines execute them precisely. AI breaks that paradigm. It can work from incomplete instructions, infer intent, and generate solutions to problems it’s never seen before, and the more context it has, the more useful it becomes.
This is what I mean when I say that computers can finally compute. Not just process, but reason. Now, I understand it’s just probability happening real fast, but what is reasoning, anyway?
The irony, of course, is that this breakthrough arrives just as millions of people are treating AI like magic. Those of us who’ve worked with these systems understand the mechanics: large language models trained on vast datasets, predicting the most likely next word in a sequence. Sophisticated statistics running on very fast hardware. But understanding the mechanics doesn’t diminish the achievement. After all, we understand the mechanics of flight, but airplanes still seem miraculous when you’re thirty thousand feet above the ground.
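For the curious, here’s that mechanic in miniature: count which word tends to follow which in a scrap of text, then keep predicting the likeliest next word. The corpus and code below are invented for illustration; real models learn billions of parameters over tokens rather than a handful of bigram counts, but the core loop is the same idea.

```python
# A toy next-word predictor: count bigrams in a tiny corpus, then repeatedly
# choose the most likely next word. (Corpus is made up for the example.)
from collections import Counter, defaultdict

corpus = (
    "the quarterly report shows revenue fell short of the forecast and "
    "the team shows a plan to recover revenue next quarter"
).split()

# Count how often each word follows each other word.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, or a stop marker."""
    if word not in follows:
        return "<end>"
    return follows[word].most_common(1)[0][0]

word = "the"
for _ in range(6):
    print(word, end=" ")
    word = predict_next(word)
# Prints: "the quarterly report shows revenue fell"
```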
The deeper irony is what this computational breakthrough means for human work. Every previous technological revolution automated physical or routine labor: the cotton gin, the assembly line, the electronic spreadsheet. AI automates cognitive tasks. It doesn’t replace thinking so much as augment it, which means everyone will be expected to think faster, deeper, and more creatively than before.
Consider what happened to accounting when VisiCalc, the first electronic spreadsheet, arrived in 1979. Suddenly, financial analysis that once took weeks could be completed in hours. Did this eliminate accounting jobs? Not exactly. It eliminated routine bookkeeping and raised expectations for what accountants should accomplish. Financial modeling became standard. Strategic analysis became expected. The bar didn’t just move, it launched into orbit.
AI promises to do the same thing across virtually every knowledge profession. The baseline for what constitutes adequate work is about to shift dramatically upward. In a world where AI can generate a competent first draft of almost anything, “competent” becomes the new zero. The question isn’t whether AI will replace human workers, but whether human workers can adapt to a world where AI amplifies their output expectations by an order of magnitude.
I find myself both exhilarated and cautious about this future. The child in me who dreamed of R2-D2 and C-3PO is thrilled to finally have conversational, helpful machines. The adult who’s spent three decades debugging network protocols is wary of solutions that work beautifully until they don’t.
More fundamentally, I worry we’re approaching the computational future we dreamed of just as we’re losing faith in technological progress itself. The same cultural moment that’s delivering AI is also producing Neuralink, metaverse evangelism, and cryptocurrency schemes. The promise of better tools is getting tangled up with promises of transcendence and transformation that feel more like marketing than substance.
Maybe that’s appropriate. Every transformative technology arrives carrying both genuine utility and inflated promises. The personal computer was supposed to democratize information and empower individuals, and it kinda did, while also enabling pathological social convergence, surveillance capitalism, and social media addiction. The internet was supposed to connect humanity in unprecedented understanding, and for a minute it seemed to, before the boomerang came back and fragmented us into algorithmic echo chambers.
AI will likely follow the same pattern: genuinely useful capabilities wrapped in overblown expectations, delivering real value while creating new problems we can’t yet imagine. I can’t wait!?
But for now, forty-five years after my first encounter with that TRS-80, I’m mostly grateful. Not because AI represents the singularity or the solution to human limitations, but because it finally feels like the beginning of actual computing. Machines that can understand context, work with ambiguity, and generate original solutions to new problems. It’s also my new alternative to “try rebooting” when someone calls me with a tech question.
The future I’ve been waiting for isn’t here yet. But for the first time, I can see it loading. What’s your experience with AI changing expectations in your field?