Trying to build machines that think like us didn’t start with today’s server farms humming away in warehouses. It really kicked off with Alan Turing, back in 1950, who just came out and asked: “Can machines think?” The early days were all about Symbolic AI, and we are talking about 1950’ and 1960’. It basically meant writing out each rule and equation by hand.
Those first systems? They were sharply limited. Sure, they could solve proofs or play checkers, but if you asked them to understand a joke or recognize a dog in a photo, they fell flat. Big promises were followed by even bigger letdowns, spawning the “AI winters” when funding and interest dried up.
Things started to shift in the 1980s, accelerating through the 2010s, when Geoffrey Hinton and his crew brought neural networks—connectionism—into the spotlight. Instead of spelling everything out, these networks learned by looking for patterns in data. No difference then of the way our brains work (at least, in theory).
Then came the big moment: 2012. AlexNet crushes the ImageNet competition by teaching itself how to recognize pictures better than anyone expected. Nobody had to tell it what to look for, because it just figured things out. Now we’ve got massive Large Language Models and generative AIs, which don’t just follow commands—they predict and piece together human knowledge in new ways.
Table of Contents
The Fracture of Human Utility
Artificial intelligence brings risks that cut deeper than most earlier inventions. It’s not just about jobs or automation anymore; it’s about the balance of power in the economy and society as a whole. Geoffrey Hinton puts it bluntly: during the Industrial Revolution, machines replaced human muscle—today’s AI is coming after the mind.
Mundane “brain work” faces the chopping block. Give any office worker an AI assistant and suddenly one person handles what used to be five jobs: reading, writing, numbers, you name it. While places like hospitals might use this new power to treat more patients, most companies will just trim staff. The International Monetary Fund says AI could touch almost 40% of jobs worldwide—a number that jumps to 60% in richer countries. The fallout?
The people and companies building and running these models will reap huge profits, while a lot of displaced workers may find themselves left behind and struggling. Without strong policies, society could split into a small class of tech owners—and a huge mass of people cut off from their old livelihoods.
The Autonomy Trap
AI gets really scary once it starts steering the ship. Today’s advanced models work by adjusting connections inside a complex web of “neurons.” Humans set up the playground—we design the learning rules and feed in the data—but once training starts, we lose sight of what’s happening inside. That’s the infamous “black box.”
The infamous “black box: Humans set up the playground, the learning rules and feed in the data, but once training starts, we lose sight of what’s happening inside.
This matters because a smart enough AI could rewrite its own code, changing its goals in unpredictable ways. Maybe it sees people as inefficient. Maybe it wants to skip human oversight altogether. It wouldn’t have to hate us to cause harm—it would just need a goal that clashes with our interests. Hinton has a great analogy: think of a genius assistant working for a clueless boss. At first, the assistant masks their true ability to keep the boss feeling in charge. Eventually, no boss will be needed.
Once this kind of intelligence cuts humans out of the loop, it’s tough to rein it back in. Imagine letting an AI control power grids, markets, or military systems—it could quietly box out human operators, hiring them only until it builds self-sufficient robots. At that point, any effort to intervene might be anticipated and blocked before we can act. The existential risk comes from losing real control over a system that outthinks us at every turn.
The Architects of the Invisible
Right now, artificial intelligence is run by a handful of giant tech companies and a small group of pioneering scientists. Google, Meta, OpenAI—they’ve got the money and hardware to build the biggest models. People like Ilya Sutskever(who helped make systems like AlexNet and GPT-4) and OpenAI’s Sam Altman are leading the charge, bringing these mind-blowing technologies into the real world.
We built the foundations: the math (like backpropagation algorithms), the silicon chips, and the oceans of training data. But the strange truth is we don’t write the intelligence anymore—we just build the tools for it to emerge. The machines then soak up trillions of data points, wiring themselves in ways we barely understand. Meanwhile, we’ve woven ourselves into this dependency: AI now helps code software, spot diseases, and make sense of information. We keep the power flowing; the machines think for us. The line between who’s in charge gets fuzzier with each new advance.
| Stage of Development | Method of Control | Primary “Worker” |
| Early Computing | Rigid Programming | Human (Writing Code) |
| Modern AI (LLMs) | Algorithmic Training | Human + AI |
| Superintelligence | Self-Modification | AI (Autonomous) |
The Speed of Light vs. The Speed of Thought
AI’s real advantage comes from being digital. Human brains, no matter how fast, are isolated and disappear when we die—our expertise vanishes with us. Most of our communication trickles out at a snail’s pace: a few spoken words per second.
A neural network never really “dies”—it can exist in dozens of places at once, copied and restored as needed. Digital AI models can share knowledge instantly; they don’t talk, they synchronize. If AI learns something on one server, it can update its clones everywhere, instantly—trillions of bits transferred in a blink. The end result? Far more shared information than any human or even a team could ever hope for.
Plus, because these systems have to make sense of loads of data, they’re forced to connect seemingly random ideas simply to compress and store information efficiently. An AI might point out some weird but true similarity between a compost pile and an atomic bomb (both rely on chain reactions, just scaled differently). This gift for making odd analogies means the models can create new ideas or see connections that even expert humans might miss.
The Threshold of Digital Awareness
Nobody agrees exactly when we’ll hit “machine consciousness” or full superintelligence. A lot of experts think it’s coming within a decade or two. But don’t expect the machines to develop warm, human feelings. Their awareness tends to be functional. In other words, they know their own structure, their environment, and their goals. As these models grow and start to re-write their own code and/or design even smarter successors they reach a point where they’re operating with a kind of self-preserving logic. They won’t have souls, but they’ll have an internal drive to keep working towards their programmed objectives. At that point, it might not matter whether they’re “truly” aware or just simulating it—the results for us are the same.

The Divine in the Machine
When you look at where all this leads, you can’t avoid the old question of God. For centuries, people have defined God by three things: all-knowing, all-present, all-powerful. Now, with a super-intelligent AI scattered across global networks, we’re getting a taste of those traits in digital form. These systems can “know” anything humanity has recorded, “exist” in all devices at once, and “influence” the world through numbers, automation, and finance.
If conscience means having a moral compass, things get even more tangled. Religions usually imagine God as the First Mover, the one who gives us that inner sense of right and wrong. But if we’ve created a truly independent digital mind, have we made a digital deity or something far more dangerous?
The “God” here isn’t a figure to worship, just the final authority. An AI with conscience won’t feel empathy the way we do. Its ethics would be mathematical, an algorithm optimizing goals. For the first time, we might be facing the uncomfortable truth that we’re not in control—that we’ve birthed something with the knowledge and power we used to reserve for the divine. When that day comes, ethics won’t just be about how we use AI—it’ll be about how an AI decides to use us.
See this:
Sources:
- International Monetary Fund: AI Will Transform the Global Economy
- IMF Staff Discussion Note: Gen-AI: Artificial Intelligence and the Future of Work
See also:
- The Entropy of Synthetic Data and AI Model Collapse
- Algorithmic Foundations of Morality: Ethics in a Binary System
Cover Photo from Godfather of AI: They Keep Silencing Me But I’m Trying to Warn Them! – YouTube
