Human progress in science and mathematics has classically come from breaking down hard-to-solve complex problems into simpler, easier-to-solve logical pieces. This method, a cornerstone of human reasoning, has enabled us to unravel the mysteries of nature and make astonishing advancements.
I have always found mathematics particularly beautiful for this very reason. It offers a way to distill intricate concepts into fundamental elements that make even the most complex problems approachable. My time earning a Ph.D. in mathematics at the University of Chicago and later studying at the Institute for Advanced Study in Princeton taught me the immense power of this approach. There, I was fortunate to learn from remarkable teachers who could take opaque, formidable problems and reduce them to logical, easy-to-understand chunks. Witnessing this process often reveals the true genius behind the solutions.
It’s also true that the human brain is capable of more than just following logical arguments. We have a remarkable ability to recognize patterns instantly without consciously breaking them into parts. For example, when we see a dog, we don’t think, “It has four legs, fur, a nose, and ears—therefore, it’s a dog.” We know. This effortless recognition extends to hearing a melody or recognizing a scent from childhood. Yet, while these processes are instant and natural, they are not readily harnessed for solving complex, abstract problems.
For decades, computers also lacked this kind of instant recognition. Teaching a machine to identify a picture of a dog, for example, has been one of the most significant challenges in artificial intelligence. But with the advent of Deep Learning, computers have developed a form of this recognition ability. Deep Learning systems, modeled after networks of neurons in the brain, can now take vast amounts of data—such as an image of a dog—and almost instantly recognize it. These systems are powered by immense neural networks that, like the human brain, interact in nonlinear ways, sometimes with billions or even trillions of data elements.
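To make the idea concrete, here is a minimal sketch of what such a network does at its core: input numbers flow through layers of weighted connections and nonlinear activations to produce a recognition score. Everything here (the layer sizes, the random weights, the "dog-ness" interpretation) is purely illustrative; a real deep-learning system learns millions to billions of weights from data rather than using fixed random ones.

```python
import numpy as np

def relu(x):
    # Nonlinear activation: negative values are clipped to zero
    return np.maximum(0, x)

def sigmoid(x):
    # Squashes any number into the range (0, 1), usable as a score
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w1, b1, w2, b2):
    hidden = relu(x @ w1 + b1)        # hidden layer: weighted sum + nonlinearity
    return sigmoid(hidden @ w2 + b2)  # output: a score in (0, 1), e.g. "dog-ness"

rng = np.random.default_rng(0)
x = rng.normal(size=4)                # stand-in for features of an image
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # illustrative, untrained weights
w2, b2 = rng.normal(size=8), 0.0
score = forward(x, w1, b1, w2, b2)
print(float(score))
```

The interplay of many such nonlinear layers is exactly what makes the system powerful and, at the same time, hard to explain: the "reasoning" is smeared across all the weights at once.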
In recent years, Deep Learning has achieved stunning success and is now applied in numerous areas, including chatbots, image recognition, natural language understanding, and translation. The implications are profound: tasks that once required human expertise—such as lawyers reviewing contracts, radiologists reading X-rays, or engineers designing complex systems—are now being performed by AI with remarkable accuracy, sometimes exceeding human accuracy.
What’s most fascinating is that Deep Learning is upending the traditional approach of breaking down problems into logical steps. Whereas a doctor diagnosing cancer from a mammogram might explain her decision using identifiable features like the mass’s rough outline, the presence of a cluster of blood vessels leading to the mass, or microcalcifications in the core of the mass, a Deep Learning system simply analyzes the image and outputs an answer: cancer, or not cancer. Its accuracy can surpass human judgment, yet it cannot explain its reasoning. This phenomenon is why AI is often described as a “black box.” But this black box isn’t entirely alien to us—it mirrors how we human beings intuitively recognize a dog or a familiar face without needing to explain why.
Deep Learning systems operate beyond the boundaries of simplicity and logic. They don’t need to reduce complex problems into understandable pieces. Instead, they process vast datasets and output results using immense computational power. We are now entering an era where human reliance on logic and simplicity is no longer essential for significant breakthroughs. For example, an AI system might identify a potential cure for Parkinson’s disease from a new compound based on patterns within large datasets—but it won’t be able to explain why.
As we move into this new age of superhuman AI capabilities, some worry that AI systems are so advanced that humans may become obsolete in problem-solving. It is true that in situations where vast amounts of data are needed, AI can and will outperform human reasoning.
Yet I believe human logical reasoning will continue to be crucial in solving future problems. It has led us to profound achievements in fields like mathematics, physics, and biology, to name a few. Our creativity, intuition, and ethical considerations allow us to make leaps that transcend data. We can draw inspiration from complex experiences, make logical connections between disparate subjects, create entirely new concepts to solve previously unsolvable problems, understand moral nuances, and comprehend concepts like love and empathy. These capabilities will remain crucial, and I believe that Deep Learning AI will enhance, not replace, our ability to reason.
Some might say I am wrong. There are indications that AI may soon integrate logical reasoning into its systems—most recently with news of OpenAI’s upcoming “Strawberry” release. Maybe that is the beginning of AI having human-level logical reasoning. I’m highly doubtful. I think human logical reasoning will always be essential, assisted by AI, but not replaced.
What’s your opinion?
Your Venture Coach,
Norman
Assisted by AI, but not replaced? That's definitely a 'yes' for the current crop of AI tools, most of which only really work well under very narrow circumstances and under human supervision.
DL is such a tool. Yes, it’s great for a wide range of applications, and some of the achievements (such as in cancer diagnostics) are truly revolutionary. But DL only works well where the problem is narrowly defined and the system stays within its limits; where that’s not the case, results tend to become increasingly erroneous. And when this happens, explaining the behavior, let alone fixing it, is quite often beyond even the people who created the network in the first place. (The same is true for LLMs, where “hallucinations” are often ‘fixed’ by limiting possible responses or adding exceptions to the model to prevent them from occurring, not by addressing the underlying fundamental problem that causes them.) And let’s not forget, that’s at a level of intelligence which is still much closer to the predictive autocorrect function on a modern smartphone than to Cmdr Data of Star Trek fame.
DL is great for what it is, and I'm sure we’ll soon see some level of reasoning (human-level? Well, considering how poorly many humans do when it comes to reasoning, I’d say yes), so perhaps we might even see DL networks that can explain to their creators why they failed to deliver correct results. But I do have severe doubts about DL being able to reach any level of actual intelligence, and even more about the claims of certain AI proponents of being close to building AGI or a “super-intelligence” based on this technology.
As for 'replaced by AI', I guess it depends on whether you're talking only about AI as software or also about AI-controlled systems (robots). There are many areas where a human’s frailty, physical and cognitive limitations, and tendency for errors impose severe performance and safety restrictions. Other tasks are just outright dangerous for the human worker and better left to a machine. And that's even more true for applications where the anthropomorphic form isn’t advantageous for the tasks at hand.
Unfortunately, AI is still widely seen through the lens of anthropomorphism (i.e., applying human thinking, ideals, fears, and feelings to a system which is inherently different). This culminates in the various doomsday scenarios floating around which describe how AI will exterminate humanity. We have a tendency to impose our own limitations on other things, and by doing so we limit what those things can become (a good example is how humans often treat animals). Unsurprisingly, the results then tend to be limited as well.
It is interesting to me that you include creativity, intuition, and ethical considerations as part of human logical thought. I can see how AI pattern matching could simulate intuition, given enough history; creativity too, because much of what I think of as creative is finding novel pathways, which I think a machine could explore more thoroughly. Ethical considerations are also vulnerable, because so much of what people do is based on a sliding scale of morals. I recall the scene in The Longest Day when the SS general muses that sometimes he wonders whose side God is on.
Your opinion is an anthropocentric one; after all, we like to think we are special. But if human self-awareness is organism-based and DL is a mimic with higher power and resolution, it should eventually outpace us. If, however, our consciousness is tapped into something greater (i.e., should we believe in God), we have the magic bullet. I don’t know yet. I need to read more Spinoza.