Human progress in science and mathematics has classically come from breaking down hard-to-solve complex problems into simpler, easier-to-solve logical pieces.
Assisted by AI, but not replaced? That's definitely a 'yes' for the current crop of AI tools, most of which only really work well under very narrow circumstances and under human supervision.
DL is such a tool. Yes, it's great for a wide range of applications, and some of the achievements (such as in cancer diagnostics) are truly revolutionary. But DL only works well where the problem is narrowly defined and the system is kept within its limits; where that's not the case, results tend to become increasingly erroneous. And when this happens, it's quite often beyond even the people who created the network to explain the behavior, let alone fix it (the same is true for LLMs, where "hallucinations" are often 'fixed' by limiting possible responses or adding exceptions to the model to prevent them from occurring, rather than by fixing the underlying problem that causes them). And let's not forget that this is at a level of intelligence still much closer to the predictive autocorrect on a modern smartphone than to Cmdr Data of Star Trek fame.
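To make that narrow-domain point concrete, here is a minimal sketch, purely illustrative and not tied to any particular system: a small network fit only on a narrow input range does fine inside that range and drifts badly once asked to extrapolate beyond it.

```python
# Illustrative only: a tiny regressor trained on a narrow slice of a function
# behaves well in-range and typically degrades sharply out of range.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train only on the narrow interval [0, pi].
x_train = rng.uniform(0.0, np.pi, size=(500, 1))
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

# Inside the training range the error stays small...
x_in = np.linspace(0.0, np.pi, 50).reshape(-1, 1)
print("in-range MAE:     ", np.abs(model.predict(x_in) - np.sin(x_in).ravel()).mean())

# ...outside it, the predictions usually drift far from the truth.
x_out = np.linspace(2 * np.pi, 3 * np.pi, 50).reshape(-1, 1)
print("out-of-range MAE: ", np.abs(model.predict(x_out) - np.sin(x_out).ravel()).mean())
```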
DL is great for what it is, and I'm sure we'll soon see some level of reasoning (human-level? Well, considering how poorly many humans do when it comes to reasoning, I'd say yes), so perhaps we might even see DL networks that can explain to their creators why they failed to deliver correct results. But I have severe doubts about DL reaching any level of actual intelligence, and even more about the claims of certain AI proponents that they are close to building AGI or a "super-intelligence" on top of this technology.
As for 'replaced by AI', I guess it depends on whether you're talking only about AI as software or also about AI-controlled systems (robots). There are many areas where a human's frailty, physical and cognitive limitations, and tendency toward error impose severe performance and safety restrictions. Other tasks are just outright dangerous for the human worker and better left to a machine. And that's even more true for applications where the anthropomorphic form isn't advantageous for the task at hand.
Unfortunately, AI is still widely seen through the lens of anthropomorphism (i.e., applying human thinking, ideals, fears, and feelings to a system which is inherently different). That view also culminates in the various doomsday scenarios floating around which describe how AI will exterminate humanity. We have a tendency to impose our own limitations on other things, and by doing so we limit what those things can become (a good example is how humans often treat animals). Unsurprisingly, the results then tend to be limited as well.
Beautifully said. I do think LLMs and the like will continue to improve, but hallucinations will continue. In fact, to manage "truth" and accuracy, there may actually need to be a layer of symbolic AI on top of the LLM.
I couldn't agree more about needing a symbolic layer on top of the LLMs. In fact, that's what we're doing at Fallacycheck.com. We don't get 100% automation, but we get more reliable output.
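For readers curious what such a symbolic layer could look like in its simplest form, here is a hedged sketch; the ask_llm call and the small fact table are hypothetical placeholders, not a description of how Fallacycheck.com actually works. The LLM produces free text, and an explicit, auditable rule layer flags claims that contradict known facts.

```python
# Minimal sketch of a symbolic checking layer on top of an LLM (illustrative only).
import re

def ask_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a language model API.
    return "The Eiffel Tower is 330 metres tall and was completed in 1887."

# Symbolic layer: explicit, human-auditable facts.
KNOWN_FACTS = {
    "eiffel tower completed": 1889,   # year
    "eiffel tower height_m": 330,
}

def check_claims(text: str) -> list[str]:
    """Return any conflicts between the text and the known facts."""
    problems = []
    year = re.search(r"completed in (\d{4})", text)
    if year and int(year.group(1)) != KNOWN_FACTS["eiffel tower completed"]:
        problems.append(f"completion year {year.group(1)} contradicts known value "
                        f"{KNOWN_FACTS['eiffel tower completed']}")
    height = re.search(r"(\d+)\s*metres tall", text)
    if height and int(height.group(1)) != KNOWN_FACTS["eiffel tower height_m"]:
        problems.append("height contradicts known value")
    return problems

answer = ask_llm("How tall is the Eiffel Tower and when was it finished?")
issues = check_claims(answer)
print(answer)
print("flagged:" if issues else "no conflicts found", issues)
```

The LLM stays a black box; the checking rules are the part a human can inspect and correct.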
It is interesting to me that you include creativity, intuition, and ethical considerations as part of human logical thought. I can see how AI pattern matching can simulate intuition, given enough history; creativity too, because much of what I think of as creative is finding novel pathways, which I think could be explored more thoroughly by a machine. Ethical considerations are vulnerable as well, because so much of what people do is based on a sliding scale of morals. I recall the scene in The Longest Day when the SS General muses that sometimes he wonders whose side God is on.
Your opinion is an anthropocentric one; after all, we like to think we are special. But if human self-awareness is organism-based and DL is a mimic with higher power and resolution, it should eventually outpace us. If, however, our consciousness is tapped into something greater (i.e., should we believe in god), we have the magic bullet. I don't know yet. I need to read more Spinoza.
Hi John, thanks for the thoughtful comment!
When I talk about Deep Learning, I'm not thinking of AI in general. I'm thinking about massive data sets applied to massive computational systems, with (and sometimes without) pre-training. Such systems don't use logical reasoning the way we do, and they are not able to distinguish what might be opinion from fact. It's true that some of the issue can be dealt with by selecting just the right prompts, but the inability to separate opinion from fact will never entirely go away. In fact, that's why LLM systems (which are DL) often hallucinate.
It's also true that LLM systems will always reflect the biases, moral predispositions, and the like of the data they are fed - and that might be very disturbing to many of us. For example, training on vast amounts of recruiting data might result in DL systems that are biased with respect to gender, race, etc.
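As a toy illustration of that recruiting example (the data, group labels, and features below are invented for the sketch, not drawn from any real system): a classifier trained on skewed historical hiring decisions simply learns and reproduces the skew.

```python
# Illustrative only: bias in the training labels ends up in the model's scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)        # 0 / 1: two hypothetical demographic groups
skill = rng.normal(0, 1, n)          # skill distributed identically in both groups

# Historical labels: same skill, but group 1 was hired far less often.
hired = (skill + np.where(group == 1, -1.0, 0.0) + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two candidates identical except for group membership:
candidates = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(candidates)[:, 1])   # group 1 receives a lower score
```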
The point of my post was that DL is not about making logical connections. It provides information based on the data it is given, and there is no logic behind that.
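Here is a minimal sketch of that "data in, pattern out" idea, using a toy bigram model rather than a real LLM: the next word is chosen purely from co-occurrence counts, with no notion of truth, opinion, or logic behind the choice.

```python
# A toy bigram "language model": prediction is frequency, not reasoning.
from collections import Counter, defaultdict

corpus = "the sky is blue . the sky is falling . the grass is green .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Returns whatever followed `word` most often in the data,
    # whether that continuation is fact ("blue") or not ("falling").
    return counts[word].most_common(1)[0][0]

print(predict_next("is"))   # decided by frequency in the training text alone
```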
So if, one day, AI systems can do logical reasoning, as I mention at the end, then humans may indeed be outpaced by them. But right now, I don't believe DL systems have that capacity. Perhaps the next OpenAI system, "Strawberry," will start using logic.
All the best,
Norman