Discussion about this post

Benjamin Gawert:

Assisted by AI, but not replaced? That's definitely a 'yes' for the current crop of AI tools, most of which only really work well under very narrow circumstances and with human supervision.

DL is such a tool. Yes, it's great for a wide range of applications, and some of the achievements (such as in cancer diagnostics) are truly revolutionary. But DL only works well where the problem is narrowly defined and the system is kept within its limits; where that's not the case, results tend to become increasingly erroneous. And when this happens, explaining the behavior, let alone fixing it, is quite often beyond even the people who created the network in the first place (the same is true for LLMs, where "hallucinations" are often 'fixed' by limiting possible responses or creating exceptions in the model to prevent them from occurring, not by addressing the underlying fundamental problem that causes them). And let's not forget, that's at a level of intelligence still much closer to the predictive autocorrect function on a modern smartphone than to Cmdr Data of Star Trek fame.

DL is great for what it is, and I'm sure we'll soon see some level of reasoning (human-level? Well, considering how poorly many humans do when it comes to reasoning, I'd say yes), so perhaps we might even see DL networks that can explain to their creators why they failed to deliver correct results. But I have severe doubts about DL being able to reach any level of actual intelligence, and even more about the claims of certain AI proponents that they are close to building AGI or a "super-intelligence" based on this technology.

As for 'replaced by AI', I guess it depends on whether you're talking only about AI as software or also about AI-controlled systems (robots). There are many areas where a human's frailty, physical and cognitive limitations, and tendency toward error impose severe performance and safety restrictions. Other tasks are just outright dangerous for the human worker and better left to a machine. And that's even more true for applications where the anthropomorphic form isn't advantageous for the task at hand.

Unfortunately, AI is still widely seen through the lens of anthropomorphism (i.e., applying human thinking, ideals, fears, and feelings to a system which is inherently different). This culminates in the various doomsday scenarios floating around that describe how AI will exterminate humanity. We have a tendency to impose our own limitations on other things, and by doing so we limit what those things can become (a good example is how humans often treat animals). Unsurprisingly, the results then tend to be limited as well.

John H Kramer:

It is interesting to me that you include creativity, intuition, and ethical considerations as part of human logical thought. I can see how AI pattern matching could simulate intuition, given enough history; creativity too, because much of what I think of as creative is novel pathways, which could be explored more thoroughly by a machine. The ethical considerations are also vulnerable, because so much of what people do rests on a sliding scale of morals. I recall the scene in The Longest Day in which the SS General muses that sometimes he wonders whose side God is on.

Your opinion is an anthropocentric one; after all, we like to think we are special. But if human self-awareness is organism-based and DL is a mimic with higher power and resolution, it should eventually outpace us. If, however, our consciousness is tapped into something greater (i.e., should we believe in God), we have the magic bullet. I don't know yet. I need to read more Spinoza.
