🇺🇸 Rebooting AI (Gary Marcus, Ernest Davis)
Rebooting AI, New York: Pantheon Books, 2019
Marcus and Davis summarize in this book all the concerns that come to mind with the success of AI tools such as ChatGPT or Stable Diffusion. They claim to be optimistic, but the mood of the book is quite pessimistic.
AI is a tool, an opportunity to create marvellous things. But AI, or rather Deep Learning, is just a statistical tool. Hence, it fails. Always. Within a fixed percentage. Suddenly, like an idiot savant.
AI needs supervision. It doesn’t understand what it is doing. Its knowledge is brittle and can be fooled. It suffers from ethics/bias problems. It’s deep memorising, not deep understanding. It cannot make decisions. It deeply understands a single narrow domain (the one created by the big data that trained it) and under one single context. It must be retrained for slightly different contexts. It hallucinates when you force it into other contexts (other knowledge frameworks, other micro-theories). It relies on existing content. It steals. It’s prone to IP problems. It’s based on commonly held knowledge, but that is neither common sense nor public-domain content.
Is DL hitting a wall? (AI is not…)
On the other hand, the mood of Stephen Wolfram in his long article about GPT is far more optimistic.
I found the relationship between AI and thermodynamics very interesting: GPT doesn’t always choose the highest-ranked word, or it would produce a very ‘flat’ essay; instead it randomly picks lower-ranked words, as often as a ‘temperature’ parameter determines (around 0.8).
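Roughly, that sampling trick looks like the sketch below. The logits and the helper name are made up for illustration, not Wolfram’s code; only the idea of dividing scores by a temperature before sampling comes from his article.

```python
import numpy as np

def sample_with_temperature(logits, temperature=0.8, rng=None):
    """Pick one token index from raw model scores (logits).

    A higher temperature flattens the distribution, so lower-ranked
    words get picked more often; as temperature approaches 0, the
    top-ranked word is chosen almost every time.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability before exponentiating
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy example: scores for four candidate next words.
logits = [4.0, 3.5, 2.0, 0.5]
print(sample_with_temperature(logits, temperature=0.8))
```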
Thermodynamics again in this article from O’Reilly’s Make Magazine about Generative AI:
Many generative AI applications use diffusion model architecture under the hood. Diffusion models are a type of AI algorithm inspired by non-equilibrium thermodynamics. They add random noise to an input image and then learn to reconstruct a new, similar image from noise.
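To picture the first half of that description, here is a minimal sketch of the forward “add random noise” step. The linear noise schedule and its constants are my own illustrative choices, not taken from the article or any particular library.

```python
import numpy as np

def add_noise(image, t, num_steps=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Forward diffusion step: blend the image with Gaussian noise.

    By step t most of the original signal has been replaced by noise;
    the model is then trained to run this process in reverse.
    """
    rng = rng or np.random.default_rng()
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)[t]  # fraction of signal surviving at step t
    noise = rng.standard_normal(image.shape)
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

# Toy example: a tiny 8x8 "image" pushed most of the way toward pure noise.
image = np.ones((8, 8))
noisy = add_noise(image, t=900)
```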
As John Gruber comments in his post “Bing, the Most Exciting Product in Tech”, referring to Wolfram’s article:
Any system complex enough to generate seemingly-original human language and thoughts is by definition too complex for us to truly understand. I find that thought both scary and beautiful.
Yes, AI is, today, scary and beautiful!