Moravec's paradox
According to Wikipedia, Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources.
In the post "Formulas for knowledge representation", I stated that "intelligence does not compute the ideal answer, it applies the substitution method and uses the most fitting among the available candidates as the answer". "the substitution method ... among the available candidates" may provide an insight into the Moravec's paradox and even about "sensorimotor and perception skills". It will become clear if we consider the number of "available candidates" at each level.
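To make the idea concrete, here is a minimal sketch in Python (my own illustration, not code from that post): instead of computing an ideal answer, pick the best-scoring candidate from a finite set. The `fitness` function and the candidate list are hypothetical placeholders.

```python
def substitute(query, candidates, fitness):
    """Return the most fitting available candidate instead of
    computing an ideal answer from scratch."""
    return max(candidates, key=lambda c: fitness(query, c))

# Toy usage: approximate a target value by the closest stored answer.
stored_answers = [0.0, 0.5, 1.0, 2.0]              # the "available candidates"
closeness = lambda q, c: -abs(q - c)               # higher means a better fit
print(substitute(0.7, stored_answers, closeness))  # -> 0.5
```

The point of the sketch: the cost of the method grows with the number of candidates, which is exactly the quantity compared below.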
At the level of perception, the high resolution of our sensors produces a huge number of features to differentiate. The number of possible trajectories for our effectors is also huge. Compared to those numbers, the number of high-level objects to track while reasoning is negligible. Hence, Moravec's paradox.
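A back-of-envelope calculation shows the gap. The figures below are rough, commonly cited estimates, not numbers from the original post.

```python
# Candidates at the perception level vs. the reasoning level
# (rough, commonly cited figures, used only for the order of magnitude).
photoreceptors_per_retina = 1.2e8  # ~120 million rods and cones
working_memory_items = 7           # Miller's "seven, plus or minus two"

ratio = photoreceptors_per_retina / working_memory_items
print(f"perception has ~{ratio:.0e} times more items to process")  # ~2e+07
```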
Most likely, even the brain's huge computing power is insufficient to process those numbers related to "sensorimotor and perception skills" directly. It may discretize the perceived continuum to impose a limit on them. The "quanta" of perception are small enough that the signals from our senses still feel smooth.
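A small sketch of that discretization idea (an illustration of quantization in general, not a model of the brain): with a fine enough step, a continuous signal collapses to a limited set of levels yet still looks smooth.

```python
import math

def quantize(x, step):
    """Discretize a continuous value: only multiples of `step` survive,
    so the number of distinguishable levels becomes finite."""
    return round(x / step) * step

# A smooth signal, quantized with a tiny step: the levels are discrete,
# but the curve would still be perceived as smooth.
signal = [math.sin(t / 10) for t in range(100)]
quantized = [quantize(s, step=0.01) for s in signal]
print(f"{len(set(quantized))} distinct levels cover the whole signal")
```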
This huge difference in the number of items to process at each level is one of the reasons why I consider language to be "simple" and focus on NLU. I have already explained that language is a fully functional model of intelligence. Therefore, AGI as a text chatbot is perfectly possible.
AGI is nothing impressive
Many AI experts believe in "superintelligence". They believe that AGI will be able to solve extraordinary problems, yet they forget to look at humans. Even those experts who try to achieve "human-level" intelligence do not have a clear picture of what that level is.
To make fair estimates, strip human intelligence of all its tools, culture, and education. Leave a baby with gorillas and observe what "Tarzan" will accomplish in his lifetime. Don't get me wrong, I am not calling for that experiment to be performed; I hope we are all capable of thought experiments. What I am saying is: have realistic expectations about the core AGI algorithm.
Really, there is nothing impressive about AGI. Humans have accumulated vast knowledge - that is impressive. At some point, we will have AGI-powered agents capable of handling significant pieces of that massive knowledge. Until then, they will be specialized, with the ability to exchange information fast. Definitely not a threat to humanity at large.
What the core AGI abilities are is a totally different story. In my previous posts, I explained that they are not problem-solving, adaptation, reasoning, or learning. The core AGI abilities are based on differentiation, representing knowledge as formulas, and asking questions about any part of any formula, as the sketch below illustrates. As I said, nothing impressive. Not today.
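Here is a minimal sketch of that last pair of abilities, under my own reading of the posts (the representation and the names are hypothetical, not the author's code): knowledge is stored as formulas with named parts, and a question is a formula with one part left blank.

```python
# Knowledge as formulas with named parts; a question blanks out any part.
knowledge = [
    {"subject": "water", "relation": "boils_at",   "object": "100 C"},
    {"subject": "water", "relation": "freezes_at", "object": "0 C"},
]

def ask(question):
    """Find a formula matching the filled slots and return the blank one."""
    blank = next(k for k, v in question.items() if v is None)
    for formula in knowledge:
        if all(formula[k] == v for k, v in question.items() if v is not None):
            return formula[blank]
    return None

# Any part of any formula can be questioned: the object...
print(ask({"subject": "water", "relation": "boils_at", "object": None}))  # 100 C
# ...or the relation itself.
print(ask({"subject": "water", "relation": None, "object": "0 C"}))       # freezes_at
```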
Is Moravec's paradox even a thing, though? The researchers who formulated it never defined what reasoning is in the first place. Not with a good enough definition, anyway.
So, how does the paradox square with birds, insects, and other animals that have excellent motor skills with their tiny brains, and not much reasoning?