Artificial Intelligence is the Holy Grail of the information age. I use the phrase "Holy Grail" deliberately, because the phrase itself is the result of an utter misunderstanding. According to the book "Holy Blood, Holy Grail: The Secret History of Christ & The Shocking Legacy of the Grail" by Michael Baigent, Richard Leigh, and Henry Lincoln, it all started with the unspaced word "sangreal". It was natural at the time to separate out "san", which was read as "holy". That left the gibberish "greal", so interpreters came up with a brilliant solution: let it be a cup! What the writer originally meant was "sang real", which in Old French means "royal blood". The rest of the story is in the book. It is worth reading, and I highly recommend it.
How does that relate to Artificial Intelligence? It, too, is misunderstood. The example above shows how deceptive assumptions can be, even natural ones. There have been many wrong assumptions about intelligence. Persistence is generally good, but persistence in following a wrong path is fatal. I propose to cut losses fast and search for new paradigms. We need to change angles, test our approaches, and learn from our failures.
We had a failure with Good Old-Fashioned Artificial Intelligence (GOFAI), so we decided to switch to machine learning. It created value above its cost but did not solve intelligence. According to a quote widely attributed to Albert Einstein, "insanity is doing the same thing over and over again but expecting different results". Let's be reasonable and switch once again. We have learned the lessons of GOFAI and of statistical methods. It is time for new assumptions.
Get ready! I will be describing my understanding of intelligence. It is purely symbolic, but I do not use knowledge graphs or first-order predicate logic (FOPL). I approach intelligence via natural language understanding (NLU). So I am going to solve not only intelligence but also language. Not much, but then I am a one-man team after all!
Let's convert "Holy" into "real"!
Intelligence from a New Angle
Consider an ordinary day in our life. We wake up and start the day by recognizing our state and needs/desires. We form goals, build plans, and start implementing them. We solve tasks, meet obstacles, and adapt to new circumstances. We interact with people, animals, plants, objects, and substances. We perform actions and observe actions performed by others. We analyze the achieved results and reason about their unobserved causes.
All of the above is facilitated by our intelligence. It is natural to assume and expect those to be defining features or components of intelligence. And indeed we see definitions of intelligence based on many of those things, and implementations of intelligence focusing on others. Needless to say, AGI has not been implemented yet despite all those natural assumptions.
Should AGI address all those items? Definitely. Should AGI be defined or implemented based on them? Definitely not. A contradiction? No. Follow along.
We mentioned a variety of intellectual phenomena, and also phenomena from the real world that intelligence handles. The range of those phenomena is wide. The general consensus is that AGI rests on a single algorithm; continuing that analogy, our brains are viewed as computers. All that is fine and reasonable. But let me ask you what computers do. They compute, measure, compare. The phenomena above are not directly measurable or comparable; think of them as molecules. What, then, are the measurable, comparable atoms of intelligence? Properties.
Think about it. Objects have properties, are defined by properties, are measured and compared by properties, and are referred to by properties. Generalization, recognition, classification, and differentiation of objects are also organized by properties. Actions change properties.
Goals can be represented as desired changes in properties. Plans represent sequences of actions to achieve those changes in properties. Reasoning is an attempt to figure out the actions that caused the observed changes in properties. Adaptation is counteraction to compensate for the adversarial effects of the environment. Abstract knowledge describes which actions change which properties; specific, episodic knowledge describes interactions of actors with the environment and the observed changes in properties. Learning is the accumulation of knowledge.
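The property-based view above can be sketched in a few lines of code. This is a minimal illustration under my own assumptions, not an implementation from the article: the names (`state`, `actions`, `apply`, `plan`) and the kettle example are hypothetical, and the planner is deliberately naive.

```python
# The world state: objects mapped to their properties (illustrative example).
state = {"kettle": {"temperature": 20, "filled": False}}

# Actions are defined purely by the property changes they cause.
actions = {
    "fill": {"kettle": {"filled": True}},
    "boil": {"kettle": {"temperature": 100}},
}

def apply(state, action):
    """Apply an action by updating the properties it changes."""
    new_state = {obj: dict(props) for obj, props in state.items()}
    for obj, changes in actions[action].items():
        new_state[obj].update(changes)
    return new_state

def plan(state, goal):
    """Naive planner: pick actions whose effects move properties toward the goal."""
    steps = []
    for action, effects in actions.items():
        for obj, changes in effects.items():
            for prop, value in changes.items():
                if goal.get(obj, {}).get(prop) == value and state[obj][prop] != value:
                    steps.append(action)
    return steps

# A goal is a desired change in properties: a hot, filled kettle.
goal = {"kettle": {"temperature": 100, "filled": True}}
for a in plan(state, goal):
    state = apply(state, a)
print(state)  # {'kettle': {'temperature': 100, 'filled': True}}
```

Note that objects, goals, plans, and knowledge all reduce here to one primitive: a mapping from objects to property values, which is what makes the representation measurable and comparable.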
Consider the titles of some influential books in NLP:
Austin, "How to Do Things with Words"
Skinner, "Verbal Behavior"
Searle, "Speech Acts"
If we are to make progress toward AGI, we have to shift focus from objects and actions to properties. We will address NLP further when we talk about knowledge representation.
I have yet to read your subsequent articles, so this comment might be out of sync with your complete perspective. Overall, what you seem to suggest (focusing on NLU, symbols, explicit representation of properties to differentiate, etc.) has, as far as I can tell, already been tried by GOFAI.
I fail to understand what is different in the paradigm that you are trying to develop anew.
Of course, as I said, I have yet to read your other articles, so maybe this is jumping the gun on my part. Thanks for writing!
Perception is recursively inserted into perception. Symbol after symbol. We use language symbols to organize our lives, and the language organizes us.
That we can learn from words, without having to experience something ourselves, is a very good indicator that language plays a key role in creating AGI systems. By the way, I don't believe English is a good choice for AI ...