Prediction
I am writing this knowing how it will end, because the ending has already been written. I really wanted to make it a short and light read for you. Now I am not sure. I guess it’s up to you to decide.
Observations
There are two types of prediction we need to differentiate: prediction based on past data and observations, and prediction based on a theory.
The famous turkey problem describes the issue with relying on observations. In short, a turkey has nothing to be thankful for on Thanksgiving. Despite all the observations.
Newton’s formulas for the motion of planets were based on data first, but then he explained how to apply those formulas to any object, even though he did not explain gravity itself.
Trash Heap
I love the Fraggle Rock show!
A shooter relying on Newton’s formulas may predict a hit, but if a deviation from the center to the right is observed, it means that some additional force has intruded. In fact, forces.
We explain observed effects in chunks by applying known causal mappings, and all the remaining effects are attributed to a combination of unaccounted factors. Intelligence automates the application of proven mappings and relentlessly attacks the “combination.” All the details of an event are recorded and hypotheses are formed. It is good if only one factor is missing, but what if there are many, and we find one of them, yet various events contradict each other? Ceteris paribus to the rescue: find out how that factor makes a difference, then discard its effect from the “combination.” It is not that simple, as factors may affect each other. Causality is multidimensional.
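As a sketch of this bookkeeping (the additive model and all numbers below are illustrative assumptions, not a claim about how minds actually compute):

```python
# Explain what we can with known causal mappings; attribute the rest to a
# "combination" of unaccounted factors; then use a ceteris-paribus experiment
# to promote one factor out of the combination.

def residual(observed, known_effects):
    """Whatever the trusted mappings cannot explain goes to the combination."""
    return observed - sum(known_effects)

def ceteris_paribus_effect(outcome_with, outcome_without):
    """Vary one factor while holding everything else fixed; the difference
    is that factor's isolated effect."""
    return outcome_with - outcome_without

observed = 10.0
known_effects = [4.0, 3.0]                  # effects we already know how to map
combo = residual(observed, known_effects)   # 3.0 blamed on unknown factors

wind = ceteris_paribus_effect(10.0, 8.0)    # isolated by experiment: wind adds 2.0
combo -= wind                               # 1.0 still unexplained; keep attacking
```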
Predators and Prey
Many claim that intelligence is predictive or relies on prediction. What can that mean?
A dragonfly catches prey by moving to a predicted location of the latter. What are the dragonfly’s calculations based on?
What about the dragonfly’s prey? Why not predict and evade? OK, the dragonfly was stealthy and attacked unnoticed. What about a cobra and a mongoose? Why is one successful at prediction and attack while the other is not?
It is relatively easy to predict the future location of an object moving along a straight line, which is often how flying insects move. That allows a dragonfly to get close. At the last moment the prey may notice the predator and try to evade, but the maneuver is most likely the same for a given species and a given closing vector of the dragonfly. So even that is easily predictable.
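A minimal sketch of that calculation, assuming the prey really does move in a straight line at constant velocity and the predator flies at a constant speed (all numbers are hypothetical):

```python
import numpy as np

def intercept_point(prey_pos, prey_vel, predator_speed):
    """Where to aim so a constant-speed predator meets prey moving in a
    straight line. Solves |prey_pos + prey_vel * t| = predator_speed * t,
    with positions given relative to the predator."""
    p = np.asarray(prey_pos, float)
    v = np.asarray(prey_vel, float)
    s = float(predator_speed)
    # Quadratic in t: (v.v - s^2) t^2 + 2 (p.v) t + p.p = 0
    a, b, c = v @ v - s * s, 2.0 * (p @ v), p @ p
    if abs(a) < 1e-12:                      # predator and prey equally fast
        t = -c / b if b < 0 else None
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None                     # prey is too fast to intercept
        roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
        future = [t for t in roots if t > 0]
        t = min(future) if future else None
    return None if t is None else p + v * t  # the aim point

# Prey 2 m away crossing at 1 m/s, dragonfly flying 3 m/s:
print(intercept_point([2.0, 0.0], [0.0, 1.0], 3.0))
```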
When a cobra strikes at a mongoose, the snake flies as a projectile, with no way to change course before touching the ground and regaining support for the next strike or any other movement. So there are predictable pieces of the trajectory, a possibly predictable period of inability to move quickly, and even those movements come from a limited set. Even if a mongoose had only the same speed and reactions as a cobra, it would have a chance purely by exploiting those moments of vulnerability. Add to that its actually higher speed and faster reactions.
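For illustration, the committed part of such a lunge can be treated as simple projectile motion over flat ground; the numbers below are invented, not measured cobra data:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def vulnerability_window(launch_speed, launch_angle_deg):
    """Time a ballistic lunge stays airborne: from launch until the body
    touches the ground again and regains support for the next movement."""
    theta = math.radians(launch_angle_deg)
    return 2.0 * launch_speed * math.sin(theta) / G

# Hypothetical numbers: a 3 m/s strike at 30 degrees is committed for ~0.3 s,
# a window during which the snake cannot react to anything the mongoose does.
print(f"{vulnerability_window(3.0, 30.0):.2f} s")
```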
Encoding
Causality is encoded in mappings, which include not only the effect of actions on affected properties, but also the effects of other things on those actions’ effects. Given a context, is it RL that chooses what to do? In real time, how many scenarios can or should be calculated for decent performance? Do we have time for those? Also, is it what to “do” or what to “be doing,” tracking changes and updating one’s actions accordingly? Semantically, we “go through a crowd”; in reality, we constantly engage thrusters to update our trajectory.
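A toy sketch of “be doing” rather than “do”: instead of executing a trajectory computed once, re-sense the scene every tick and nudge the heading. The repulsion rule and the `sense_obstacles` callback are invented placeholders:

```python
import numpy as np

def walk_through_crowd(pos, goal, sense_obstacles, steps=200, speed=0.1):
    """Closed-loop movement: at every tick, re-read the world and update
    the plan, rather than committing to a precomputed path."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    for _ in range(steps):
        to_goal = goal - pos
        if np.linalg.norm(to_goal) < speed:
            break                               # close enough: goal reached
        heading = to_goal / np.linalg.norm(to_goal)
        for obs in sense_obstacles(pos):        # the crowd moved; adjust
            away = pos - np.asarray(obs, float)
            d = np.linalg.norm(away)
            if 1e-9 < d < 1.0:                  # repel from whoever is close
                heading += away / d * (1.0 - d)
        norm = np.linalg.norm(heading)
        if norm > 1e-9:
            pos = pos + speed * heading / norm  # one small corrective burn
    return pos

# One pedestrian standing near the straight-line path:
print(walk_through_crowd([0, 0], [5, 0], lambda p: [[2.5, 0.1]]))
```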
Causal mappings should be viewed broadly. Given a situation, what goals to pursue? Given multiple goals, how to prioritize them, and how to perform them: in parallel or sequentially? Given a goal, what object to search for or pay attention to in a given context? Those objects may affect the results of our actions in various ways. What signals to filter out, and what signals to still collect and take into account? What signals are irrelevant to the current goal and do not change anything about it? What signals change our goals? A fox eating a rabbit may ignore another approaching rabbit, but it will have a hard time ignoring an approaching tiger.
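A toy sketch of that last distinction, with made-up priorities and relevance sets (nothing here is a claim about real foxes):

```python
# How strongly a signal can interrupt ongoing behavior (invented values).
INTERRUPT_PRIORITY = {"approaching tiger": 10, "approaching rabbit": 1}

# Which signals still matter for a given goal (invented as well).
RELEVANT = {"eat rabbit": {"rabbit struggling"}}

def filter_signals(goal, goal_priority, signals):
    """Drop signals irrelevant to the current goal; let a sufficiently
    strong signal replace the goal entirely."""
    kept = []
    for s in signals:
        if INTERRUPT_PRIORITY.get(s, 0) > goal_priority:
            return "flee", []                 # the new goal preempts everything
        if s in RELEVANT.get(goal, set()):
            kept.append(s)                    # still useful; collect it
    return goal, kept                         # everything else is filtered out

# The fox ignores the second rabbit but cannot ignore the tiger:
print(filter_signals("eat rabbit", 5, ["approaching rabbit", "approaching tiger"]))
```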
Hypotheses
We don’t know everything. The trash heap is huge. Filling the gaps in our knowledge never stops; exploration never stops. Noticing inconsistencies in our mappings, we need to form hypotheses about factors we still don’t know how to take into account. When the results of our actions differ from our predictions (in a rather Newtonian way), we are surprised, which triggers emotions, and everything in the scene is remembered for future analysis.
In sleep, when we have both time and computing resources, we validate the collected hypotheses and free our memory from the failed ones. We extract patterns from the heap. We wake up with insights about new causal mappings. We have more tools to apply; we are more prepared for encounters with the real world. But make no mistake: the heap is still there, and we still don’t know a lot. We will never be able to predict the results of our actions accurately.
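Putting the last two paragraphs together as a sketch (the surprise threshold, the episode format, and hypotheses-as-predicates are all illustrative assumptions):

```python
class Mind:
    """Record surprising episodes while awake; validate hypotheses in sleep."""

    def __init__(self, surprise_threshold=1.0):
        self.threshold = surprise_threshold
        self.heap = []                          # surprising episodes, kept for later

    def act(self, predicted, observed, scene):
        """A prediction failed badly enough: remember everything about it."""
        if abs(observed - predicted) > self.threshold:
            self.heap.append((scene, predicted, observed))

    def sleep(self, hypotheses):
        """Offline, with time and compute to spare: keep only the hypotheses
        consistent with every recorded episode, then free the memory."""
        survivors = [h for h in hypotheses
                     if all(h(scene, p, o) for scene, p, o in self.heap)]
        self.heap.clear()                       # failed ones are forgotten
        return survivors                        # wake up with new causal mappings
```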
***
How predictive is intelligence?
It is not predictive if by that one means guessing. Intelligence hates guessing; it is painful for intelligence to traverse its trees without finding a fitting action to take. It is predictive if by that one means moving the needle, making a difference. Finding a proper key to search in its mappings and then proceeding with the chosen action feels like a relief and a success of a sort. The reward is not in “better reaching a goal”; it is rather in moving in the right direction, in taking meaningful steps.
Intelligence is not about producing prophecies. I said it before. It makes decisions without guarantees. Let It Go! Be It What It May! What a relief! ... Almost forgot: record how it goes and use it later for hypothesis validation.


