Ripple Effect
Ontology and Causality are two disciplines among others under the umbrella of Metaphysics. Businesses are oversold the former, while needing more of the latter. Still, the study of intelligence can teach them a thing or two about both.
Knowledge Graphs
These have been a hot topic recently, along with fact databases and ontologies. Essentially, they are all about “what.” They deal with “entities” and the relationships between them. They may mention what actions each entity can produce or what actions may involve an entity.
This sort of encyclopedic knowledge, which can be googled, is often mistaken for intelligence and just as often criticized for being mistaken for intelligence. I see its shortcoming in that it shifts decision-making to the “compute” module while giving that module few resources to work with.
Knowledge-How
To understand the problem with the above, consider a simple fact. It states what action happened, what entities were involved, where, and when. Is this knowledge useful for deriving any recipe for future interactions with the world? Does it mention the initial state of the systems involved? Does it provide all the parameters of the actions, like how much energy was used and how it was applied? Does it analyze intervening factors and their effects on the final outcome? That kind of information is much more useful for businesses. They want to know how to reach their goals in the face of adversarial actions and intervening factors, given limited resources.
Businesses are more interested in “how” a state can be reached from some initial state than in “what” those states are. Businesses are about the dynamic more than the static.
Compositionality and Continual Learning
Intervening factors have a compound effect on the results of actions. They are hard to analyze as a bunch, but each factor is easy to analyze, and easy to learn about, in isolation. Knowing the effect of each factor, one can take it into consideration when facing it among other factors during decision-making.
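To make the idea concrete, here is a minimal sketch of that composition. The factor names and numeric effects are invented for illustration, and the effects are assumed to combine additively, which is the simplest possible composition rule:

```python
# A sketch of compositionality: learn each intervening factor's effect
# in isolation, then combine the learned effects when several factors
# are present at once. Factor names and effect sizes are hypothetical.

# Effect of each factor, learned separately, on shot distance (meters).
FACTOR_EFFECTS = {"headwind": -3.0, "wet_grass": -1.5, "downhill": +2.0}

def predict(base_distance, factors):
    """Compose the isolated per-factor effects (assumed additive here)."""
    return base_distance + sum(FACTOR_EFFECTS[f] for f in factors)

print(predict(20.0, ["headwind", "wet_grass"]))  # 15.5
```

The point is that `FACTOR_EFFECTS` can be learned one entry at a time, yet the decision-maker can still handle any combination of factors it has never seen together before.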
Whatever
Intelligence has two key pieces: mappings and filters. Categories are an example of the latter. A category does not specify all the properties of the fitting objects, only the defining features which differentiate those objects from other categories, allowing objects to be distinguished from their context. Differentiation is what semantics is about.
Fitting objects are “whatever” passes the filter. They need not be similar in their other properties; such similarities, if present, are not essential. What matters is satisfying the “defining features.” Filters should be organized hierarchically, as in the game 20 Questions, to enable logarithmic complexity of categorization.
Depending on the level in the hierarchy, some properties are relevant and some are irrelevant. This has a limiting effect on the actions that can be used to affect those properties. Why limitations should be welcome will be shown later.
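A 20-Questions-style hierarchy of filters can be sketched as a small decision tree. The categories and defining features below are invented for illustration; the point is that a balanced hierarchy reaches any of n categories in about log2(n) questions:

```python
# A sketch of hierarchical categorization in the spirit of 20 Questions.
# Category names and defining features are hypothetical.

class Node:
    """A filter node: tests one defining feature and branches on it."""
    def __init__(self, feature=None, children=None, category=None):
        self.feature = feature            # property to test, e.g. "has_wings"
        self.children = children or {}    # feature value -> child Node
        self.category = category          # set only on leaf nodes

def categorize(node, obj):
    """Walk the tree; depth is O(log n) for a balanced hierarchy."""
    while node.category is None:
        node = node.children[obj[node.feature]]
    return node.category

# A tiny hierarchy: four categories, each reachable in two questions.
tree = Node("is_alive", {
    True:  Node("has_wings", {True: Node(category="bird"),
                              False: Node(category="mammal")}),
    False: Node("is_metal", {True: Node(category="tool"),
                             False: Node(category="rock")}),
})

print(categorize(tree, {"is_alive": True, "has_wings": False}))  # mammal
```

Note that only the features along the chosen path are ever inspected; all other properties of the object are irrelevant at that level of the hierarchy.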
Symbols
Intelligence works with properties. Viewing them as axes or dimensions, it breaks each into ranges depending on a purpose, for example, making tea. The Chinese terms for boiling-water temperature ranges are both practical and poetic:
Shrimp Eyes: This refers to the stage just before boiling, when small bubbles start forming at the pot’s bottom, resembling shrimp eyes. At this stage, the water is approximately 160°F, the temperature at which eggs begin to set.
Crab Eyes: This stage refers to the point just before boiling when small bubbles resembling crab eyes start forming at the pot’s bottom. The temperature is right around 175°F.
Fish Eyes: Coming quickly after crab eyes is the fish eyes stage, where the bubbles are even a bit larger and the temperature is 180°F.
Ropes of Pearls: When the water reaches a full boil, large bubbles rise rapidly to the surface in a continuous, string-like manner, resembling a string or rope of pearls. At this point, the water is between 200°F - 205°F.
Raging Torrents: This stage describes a vigorous, rolling boil where the water is in a state of intense agitation, akin to a raging torrent or rapid flow of water. The water bubbles violently, the surface rolls, and, at sea level, the temperature is 212°F.
Please note that the ranges differ in size. One can replace them with symbols T1, T2, and so on, or with enumerations, achieving “compression.” However, by doing so, one loses semantics. The way symbols are processed according to Shannon has little in common with how the semantic, causal information embedded in ranges is processed by intelligence.
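Mapping a raw reading onto one of these purpose-driven ranges is a one-liner in code. In this sketch the boundaries between adjacent stages are interpolated from the point temperatures given above, so treat them as assumptions rather than canonical values:

```python
# A sketch of breaking one property (water temperature, in °F) into
# purpose-driven ranges for tea-making. Boundaries between stages are
# interpolated from the descriptions above and are approximate.

STAGES = [
    (160, 175, "Shrimp Eyes"),
    (175, 180, "Crab Eyes"),
    (180, 200, "Fish Eyes"),
    (200, 205, "Ropes of Pearls"),
    (205, 213, "Raging Torrents"),
]

def stage(temp_f):
    """Map a raw reading onto a semantic range; ranges differ in size."""
    for lo, hi, name in STAGES:
        if lo <= temp_f < hi:
            return name
    return None  # outside the ranges relevant to this purpose

print(stage(178))  # Crab Eyes
print(stage(212))  # Raging Torrents
```

Replacing the names with T1..T5 would still work mechanically, but the connection to eggs setting, pearls, and torrents, the causal, purpose-laden content, would be gone.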
Logic operates well with symbols and enumerations, but not with noisy information from the real world. Take a red apple. Is it truly red? It may have yellow or green “pixels.” Is it yellow or green then? What is its “true” color? Intelligence doesn’t care about logic or truth; “good enough” is enough.
Causality Trees
Mappings are about what leads to what. Recall the definition of insanity often attributed to Einstein: doing the same thing over and over while expecting different results. It is all about causality. Now express different results as different ranges of some property. They are achievable by doing “different” things. Most often, it is the same action with different parameters. The stroke in snooker is always a stroke, but the aim and power vary. In tennis, the results may also depend on the wind or the type of surface. The depth of the tree and the number of properties taken into account differentiate an expert from a novice.
What is important about causality trees is that they enable logarithmic complexity for decision-making.
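Here is a minimal sketch of one branch of such a tree: a single action whose parameter ranges map to outcome ranges. The snooker-like numbers are invented; the point is that both the forward lookup (which outcome a parameter produces) and the inverse lookup (which parameter to use for a desired outcome) are cheap, the forward one logarithmic via binary search over sorted range boundaries:

```python
# A sketch of a causality tree branch: one action ("stroke") whose
# parameter ranges map to outcome ranges. All numbers are hypothetical.
import bisect

power_bounds = [2, 4, 6, 8]   # boundaries between cue-power ranges
outcomes = ["tap", "short", "medium", "long", "overshoot"]

def outcome_for(power):
    """O(log n) lookup of which outcome range a parameter falls into."""
    return outcomes[bisect.bisect_right(power_bounds, power)]

def power_for(desired):
    """Invert the mapping: pick a representative power for an outcome."""
    i = outcomes.index(desired)
    lo = power_bounds[i - 1] if i > 0 else 0
    hi = power_bounds[i] if i < len(power_bounds) else power_bounds[-1] + 2
    return (lo + hi) / 2

print(outcome_for(5))     # medium
print(power_for("long"))  # 7.0
```

An expert's tree differs from a novice's in two ways this sketch makes visible: finer `power_bounds`, and more properties (wind, surface, spin) layered as further branching levels.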
Force vs Pressure
I am not sure about this and don’t know if it will lead anywhere, but I prefer pressure over force as the foundation of causality. Like “social pressures make one behave.” On the other hand, “the pressure of biological needs may justify divergence from appropriate behavior.” The use of “force” in those contexts seems strange. Even where “force” is used traditionally, it may be used interchangeably with pressure, methinks. The pressure of a magnetic field ...
States and Goals Representation
We set goals when our current state differs from our desired state. “Different” is the key word here. It implies that, with respect to some property, those states fall into different ranges. This suggests applying actions that affect that property.
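That view of goal setting, a per-property diff between states, fits in a few lines. The properties, range names, and action table below are invented for illustration:

```python
# A sketch of goal setting as a difference in property ranges: compare
# current and desired states property by property, and suggest actions
# known to affect each differing property. All names are hypothetical.

ACTIONS = {  # property -> an action known to affect that property
    "water_temp": "heat the kettle",
    "cup_contents": "add tea leaves",
}

def plan(current, desired):
    """Return an action for every property whose ranges differ."""
    return [ACTIONS[p] for p in desired
            if current.get(p) != desired[p] and p in ACTIONS]

current = {"water_temp": "cold", "cup_contents": "empty"}
desired = {"water_temp": "Raging Torrents", "cup_contents": "tea"}
print(plan(current, desired))  # ['heat the kettle', 'add tea leaves']
```

When the two states fall into the same range for every property, the diff is empty and no goal arises, which is exactly the claim in the paragraph above.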
Often, especially in the presence of competition, we are expected to put in “maximum effort.” But true expertise lies in the ability to fine-tune the use of the ranges between the extreme ones. Even semantically, those ranges are richer.
Slow and Fast Thinking
Any system should enable fast thinking first. I explained above how categorization and decision-making can be done in logarithmic time, which is fast enough for real time.
If real-world pressures allow and the “fast” solution is not good enough, we may follow horizontal connections between causal trees and employ BFS or whatever other algorithms we like to look for “slow, better” solutions.
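As one concrete reading of “slow thinking,” here is a BFS over horizontal links between states, searching for a multi-step action sequence when no single tree lookup suffices. The state graph is invented for illustration:

```python
# A sketch of "slow thinking": BFS over horizontal links between causal
# trees, looking for a multi-step plan. The graph is hypothetical.
from collections import deque

# Each entry: state -> list of (action, resulting state)
LINKS = {
    "thirsty":   [("boil water", "hot water")],
    "hot water": [("brew tea", "tea"), ("make coffee", "coffee")],
    "tea":       [("drink", "satisfied")],
}

def slow_plan(start, goal):
    """Breadth-first search for a shortest action sequence."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, nxt in LINKS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None  # no chain of actions reaches the goal

print(slow_plan("thirsty", "satisfied"))  # ['boil water', 'brew tea', 'drink']
```

The fast path is a single tree descent; the slow path is this graph search, which is only invoked when time pressure permits.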
Robots
Humanoid robots, self-driving cars, and all other instances of embodied AI need three modules: perception, intelligence, and actuation. What I propose is to consider each module through the lens of properties.
Perception should collect signals from the environment and the “body.” Perception doesn’t know about objects yet; it only breaks the signals down into properties, for example, which pixels transform in parallel and which shapes those pixels fit. After recognition has been performed, perception may be engaged in object tracking if the task requires it.
Intelligence, with its submodule memory, is responsible for processing signals. It combines basic properties supplied by perception into more complex ones. It recognizes objects, actions, and their abstractions. For example, it may recognize an SOS on a desert island with letters formed by properly placed coconuts, or it may recognize people running and kicking a ball as playing football. Intelligence recognizes a scene, a situation, and moves on to selecting a goal to pursue in that situation. The goal may be self-oriented, like repairing a manipulator or recharging a battery, or it may be to help someone in trouble or to continue a previous long-running task. Intelligence sends signals to actuators to affect the properties of relevant objects and systems (run to some place, or kick a ball in a certain direction with some force). It also instructs perception on which specific properties to track. Intelligence is focused on its current task, but it is always on the lookout for signals that make a difference, and it is always ready to pivot its goals.
Actuation is about the specific actions behind abstract goals: to “make tea” we pour water into a teapot, put it on a stove, put tea leaves into a cup, pour boiling water into the cup, get cookies, etc. We can learn a lot from football. It involves manipulating a ball. Traditionally the ball is kicked with a foot, but there are many ways to do that, and other parts of the body are also allowed, except for the hands. How much effort to put into a strike is another decision to make. How do we choose the way of interacting with the ball in real time? I have mentioned the options; what are the constraints? The location of the ball or the trajectory of the approaching ball; other players, their locations, orientations, and movements; possibly other factors, like the woodwork or puddles. The more we train, the finer the ranges of those properties we consider. The more options we consider, the more urgent the need to involve our subconscious multi-processor parallel computer. Football players acting automatically look elegant, but I bet they can hardly explain their “thought process.” Indeed, it is not symbolic.
World Models
Training the above modules requires an abundance of properties, from which an agent will pick promising ones for forming and testing hypotheses. Obviously, “world models” (generated video, or at best 3D scenes) lack the required abundance, discarding multiple modalities of useful signals from the environment.
“Superintelligence”
“As soon as we have AGI, it will improve itself recursively.” Have you heard that? If intelligence relies on search, which has known theoretical limits, then how much can it improve itself? Or allow me a joke: how much better a number will ASI find in the same set than a bubble sort would?
Analysis Paralysis
There is a different direction for improvement. It is related to the number of properties intelligence can juggle. Compare a human to a bee in that respect. Their intelligence relies on the same Semantic Binary Search, but the ranges of properties taken into account differ. The results are plain to see.
And still, can a single human agent take into account all the possible consequences of one’s actions - short and long term, for oneself, for one’s family, company, country and all the competitors, for the planet and other species, for coming generations, for the progress of science, and so on? Take a look at this section’s title.
To achieve anything in a limited time frame, we apply dimensionality reduction, we stick to one level of abstraction, and we limit the number of options considered. Most importantly, we limit the decision-making time period and start acting. Does it mean negative consequences in some respects? Absolutely! Do not allow that to stop you.
Introduce a hierarchy of decision-making agents and let each take care of its own level of problems. By introducing constraints for the lower levels, agents at higher levels may ensure appropriate behavior of the whole system. Ideally. Constraints should work both ways to avoid abuse.
Reverse Thinking
Instead of entities, focus on properties and ranges as filters
Instead of what entities can do, focus on how actions affect properties
Instead of linear complexity at best, focus on logarithmic complexity
Instead of taking everything into account, focus on what is manageable at your level of decision-making
Instead of prediction, focus on Semantic Binary Search and Be It What It May!


The ability to manipulate concepts, properties, and the elements of an ontology (a semantic graph), to filter data according to known rules, and to apply rules for logical inference does not make a system intelligent. Intelligence is the ability to collect facts and to construct new, useful rules, ontologies, filters, etc., based on the collected facts.