The Spice Must Flow
If you don’t know, I consider Dune by Frank Herbert the best book of all time. So, it’s natural to take that phrase as a title for a post about data structures and flows. And we will see how spicy it gets.
Properties
Language operates with objects (nouns), actions (verbs), and properties (adjectives). Too many moving pieces. But... objects have properties, and actions change properties. Properties are all we need!
I started my journey into AI with NLU. The observation above was my first insight. It paved the way to many others. That is why I consider it a blessing that I started with language.
Get ready to understand what I mean by properties. A key word here is “commensurate.” A property is a dimension or an axis against which phenomena may be compared. “Phenomenon” is a top-level property. You may use the word “category” for a property.
There are two broad groups of phenomena - abstract and tangible ones. Here is an important piece. “Abstract” and “Tangible” are values of the “Phenomenon” property, but considered on their own, each one of them is a property. All tangibles can be touched, all abstract ones cannot be touched. Whether some phenomenon can be touched is a differentiating criterion for the “Abstract” and “Tangible” subcategories.
A category in a potential system may be represented by some ID. Later, we will connect that ID to some actual signal from a sensor or to some actual command to an actuator to perform an action affecting that property. That is how I propose to solve the Symbol Grounding Problem.
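As a minimal sketch of this grounding idea (the names `Category` and `observe` are my assumptions for illustration, not a fixed design), a category ID with optional sensor and actuator bindings could look like this:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Category:
    """A category is just an ID; grounding attaches real signals to it."""
    cid: str
    sensor: Optional[Callable[[], float]] = None        # reads a signal for this property
    actuator: Optional[Callable[[float], None]] = None  # issues a command affecting it

    def observe(self) -> Optional[float]:
        return self.sensor() if self.sensor else None

# Grounding the abstract ID "temperature" in a stub thermometer:
readings = [21.5]
temperature = Category("temperature", sensor=lambda: readings[-1])
```

The symbol stays a bare ID until a sensor or actuator is attached; the same structure works ungrounded for purely abstract categories.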
The Common sense entry in Wikipedia provides a cute quote:
“Each sense is used to identify distinctions, such as sight identifying the difference between black and white, but, says Aristotle, all animals with perception must have “some one thing” that can distinguish black from sweet.” Each property/category carries a unique semantics, which cannot be simplified away.
Differentiating Criteria
It is not necessary for a differentiating criterion to be binary. For example, the property “Color” can be divided into many subcategories. Also, a criterion is not expected to ensure a clear-cut boundary. The same property “Color” demonstrates that the boundary between neighboring subcategories is quite vague.
A differentiating criterion provides a condition or a rule for dividing all the phenomena under a certain category into subcategories. It determines a defining feature of each subcategory. A subcategory is NOT a set of fitting phenomena. It is a hierarchy of defining features (along with those inherited from higher levels). Fitting phenomena satisfying the defining features are considered to belong to the subcategory.
The values of criteria from the lower level of the hierarchy are irrelevant for category membership.
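A hedged sketch of such a hierarchy (the `Node` class and the dict-based phenomena are illustrative assumptions): membership is checked against this node’s criterion and every inherited one, and nothing below this level matters.

```python
class Node:
    """A subcategory: a parent plus one differentiating criterion (a predicate)."""
    def __init__(self, name, criterion=lambda x: True, parent=None):
        self.name, self.criterion, self.parent = name, criterion, parent
        self.children = []
        if parent:
            parent.children.append(self)

    def contains(self, phenomenon) -> bool:
        # Membership = this node's criterion plus all inherited ones.
        node = self
        while node:
            if not node.criterion(phenomenon):
                return False
            node = node.parent
        return True

phenomenon = Node("Phenomenon")
tangible = Node("Tangible", lambda x: x.get("touchable", False), phenomenon)
abstract = Node("Abstract", lambda x: not x.get("touchable", False), phenomenon)
```

Note that `contains` never consults the children: lower-level criteria are irrelevant for membership, exactly as stated above.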
Above, we used “touchability” as a criterion. However, the choice of criteria is not carved in stone. There are some guidelines about criteria - “clear cut” is desirable, division in roughly equal subgroups is desirable, etc. But only desirable. If for some purpose some weird criterion is needed, feel free to go with it.
One critical “property” of criteria is applicability to all the members of a considered subcategory. For example, we could try to apply the criterion “Taste” to the “Phenomenon” category. What is the taste of “Abstract”? “Not applicable” is not a very good value for a criterion.
A category represented by ID also needs a parent category and a differentiating criterion. In the simplest case, it may be a range, like “from 1 to 10” or “10 plus-minus 5” or “above 100.” More interesting ranges may be represented by “rules,” like “odd numbers” or “prime numbers.” Even more interesting ranges are context-dependent - “all items within 2 sigmas around a mean value.” It means that to identify a range, we should allow not only scalar values or formulas, but also semantic references to the context. Getting spicy.
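The three flavors of range definition mentioned above — a scalar interval, a rule, and a context-dependent rule — can be sketched uniformly as predicates that take a value and a context (this uniform signature is my assumption):

```python
import statistics

# "from 1 to 10" - a plain interval; the context is ignored.
interval = lambda x, ctx: 1 <= x <= 10

# "prime numbers" - a rule; still context-free.
rule = lambda x, ctx: x > 1 and all(x % d for d in range(2, x))

# "all items within 2 sigmas around a mean value" - needs the context.
contextual = lambda x, ctx: abs(x - statistics.mean(ctx)) <= 2 * statistics.stdev(ctx)

ctx = [8, 9, 10, 11, 12]
```

Because all three share one signature, a category node can store any of them interchangeably as its differentiating criterion.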
With respect to categories, everything is recognized in comparison. Please note that a category is defined by its difference from a sibling category, not by “similarities” of its members. All further defining or other features are irrelevant for membership. The clarity and efficiency of this approach to categorization make similarity-based approaches obsolete. In this paper, I talk more about similarities - Nature of Cognitive Computation. Consider reading it, and not only because of similarities.
To complete this discussion, a subcategory should be supplied with possible further differentiating criteria. Remember? Those that apply to all the members. I love the game 20 Questions. We will extract questions based on this list. It should be organized as a tree for efficiency and to enable priorities, so technically it’s not a list.
Ranges
A category encloses all phenomena that pass differentiating criteria. Further differentiating criteria will break it down into ranges. It is important that all the category instances fall at least into one range and we could refer to that range.
If range boundaries are vague, it is OK to allow overlapping ranges and the use of either handle (of the two possible), plus to consider the overlap as a range of its own. For example, we may have the color of the “ocean” and allow calling it “blue” or “green.”
Taste and color demonstrate interesting options adding to the spiciness. What is the color of a zebra or a Scottish kilt? What is the taste of tea? From a single shade we switch to “texture,” from a single taste we switch to “bouquet.” There are properties and ranges, and there are combinations of primitives. The time for the latter will come.
Why are ranges preferable to point-accurate measurements? I can provide two main reasons. One is interchangeability: all items from the same range can substitute for each other. It imposes some constraints on ranges, and we will address those. The second reason is related to efficiency. There should be a limited number of ranges versus an infinite number of points along the category axis. Selection from a finite number of options is a finite-time task (it can be otherwise, I agree, but hey, we are developing intelligence here!).
Again, a range is a value of its parent category and a category of its own. Think about “Fruit” and “Apple.” Because of that, we do not need additional representational tools for ranges. Those for subcategories will suffice.
Relations
When I started thinking about relations, I asked the following seemingly simple question: “Where does the information about a relation between two objects belong?” Is it with any of the objects? But then that object should “know” about the other. It is not always desirable or even possible. Therefore, I proposed to introduce a “formation” - a third “object,” which knows about the two and their relation. I even went further and proposed to express that relation through relations of each object to the formation via its “reference point.” Members of a system know about a system and it knows about them. Efficient.
As an example, think about bricks and mortar before and after construction and after demolition. Components are the same, but the system is different in all three cases. The difference is in the relation. My intuition then suggested that the relation is more important for understanding the system than the components. Think about a letter A, formed by cheerleaders standing on each other or by planted trees or by islands in the sea.
Relations are abstract and differentiable. Their participation in the formation of complex categories makes those categories partially abstract.
Relations pave the way for compositionality. Consider relations as properties/categories of a special kind that provide “placeholders” for other categories, either imposing semantic restrictions on what can fill them in or accepting any items there (cheerleaders, trees, or islands).
One may say that using relations and various components it is possible to “engineer” categories. I would say that relations are already “there,” in some abstract mathematical sense. The same is true about components. To be meaningful, newly formed combinations need to make a difference in a causal sense. I will talk about that shortly.
Think about the recognition task. The variety of shapes and constructions is enormous. But having a hierarchy of relations, with sibling relations at any level differing with respect to only one differentiating criterion, makes the task manageable. Even in the presence of noise or occlusions, we will check for which relation of that level the observed components fill only proper placeholders, even if some are unobserved. The key is to compare only sibling relations according to one criterion - a relatively easy task.
Representation of relations may require some identifier (they are categories) plus semantically specified placeholders - “right” or “left,” “on top” or “below” or “inside,” etc. How a component fills a placeholder is just another relation, so placeholders may restrict that as well. Placeholders may be obligatory or optional. For mammals, “heart” is obligatory, while “limb” or “fur” are optional, but the absence of a limb makes an animal injured, while the absence of fur is OK.
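Putting the last few paragraphs together, here is a hedged sketch of a formation: an identifier, semantically restricted placeholders (obligatory or optional), and the fillers it knows about. All names (`Formation`, `Placeholder`, `fill`, `complete`) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Placeholder:
    name: str
    accepts: callable           # semantic restriction on fillers
    obligatory: bool = True

@dataclass
class Formation:
    """A third 'object' that knows its members and their relation."""
    relation_id: str
    placeholders: list
    fillers: dict = field(default_factory=dict)

    def fill(self, name, item):
        ph = next(p for p in self.placeholders if p.name == name)
        if ph.accepts(item):    # reject semantically unfit fillers
            self.fillers[name] = item

    def complete(self) -> bool:
        return all(p.name in self.fillers for p in self.placeholders if p.obligatory)

# A mammal: "heart" is obligatory, "fur" is optional.
mammal = Formation("mammal", [Placeholder("heart", lambda x: x == "heart"),
                              Placeholder("fur", lambda x: x == "fur", obligatory=False)])
mammal.fill("heart", "heart")
```

With this shape, the recognition check from the previous paragraph reduces to asking which sibling formation’s placeholders the observed components fill.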
Objects
In terms of properties, objects are multidimensional. Any object can be categorized differently, depending on a purpose. An apple can be viewed as food if we are hungry, or as a projectile weapon if we are attacked.
I propose to start the analysis of objects with relations. The most interesting part is how various properties are combined. For example, “taste” may come from “within” or be “added.”
To represent objects in general, a category may suffice, when only a handful of top-level differentiating criteria leave us with only the desired objects. To refer to a specific one in a given context, more properties may be required. To represent a familiar object so that we can recognize it in the future, we may require more properties compared to the first case, but additional ones will not be the same as in the second case.
Please recall that categories have “further differentiating criteria.” We may use them to determine identifying properties of a given object. Or we may query for additional criteria categories from lower levels.
The hierarchical organization of objects and their properties enables us to quickly filter them for having specific properties, thereby “projecting” them on certain dimensions. This dimensionality reduction is important for efficiency.
Representations of categories may be enough for talking about objects in general. Specific objects in context will require additional information, but recall “formations.” We don’t need to store additional information “with the object.” Rather, we will store that information with the context, along with other information relevant for references. Only a familiar object will require the most detailed representation. Essentially, we will need to specialize our knowledge about its properties to such a low level in the categorization hierarchy that the object becomes a “category of one.” Because we supply an identifier to any category, that way we will have a unique identifier for that object as well - an internal identifier. For constructing a linguistic reference, we will need to use the above-mentioned “context formation.”
Specialization/Generalization
Generalization is a hot topic in AI. Under this approach, it’s a very simple aspect of intelligence to explain. Specialization is the process of introducing subcategories by specifying differences between them. It’s a downward traversal of the tree. Generalization is the opposite process - an upward traversal, forgetting the differences between sibling categories (possibly at several levels of the hierarchy). While specialization performs a lot of work, generalization does nothing; it even decreases the amount of information about the current object.
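In code, the asymmetry is stark. A sketch under assumed names (the toy `parents` map stands in for the category tree): generalization is nothing but following parent links.

```python
# A toy slice of a category tree, child -> parent.
parents = {"Tangible": "Phenomenon", "Abstract": "Phenomenon",
           "Fruit": "Tangible", "Apple": "Fruit"}

def generalize(category: str, levels: int = 1) -> str:
    """Upward traversal: cheap, and it only drops information."""
    for _ in range(levels):
        category = parents.get(category, category)  # root stays put
    return category
```

Specialization, by contrast, would need the differentiating criteria to decide which child branch to descend into; no such decision exists on the way up.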
Actions
There are many types of actions and we may differentiate them just as well as we differentiate objects or abstractions. In fact, there are also abstract actions. For example, “play football” or “make tea” are abstract actions. They are umbrella terms for multiple specific actions, which have to be performed in parallel or sequentially to achieve the result.
Actions have properties - affected property (properties), duration, parameters, prerequisites, obstacles/facilitators, cause, and effect. Using these and other properties, actions can be differentiated and categorized. There are also relations for actions - sequences, repetitions, conditionals, etc. Actions can combine and they can generalize.
Before we decide how to represent actions, let’s talk about causality.
Causality
We want to understand how intelligence handles causality. The best insight about that comes from Einstein’s definition of insanity. Ironic, I know.
Insanity is doing the same thing over and over, expecting different results.
By “thing” the definition means an action, of course. But in fact, it’s not only about the action itself. The result of an action depends to a greater extent on the parameters of the action, which are most often expressed by properties.
What is “different” in the definition? I claim it’s about ranges. The affected property may fall in the same range or in different ranges. The same is true about the “same” in the definition. But this time, parameters fall in the same range or in different ranges.
Let’s go back to ranges. Who determines their boundaries? Say we have a breakdown for the affected property. If an input property makes a difference - is relevant for this action and the affected property - changes in the input property will map to changes in the output property. Those properties are semantically different, so we cannot expect their ranges to be comparable to each other; ranges are comparable only within the same property. So, if different point values of the input property map to the same range of the affected property, all those input points fall into the same input range. Differences in the affected property will map to different ranges of the input property. Even though the input property affects the affected property, causality runs in reverse where the ranges of the input property are concerned.
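This reverse induction of input ranges can be sketched directly: group the observed input points by the output range they produce. The toy action and its ranges below are assumed examples, not part of the model.

```python
def output_range(heat_setting):
    """Toy action: a heat setting maps to a water-state range."""
    if heat_setting < 40:
        return "cold"
    if heat_setting < 95:
        return "warm"
    return "boiling"

def induce_input_ranges(points, action):
    """Input points mapping to one output range form one input range."""
    ranges = {}
    for p in sorted(points):
        ranges.setdefault(action(p), []).append(p)
    return ranges

samples = [10, 30, 50, 70, 90, 96, 100]
induced = induce_input_ranges(samples, output_range)
# {'cold': [10, 30], 'warm': [50, 70, 90], 'boiling': [96, 100]}
```

The boundaries of the input ranges are thus dictated by the affected property’s breakdown, not chosen on the input axis itself.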
Many parameters may affect the results of an action. Science has long relied on the principle “ceteris paribus” - “everything else being equal.” Not point-accurate equal, but rather range-equal, I suspect. The possibility of actions interfering with each other, leading to a result different from a mere sum of two effects, creates additional complexity. Recall relations. We may use a “formation” of several parameters as an input parameter and research the ranges of the formation and their effects on the results.
How do we determine the ranges of the affected property? Consider an action for which that property serves as an input parameter and repeat the above procedure. This paves the way to planning, when we feed the results of one stage into the next. Also it shows how specific actions may be used as little cogs in the complicated mechanism of an abstract action.
Any action affects only a subset of properties of the affected multidimensional objects. Also an action uses only a subset of properties of objects used as parameters or tools. This paves the way to purpose-dependent categorization. But think about this from the point of view of efficiency. Actions can be considered as dimensionality reduction operations. Ignoring other properties makes computations efficient.
This sheds light on some peculiarities of our perception. When we pursue some goal, we become more attentive to some things in our context and more oblivious to others. Actions fine-tune our perception to “fire” when encountering objects with desired properties - those from the set of needed parameters or anything that affects the results. On top of that, our perception should always stay alert for signals that may lead to reprioritization or a change of plans.
Many roads lead to Rome. We have discussed above properties of actions. On the one hand, those properties affect the results of actions. On the other, they help to select which action/road to take to achieve the result/Rome. This paves the way to how actions are selected, decisions are made, plans are developed.
Causality mappings should be viewed broadly. Given a situation, what goals to pursue? Given multiple goals - how to prioritize and how to perform them - in parallel or sequentially? Given a goal, what object to search for or pay attention to in a given context? Those objects may affect the results of our actions - in various ways. What signals to filter out and what signals to still collect and take into account? What signals are irrelevant to the current goal and do not change anything about it? What signals change our goals? A fox eating a rabbit may ignore another approaching rabbit, but it will have a hard time ignoring an approaching tiger.
Guess what? My favorite game 20 Questions can be adapted for causality. Ranges of the affected property serve as options. Input parameters serve as constraints. And just like with categories, we may attach rules to any level in a hierarchy, and any exceptions to them will be attached to subcategories. Consider exceptions as differentiating criteria.
To represent actions, we may apply a reconsidered approach to relations. We also have a combination of several items. Not a static formation, but a dynamic transformation. Apart from an identifier, input parameters and an affected property, a transformation’s placeholders may also include prerequisites and constraints/facilitators. The difference between input parameters and constraints/facilitators is in an agent’s control over them. As with the formation’s placeholders, we may represent candidates to fill them in a variety of ways - via a hierarchy of properties or using some semantic references to context.
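A hedged sketch of such a transformation record (every field name here is an assumption for illustration): an identifier, input parameters under the agent’s control, constraints that are not, prerequisites, and the affected property.

```python
from dataclasses import dataclass

@dataclass
class Transformation:
    """Dynamic counterpart of a formation: a representable action."""
    tid: str
    input_params: dict       # under the agent's control
    constraints: dict        # not under the agent's control
    prerequisites: list      # must be present in the context
    affected_property: str

    def applicable(self, context) -> bool:
        return all(p in context for p in self.prerequisites)

boil = Transformation("boil_water",
                      input_params={"heat": "high"},
                      constraints={"altitude": "sea_level"},
                      prerequisites=["kettle", "water"],
                      affected_property="water_temperature")
```

Candidates to fill these slots could equally be given as hierarchies of properties or as semantic references into the context, as with formation placeholders.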
Abstract or composite actions, like “make tea,” will include all the familiar components of algorithms - sequences, loops, conditions, etc. The affected properties from one stage are fed into the next as input properties.
Did I mention time as a unique property of actions? I didn’t. One more placeholder. But it’s tricky. It may depend on input parameters, so we may actually have several placeholders with “time” in them. Like “allocated time,” “time taken,” etc.
And last but not least, we should keep track of the current state of execution, asking “are we there yet?” or “should we switch to something else?” This is necessary for continuous actions like “push” versus instant actions like “throw.”
Complexity
Real time is a tough constraint. To beat it, we cannot accept anything worse than logarithmic complexity, and realistically we cannot have a constant one. Therefore, we should have trees everywhere. If you see “list,” “array,” “vector,” or, God forbid, “matrix,” drop it. If you saw any of those above, replace it with a tree.
It does not mean that all the operations should be fast. We know that some operations may take years or even generations to complete. The system should enable such operations as well, but when attacked by a lion, we should be able to respond quickly.
Algorithmic complexity is one thing to consider, but all the other optimization tricks should not be ignored. Dimensionality reduction is one such trick. You may be surprised but ranges represent another optimization trick. Essentially they make possible the core algorithm of cognition, which I will discuss shortly.
Context
It is another optimization trick. We are making decisions against some context and therefore consider only what is available there. “Only use what you have got and you won’t need what you have not.” Taking into account what is not available requires more computing resources, but brings no practical benefit.
Contexts should be considered broadly. They are not only about objects nearby. Include also our memory and skills. If something needs our involvement, it may affect our goals. Therefore, our internal needs or desires should also be included.
If we can look at the context to collect more information, it is always good, but essentially we run computations against our mental model of the context. And when we communicate, we do it against context, even though in each utterance we only mention currently relevant phenomena.
Algorithm
We are almost ready to discuss the data flows in a prospective system based on this model. We only need to understand the core algorithm of cognition:
It’s the selection of the most fitting option from the available ones, respecting relevant constraints.
It is best demonstrated by the game 20 Questions. Starting with the available options (categories known to us), we respect constraints (properties of the object being recognized). At each step, roughly half of the remaining options are filtered out, making this a logarithmic-complexity algorithm - efficient.
Context provides what options are available. Dimensionality reduction and the finite number of ranges (vs the infinite number of points) make the selection finite and even real-time compatible.
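The core algorithm admits a compact sketch (the animal data and the oracle-style `answers` callback are my illustrative assumptions): each question filters the remaining options by one property.

```python
def twenty_questions(options, questions, answers):
    """options: name -> property dict; answers(q) is the oracle's reply."""
    remaining = dict(options)
    for q in questions:
        if len(remaining) == 1:
            break                      # selection done
        want = answers(q)
        remaining = {k: v for k, v in remaining.items() if v.get(q) == want}
    return remaining

animals = {"cat":  {"fur": True,  "flies": False},
           "bat":  {"fur": True,  "flies": True},
           "crow": {"fur": False, "flies": True}}
target = animals["bat"]
result = twenty_questions(animals, ["fur", "flies"], lambda q: target[q])
```

If each question splits the field roughly in half, n options need about log2(n) questions - which is what makes the selection real-time compatible.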
Facts
Even though contexts may be considered as limited in the number of objects, they still provide a huge number of topics to talk about. Here I will combine what we covered above - objects, formations, transformations.
A fact cannot be considered separately from context. For context, we need to provide an object, which is basically a formation - most likely a recursive one, because its placeholders will contain other formations. The context’s formation requires additional information - the known history of interactions within it and the current dynamics of continuous actions.
To cover the whole context may be quite demanding for memory and other cognitive processes. It is possible to filter context for relevant information as is often done in stories. We may record details only if they are relevant, even if they are not used. For example, there may be several tools, we will use one of them, but we may mention the others to show that our choice was reasonable.
The mental model of the context may be incomplete, but its details will be added with each perceptual frame. Relevant items will have priority for being added.
A single fact involves reporting objects and actions. I propose to assign identifiers to facts and to include the context identifier and a time stamp (or a time period range). That will do for identification. Then we will use identifiers of objects from the context and of actions from general memory. Unlike how facts are often reported in natural languages, which are oriented at intelligent agents, I propose to record for each fact how the relevant properties changed. This will enable us to later report each fact using language.
Context needs to store its known history of interactions within it. Facts are how that can be done.
There is no need to store everything as one huge blob of data. Abstract object information may be stored with the “category storage” and general information about actions may be stored in the “transformations storage” with identifiers helping to retrieve them when a fact retrieval will be requested.
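A minimal sketch of this split storage, with all store names and the sample entries being illustrative assumptions: a fact holds only identifiers plus the property changes, while object and action details live in their own stores.

```python
import itertools
import time

# Separate stores; a fact references them by identifier only.
categories = {"obj1": {"kind": "kettle"}}            # "category storage"
transformations = {"act1": {"name": "boil"}}         # "transformations storage"

facts = {}
_fact_ids = itertools.count()

def record_fact(context_id, object_id, action_id, changes):
    fid = next(_fact_ids)
    facts[fid] = {"context": context_id, "time": time.time(),
                  "object": object_id, "action": action_id,
                  "changes": changes}    # how relevant properties changed
    return fid

fid = record_fact("kitchen", "obj1", "act1",
                  {"water_temperature": ("cold", "boiling")})
```

On retrieval, the identifiers pull the pieces back from their stores, reconstructing the fact rather than replaying one stored blob.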
This organization of storing facts is different from the often proposed “hologram” model of memory. Facts are broken into pieces and reconstructed upon retrieval. Even missing some pieces is not critical. Most likely, there is some redundancy to prevent missing critical information. Or there is an option of reinserting a missing piece.
What is interesting about the proposed approach to storing facts within the “context formation” is that an attempt to retrieve any of the facts revives all the records related to that context, which is recreated based on its relevant pieces and stored history of interactions.
Questions
To query the above model of storing facts, one needs to provide keys. Within a fact, such keys with respect to any constituent are all the other constituents. Declarative sentences often miss repetitive information. But in memory, missing pieces (context, time, etc.) are stored and this allows retrieving them upon provision of the remaining constituents.
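A sketch of that key symmetry (the fact records below are toy assumptions): any subset of a fact’s constituents selects the facts that supply the rest.

```python
facts = [
    {"context": "kitchen", "object": "kettle", "action": "boil",  "time": "t1"},
    {"context": "kitchen", "object": "cup",    "action": "fill",  "time": "t2"},
    {"context": "garden",  "object": "kettle", "action": "carry", "time": "t3"},
]

def query(**known):
    """Use whatever constituents are known as keys; get the full facts back."""
    return [f for f in facts if all(f[k] == v for k, v in known.items())]

matches = query(object="kettle", context="kitchen")  # recovers action and time
```

The same store answers “what happened to the kettle?” and “when was the boiling?” - the roles of key and value are not fixed.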
Language
Language is often misunderstood (pun intended) as delivering information. The proposed theory of intelligence defines the role of language as delivering “filters” and suggesting “connections between phenomena.” Language doesn’t have to encode and deliver every piece of information if a listener/reader is intelligent and has access to the same context. Applying the provided filters to that context, the listener will figure out the relevant phenomena and will validate the suggested connections, as well as retrieve all the additional information required.
My favorite example is the phrase “Take a seat.” It does not encode the location of a chair, but any intelligent agent will have no problem complying with such a request. Language guides attention and invites other cognitive abilities of the other party to communication. Hence, it doesn’t need to substitute thinking.
I have already mentioned the paper Nature of Cognitive Computation. There I describe in detail how references and disambiguation work and how language uses categories and the core algorithm. Language so far is the only intelligent tool we have invented. Hopefully, we will soon add another one.
Memory
Biological memory relies on two small modules of the brain - the hippocampus and the amygdala. The former is known to contain place and time cells and is often considered to be “recording memories.” Too many duties for a small area. That hypothesis is based on the famous story of patient HM: without the hippocampus, he lost the ability to form long-term memories. Above, I often mentioned identifiers and their role in the retrieval of memories. I propose to consider a different hypothesis about the role of the hippocampus, namely, that it issues identifiers for facts. Those identifiers may use location and time information, similar to how modern smartphones store photographs and videos with geotags and time stamps.
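As a hedged sketch of this identifier-issuer hypothesis (the format and `issue_id` name are my assumptions), a fact ID could simply combine place, time, and a sequence number, much like a photo’s geotag and stamp:

```python
import itertools

_seq = itertools.count(1)  # disambiguates facts sharing place and time

def issue_id(place: str, timestamp: str) -> str:
    """Issue a fact identifier from location and time information."""
    return f"{place}|{timestamp}|{next(_seq)}"

fid = issue_id("kitchen", "2024-01-01T09:30")
```

Such IDs are cheap to issue at perception time, and they make later retrieval by place or period a prefix lookup rather than a content search.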
The role of the amygdala is tightly connected to emotions. Emotional keys also make memories stronger. Patients with a damaged hippocampus but a functioning amygdala can store emotion-involving memories even without remembering the facts associated with those emotions.
It may seem that a machine does not need the emotional keys and emotions. I have a role for those in my model. I will discuss it next.
Novelty
Consider the problems of overextension and underextension in language acquisition. If “bird” applies only to one toy (a too-narrow range, underextension) and another bird toy or a real bird is encountered, it is important not only to determine that the system should learn (to correct a mistake in categories) but also to provide the mechanism (widening the range). If the term is applied to other objects, for example, bats (a too-wide range, overextension), a different correction (breaking the range in two) is required. Correction is semantic and error-dependent. It is not enough to just state that an error is there. The type of an error maps to the type of correction - another case of causality, viewed broadly.
Continual learning is about having a hierarchy of categories and either breaking some range into smaller subcategories or widening some range to cover additional cases. It may apply to hierarchies of both objects and actions.
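The two corrections are semantically different operations on the hierarchy, which a sketch makes concrete (the wingspan range and criterion names are toy assumptions):

```python
categories = {"bird": {"range": (10, 12), "criteria": []}}  # learned from one toy

def widen(name, value):
    """Underextension: stretch the range to cover the new example."""
    lo, hi = categories[name]["range"]
    categories[name]["range"] = (min(lo, value), max(hi, value))

def split(name, criterion):
    """Overextension: introduce a differentiating criterion -> two subcategories."""
    parent = categories[name]
    categories[f"{name}/+{criterion}"] = {"range": parent["range"],
                                          "criteria": [criterion]}
    categories[f"{name}/-{criterion}"] = {"range": parent["range"],
                                          "criteria": [f"not {criterion}"]}

widen("bird", 90)              # a real bird, far bigger than the toy
split("bird", "has_feathers")  # bats forced a split
```

Note that neither correction rebuilds the tree; each touches only the node where the error surfaced.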
Individual properties combine into composite categories, leading to a combinatorial explosion of complexity. The core algorithm chisels away one property at a time, efficiently extinguishing that complexity. For intelligence, it does not matter where it learns about a new property. What matters is what difference that property makes. Knowing that, intelligence will have a good starting point to address relevant tasks. For example, long ago, bridge builders learned about resonance. When engineers encountered resonance while conquering supersonic flight, they had an idea of how to approach it.
Note the flexibility of the system. To add or remove elements or update their positions, it doesn’t need to be redesigned or retrained. Consider the example of amulets. One may think that they help with exams. But a long enough history of relevant experiments will show that amulets have little effect on exam results. Removing that parameter from a tree is easy.
Gaps in knowledge deserve attention when discussing novelty. Humans have a long history of dealing with different temperatures. But they could not affect temperature efficiently. For example, to make ice cream, they came up with a mixture of salt and ice. But that approach was not enough to liquefy nitrogen, let alone helium. Coming up with “actions” that can change an affected property to previously unreachable ranges is an interesting challenge.
There is always some unexplained depth to categories and causal mappings, so there is always some unexplained residual. The core algorithm allows attacking it looking for reliable patterns. The process starts with forming a hypothesis about what properties may be taken into account to explain some differences in results. Using the “ceteris paribus” principle, validation may be quite fast.
Flows
It all starts with an agent recognizing the current context. At this moment, no perceptual filters are active, unless the agent has some unfinished long-running task, in which case some filters mapped via that task will be half-active.
Context recognition happens from the top formation. Perception in this time range supplies properties of a scene/location and of major components. Perceptual filters start activating from the top formation levels. Additional filters are engaged if there are uncertainties - if we are uncertain whether we are watching a game or a training session, for example. Any unclarities form perceptual filters based on the defining features of competing formations/categories.
Special attention should be paid to actions in progress and to agents who perform them. Static scenes provide time for their gradual recognition and planning what to do. Dynamic environments impose time pressures for decision-making. Hence, recognition of dynamics should have higher priority compared to static details of a scene.
With a clear understanding of the context, the agent follows the goal mappings from the context, taking into account positions in the hierarchies as priorities. Having goals creates strong perceptual filters to locate any constraints/facilitators. This enables the agent to figure out both abstract and specific action plans.
With plans ready, signals are sent to actuators to perform specific actions under abstract goals. Perception tracks the environment via two types of perceptual filters - purpose-dependent and purpose-altering ones.
Encountering an event for which the agent has no explanation/mapping in terms of causal mappings, an “emotional” trigger should force the agent to collect and persist as much information about the event as possible. Later, the Semantic Binary Search may try to find a missing causal link or learn something new.
Flows for Learning
Let me address a robot’s learning in its early stages. Here we need to provide a robot first with some “innate” knowledge and skills and send it into an environment so that it can learn more skills and knowledge.
We could start by analyzing dictionary entries to figure out properties and ranges relevant to humans. It is necessary to differentiate the individual meanings of polysemous words and to recognize when synonyms refer to the same ranges of the same properties. Starting with the 5000 most common words would be enough. This will help to establish some hierarchies for objects, relations, and actions.
To ground these properties and ranges in perception, it is necessary to demonstrate pairs of different objects to a robot so that differences could be figured out. A human could check internal representations to confirm that correct differentiating criteria are picked up at various levels of hierarchy. It is important also to demonstrate differences in relations and actions.
The next stage includes actuation, when after performing some action a robot can perceive the difference produced. After correct mappings are verified by a human, a robot can experiment with various parameters - direction, speed, etc.
Finally, it may be interesting to try communication with a clone-robot, with a different robot (which will need to adapt the communicated mappings to its parameters), and with a human (now that symbols are grounded in internal representations of perceived ranges of properties, which also can be affected by the robot).
With that basic training done, a robot will become an interesting entity to watch and interact with.
Inter-Tree Communication
Hierarchies and mappings enable fast, logarithmic-complexity processing of cognitive tasks. In this section, I will make a detour to touch on slow tasks, which take a long time to complete - years or even generations. Like the switch in understanding lightning and thunder from Thor to electricity.
I have already mentioned that intelligence is about reliable recipes, even if unexplained ones. Many so-called scientific explanations reach fundamental limits at some level, but thanks to them, we have formulas and procedures for reliable recipes in some of our interactions with the world. Great support in that is provided by metaphors. Let’s see how the proposed hierarchies enable them.
It is important to record all mappings from actions to effects. Sparks from static electricity cannot compare to lightning, and the sounds that accompany those sparks cannot compare to thunder - and still. The time lapse between a lightning strike and its thunder makes it even more difficult to map the two. So how did Franklin pick up that metaphor? Intelligence is interested in what makes a difference, in what is relevant. Size and loudness are not relevant in explaining the nature of a phenomenon. The production of light and sound is important. The time lapse is not important, but it is confusing: it disconnects lightning from thunder. Most likely, multiple light-producing phenomena were considered as options to select from. The sparks were included, but probably with low priority. The core algorithm kept considering evidence for higher-priority hypotheses until it discarded all of them. Taking into account other events when light and an accompanying sound were recorded with a lapse, like a gunshot, it is possible not only to conclude that sparks may “explain” lightning but also to realize how far from an observer lightning usually happens. That allowed connecting the trees for light and sound.
Directions for Future Research
The neocortex organizes neurons into columns and minicolumns. I suspect that these may be responsible for handling properties. It would be interesting to come up with roles of a single neuron in a column.
It is known that the neocortex is divided into domain-specific areas - language, face recognition, etc. It will be interesting to investigate the principles of packaging/location optimization. Viewing the dynamics of neural real estate through the processes of learning or trauma, it is worth taking a look at how the reallocation of areas occurs, how knowledge is transferred between neurons when that happens, or how they reacquire it. The role of redundancy in that is also good to understand.
It is one thing when a single area or module grows bigger in the number of neurons. It is another thing when a new module is added to the brain. Recalling relations and formations, that may open up new capabilities. It is interesting to reconsider the brain evolution from that perspective.
But I would leave these ideas to other researchers. At the moment, I am more interested in polishing data structures and flows, proposed in this post. You may expect more posts related to this goal.

