In this post, I will unpack, for you and for myself, the concept of open-endedness, drawing on two papers: "Open-Endedness is Essential for Artificial Superhuman Intelligence" by Edward Hughes, Michael Dennis, Jack Parker-Holder, Feryal Behbahani, Aditi Mavalankar, Yuge Shi, Tom Schaul, and Tim Rocktaschel (Article 1), and "OMNI: Open-Endedness via Models of Human Notions of Interestingness" by Jenny Zhang, Kenneth Stanley, Joel Lehman, and Jeff Clune (Article 2).
The concept of open-endedness seems too vague to be helpful. It is not a property of a system but a property of the interaction between two systems, one of which plays the role of an observer (researcher). The outcome depends on the observer's ability to detect the artifacts generated by the observed system and to recognize novelty in them. The situation becomes even more confusing when the generated artifacts are internal to the observed system, and the system's design prevents access to them without destroying the system itself.
An example of an open-ended system that probably does not meet the expectations of the concept's authors is a slot machine observed by a person; the artifact is the sequence of numbers generated up to a given moment in time. Each new artifact differs from the previous one and can be observed, studied, and so on. What "useful residue" do we have as a result?
"Article 2 proposes to use foundation models, in particular LLMs, to figure out "interesting" tasks,"
Yes, I strongly agree with your view here. LLMs can only determine what is novel by matching against patterns of what has already been called novel. They cannot properly evaluate something that does not yet exist, nor judge its significance. What we consider interesting changes over time, and LLMs cannot capture this shifting relevance.
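To make the objection concrete, here is a minimal sketch of the "novelty by matching against the known" idea: an artifact counts as novel only if it lies far enough from everything already in an archive of past artifacts. The embedding function and the distance threshold are hypothetical toy placeholders of my own, not anything from either paper; the point is only that the judgment can never reference anything outside the archive.

```python
# Toy novelty detector: novelty is defined purely relative to an archive
# of previously seen artifacts. The embedding and threshold are
# hypothetical stand-ins, chosen for illustration only.
from math import dist

def embed(artifact: str) -> tuple:
    # Toy stand-in for a learned embedding: text length and vowel count.
    return (len(artifact), sum(c in "aeiou" for c in artifact))

def is_novel(artifact: str, archive: list, threshold: float = 2.0) -> bool:
    # "Novel" means: far from every artifact already in the archive.
    e = embed(artifact)
    return all(dist(e, embed(a)) > threshold for a in archive)

archive = ["explore the maze", "explore the cave"]
print(is_novel("explore the maze", archive))    # already seen -> False
print(is_novel("prove a new theorem", archive))
```

Whatever the detector says, it is a statement about distance to the archive, not about significance; a genuinely unprecedented but quietly important artifact and a meaningless random string can score the same.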
As for everything else, I am considering it with interest, but my thoughts are still in progress.
That is fully in line with my view of language: a writer only offers a possible connection between the phenomena mentioned. It is up to readers to check the proposed connection against their own life experience and decide whether they buy it or not.