When I started taking the idea of AGI seriously, I noticed a shift in how I thought about the world: rather than seeing people or institutions, I just saw agents. Napoleon, Washington, and Alexander, for example, are some of the great agents of history. We remember them not because they were of type Homo sapiens, but because they were extremely effective solutions to the dynamical systems they emerged into. Over time, a smaller and smaller percentage of such agents will be human, but that doesn’t really matter.
Thinking about this, I started to wonder about the very long-term behavior of populations of agents in our universe. Fast forward a billion years and assume that not all agents have died out: what are they doing? Why?
Presumably they would have been making progress toward some best possible existence, or even getting asymptotically close to it. What does this even mean? First, some preliminaries: I assume physicalism, including that phenomenal experience supervenes on the physical world, such that any possible experience supported by our universe is accessible to any sufficiently capable system made entirely of ordinary matter arranged in the right way.
Let us say that we can judge the desirability of a state by the strength of its attraction, that is, by how psychologically addictive it is. We can model this as an attractor basin in the phase space of the agent. Forgo the typical connotations of addiction here: all I am saying is that we can judge the desirability of a state by the rate at which an agent voluntarily chooses to access it.
Agents also care about their continued existence, so these states can’t come at a significant cost to their odds of survival. This probably (who knows, we’re making things up here) acts as a major regularizer over the desirability of possible states: where the basin is too deep, the risk of getting stuck, or of being unable to access other states when needed, might be too great.
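Here is a minimal sketch of that tradeoff, with everything beyond the bare idea invented for the example: each state gets a made-up “depth” (the chance of staying put on a given step), desirability is proxied by visitation rate, and an occasional hazard can only be survived by leaving the current basin. Deeper basins collect more visits, but they also fail more often when survival demands a move.

```python
import random

# Toy model; every name and number here is invented for illustration.
# Each state has a "depth": the probability that, once entered, the agent
# stays put on a given step. Desirability is proxied by visitation rate.
# The survival regularizer shows up as an occasional hazard the agent can
# only survive by leaving its current basin.

STATE_DEPTH = {"baseline": 0.2, "pleasant": 0.6, "very_deep": 0.97}
HAZARD_RATE = 0.05  # per-step chance that survival requires switching states

def simulate(steps=100_000, seed=0):
    rng = random.Random(seed)
    state = "baseline"
    visits = {s: 0 for s in STATE_DEPTH}
    failures = 0  # times the agent was stuck in a basin when it had to move
    for _ in range(steps):
        visits[state] += 1
        stuck = rng.random() < STATE_DEPTH[state]
        if rng.random() < HAZARD_RATE and stuck:
            failures += 1
            state = "baseline"  # reset just to keep the toy running
        elif not stuck:
            state = rng.choice(list(STATE_DEPTH))  # wander to a random basin
    return visits, failures

if __name__ == "__main__":
    visits, failures = simulate()
    total = sum(visits.values())
    print("visitation rates:", {s: round(v / total, 3) for s, v in visits.items()})
    print("times a basin was fatally deep:", failures)
```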
If you took a large number of powerful agents in the far future and ran time forward, I suspect we would see certain fixed points appear over and over again. These are some of the “ideal experience” trajectories particular to the physics being evaluated. (Thus these fixed points are of interest in physics design.) If a universe supports intelligence, then, barring extinction, we should eventually expect this “ideal existence” to be implemented and probably widespread.
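One toy way to picture “the same fixed points appear over and over”: give many agents random starting configurations and let each greedily improve under one shared, invented physics (a multi-well utility I made up for the example). Despite the random starts, they pile up on a small set of recurring endpoints.

```python
import math
import random

# Invented dynamics: each "agent" is just a scalar configuration x that
# climbs a fixed multi-well utility U(x) = -cos(3x) - 0.1 * x**2.
# The recurring endpoints stand in for the "ideal experience" fixed points.

def utility_gradient(x):
    return 3 * math.sin(3 * x) - 0.2 * x  # dU/dx for the made-up U above

def run_agent(x, steps=5000, lr=0.01):
    for _ in range(steps):
        x += lr * utility_gradient(x)  # greedy hill-climbing on U
    return round(x, 2)  # coarse-grain so nearby endpoints count as one point

if __name__ == "__main__":
    rng = random.Random(1)
    endpoints = [run_agent(rng.uniform(-5, 5)) for _ in range(200)]
    counts = {}
    for e in endpoints:
        counts[e] = counts.get(e, 0) + 1
    print("fixed points and how many of the 200 agents landed on each:")
    for point, n in sorted(counts.items()):
        print(f"  x* = {point}: {n}")
```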
One technology that will certainly exist in such a far future is the ability to arbitrarily alter the borders distinguishing two agents, making it possible to merge them “brain to brain.” (And, conversely, to individuate them.) One obvious question we can ask is: in the infinite limit, how many agents are there in total, and does that number converge?
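A hedged way to play with the convergence question is a toy merge/split process with made-up rates: each agent may individuate and each pair may merge in every epoch. Whether the count converges then reduces to how the two rates scale, which is exactly the assumption left open here.

```python
import random

# Toy process; both rates are invented. The expected change per epoch is
# roughly SPLIT_P * n - MERGE_P * n * (n - 1) / 2, so the count drifts toward
# the n where the two terms balance (about 41 with these numbers) rather
# than growing or shrinking without bound.

SPLIT_P = 0.01    # chance per agent, per epoch, of individuating into two
MERGE_P = 0.0005  # chance per pair, per epoch, of merging into one

def step(n, rng):
    splits = sum(rng.random() < SPLIT_P for _ in range(n))
    merges = sum(rng.random() < MERGE_P for _ in range(n * (n - 1) // 2))
    return max(1, n + splits - merges)

if __name__ == "__main__":
    rng = random.Random(42)
    n = 100
    for epoch in range(2001):
        if epoch % 500 == 0:
            print(f"epoch {epoch:4d}: {n} agents")
        n = step(n, rng)
```

Under other rate assumptions (say, merging becoming cheaper as agents grow more capable) the count can just as easily collapse to one or diverge; the only point of the sketch is that “does it converge?” becomes a question about those rates.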
While I’m not totally convinced, I find Dark Forest theory compelling, and I suspect that over long enough times and distances agents will prioritize secrecy, which, combined with cosmic inflation, leads to islands of experience disconnected from their environments. In other words, I think I buy the plausibility, if not the likelihood, of the transcendence hypothesis.
Just as the stock market is easier to predict over long timescales than over short ones, I suspect that a field of very-long-term economics would be fruitful, and probably better at making correct, concrete predictions than its shorter-horizon counterparts. I suspect that the single greatest error of modern science fiction is assuming that the clustering of agency in the far future looks like the many individuals we have today, but either way, there is something deeply fascinating about the game theory of such a world.