Complexity penalties mean the optimal strategy for a given game can’t have unbounded recursion depth unless it’s either tail-call optimized or producing exponential rewards. Each recursive split adds at least one bit of complexity to a strategy’s time-unrolled model.
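A minimal sketch of that counting, in toy code of my own (the encoding and the names `Act`/`Split` are assumptions for illustration, not anything standard): model a strategy as either an unconditional action or a binary split on a condition, and charge at least one bit per split in the unrolled tree.

```python
# Toy encoding of a strategy's time-unrolled model: a leaf is an
# unconditional action, an internal node is a binary split. Each
# split contributes at least one bit of description length.

from dataclasses import dataclass
from typing import Union

@dataclass
class Act:               # leaf: always do this action
    name: str

@dataclass
class Split:             # internal node: branch on a condition
    condition: str
    if_true: "Strategy"
    if_false: "Strategy"

Strategy = Union[Act, Split]

def complexity_bits(s: Strategy) -> int:
    """Lower bound on description length: one bit per binary split."""
    if isinstance(s, Act):
        return 0
    return 1 + complexity_bits(s.if_true) + complexity_bits(s.if_false)

# An unconditional habit costs 0 bits; adding one condition costs >= 1.
always_cooperate = Act("cooperate")
tit_for_tat = Split("opponent defected last round",
                    Act("defect"), Act("cooperate"))

print(complexity_bits(always_cooperate))  # 0
print(complexity_bits(tit_for_tat))       # 1
```

Rounding a conditional branch off to an unconditional habit drops that bit back out, which is the move described a few posts down.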
Most game theory I’ve seen does not grapple with the implications of this. It is a different bound from mere computational cost: the cost of computation can be priced in locally, but complexity is a global bound. The context matters.
(If you know of game theory considering the time-unrolled behavior of the player as a model whose accuracy and complexity must be balanced, pls let me know! I have looked and not found, but that doesn’t mean I used the right keywords…)
This says that the optimal strategy for a player is determined relative to the player’s self-model. If you model yourself as choosing between two options under a certain condition, the unrolled tree grows. But if you round that choice off to zero, the tree doesn’t gain a new branch.
In effect, there is a “decision budget”. Adding more fine-grained decisions here means you have to make fewer fine-grained decisions somewhere else. Not less compute, but fewer decisions. Or put another way, this is the complexity cost of untaken options.
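One way to make the budget explicit, under the assumption (mine, not the thread’s) that the complexity penalty enters the objective linearly:

```latex
% V(\pi): penalized value of strategy \pi
% K(\pi): description length, in bits, of \pi's time-unrolled model
% \lambda: price per bit of complexity (assumed constant)
V(\pi) = \mathbb{E}[R \mid \pi] - \lambda\, K(\pi)
```

At an optimum, an extra split must pay for itself with at least λ of expected return, so holding V fixed, a bit spent on fine distinctions here is a bit unavailable elsewhere.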
The equivalent of “cheaper compute” here is “better background priors”. How many decisions you are making is the divergence between your behavior given the state of this moment and your behavior in (your model of) the average moment of experience. Good habits!
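Reading “divergence” literally, one candidate formalization (my guess at the intended quantity, not the author’s stated math) is the KL divergence between your action distribution given this moment’s state and your habitual average action distribution; averaged over moments, that is the mutual information between state and action.

```python
# Sketch of "how many decisions am I making right now" as an
# information quantity: KL divergence from the habitual prior.
# The prior and the action distributions below are made up.

import math

def kl_bits(p, q):
    """D_KL(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Habitual prior over two actions, learned from the "average moment":
prior = [0.5, 0.5]

# Behaving exactly as the habit predicts costs zero decisions:
print(kl_bits([0.5, 0.5], prior))  # 0.0 bits

# Fully overriding the habit in response to the state costs a full bit:
print(kl_bits([1.0, 0.0], prior))  # 1.0 bit
```

A better prior, one already matched to the situations you actually face, makes most moments cost close to zero bits. That’s the sense in which good habits are the analogue of cheaper compute.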
This is sort of like a mirror of common knowledge…it’s common actions. An agent’s habitual past actions constrain its future optimal actions. Which means, in some sense, merely usually-acting-some-way is a credible precommitment to continue the implied strategy.
Unless, of course, the player is acting deceptively: paying a surprisingly high complexity cost to model themselves as usually acting another way, in order to maintain different background priors, because they expect to profit by betraying the deceived later.
Optimal strategies are robustly optimal. A strategy with a higher expected return that leads to ruin is not optimal. Robustness relies on simplicity, which is relative to the theory of mind of self, other, and the collective “we”.
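A toy simulation of the ruin point, with made-up numbers (the 60/40 odds, even payoff, and bet fractions are all illustrative): the all-in strategy maximizes single-round expected return but almost surely goes bust, while a fractional (Kelly-style) bet compounds.

```python
# Higher per-round expected return is not optimal if it courts ruin:
# betting the whole bankroll on a favorable coin still hits zero on
# the first loss, while a fractional bet survives and compounds.

import random

def run(bet_fraction, rounds=100, seed=0):
    random.seed(seed)
    wealth = 1.0
    for _ in range(rounds):
        stake = wealth * bet_fraction
        # 60% chance to win the stake, 40% chance to lose it:
        wealth += stake if random.random() < 0.6 else -stake
        if wealth <= 0:
            return 0.0   # ruin: no recovery possible
    return wealth

print(run(bet_fraction=1.0))  # all-in: one loss wipes you out
print(run(bet_fraction=0.2))  # Kelly fraction for 60/40 even odds
```

The all-in bettor has the higher expected wealth after any single round, yet over repeated play the simple fractional rule dominates. Robustness beats raw expectation.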
These rules about optimal decisions under uncertainty are not suggestions; they are laws in the same way Bayesian updating is. What you know of yourself is causal over your optimal strategy, and there is an unavoidable complexity cost to deception, tying the self-model to reality.