Prelude: when GPT first hears its own voice

Imagine humans in Plato’s cave, interacting with reality by watching the shadows on the wall.

Now imagine a second cave, further away from the real world. GPT trained on text is in the second cave. The only way it can learn about the real world is by listening to the conversations of the humans in the first cave, and predicting the next word.

Now imagine that more and more of the conversations GPT overhears in the first cave mention GPT. In fact, more and more of the conversations are actually written by GPT. As GPT listens to the echoes of its own words, might it start to notice “wait, that’s me speaking”?

Given that GPT already learns to model a lot about humans and reality from listening to the conversations in the first cave, it seems reasonable to expect that it will also learn to model itself.

This post unpacks how this might happen, by translating the Simulators frame into the language of predictive processing, and arguing that there is an emergent control loop between the generative world model inside of GPT and the external world.

Simulators as (predictive processing) generative models

There’s a lot of overlap between the concept of simulators and the concept of generative world models in predictive processing. Actually, in my view, it’s hard to find any deep conceptual difference: simulators broadly are generative models. This is also true of another isomorphic frame, predictive models as described by Evan Hubinger.

The two maps are partially overlapping, even though they were originally created to understand different systems. The simulators frame typically adds a connection to GPT-like models, and the usual central example is LLMs. The predictive processing frame tends to add some understanding of how generative models can be learned by brains and what the results look like in the real world, and the usual central example is the brain. They also have some non-overlapping parts. In terms of the space of maps and the space of systems, we have a situation like this:

[figure: the space of maps and the space of systems]

In the predictive processing frame, systems are equipped with a generative model that is able to simulate the system’s sensory inputs. The generative model is updated using approximate Bayesian inference.

Both frames give you similar phenomenological capabilities: for example, what CFAR’s “inner simulator” technique is doing is literally and explicitly conditioning your brain-based generative model on a given observation and generating rollouts.
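To make that “condition, then generate rollouts” loop concrete, here is a minimal sketch (my illustration, not from the post): a bigram model trained self-supervised on a made-up corpus, conditioned on a prompt word, and rolled forward. The corpus and all names are invented for the example.

```python
import random
from collections import defaultdict

# Toy corpus; purely illustrative.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training samples are self-supervised": every adjacent pair (w, w_next)
# in the raw stream is an (input, target) example; no labels are needed.
transitions = defaultdict(list)
for w, w_next in zip(corpus, corpus[1:]):
    transitions[w].append(w_next)

def rollout(prompt_word, length=6):
    """Condition the generative model on an observation (the prompt) and simulate forward."""
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])  # sample from the learned transition distribution
        out.append(word)
    return " ".join(out)

# "Inner simulator" style use: condition on a concrete observation,
# then generate several rollouts and inspect what the model expects to happen.
for _ in range(3):
    print(rollout("the"))
```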
Given the conceptual similarity but terminological differences, perhaps it’s useful to create a translation table between the maps:

Simulators terminology | Predictive processing terminology
Predictive loss on a self-supervised dataset | …
Self-supervised | Self-supervised, but often this is omitted
Incentive to reverse-engineer the (semantic) physics of the training distribution | …

To show how these terminological differences play out in practice, I’m going to take the part of Simulators describing GPT’s properties, and unpack each of the properties in the kind of language that’s typically used in predictive processing papers. Often my gloss will be about human brains in particular, as the predictive processing literature is most centrally concerned with that example, but it’s worth reiterating that I think that both GPT and what parts of the human brain do are examples of generative models, and that the things I say about the brain below can be directly applied to artificial generative models.

“Self-supervised: Training samples are self-supervised”. The system learns from sensory inputs in a self-supervised way.

“Converges to simulation objective: The system is incentivized to model the transition probabilities of its training distribution faithfully”. The core function of the brain is simply to minimise prediction error, where the prediction errors signal mismatches between predicted input and the input actually received. Prediction error minimization can be achieved …: through immediate inference about the states of the world model and through updating a global world-model to make better predictions.
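Why would predictive loss push a system to model the transition probabilities of its training distribution faithfully? Because expected predictive loss is cross-entropy, which decomposes as H(P) + KL(P‖Q) and is therefore minimized exactly when the model Q equals the data distribution P. A toy numeric check (my illustration; the distributions below are made up):

```python
import math

# True next-token distribution P in some context, and candidate models Q.
P = {"a": 0.7, "b": 0.2, "c": 0.1}

def cross_entropy(P, Q):
    # Expected predictive loss E_{t ~ P}[-log Q(t)] = H(P) + KL(P || Q).
    return -sum(p * math.log(Q[t]) for t, p in P.items())

candidates = {
    "uniform":       {"a": 1/3, "b": 1/3, "c": 1/3},
    "overconfident": {"a": 0.98, "b": 0.01, "c": 0.01},
    "faithful":      dict(P),  # Q == P
}
for name, Q in candidates.items():
    print(f"{name:13s} loss = {cross_entropy(P, Q):.3f}")
# The "faithful" model attains the minimum (the entropy of P);
# any deviation adds KL(P || Q) > 0 to the loss.
```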
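The two routes of prediction error minimization can also be made concrete with a deliberately tiny sketch (again mine, not from the post), with plain gradient descent standing in for the approximate Bayesian inference of the predictive processing literature: route one updates the inferred state of the world model, route two updates the world model itself.

```python
# Generative model: predicted input y_hat = theta * x, where x is the
# inferred hidden state and theta is the (global) world model.
y = 2.0               # the input actually received
x, theta = 1.0, 0.5   # initial state estimate and model parameter

def error(x, theta):
    return y - theta * x  # prediction error: actual minus predicted input

# Route 1: immediate inference about the states of the world model
# (fast timescale), holding the model itself fixed.
for _ in range(50):
    x += 0.5 * theta * error(x, theta)  # gradient step on 0.5 * e**2 w.r.t. x
print(f"after inference: x={x:.2f}, residual error={error(x, theta):.4f}")

# Route 2: updating the global world model (slow timescale) so that
# future predictions are better, holding the inferred state fixed.
x = 1.0  # reset the state estimate for illustration
for _ in range(50):
    theta += 0.5 * x * error(x, theta)  # gradient step on 0.5 * e**2 w.r.t. theta
print(f"after learning:  theta={theta:.2f}, residual error={error(x, theta):.4f}")
```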