The Pervasiveness of Models

Models and simulations of many kinds are tools for dealing with reality; they are as old as humanity itself. Humans have always used mental models to better understand reality, to make plans, to consider different possibilities, to share their ideas with others, to try out changes and alternatives, to develop blueprints for realization of some ideas, or to convince themselves and others that certain ideas cannot be realized. — Hartmut Bossel, Modeling and Simulation

The thought thread for this post started from the amount of flak that’s been tossed out attacking the usefulness and results of climate models. Reflecting on this situation, I decided that one of the reasons such negative characterizations can take hold is that few people realize how pervasively models are used in everyday life, and so they think of climate models as something apart from what we, as human beings, do day in and day out. This is the first in what I plan as a series of posts on models and modeling.

Marvin Minsky, in The Society of Mind, defines a model as: “Any structure that a person can use to simulate or anticipate the behavior of something else.”

Gene Bellinger, on his website on Systems Thinking, defines a model as: “a simplified representation of a system at some particular point in time or space intended to promote understanding of the real system.” Bellinger goes on to say:

The most important question to ask should relate to the extent to which the models we develop promote the intentioned development of our understanding. The extent to which a model aids in the development of our understanding is the basis for deciding how good the model is. In developing models there is always a trade off. A model is a simplification of reality, and as such, certain details are excluded from it. The question is always what to include and what to exclude.

Bellinger’s statement that a model is a simplified representation of a system is important. Polish-American scientist and philosopher Alfred Korzybski stated this as “the map is not the territory,” encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself.

In Empirical Model-Building and Response Surfaces (1987), Box and Draper noted that: “essentially, all models are wrong, but some are useful” (p. 424) and “remember that all models are wrong; the practical question is how wrong do they have to be to not be useful” (p. 74).

Gregory Bateson, in “Form, Substance and Difference,” from Steps to an Ecology of Mind, elucidates the essential impossibility of knowing what the territory is, as any understanding of it is based on some representation:

We say the map is different from the territory. But what is the territory? Operationally, somebody went out with a retina or a measuring stick and made representations which were then put on paper. What is on the paper map is a representation of what was in the retinal representation of the man who made the map; and as you push the question back, what you find is an infinite regress, an infinite series of maps. The territory never gets in at all. […] Always, the process of representation will filter it out so that the mental world is only maps of maps, ad infinitum.

Bateson also points out that the usefulness of a map (a representation of reality) is not necessarily a matter of its literal truthfulness, but its having a structure analogous, for the purpose at hand, to the territory.

Before going further and deeper into modeling in the abstract, I wanted to bring up an example of mental modeling from Gary Klein’s book Sources of Power, in which he talks about field research on the nature of expertise and expert decision making. He also goes into this same material in a video on adaptive decision making. In the following, Klein is commenting on the observation that firefighter commanders don’t compare different solutions to a problem but pick one that matches the pattern of the current situation and that they expect to be a sufficient solution. Part of arriving at this expectation is a process of mental simulation, i.e. execution of a mental model.

The commanders’ secret was that their experience let them see a situation, even a nonroutine one, as an example of a prototype, so they knew the typical course of action right away. Their experience let them identify a reasonable reaction as the first one they considered, so they did not bother thinking of others. They were not being perverse. They were being skillful. We now call this strategy recognition-primed decision making.

To evaluate a single course of action, the lieutenant imagined himself carrying it out. Fire-ground commanders use the power of mental simulation, running the action through in their minds. If they spot a potential problem, like the rescue harness not working well, they will move on to the next option, and the next, until they find one that seems to work. Then they carry it out. As the example shows, this is not a foolproof strategy. The advantage is that it is usually better than anything else they can do.

Klein’s observation that the strategy is not foolproof coincides with models being approximations of reality, not reality itself. The commander runs a mental simulation, but his or her model does not contain all the details of the situation, only the ones that experience has unconsciously flagged as relevant. Nonetheless, in most cases the model is useful.
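Klein’s recognition-primed strategy has a simple algorithmic shape: consider candidate actions one at a time, in the order experience suggests them, mentally simulate each, and commit to the first that seems workable. A minimal sketch in Python (the function names, the feature-matching stand-in for mental simulation, and the firefighting scenario data are all my own illustrative inventions, not Klein’s):

```python
def simulate(option, situation):
    """Crude stand-in for mental simulation: does the option handle
    every feature the decision maker has flagged as relevant?
    (In reality this is a rich, experience-driven rehearsal.)"""
    return situation["flagged"] <= option["handles"]

def recognition_primed_choice(options, situation):
    """Evaluate candidate actions serially, in the order experience
    suggests them, and commit to the first that survives simulation.
    No side-by-side comparison of alternatives ever happens."""
    for option in options:
        if simulate(option, situation):
            return option          # good enough -> act on it
    return None                    # no workable option found

situation = {"flagged": {"smoke", "trapped_occupant"}}
options = [
    {"name": "ladder rescue", "handles": {"trapped_occupant"}},
    {"name": "interior attack", "handles": {"smoke", "trapped_occupant"}},
]
print(recognition_primed_choice(options, situation)["name"])  # interior attack
```

The loop makes Klein’s point concrete: the first option is rejected only because simulation exposed a problem with it, and search stops at the first sufficient answer rather than seeking an optimum.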

A simulation is a model-based experiment: an experiment performed on the model in the anticipation that the result will increase our understanding of how the real system would respond. This requires that the experiment lie within the set (or space) of valid experiments for a given model, i.e. within the design range of the model. François Cellier, in Continuous System Modeling (pp. 5-6), stresses this connection between model and experiment.

If people say that “a model of a system is invalid” (as can be frequently read), they don’t know what they are talking about. A model of a system may be valid for one experiment and invalid for another, that is: the term “model validation” always relates to an experiment or class of experiments to be performed on a system rather than to the system alone. Clearly, any model is valid for the “null experiment” applied to any system (if we don’t want to get any answers out of a simulation, we can use any model for that purpose). On the other hand, no model of a system is valid for all possible experiments except the system itself or an identical copy thereof.
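Cellier’s point can be made concrete with a textbook case (my example, not his): the small-angle pendulum model, which treats the period as independent of amplitude, is valid for experiments with small release angles and invalid for large ones. The sketch below compares that linear model against a direct numerical integration of the full equation of motion:

```python
import math

def pendulum_period(theta0, g=9.81, L=1.0, dt=1e-4):
    """Period of a pendulum released at angle theta0 (radians), found by
    integrating theta'' = -(g/L)*sin(theta) with semi-implicit Euler and
    timing successive zero crossings (a half period apart)."""
    theta, omega, t = theta0, 0.0, 0.0
    crossings, prev = [], theta
    while len(crossings) < 2:
        omega += -(g / L) * math.sin(theta) * dt
        theta += omega * dt
        t += dt
        if prev * theta < 0:           # theta changed sign
            crossings.append(t)
        prev = theta
    return 2 * (crossings[1] - crossings[0])

def linear_period(g=9.81, L=1.0):
    """Small-angle ('linear') model: period independent of amplitude."""
    return 2 * math.pi * math.sqrt(L / g)

for theta0_deg in (5, 90):
    exact = pendulum_period(math.radians(theta0_deg))
    error = abs(linear_period() - exact) / exact
    print(f"release at {theta0_deg:3d} deg: linear-model error {error:.1%}")
```

For a 5-degree release the linear model’s error is a fraction of a percent; for a 90-degree release it is on the order of 15-20%. Neither model is “valid” or “invalid” in the abstract; validity attaches to the experiment, exactly as Cellier says.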

A model may take many forms: a mental model, a map or sketch on a piece of paper, a reduced-scale physical model of the system, an electrical circuit or hydraulic system that is an analog of the real system, or a computer model. The latter might be equation-based or agent-based or a hybrid of the two. Delving deeper into types of models will be another post. I’ll close this overview with a quote from Richard Hamming.

The purpose of computing is insight, not numbers.

5 Responses to “The Pervasiveness of Models”

  1. […] This post was mentioned on Twitter by Keith Eric Grant, Keith Eric Grant. Keith Eric Grant said: I just put up a blog post on modeling & models: "The Pervasiveness of Models" http://bit.ly/egoQCq #climate #SystemsThinking […]

  2. I look forward to the rest of your series, Keith. As a physicist I’ll be interested to hear what you think about how uncertainty in physical parameters propagates through models, or in general how we can evaluate how well our models and their predictions correspond to reality.

  3. Hehe, I mean, since YOU have a physics background. Not me. Never me!

  4. Interesting start.

    I’m also interested to see that the word “theory” doesn’t appear anywhere here. I’ve always taken the word “theory” to be roughly synonymous (for many purposes at least) with the word “model.” A theory, like a model, is a simple representation or approximation of a complex reality. Anyway, I find it useful to think of them that way.

    Chris.

  5. Chris, I’m partly in agreement. I might think, however, of a theory as a meta-model; the conceptual framework or insight within which to implement a model. I make the distinction in the sense that a theory generally requires some type of implementation in order to yield predictions, and that implementation itself often requires decisions and approximations.

    What a theory also helps do is guide one away from fitting data with some arbitrary collection of parameters and toward a form of representation in which the parameters have a “physical” meaning. I’m thinking here of modeling something like the growth of aerosol particles in saturated air, in which the underlying theory would be diffusion and the model would include factors like particle radius, chemical compositions, …
