
Insuring the Usefulness of Models in Forecasting Human Needs

by Donella H. Meadows & Jennifer M. Robinson
UNESCO Programme on Research and Human Needs
Tbilisi, 6–10 December 1981

Introduction

Mathematical modeling is already used for day-to-day, operational
decisions such as route scheduling and inventory management. But the
systems problems that most urgently need improved understanding and
management are strategic global and universal problems, such as
long-term global energy supply, the poverty and hunger of a large
portion of the world’s people, and threats to the biosphere.
Storing and processing all the information needed to address these
problems is clearly beyond the capacity of the unaided human mind.
Computer models could play a very useful role in understanding and
managing the world’s largest and most complex systems; many
models of those systems have already been made. But very few of them
have been used.

In this paper we discuss what modelers might do to make their work
more usable. Our suggestions are probably applicable to all levels of
application, from daily scheduling decisions to national policy
formulation, but we particularly aim to present guidelines that will be
helpful when the research objective is to help solve major global or
universal problems. We have abstracted our guidelines from many sources:
from the literature and conversational scuttlebutt of operations
researchers, from our own study of mathematical modeling in national
development policy (The Electronic Oracle, forthcoming in the IIASA
Wiley series), and from personal experience.

We begin by looking at the question, “What’s the problem? Why aren’t models more useful
for solving major systems problems?” and then work toward solutions: “What can be
done to make models more useful?” We start by posing and
evaluating six hypotheses about why models so often fail to be used for
major policy questions. We then describe a not-too-exaggerated
modeler-client relationship that pretty much guarantees that the
resulting model will go unused. Thereafter four general principles for
building useful models are presented, followed by seventeen specific guidelines
that may lead to more useful modeling. Then we conclude with a very
short sermon.

Six Hypotheses

Why does mathematical modeling so often fail to influence the world
of policy? One hears many explanations. Below we present six of the
reasons commonly given, overstating each to highlight its central
point.

Hypothesis 1: Policy-Makers Are Boors

Making mathematical models for politicians is casting pearls before
swine. Politicians do not respond to rationality, and they are often
systematically corrupt. Clients only hear what they want to hear, and if
you tell them anything else, you will be either ignored or fired. Even
well-meaning bureaucrats are locked into an overwhelming system of
nonfunctional subdivisions and short-term parochial interests. There is
no room in the political system for long-term, cross-disciplinary,
holistic thinking. The policy world is not ready for large-scale,
socioeconomic models.

Hypothesis 2: Modelers Are Clowns

Modelers get ignored because they deserve to be. They are starry-eyed
idealists who don’t understand how the world operates. They
create abstract monstrosities and describe them with big words, but they
seldom tell you anything new. Moreover, they lack discipline; they never
get things done on time and ignore contracts. If you must deal with
them, treat them like children. Don’t expect much of them; keep
them firmly in line; tell them exactly what is expected of them and make
them report back to you regularly. Audit their accounts carefully. Make
sure they are working on YOUR problem, and don’t let them hide
behind jargon. Even better, avoid them entirely.

Hypothesis 3: Supermodeler

A remarkable set of scientific and technical capabilities is required
to build a good computer model. A very different set of skills, in
communications and interpersonal relationships, is needed to convey the
information gained by modeling and to bring it into use. Not many people
have all these skills. If you, as a modeler, don’t happen to be
one of the super few, don’t be surprised if your results are not
brilliant. And as a client, realize that the only way to get a super
product is to hire a super modeler, if you can find one.

Hypothesis 4: Two Cultures

Modelers and clients live in different worlds, see different things,
and respond to different pressures. On both sides there are basically
honest and gifted people, trying to do a good job. Each side has useful
insights into policy. Each can benefit from working with the other. But
the difficulties of communicating between worlds must not be
underestimated. Without patience, empathy and careful listening, there
will be no communication. Each side should work at extending its
understanding of the other world.

Hypothesis 5: The Learning Curve

Computers have only been around for 40 years, and social scientists
have only been using them for a decade or so. Most modelers are
beginners, and they still make stupid mistakes while trying out new
things. Clients haven’t had time to learn what modeling can and
cannot do or to direct modelers to operate in a fashion that serves
client needs. Things will improve with time, but progress may take
decades. The situation calls for error tolerance and open minded
willingness to learn. And, especially, patience.

Hypothesis 6: No Problem

The main problem with modeling is that people are lying about what is
going on. The grand statements about revising policy and improving
understanding are just smokescreens to gain public acceptance and
funding. The basic motives in modeling are the personal and
institutional goals of modelers and clients, and these are being well
served by present modeling activities. Modelers get a chance to play
with their mathematical toys, to breathe the heady atmosphere of
political power, and to get tenure. Clients bask in the scientific
impressiveness of the computer, expand their budgets, and postpone
action by saying the issue is being studied. Both modelers and
policy-makers get exactly what they want from the modeling process. So
far all they have wanted was to make a model. When they decide to solve
real problems, they will be able to.

Which hypothesis is true? We think they all are…partially.
There are boorish clients, and clownish modelers. There are gulfs
between the two worlds and lessons to learn. Many models are initiated
with no clear intent to change the world in any way. And a few modelers
are definitely super, while the rest of us are mere mortals.

Although there is some evidence to support each hypothesis, and no
easy means to disprove any of them, we can dismiss some as barren and
unhelpful. Consider, for example, the intrinsic unworkability of a
relationship between a modeler operating under the hypothesis that
clients are boors and a client operating under the hypothesis that
modelers are clowns. If a modeling project ever got started, it might
proceed something like this:

Modeler: Here I am, ready to bring the light of Systems Analysis to
the beclouded, dismal world of Policy.

Client: Here comes another pointy-headed modeler.

Modeler: I am going to make a model to show you why policy X is
best.

Client: I already like policy X. Perhaps you could make a model to
sell policy X to agency Y. They’re always blocking my work and
they like this computer stuff.

Modeler: This guy is a jerk, but I really need a contract to keep my
staff together.

Client: I need all the help I can get pushing policy X.

Modeler: I will need about 20 man-years of work, 6,000 hours of CPU
time, and $100,000 for travel and overhead.

Client: Well, at least that will justify my budget increase request.
Make sure to spend it all in the coming fiscal year.

Modeler: Now I am going off to make the model. This part is beyond
you, so I’ll just bring the report to you in January.

Client: Good, don’t bother me until then.

(time passes)

Modeler: Sorry for the delay. We had some software problems, and we
had to wait for the new quarterly economic accounts, and, to tell the
truth, the model didn’t compile until last Thursday.

Client: What did you say your name was?

Modeler: Notice that in this case it has been necessary to impose
some restrictions on the market response, which has forced the aggregate
value of production to differ significantly from its full equilibrium
value along the unrestricted response surface.

Client: Getting plain language from a modeler is like getting blood
from a turnip.

Modeler: The upshot of all this is that policy X is not as beneficial
as we thought. In fact, it’s counterproductive.

Client: Garbage in, garbage out.

Modeler: If they were going to ignore us why did they hire us in the
first place?

Client: I knew this would never work. Why did I hire them in the
first place?

Modeler: Policy makers are impossible boors.

Client: What a crew of irresponsible clowns.

And so both initial hypotheses are reinforced. The only result of the
effort is that both modeler and client get to be right about each
other’s deficiencies. Policymakers as Boors and Modelers as
Clowns can be eliminated as useful hypotheses.

The Supermodeler theory also has some operational problems:

  • It feeds elitism.
  • There is no good selection mechanism. How do you identify a
    supermodeler?
  • It leaves no opening for improvement, no hope that modelers or
    clients can do anything to transform their so-so daily efforts into
    super work.

This leaves the Two Cultures, the Learning Curve and the No Problem
hypotheses standing. They leave open the possibility for learning and
suggest that effective modeling requires human as well as technical
skills. The No Problem hypothesis, when it is not stated cynically,
encourages us to set our sights higher and discover the capabilities we
already have. Most important, all three theories remind us that good
modeling, like good policy-making and indeed any other creative and
useful human activity, is an ongoing, ever-changing experimental
process…not a mysterious knack that “either you got it or you
don’t”.

Meta-Rules for Excellence

How, then, does one manage an on-going, experimental process so that
it yields applicable results as well as learning for the modeler and the
client? What does one do to increase the probability of a model being
useful and being used?

Below we pose, first, four general principles and then seventeen
specific guidelines. These principles and guidelines do not come backed
by any guarantee. Putting a model to work in the world is, like every
other mechanism of social change, a matter of human communication and
personal relationships. The very nature of the exercise leads
immediately to our first and most important principle: THERE ARE NO
RULES. Or more accurately, there are rules on a higher level; not
instructions about what to do, but instructions about how to be in order
to figure out for any given situation what to do.

  1. Do what is appropriate: No guideline will fit all situations,
    and the practices most useful in one modeling activity may be
    counterproductive in the next. Realistic, functional behavior comes from
    awareness of the current situation, not from a formula. Don’t let
    habit or reflex take over your work. Be mindful, not mechanical.
  2. Take responsibility for implementation: Plan and act to have your
    work used. Implementation is not something to be grafted on once the
    modeling is done, nor is someone else going to do it for you. Means for
    implementation (such as those described below) should be designed into
    the project from the beginning.
  3. Respect all parties to the relationship: Anyone in a position to
    commission a model or to make one can be assumed to have sufficient
    intelligence, sincerity, and survival skills to be worthy of respect. If
    that respect is not cultivated, and if either client or modeler treats
    the other as a fool, an object or a convenience rather than as a person
    and a partner, there is little chance that an appropriate model will be
    developed, or that, if it is developed, it will be used.
  4. Support the needs of all parties: Modelers and clients operate in a
    world buzzing with short-term needs and pressures from jobs, families,
    bosses, students, budgets, secretaries, degrees, publications
    requirements and telephone bills. Responding to these needs is not a
    luxury, but a necessary part of doing business. If the task creates
    distracting short-term difficulties, if it lacks personal rewards, or if
    it poses personal risks, it is likely not to get done, or at least not
    done well. Personal and institutional needs should be expressed and
    empathized with, not hidden or sneered at.

Seventeen Guidelines

The following suggestions are a grab-bag of specific practices that
we and others have found to have a positive influence on model
usefulness, sometimes, under some conditions. They are posed as separate
but parallel recommendations for modeler and client, under the
assumption that there is a clear client. Where the client is not
directly involved, or is a diffuse group of people, the modeler’s
job is much harder. He or she must either personally summon the energy
to step out of the modeler’s perspective and assume the guiding,
critical role that these guidelines presume the client can fill, or
adopt a wise, knowledgeable person or persons to fill in the gaps left
by the absence of a true client. Not all suggestions are appropriate to
all cases, and all should be freely used or discarded, as warranted by
the situation.

1. Modeler: Don’t be hungry.
   Client: Don’t keep modelers hungry.

A modeler motivated by the desire to have a job, rather than the
desire to do a job well, is unlikely to make the slow and careful effort
needed to produce a usable model. Life is tough when you have to
maintain all your resources (excellent modelers, other staff, physical
facilities) with short-term, uncertain funding. When modelers see
themselves in this position they think more about funding than about the
job to be done. As a modeler, don’t allow yourself to be put in
this position (a polite way of saying adjust your grandiose ideas about
the style in which you must be supported). As a client, do what you can
to relieve the modeler’s mind from worrying about where the next
contract is going to come from.

2. Modeler: Beware of clients who want you to prove their point.
   Client: Beware of modelers who are out to prove their point.

If client and modeler happen to want to prove the same point, they
may get along famously, but the result will be polemics, not analysis.
If the points are different, communication will break down rapidly
unless both parties can see the difference of opinion as an opportunity
for learning (and care more about learning than about being right).

3. Modeler: Insist on a clear problem definition.
   Client: Deliver a clear problem definition.

Probably the single most important cause of modeling failure is the
tendency to model the whole system instead of a specific problem.
Models, to be useful, must simplify and clarify reality. If they are as
incomprehensible and intractable as the real system, they do not help.
To simplify and still address real problems, one must clearly define
what questions and what real problems the study is addressing and not
addressing. Keep talking and probing until the real questions and
problems become clear. Don’t rush on to modeling before a clear
goal is established.

4. Modeler: Find the client who can influence the system.
   Client: Make the model that is actually useful for the system you influence.

Modelers, avoid clients who lack the will and the power to affect the
problem situation. Don’t design a model to explore options that
are not available to the client. Build support up, down and across the
hierarchy to eliminate internal obstacles to implementation and protect
against bureaucratic mobility (many modeling studies are undermined when
their champion in government is transferred suddenly to Botswana).
Clients, hire modelers to explore options that are truly open to
you…and stick with the project from beginning to end.

5. Modeler: Take time to define the job completely and precisely.
   Client: Take time to define the job completely and precisely.

During the period when the problem is being defined and operational
procedures are being worked out there is strong pressure to get on to
modeling to show that something is being accomplished. Resist this
pressure. Project definition, manpower requirements, documentation
needs, deadlines, and channels of communication should be thought
through carefully beforehand on the basis of realistic understanding of
the capabilities, goals and constraints of modelers and clients.
Careless and hasty project design often results in sloppy work.

6. Modeler: Make sure the problem is important to the client(s).
   Client: Don’t accept a trivial problem definition.

A job not worth doing at all is not worth doing well. There is a
terrible tendency to do what you think is feasible instead of what you
really want or need to have done. If you go that route, you will not
only lose interest in the project, you will never find out what is
feasible.

7. Modeler: Make solving the problem the goal, not building the model.
   Client: Don’t permit modelers to forget the problem.

Modelers tend to get carried away from the real problem and drift
toward where the data are better or the computations more congenial. To
avoid the tendency to drift, both modelers and clients should keep
focused on the study’s problem definition and keep their actions
directed toward meeting the study’s goals (which is why problem
definition is so critical).

8. Modeler: Match the method to the problem, not vice versa.
   Client: Employ modelers whose tools suit the problem of interest.

Most modelers are specialists in particular methods and go around
fitting the world to their favorite kind of matrix, optimization routine
or spaghetti diagram. Applied to appropriate problems, mathematical
techniques are powerful conceptual aids. When misapplied, they can be as
distorting as fun-house mirrors. Modelers should understand the
strengths and limitations of their own methods and be self critical.
Clients should learn enough about modeling methods to be intelligent
critics of the modeler’s choice and should keep an eye on the
modeler to make sure he or she is not distorting the problem to fit some
preferred method.

9. Modeler: State your biases openly and be aware of client biases.
   Client: State your biases openly and be aware of modeler biases.

It is no problem to go into a study with a bias; it is impossible not
to. It can be a great problem to hide or deny your biases and feign
objectivity. It can be an asset to be aware of your biases and to seek
actively evidence that counters them.

10. Modeler: Experience the system.
    Client: Share your knowledge of the system with the modeler.

The essence of a system is often invisible to outsiders (including
modelers). Many details are stored in the minds and files of its
operators. Ability to represent a system accurately in a model is
enhanced by a tangible sense of what is there, how it looks, and how
those involved feel. Make on-site visits and talk with people. Get a
physical, hands-on sense of how things work. Participate wherever you
can. Modelers, listen to the client, who usually has a good idea of what
is connected to what. Clients, follow the model and be sure it reflects
what you know.

11. Modeler: Involve clients in the modeling process.
    Client: Allow time to understand and guide the modeling process.

Client involvement can help both in keeping model development in line
with client needs and in creating a model that the client understands,
identifies with, and will use. Experienced consultants state that the
single most important step toward implementation is the interested
participation of the client in the entire modeling process.

12. Modeler: Have a rough model up and running quickly (within one month).
    Client: Have a prototype built before going ahead with a large model.

Building a rough, quick model helps to prevent small problems from
becoming large ones, and assures that what the modeler builds is what
the client wants and expects. The initial model can be very crude and
need not have accurate numbers or detailed disaggregation. It should be
flexible and exploratory. You will never get more than a rough
approximation of reality in a model anyway; don’t insist that the
first cut be more than a basic sketch. But do have a basic sketch that
delineates the whole structure and shows how the pieces will fit
together.

13. Modeler: Use a level of detail appropriate to the problem.
    Client: Use a level of detail appropriate to the problem.

Many clients, perhaps because they know the system details
intimately, ask for detail-rich models. Many modelers are good at
disaggregating and are pleased to respond to client pressure. And many
models end up so full of detail that they are untestable,
undocumentable, incomprehensible, and useless. Avoid unnecessary
clutter. The appropriate degree of detail for the problem may not be the
one that the client, the modeler, or the data-collectors have ever
thought of before.

14. Modeler: Design the model to reflect the client’s indices of system performance and model validity.
    Client: Specify what indices you want and what tests the model should pass to convince you.

Do the clients relate better to statistical measures, graphical
output, time trends, tabular measures, or maps? Do they care about the
rate of return, the energy per capita, or the condition of the resource
base? Are they more concerned that the model be able to reproduce
history, that it have a good R-squared, or that its parts and general behavior
correspond to their knowledge of the system? Model design should respond
to client concerns on such subjects.

15. Modeler: Describe the model in terms the policy-maker understands.
    Client: Don’t accept model descriptions that you don’t understand.

The modeler who can’t explain his work often does not
understand it himself. Modelers, don’t try to sell your client
obscurantist gobbledygook; and clients, don’t accept work that is
not completely understandable to you.

16. Modeler: Design policy recommendations with a clear understanding of real constraints and opportunities.
    Client: Don’t underestimate possibilities for change.

Modelers must remain aware that many things (such as price setting,
creating new jobs, setting a rate of national income growth) are much
easier to simulate than to do. Clients, on the other hand, can often
change more than they think they can.

17. Modeler: Stay with the job until the model implementation is done, not just until the model is finished.
    Client: Use the modeler for implementation as well as modeling.

Bringing about change in society is hard work; it requires patient,
persistent, repetitive explanation. And as model findings are explained
new questions will arise and new tests will be thought of. The modeler
should remain available, if the need arises, to explain results, to test
options, or to discuss why the real situation is not behaving as he
predicted it would.

Conclusion

We have based this entire discussion on three crucial
assumptions:

  1. Although many computer models of complex social systems are not
    useful, and many that are useful are not used, models could be made in
    such a way that they regularly, constructively, unquestionably
    contribute to and improve social policy.
  2. The keys to model usefulness are the personal characteristics,
    attitudes, and behavior of modelers and clients, not better technical
    skills, data, or computers. (These things are important, but they
    already receive so much time and attention that they are no longer
    limiting factors in implementation). An analysis that goes to the heart
    of a system problem and that is compelling enough to induce people to
    act upon it requires an analyst who is a responsive, intelligent,
    organized, empathetic, and inspired human being. It requires good
judgment. It requires a clear vision of the purpose of the model and
    the nature of the system being modeled. It requires all concerned to
    stop acting like boors and clowns and to relate to others with
    sufficient honesty, clarity, and purposefulness to bridge the gaps
    between people and their worlds. In short, it requires a Supermodeler.
    And it helps to be working for a Superclient.
  3. Anybody with the patience and intelligence to build a computer model
    and with some emotional concern about large-scale social problems can be
    a Supermodeler. No principle or guideline listed in this paper is
    impossible for any modeler we know to follow. Nor is it impossible to
    figure out which ones to follow when. If implementation failure has been
    common in large-scale modeling efforts, it has been because modelers and
    clients have not chosen to bring out of themselves the qualities needed
    to make the effort work, not because the qualities were not there.

So our own preference is to combine two basically cynical
hypotheses—Supermodeler and No Problem—each of which takes
on a new positive twist when combined with the other. Perhaps we should
give the combination a new name, the You-Can-Do-It hypothesis. You have
to call on extraordinary capabilities to make models that have real,
lasting, constructive impact on the world.

Most of the modelers we know are quite willing to accept that they
can do anything technical. They will embark on ambitious mathematical
challenges or complicated model designs with hardly a qualm. But they
automatically resist the suggestion that they have the patience,
judgment and persuasiveness to interact effectively with policymakers,
especially on important, large-scale problems. They have not been
trained or encouraged to develop those skills, and they have rarely been
given an opportunity to test them. We believe that that is why
social-system models are so seldom useful or used.

Of course there is no way to prove this (or any other) hypothesis.
The You-Can-Do-It hypothesis is probably so tautological that there is
also no way to disprove it. But there is also the possibility that, like
the theories about Boors and Clowns, the hypothesis generates its own
reality. Certainly of all the theories we have discussed, it is the only
one that does not contain a built-in reason for doing models badly.
(“Sorry that didn’t work out, but the client was
corrupt/we’re not far enough on the learning curve/the gap
between the two worlds was too great/etc.”) If you can’t
choose your operating beliefs by their demonstrable truths, you can
always choose them by what they allow you to achieve or create.

If you feel I have been reading a sermon, you are right. A sermon is
vehemently-delivered advice, generally given by someone who badly needs
to follow that advice him- or herself. The advice is not really
surprising. Its concepts are familiar to both preacher and preachee, but
easily forgotten and deserving of repetition, affirmation, and
reinforcement. The sermon itself does nothing and changes no one. But by
delivering it and receiving it, both the preacher and the listeners may
find the internal inspiration to go out and practice what they already
know they can and should do.
