EXECUTIVES' PERCEPTUAL FILTERS:
WHAT THEY NOTICE AND HOW THEY MAKE SENSE
William H. Starbuck and Frances J. Milliken
Published in Donald Hambrick (ed.), The Executive Effect: Concepts and Methods for Studying Top Managers.
Management magazines, academic journals and textbooks almost always presume that analyses of past events reveal how those events actually unfolded. Such writings also frequently portray strategy formulation and implementation as a causal sequence, in which executives perceive some reality, analyze the options offered by this reality, decide to pursue one or more of these options, and obtain results when their organizations’ environments react. Thus, according to this causal sequence, organizations’ environments act as impartial evaluators of executives’ perceptions, analyses, and actions. When the results are good, executives receive credit for accurately perceiving opportunities or threats, for analyzing these correctly and for taking appropriate actions. When the results are bad, executives get blamed for perceiving erroneously, for making analytic mistakes, for taking inappropriate actions.
This chapter argues that, prevalent though they are, retrospective explanations of past events encourage academics to overstate the contributions of executives and the benefits of accurate perceptions or careful analyses. Because retrospective analyses oversimplify the connections between behaviors and outcomes, prescriptions derived from retrospective understanding may not help executives who are living amid current events. As Fischhoff (1980: 335) observed, “While the past entertains, ennobles and expands quite readily, it enlightens only with delicate coaxing”.
The chapter describes some of the influences on the perceptual filtering processes that executives use as they observe and try to understand their environments. It has four major sections. The first of these explains how retrospection distorts people’s understanding of their worlds by emphasizing one or the other of two logical sequences. The ensuing section characterizes perceptual filtering, and argues that filtering can provide a nonjudgmental framework for looking at past, present, and future events. The next-to-last and longest section reviews evidence about how filtering processes vary with executives’ characteristics – such as their habits, beliefs, experiences, and work settings. This review divides perception into noticing and sensemaking. The chapter ends by considering how a focus on perceptual filtering changes one’s understanding of the noticing and sensemaking tasks of executives. Noticing may be at least as important to effective problem solving as sensemaking: Sensemaking focuses on subtleties and interdependencies, whereas noticing picks up major events and gross trends.
“The French people are incapable of regicide.” -- Louis XVI, King of France
“The Army is the Indian’s best friend.” -- General George Armstrong Custer, 1870
“I don’t need bodyguards.” -- Jimmy Hoffa, June 1975
“Nobody can overthrow me. I have the support of 700,000 troops, all the workers, and most of the people. I have the power.” -- Mohammed Reza Pahlevi, Shah of Iran
Observers of the past can discern executives:
who drew erroneous inferences from their observations,
who sensibly diversified and spread their risks, or
who saw meaningful connections where everyone else saw unrelated events.
By contrast, observers of the present and future have less confidence that they can identify executives:
who are failing to anticipate important trends,
who are not effectively implementing strategies that might work, or
who are putting all of their eggs into the right baskets.
People seem to see past events as much more rationally ordered than current or future events, because retrospective sensemaking erases many of the causal sequences that complicate and obscure the present and future (Fischhoff, 1975; Fischhoff and Beyth, 1975; Greenwald, 1980; Hawkins and Hastie, 1956). The past seems to contain fewer of the sequences in which
the goodness or badness of results remains unclear, or
incorrect actions by executives yield good results, or
correct actions by executives yield bad results, or
executives’ actions have no significant effects on results, or
bad analyses by executives lead to correct actions, or
good analyses by executives lead to incorrect actions, or
analyses by executives do not significantly affect their actions, or
inaccurate perceptions by executives undermine good analyses, or
accurate perceptions by executives get lost in bad analyses, or
executives’ own perceptions exert no significant effects on their analyses.
For instance, in a study of a large government project, Ross and Staw (1986) noted that public declarations of commitment to the project became occasions for erasing information that had cast doubt upon it. Such erasing may occur quite involuntarily as people’s memories automatically take account of subsequent events (Fischhoff, 1975; Loftus, 1979; Snyder and Uranowitz, 1978; Wohlwill and Kohn, 1976). As Fischhoff (1980: 341) put it, “people not only tend to view what has happened as having been inevitable but also to view it as having appeared ‘relatively inevitable’ before it happened.”
Observers who know the results of actions tend to see two kinds of analytic sequences:
Good results → Correct actions → Flawless analyses → Accurate perceptions
Bad results → Incorrect actions → Flawed analyses → Inaccurate perceptions
Knowing, for example, that bad results occurred, observers search for the incorrect actions that produced these bad results; the actual results guide the observers toward relevant actions and help them to see what was wrong with these actions (Neisser, 1981). Knowing that actions were incorrect, observers seek flawed analyses; the incorrect actions point to specific analyses, and the actions’ incorrectness guarantees the presence of flaws in these analyses. Knowing which analyses contained flaws, observers look for inaccurate perceptions; observers inspect the perceptions that fed into the flawed analyses, feeling sure that some of these perceptions must have contained errors.
Thus, after the space shuttle Challenger exploded, a Presidential Commission searched for the human errors that caused this disaster. Physical evidence from the sea bottom, laboratory tests, and television tapes ruled out several initial hypotheses and focused attention on design flaws in the wall of a solid-rocket booster. Confident that mistakes had occurred in the decision processes that led NASA to continue using this booster, the Presidential Commission could then review these processes and identify the mistakes. The Commission did spot some data that should have been taken more seriously, some rules that should have been enforced more stringently, some opinions that should have been given more credence, some communication channels that should have been used, and some specific people who had played central roles in the faulty decision processes. Many of these same actions had occurred before previous flights -- the same rules had been bent, the same kinds of discussions had taken place, and the same communication channels had been ignored. But, after previous flights, no participant said these actions had been mistakes; and when inspectors noted defects in the solid-rocket boosters, NASA personnel concluded that these defects were not serious.
Retrospective perceivers are much more likely to see bad results, and hence mistaken actions and analyses, if they did not themselves play central roles in the events; and perceivers are much more likely to see good results if they did play central roles (Nisbett and Ross, 1980). Festinger, Riecken, and Schachter (1956) observed a religious cult whose members waited expectantly for a flying saucer to arrive and carry them off so that they would be safe when the world came to an end. At dawn, facing the fact that the expected events had not transpired, the cult members retreated in confusion and disappointment to their meeting house. But they soon emerged with revitalized faith, for they had realized that it was their unquestioning faith the night before that had convinced God to postpone Armageddon.
The two dominant analytic sequences not only simplify observers’ perceptions, they also put executives’ perceptions at the beginnings of causal sequences and imply that executives’ perceptual accuracy strongly influences their organizations’ results. For example, nearly all explanations of crisis, disaster, or organizational decline focus on how executives failed to spot major environmental threats or opportunities, failed to heed well-founded warnings, assessed risks improperly, or adhered to outdated goals and beliefs (Dunbar and Goldberg, 1978; Mitroff and Kilmann, 1984; Starbuck, Greve, and Hedberg, 1978). On the other hand, explanations of organizational success generally cite executives’ accurate visions, willingness to take wise risks or refusals to take foolish risks, commitments to well-conceived goals, or insightful persistence in the face of adversity (Bennis, 1983; Peters and Waterman, 1982). In foresight, however, one is hard pressed to distinguish accurate perceivers from inaccurate ones. Examples abound, of course. In 1979, Harding Lawrence, Chairman of the Board of Braniff Airlines, was hailed for “his aggressive response to deregulation ... another brilliant, strategic move that should put Braniff in splendid shape for the 80s” (Business Week, March 19, 1979); a few years later, industry experts were blaming Braniff’s bankruptcy on Lawrence’s overly aggressive response to deregulation. In 1972, James Galton, publisher of the Popular Library, told Macmillan that Richard Bach’s best-selling novel Jonathan Livingston Seagull “will never make it as a paperback”; Avon Books bought the rights and sold seven million copies in ten years. Immediately after World War II, IBM offered computers for sale, but looked upon this offer as merely a way to publicize the company’s participation in an avant-garde wartime project. At that time, Chairman of the Board Thomas J. Watson speculated, “I think there is a world market for about five computers.”
The two dominant kinds of analytic sequences also conform to norms about what ought to happen in a rational world:
Accurate perceptions ought to go with flawless analyses, and
flawless analyses ought to lead to correct actions, and
correct actions ought to yield good results.
This rationalization helps observers to understand their environments and it gives observers the comfort of knowing that their worlds are working as they should work. Unfortunately, such understanding can only exist after results have occurred and the results’ goodness or badness has become clear, because good and bad results may arise from very similar processes. For example, executives who insightfully persisted in the face of adversity may also have failed to heed well-founded warnings that nearly came true. Their research led Starbuck, Greve, and Hedberg (1978: 114) to conclude that “the processes which produce crises are substantially identical to the processes which produce successes”.
Of course, even hindsight usually leaves the past complex and ambiguous. Some results manifest themselves much later than others, and results have numerous dimensions that elicit different evaluations from observers who hold divergent values, so observers disagree about results’ goodness and see multiple and inconsistent interpretations of their causes. Retrospection only makes the past clearer than the present or future; it cannot make the past transparent. But the past’s clarity is usually artificial enough to mislead people who are living in the present and looking toward the future. In particular, retrospection wrongly implies that errors should have been anticipated and that good perceptions, good analyses, and good decisions will yield good results.
The present is itself substantially indeterminate because people can only apprehend the present by placing it in the context of the past and the future, and vice versa. Imagine, for instance, that this is …
One thing an intelligent executive does not need is totally accurate perception. Such perception would have no distortion whatever. Someone who perceived without any distortion would hear background noise as loudly as a voice or music, and so would be unable to use an outdoor telephone booth beside a noisy street, and would be driven crazy by the coughs and chair squeaks at symphony concerts. A completely accurate perceiver might find it so difficult to follow a baseball’s path that batting or catching would be out of the question. The processes that amplify some stimuli and attenuate others, thus distorting the raw data and focusing attention, are perceptual filters.
Effective perceptual filtering amplifies relevant information and attenuates irrelevant information, so that the relevant information comes into the perceptual foreground and the irrelevant information recedes into the background. The filtered information is less accurate but, if the filtering is effective, more understandable. People filter information quite instinctively: for example, a basketball player can shoot a foul shot against a turbulent backdrop of shouting people and waving hands, and a telephone user can hear and understand a quiet voice despite interference from street noise that is many times louder.
In complex environments, effective perceptual filtering requires detailed knowledge of the task environment. Systems engineers sometimes try to design filters that minimize the errors in perceived information – errors might include extraneous information, biases, noise, static, or interference between simultaneous messages (Sage, 1951). To design an error-minimizing filter for some task, an engineer would make assumptions about the possible sources of stimuli, and would distinguish relevant from irrelevant sources. An error-minimizing filter makes predictions about where errors are going to occur in perception and then either removes these errors or prevents them from occurring. In a task environment as complex as most real-life environments, an error-minimizing filter would incorporate numerous complex assumptions (Ashby, 1960), and for the filter actually to minimize perceptual error, these assumptions must be correct.
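To make the dependence on assumptions concrete, here is a minimal sketch of an error-minimizing filter. Everything in it (the moving-average design, the data, the noise pattern) is our illustration, not a design from the engineering literature the chapter cites; the point is that the filter reduces error only because its built-in assumption about the noise happens to be correct.

```python
# Illustrative sketch: a filter that "minimizes error" only under an
# assumption baked into its design -- that the relevant signal varies
# slowly, while the irrelevant noise is rapid and averages out to zero.

def smooth(readings, window=3):
    """Attenuate fast noise by averaging each reading with its recent neighbors."""
    out = []
    for i in range(len(readings)):
        lo = max(0, i - window + 1)
        chunk = readings[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A slow trend (the 'relevant' source) corrupted by fast, alternating noise.
trend = [10, 10, 10, 20, 20, 20]
noisy = [x + n for x, n in zip(trend, [3, -3, 3, -3, 3, -3])]
filtered = smooth(noisy)
# The filter recovers the trend only because its assumption (zero-mean,
# fast noise) is correct here; biased noise would pass straight through.
```

Were the environment to change so that the disturbances became slow and the signal fast, the same filter would amplify error rather than minimize it, which is the chapter's point about the fragility of complex filtering assumptions.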
In real life, people do not know all of the sources of stimuli, nor do they necessarily know how to distinguish relevant from irrelevant information. They must discover the characteristics of sources and tasks experimentally. Some combinations of tasks and task environments occur often enough and with enough consistency that people learn to make useful discriminations. For example, batters can practice tracking baseballs until they develop good models of baseball trajectories and learn to distinguish baseballs from other stimuli. Even though batters may move around from one ballpark to another, they encounter enough consistency that their learning transfers. Batters also see immediate consequences of their actions, so they get prompt feedback concerning the effectiveness of their perceptions.
Executives, by contrast, find it difficult to practice strategizing: Long lags may intervene between executives’ actions and the visible outcomes of those actions, and these outcomes have multiple causes; so executives lack clear feedback about the effectiveness of their perceptions and the relevance of information. Constant changes in their environments mean that executives’ knowledge grows rapidly obsolete and that they gain few benefits from practice. Executives’ experience may even be deceptive. Some research indicates that industries and strategic groups change bimodally: long periods of gradual incremental development get interrupted by occasional bursts of radical change (Astley, 1985; Astley and Fombrun, 1987; Fombrun and Starbuck, 1987; Tushman and Anderson, 1986). Therefore, executives’ learning mainly occurs during the periods of relative stability, but their strategic skills are mainly tested during the bursts of change. It is during the bursts of change that executives most need to act creatively rather than on the basis of experience and that perceptual errors may cause the greatest damage.
The theories about effective perceptual filtering assume that it occurs within perceivers, not their environments, and that the unfiltered stimuli are facts. However, perceivers are inseparable from their environments because each depends on the other, and perceptions can either validate or invalidate themselves when people act on their environments (Ittelson, Franck, and O’Hanlon, 1976). For example, people in hierarchies pay more attention to messages from their superiors than to ones from their subordinates, so they actually receive more information from their superiors even though more messages originate from their subordinates (Porter and Roberts, 1976). Another example was suggested by Hayek (1974), who argued that an emphasis on quantitative measures and mathematical models caused economists to develop mistaken beliefs about macroeconomic systems, and that these erroneous beliefs led them to formulate policies that produced unexpected consequences and actually made the macroeconomic systems less manageable and less self-correcting. In particular, at one time, economists generally taught that inflation and unemployment worked against each other: an economic policy could achieve full employment only if it suffered some inflation, or it could suppress inflation by maintaining a high level of unemployment. In the 1960s, economists moved into governmental policy-making positions and began creating institutions that were designed to control inflation while minimizing unemployment. One result, said Hayek, was an economy in which unemployment rises whenever inflation stops accelerating.
Perceivers who act on their environments need perceptual filters that take account of the malleability of task environments. Perceptual errors have smaller consequences in environments that resist change, and larger consequences in more malleable environments. But errors may yield benefits as well as costs – as when faulty perceptions lead people to pursue energetically goals that would look much less attainable if assessed in utter objectivity, but the pursuers’ enthusiasm, effort, and self-confidence bring success. Brunsson (1985) pointed out that actions are more likely to succeed if they are supported by strong commitments, firm expectations, and high motivation. He (1985: 27) said: “organizations have two problems: to choose the right thing to do, and to get it done. There are two kinds of rationality corresponding to the two problems: decision rationality and action rationality. Neither is superior to the other, but they serve different purposes and are based on different norms. The two kinds of rationality are difficult to pursue simultaneously, because rational decision-making procedures are irrational in an action perspective. They should be avoided if action is to be more easily achieved.”
It is generally impossible to decide, at the time of perception, whether perceptions will prove accurate or inaccurate, correct or incorrect, because perceptions are partly predictions that may change reality, because different perceptions may lead to similar actions, and because similar perceptions may lead to different actions. Many perceptual errors, perhaps the great majority, become erroneous only in retrospect. Even disagreement among people who are supposedly looking at a shared situation does not indicate that any of these divergent perceptions must be wrong. People may operate very effectively even though they characterize a shared situation quite differently, and people’s unique backgrounds may reveal to them distinct, but nevertheless accurate and valid, aspects of a complex reality.
Trying to learn from past errors oversimplifies the complexity and ambiguity of the task environments that people once faced, and it assumes that the future will closely resemble the past. Furthermore, although in the present, people can distinguish their perceptions from the alternative actions they are considering, people looking at the past find such distinctions very difficult because they revise their perceptions to fit the actions that actually occurred (Loftus, 1979; Neisser, 1981). Therefore, it makes sense to deemphasize errors and to analyze perception in a nonjudgmental, nonaccusatory framework. One way to do this is to emphasize filtering processes.
Table 1 identifies some filtering processes that may be important in understanding environmental scanning and strategy formulation (McArthur, 1981; Taylor and Crocker, 1981). All of these filtering processes distort the raw data that executives could perceive: In some situations, these distortions enable executives to operate more effectively in their environments by focusing attention on important, relevant stimuli; whereas in other situations, these same distortions make executives operate less effectively by focusing attention on unimportant, irrelevant stimuli.
Further, these types of filtering may persist over time and so characterize organizations or individual executives.
INFLUENCES UPON FILTERING PROCESSES
Executives who work in the same organization frequently disagree about the characteristics of that organization, and executives whose firms compete in the same industry may disagree strongly about the characteristics of that industry (Downey, Hellriegel, and Slocum, 1977; Duncan, 1972; Payne and Pugh, 1976; Starbuck, 1976). The stimuli that one executive receives may be precisely the same stimuli that another executive filters out. Furthermore, executives who notice the same stimuli may use different frameworks to interpret these stimuli and therefore disagree about meanings or causes or effects. Understanding how organizational and individual characteristics influence executives’ filtering processes may both help executives themselves to behave more effectively and help researchers to predict the types of filtering processes that executives use (Jervis, 1976).
The analyses to follow divide perception into noticing and sensemaking. This is admittedly a difficult distinction in practice because people notice stimuli and make sense of them at the same time, and each of these activities depends upon the other. For instance, what people notice becomes input to their sensemaking, and in turn, the sense that people have made appears to influence what the people notice (Goleman, 1985). Noticing involves a rudimentary form of sensemaking in that noticing requires distinguishing signal from noise, making crude separations of relevant from irrelevant. Similarly, sensemaking involves a form of noticing when a perceiver reclassifies remembered signal as noise, or remembered noise as signal, in order to fit a new interpretive framework.
Nevertheless, like others (Daft and Weick, 1984; Kiesler and Sproull, 1982), we believe a distinction between noticing and sensemaking sometimes exists and has theoretical value. For example, Daft and Weick (1984) distinguished scanning, a process that collects data, from interpretation, a process that gives meaning to data. Thus, Daft and Weick’s scanning corresponds to noticing, and their interpretation corresponds to sensemaking. We prefer the term noticing to scanning on the ground that scanning seems to imply formal and voluntary actions, whereas noticing may be quite informal and involuntary; and we prefer the term sensemaking to interpretation because sensemaking seems more self-explanatory.
Noticing: Where to look and what to see
The range of what we think and do
is limited by what we fail to notice.
And because we fail to notice
that we fail to notice
there is little we can do
until we notice
how failing to notice
shapes our thoughts and deeds.
R. D. Laing (Goleman, 1985: 24)
Noticing is an act of classifying stimuli as signals or noise. Noticing results from interactions of the characteristics of stimuli with the characteristics of perceivers. In particular, some stimuli are more available or more likely to attract attention than others (McArthur, 1981; Taylor and Crocker, 1981; Tversky and Kahneman, 1974). However, the characteristics of perceivers, including their current activities, strongly affect both the availabilities of stimuli and the abilities of stimuli to attract attention (Wohlwill and Kohn, 1976); even colorful or loud stimuli may be overlooked if people are used to them or are concentrating on critical tasks, and novel events or sudden changes may remain unseen if people are looking elsewhere. Furthermore, executives tend to have more control over their own behaviors than over the stimuli that interest them most. Therefore, we emphasize the characteristics of perceivers, either as individuals or as members of organizations, more than the characteristics of stimuli. Noticing is influenced by perceivers’ habits, their beliefs about what is, and their beliefs about what ought to be.
People classify stimuli by comparing them either to other immediately available stimuli or to standards arising from their experiences and expectations. Psychologists call the smallest differences that people can detect reliably “just-noticeable differences”, and they have devoted extensive research to ascertaining just-noticeable differences in laboratory settings.
In the 1830s, E. H. Weber studied people’s abilities to identify the heavier of two weights, and posited that just-noticeable differences are approximately constant percentages of the absolute magnitudes of stimuli. By this hypothesis, a just-noticeable difference in wealth, for example, would be some percentage of a person’s wealth, so a rich person would be much less likely to notice a $1 difference in wealth than a poor person. Studies indicate that Weber’s hypothesis describes hearing and vision accurately over a wide range of stimuli, but it describes touch, smell, and taste less accurately (Luce and Galanter, 1963). It does, nonetheless, suggest some ideas about executives’ perceptions. For instance, an executive in a volatile industry might be less likely to notice absolutely small changes in prices or sales volumes than an executive in a stable industry. Similarly, an executive in a small firm might tend to be more sensitive to the needs of an individual employee than an executive in a large firm.
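Weber's hypothesis can be sketched in a few lines. The threshold fraction k = 0.02 below is an arbitrary illustrative value, not a figure from the psychophysics studies cited; the wealth example mirrors the one in the text.

```python
# A hedged sketch of Weber's hypothesis: the just-noticeable difference
# (JND) is roughly a constant fraction k of the stimulus magnitude.
# k = 0.02 is an illustrative assumption, not an empirical value.

def is_noticeable(baseline, change, k=0.02):
    """A change registers only if it exceeds k times the baseline magnitude."""
    return abs(change) >= k * abs(baseline)

# The same $1,000 shift in wealth is salient to a modest saver
# but falls below a millionaire's perceptual threshold.
assert is_noticeable(baseline=10_000, change=1_000)          # 10% of wealth
assert not is_noticeable(baseline=10_000_000, change=1_000)  # 0.01% of wealth
```

The analogy to the volatile-industry executive is direct: a larger baseline of routine variation raises the absolute size of change required before anything is noticed at all.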
Psychologists’ studies of just-noticeable differences measure what people can perceive under controlled conditions, and these studies emphasize comparisons between simultaneous or nearly simultaneous stimuli. In real life, people not only compare stimuli to standards arising from nearly simultaneous stimuli, they also compare stimuli to standards evolved over long periods, and to models and enduring expectations about their environments. Furthermore, a yes-no characterization of noticing seems to misrepresent its subtlety and continuity. If need be, people can often recall stimuli that they had been unaware that they had noticed or that they had classified as background noise; this recall suggests that people perceive unconsciously or subliminally as well as consciously. You may, for instance, have had the experience of hearing a question but not quite hearing it, so you ask the questioner to repeat her question; but before she does so, you suddenly realize that you know the question and you answer it.
Bargh (1982) pointed out that people seem to have two modes of noticing, one of them controlled and volitional, and one automatic and involuntary. Although the two modes interact on occasion, they operate independently of each other most of the time. Bargh observed that people who are fully absorbed in performing tasks nevertheless notice it when someone speaks their names. Nielsen and Sarason (1981) found that someone can virtually always capture another person’s attention by speaking sexually explicit words.
The standards that determine what people notice in real life seem to be of several not-entirely-distinct types: People notice familiar and unfamiliar stimuli, as well as what they believe to be relevant, important, significant, desirable, or evil.
Looking for the Familiar or Overlooking the Familiar. Helson (1964) observed that perceptual thresholds reflect experience, but that adaptation both sensitizes and desensitizes. On the one hand, people grow less sensitive to stimuli as these stimuli become more familiar. For example, an executive who moves into a new industry would initially notice numerous phenomena, but many of these would fade into the background as the industry becomes more familiar. On the other hand, some sensory functions improve with practice (Gibson, 1953), and as people become more familiar with a domain of activity, they grow more sensitive to subtle changes within that domain (Schroder, Driver, and Streufert, 1967). Thus, an executive who moves into a new industry would initially overlook some phenomena that seem unimportant, but would gradually learn to notice those phenomena as experience clarifies their significance. Although these two processes produce opposite effects that may, in fact, counteract each other, they generally interact as complements: decreasing sensitivity pushes some stimuli into the background, while increasing sensitivity brings other stimuli to the foreground.
Helson also studied the relative effects of foreground and background events on what people do not notice. He (1964) argued that experience produces standards for distinguishing or evaluating stimuli, and he called these standards “adaptation levels”. People do not notice the stimuli that resemble adaptation levels, or they act as if they are indifferent to such stimuli. In studies of vision and weight-sensing, Helson found that the adaptation level associated with a sequence of alternating stimuli resembles an average in which the foreground stimuli receive around three times the weight of the interspersed background stimuli, implying that foreground events actually exert much more influence on not-noticing than background events do. Nonsimultaneity evidently helps perceivers to concentrate on foreground events and to deemphasize background events in cognition. On the other hand, Helson’s experiments showed that the adaptation level associated with a combination of simultaneous stimuli resembles an average in which the background stimuli have around three times the weight of the foreground stimuli, implying that simultaneous background events actually exert much more influence on not-noticing than foreground events do. Simultaneous background events impede perceivers’ abilities to concentrate on foreground events, and so the background events gain influence in cognition; where background events greatly outnumber foreground events, as in an ordinary photograph, the background events dominate the adaptations that determine not-noticing. An extrapolation might be that general, societal events occurring simultaneously with events in executives’ task environments affect the executives’ expectations about what is normal and unremarkable more strongly than do the specific, immediate events in their task environments.
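A rough numerical reading of Helson's roughly 3-to-1 weightings might look like this; the exact weights and the stimulus values are illustrative assumptions, not Helson's data.

```python
# Sketch of Helson's adaptation level as a weighted mean of stimulus
# magnitudes. The 3:1 weights are a rough reading of his findings;
# the stimulus values are invented for illustration.

def adaptation_level(foreground, background, simultaneous):
    """Weighted mean of stimuli: sequential presentation weights the
    foreground 3:1; simultaneous presentation reverses the weighting."""
    fw, bw = (1, 3) if simultaneous else (3, 1)
    total = fw * sum(foreground) + bw * sum(background)
    count = fw * len(foreground) + bw * len(background)
    return total / count

fg, bg = [60, 60], [20, 20]
sequential_level = adaptation_level(fg, bg, simultaneous=False)    # 50.0
simultaneous_level = adaptation_level(fg, bg, simultaneous=True)   # 30.0
```

Because stimuli near the adaptation level go unnoticed, the two presentation modes imply different blind spots: the sequential perceiver is numb near the foreground's magnitude, the simultaneous perceiver near the background's.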
Because simultaneous background stimuli strongly influence what people do not notice, people tend not to notice them. In particular, people tend to notice subtle changes in foreground stimuli while overlooking substantial changes in background stimuli, and so background stimuli may have to change dramatically to attract notice (McArthur, 1981; Normann, 1971). One reason is that familiarity enables people to develop programs and habits for noticing foreground stimuli, whereas they attend less systematically and consistently to background stimuli. Like experience, moreover, programs and habits may have complementary effects, in that they may also deaden sensitivity and convert foreground events into background events (Tuchman, 1973). Programs and habits make noticing less reflective, and by routinizing it, inject extraneous detail.
Lyles and Mitroff (1980: 116) found that “managers become aware of significant problems through informal sensing techniques”; they surmised that either “the formal reporting systems do not identify the relevant indicators or, more probably,... managers tend to ignore the indicators when they are formally reported”. Formalized information systems often try to make up for inflexibility by providing extensive detail, so they bog down in detail and operate slowly: Irrelevant detail becomes noise, and slow processing makes the data outdated. For instance, in the late 1950s, executives in the Tar Products Division of the Koppers Company decided that computers could provide inputs to operating decisions, so they purchased a computer-based system that produced weekly and monthly reports of the plants’ daily inventories and outputs. The collected data were voluminous but riddled with large errors, both unintentional data-entry errors and intentional misrepresentations of the plants’ situations. But personnel at divisional headquarters remained quite unaware of these errors because they paid almost no attention to the computer-generated reports. The headquarters personnel in production scheduling could not wait for the computer-generated data, so they maintained daily contact with the plants by telephone; the headquarters personnel in purchasing and sales looked mainly at annual totals because they bought and sold through annual or longer contracts.
Successful experience may tend to make foregrounds smaller and backgrounds larger. For instance, IBM, which dominated the mainframe computer business, virtually ignored the initial developments of minicomputers, microcomputers, and supercomputers; these developments were obviously foreground events for Digital Equipment, Apple, and Cray. Success gives individual people and organizations the confidence to build upon their experience by creating buffers, which insulate them from environmental variations, and programs, which automate and standardize their responses to environmental events. The buffers and programs identify certain stimuli as foreground events and they exclude other stimuli from consideration. Starbuck (1976: 1081) observed: “organizations tend to crystallize and preserve their existing states of knowledge whenever they set up systems to routinely collect, aggregate, and analyze information, whenever they identify and allocate areas of responsibility, whenever they hire people who possess particular skills and experiences, and whenever they plan long-range strategies and invest in capital goods.” Thus, organizational scanning systems formalize their members’ beliefs and expectations as procedures and structures that may be difficult to change.
However, like sexually explicit words, social and technological changes may make themselves difficult to ignore: IBM did eventually enter the minicomputer and microcomputer businesses.
Looking for What Matters. People also define foregrounds and backgrounds on the basis of their definitions of what phenomena are relevant, important, insignificant, desirable, or evil (Goleman, 1985). For instance, in recent years, two amateur astronomers, one Australian and one Japanese, who specialized in this activity, have spotted a great majority of the known comets; presumably professional astronomers assign lower value to comet spotting in comparison to alternative uses of their time. Rosenhan (1978) remarked that the staff members in psychiatric hospitals tend not to notice that patients are in fact behaving normally; possibly these staff members would notice the normal behaviors if they saw them outside of the hospital context.
Weick (1979) pointed out that people “enact” their environments; by this, he meant that people’s beliefs and expectations define what they regard as relevant, and so beliefs and expectations define what parts of task environments draw people’s notice. Deciding that certain markets and prices are worth scanning, executives may assign subordinates to monitor these markets and prices, thereby making them part of their organization’s foreground. But no organization can afford to monitor everything, so by not assigning subordinates to monitor other markets and prices, the executives are implicitly defining a background (Normann, 1971). For example, Mitroff and Kilmann (1984) argued that business firms overlook saboteurs, terrorists, and other “bizarre characters” in their environments because managers find it “unthinkable” that anyone might commit sabotage or terrorism against their firms. Another example occurred at the Facit company, a manufacturer of mechanical calculators: When Facit’s top managers wanted to assess the threat posed by electronic calculators, they instructed their salesmen to interview Facit’s customers about their attitudes toward this new technology. The salesmen continued to report that Facit’s customers almost unanimously preferred mechanical to electronic calculators ... even while the number of customers was plummeting.
Executives’ values about how their businesses should be run also influence their definitions of foregrounds and backgrounds. Donaldson and Lorsch (1983), for instance, commented that executives’ values about organizational self-sufficiency seem to influence their organizations’ actions; in particular, CEOs who believe that long-term debt indicates inadequate self-sufficiency tend to avoid strategies that require borrowing. One reason executives’ values have such effects on actions is that they influence what the executives and their organizations notice. Thus, executives who believe in the no-debt principle are likely to relegate potential lenders and financial markets to the background, and so remain uninformed about available loan terms or changes in interest rates.
The influence of executives’ definitions of what matters may gain strength through the uncoupling of executives’ decisions about what to observe from their subordinates’ acts of perception. This uncoupling and the asymmetry of hierarchical communications impair feedback, and so executives’ perceptions adapt sluggishly to the actual observations made by their subordinates. Organizations encourage the creation of formalized scanning programs, they assign specialists to monitor foreground events, and they discourage subordinates from reporting observations that fall outside their assigned responsibilities. Even when subordinates do notice events that have been formally classified as background, they tend not to report these upward, and superiors tend to ignore their subordinates’ messages (Dunbar and Goldberg, 1978; O’Reilly, 1983; Porter and Roberts, 1976). For instance, on the morning of the Challenger disaster, a crew was dispatched to inspect ice formation on the shuttle and launch pad: The crew noticed very low temperatures on the solid rocket boosters, but this observation was not relevant to their assignment, so they did not report it (Presidential Commission, 1986).
Sensemaking: What it means
“On the day following the 1956 election in which a Republican President and Democratic Congress were elected, two colleagues remarked to me that the voters were becoming more discriminating in splitting their ballots. But the two individuals did not mean the same thing by the remark, for one was a staunch Republican and the other a strong Democrat. The first referred with satisfaction to the election of a Republican President; and the second approved the election of a Democratic Congress.” – Harry Helson (1964: 36)
Daft and Weick (1984: 286) remarked: “Managers ... must wade into the ocean of events that surround the organization and actively try to make sense of them.” Sensemaking has many distinct aspects – comprehending, understanding, explaining, attributing, extrapolating, and predicting, at least. For example, understanding seems to precede explaining and to require less input; predicting may occur without either understanding or explaining; attributing is a form of explanation that assigns causes. Yet, concrete examples seem to illustrate the commonalities and interdependencies among these processes more than their differences.
What is common to these processes is that they involve placing stimuli into frameworks (or schemata) that make sense of the stimuli (Goleman, 1985). Some sensemaking frameworks seem to be simple and others complex; some appear to describe static states of affairs and others sequential procedures; some seem to delineate the boundaries between categories and others to describe central tendencies within categories; some seem more general and others more specific; some appear more abstract and others more concrete (Dutton and Jackson, 1987; Hastie, 1981). These sensemaking frameworks, like the frameworks for noticing, reflect habits, beliefs about what is, and beliefs about what ought to be.
Perceptual frameworks categorize data, assign likelihoods to data, hide data, and fill in missing data (Taylor and Crocker, 1981). At the least, frameworks often imply that certain data ought to exist or ought not to exist. Sherlock Holmes was, of course, remarkable for his ability to draw elaborate inferences from a few, seemingly trivial and incongruous clues. Nonfictional people, however, should contemplate the probabilities that the filled-in data may actually exist but not be seen until sought, or they may be seen in fantasy but not actually exist, or they may not actually exist until sought. Errors seen in retrospect often exemplify the latter. For instance, the faults in the space shuttle’s solid-rocket booster were only a few of a multitude of design characteristics. Before ...
Similarly, nonfictional people should allow for the probabilities that existing data may not be taken into account because they violate perceptual frameworks, or may be distorted badly to make them fit perceptual frameworks. In March 1961, shortly before the U.S. Central Intelligence Agency launched an invasion of the ...
In spite of their propensities for seeing what ought to exist, people do sometimes strive to see beyond their blind spots. They get opportunities to discover their blind spots when they observe incongruous events that do not make sense within their perceptual frameworks (McCall, 1977). Management by exception is an action strategy that depends on spotting situations in which current events are diverging from familiar patterns. Such observations may be either very disorienting or very revealing, or both. Incongruous events are disorienting as long as they make no sense, and they become revealing when they induce perceivers to adopt new frameworks that render them explicable. For instance, Starbuck (1988) and his colleagues set out to discover the abnormalities that cause a few organizations to run into serious, existence-threatening crises. But the researchers were inundated with examples, and so it gradually dawned on them that they were seeing normality because crises are common. Facing this incongruity brought the researchers to see that the causes of crises are essentially the same as the causes of success.
Watzlawick, Weakland, and Fisch (1974) said that all perceptual frameworks have blind spots that prevent people from solving some problems and that link behaviors into self-reinforcing cycles (Goleman, 1985; Masuch, 1985). To solve problems that blind spots have made unsolvable, people need new perceptual frameworks that portray the problematic situations differently. Watzlawick, Weakland, and Fisch proposed four basic strategies for reframing such problematic situations: (1) redefine undesirable elements to make them appear desirable, or vice versa, (2) relabel elements so that they acquire new meanings, (3) ignore elements that you cannot change, and (4) try overtly to achieve the opposite of what you want. For example, a man with a bad stammer had to work as a salesman, but this role heightened his worries about his speech defect and so made it worse. His psychotherapists advised him that potential customers generally distrust salesmen precisely because of their slick and clever spiels that go on insistently, whereas people listen carefully and patiently to someone with a speech handicap. Had he considered what an incredible advantage he actually had over other salesmen? Perhaps, they suggested, he should try to maintain a high level of stammering even after experience made him feel more at ease and his propensity to stammer abated.
Framing within The Familiar. Normann (1971) pointed out that people in organizations can understand and readily respond to events in the domains with which they interact frequently, but that they are likely to misapprehend events in unfamiliar domains, or to have difficulty generating responses to them. Different parts of organizations are familiar with different domains, and these domains both shape and get shaped by organizations’ political systems and task divisions:
Hierarchical, functional, geographic, and product differentiations affect the ways people interpret events, and these differences foster political struggles that interact with strategic choices and designs for organizational perception systems (Dearborn and Simon, 1958).
For instance, people with expertise in newer tasks tend to appear at the bottoms of hierarchies and to interpret events in terms of these newer tasks, and they welcome changes that will offer them promotion opportunities and bring their expertise to the fore. Conversely, people at the tops of organizational hierarchies tend to have expertise related to older and more stable tasks, they are prone to interpret events in terms of these tasks, and they favor strategies and personnel assignments that will keep these tasks central (Starbuck, 1983). Some research also suggests that people at the tops of organizational hierarchies tend to have simpler perceptual frameworks than their subordinates. One reason is that top executives have to span several domains of expertise, each of which looks complex to specialists (Schroder, Driver, and Streufert, 1967). Another reason is that top executives receive so much information from so many sources that they experience overloads (Ackoff, 1967; Hedberg, 1981). A third reason is that top executives receive much of their information through intermediaries, who filter it (Starbuck, 1985). Still another reason is that their spokesperson roles force top executives to put ideas and relationships into simply expressed terms (Axelrod, 1976; Hart, 1976, 1977).
Repeated successes or failures lead people to discount accidents as explanations, to look for explicit causes, and eventually to expect the successes or failures to continue. Repeated failures may lead people to view themselves as having weak influence over events and to blame the failures on external causes such as bosses, enemies, strangers, foreign competition, economic cycles, or technological trends (Langer and Roth, 1975). By contrast, repeated successes may cause people to see themselves or their close associates as having strong influence on events. NASA, for instance, experienced many years of successes in meeting technological challenges. It would appear that this made people at NASA grow confident that they could routinely overcome nearly insurmountable technological hurdles. They became used to the technologies with which they were working, and they gradually grew more complacent about technological problems as more flights worked out well. Thus, the NASA personnel came to see the space shuttle as an “operational” technology, meaning that it was safe enough to carry ordinary citizens such as Senators and school teachers.
Framing within the Expected. Expectations may come from extrapolating past events into the future. In a world that changes slowly and in which everyone else is deriving their expectations incrementally and so behaving incrementally, most of the time it is useful to formulate expectations incrementally oneself. It also appears that simple extrapolations generally work better than complex ones. Makridakis and Hibon (1979) tested 22 mechanistic forecasting techniques on 111 economic and business time series. They found that simple rules do surprisingly well in comparison to complex rules, and in particular, that the no-change rule
next period = this period
provides excellent forecasts. Another effective forecasting rule is a weighted average in which more recent events receive more weight. Simple rules extrapolate well because they ‘hedge’ forecasts towards recent experience and therefore they make fewer large errors. Complex forecasting rules amplify short-term fluctuations when they project and thus make large errors more often.
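The Makridakis–Hibon point can be sketched in a small simulation. The series below is made-up (a noisy, slowly drifting random walk standing in for an economic time series, not their 111 series), but it shows the mechanism: a rule that projects the most recent change forward amplifies short-term fluctuations, while the no-change rule hedges toward recent experience and so makes fewer large errors.

```python
import random

def no_change(history):
    # the no-change rule: next period = this period
    return history[-1]

def exp_smooth(history, alpha=0.3):
    # weighted average in which more recent events receive more weight
    forecast = history[0]
    for x in history[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

def trend(history):
    # a more "complex" rule: project the most recent change forward,
    # which amplifies short-term fluctuations
    return history[-1] + (history[-1] - history[-2])

random.seed(42)
level, series = 100.0, []
for _ in range(200):
    level += 0.1 + random.gauss(0, 5)   # small drift, large noise
    series.append(level)

def mean_abs_error(rule):
    # forecast each period from its history, compare with the outcome
    errs = [abs(rule(series[:t]) - series[t]) for t in range(10, len(series))]
    return sum(errs) / len(errs)

errors = {name: mean_abs_error(rule)
          for name, rule in [("no-change", no_change),
                             ("exp-smoothing", exp_smooth),
                             ("trend", trend)]}
for name, err in errors.items():
    print(f"{name:14s} mean abs error = {err:5.2f}")
```

On this kind of series the trend-projecting rule reliably shows the largest mean error, because each forecast chases the most recent random fluctuation.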
Because extrapolations normally deviate no more than incrementally from experience, they set up past-oriented perceptual frameworks that do not encourage innovation. Expectations, however, may also come from models and general orientations that are transmitted through socialization, communication, or education; and the expectations that come from transmitted models may differ radically from experience. For example, members of the religious cult observed by Festinger, Riecken, and Schachter (1956) obviously had no experience indicating that they would be carried off in a spaceship before the world came to an end. Thus, transmitted models have the capacity to generate expectations that correspond to other people’s experiences, that encourage innovation, that enable people to see problems from new perspectives, or that interrupt cycles of self-reinforcing behaviors. Unfortunately, transmitted models may be difficult to disconfirm. During World War II, the Germans developed a secret code that they believed to be unbreakable (Winterbotham, 1975); but the Allies broke the code early in the war, with the result that the Allied commanders often knew of the Germans’ plans even before the German field commanders heard them. The Germans never did discover that their code had been broken, partly because the Germans believed so strongly in their cryptographers, and partly because the Allies were careful to provide false explanations for their successes.
Stereotypes are categorical expectations, or central-tendency schemata. People transmit stereotypes to one another, and labels enable people to evoke stereotypes for one another; so to facilitate communication, organizations tend to foster labeling and to make labels more definitive. A stereotype may embody useful information that helps a person decide how to behave in new situations or with new people; an effective stereotype enables a person to act more appropriately than if the person had no information. On the other hand, a stereotype may caricature situations or people and become a self-fulfilling prophecy (Danet, 1981). At best, a stereotype describes the central tendency of a group of situations or a group of people, and specific instances deviate from this central tendency.
Fredrickson (1985) investigated the degrees to which the labels ‘problem’ or ‘opportunity’ may represent shared schemata. In strategic-management courses, he presented two strategy cases to MBA students and to upper-middle-level executives; the cases sometimes labeled the described situations as problems and other times as opportunities. He found that the labels correlated with differences in the problem-solving processes recommended by MBA students, but not by the executives. Fredrickson traced these differences to the MBA students’ extensive experience with teaching cases, as compared to the executives’ relative inexperience with them. It might appear that the strategic-management courses had taught the MBA students that these two labels denote stereotypes, but that executives at large had not been influenced strongly by the courses, do not hold these stereotypes, and do not assign these labels shared meanings. However, Jackson and Dutton (1987) found evidence that the labels ‘threats’ and ‘opportunities’ do evoke shared schemata among executives. Jackson and Dutton asked participants in an executive-development program to think of experiences that they would classify as opportunities or as threats, and then to characterize these experiences in terms of 55 attributes. The participants tended to agree that opportunities and threats differ in at least three attributes: opportunities are positive and controllable, and they offer possible gains; whereas threats are negative and uncontrollable, and they portend possible losses. The participants also perceived opportunities and threats to have seven attributes in common, including pressure to act, urgency, difficulty, and importance. Thus, problems, threats, and opportunities may possess more attributes that are similar than attributes that are different. Jackson and Dutton (1987: 34) observed: “These simple labels do not have simple meanings.”
Fredrickson’s study suggests that labeling a situation as a problem or threat might have no discernible effects on the way an executive perceives it; but Jackson and Dutton’s study suggests that labeling a situation as an opportunity might lead an executive to perceive it as more controllable and to notice its positive aspects, whereas labeling the situation as a threat might cause the executive to see its negative potentials and uncontrollable elements. However, both Fredrickson and ...

Rotter related controllability itself to noticing and sensemaking. After reviewing various studies, he (1966: 25) concluded that “the individual who has a strong belief that he can control his own destiny is likely to ... be more alert to those aspects of his environment which provide useful information for his behavior; ... [and to] place greater value on skill or achievement reinforcements and [be] generally more concerned with his ability, particularly his failures”. Rotter defined locus of control as “a generalized expectancy” about who or what controls rewards and punishments; supposedly, a person with an internal locus of control sees herself as exerting strong influence over outcomes, whereas one with an external locus believes she has weak influence. As one driver reported to an insurance company, “The telephone pole was approaching. I was attempting to swerve out of its way when it struck my front end.” However, research indicates that the behavioral consistencies associated with generalized personal characteristics, such as locus of control or dogmatism or cognitive complexity, are much smaller than the behavioral variations associated with different situations (Barker, 1958; Goldstein and Blackman, 1978; Mischel, 1968).
Framing within What Matters. “... a nuclear war could alleviate some of the factors leading to today’s ecological disturbances that are due to current high-population concentrations and heavy industrial production.” – Official of the ...

Beliefs about “what matters” not only define what phenomena are relevant, important, insignificant, desirable, or evil; they also influence sensemaking by determining the frames of reference that give meaning to phenomena (Jervis, 1976). These beliefs include both generalized images of how the world should be and more specific ideas about what organizations’ missions, structures, and strategies should be. For example, CEOs who believe that long-term debt indicates inadequate self-sufficiency may also tend to see borrowing as risky and burdensome, debt as constraining, and lenders as controlling (Donaldson and Lorsch, 1983). Similarly, in 1962, Joseph F. Cullman ...
Because values and norms differ considerably from one arena to another, perceivers may discover that the beliefs that guided them well in one arena take them astray in another. In 1985, for instance, the U.S. Department of Justice indicted General Dynamics Corporation and several of its current and former top executives. The prosecutors charged that one of General Dynamics’ defense contracts required it to absorb costs above a $39 million ceiling, but the company had fraudulently charged $3 million of the work on that project to two other contracts, one for general research and development and one for the preparation of bids and proposals. Eighteen months later, the Department of Justice withdrew these charges and explained that the contract in question was far more flexible than the prosecutors had originally surmised; in particular, the contract did allow General Dynamics to charge additional costs to other defense contracts. Evidently, the actual words of the contract were not at issue. Rather, the prosecutors had originally assumed that the contract provisions meant what they would mean outside the context of defense contracting, but the prosecutors later discovered that the language of defense contracts often appears to say something other than what it actually says. Thus, even though this contract appeared to set a ceiling on costs, both General Dynamics and the U.S. Department of Defense understood that costs above this ceiling would be paid through other channels.
Jönsson and Lundin (1977) found that organizational sensemaking frameworks evolve cyclically. They observed that Swedish industrial-development companies went through waves of enthusiasm that were punctuated by intermittent crises. Each wave was associated with a shared myth, or key idea, that appeared as the solution to a preceding crisis. A new myth would attract adherents, filter perceptions, guide expectations, provide rationalizations, and engender enthusiasm and wishful thinking; vigorous action would occur. But eventually, new problems would be seen arising, enthusiasm for the prevailing myth would wane, and alternative “ghost” myths would appear as explanations for what was wrong. As alternative perceptual frameworks, these ghost myths would highlight anomalies connected with the prevailing myth, cast doubt upon its relevance, and so accelerate unlearning of it. When enough anomalies accumulated, organization members would discard the prevailing myth and espouse one of the ghost myths.
LIVING WITH COMPLEXITY
A focus on filtering processes makes it clear that the past, present, and future intertwine inseparably. Familiar events, expectations, and beliefs about what matters form overlapping categories that cannot be cleanly distinguished from one another. Expectations about the future, for instance, grow out of past experience and simultaneously express models that verge very close to wishful thinking; these expectations also lead people to edit their past experience and to espouse goals that look feasible. Such complexity matches the worlds in which people live.
Simon (1957: 20) pointed out that “principles of administration ..., like proverbs, they occur in pairs. For almost every principle one can find an equally plausible and contradictory principle.” For instance, the injunction to minimize spans of control runs up against the injunction to minimize the number of hierarchical levels. Simon pointed out that if one tries to render one of these injunctions more valid by narrowing its scope, the injunction no longer solves so many problems. Hewitt and Hall (1973) remarked that societies offer numerous “quasi-theories,” which are widely accepted recipes that can explain observed events and that apply to very diverse situations. For example, the quasi-theory of time claims that events can be expected to occur at certain times, and so an occurrence can be explained by saying that its time has come, or a non-occurrence can be explained by saying that its time has not yet come. Edelman (1977: 5) argued that “In every culture people learn to explain chronic problems through alternative sets of assumptions that are inconsistent with one another; yet the contradictory formulas persist, rationalizing inconsistent public policies and inconsistent individual beliefs about the threats that are widely feared in everyday life.” He illustrated this point by asserting that everyone, whether poor or affluent, learns two contrasting characterizations of the poor, one as victims of exploitative institutions and one as independent actors responsible for their own plight and in need of control to compensate for their inadequacies. Westerlund and Sjöstrand (1979) identified numerous myths that organizations sometimes bring forth to frame problem analyses. Most of these myths occur in mutually contradictory pairs.
For example, the myth of organizational limitations states that an organization has boundaries that circumscribe its abilities; but this myth contradicts the one of the unlimited environment, which claims that the organization’s environment offers innumerable opportunities. Similarly, the fairy tale of optimization convinces organization members of their competence by asserting that the organization is acting optimally; but it contradicts the fairy tale of satisfaction, which says that satisfactory performances are good enough.
Fombrun and Starbuck (1987) explained that such contradictions are so prevalent because processes affecting social systems inevitably call forth antithetical processes having opposing effects. For instance, laws forbidding certain businesses make it highly profitable to engage in these businesses illegally, so law enforcement unintentionally fosters criminality. One ubiquitous antithetical effect is that the handicaps of individuals motivate the creation of compensating social supports, and another prevalent antithetical effect is that organizations’ strategic choices create opportunities for their competitors. Antithetical processes mean that social systems tend to remain stable, complex, and ambiguous. A process that tends to displace a social system from its current state gets offset by processes that tend to restore that current state; a process that tends to eliminate some characteristics gets offset by processes that tend to preserve the existing characteristics or to add new ones; and so the social system remains a complex mixture.
Facing such a world, realistic people have to have numerous sensemaking frameworks that contradict each other. These numerous frameworks create plentiful interpretive opportunities – if an initial framework fails, one can try its equally plausible converse, or try a framework that emphasizes different elements. Thus, meanings are generally cheap and easily found, except when people confront major tragedies such as divorces or the deaths of loved ones ... and even these often become “growth experiences”. People have confidence that they can eventually make sense of almost any situation because they can.
Of course, some sensemaking frameworks lead to more effective behaviors than others do, but the criteria of effectiveness are many and inconsistent, and perceivers usually can appraise effectiveness only in retrospect. The most accurate perceivers may be either ones who change their minds readily or ones who believe strongly enough to enact their beliefs, and the happiest perceivers may be the least accurate ones. The ambiguity and complexity of their worlds imply that perceivers may benefit by using multiple sensemaking frameworks to appraise events; but perceivers are more likely to act forcefully and effectively if they see things simply, and multiple frameworks may undermine organizations’ political structures (Brunsson, 1985; Wildavsky, 1972). Malleable worlds imply that perceivers may benefit by using frameworks that disclose opportunities to exert influence, but people who try to change their worlds often produce unintended results, even the opposite of what they intended. Perceivers who understand themselves and their environments should appreciate sensemaking frameworks that recognize the inevitability of distortions and that foster beneficial distortions, but such wise people should also doubt that they actually know what is good for themselves, and they should recognize that the most beneficial errors are often the most surprising ones. Fortunately, people seem to have a good deal of latitude for discretion. People investigate hypotheses from the viewpoint that they are correct, and as long as results can be interpreted within current frameworks, the frameworks need not change, or even be evaluated (Snyder, 1981). Further, sensemaking may or may not determine whether people respond appropriately to environmental events: sometimes people act first and then later make sense of the outcomes (Starbuck, 1983; Weick, 1983).
Because sensemaking is so elusive, noticing may be at least as important as sensemaking. Perhaps sensemaking and noticing interact as complements in effective problem solving: sensemaking focuses on subtleties and interdependencies, whereas noticing picks up major events and gross trends. Noticing determines whether people even consider responding to environmental events. If events are noticed, people make sense of them; and if events are not noticed, they are not available for sensemaking. Thus, it makes a great difference how foregrounds, backgrounds, and adaptation levels adjust to current stimuli and experience. Insofar as people can control these adjustments voluntarily, they can design noticing systems that respond to changes or ignore them, that emphasize some constituencies and deemphasize others, or that integrate many stimuli simultaneously or concentrate on a few stimuli at a time.
Acknowledgements. We thank Janice Beyer, Janet Dukerich, Jane Dutton, and Donald Hambrick for contributing helpful comments and suggestions.
Distortions in noticing (where to look and what to see)

Paying too much or too little attention to:
- stimuli with certain properties
- stimuli in certain environmental domains
- changes or regularities
- familiar or unusual stimuli
- expected or unexpected stimuli
- desirable or undesirable stimuli
- dramatic or undramatic stimuli

Letting some stimuli draw too much attention to themselves, and other stimuli evade attention

Distortions in sensemaking (what it means)

Distortions in framing:
- Perceiving or classifying events in the wrong frameworks (schemata)
- Applying existing frameworks in radically novel situations
- Perceiving illusory stimuli that fit evoked frameworks, or ignoring stimuli that violate evoked frameworks
- Assigning high credibility to stimuli that fit evoked frameworks, and discounting stimuli that violate evoked frameworks
- Assigning importance to covariations that fit evoked frameworks, and discounting covariations that violate evoked frameworks

Distortions in predicting:
- Forming expectations by applying the wrong frameworks (schemata)
- Amplifying good events and attenuating bad events
  - Underestimating or not seeing serious, imminent threats
  - Overestimating insignificant, remote, or doubtful opportunities
- Amplifying bad events and attenuating good events
  - Underestimating or not seeing significant, immediate opportunities
  - Overestimating minor, very distant, or improbable threats
- Underestimating or overestimating:
  - the effects of environmental changes
  - the ranges of environmental events likely to occur
  - the ranges of probable outcomes from proposed actions, policies, or strategies
  - risks associated with proposed actions, policies, or strategies

Distortions in causal attributions:
- Making attributions by applying the wrong frameworks (schemata)
- Attributing outcomes produced by many causes to only a few causes, or vice versa
- Amplifying or attenuating the influence of:
  - an organization’s actions
  - executives’ actions
- Failing to notice or allow for
- Perceiving uncontrollable events as controllable, or vice versa
Ackoff, Russell L.
1967 “Management misinformation systems.” Management Science, 14: B147-B156.
Ashby, W. Ross
1961 Design for a Brain (2nd edn.).
Astley, W. Graham
1985 “The two ecologies: Microevolutionary and macroevolutionary perspectives on organizational change.” Administrative Science Quarterly, 30: 224-241.
Astley, W. Graham, and Charles J. Fombrun
1987 “Organizational communities: An ecological perspective.”
In Samuel Bacharach and Nancy DiTomaso (eds.), Research in the Sociology of
Organizations, Vol. 5.
Axelrod, Robert M.
1976 “Results.” In Robert M. Axelrod
(ed.), Structure of Decision: The Cognitive Maps of Political Elites: 221-248.
Bargh, John A.
1982 “Attention and automaticity in the processing of self-relevant information.” Journal of Personality and Social Psychology, 43: 425-436.
Barker, Roger G.
1968 Ecological Psychology.
Bennis, Warren G.
1983 “The artform
of leadership.” In Suresh Srivastva
and Associates. The Executive Mind: New Insights on Managerial Thought
and Action: 15-24.
Brunsson, Nils
1985 The Irrational Organization: Irrationality as a Basis
for Organizational Action and Change.
Daft, Richard L., and Karl E. Weick
1984 “Toward a model of organizations as interpretation systems.” Academy of Management Review, 9: 284-295.
1981 “Client-organization relationships.”
In Paul C. Nystrom and William H. Starbuck (eds.). Handbook of Organizational Design, Vol. 2: 382-428.
Dearborn, DeWitt C., and Herbert A. Simon
1958 “Selective perception: A note on the departmental identifications of executives.” Sociometry, 21: 140-144.
Donaldson, Gordon, and Jay W. Lorsch
1983 Decision Making at the Top: The Shaping of Strategic
1977 “Individual characteristics as sources of perceived uncertainty.” Human Relations, 30: 161-174.
Dunbar, Roger L. M., and Walter H. Goldberg
1978 “Crisis development and strategic response in European corporations.” Journal of Business Administration, 9(2): 139-149.
Duncan, Robert B.
1972 “Characteristics of organizational environments and perceived environmental uncertainty.” Administrative Science Quarterly, 17: 313-327.
Dutton, Jane E., and Susan E. Jackson
1987 “Categorizing strategic issues: Links to organizational action.” Academy of Management Review, 12: 76-90.
Edelman, Murray
1977 Political Language: Words That Succeed and Policies That Fail.
Festinger, Leon, Henry W. Riecken, and Stanley Schachter
1956 When Prophecy Fails.
Fischhoff, Baruch
1975 “Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty.” Journal of Experimental Psychology: Human Perception and Performance, 1: 288-299.
1980 “For those condemned to study the past: Reflections on
historical judgment.” In R. A. Shweder and D. W. Fiske (eds.), New Directions for Methodology of Social and Behavioral Science.
Fischhoff, Baruch, and Ruth Beyth
1975 “I knew it would happen: Remembered probabilities of once-future things.” Organizational Behavior and Human Performance, 13: 1-16.
Fombrun, Charles J., and William H. Starbuck
1987 “Variations in the evolution of organizational ecology,”
Fredrickson, James W.
1985 “Effects of decision motive and organizational
performance level on strategic decision processes.”
Gibson, Eleanor J.
1953 “Improvement in perceptual judgments as a function of controlled practice or training.” Psychological Bulletin, 50: 401-431.
Goldstein, Kenneth M., and Sheldon Blackman
1978 Cognitive Style: Five Approaches and Relevant Research.
Goleman, Daniel
1985 Vital Lies, Simple Truths: The Psychology of Self-Deception.
Greenwald, Anthony G.
1980 “The totalitarian ego: Fabrication and revision of personal history.” American Psychologist, 35: 603-618.
Hart, Jeffrey A.
1976 “Comparative cognition: Politics of international
control of the oceans.” In Robert M. Axelrod (ed.), Structure of Decision: The
Cognitive Maps of Political Elites: 180-217.
1977 “Cognitive maps of three Latin American policy makers.” World Politics, 30: 115-140.
Hastie, Reid
1981 “Schematic principles in human memory.” In E. Tory Higgins, C. Peter Herman, and Mark P. Zanna (eds.), Social Cognition: The Ontario Symposium, Vol. 1.
Hawkins, Scott A., and Reid Hastie
1986 “Hindsight: Biased processing of past events in response
to outcome information.” Working paper,
Hayek, Friedrich A. von
1974 The Pretence of Knowledge.
Hedberg, Bo L. T.
1981 “How organizations learn and unlearn.”
In Paul C. Nystrom and William H. Starbuck (eds.),
Handbook of Organizational Design, Vol. 1: 3-27.
Helson, Harry
1964 Adaptation-Level Theory.
Hewitt, John P., and Peter M. Hall
1973 “Social problems, problematic situations, and quasi-theories.” American Sociological Review, 38: 367-374.
Ittelson, William H., Karen A. Franck, and Timothy J. O’Hanlon
1976 “The nature of environmental experience.”
Jackson, Susan E., and Jane E. Dutton
1987 “What do ‘threat’ and ‘opportunity’ mean? A complex
answer to a simple question.” Working paper,
Jervis, Robert
1976 Perception and Misperception in International Politics.
Jönsson, Sten A., and Rolf A. Lundin
1977 “Myths and wishful thinking as
management tools.” In Paul C. Nystrom and
William H. Starbuck (eds.), Prescriptive Models of Organizations: 157-170.
Kepner, Charles H., and Benjamin B. Tregoe
1965 The Rational Manager.
Kiesler, Charles A., and Lee S. Sproull
1982 “Managerial responses to changing environments: Perspectives on problem sensing from social cognition.” Administrative Science Quarterly, 27: 548-570.
Langer, Ellen J., and Jane Roth
1975 “Heads I win, tails it’s chance: The illusion of control as a function of the sequence of outcomes in a purely chance task.” Journal of Personality and Social Psychology, 32: 951-955.
Loftus, Elizabeth F.
1979 “The malleability of human memory.” American Scientist, 67: 312-320.
Luce, R. Duncan, and Eugene Galanter
1963 “Discrimination.” In R. Duncan Luce, Robert R. Bush, and Eugene Galanter (eds.), Handbook of Mathematical Psychology, Vol. 1.
Lyles, Marjorie A., and Ian I. Mitroff
1980 “Organizational problem formulation: An empirical study.” Administrative Science Quarterly, 25: 102-119.
McArthur, Leslie Zebrowitz
1981 “What grabs you? The role of attention in impression formation and causal
attribution.” In E. Tory Higgins, C. Peter Herman, and Mark P. Zanna (eds.), Social Cognition: The Ontario Symposium, Vol. 1.
McCall, Morgan W., Jr.
1977 “Making sense with nonsense: Helping frames of reference
clash.” In Paul C. Nystrom and William H. Starbuck (eds.), Prescriptive Models of Organizations: 111-123.
Maier, Norman R. F.
1963 Problem-solving Discussions and Conferences: Leadership
Methods and Skills.
Makridakis, Spyros, and Michèle Hibon
1979 “Accuracy of forecasting: An empirical investigation.” Journal of the Royal Statistical Society, Series A, 142: 97-145.
Masuch, Maria
1985 “Vicious circles in organizations.” Administrative Science Quarterly, 30: 14-33.
Meadows, Donella H., Dennis L. Meadows, Jørgen Randers, and William W. Behrens III
1972 The Limits to Growth.
Mischel, Walter
1968 Personality and Assessment.
Mitroff, Ian I., and Ralph H. Kilmann
1984 Corporate Tragedies: Product Tampering, Sabotage, and Other Catastrophes.
Neisser, Ulric
1981 “John Dean’s memory: A case study.” Cognition, 9: 1-22.
Nielsen, Steven L., and Irwin G. Sarason
1981 “Emotion, personality, and selective attention.” Journal of Personality and Social Psychology, 41: 945-960.
Nisbett, Richard E., and Lee Ross
1980 Human Inference: Strategies and Shortcomings of Social Judgment.
Normann, Richard
1971 “Organizational innovativeness: Product variation and reorientation.” Administrative Science Quarterly, 16: 203-215.
O’Reilly, Charles A., III
1983 “The use of information in organizational decision making: A model and some propositions.” In L. L. Cummings and Barry M. Staw (eds.), Research in Organizational Behavior, Vol. 5.
Payne, Roy L., and Derek S. Pugh
1976 “Organizational structure and
climate.” In Marvin D. Dunnette (ed.), Handbook
of Industrial and Organizational Psychology: 1125-1173.
Peters, Thomas J., and Robert H. Waterman, Jr.
1982 In Search of Excellence.
Porter, Lyman W., and Karlene H. Roberts
1976 “Communication in organizations.”
In Marvin D. Dunnette (ed.), Handbook of Industrial
and Organizational Psychology: 1553-1589.
Presidential Commission on the Space Shuttle Challenger Accident
1986 Report to the President.
Rosenhan, David L.
1978 “On being sane in insane places.”
In John M. Neale, Gerald C. Davison, and Kenneth P.
Price (eds.), Contemporary
Ross, Jerry, and Barry M. Staw
1986 “Expo 86: An escalation prototype.” Administrative Science Quarterly, 31: 274-297.
Rotter, Julian B.
1966 “Generalized expectancies for internal versus external control of reinforcement.” Psychological Monographs, 80(1): 1-28.
Sage, Andrew P.
1981 “Designs for optimal information filters.” In Paul C. Nystrom and William H. Starbuck (eds.), Handbook of
Organizational Design, Vol. 1: 105-121.
Schell, Jonathan
1982 The Fate of the Earth.
Schroder, Harold M., Michael J. Driver, and Siegfried Streufert
1967 Human Information Processing.
Simon, Herbert A.
1957 Administrative Behavior (2nd edn.).
Snyder, Mark
1981 “Seek, and ye shall find: Testing hypotheses about other people.” In E. Tory Higgins, C. Peter Herman, and Mark P. Zanna (eds.), Social Cognition: The Ontario Symposium, Vol. 1.
Snyder, Mark, and Seymour W. Uranowitz
1978 “Reconstructing the past: Some cognitive consequences of person perception.” Journal of Personality and Social Psychology, 36: 941-950.
Starbuck, William H.
1976 “Organizations and their
environments.” In Marvin D. Dunnette (ed.),
Handbook of Industrial and Organizational Psychology: 1069-1123.
1983 “Organizations as action generators.” American Sociological Review, 48: 91-102.
1985 “Acting first and thinking later: Theory versus reality
in strategic change.” In Johannes M. Pennings and
Associates, Organizational Strategy and Change: 336-372.
1988 “Why organizations run into crises ...
and sometimes survive them.” In Kenneth Laudon and Jon Turner (eds.), Information Technology and
Starbuck, William H., Arent Greve, and Bo L. T. Hedberg
1978 “Responding to crises.” Journal of Business Administration, 9(2): 111-137.
Taylor, Shelley E., and Jennifer Crocker
1981 “Schematic bases of social information processing.” In E. Tory Higgins, C. Peter Herman, and Mark P. Zanna (eds.), Social Cognition: The Ontario Symposium, Vol. 1.
Tuchman, Gaye
1973 “Making news by doing work: Routinizing the unexpected.” American Journal of Sociology, 79: 110-131.
Tushman, Michael L., and Philip Anderson
1986 “Technological discontinuities and organizational environments.” Administrative Science Quarterly, 31: 439-465.
Tversky, Amos, and Daniel Kahneman
1974 “Judgment under uncertainty: Heuristics and biases.” Science, 185: 1124-1131.
Watzlawick, Paul, John H. Weakland, and Richard Fisch
1974 Change: Principles of Problem Formation and Problem Resolution.
Weick, Karl E.
1979 The Social Psychology of Organizing (2nd edn.).
1983 “Managerial thought in the context of
action.” In Suresh Srivastva (ed.), The Executive Mind: 221-242.
Westerlund, Gunnar, and Sven-Erik Sjöstrand
1979 Organizational Myths.
Wildavsky, Aaron B.
1972 “The self-evaluating organization.” Public Administration Review, 32: 509-520.
Winterbotham, Frederick W.
1975 The Ultra Secret.
Wohlwill, Joachim F., and Imre Kohn
the environmental manifold.” In