American Sociological Review 1983, Vol. 48 (February: 91-102)

 

ORGANIZATIONS AS ACTION GENERATORS*

 

WILLIAM H. STARBUCK

 

Most of the time, organizations generate actions unreflectively and nonadaptively. To justify their actions, organizations create problems, successes, threats, and opportunities. These are ideological molecules that mix values, goals, expectations, perceptions, theories, plans, and symbols. The molecules form while people are result watching, guided by the beliefs that they should judge results good or bad, look for the causes of results, and propose needs for action. Because organizations modify their behavior programs mainly in small increments that make sense to top managers, they change too little and inappropriately, and nearly all organizations disappear within a few years.

 

Managers, management scientists, and organization theorists generally assert that organizations are, and ought to be, problem solvers. Problem solving is activity that starts with perception of a problem. Although often equated with decision making, problem solving is defined by its origin, whereas decision making is defined by its ending — a decision. Problem solving can stop without decisions having been made, if problem solvers can find no solutions or if problems just disappear. Some analysts have reported that decision making usually starts before decision makers perceive problems (Mintzberg et al., 1976). Many decisions lead to no actions, yet they may solve problems; many decisions may be imputed in hindsight (Weick, 1979); and many actions occur without anyone thinking they solve explicated problems.

Problem solving involves repetitive cycles of activity. A seminal study remarked that “the ‘problem’ to be solved was in fact a whole series of 'nested' problems, each alternative solution to a problem at one level leading to a new set of problems at the next level” (Cyert et al., 1956:247). However, that study ended when a committee voted for an action, as if this vote ended the process. Subsequent studies have portrayed decisions as endings, but they have not insisted that all decision processes begin with problems. Cyert and March (1963:121), for example, noted that problems may be excuses:

Solutions are also motivated to search for problems. Pet projects (e.g., cost savings in someone else’s department, expansion in our own department) look for crises (e.g., failure to achieve the profit goal, innovation by a competitor).

Cohen et al. (1972) argued that decisions result from interactions among streams of problems, potential actions, participants, and choice opportunities. When a choice opportunity arises, participants bring up pet problems and propose actions unrelated to any visible problems, so the choice opportunity comes to resemble a garbage can filled with unrelated problems and potential actions. Participants may perceive a decision (a) when an action is taken, even if this action solves no problems in the garbage can; (b) when a problem is removed from the garbage can, even if no action has been taken to cause its removal; or (c) when an action is mated with a problem and called a solution. Cohen et al. asserted that events (a) and (b) predominate and event (c) occurs infrequently.

This article backtracks the trail blazed by Cohen et al., and then sets off in a different direction. The backtracking occurs because the garbage-can model understates cause-effect attributions, de-emphasizes the activities preceding decisions, and ignores the activities following decisions. When Cohen et al. claimed that decisions infrequently mate problems with solutions, they were letting the participants judge whether decisions occur, whereas they themselves were judging whether actions solve problems. Participants generally think actions do promise to solve problems; most problems are generated or remodeled to justify intended actions. Participants also see logic in problem-solving activities despite their disorganization, and participants react to actions’ results.

Organizations’ activities fall into at least two modes: a problem-solving mode in which perceived problems motivate searches for solutions, and an action-generating mode in which action taking motivates the invention of problems to justify the actions. The problem-solving mode seems to describe a very small percentage of the activity sequences that occur, and the action-generating mode a large percentage.

The view I propose both decomposes and generalizes. The phenomena others have called problems are separated into four concepts: symptoms, causes of symptoms, needs for action, and problems. Actions are distinguished from needs for action and solutions. At the same time, opportunities, threats, and successes strongly resemble problems.

Although this view integrates ideas from many sources, two especially influential ancestors are Hewitt and Hall (1973). They pointed out that people collectively appraise a shared problematic situation by talking in stylized language. The appraisal talk continues until participants agree on a cure. Then the participants generate a core problem that the agreed cure will solve. The next step is to build a theory relating the core problem to its cure; theory building is iterative and includes tests of the theory against past events and concocted examples. The theory (a) defines the essential, real elements of the core problem and excludes peripheral, illusory elements, (b) explains why the core problem arose and how the agreed cure will solve it, (c) generalizes to numerous situations so that the stimulus situation becomes a specific instance, and (d) founds itself on widely accepted, societal ideologies.

 

PROGRAMS AS ACTION GENERATORS

 

Case studies of organizations facing crises (Nystrom et al., 1976; Starbuck et al., 1978) teach several lessons — among them, that normal organizations may manufacture crises for themselves by choosing to inhabit stagnating environments. The organizations do not foresee the results of their actions: they misperceive environmental opportunities and threats, impose imagined constraints on themselves, and expect rational analyses to produce good strategies. Organizations create behavior programs to repeat their successes, but these programs turn into some of the main causes of crises. Programs focus perceptions on events their creators believe important, so the programs blind organizations to other events that often turn out to be more important. Within the frames of reference created by and inherent in programs, they appear to be working well. However, evaluation data are biased, and programs are not updated as rapidly as they should be.

For example, Facit AB grew large and profitable while making and selling business machines and office furnishings (Starbuck and Hedberg, 1977). Although Facit made many products, the top managers believed the key product line to be mechanical calculators: they saw products such as typewriters, desks, and computers as peripheral. In fact, the top managers declined to authorize production of computers and electronic calculators designed by a subsidiary. Facit concentrated on improving the quality and lowering the costs of mechanical calculators, and it created behavior programs to facilitate production and sale of mechanical calculators. Technological change was seen as slow, incremental, and controllable. In the mid 1960s, Facit borrowed large sums and built new plants that enabled it to make better mechanical calculators at lower costs than any other company in the world. Between 1962 and 1970, employment rose 70 percent and sales and profits more than doubled. By 1970, Facit employed 14,000 people who worked in factories in twenty cities in five countries, or in sales offices in fifteen countries.

Facit’s focus on mechanical calculators was self-reinforcing. Electronics engineers were relegated to a small, jointly owned subsidiary. The engineers within Facit itself concentrated on technologies having clear relevance for mechanical calculators, and Facit understood these technologies well. Top, middle, and lower managers agreed about how a mechanical-calculator factory should look and operate, what mechanical-calculator customers wanted, what was key to success, and what was unimportant or silly. Behavior programs were pared to essentials; bottlenecks were excised; no resources were wasted gathering irrelevant information or analyzing tangential issues. Costs were low, service fast, glitches rare, understanding high, and expertise great.

But only within the programmed domain! One loyal customer finally cancelled a large order for voting machines after Facit had failed repeatedly to produce machines of adequate quality. Although some lower-level managers and engineers were acutely aware of the electronic revolution in the world at large, this awareness did not penetrate upward, and the advent of electronic calculators took Facit’s top managers by surprise. Relying on the company’s information-gathering programs, the top managers surmised that Facit’s mechanical-calculator customers would switch to electronics very slowly because they liked mechanical calculators. Of course, Facit had no programs for gathering information from people who were buying electronic calculators.

Actual demand for mechanical calculators dropped precipitously, and Facit went through two years of loss, turmoil, and contraction. The top managers’ contraction strategy aimed perversely at preserving the mechanical-calculator factories by closing the typewriter and office-furnishings factories. With bankruptcy looming, the board of directors sold Facit to a larger firm. The new top managers discovered that demand for typewriters was at least three times and demand for office furnishings at least twice the production capacities: sales personnel had been turning down orders because the company could not fill them.

Such observations dramatize the power of behavior programs to shape reality. Programs are not merely convenient and amenable tools that people control. Programs construct realities that match their assumptions — by influencing their users’ perceptions, values, and beliefs, by dictating new programs’ characteristics, by filtering information and focusing attention (Rosenhan, 1978; Salancik, 1977; Starbuck, 1976). Most importantly, programs act unreflectively.

Situations in which a relatively simple stimulus sets off an elaborate program of activity without any apparent interval of search, problem-solving, or choice are not rare. They account for a very large part of the behavior of all persons, and for almost all of the behavior of persons in relatively routine positions. Most behavior, and particularly most behavior in organizations, is governed by performance programs (March and Simon, 1958:141-42).

Indeed, research shows that programs account for almost all behavior in nonroutine positions as well (Mintzberg, 1973; Mintzberg et al., 1976; Tuchman, 1973). Behaviors get programmed through spontaneous habits, professional norms, education, training, precedents, traditions, and rituals as well as through formalized procedures. Adults cope with new situations by reapplying routines they already know, and one would be hard pressed to find unprogrammed behavior in a supermarket, a business letter, a courtroom, a cocktail party, or a bed.

Organizations amplify the general human propensity to create behavior programs, because programming is organizations’ primary method for coordinating activities, learning, and hierarchical control. Indeed, organizations frequently create action generators — automatic behavior programs that require no information-bearing stimuli because they are activated through job assignments, clocks, and calendars. Consequently, organizations act unreflectively and nonadaptively most of the time. A manufacturing organization does not produce goods at ten o’clock on Tuesday morning because some problem arose at nine o’clock and careful analysis implied that a solution would be to start producing: its founders created the organization to produce goods; funds were solicited with promises that goods would be produced, then spent on production equipment; personnel were chosen for their capabilities to produce goods; people arrived at work on Tuesday expecting to produce goods. Similarly, organizations advertise, make budgets, maintain inventories, answer letters, and hold annual picnics whether or not these actions can be interpreted as solving any immediate problems. Even if actions first begin because of specific needs, they become automatic when assigned to specialists, written into budgets, and given floor space. Most likely, however, action generators do not even originate because of specific needs: they are traditional, copied from other organizations, taught in schools of management, or legitimated by managerial literature and talk (Beyer, 1981; Starbuck, 1976).

Although new organizations inherit and imitate, old organizations undoubtedly have larger repertoires of action generators than do new ones. Because formalization produces action generators, bureaucracies have larger repertoires than nonbureaucratic organizations; bureaucratization correlates with organizational size. Similarly, the newer subunits of organizations tend to possess fewer action generators, as do the less bureaucratized subunits. Some subunits, such as those which conduct ceremonies or those with great autonomy, participate in activity domains that evaluate conformity to programs legitimated by societal ideologies; and self-selection and socialization may make members of these subunits especially respectful of societal ideologies (Beyer, 1981; Meyer and Rowan, 1977). Such subunits would use action generators more often than entrepreneurial subunits or subunits that participate in illegitimate domains. Thus, old, bureaucratic banks, churches, and public-accounting firms contrast with new, nonbureaucratic, and deviant organizations such as criminal associations or entrepreneurial firms.

People see actions as producing results, including solutions to specific problems, so organizations sometimes modify or discard action generators to obtain different results. Failures and difficulties may provoke changes, whereas successes heighten complacency (Hedberg, 1981); successful organizations seem to depend strongly on action generators (Nystrom et al., 1976). However, actions and benefits are loosely associated. Actions occur even if not stimulated by problems, successes, threats, or opportunities that exist here and now; and action generators may continue operating without change through periods when no problems, successes, threats, or opportunities are acknowledged. This stability has evoked proposals for zero-based budgeting and sunset laws.

 

JUSTIFYING ACTIONS

 

Societal ideologies insist that actions ought to be responses — actions taken unreflectively without specific reasons are irrational, and irrationality is bad (Beyer, 1981; Meyer and Rowan, 1977). So organizations justify their actions with problems, threats, successes, or opportunities. Bureaucrats, for instance, attribute red tape to legal mandates or to sound practice.

Expecting justifications to be self-serving, audiences discount them; and so organizations try to render justifications credible. Examples range from falsified reports by police officers, through military reports portraying all personnel as superior, to workers who behave abnormally during time studies (Altheide and Johnson, 1980; Edelman, 1977). Such examples show organizations interpreting, classifying, and labeling ambiguous data as well as recording biased data, but they also show that organizations encompass contending interest groups.

Actions may be justified unintentionally, because brains involuntarily alter current beliefs so as to fit in new information (Loftus, 1979). People cannot avoid revising their memories or perceptions to make them match; and in particular, an actor’s brain highlights memories justifying that action and suppresses memories making the action appear irrational or wrong (Salancik, 1977; Weick, 1979:194-201).

 

Problems as Justifications

 

After observing managers, Kepner and Tregoe (1965:7-17) concluded that differing meanings of the word problem engender a lot of confusion, disagreement, and wasted talk. Managers use problem to denote (a) evidence that events differ from what is desired, (b) events that cause discomfort or effort, (c) conjectures about why events differ from what is desired, (d) possible sources of events that cause discomfort, and (e) actions that ought to be taken to alter events. Managers also use problem synonymously with such words as issue, question, trouble, and situation.

To avoid such confusions, I denote usages (a) and (b) with the term symptoms, usages (c) and (d) with the term causes of symptoms, and usage (e) with the term needs for action. Needs for action also include statements advocating inaction. I reserve the word problem for molecular concepts to which people give distinctive labels, such as “the quality-control problem” or “the problem of production.”

I have analyzed every problem-solving transcript I could find. These analyses suggest that people avoid problem labels with negative connotations and adopt labels with positive or neutral connotations. Taken literally, most problem labels imply that no symptoms exist. Very few labels specify symptoms (the problem of absenteeism) or causes of symptoms (the crime problem). Some labels name sites, observers of symptoms, or potential problem solvers (the Watergate thing, Mitchell’s problem). More labels describe variables used to measure what is going on (the market-share problem, the population problem). Many labels describe desired states of affairs (the President’s credibility problem, the need for a better corporate image). Thus, problem labels conform to the widespread tendency to sterilize organizational communication with euphemisms.

The cash problem might refer to any amount of cash that the speaker considers problematic, and the credit problem to any level of credit. Such ambiguity enables problem labels to be used over and over while people generalize and rationalize problems. A problem is an ideological molecule that integrates elements such as values, causal beliefs, terminology, and perceptions. Over time, people expand one of these molecules to include fitting ideological elements, and they edit out inharmonious elements. Being an ideological element itself, a problem’s label helps to determine which ideological elements fit in. The problem evolves toward an ideal type that matches its label and rational logic, but deviates more and more from immediate realities. Evolution may also change the problem label, to a more general or more positive one; but a sufficiently general and positive label persists as an increasingly accurate designation.

For example, Facit’s top managers viewed their company as a harmonious system that evolved slowly by conforming to plans, and they perceived their industry as focusing on price competition over technologically stable products. For many years, their central challenge had been competitive threat, and they interpreted electronic calculators as a new aspect of competitive threat. This marginal revision left the central challenge basically the same, so it could be met through the familiar planned evolution. Two years of plant closings, managerial transfers, and financial losses convinced the top managers that planned evolution no longer met the challenge of competitive threat. But, they thought, their company was designed to change slowly so it could not change quickly, and a harmonious system for producing mechanical calculators might never be able to produce electronic calculators. The top managers could see that competitive threat had become an unmeetable challenge. After Facit was sold, the new top managers did not even see competitive threat. Indeed, Facit faced weak competition in the sale of typewriters and office furnishings, and its subsidiary had designed electronic calculators and computers. The company turned around in less than a year, including the addition of electronic products.

Because action generators are stable and nonadaptive, they require stable, nonadaptive justifications; and ambiguous labels and generalized problems afford such justifications. Thus, for Facit to keep on producing mechanical calculators, the top managers had to categorize electronic calculators as elements of competitive threat.

Growing crystals in complex fluids. The ideological molecules called problems resemble crystals: they form incrementally; their elements array in logically congruent patterns; and as rationalization fills the logical gaps, problems grow perfect and hard like emeralds or rubies.

People mix symptoms, causes of symptoms, and needs for action into conceptual and conversational hodgepodges that also include situation statements, goals, values, expectations, plans, symbols, beliefs, and theories; and consequently, problems may begin to crystallize around diverse initial elements. Advocates of rational problem solving react to this by prescribing systematic procedures for growing problem crystals. Kepner and Tregoe (1965), for instance, advocated first defining symptoms precisely, next identifying causes of these symptoms, and then spelling out goals and values before proposing needs for action.

Because brains create new categories on slight pretexts and apply logic so enthusiastically that they remember fictional events, one might ask why people do not spontaneously form problem molecules in a systematic, rational way such as Kepner and Tregoe and others have prescribed. One reason is that ideological elements have meaning only in relation to other elements. Defining symptoms requires describing the symptoms’ contexts: the symptoms can be identified as A and B only if C, D, and E can be excluded. Defining symptoms also reveals goals, values, and expectations; Kepner and Tregoe themselves prescribed that problem solvers should identify symptoms by comparing actual performances with expected ones. Both contextual distinctions and expectations rest upon causal beliefs. Hodgepodges help people surmount the self-deceptions of their compulsively logical brains and grow problem crystals that mirror some of the complexity of their environments.

People also deviate from systematic, rational problem solving because justifying actions requires tight integration between needs for action, symptoms, and causes of symptoms; and to justify actions strongly, problem crystals must emphasize needs for action (Brunsson, 1982). Needs for action talk about symptoms indirectly, by asserting that certain actions should be taken to correct symptoms, or by arguing that corrective actions are unnecessary: “I wonder if our real problem isn’t what we’re going to do about stepping up production” rather than “Production is too low.”

Then too, people behave as they do because they believe (a) that results are good or bad, (b) that results have discernible causes, and (c) that results should evoke statements about needs for action. Small children learn that results are rarely neutral, that reward and punishment are ubiquitous, and that adults react to results by taking actions. Older children learn that even if rewards, punishments, and responsive actions cannot be observed immediately, they will occur eventually and perhaps subtly. Because children are asked to solve mysteries with answers that look obvious to their parents and their teachers, children learn that adults solve mysteries easily, that mysteries arise mainly from inexperience or stupidity. Of course, some people learn these lessons better than others.

Contemporary, industrialized societies encourage people to create large problems with crystalline structures. Complexity, rapid change, and floods of information impede learning: when faced with overloads of mediated information about intricate cause-effect relations, people form simple cognitive models having little validity (Hedberg, 1981). At the same time, these societies advocate rationality, justification, consistency, and bureaucratization: people are supposed to see causal links, interdependencies, and logical implications; to integrate their ideas and to extrapolate them beyond immediate experience; to weed out dissonance and disorder (Beyer, 1981; Meyer and Rowan, 1977). Bureaucratization reinforces rationality, justification, and consistency as well as hierarchical cognitive models.

 

Successes, Threats, and Opportunities as Justifications

 

Problems justify negatively by indicating that symptoms warrant correction; and insofar as problems emphasize perceived or remembered symptoms, they justify currently or retrospectively. Therefore, problems can be viewed as a subset of continua in at least two dimensions. Successes, threats, and opportunities are other subsets of these continua: successes justify actions retrospectively and positively by implying that continuation of past actions will yield continued successes; threats and opportunities justify prospectively in terms of possible symptoms and expected needs for action. (The term symptom encompasses events that cause pleasure as well as discomfort.)

Record keeping, contending interest groups, and weak socialization can make organizational memories intractable — as Richard M. Nixon and his colleagues demonstrated — so problems and successes sometimes fail to justify actions. Threats and opportunities offer more latitude for fantasy and social construction, but may lack credibility because they are merely predictions, or may lack immediacy because they lie in the future. To justify strongly, threats and opportunities have to be larger than life, possibly too large to be taken seriously. Moreover, most societies frown on opportunism, so opportunities are mainly used confidentially inside organizations. For external audiences, organizations sometimes try to legitimate pursuits of opportunities by disclosing their altruistic motives: oil companies have been portraying their exploration activities as societally beneficial responses to OPEC’s control of oil prices. Many societies also disapprove of exercises of power unless they correct undesirable conditions, so organizations characterize powerful actions as responses to problems or threats rather than as responses to opportunities or successes. The United States has a Department of Defense, not of Armed Aggression and Control by Force.

 

Dissolution through Unlearning

 

A small problem, success, threat, or opportunity dissolves gradually. A symptom disappears; an expectation changes; a goal evolves; a causal process becomes visible. Each change propagates within the ideological molecule, influencing logically adjacent elements, strengthening some logical bonds between elements, and weakening others. Because adjacent elements need not be completely congruent, the secondary effects of a change attenuate as they propagate and parts of the molecule remain unaffected. Thus, a sequence of reinforcing changes may erode the molecule, but leave one or more fragments that become nuclei of new molecules. A solved problem may leave behind it a success and an opportunity that justify continuing the same actions; the opportunity may eventually turn into a threat or another success.

A large, general molecule picks up new elements as rapidly as it loses old ones, so instead of dissolving, it evolves. Organizations amplify this dynamic stability by creating action generators that add new elements to old molecules. Quality control might accurately be named defect discovery; annual reports and newsletters augment success records.

Organizations have great difficulty dissolving the problems, successes, threats, and opportunities that hold central positions in their top managers’ ideologies, because these molecules are so big and so crystalline. Organizations facing crises demonstrate this. The organizations find it hard even to notice that anything is amiss, but symptoms do eventually attract attention and percolate up to the top managers, who attribute the symptoms to temporary environmental disturbances such as recessions, fickle customers, or random fluctuations. The managers talk of trimming the fat, running a tighter ship; and they seek short-term relief through such tactics as unfilled positions, reduced maintenance, liquidated assets, and centralized control. In true crises, the symptoms reappear, and unlearning begins. Some people try to persuade colleagues that current behavior programs no longer work. Subordinates set out to overthrow leaders. Bankers and governmental and union officials try to exert influence. Many people depart, distrust and stress escalate, conflicts amplify, and morale collapses (Nystrom et al., 1976).

Unlearning seems to be a distinctly social phenomenon, and it may be predominantly organizational. Theories about individual people omit unlearning: the theories say a brain can replace a stimulus-response pair immediately by learning a new stimulus or a new response. But organizations wait until stimulus-response pairs have been explicitly disconfirmed before they seriously consider alternative stimuli or responses, at least for the central molecules in their top managers’ ideologies.

The need for unlearning arises from ways organizations typically differ from individual people: (a) Organizations rely on action generators, which add inertia and impede reflection. (b) Organizations emphasize explicit justification, which rigidifies and perfects their rationality. (c) To facilitate documentation and communication, organizations use perceptual categories that destroy subtlety and foster simplification (Axelrod, 1976; Bougon et al., 1977). Language defines reality, and objectively perceived realities do not make small changes (Nystrom et al., 1976). (d) Organizations not only use perceptual programs, they concretize these programs in standard operating procedures, job specifications, space assignments, buildings, and contracts (Starbuck, 1976). (e) Organizations’ complexity engenders fear that significant changes might initiate cascades of unforeseen events. (f) The conjunction of complexity with differentiation allows organizations to encompass numerous contradictions. Disparate ideological molecules can coexist. (g) Hierarchies detach top managers from the realities that their subordinates confront, so top managers’ ideologies can diverge from the perceptions of low-level personnel. (h) Top managers’ macroscopic points of view let them see more ideological elements than their ideologies can incorporate, and their secondhand contact with most events and their spokesperson roles encourage them to simplify and rationalize their ideologies (Axelrod, 1976). Public statements encourage distortion while committing the speakers to their pronouncements. (i) Organizations punish dissent and deviance, thus silencing new arrivals who have disparate ideologies or low-level personnel whose ideologies are more complex and less logical than their superiors’ (Dunbar et al., 1982). (j) Their members see organizations, especially successful ones, as powerful enough to manipulate their environments. (k) Organizations buffer themselves from their environments, so they interact loosely with their environments and have scope to fantasize about environmental phenomena. The foregoing properties correlate, of course, with organizational size, age, success, and bureaucratization.

 

WATCHING RESULTS

 

Problems, successes, threats, and opportunities crystallize while people are result watching, which happens intermittently. Much of the time, people simply continue acting without watching the results. However, societal ideologies say that organizations should set goals and record progress toward these goals (Dunbar, 1981; Meyer and Rowan, 1977), so organizations create action generators that routinely gather and evaluate data about goal achievement.

Both performance data and their evaluation are ritualistic. Numerical coding makes it easy for people to bias data; and standards for what to collect and how to categorize and interpret data are designed to make managements look successful (Boland, 1982; Halberstam, 1972). Societies put priorities on different kinds of data, primarily by assigning monetary valuations, and these objective priorities assign no value to the mass of data that might be collected. Consequently, organizations record almost no data about the causal processes operating in everyday life, and the recorded data confound attempts to infer practical lessons (Dunbar, 1981; Hopwood, 1972). Yet people attend to these data because they influence social statuses, pay, autonomy, and freedom from supervision.

Laboratory experiments suggest some hypotheses about result watching. If laboratory behaviors extrapolate to natural settings, people see nonexistent patterns, pay too much attention to exciting events and too little to familiar events, accept data readily as confirming their expectations, interpret competition or talk about causation as evidence that they can control events, attribute good results to their own actions, and blame bad results on chance or exogenous influences such as people they dislike. Consequently, bad results rarely elicit basic changes in actions: what is needed is to reinforce the actions with more effort and money and to document better the good results (Staw and Ross, 1978).

Result watching produces many scenarios. People may perceive symptoms and proceed to crystallize new problems. They may see action alternatives and debate whether these actions would solve any problems or defend against any threats. They may discover causal processes that suggest revisions in their theories. Revised theories or new action alternatives may imply revised goals and expectations. Revised goals may disclose different symptoms. And so on.

Some of these scenarios correspond to the conventional notion of problem solving: perceive a problem, consider alternative actions, choose a solution. Thus, organizations do exhibit a problem-solving mode. But such scenarios are unusual: in one study, only four of 233 decision processes conformed closely to problem solving (Witte, 1972). Moreover, attempts to conform to a problem-solving scenario tend to be self-defeating. Insisting that problems be solved before actions are taken renders actions impossible, because people rarely have enough information and understanding to feel sure they have found solutions. Considering alternative actions makes it difficult to arouse the motivation and commitment to carry out actions, because people see risks and liabilities of the chosen actions (Brunsson, 1982). Defining problems without regard for potential actions may yield problems that have no solutions (Watzlawick et al., 1974). Even scenarios that approximate problem solving include activities — such as learning, experimentation, and feedback from actions to problems — that fall outside the notion of problem solving.

Observers have seen diverse scenarios (Hewitt and Hall, 1973; Mintzberg et al., 1976). What is striking about the published reports is not that result watching follows consistent scenarios but that all kinds of scenarios occur. The patterns observers discern are better explained as artifacts of the observers themselves or of the (often artificial) situations they observe than as characteristics of spontaneous activities in familiar settings.

Explanations of why people start problem solving tend to be tautological. Hewitt and Hall (1973:368), for instance, held that people classify a situation as problematic “if the behavior seems atypical, unlikely, inexplicable, technically inappropriate, unrealistic, or morally wrong.” But any situation could be interpreted as violating at least one of those criteria, because people readily alter their goals, expectations, moral standards, and perceptions. A meaningful explanation has to consider minute events that determine whether particular people will regard a specific situation as problematic. Among the organizational action generators are periodic meetings in which people make sense of performance data, periodic meetings in which people agree to plans and evaluation criteria, and documents that arrive routinely and demand signatures or data. Also, outsiders may ask for data or point out problems or request actions. All of these initiate result watching. Knowing that result watching calls for statements about symptoms, causes of symptoms, and needs for action, people make such statements, and it is these statements that explicate disorder and render situations problematic. Ensuing scenarios operate primarily to mesh the disorder-making statements into preexisting problems, successes, threats, and opportunities (Lyles and Mitroff, 1980).

 

Talking about What to Do

 

People generally spend little or no time on pure description before they evaluate results and propose actions (Kepner and Tregoe, 1965; Witte, 1972). Talk about results usually begins with someone stating a need for action, a symptom, or a cause of a symptom; and overtly descriptive statements nearly always imply certain causal interpretations or certain needs for action (Mehan, 1982). However, hearers do not interpret these statements as definitive conclusions even if the speakers evince confidence. Rather, the statements initiate social rituals that build up collective definitions of reality, and stylized language plays central roles in these rituals.

People vote for or against proposals in many ways, including nods, mumbles, and skeptical looks; but two especially interesting media for voting are rephrasings and causal clichés. These enable people to contribute to social construction and to reinforce their organizational commitments without advancing alternative proposals: people can participate without risking their interpersonal statuses. Rephrasings purport merely to echo previous proposals: “You’re saying it’s time we did something about quality control” (which was actually a subtle reorientation of what had been said). Causal clichés endorse or reject proposals indirectly by commenting on causal models that, the clichés imply, underlie the proposals:

“He who hesitates is lost.” Although clichés portray their speakers as inane and imitative, the clichés’ value lies in their unoriginality and emptiness: unoriginal, empty statements are not to be mistaken for alternative proposals, so hearers know they should interpret them as votes. In fact, clichés’ unoriginality implicitly disavows their overt meanings, as if to disclaim, “I haven’t thought about it very much, but....” Rephrasings announce quite explicitly that they are only votes, and they disown their overt meanings overtly: “This is what you said, so you are to blame if it isn’t what you meant.”

People in organizations must not only choose actions, they must arouse motivation and elicit commitments to take actions; and group discussions facilitate both (Brunsson, 1982). When only one or two people contribute, participation is inadequate to support collective actions; but there would be too much ideological diversity if many participants injected unique contributions. Clichés and rephrasings enable many to participate with little uniqueness, and unoriginality signals commitments to cooperation and organizational membership (Schiffrin, 1977). Because groups and organizations rarely endorse needs for action when they are first proposed (Fisher, 1980:149-54), rephrasing is essential to winning endorsements.

Although the participants in group discussions frequently mention a core, real, or main problem (Hewitt and Hall, 1973), these phrases rarely reflect consensuses about priorities. People may speak this way to remind others that they met to discuss a specified problem, to designate their problem as the most important problem, to follow prescriptions that advocate solving the most important problem first, or even to express confusion. Groups hardly ever agree on a core symptom, a core cause of symptoms, or a core need for action (Fisher, 1980; Kepner and Tregoe, 1965). Most actions are justified by more than one problem, and most problems justify more than one action.

Nor do people generally seek or believe they have found guaranteed solutions. They regard agreed needs for action as conjectures to be tested experimentally. This frame of reference helps them accept (a) that participants disagree about needs for action, (b) that many agreed needs for action are never acted upon, and (c) that few actions solve problems.

Figure 1 summarizes the foregoing discussion by diagramming the main causal processes that regulate the cognitive frameworks of organizations. The dashed arrows denote inverse processes that can produce negative feedbacks. For example, the inverse process between expected results and discovery can decelerate crystallization as ideological molecules grow larger — the illusion that learning becomes unnecessary because so much is already known.

 

 

Figure 1. A Summarizing Flowchart

 

TRYING TO STABILIZE CHANGE

 

Change and stability coexist in dialectical syntheses (Giddens, 1979:131-64). Stability may occur in the structural facades that legitimate organizations in terms of societal ideologies, while changes appear in behaviors, technologies, and environments. Everyday programs and relations may remain stable during dramatic changes in long-run goals, expectations, and values. The stability of programs and relations may bring on revolutionary crises that end in organizational demise. Stable action generators can generate changes in actions.

Normann (1971) observed that organizations react quite differently to variations, which would modify the organizations’ domains only incrementally, than to reorientations, which would redefine the domains. Variations exploit organizations' experience, preserve existing distributions of power, and can win approval from partially conflicting interests. Reorientations take organizations outside their familiar domains and alter the bases of power, so reorientation proposals instigate struggles between power holders and power seekers (Rhenman, 1973; Wildavsky, 1972). For example, Facit’s top managers did not understand electronics, and they doubtless feared the younger managers who spoke enthusiastically of an electronic future. The top managers also expected (rightly) to be blamed for committing Facit to mechanical calculators.

Watzlawick et al. (1974) emphasized the relativity of perception. Reorientations seem illogical because they violate basic tenets of a current cognitive framework, whereas variations make sense because they modify actions or ideologies incrementally within an overarching cognitive framework that they accept. Thus, Facit’s top managers thought it sensible to close the unimportant typewriter and office-furnishings factories so as to save the important mechanical-calculator factories. The top managers viewed electronic calculators as reorientations even though other managers classed them as mere variations on a large product line.

The conceptual filtering of reorientation proposals and power struggles over them are dramatic versions of pervasive, everyday processes. People appraise their proposals before enunciating them: they put forth only real symptoms, only plausible causes of symptoms, and only needs for good actions. Real symptoms concern variables that high-status people hold important, describe deviations from legitimate goals or expectations, or describe discomforts, pleasures, efforts, or benefits that people can properly discuss in public (Lyles and Mitroff, 1980). Plausible causes mesh into current theories, blame people rather than rules or machines, and enemies or strangers rather than friends, and define accidents in terms of what legitimate theories could have predicted, or what authorities did predict. Good actions follow precedents, harmonize with current actions, resemble the practices in other organizations, use resources that are going to waste, fit with top managers’ values, or reinforce power holders (Mehan, 1982; Starbuck, 1976; Staw and Ross, 1978). Although people apply these appraisal criteria implicitly, the criteria remain rather stable from situation to situation; and stable criteria for appraising allow people to shift their criteria for choosing.

Managerial ideologies cherish variations. Executives believe organizations should grow incrementally at their margins. Variations, like searches for symptoms, are often programmed: research departments generate opportunities for complementary actions; sales personnel report on competitors’ actions within current domains. Companies that managers regard as well run “tend to be tinkerers rather than inventors, making small steps of progress rather than conceiving sweeping new concepts” (Peters, 1980:196).

Emphasis on variations may be essential in normal situations because of the gross misperceptions people suffer. Programs, buffers, and slack resources dull organizations’ perceptions of what is happening, so organizations fantasize about their environments and their own characteristics. Business firms’ profits correlate not at all with their managers’ consensus about goals and strategies, and formal planning by business firms is as likely to yield unprofitable strategies as profitable ones (Grinyer and Norburn, 1975). Managers’ perceptions of their industries correlate zero with statistical measures of those industries (Downey et al., 1975; Tosi et al., 1973). Members of organizations agree with each other and with statistical measures about whether their organizations are large or small; but about all other characteristics of their organizations, they disagree with each other as well as with statistical measures (Payne and Pugh, 1976). Formal reports are filled with misrepresentations and inadvertent biases (Altheide and Johnson, 1980; Hopwood, 1972), and organizations that take formal reports seriously either get into trouble or perform ineffectively (Grinyer and Norburn, 1975; Starbuck et al., 1978). Ritualistic result watching encourages people to tolerate deviant observations that make no sense and to accept superficial, incomplete causal theories. Such misperceptions mean that reorientations would generally be foolhardy, whereas incremental variations keep low the risks of unpleasant surprises.

However, variations are also inadequate. People choose variations and interpret results within the frameworks of their current beliefs and vested interests, so misperceptions not only persist, they accumulate. Because organizations create programs to repeat their successes, they want stable environments, and so they try to choose variations that will halt social and technological changes. Such variations can succeed only to small extents and briefly, but the organizations perceive more environmental stability than exists.

Hierarchies amplify these tendencies. Top managers’ misperceptions and self-deceptions are especially potent because top managers can block the actions proposed by subordinates. Yet top managers are also especially prone to misperceive events and to resist changes: they have strong vested interests; they will be blamed if current practices, strategies, and goals prove to be wrong; reorientations threaten their dominance; their promotions and high statuses have persuaded them that they have more expertise than other people; their expertise tends to be out-of-date because their personal experiences with clients, customers, technologies, and low-level personnel lie in the past; they get much information through channels which conceal events that might displease them; and they associate with other top managers who face similar pressures (Porter and Roberts, 1976:1573-76). Thus, organizations behave somewhat as Marx ([1859] 1904) said societies behave. Marx argued that ruling social classes try to preserve their favored positions by halting social changes, so technologies grow increasingly inconsistent with social structures, until the ruling classes can no longer control their societies. For organizations, the issue is less one of technologies versus social structures than one of internal versus external events: top managers can block technological changes inside their organizations, but they have little influence on either technological or social changes outside their organizations.

Marx said that when a ruling social class can no longer control events, a revolution installs a different ruling class and transforms the social structure. His observation generalizes only partly to organizations. Reorientations do punctuate sequences of variations, and reorientations do activate and broaden political activities, but few reorientations transform organizational structures (Jönsson and Lundin, 1977; Normann, 1971; Rhenman, 1973; Starbuck, 1973). Facit’s reorientation, for instance, began with the replacement of a dozen top managers, but the overwhelming majority of members occupied the same positions after the reorientation as they did before. Indeed, hierarchies generally mean that large behavioral and ideological effects can result from changing just a few top managers.

Many organizations drift along, perceiving that they are succeeding in stable environments, until they suddenly find themselves confronted by existence-threatening crises. Most of the organizations my colleagues and I have studied did not survive their crises; but in every case of survival, the reorientations included wholesale replacements of the top managers, and we infer that survival requires this. Crises also bring unlearning when people discover that their beliefs do not explain events, that their behavior programs are producing bad results, that their superiors’ expertise is hollow. Although this unlearning clears the way for new learning during reorientations, it so corrodes morale and trust that many organizations cannot reorient.

Crises evidently afflict all kinds of organizations, although they may be more likely in bureaucracies that have recently enjoyed great success. Some organizations facing crises unlearn, replace their top managers, reorient, and survive. More organizations unlearn and then die. Thus, nonadaptiveness turns organizations into temporary systems, nearly all of which have short lives. The fifty-year-old corporations represent only two percent of those initially created, and fifty-year-old Federal agencies only four percent (Starbuck and Nystrom, 1981). Although older organizations are more likely to survive, even elderly organizations are far from immortal. Approximately 30 percent of the fifty-year-old corporations can be expected to disappear within ten years, as can 26 percent of the fifty-year-old Federal agencies.

 

FOOTNOTE

 

*This article benefited from the suggestions of Scott Greer, Michael Moch, Paul Nystrom, and anonymous reviewers.

 

REFERENCES

 

Altheide, David L. and John M. Johnson

1980 Bureaucratic Propaganda. Boston: Allyn & Bacon.

Axelrod, Robert M.

1976 “Results.” Pp. 221-48 in Robert M. Axelrod (ed.), Structure of Decision: The Cognitive Maps of Political Élites. Princeton: Princeton University Press.

Beyer, Janice M.

1981 “Ideologies, values, and decision making in organizations.” Pp. 167-202 in Paul C. Nystrom and William H. Starbuck (eds.), Handbook of Organizational Design, Vol. 2. New York: Oxford University Press.

Boland, Richard J., Jr.

1982 “Myth and technology in the American accounting profession.” Journal of Management Studies 19:109-27.

Bougon, Michel, Karl E. Weick and Din Binkhorst

1977 “Cognition in organizations: an analysis of the Utrecht Jazz Orchestra.” Administrative Science Quarterly 22:606-39.

Brunsson, Nils

1982 “The irrationality of action and action rationality: decisions, ideologies, and organisational actions.” Journal of Management Studies 19:29-44.

Cohen, Michael D., James G. March and Johan P. Olsen

1972 “A garbage can model of organizational choice.” Administrative Science Quarterly 17:1-25.

Cyert, Richard M. and James G. March

1963 A Behavioral Theory of the Firm. Englewood Cliffs, NJ: Prentice-Hall.

Cyert, Richard M., Herbert A. Simon and Donald B. Trow

1956 “Observation of a business decision.” Journal of Business 29:237-48.

Downey, H. Kirk, Don Hellriegel and John W. Slocum, Jr.

1975 “Environmental uncertainty: the construct and its application.” Administrative Science Quarterly 20:613-29.

Dunbar, Roger L. M.

1981 “Designs for organizational control.” Pp. 85-115 in Paul C. Nystrom and William H. Starbuck (eds.), Handbook of Organizational Design, Vol. 2. New York: Oxford University Press.

Dunbar, Roger L. M., John M. Dutton and William R. Torbert

1982 “Crossing mother: ideological constraints on organizational improvements.” Journal of Management Studies 19:91-108.

Edelman, Murray

1977 Political Language: Words That Succeed and Policies That Fail. New York: Academic Press.

Fisher, B. Aubrey

1980 Small Group Decision Making. New York: McGraw-Hill.

Giddens, Anthony

1979 Central Problems in Social Theory: Action, Structure and Contradiction in Social Analysis. London: Macmillan.

Grinyer, Peter H. and David Norburn

1975 “Planning for existing markets: perceptions of executives and financial performance.” Journal of the Royal Statistical Society, Series A 138:70-97.

Halberstam, David

1972 The Best and the Brightest. New York: Random House.

Hedberg, Bo L. T.

1981 “How organizations learn and unlearn.” Pp. 3-27 in Paul C. Nystrom and William H. Starbuck (eds.), Handbook of Organizational Design, Vol. 1. New York: Oxford University Press.

Hewitt, John P. and Peter M. Hall

1973 “Social problems, problematic situations, and quasi-theories.” American Sociological Review 38:367-74.

Hopwood, Anthony G.

1972 “An empirical study of the role of accounting data in performance evaluation.” Empirical Research in Accounting: Selected Studies (supplement to the Journal of Accounting Research) 10:156-82.

Jönsson, Sten A. and Rolf A. Lundin

1977 “Myths and wishful thinking as management tools.” Pp. 157-70 in Paul C. Nystrom and William H. Starbuck (eds.), Prescriptive Models of Organizations. Amsterdam: North-Holland.

Kepner, Charles H. and Benjamin B. Tregoe

1965 The Rational Manager. New York: McGraw-Hill.

Loftus, Elizabeth F.

1979 “The malleability of human memory.” American Scientist 67:312-20.

Lyles, Marjorie A. and Ian I. Mitroff

1980 “Organizational problem formulation: an empirical study.” Administrative Science Quarterly 25:102-19.

March, James G. and Herbert A. Simon

1958 Organizations. New York: Wiley.

Marx, Karl

1904 [1859] A Contribution to the Critique of Political Economy. Chicago: Kerr.

Mehan, Hugh

1982 “Practical decision making in naturally occurring institutional settings.” In Barbara Rogoff and Jean Lave (eds.), Everyday Cognition: Its Development in Social Context. Cambridge: Harvard University Press (forthcoming).

Meyer, John W. and Brian Rowan

1977 “Institutionalized organizations: formal structure as myth and ceremony.” American Journal of Sociology 83:340-63.

Mintzberg, Henry

1973 The Nature of Managerial Work. New York: Harper & Row.

Mintzberg, Henry, Duru Raisinghani and André Théorêt

1976 “The structure of ‘unstructured’ decision processes.” Administrative Science Quarterly 21:246-75.

Normann, Richard

1971 “Organizational innovativeness: product variation and reorientation.” Administrative Science Quarterly 16:203-15.

Nystrom, Paul C., Bo L. T. Hedberg and William H. Starbuck

1976 “Interacting processes as organization designs.” Pp. 209-30 in Ralph H. Kilmann, Louis R. Pondy and Dennis P. Slevin (eds.), The Management of Organization Design, Vol. 1. New York: Elsevier North-Holland.

Payne, Roy L. and Derek S. Pugh

1976 “Organizational structure and climate.” Pp. 1125-73 in Marvin D. Dunnette (ed.), Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally.

Peters, Thomas J.

1980 “Putting excellence into management.” Business Week 2646 (July 21): 196-97, 200, 205.

Porter, Lyman W. and Karlene H. Roberts

1976 “Communication in organizations.” Pp. 1553-89 in Marvin D. Dunnette (ed.), Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally.

Rhenman, Eric

1973 Organization Theory for Long-range Planning. London: Wiley.

Rosenhan, David L.

1978 “On being sane in insane places.” Pp. 29-41 in John M. Neale, Gerald C. Davison and Kenneth P. Price (eds.), Contemporary Readings in Psychopathology. New York: Wiley.

Salancik, Gerald R.

1977 “Commitment and the control of organizational behavior and belief.” Pp. 1-54 in Barry M. Staw and Gerald R. Salancik (eds.), New Directions in Organizational Behavior. Chicago: St. Clair.

Schiffrin, Deborah

1977 “Opening encounters.” American Sociological Review 42:679-91.

Starbuck, William H.

1973 “Tadpoles into Armageddon and Chrysler into butterflies.” Social Science Research, 2:81-109.

1976 “Organizations and their environments.” Pp. 1069-1123 in Marvin D. Dunnette (ed.), Handbook of Industrial and Organizational Psychology. Chicago: Rand McNally.

Starbuck, William H. and Bo L. T. Hedberg

1977 “Saving an organization from a stagnating environment.” Pp. 249-58 in Hans B. Thorelli (ed.), Strategy + Structure = Performance. Bloomington: Indiana University Press.

Starbuck, William H. and Paul C. Nystrom

1981 “Why the world needs organizational design.” Journal of General Management, 6(1):3-17.

Starbuck, William H., Arent Greve and Bo L. T. Hedberg

1978 “Responding to crises.” Journal of Business Administration 9(2):111-37.

Staw, Barry M. and Jerry Ross

1978 “Commitment to a policy decision: a multi-theoretical perspective.” Administrative Science Quarterly 23:40-64.

Tosi, Henry, Ramon Aldag and Ronald Storey

1973 “On the measurement of the environment: an assessment of the Lawrence and Lorsch environmental uncertainty subscale.” Administrative Science Quarterly 18:27-36.

Tuchman, Gaye

1973 “Making news by doing work: routinizing the unexpected.” American Journal of Sociology 79:110-31.

Watzlawick, Paul, John H. Weakland and Richard Fisch

1974 Change: Principles of Problem Formation and Problem Resolution. New York: Norton.

Weick, Karl E.

1979 The Social Psychology of Organizing (2nd ed.). Reading, MA: Addison-Wesley.

Wildavsky, Aaron B.

1972 “The self-evaluating organization.” Public Administration Review 32:509-20.

Witte, Eberhard

1972 “Field research on complex decision making processes — the phase theorem.” International Studies on Management & Organization 2:156-82.