The Need for a Paradigm for Innovation

By A Norman, K. Mahmood, M. Chowdhury

Abstract

Microeconomics is a paradigm for deriving supply and demand functions. The assumptions of fixed technology and optimization make the current paradigm external to the study of innovation. Examining important manufacturing innovations since 1800, we assert that the proper formulation for modeling innovation is a generalization of the estimation and control model. We consider the NP-hard problem of scheduling to show that much innovation consists of obtaining better approximate solutions to difficult problems. Finally, we examine organizational innovations. Here the ability to predict is so inadequate that organizational innovation might best be promoted by chaos theory. In order to study innovation, economists will have to be willing to focus not on production functions, but on the underlying processes.

JEL Keywords:

 

Send correspondence to:

Alfred Lorn Norman

Department of Economics

The University of Texas at Austin

Austin, TX USA 78712-1173

1  Introduction

Traditional substantive microeconomics assumes that tastes and technology are given. Then, by postulating optimal behavior by consumers and producers, microeconomic theory provides demand and supply functions. Economics can study the comparative statics of changes in prices and income and can estimate the derived demand and supply functions. In substantive microeconomics the focus is on the consequences of optimization, not the procedures to achieve optimization. While the authors believe that economic agents, even in the best of circumstances, only approximate optimal behavior, we do not question the value of traditional microeconomic theory for the study of supply and demand functions.

We shall present our case that traditional substantive microeconomics is not the appropriate paradigm to study innovation. A logical place to start a proposal for the study of innovation is Schumpeter's definition. Schumpeter (1934) defined innovation as new combinations of productive means in the following five cases:

(1) Introduction of a new good

(2) Introduction of a new method of production

(3) Opening of a new market

(4) Conquest of a new source of supply

(5) New organization of any industry

In this paper we shall focus on case (2), which can be interpreted as the creation of a new point outside (in the positive orthant of) the current production possibilities set. While this definition can be generalized to all goal-directed behavior, for example see Norman (1993, 1999), we shall use the basic definition in this paper because we focus on production. In addition, it is important to distinguish between an invention and an innovation. Spreadsheet software is an invention. A new business application of spreadsheets which increases profits is an innovation.

The study of innovation is essential in order to promote growth in incomes and to increase the competitiveness of all industries. We contend the serious study of innovation will require a more general approach to microeconomics.

In Section 2 we examine manufacturing from a historical perspective since 1800, considering interchangeable parts, diversification, and the current innovation of production for final demand. Major innovations such as interchangeable parts can take over a century to become commonplace, and firms actively invest in improving their production processes. In current microeconomics, the production function or production possibilities set is taken as given. This means that the study of innovation is external to microeconomics. The appropriate model for considering innovation would be a generalization of an estimation and control model that we will call a search and control model. A simple example of an estimation and control model is shown in Appendix A. Estimation and control problems are generally analytically intractable and not computable.

But, as Alchian (1950) pointed out decades ago, competition does not imply the most efficient firm is optimal, only that the most efficient firm is more efficient than its competitors. The movement to production for final demand has created several difficult problems for firms. One such problem is scheduling, a well defined problem that in its most general case is an NP-hard problem. Operations researchers have been working on this problem since Kantorovich in 1939. Innovations in scheduling are improvements in the approximate algorithms for solving this difficult problem. Scheduling is discussed in Section 3, and Appendix B contains a short discussion of scheduling problems. We show that innovation can involve an improvement in an approximate solution. Assuming firms optimize precludes the study of most innovation.

The optimal organization in the current economy is far from obvious, and the problem is not clearly defined. We show in Section 4 that there are multiplicative factors and that the number of possible organizations is very large because advancing information technology affects almost all processes. Since simulation of organizations is not accurate, decision makers have no easy way to evaluate a large number of alternatives. One possible approach to organizational innovation is self-reorganization through chaos theory.

The concluding remarks in Section 5 emphasize our main points.

2  Innovation since 1800

In this section we shall consider three important manufacturing innovations since 1800. The first is interchangeable parts, the second is product diversification, and the third is the current progress towards production for final demand. In considering these innovations it is important to realize that major innovations take decades, even in excess of a century, to reach full implementation. Innovation is endogenous, not exogenous, to the firm. As a resource allocation problem, innovation requires the firm each period to decide to what extent to expend resources on improving the production function to obtain better efficiency. Such a problem is an estimation and control problem because the improved production function must be learned simultaneously with current production. As estimation and control problems are generally not computable, one issue is the development of good suboptimal strategies for innovation. Another issue is to what extent innovation is a public good deserving public funding.

2.1  Interchangeable Parts

The economics of interchangeable parts involves three factors:

1. Feasibility and cost of the machinery to create parts with the required accuracy.

2. Reduction in assembly costs, because the assemblers do not have to be skilled artisans making the parts fit.

3. Reduction in repair costs, because the skill required to replace a standardized part is much less than that needed for a skilled artisan to create a nonstandard part.

Some products have had interchangeable parts since before recorded history. The sinew string of a prehistoric bow was an interchangeable part in other bows, and so were arrows. The progress towards interchangeable parts in modern manufacturing is directly related to the inventive activity in creating machinery to make parts with greater accuracy, and to the mass production of such machinery to reduce its cost, see Durfee (1893). With the invention of printing, type became interchangeable, first within one shop and then between shops. In the 18th century mechanics invented machinery to create the geared teeth necessary to build clocks. By the end of the 18th century, inventors in England had invented various types of machines to work wood. The idea of interchangeable parts in manufacturing may have originated in 18th century France, where the military viewed it as a desirable goal in building weapons, but at that time machinery with the required accuracy to create the metal parts did not exist at any cost.

One of the first successful private applications of the idea of interchangeable parts in the US was by Eli Terry, starting about 1800, in the production of clocks using wooden parts, see Bourne (1995). Woodworking machinery of the time was accurate enough and cheap enough to make serviceable clocks from wood with interchangeable parts. Such clocks could be mass produced at much lower cost than artisan-made clocks. In 1814 he was mass producing clocks with brass and steel parts. To obtain machines with the required metal working accuracy he had to employ skilled mechanics to improve the metal working machines.

The idea of interchangeable parts is frequently, and erroneously, attributed to Eli Whitney, who in 1798 obtained a US government contract to build 10,000 muskets, Smith (1981). In 1799, in order to gain more time on his contract, he proposed constructing the muskets with interchangeable parts, and in 1801 Eli Whitney contrived a demonstration to show that he had succeeded. By 1808 he had set up a factory in Whitneyville and delivered the muskets, whose parts were not completely interchangeable. The problem was that the machinery making the parts did not have the required accuracy. Eli Whitney spent considerable effort improving the quality of his metal working machinery.

Interchangeable parts in the manufacture of muskets were not realized until 1826 at the Harpers Ferry Armory, where Hall succeeded based on his own and Simeon North's extensive improvements in metal working machinery. The first successful application of interchangeable parts to a metal product thus required over 25 years of inventive activity improving the metal working machinery and organizing the production process.

The diffusion of interchangeable parts manufacturing proceeded gradually in the 19th century. The government, in creating interchangeable parts for muskets, felt that the benefit of having soldiers rather than craftsmen repair muskets in the field was worth the extra cost. But for most private sector products, the benefit was not initially worth the cost of obtaining the required accuracy. As skilled machinists left government armories to work in the private sector, the knowledge gained in the armories was transferred to the private sector. As the metal working machine industry invented new, more accurate machines and expanded their production, costs fell due to mass production. This led to the diffusion of interchangeable parts manufacturing, which became commonplace by the 20th century. The principles of interchangeable parts became textbook material, for example see Buckingham (1920).

2.2  Multiple Products

In the first half of the 20th century, manufacturing firms in several industries shifted from producing single product lines to multiple product lines.

The economics of producing multiple products has been characterized by Chandler as the economics of scope:

1. In their research and development efforts firms have economies in producing a range of related products.

2. Firms have economies in selling a variety of related products.

The corporate merger movement at the end of the 19th century was one of vertical and horizontal mergers. Firms moved backward to obtain secure supplies of raw materials, moved forward into final consumer products, and increased the concentration of firms in manufacturing. At the same time, DuPont became one of the first firms in the 20th century to produce a variety of products. The real expansion in multiproduct firms started in the 1920s. As Chandler (1988) points out, these firms were in the electrical, chemical, machinery, metals, and rubber industries, industries that had extensive research and development based on chemistry and physics. After WW II diversification became commonplace.

Diversification created problems that have occupied manufacturing innovators for over 50 years. Producing multiple products in a single factory creates at least two major problems to be overcome:

1. Getting the right input to the right place at the right time.

2. Determining the best production run to match supply with demand.

The input can be a part or work in progress for discrete production, or a combination of chemicals in continuous production. The simplest solution to this coordination problem is to stockpile an inventory of inputs at each station in the production process to ease the timing and quality control problems. Assemblers can then find a correct part in the inventory. This simple solution has a hidden cost: the inventory of parts and work in progress is a financial investment that garners no rate of return until the firm obtains payment for the sale of the final product.

The economics of the second problem are determined by the cost and time of changing over from the production of one product to another. The more costly the changeover and the longer the required time, the longer the production run needed to distribute the fixed costs of changeover. But long production runs also have a hidden cost in that the inventory of final products does not garner a rate of return until payment from the sale. Finally, the faster and cheaper the changeover, the greater the variety of products that can be produced at a single factory.
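To make the trade-off concrete, the following short Python sketch (our illustration, not taken from any source cited here; all parameter values are hypothetical) computes the run length that balances a fixed changeover cost against the cost of capital tied up in the inventory a run creates, using standard economic-lot-size logic.

import math

def economic_run_length(demand_rate, changeover_cost, unit_cost, capital_rate):
    """Economic lot size: balance the fixed cost of one changeover against
    the financing cost of the average inventory the production run creates.

    demand_rate      units demanded per year
    changeover_cost  fixed cost of one changeover (setup labor, lost output)
    unit_cost        production cost per unit tied up in inventory
    capital_rate     annual cost of capital (the foregone rate of return)
    """
    holding_cost = unit_cost * capital_rate          # cost of holding one unit for a year
    return math.sqrt(2.0 * demand_rate * changeover_cost / holding_cost)

# Hypothetical numbers: halving the changeover cost shortens the run
# (and the average inventory) by roughly 30 percent.
for setup in (10000.0, 5000.0):
    q = economic_run_length(demand_rate=50000, changeover_cost=setup,
                            unit_cost=20.0, capital_rate=0.10)
    print(f"changeover cost {setup:>8.0f} -> run length {q:,.0f} units")

The point of the sketch is only that the run length scales with the square root of the changeover cost, which is why the innovations in rapid changeover described in the next subsection translate directly into smaller runs and less idle inventory.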

2.3  Production for final demand

Most production innovations in the second half of the 20th century have focused on providing better solutions to these two problems of multiproduct production. After World War II, US manufacturers did not have powerful incentives to innovate in manufacturing. In the 50s they could sell all they could produce. Consequently, innovations in the US took place in finance and in applying software to administration. The goal of American manufacturing until the 80s was long production runs to minimize the fixed costs of changeover from one product to another. Because land was cheap relative to other major manufacturing countries, the resulting inventory buildup, although costly in terms of lost return on investment, was inexpensive to store.

Japan is about the size of California and only about 15% of its land is flat. Relative to other countries, land is very expensive in Japan. After World War II, Japanese manufacturers had powerful incentives to reduce the land required for manufacturing in order to compete internationally. In the 50s Shigeo Shingo (1985) developed the SMED (single minute exchange of die) system to reduce the time and cost required to shift a machine from the production of one product to another to less than 10 minutes. His SMED system consists of careful observation to streamline and standardize the changeover process. Reducing the fixed cost of changeover makes smaller production runs economic in parts production and reduces both the inventory cost of foregone return on investment and the space required to store the inventory. There are consulting firms selling the SMED system worldwide today.

The innovation that greatly reduced the parts and work in progress inventory was Toyota's creation of the Kanban (generally called just in time, JIT) system of inventory control, which tries to reduce the parts on hand to exactly those required for the production of the product on the assembly line. Toyota accomplished this with cards, not computers. Successful application of the Kanban or JIT inventory system requires that the suppliers be adjacent to the assembly line so that the flow of required parts is not interrupted by transportation delays.

In order to effect Kanban, the quality of the parts must be extremely high because each part must be correct when delivered. The Japanese made tremendous advances in quality control, a field started by AT&T in the 30s and introduced into Japan after World War II by Deming and Juran. Shigeo Shingo (1986) created the Poka Yoke system of quality control, using systematic observation of the manufacturing process to incorporate steps such that quality control becomes an integral part of the production process. Taguchi introduced the idea of designing products so that performance is not affected by minor defects, Ealey (1994). These innovations led to the Toyota or Japanese system of manufacturing.

A related Japanese innovation is the reduction in the time from idea to product on the market. In automobile manufacturing the Japanese gave the design team leader authority to make decisions and included on the design team members from all parts of the corporation. Because manufacturing engineers were included in the design process, designs were created that could be manufactured immediately without extensive design changes. Including marketing people resulted in designs that were more likely to be desired by consumers. In automobiles this innovation reduced the time to design a new automobile to about two years. At the time, US auto manufacturers took as long as six years to design a new automobile.

In the renaissance in American manufacturing starting in the mid 1980s, US firms imitated the Japanese innovations. For example, Chrysler created Japanese style design teams to greatly accelerate the design of new cars. Firms in the US became converts to quality control as they discovered that consumers preferred quality Japanese products.

The US firms exploited their lead in software to innovate in production beyond the Japanese approach. Because US suppliers are generally not near the assembly lines, US reduction in inventory and work in progress was promoted by software, starting with material resource planning, MRP, and advancing to the supply chain management programs of today. Because US supplier firms are generally not adjacent to the assembly plant, it is not possible to reduce inventory to the Japanese levels, given the possibility of transportation mishaps. Firms like Dell have their suppliers keep supplies adjacent to the Dell assembly area and pay for supplies as they are used, McWilliams (1999). In some cases the supplier's 18 wheeler rig is parked at the factory.

Using their lead in software, American firms made innovations culminating in the innovation of production for final demand. The application of software to soft, as opposed to hard, automation increased the trend toward rapid, lower-cost changeover from one product to another. One example is the use of industrial bar codes to simplify the machine intelligence problem of having each machine perform the correct procedure on assembly lines with multiple products. Another example is the software-controlled flexible manufacturing system. Another is the factory simulation programs created by Aspen Technology (1999), Dassault Systemes (1999), and Tecnomatix (1999) that allow industrial engineers to simulate the factory organization. Use of factory simulation can greatly reduce changeover times, not just of a single machine but of an entire factory. In the auto industry the use of factory simulation programs promises to reduce the changeover in product production from 8 weeks to 48 hours, see Ross (1998). With soft automation the cost of batch production is falling to the level of previous mass production, so that firms, under the aegis of total quality management's goal of pleasing the customer, are increasingly producing for niche markets.

The culmination in production for final demand comes about through the use of advances in communication for sales. Dell is one of the first firms to produce products exclusively for final demand and not for inventory. At first customers ordered their built-to-specification computers via the telephone. Then, with the advance of E-commerce on the internet, Dell began selling through the internet. Its software is designed to aid the customer in choosing the most appropriate computer for their needs and to allow the customer to follow their order through the production and delivery process. In the 21st century we forecast that production for final demand will become an increasingly common manufacturing practice.

In the later part of the 20th century the US government has taken many steps to promote invention and innovation in manufacturing. One example is funding for Sematech (1999), a consortium promoting integrated circuit production. The Defense Advanced Research Projects Agency, DARPA (1999), funds numerous inventive and innovative activities in military production that frequently also have broad applications in the private sector, such as Agile Manufacturing, which promotes lean manufacturing and rapid changeover. The Department of Commerce has initiated the manufacturing extension partnership, MEP (1999), to promote innovation and imitation in small firm manufacturing.

2.4  Implications

In order to obtain supply and demand functions, the developers of microeconomics assumed the production function or production possibilities set to be fixed, for example see Mas-Colell, Whinston, and Green (1995). What this does is make the derivation of supply and demand functions analytically tractable. But, at the same time it makes the study of innovation external to microeconomics.

Looking at our examples, it should be clear that a basic resource allocation problem of the firm is how many resources to devote in each period to improving the production function and how many to production for sale. Moreover, in innovating in production processes, innovators generally cannot afford to set up a production process simply as an experiment to determine the new, better production function. As they learn the new, improved production process they must simultaneously create an output to sell at a profit. Thus, in most cases the learning function is subordinate to making a profit.

Manufacturers are simultaneously searching new technological and social possibilities to improve the production function while producing with existing knowledge. Such a problem might be called a search and control problem. The simplest form of such a problem is an estimation and control problem where the objective is to optimally produce while learning the value of an unknown parameter. In appendix A we provide an example to show that estimation and control problems are seldom mathematically tractable.

Frank Knight in his Risk, Uncertainty and Profit intuitively understood that innovators made above normal profits by solving difficult problems involving uncertainty, whereas conventional businesses made normal profits by solving problems involving risk. The distinction has been made consistent with Bayesian statistics in Norman and Shimer (1994) by defining uncertainty as unknown parameters in distributions that must be estimated to solve the optimization problem; in risk the parameters of the distributions are known. There is no need to try to make a distinction between objective and subjective probability, a distinction that the Bayesians have long since shown to be false. There is a small literature about the firm based on estimation and control, for example see Mazzola and McCardle (1996).

Since even simple estimation and control problems are generally not computable, innovators must use simple bounded-rational strategies to solve such problems. One passive learning strategy discussed in the literature is learning by doing, Alchian (1963), Arrow (1962), Yelle (1979). This type of learning is frequently attributed to workers learning to better perform their jobs. As Yelle points out, the focus of such studies was frequently military production with large production runs. This approach assumes that learning is a byproduct of production and not the result of an active learning strategy. But as Abernathy and Wayne (1974) point out in discussing Henry Ford's policy with the Model T and Model A, the learning curve can also be an active learning strategy of management.
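As a reminder of the standard formalization of the learning curve (a textbook relation, not specific to the studies cited above), an $r$-percent curve says that unit cost falls by a fixed fraction each time cumulative output doubles:

$$c_n = c_1\, n^{b}, \qquad b = \frac{\ln r}{\ln 2},$$

so that with an 80 percent curve ($r = 0.8$, $b \approx -0.32$) the hundredth unit costs roughly 23 percent of the first. Whether the firm rides this curve passively or deliberately invests to steepen it is precisely the distinction between passive and active learning drawn here.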

The manufacturing strategies of the Japanese after 1950 aimed not to reduce the cost of long production runs, but rather to reduce the cost of small production runs through rapid changeover. These strategies were definitely active learning, consciously expending resources to innovate. Certainly, Shigeo Shingo's SMED and Poka Yoke approaches to innovation are active learning strategies performed by engineers systematically observing work processes and making improvements. Upton and Kim (1996) present an analysis of two alternative active learning strategies used in two Korean shipyards.

Innovation where learning is integral to production raises numerous interesting questions. If innovators go too far in advance of current technology, they will be forced to spend all their effort learning and will not make much profit. This was the case for Eli Whitney in 1800 and for General Motors, GM, in the 1980s, Cheung (1997). GM went too far ahead of the state of the art in trying to automate its automobile production; Ford and Chrysler backed off and were more successful. Secondly, in most cases the innovator cannot capture through property rights all the value of an innovation that, if successful, will be imitated in other applications. To what extent should government fund innovation as a public good? In the case of interchangeable parts, the US government, in pursuing its military objectives, demonstrated that interchangeable parts were feasible. Today DARPA and other Federal agencies fund much innovation research in production.

We shall now consider how better approximations to the two multiproduct production problems discussed above created a new problem: scheduling. This is a wonderful example to illustrate Simon's (1957) concept of bounded rationality and the fact that much innovation is achieving a better bounded-rational solution.

3  Scheduling

The innovations leading to production for final demand in firms producing multiple products create a difficult scheduling problem: operating factories that shift from one product to another while satisfying customer demands, minimizing inventory, and at the same time fully utilizing the production resources. In its most general form, a resource-constrained scheduling problem asks the following question: given a set of tasks, a set of resources and constraints, and a measure of performance, what is the best way to assign the resources to the tasks so that performance is maximized? Operations researchers have devised a variety of formal models of industrial scheduling problems, varying in such factors as the types of machines, their operations, delays, information flows, and deterministic or stochastic elements, for example see Dorn and Froeschl (1993).

Planning and scheduling have profound importance for projects consisting of several tasks and the constraints associated with them. On a factory floor, determining which jobs to execute in what order on which machines, and which employees to assign to a certain job, can mean the difference between profit and loss. Even for relatively small projects, however, the number of possible courses of action quickly becomes so overwhelming that it is almost impossible to achieve an optimal solution. In general, scheduling problems are NP-hard, i.e. no polynomial-time algorithm is known that finds optimal solutions to these problems. Algorithms exist that solve some forms of the problem exactly, but they typically become intractable (i.e. take more than polynomial time) when additional constraints are added or the problem size grows. As a result, most research has focused either on simplifying the scheduling problem (mostly by making assumptions) to the point that it is solvable by some algorithm within reasonable time limits, or on devising efficient heuristics for finding acceptable (not necessarily optimal) solutions.

In this section, we briefly discuss the advances in scheduling and the resulting performance gains. The point to note is that a scheduling problem with a reasonable number of tasks is usually so complex that the most sophisticated scheduling heuristics and fastest machines do not give us optimal solutions; they only give us better solutions. A move from one scheduling paradigm to another usually involves huge investments of time and money and has a profound impact on the functioning of firms and of the economy as a whole.
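A minimal illustration of why exhaustive search fails even on one machine: the number of possible job orders is n!, so a brute-force search that is instantaneous for 5 jobs is hopeless for 20. The Python sketch below uses hypothetical job data and the standard library only.

from itertools import permutations

# Hypothetical jobs: (processing time, due date)
jobs = [(4, 6), (2, 4), (6, 14), (3, 5), (5, 12)]

def total_tardiness(order):
    t = tardy = 0
    for p, due in order:
        t += p                       # the job finishes at time t
        tardy += max(0, t - due)     # lateness beyond its due date
    return tardy

best = min(permutations(jobs), key=total_tardiness)
print("best order:", best, "total tardiness:", total_tardiness(best))

# 5 jobs -> 120 orders; 10 jobs -> 3.6 million; 20 jobs -> about 2.4 x 10^18.
# Adding machines, precedence constraints, or setups only makes this worse.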

3.1  Exact Solution Methods and Monte Carlo

Although the roots of the scheduling problem can be traced back to prehistoric times, active research in this field began with the creation of digital computing machines after 1950. Linear programming became the first formulation of scheduling problems to allow efficient computation, with Dantzig's invention of the simplex algorithm in 1947. Other important early operations research methodologies were Monte Carlo simulation techniques, stochastic optimization, queuing theory, integer programming (Gomory, 1958), dynamic programming, Bellman et al. (1982), and several combinatorial methods, notably implicit enumeration, Balas (1969), and branch-and-bound methods, e.g. Land and Doig (1960), Little et al. (1963), Barker and McMahon (1985).
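For problems whose structure permits it, the linear programming formulation remains the workhorse. As a small, hypothetical illustration (a sketch using the scipy library, not code from any of the works cited), the following allocates machine hours between two products to maximize profit subject to capacity constraints.

from scipy.optimize import linprog

# Hypothetical data: profit per unit of products A and B, and the hours each
# consumes on two machines with limited capacity.
profit = [-40.0, -30.0]             # linprog minimizes, so profits are negated
machine_hours = [[2.0, 1.0],        # machine 1 hours per unit of A, B
                 [1.0, 3.0]]        # machine 2 hours per unit of A, B
capacity = [100.0, 90.0]            # available hours on machines 1 and 2

res = linprog(c=profit, A_ub=machine_hours, b_ub=capacity,
              bounds=[(0, None), (0, None)], method="highs")
print("production plan:", res.x, "profit:", -res.fun)

The continuous relaxation solves in a fraction of a second; it is the integer, sequencing, and precedence features of real scheduling problems that push them outside this tractable class.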

The principles and methods of dynamic programming were first elaborated by Bellman in the 1950s. For combinatorial scheduling problems, dynamic programming algorithms have exponential computational complexity because, in order to calculate the optimal criterion value for any subset of size k, we have to know the criterion values for each subset of size k-1. Thus, for a set of n elements, we have to consider 2^n subsets. Branch and bound algorithms divide the problem into several subproblems and calculate a lower bound for each subproblem. This procedure usually generates a huge tree, and the computational complexity of a branch and bound algorithm is also exponential. Branch and bound methods are therefore limited to less than one hundred activities, or even fewer in multi-mode cases. Other enumerative methods also suffer from exponential computational complexity for reasonably large problems.
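The exponential blow-up is easy to see in code. The sketch below (our illustration with hypothetical data) is a textbook subset dynamic program for single-machine total tardiness: the optimal value for each subset of jobs is built from the values of its subsets with one fewer element, so the table has 2^n entries.

# Hypothetical jobs: (processing time, due date)
jobs = [(4, 6), (2, 4), (6, 14), (3, 5), (5, 12)]
n = len(jobs)

# dp[S] = minimum total tardiness of scheduling exactly the jobs in bitmask S
dp = [float("inf")] * (1 << n)
dp[0] = 0
for S in range(1, 1 << n):
    finish = sum(jobs[j][0] for j in range(n) if S >> j & 1)   # makespan of the set S
    for j in range(n):
        if S >> j & 1:                          # try job j as the last job of S
            cost = dp[S ^ (1 << j)] + max(0, finish - jobs[j][1])
            dp[S] = min(dp[S], cost)

print("optimal total tardiness:", dp[(1 << n) - 1])    # 2^n table entries were filled

With 5 jobs the table has 32 entries; with 30 jobs it would have over a billion, which is the practical meaning of exponential complexity.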

Unfortunately, most scheduling problems, especially the most relevant ones, belong to the class of NP-complete problems, which are considered intractable since nobody has shown that a polynomially bounded algorithm exists for them. Exact solution methods are thus of limited practical relevance for obtaining better performance.

3.2  Approximate Solution Methods

With the complexity theory developed in computer science, Edmonds (1965), Cook (1971) and Karp (1972), the focus of research shifted from exact but intractable methods to approximate but tractable solutions. Interest in heuristic reasoning within computer science was prompted by outstanding researchers like Herbert A. Simon starting in the 1960s. Heuristic search tries to enumerate combinatorial search spaces efficiently by ruling out the areas with low subjective probability of containing an optimal solution. This application of research in neuroscience and human symbol processing to computer science gave a strong boost to the field of artificial intelligence. Soon the techniques developed in artificial intelligence were being used to find better solutions for real world scheduling problems.

This class of methods comprises several related algorithms. Simulated annealing was proposed as a framework for the solution of combinatorial optimization problems by Kirkpatrick, Gelatt and Vecchi (1983) and independently by Cerny (1985). As the name suggests, the inspiration for this method comes from the physics of cooling solids. "Simulated annealing allows the algorithms to 'escape' from bad local optima by performing occasional cost-increasing changes," Lewis and Papadimitriou (1998). Simulated annealing gives good approximations of the optimum in many cases. However, the cost-increasing changes often result in a great loss of efficiency.
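The sketch below is a minimal simulated-annealing loop for the same kind of sequencing problem (hypothetical data; it is our illustration of the idea, not the algorithm of any paper cited here). A cost-increasing swap is accepted with a probability that shrinks as the temperature falls, which is exactly the escape mechanism described above.

import math
import random

jobs = [(4, 6), (2, 4), (6, 14), (3, 5), (5, 12), (7, 20), (1, 3)]

def total_tardiness(order):
    t = tardy = 0
    for p, due in order:
        t += p
        tardy += max(0, t - due)
    return tardy

def anneal(order, temp=10.0, cooling=0.995, steps=20000):
    current, best = list(order), list(order)
    for _ in range(steps):
        i, j = random.sample(range(len(current)), 2)
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]    # swap two jobs
        delta = total_tardiness(candidate) - total_tardiness(current)
        # Always accept improvements; accept a worsening move with prob exp(-delta/temp)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if total_tardiness(current) < total_tardiness(best):
                best = current[:]
        temp *= cooling                                            # cool the system down
    return best

print("tardiness found:", total_tardiness(anneal(jobs)))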

Tabu search, a discrete version of simulated annealing, is a general framework which was originally proposed by Glover and subsequently expanded in a series of papers. "One of the central ideas in this proposal is to guide deterministically the local search process out of local optima (in contrast with the non-deterministic approach of simulated annealing). This can be done using different criteria, which ensure that the loss incurred in the value of the objective function in such an 'escaping' step (a move) is not too important, or is somehow compensated for," Blazewicz, Ecker, Pesch, Schmidt and Weglarz (1996).
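For comparison, here is a minimal tabu-search loop on the same permutation representation (again our hypothetical sketch, not Glover's or any published implementation). Each step moves deterministically to the best neighboring swap, even when that worsens the objective, and a short tabu list of recently used swaps keeps the search from cycling straight back into the local optimum it just escaped.

from collections import deque

def tabu_search(order, evaluate, iterations=500, tenure=7):
    current, best = list(order), list(order)
    tabu = deque(maxlen=tenure)                  # recently swapped index pairs
    for _ in range(iterations):
        candidates = []
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                if (i, j) in tabu:
                    continue                     # forbidden (tabu) move
                neighbor = current[:]
                neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
                candidates.append((evaluate(neighbor), (i, j), neighbor))
        if not candidates:
            break
        cost, move, neighbor = min(candidates, key=lambda c: c[0])
        current = neighbor                       # move even if the cost went up
        tabu.append(move)
        if cost < evaluate(best):
            best = current[:]
    return best

# Usage with the job list and total_tardiness function from the previous sketch:
# best = tabu_search(jobs, total_tardiness)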

Genetic algorithms are inspired by the theory of evolution; they date back to the early work described in Rechenberg (1973). "They have been designed as general search strategies and optimization methods working on populations of feasible solutions," Blazewicz et al. (1996). Traditional genetic algorithms have often not been suitable for combinatorial optimization problems because of the difficulty of representing the solution. Several improvements have therefore been proposed; local search heuristics and genetic enumeration have compensated for this drawback.

Evolutionary techniques like genetic algorithms and neural nets, and other recent techniques like fuzzy logic, are mainly being used in rule-based reasoning and expert systems. Due to the early success of evolutionary algorithms, researchers strove for total automation. In the last few years, however, the focus has shifted to the more realistic scenario of decision support systems. Decision support systems use genetic algorithms and neural nets to help managers in decision making. Due to the massive growth in the size and diversity of firms, techniques like decision support systems are increasingly becoming necessities for large firms.

3.3  Example of Improvement: The job shop scheduling problem

We now wish to consider scheduling from the perspective of Simon's bounded rationality. Because scheduling is in general an NP-hard problem, managers aided by computers cannot obtain exact optimal solutions in polynomial time. Bounded-rational behavior is the use of approximate algorithms. Innovations occur in bounded-rational behavior with the development of better computer-aided algorithms that lead to better performance, and in some cases optimal performance. We wish to demonstrate this with an examination of progress in creating better algorithms for the job-shop scheduling problem.

In general, the work in a job-shop is made-to-order rather than made-to-stock and thus has crucial customer delivery dates associated with it. The main challenge of scheduling job-shops is to minimize the lateness of the jobs and maximize the throughput, both of which can be measured in different ways. Job-shops minimize inventory while attempting to meet the due dates within the known capacity constraints. Job-shop scheduling is the paradigm of choice for companies with customer customizable products.

This problem maps directly onto the classic problem, familiar from parallel processing, of scheduling a certain number of jobs of varying lengths on a given number of processors within a given time span. The problem is formalized below.

Statement of the problem:

Instance: a number $m \in \mathbb{Z}^+$ of processors, a set $J$ of jobs, each $j \in J$ consisting of an ordered collection of tasks $t_k[j]$, $1 \le k \le n_j$; for each such task $t$ a length $l(t) \in \mathbb{Z}_0^+$ and a processor $p(t) \in \{1, 2, \ldots, m\}$, where $p(t_k[j]) \ne p(t_{k+1}[j])$ for all $j \in J$ and $1 \le k < n_j$; and a deadline $D \in \mathbb{Z}^+$. The question is whether there is a schedule in which each task runs on its required processor, the tasks of each job run in order, and all jobs finish by the deadline $D$.
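A concrete rendering of this instance structure may help. The Python sketch below (a hypothetical three-job, three-machine instance of our own) stores each job as its ordered list of (machine, length) tasks and computes the makespan of a given dispatching order under the usual rule that a task starts only when both its machine and its job predecessor are free; comparing the makespan with D answers the decision question above.

# Each job is an ordered list of (machine, length) tasks; all data is hypothetical.
jobs = {
    "j1": [(0, 3), (1, 2), (2, 2)],
    "j2": [(0, 2), (2, 1), (1, 4)],
    "j3": [(1, 4), (2, 3)],
}

def makespan(dispatch_order):
    """dispatch_order names a job once per task, in the order tasks are started."""
    machine_free = {}                 # time at which each machine becomes free
    job_free = {}                     # time at which each job's previous task finishes
    next_task = {j: 0 for j in jobs}
    for j in dispatch_order:
        machine, length = jobs[j][next_task[j]]
        start = max(machine_free.get(machine, 0), job_free.get(j, 0))
        machine_free[machine] = job_free[j] = start + length
        next_task[j] += 1
    return max(job_free.values())

order = ["j1", "j2", "j3", "j2", "j1", "j3", "j2", "j1"]
D = 12
span = makespan(order)
print("makespan:", span, "meets deadline:", span <= D)

Finding the dispatching order that minimizes the makespan, rather than merely evaluating one, is the hard part: the number of feasible orders grows explosively with the number of jobs and machines.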

Many real world scheduling problems are related to the job-shop scheduling problem. Most of the scheduling techniques developed in the last forty years have been used, with varying results, for solving this problem. Tracing the history of job-shop scheduling problem would thus give us a general idea of the improvements in scheduling in the last four decades.

The origins of significant interest in the job-shop scheduling problem can be traced to two well known benchmark problems, one with 10 jobs and 10 machines and the other with 20 jobs and 5 machines. Both problems were introduced by Fisher and Thompson (1963). While the case of 20 jobs and 5 machines was solved in 10 years, the instance of 10 machines and 10 jobs took 25 years of extensive research to solve, and better solution methods for it are still being explored. Lageweg in 1984 found a schedule for the 10 by 10 problem that Carlier and Pinson (1989) later proved to be optimal, using the branch and bound method. Since then several other branch and bound algorithms have been applied to the problem. Carlier and Pinson (1991) showed that the problem can be solved to optimality in less than 2 minutes of CPU time on a small workstation.

As approximation methods became popular in the early 90s, researchers applied them to the 10 x 10 job shop problem. Efforts to apply simulated annealing, tabu search, parallel tabu search and genetic algorithms culminated in the excellent tabu search implementations of Nowicki and Smutnicki (1993) and Balas and Vazacopoulos (1995), which achieved much greater efficiency.

The job shop scheduling problem remains one of the most difficult combinatorial problems to date and continues to arouse new research interest. Exact solution methods perform well only for some specific instances of a given size and usually perform poorly for instances of a different size. Heuristics such as simulated annealing, tabu search and genetic algorithms are generally more robust under different problem structures and require only a reasonable amount of implementation work with relatively little insight into the combinatorial structure of the problem. Instances of the problem larger than 10 x 10, proposed among others by Adams, Balas and Zawack (1988), still remain open. There is thus continued interest in the problem as more efficient and robust solutions are explored.

The approximation algorithms have not only facilitated the adoption of new manufacturing paradigms by firms, but have acted as major catalysts in the conception and implementation of new paradigms. Advances in approximate scheduling methods have facilitated the gradual implementation of production for final demand.

Today markets are turbulent and dynamic. Tremendous advances in information technology, microprocessor technology and artificial intelligence (applied to scheduling and decision support) in the last decade or so have turned the vision of agile manufacturing into reality. It is increasingly possible for firms to achieve short product development cycle times and respond immediately to sudden market opportunities (Agility Forum, 1994). With time we are achieving better solutions, but we are still far from the optimum. It has taken more than 40 years and billions of dollars in research and development for firms to reach this stage, and there is still a long way to go; scheduling algorithms remain far from optimal in the general case.

Much innovation is creating better approximations to difficult problems. We use scheduling because the problem is clearly defined, but we strongly assert that the same holds for a large number of problems firms face, many of which are not clearly defined.

4  Organizational innovation

So far the paper has dealt with the technical aspects of production that can be incorporated into models and simulation programs. We saw in the last section that for intractable problems, for which reaching an optimal solution is not possible, the focus of innovation is better approximate solutions. This section deals with social problems that are even harder to understand and formulate, because the current state of economics and the other social sciences does not permit the simulation of social behavior to even one significant figure in real time.

The constant advance of technology, such as information technology, provides organizations with numerous opportunities for innovation in their organization and procedures. Organizational transformation may be explained in terms of the search for better suboptimal performance given advancing technology and a changing political economy. This part of the paper deals with the difficulties and complexity behind such searches.

The first subsection describes DuPont's difficult transformation from a functional to a divisional organization from 1915 to 1921. We include this example to show that major organizational innovations can take considerable time to implement. The second subsection discusses why major organizational innovations are more difficult and complex today. The last subsection takes up the possibility of self-reorganization through chaos theory as a means to better social innovation.

4.1  DuPont

Outcomes from a given structure are hard to predict and often have to be discovered through experimentation in real life. Optimal solutions may not be computable, and it may take years to reach even a sub-optimal solution. For example, it took DuPont about six years to move from a multi-functional structure to a multi-divisional structure.

When DuPont diversified early in the 20th century, the firm found that its functional organization could not cope with the diversification. Facing anti-trust laws because of its monopoly on the manufacture of military powder, DuPont began to diversify into new product lines. Diversification greatly increased the demands on the company's administrative offices. The development of plans and the appraisal of activities became harder as executives with experience primarily in explosives were making decisions about paints, dyes, chemicals and plastic products.

The problems that the diversification strategy introduced into the organizational structure were manifold. First, coordination became very complicated because different products called for different standards, procedures, and policies. Second, specific functional departments such as Purchasing or Sales were not equipped to handle the tasks of all products; the Marketing department faced the most serious problems because the marketing activities of different products were markedly different from one another. Third, the central office was even more overwhelmed than the departments because overall policy-making for the company still lay with executives who did not have expertise in all products.

The need for considerable restructuring was clearly apparent, but management lacked concrete ideas for how to make organizational innovations. Between 1916 and 1920, many proposals were written up and vetoed by the president while the company strove for better internal organization. During the last part of the decade, the proposals mainly recommended that product, rather than function, be the basis of organization. The major impediments to change were mainly social values and belief systems. Some were reluctant to abandon an organizational structure based on the "principle of specialization," a theory proven to have worked for centuries; some were hesitant to give up authority and decision-making power even in situations where they were not fit to exercise them. An undying tendency to maintain the status quo was a big impediment to implementing much needed changes.

Finally, in 1921 the company started to move toward a structure based on product divisions rather than functional departments. Products whose production and selling processes were compartmentalized from other products proved to do better than those that were not. After a further year of experimentation, the company finally settled on the new multidivisional structure in the face of major crises.

4.2  Difficulties Today

Today, innovation in the form of organizational restructuring is a much more complicated process than when DuPont transformed itself from a functional organization to a divisional organization in the early 1920s. One aspect of the difficulty is the more rapid advance of technology, which provides firms with a large number of opportunities to employ new technology to improve performance. At the same time, intense international competition and a rapidly changing international political economy require a more rapid response to innovation opportunities in order to maintain competitiveness.

One factor that makes innovation difficult is that fundamental changes in information technology create opportunities to fundamentally alter business processes. During the past decade, effort toward performance improvement in firms has frequently fallen under the term "business process reengineering," or BPR. Dutta and Manzoni (1999) define BPR as "the ongoing effort of organizing and radically redefining the key processes of a company with the ultimate goal of achieving and sustaining significant performance gains." One of the most essential parts of BPR is to identify the processes of a firm, which in a large firm number in the tens of thousands. Examples of broad-based processes are strategy development, manufacturing, supply chain management, order fulfillment, marketing, and human resource development. Processes involve various combinations of workers, software and hardware. Innovation in processes requires a careful assessment of the process and frequently a complete restructuring of the task with a new objective using a new combination of people, software and hardware. The rate of technological advance in information technology is now so great that opportunities for innovation involve not minor modifications to existing processes but complete restructuring of thousands of processes. One example is that the shift of sales to the internet requires a major realignment of business processes throughout the firm.

A second factor is the hierarchical structure of the firm. After World War II some US firms developed as many as 14 levels of management, a factor that was used to justify high salaries for top management. The function of middle managers in such organizations is to filter the information flow from the bottom of the corporation to the top. With the creation of corporate databases, executive assistants to the top managers could use those databases to create many of the reports formerly produced by middle managers. As Japanese firms never developed as many levels of management, the question of the best number of levels of management is an open one.

A third factor is how to harness the advances in communications to link workers and work groups together. With the development of video teleconferencing and data displays over the internet, groups at physically remote locations can work well together. This provides opportunities to consider fundamentally new types of organizations in which workers have primary groups at physical locations and secondary groups organized through communications channels. Since firms must respond quickly to changes in the conditions of the political economy, organizing groups through communications channels allows firms to quickly assemble the workers best able to deal with the latest problem. Here the complexity of the possibilities is obvious. If every worker is organized to communicate with every other worker, the number of links is a quadratic function of the number of workers, as the counting below makes explicit. The desirable number of secondary communication links must be greater than in a single hierarchy, but sparse relative to the total number of possible links.
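For concreteness (a standard counting fact, not a result from the organizational literature), with $n$ workers the number of possible pairwise communication links is

$$\binom{n}{2} = \frac{n(n-1)}{2},$$

roughly 5,000 links for 100 workers and about half a million for 1,000, whereas a single hierarchy uses only $n - 1$ of them. Any workable secondary communication structure must therefore lie far closer to the sparse end of that range.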

Innovation today is thus much more difficult than DuPont's historic transformation from a functional to a divisional organization. The DuPont innovation primarily affected only the top levels of management and did not affect the basic business processes. Today innovation is frequently software driven and implemented by armies of consultants who take a process-oriented approach to innovation. Unlike DuPont's transformation, current software-based innovation frequently affects a very large number of processes in an organization. Typically, large organizations with diversified production have tens of thousands of processes, so identifying them and recognizing the overlaps can require considering thousands of alternatives.

Since social alternatives cannot be accurately forecast, the consulting industry works by replicating across all firms in an industry a change that improves performance in one firm, and then extending the innovation with modifications to other industries. This greatly simplifies the problem of searching for the best alternative, but is definitely suboptimal.

4.3  Chaos

In the face of the increasing difficulty of reaching optimal solutions for organizational structures, a number of researchers are investigating the feasibility of applying complexity theory (within which chaos is a particular mode of behavior) to the practice of organizational management, Stacey (1996). The theory regards organizational structures and intra-organizational relationships as dynamic systems that defy prediction and are capable of changing over time. Before the emergence of complexity theory, the uncertainty of such systems was attributed to randomness, which researchers attempted to explain with probability; instability was feared and avoided. The theory rests on the belief that there is a pattern to the seemingly random movements and emphasizes the importance of expecting instability. It argues that organizations will organize themselves in accordance with external changes, and that organizations should be left without strictly defined individual responsibilities and inter-personal relationships so that, by mutual interaction, they will find solutions to tough problems.

5  Concluding Remarks

A major problem for the study of innovation using traditional substantive microeconomics is the assumption that tastes and technology are given. Since the 1870s, when the formulation of the current substantive microeconomic theory began, there has been a large increase in the rate of technological change. Since 1870 we have seen the rise of the research university, corporate research and development, public finance of research, incubators to promote startups, venture capital, and the emergence of a consulting industry to promote innovation. Given the current high rate at which new products are introduced into the marketplace and the rate at which new software, hardware, and automation equipment is made available to producers, the assumption of constant tastes and technology has become very dubious indeed.

We assert that much innovation comes about through firms and consumers constantly adapting to the flood of new technology that enters the marketplace each year. Moreover, with the creation of the institutional structure promoting discovery, invention, and innovation, this flood of new technology enters continually. Innovation is becoming increasingly important both for the general competitiveness of the firm and for the advance of incomes.

To properly study innovation:

1. The assumption of fixed technology must be dropped. The allocation between improving the production function and using the current production function needs to be extensively studied, which will require a very general estimation and control model.

2. Innovations in the form of better bounded-rational approximations of difficult problems need to be studied.

3. Economists need to examine economic procedures, an effort Simon (1976) has long recommended. Most economic innovation takes place at the procedural level, not the production function level.

References

1. Abernathy, W. and Wayne, K., 1974, Limits of the Learning Curve, Harvard Business Review, Vol 52, No. 5, pp 109-119

2. Adams, J., Balas, E. and Zawack, D., 1988, The shifting bottleneck procedure for job shop scheduling, Management Science, 34, pp 391-401

3. Agility Forum, 1994, http://www.agilityforum.org/

4. Alchian, A., 1950, Uncertainty, Evolution and Economic Theory, Journal of Political Economy, 57, pp 211-221

5. Alchian, A., 1963, Reliability of Progress Curves in Airframe Production, Econometrica, pp 679-693

6. Arrow, K., 1962, The Economic Implications of Learning by Doing, Review of Economic Studies 29, pp 155-173

7. Aspen Tech, 1999, http://www.aspentech.com/

8. Balas, E., 1969, Machine Sequencing via Disjunctive Graphs: An Implicit Enumeration Algorithm, Operations Research,.17 (6), pp 941-957

9. Balas, E. and Vazacopoulos, A., 1995, Guided local search with shifting bottle neck for job shop scheduling, Management Science Research Report #MSR R-609, Carnegie-Mellon University, Pittsburgh

10. Barker, J. and McMahon, G., 1985, Scheduling the General Job Shop, Management Science, 31 (5) pp 594-598

11. Bellman, R., Esogbue, A. and Nabeshima, I., 1982, Mathematical Aspects of Scheduling and Computations, (Pergamon Press: Oxford)

12. Blazewicz, J., Ecker, K., Pesch, E., Schmidt, G. and Weglarz, J., 1996, Scheduling Computer and Manufacturing Processes (Springer: Berlin)

13. Bourne, R, 1995, Invention in America, (Fulcrum Publishing: Golden)

14. Buckingham, E., 1920, Principles of Interchangeable Manufacturing (The Industrial Press: New York)

15. Carlier, J. and Pinson, E., 1989, An algorithm for solving the job-shop problem, Management Science, 35, pp 164-176

16. Carlier, J. and Pinson, E., 1994, Adjusting heads and tails for the job-shop problem, European Journal of Operations Research, 78, pp 146-161

17. Cerny, V., 1985, Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm, Journal of Optimization Theory and Applications, 45, pp 41-51

18. Chandler, Alfred, 1988, The Structure of American Industry in the Twentieth Century: A Historical Overview, in McCraw, Thomas (ed) The Essential Alfred Chandler: Essays toward a Historical Theory of Big Business (Harvard Business School Press: Cambridge)

19. Cheung, I., 1997, Automobile Manufacturing Automation: the Experience of GM and Ford, http://www.eco.utexas.edu/Homepages/Faculty/Norman/long/Projects.F97/GM/Edu.htm

20. Cook, S., 1971, The Complexity of Theorem Proving Procedures, Proceedings of the 3rd ACM Symposium on Theory of Computing, pp 151-158

21. DARPA, (1999), http://www.darpa.mil/

22. Dassault Systemes, 1999, http://www.catia.com/

23. Dorn, J. and Froeschl, K (ed), 1993, Scheduling of Production Processes (Ellis Horwood: New York)

24. Durfee, W., 1893, The History and Modern Development of the Art of Interchangeable Construction in Mechanism, Transactions of the American Society of Mechanical Engineers Vol XIV, pp1225-1257

25. Dutta, S. and Manzoni, J., 1999, Process Re-engineering, Organizational Change and Performance Improvement, (McGraw-Hill: New York)

26. Ealey, L., 1994, Quality by Design: Taguchi Methods and US Industry, 2nd ed (ASI Press: Burr Ridge, Ill)

27. Edmonds, J., 1965, Paths, Trees, and Flowers, Canadian Journal of Mathematics, 17, pp 449-467

28. Fisher, H. and Thompson, G., 1963, Probabilistic learning combinations of local job-shop scheduling rules, in Muth, J., Thompson, G. (eds), Industrial Scheduling (Prentice Hall: Englewood Cliffs, NJ)

29. Gomory, R., 1958, Outline of an Algorithm for Integer Solutions to Linear Programs, Bulletin of the American Mathematical Society, 64, pp 275-278

30. Karp, R., 1972, Reducibility among Combinatorial Problems, in Miller, R. and Thatcher, J. (eds) Complexity of Computer Computation (Plenum Press: New York), pp 85-104

31. Knight, Frank H., 1971 (original 1921), Risk, Uncertainty and Profit, (University of Chicago Press, Chicago).

32. Kirkpatrick, S., Gelatt, C. and Vecchi, M., 1983, Optimization by Simulated Annealing, Science, 220, pp 671-680

33. Land, A. and Doig, A., 1960, An Automatic Method of Solving Discrete Programming Problems, Econometrica, 28 (3), pp 497-520

34. Little, J. et al., 1963, An Algorithm for the Traveling Salesman Problem, Operations Research, 11 (6), pp 972-989

35. Lewis, R., and Papadimitriou, C., 1998, Elements of the Theory of Computation, Second Edition, (Prentice-Hall: Upper Saddle River, NJ)

36. Mas-Colell, A., Whinston, M., and Green, J., 1995, Microeconomic Theory (Oxford University Press: New York)

37. Mazzola, J. and McCardle, K., 1996, A Bayesian approach to managing learning-curve uncertainty, Management Science, 42 (5), pp 680-692

38. McWilliams, G., 1999, Whirlwind on the Web, BusinessWeek Apr 7, pp132-138

39. MEP, 1999, http://www.mep.nist.gov/index1.html

40. Norman, A., 1993, Informational Society: An Economic Theory of Discovery, Invention, and Innovation, (Kluwer: Boston)

41. Norman, A., 1999, Informational Society Notes, http://www.eco.utexas.edu/Homepages/Faculty/Norman/long/indexInfo.html

42. Norman, A and Shimer, D, 1994, Risk, uncertainty, and complexity, Journal of Economic Dynamics and Control 18, 231-249

43. Norman, A. (1994), Computability, Complexity and Economics, Computational Economics , 7, pp. 1-21.

44. Nowicki, E. and Smutnicki C.,1993, A fast taboo search algorithm for the job shop problem, Management Science

45. Rechenberg, I., 1973, Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (Problemata: Frommann-Holzboog)

46. Ross, P., 1998, Why shut down a factory for weeks when you can simulate the retooling on a computer?, Forbes, May 4,

47. Schumpeter, J., 1934, The Theory of Economic Development, (Harvard University Press, Cambridge)

48. Sematech, 1999, http://www.sematech.org/public/home.htm

49. Shingo, S., 1985, A Revolution in Manufacturing: The SMED System (Productivity Press Inc: Portland)

50. Shingo, S, 1986, Zero Quality Control: Source Inspection and the Poka-Yoke System (Productivity Press Inc: Portland)

51. Simon, H, 1957, Models of Man, (John Wiley & Sons, Inc.: New York)

52. Simon, H., 1976, From substantive to procedural rationality, S Latsis (ed), Method and Appraisal in Economics, (Cambridge University Press, Cambridge)

53. Smith,M., 1981, Eli Whitney and the American System of Manufacturing, in Pursell, C. (ed), Technology in America, (The MIT Press, Cambridge), pp 45-61

54. Stacey, R., 1996, Complexity and Creativity in Organizations, (Berrett-Koehler: San Francisco)

55. STR Group, 1999, Incorporation of Factory Simulation in the Total TCAD Framework, http://www.ee.ed.ac.uk/STR/research } projects/factory/factory.html

56. Tecnomatix, 1999, http://www.tecnomatix.com

57. Traub, J. F., Wasilkowski, G. W. and Woźniakowski, H., 1988, Information Based Complexity, (Academic Press, Inc.: Boston)

58. Upton, D and Kim B, 1996, An Empirical Study of Alternative Methods of Learning and Process Improvement in Manufacturing, Harvard Business School Working Paper 96-006 Rev 12/96 at http://www.people.hbs.edu/dupton/papers

59. Yelle, L., 1979, The Learning Curve: Historical Review and Comprehensive Survey, Decision Sciences 10, pp 302-328

 

A  Estimation and Control Example

The issue in estimation and control is how far from best current production practice a firm should experiment in order to improve future performance. For example, stopping a production process in order to experiment with a machine is expensive in terms of foregone production. Let us first demonstrate the difficulties in estimation and control when the unknown factor is a single parameter. Afterwards we will discuss the more general problem.

In order to demonstrate just how simple a noncomputable estimation and control problem can be, we consider the problem presented in Norman and Shimer (1994), which employs the information-based complexity model of Traub, Wasilkowski and Woźniakowski (1988). A monopolist has a linear production process, faces a linear inverse demand function, and has a profit function for t = 1, 2, ..., T:

$$p_t = a - d\, q_t,$$
$$\pi_t = p_t q_t - c(q_t), \qquad (8)$$
$$q_t = b x_t + z_t,$$

where $a$ and $d$ are known, $q_t$ is the $t$th observation of net output, $x_t$ is the $t$th level of the production process, $b$ is the unknown scalar parameter, and $z_t$ is the $t$th unobserved disturbance term. The $z_t$ are iid normal with mean zero and known variance one. Since the complexity results are invariant to defining the cost function as a zero, linear, or quadratic function, the cost function is defined as $c(q_t) = 0$ to simplify the notation.

Given a normal prior on $b$ at time $t = 1$, the prior information on $b$ at time $t$ is a normal distribution $N(m_t, h_t)$, where $m_t$ is the mean, updated by $m_t = (m_{t-1} h_{t-1} + q_{t-1} x_{t-1})/h_t$, and $h_t$ is the precision, updated by $h_t = h_{t-1} + x_{t-1}^2$. For this paper let us consider two cases:

1. The agent knows $b$ precisely. He or she has either been given precise knowledge of $b$ or has observed (1) a countable number of times so that his or her prior on $b$ has asymptotically converged to $N(b, \infty)$.

2. The agent's prior information on $b$ is represented by $N(m_1, h_1)$, where $h_1$ has a very small positive value.
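A short numerical sketch (ours, with hypothetical parameter values) of the updating rule above: each period the firm chooses a control using its current estimate of b, observes noisy output, and tightens its normal posterior on b.

import random

a, d, b_true = 10.0, 0.5, 2.0        # hypothetical demand and technology parameters
m, h = 1.0, 0.1                      # prior mean and precision on the unknown b

for t in range(1, 11):
    x = a / (2.0 * d * m)            # certainty-equivalence control using the current mean
    q = b_true * x + random.gauss(0.0, 1.0)    # observed output with unit-variance noise
    h_new = h + x * x                # precision update
    m = (m * h + q * x) / h_new      # mean update
    h = h_new
    print(f"t={t:2d}  x={x:6.2f}  posterior mean={m:6.3f}  precision={h:9.1f}")

The sketch uses the myopic certainty-equivalence rule rather than the (uncomputable) optimal policy; the gap between the two is exactly what the complexity argument below is about.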

The monopolist is interested in maximizing his expected discounted profit over a finite time horizon:

$$J_T = \sup_{x^T}\, E\left\{ \sum_{t=1}^{T} \tau^{\,t-1}\, p_t(x_t)\, q_t(x_t) \;\middle|\; q^{t-1}, x^{t-1} \right\}, \qquad (9)$$

where $\tau$ is the discount factor, $q^{t-1}$ is $(q_1, q_2, \ldots, q_{t-1})$ and $x^{t-1}$ is $(x_1, x_2, \ldots, x_{t-1})$. $q^{t-1}$ and $x^{t-1}$ represent the fact that the decision maker anticipates complete information that is observed exactly and without delay.

First consider the optimization problem where $b$ is a known parameter. The optimal $x_t$ can be exactly determined as a function of the parameters of $f \in F$, without recourse to the information operator, as

$$x_t^{*} = \frac{a}{2db}. \qquad (10)$$

The $(\epsilon = 0)$-computational complexity of this problem is $T^0$, polynomial of degree zero, because the control, which can be computed in 3 operations, needs to be computed only once for the entire time horizon.

Now let us illustrate the computational difficulty with case 2, the simplest nontrivial example having a time horizon of only two periods, T = 2. The value function in the first period is

$$J_1(q_1) = \frac{a^2 \left[ (m_1 h_1 + q_1 x_1)/(h_1 + x_1^2) \right]^2}{4d\left( \left[ (m_1 h_1 + q_1 x_1)/(h_1 + x_1^2) \right]^2 + (h_1 + x_1^2)^{-1} \right)} - d. \qquad (11)$$

The expectation of $J_1(q_1)$ has the form

$$E\left[ \frac{Q_1(q_1)}{Q_2(q_1)} \right] - d, \qquad (12)$$

where $Q_1(q_1)$ and $Q_2(q_1)$ are quadratic forms in the normal variable $q_1$. This expectation cannot be carried out explicitly to give an analytic closed expression. This implies that the $(\epsilon = 0)$-complexity of this problem with an unknown parameter is transfinite.
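Because the expectation in (12) has no closed form, any practical policy computation must approximate it. Here is a minimal Monte Carlo sketch (ours, with hypothetical parameter values) for the two-period problem: for a trial first-period control x1, it averages the period-1 value J1(q1) over simulated draws of q1.

import random

a, d = 10.0, 0.5                     # hypothetical demand parameters
m1, h1 = 1.0, 0.1                    # prior mean and precision on b

def J1(q1, x1):
    """Optimal expected second-period profit after observing q1 (equation 11)."""
    h2 = h1 + x1 * x1
    m2 = (m1 * h1 + q1 * x1) / h2
    return (a * a * m2 * m2) / (4.0 * d * (m2 * m2 + 1.0 / h2)) - d

def expected_J1(x1, draws=100000):
    """Monte Carlo estimate of E[J1(q1)], with q1 = b*x1 + z, b ~ N(m1, 1/h1), z ~ N(0, 1)."""
    total = 0.0
    for _ in range(draws):
        b = random.gauss(m1, (1.0 / h1) ** 0.5)
        q1 = b * x1 + random.gauss(0.0, 1.0)
        total += J1(q1, x1)
    return total / draws

for x1 in (2.0, 5.0, 10.0):
    print(f"x1 = {x1:5.1f}   approximate E[J1] = {expected_J1(x1):8.3f}")

Adding the first-period profit term and searching over x1 would complete a numerical approximation of the two-period problem; such approximation, rather than exact solution, is what bounded-rational innovators actually do.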

Now let us consider the interchangeable parts innovation from the perspective of estimation and control. The problem with interchangeable metal parts was developing accurate, inexpensive metal working machinery. How much effort a firm should expend on improving the quality of its machinery is a difficult question: Eli Terry succeeded in making clocks with metal parts, but Eli Whitney did not. The mathematical formulation of such a problem will require a more general framework than estimating a single unknown parameter. The problem might be called a search and control problem because firms are constantly creating and searching for new technological possibilities to improve the production function while producing with current knowledge. This is a fundamental problem of innovation.

