Next: Communication Up: Informational Society Notes Previous: Microelectronics

Hardware, Software and Impact

Objective: Discuss the economic incentives driving the evolution of hardware and software and relate this progress to Moore's law.

Hardware

Let us start with two basic definitions which may be obvious to most of you. Hardware refers to the physical computer, communication or control equipment. Software refers to the operating system and programs which run on the physical equipment.

Background

Leibniz conceptualized the computer in the 17th century. In the 19th century, Babbage made advances in mechanical computers. By World War II, warships carried mechanical computers in the form of fire control systems. The British (with Turing playing a key role) built an early digital computer and used it to break German codes in World War II. After World War II, the Americans built a digital computer using vacuum tubes for military ballistic studies. Like many post-WWII technologies, computers were originally created for military purposes; however, the real advances did not materialize until computers were applied to business problems, creating a large demand which promoted advances through economic competition.

Advances in computers are made possible by advances in microelectronics. This is because (1) mechanical computers are far too large and too inaccurate, and (2) vacuum tube computers consume a great deal of power and break down frequently. What made the computer a commercial success was the invention of an inexpensive, reliable transistor which required very little power. At first, transistors in computers were individually wired components. Technological advance led to boards with individual components, and finally to boards with integrated circuits. The advance in microelectronics has made the digital computer dominant over other types of computers because of its lower cost and higher performance.

Von Neumann Digital Computer

Most computers today are digital computers based on the Von Neumann design principles: (1) a single central processor; (2) a single path between the central processor and memory; (3) the program is stored in memory; and (4) the central processor fetches, decodes, and executes the stored instructions of the program sequentially. Almost all personal computers, workstations, minicomputers, and mainframes are currently Von Neumann design computers.

The various components of a computer are:

a. CPU: The central processing unit is the brains of a computer: the circuits which decode and execute instructions. Because of Moore's law, the number of integrated circuits necessary to create a CPU decreased until, in 1975, the CPU for a personal computer could be placed on a single integrated circuit called a microprocessor. Since that time, each generation of microprocessors has been more powerful than the last. Two types of microprocessors are CISC and RISC. Simply put, the instructions for the first type are much more complex than for the latter. Examples of CISC are the Intel 80x86 and Motorola 680x0 series microprocessors; thus most personal computers are CISC. RISC chips are more powerful and are found in workstations such as those from Sun. However, Intel is incorporating most of the features of RISC chips in the next generations of its CISC chips. Besides a single CPU, a Von Neumann design might have several coprocessors, that is, subordinate processors for arithmetic and fast video displays.

b. Memory Pyramid: As CPUs have become faster and more powerful, a fundamental bottleneck in computer design has become the flow of information back and forth between memory and the CPU. As the computing of video becomes more important, the need to move large amounts of data quickly from storage to the CPU grows accordingly.

Currently, there are many types of memory devices with different costs, access times and storage capacities. Generally, the faster the access time, the more expensive the memory. Consequently, good design is based on a memory pyramid: small amounts of fast, expensive memory at the top, backed by increasing amounts of slower, less costly memory below.

(1) Cache memory: This generally is memory placed on the CPU or attached to it with a special fast bus. It is the fastest and most expensive memory, so only a little is used.

(2) RAM (DRAM and SRAM) and ROM: These are various types of memory chips directly accessible to the computer. The two most important types of random access memory (RAM) are dynamic (DRAM) and static (SRAM). The advantage of static RAM is that it is faster and does not require the constant refreshing that dynamic RAM does; both, however, lose their contents when the computer is turned off. Read only memory (ROM) is a special type of memory for reading but not writing, and it retains its contents without power. This type of memory is useful for storing frequently used software to which the user needs rapid access but which the user has no need to modify. Memory chips are faster but more expensive than the various types of magnetic and laser disk memories.

As the cost of a bit of integrated circuit storage has been decreasing by about 30-35% each year, it is not surprising that each new generation of computer has much greater RAM storage than the previous generation. For example, the Apple II had 48K and was expandable to 64K; the Mac started at 128K and now is rarely purchased with less than 4M of RAM. (K stands for roughly one thousand bytes and M for roughly one million.)
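The compounding effect of this decline is easy to underestimate. A minimal sketch, using the 30% annual rate from the text and an arbitrary illustrative starting cost:

```python
# Sketch: cumulative effect of a 30% annual decline in cost per bit.
# The 30% rate comes from the text; the starting cost of 1.0 is an
# arbitrary illustrative figure, not a historical price.

def cost_after(years, annual_decline=0.30, start_cost=1.0):
    """Cost per bit after `years` of compounded decline."""
    return start_cost * (1 - annual_decline) ** years

# After ten years, cost per bit falls to about 2.8% of its start:
print(round(cost_after(10), 4))  # about 0.028
```

A 30% yearly decline thus cuts cost per bit by a factor of roughly 35 in a decade, which is why each computer generation can afford so much more RAM.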

(3) Magnetic disk: These are magnetic disks of various types, such as floppies (5 1/4" and 3 1/2") and hard disks. These devices are cheaper and can store much greater amounts of data than RAM, but have slower access times, where access time is the amount of time it takes the computer to read information from the magnetic disk into RAM. Over time, technological advances make it possible to store increasing amounts of information per square inch of disk surface. For example, firms are now selling 3 1/2" floppy disk drives with a capacity of 100M per diskette. Generally a computer has much more hard disk storage than integrated circuit memory, and over time computers ship with increasing amounts of magnetic disk space.

(4) Bubble memory: This is an expensive, magnetic type of memory device which has not yet achieved its potential. Bubble memory has the potential to store the information contained in the Library of Congress on a device the size of a 1/4 inch cube. This type of memory has some applications in portable computers.

(5) Laser disk: A newer type of memory device is the laser disk, which can store very large amounts of data and is beginning to be used in library applications. Laser disks are cheaper, but much slower, than hard disks. CD-ROM refers to read-only laser disks.

(6) Holographic memory: This is a new type of memory based on holographic patterns which is just now entering the market. The commercial potential for this type of memory device is very large because it is fast and can store a prodigious amount of information. Thus, this type of memory will be very useful in multimedia computers, because the processing power of microprocessors has advanced much faster than the ability to move the large amounts of data required for dynamic images from memory to the CPU.

c. Input/Output Devices

Keyboard, mouse and video screen

Printers: dot matrix impact, inkjet, laser

Modem to phone

Currently a great R&D effort is being made on pen input devices which can read handwriting; this is a problem in pattern recognition. Voice input devices are also now becoming a part of PCs; again, this is a difficult problem in pattern recognition.

d. Bus: The bus is the electric circuit connecting the components of the computer together. The more powerful the computer the bigger the bus, where bus size refers to the number of bits which can be communicated at one time. The trend in personal computers has moved from 8 to 16 to 32 and is moving to 64 bits communicated at one time. As we move towards multimedia, the need to move large blocks of data very quickly will increase.
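The arithmetic behind bus width and data movement can be sketched as follows; the width and transfer rate used are illustrative figures, not actual product specifications:

```python
# Sketch: peak bus throughput as width (bits per transfer) times the
# number of transfers per second. Figures below are illustrative only.

def peak_throughput_bytes(bus_width_bits, transfers_per_second):
    """Peak bytes per second a bus can move."""
    return (bus_width_bits // 8) * transfers_per_second

# A 32-bit bus at 33 million transfers/second moves 132 million bytes/s:
print(peak_throughput_bytes(32, 33_000_000))  # 132000000
```

Doubling the width from 32 to 64 bits doubles the peak throughput at the same transfer rate, which is why multimedia pushes bus widths upward.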

e. Clock: The clock synchronizes the components of a computer to work together. A faster clock enables the computer to execute more instructions a second. Since the 1970s, clocks on personal computers have increased from 1 to over 100 million cycles a second. The faster the clock the more heat is generated and the more steps must be taken to keep the electronic devices cool.

Market Forces-Hardware

The original computer developed by UNIVAC was funded by the military to solve problems such as the trajectories of shells. Sperry thought the total demand was only about four machines, for trajectory studies, and so did not push the marketing of computers. IBM saw the business possibilities of computers and developed its reputation not so much by product innovation as by service and support. The computer market developed in the Fortune 500 companies and has progressively moved to smaller and smaller companies. Currently, the computer is entering the smallest of businesses, and is even entering the home as a mass market item. The economics of the expansion are simple: as the market expands, manufacturing costs fall, which makes the computer a useful device to a larger and larger market and promotes further software development, which in turn fuels the expansion by providing more application software to run on cheaper machines.

Today there is a vast array of different sized computers from small personal computers able to process 10 to 100 million or so instructions per second to giant supercomputers which can process tens of billions of instructions per second. A heuristic hierarchy might be personal computers, workstations, minicomputers, mainframes and supercomputers. In addition, special purpose computers act as control devices for numerous industrial activities such as chemical plants and communication exchanges. In new cars, a microprocessor controls the combustion process.

Because of Moore's Law concerning the increase in the number of electronic components on a chip, each new generation of computer is much more powerful than the last. When a new computer is designed it is designed with an existing microprocessor. After six months to a year, the computer comes to market. To sell the computer the manufacturer makes sure that the computer is backwards compatible which means that it can run all the previous software for previous company machines. For example, 80486 personal computers can run all 80386 software, and likewise the Pentium chip was designed to run both 80486 and 80386 software. After several years the software industry catches up and creates software which explicitly uses the power of the new machine. For example, the new operating systems for the PC clone world are just now taking advantage of the power of the 80386 chip. In the meantime new machines are developed on the next generation of microprocessor. Again there will be a lag of several years before software is developed which fully exploits the capabilities of the new machine.

Advances in computing involve considerably more than ever more powerful Von Neumann design computers. An important development is the parallel processor, a computer with many CPUs. While parallel processors are potentially much faster than Von Neumann designs, in that all processors can work on the program simultaneously, it is a very difficult programming problem to coordinate all the processors to achieve their potential performance. It should be noted that some programs are inherently sequential and cannot take advantage of a parallel processor. There are numerous architectures for parallel processors, each best suited for a particular type of job. In my opinion, the type which is likely to gain market share is the computer that uses thousands of personal-computer type microprocessors. Designs vary in whether each processor shares memory with other processors or has its own memory. In designs where each processor has its own memory, performance is increased by connecting all the microprocessors into a network. One successful design, where the number of processors is 2 to the nth power, is the n-hypercube, in which each processor is connected to n other processors. This means that a message between any two processors must travel through no more than n links in the network.
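The hypercube scheme can be sketched in a few lines: label each of the 2 to the nth processors with an n-bit number, and connect two processors when their labels differ in exactly one bit. The number of hops a message needs then equals the number of differing bits, which is at most n:

```python
# Sketch of an n-hypercube interconnect: processors are n-bit labels;
# neighbors differ in exactly one bit.

def neighbors(node, n):
    """The n processors directly connected to `node` in an n-hypercube."""
    return [node ^ (1 << bit) for bit in range(n)]

def hops(a, b):
    """Minimum message hops = number of differing bits (Hamming distance)."""
    return bin(a ^ b).count("1")

# In a 3-hypercube (8 processors), processor 0 connects to 1, 2 and 4,
# and no message between any pair needs more than 3 hops:
print(neighbors(0, 3))     # [1, 2, 4]
print(hops(0b000, 0b111))  # 3
```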

Another computing concept is the neural network, which approximates the operation of biological neural networks (a very large number of links between the processing units). Neural networks excel in pattern recognition, a task for which Von Neumann computers are not very efficient. Instead of being programmed, neural networks are trained by adjusting internal weights to match their output to specified targets. Neural networks can be software which runs on a Von Neumann computer or specially designed hardware.
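The idea of training by adjusting weights can be illustrated with a single artificial neuron learning the logical AND function. This toy perceptron is a sketch of the principle, not a realistic network:

```python
# Minimal sketch of "training by adjusting internal weights": a single
# artificial neuron learns logical AND via the perceptron rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out          # nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

No rules were programmed; the correct behavior emerged from repeated weight adjustments against the target outputs.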

Originally corporations had standalone computers which were fed input from cards and magnetic tapes. Early evolution was towards more powerful mainframes. The next step was systems of terminals connected to the corporate mainframe. Then came the explosion of personal computers. At first personal computers were standalone units on employees' desks. Because these PCs were not linked to the corporate mainframe, employees had a difficult time obtaining corporate data for their work. This difficulty created incentives to link these personal computers into networks.

A rapidly growing type of computer network is the client-server network: a network of clients (PCs, Macs, or workstations used by employees) connected to a server which furnishes the clients with such things as huge disk drives, databases, or connections to other networks. Servers can be mainframes, minicomputers, workstations, or powerful PCs, and more than one server can supply services to the clients. In creating a client-server network, it is frequently efficient to replace an expensive corporate mainframe with a much less costly, powerful workstation; this is known as downsizing.

A client-server network has many advantages. If a client machine fails, the network remains operational. Such a network has tremendous computing power at low cost because powerful workstations are as powerful as older mainframes at one-tenth the cost. You can pick and choose hardware, software, and services from various vendors because these networks are open systems. Such systems can easily be expanded or modified to suit individual users and departments. The difficulties of such systems are that they are hard to maintain, lack support tools, and require retraining the corporate programming staff.

Hardware: Surf the Net

There are a large number of computer hardware company sites which you might wish to visit. The list covers an example of each size and each type of computer.

For a list of computer companies, courtesy of Yahoo, click here

Software

Our interest in software is not that of a programmer, but rather that of an economist. What are the market forces operating on the evolution of software? We are concerned with the increasing capability of software as it moves from number crunching to multimedia applications on networks. We are also interested in the increased efficiency of programmers and how market forces tend to make software more `user friendly.'

Market Forces: Software

Software consists of the operating systems, which control the operations of computers, and the programs which run under an operating system on a computer. Let us consider the economic forces operating on the evolution of software. From an economic perspective, most software buyers compare the costs of software with the services it will perform, whether computer games at home or work tasks in the home or office.

Consequently, in order to compete, software companies constantly innovate to reduce the cost of creating software. Originally, programs for a computer had to be written in machine code, that is, binary numbers for each operation. Because humans do not think in terms of strings of binary numbers, the development of software was a slow, tedious affair. The first innovation in programming was assembly language, which substituted a short mnemonic code for the corresponding binary string, such as ADD for the binary string representing the add operation. In the 50s computer scientists invented FORTRAN, which allowed engineers and scientists to write programs as equations, and COBOL, which allowed business programmers to write programs in terms of business operations.
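The assembly-language idea can be sketched as a table of substitutions; the opcodes below are invented for illustration and do not correspond to any real instruction set:

```python
# Toy illustration of assembly language: substitute a short mnemonic
# for each binary opcode. Opcode values here are invented, not taken
# from any real machine.

MNEMONICS = {"ADD": "0001", "SUB": "0010", "LDA": "0011", "STA": "0100"}

def assemble(program):
    """Translate a list of mnemonics into machine-code bit strings."""
    return [MNEMONICS[op] for op in program]

print(assemble(["LDA", "ADD", "STA"]))  # ['0011', '0001', '0100']
```

The programmer writes LDA and ADD; the assembler does the tedious translation into binary that people once did by hand.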

Since that time computer scientists have developed thousands of languages. The applications programmer generates statements in the language most suited for the application, and a translator (an assembler, interpreter, or compiler) transforms the statements of the language into machine language for execution. One trend in languages is specialization, such as Lisp and Prolog for artificial intelligence. A second trend is to incorporate new concepts, such as structured programming in Pascal. Another trend is to incorporate more powerful statements in new languages: for example, being able to perform a matrix operation in a single statement rather than writing a routine to process each matrix element. Finally, a constant effort is made to improve the efficiency of the machine code created by translators.
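The matrix example can be made concrete: the same matrix addition written as an explicit element-by-element routine and as a single statement:

```python
# The same matrix addition two ways: an explicit element-by-element
# routine, and a single (one-statement) expression.

def add_loop(A, B):
    C = [[0] * len(A[0]) for _ in A]
    for i in range(len(A)):
        for j in range(len(A[0])):
            C[i][j] = A[i][j] + B[i][j]
    return C

def add_one_statement(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(add_loop(A, B) == add_one_statement(A, B))  # True
```

The more powerful statement says what to compute and leaves the element-by-element bookkeeping to the language.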

Because it is much easier to automate the production of hardware than of software, software development has become the bottleneck in the expansion of computation. One method of making programmers more efficient is to create libraries of subroutines. Rather than starting from scratch with each program, the programmer can write code to employ the appropriate subroutines. Because there is a very large investment in FORTRAN and COBOL libraries and programs, these languages have not been totally replaced by newer languages; rather, new features are constantly added to these older languages. A more advanced development is the creation of toolkits for programmers, which write standard software for routine operations, for example, sending text to the printer.

The current rage is the move towards Object Oriented Programming, OOP. OOP can be considered an innovation on the idea of writing code using libraries of subroutines. The new wrinkle is to expand the concept of a subroutine to include not only code but also data. The data-code modules in OOP have three basic properties: encapsulation, inheritance and polymorphism. Encapsulation means that each module is a self-contained entity. Inheritance means that if you create a module A from a module B, module A inherits all the code which runs on module B. Polymorphism means that general code can be applied to different objects, such as a draw command which can draw both a square and a circle simply from their definitions. A current example of an OOP language is C++. Currently OOP is primarily a concept which will not bear full fruit until standards are created and agreed upon.
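The three properties can be sketched in a few lines, echoing the draw example above; Python stands in for C++ here:

```python
# Sketch of the three OOP properties using the square/circle example.

class Shape:                          # encapsulation: data and code together
    def __init__(self, size):
        self.size = size

class Square(Shape):                  # inheritance: Square gets Shape's code
    def draw(self):
        return "square of side %s" % self.size

class Circle(Shape):
    def draw(self):
        return "circle of radius %s" % self.size

def draw_all(shapes):                 # polymorphism: one routine, any shape
    return [s.draw() for s in shapes]

print(draw_all([Square(2), Circle(1)]))
# ['square of side 2', 'circle of radius 1']
```

The general routine draw_all never needs to know which shape it is handling; each object supplies its own drawing code.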

From the perspective of economics, the concept of OOP is an example of the specialization of labor. OOP improves efficiency because highly skilled, creative programmers will create libraries of OOP modules. Users with some programming skills will then use these modules to easily create their application programs.

Another very important software cost is the cost of learning how to use an application program; for example, how long do you have to send a secretary to school to learn to use a new wordprocessor effectively? As computer memories grow and computers become faster, part of this increased memory and speed is used to develop operating systems and application programs which are much easier for the final user to learn. While in large computer systems the trend has been to develop interactive operating systems so that many users can interact with the computer at the same time, the trend in personal computers has been to develop visual, icon-based, mouse-driven operating systems. The great advantage of an icon-based operating system is that it is intuitive to man, a visual animal. Moreover, Apple has insisted that all software developers use the same desktop format in presenting programs. While such a strategy meant that a considerable portion of Macintosh resources were devoted to running the mouse-icon interface, Apple was successful with the Macintosh because the user does not have to spend hours over manuals to accomplish a simple task. With Windows, the PC clone world is imitating Apple; as I understand it, the next version of Windows should be as easy to use as a Mac.

In a similar vein, software developers constantly try to reduce users' labor costs in using application software. One example is the trend towards integrated software. At first, it was very time consuming and labor intensive to transfer information from one type of software program to another: the user had to print out the information from one program and input it to another. Today, in packages called office suites, word processors are integrated with such programs as spreadsheets, drawing programs, and file programs to automatically transfer information from one type of program to another. The frontier in integration is creating software which facilitates group interaction in networks of computers; an example here is Lotus Notes. Group integration software improves the productivity of work groups in their joint efforts. Thus, integration simultaneously makes software more powerful and easier to use.

Software developers have powerful incentives to use the ever increasing power of computers to constantly create new software to perform services for users. One aspect of creating ever more powerful software is to continually add new features to software packages such as word processors. Another is to specialize software into niche markets, such as specialized CAD programs for each industry. In addition, software programs are created for new human tasks. Initially computers were number crunchers for science and accounting. Next, software was created for text processing. Then, as computers became more powerful, increasing amounts of software were created for graphics. Because man is a visual animal, the trend will be towards ever more powerful multimedia software.

Personal computers are just now acquiring sufficient power to process dynamic visual images. One example here is the creation of virtual reality. Virtual reality might be described as a 3-D animated world created on a computer screen. The viewer wears a special glove and helmet with goggles which enables him or her to interact with the 3-D world. Virtual reality is great for computer games and has numerous business applications. For example, engineers can determine if parts fit together in virtual space. As the number of electronic components on an integrated circuit continues to increase software will be created to edit videos on personal computers.

Computers will never be common home items until they are much easier to use. The long term vision is to have computers which are controlled by English language operating systems and programs. This will require major advances in voice pattern recognition and in creating English type computer languages. This, in turn, may require the integration of neural network computers for pattern recognition with the current type digital computers.

Quasi-intelligent Software

For software to provide an economic service, the software program need not contain any artificial intelligence whatsoever. For example, the software for an ATM enables the customer to select the desired service from a series of menus by pushing buttons. In many information services, computer programs based on menus could replace professionals, for example, travel agents, realtors, and those who provide simple legal services such as routine wills or divorces. Consider travel agents, who collect commissions for using the airlines' software reservation systems to obtain tickets for their customers. The airlines are moving to simplify the use of their software for final customers, thus avoiding having to pay agents' commissions. These reservation systems are available through information utilities. Over time, the amount of service a computer can provide will increase. One impediment will be the monopoly power of the practitioners, who will try to have the new software services declared illegal. The student should know that a legal monopoly element is prevalent in many services because the practitioner must be licensed by the government, and the professional organization for the practitioners usually controls the licensing standards. For example, lawyers are not exactly happy about the prospect of legal software controlled by the user.
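A menu-based program of this sort needs no intelligence at all, just a mapping from the button a customer pushes to a service; the service names below are illustrative:

```python
# Sketch of menu-driven service software: no intelligence, just a map
# from a menu choice to an action. Service names are illustrative.

SERVICES = {
    1: "withdraw cash",
    2: "deposit funds",
    3: "check balance",
}

def select(choice):
    return SERVICES.get(choice, "invalid selection, please try again")

print(select(3))   # check balance
print(select(9))   # invalid selection, please try again
```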

Expert or Knowledge-Based Systems: Obviously, the capability of software to provide economic services is enhanced by the incorporation of artificial intelligence. Let us first consider the advance of expert systems, which AI professionals prefer to call knowledge-based systems. These expert services are in many cases specialized information services which provide opinions or answers. The basis of an expert system is the knowledge base, which is constructed by a knowledge engineer consulting with the expert. A rule of thumb is that if a problem takes less than twenty minutes for the expert to solve, it is not worth the effort, and if it takes over two hours, it is too complicated. The knowledge engineer attempts to reduce the expert's problem solving approach to a list of conditional (if) statements and rules. The expert program provides a search procedure to search through the knowledge base in order to solve a problem. A particular problem is solved by entering the facts of the case. The knowledge incorporated in most expert programs is empirical, not theoretical; thus, this approach works best on problems which are clearly focused. Expert programs have had some market successes:

a. XCON of Digital: This expert program configures VAX computers for customers and makes fewer mistakes than humans.

b. Dipmeter Advisor of Schlumberger: This expert program interprets readings from oil wells and performs as well as a junior geologist 90% of the time.

c. Prospector of the US Geological Survey: This expert program found a major mineral deposit worth $100M.

d. MYCIN of Stanford: This expert program can diagnose disorders of the blood better than a GP but not as well as an expert.
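The rule-and-search structure common to these systems can be sketched in miniature: a knowledge base of if-then rules plus a search procedure that applies them to the facts of a case until no new conclusions appear. The rules below are invented for illustration and are not taken from any actual system:

```python
# Minimal sketch of an expert system: a knowledge base of if-then
# rules and a forward-chaining search over the facts of a case.
# Rules are invented for illustration only.

RULES = [
    ({"fever", "cough"}, "flu suspected"),
    ({"flu suspected", "high white cell count"}, "order blood test"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                    # keep applying rules until no
        changed = False               # new conclusions appear
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

case = infer({"fever", "cough", "high white cell count"})
print("order blood test" in case)  # True
```

Note how the second rule fires only because the first rule's conclusion became a new fact; this chaining is the "search" the text describes.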

Before the mid-80s, AI was a research activity in universities. With the advent of the first commercial successes of expert systems, a new industry was created. The new industry oversold the possibilities of expert systems and sold firms software packages with the mistaken idea that the firms could easily create the knowledge bases for their applications themselves. The result was a fiasco which discredited the new industry. Critics claim that the AI types are overreaching themselves, because computer limitations imply an expert program is unlikely to be more than just competent and cannot deal with new situations. This is the reason the AI types prefer to call expert systems knowledge-based systems. Such systems in practice act as intelligent assistants, not experts. They change the composition of work groups by replacing assistants and offer the possibility of new services. For example, knowledge-based accounting systems reduce the need for junior accountants and enable accounting groups to answer what-if type questions for their clients.

From an economic perspective, if competency via an expert program is cheaper than competency via training humans, then the expert program industry will continue to grow. Once you have created an expert program, the cost of creating an additional copy is very low. The biggest success of expert systems is in the area of equipment maintenance programs, and the use of expert systems continues to grow.

Other types of quasi-intelligent software: Newer types of quasi-intelligent software are neural networks and case-based reasoning. Neural networks are being used to spot credit card crooks, pick stocks, sort apples, and even drive trucks. A neural network is better at spotting credit card crooks than an expert program because the former can use many more variables than the latter. Expert systems used to spot credit card crooks tend to give off so many false alarms that they are of little use.

Case-based reasoning systems are natural language systems which employ a ``case base'' of previously solved problems. To solve a new problem, the program searches for similar, previously solved cases and then adapts those solutions to the case at hand. Each new case is added to the case base. An important application is customer query systems. Quasi-intelligent software is being created which combines expert systems, neural networks and case-based reasoning with other types of software such as genetic algorithms (a powerful search tool for finding the best alternative), virtual reality, and multimedia.
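Retrieval from a case base can be sketched as a nearest-neighbor search, with similarity crudely measured by shared keywords; the cases below are invented for illustration:

```python
# Sketch of case-based reasoning: retrieve the stored case most
# similar to the new problem (similarity = shared keywords), reuse its
# solution, and add the new case to the base. Cases are invented.

case_base = [
    ({"printer", "no power"}, "check the power cable"),
    ({"printer", "streaks"}, "replace the toner cartridge"),
]

def solve(problem):
    features = set(problem)
    best = max(case_base, key=lambda case: len(case[0] & features))
    solution = best[1]
    case_base.append((features, solution))   # new case joins the base
    return solution

print(solve({"printer", "streaks", "noise"}))  # replace the toner cartridge
```

A real system would also adapt the retrieved solution to the new case rather than reuse it verbatim; this sketch shows only the retrieval-and-grow cycle.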

The newest type of quasi-intelligent software is the concept of an intelligent agent. Intelligent agents are designed to perform tasks for their owner. For example, network agents could scan data bases and electronic mail, schedule meetings, and help with travel arrangements. Clerical agents in offices could answer phones, tap into computers for customer data and send faxes.

Software: Surf the Net

There are a large number of software sites you should visit.

For a long list of computer companies, courtesy of Yahoo, click here.

Impact of Computation

Objective: In order to forecast the future impact of computers on society, the first step is to consider the impact computers have already had on society. We will consider the impact of computers on mathematics and the sciences, engineering and policy, business and institutions, and other human activities.

Impact does not require computers to think

While researchers have a fairly clear idea how a computer functions, they do not know how the brain works. Until researchers have a much better idea how life forms think, the issue of whether a computer can ever be programmed to think like a human is an open question. Von Neumann computers are much better at arithmetic operations than humans, but much worse at pattern recognition. Neural networks, a new type of computer with numerous interconnections between the processing units, much like the human brain, show great promise in pattern recognition. Future computers may combine neural networks with digital computers to take advantage of the strengths of both. Currently, a computer can be programmed to play chess at the grand master level and can beat the world champion at backgammon. To what extent software running in computers can demonstrate creativity is an open question. Nevertheless, in spite of their limitations, software and hardware have had a major social impact and will have an even greater one in the future.

Mathematics and science

An example of the controversial impact of computers on math is the 4-color map problem. This problem deals with how many colors are necessary for a map on which no two adjacent areas have the same color. (Example: how many colors would it take to color a map of the US?) This problem had been worked on for at least one hundred years. The conjecture was four; however, no one was able to offer a proof until some mathematicians at the University of Illinois programmed their computer. After using about 1000 hours of computer time to examine all possible cases, they were able to state that four colors are enough. This computer approach to proofs represents a fundamentally new approach to math. The computer methodology challenges the ideal that the goal in proving theorems is a simple proof which can be checked by other mathematicians; examining the proof of the 4-color map problem requires understanding the software, which is anything but simple. In addition, mathematicians are beginning to study mathematical problems through graphical displays on a computer. This has created a controversial area of math known as experimental mathematics, that is, studying math through computer experiments instead of proofs.

There are many phenomena in science which simply cannot be studied without supercomputers. For example, analyzing models of the weather requires computing tens of thousands of equations. Until the invention of supercomputers, these equations could not be computed in a reasonable time frame. Numerous physics problems push the limits of computing. Chemists have programs which simulate chemical reactions and thus enable the chemist to tell the outcome without an experiment. Complex molecules are studied with computer graphics. Economists have developed several types of world models with up to 15,000 equations; however, these models can be simulated on a workstation.

Engineering and policy

The important capability which a computer gives an engineer or a policy maker is the computation of an outcome from a simulation model. This allows an engineer or policy maker to analyze different cases without having to build a prototype or perform experiments. For example, in the design of a car or an aircraft wing, the air resistance can be determined by a computer simulation program without having to perform a wind tunnel experiment. Similarly, an economist can forecast the consequences of alternative monetary policies. Computer simulations are both much faster and much cheaper than conducting actual experiments. The value of a simulation program depends on how well the theory upon which the model is built explains the underlying phenomenon; simulation models based on theories from natural science are generally much more accurate than those based on social science theories. Some engineering projects would not have been possible without computers: the Apollo project, which sent man to the moon, could not have been completed without them.
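What a simulation model does can be sketched with a toy one-equation model in which inflation adjusts toward the money growth rate; this is an illustration of the method only, not an actual forecasting model:

```python
# Sketch of simulation: step a model equation forward through time for
# each policy case. The model (inflation adjusts toward money growth)
# and all parameter values are toy illustrations.

def simulate(money_growth, years=10, inflation=0.03, speed=0.5):
    path = []
    for _ in range(years):
        inflation += speed * (money_growth - inflation)
        path.append(inflation)
    return path

# Compare two policy cases without any real-world experiment:
tight, loose = simulate(0.02), simulate(0.08)
print(tight[-1] < loose[-1])  # True
```

The point is the workflow, not the model: once the equations are in the computer, running another policy case costs seconds rather than an experiment.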

Business and administration

One of the most important uses of computers is administration in business and other institutions. When I worked for Douglas Aircraft in the 60s, over 50 percent of the computer use was for administrative tasks such as accounting and records for government contracts. Currently most corporate and other institutional records have been computerized and are maintained in databases. Even here at UT, student registration is finally being computerized. Businesses and other institutions are increasing their use of computers to make analytical decisions. One example is the growing use of spreadsheets to consider alternatives; thus, spreadsheets can be considered a business counterpart to engineering simulation programs. As computers become increasingly powerful, the amount of detail considered in business decision-making increases.

All human activities

Computers are now being used in most human activities. For example, in police work computers have been programmed to match fingerprints, and this application identified a serial killer in California. Before this program, the labor costs were simply too high to match fingerprints unless there were suspects. A hot area of computation in the arts is music synthesizers; another is programs to choreograph dance steps. In sports, computers are used to analyze athletes' performances. In medicine, a computer program has been developed which stimulates nerves to allow paraplegics to move their muscles. One success story is a young lady, paralyzed from the waist down, who with this program was able to ride a bicycle and walk.

Impact: Surf the Net

To see some impact of computing check out the following:





norman@eco.utexas.edu
Wed Jul 19 11:08:35 CDT 1995