
It was just 50 years ago that the first electronic computer was under development at the Moore School of Electrical Engineering in Philadelphia, as a tool to assist in the accurate targeting of long-range artillery. Developments in computer technology and use since then have been a sensational demonstration of the pace of advance in science and engineering. Computers now impact on every aspect of life—as tools in scientific research, as dominant components of increasingly complex business and commercial activities, as tools for personal use and as intelligent components of machines (ranging from automated factories to washing machines). It has been estimated that an equivalent rate of progress in the automobile industry would have provided luxury cars priced at £25 and with a fuel consumption of 400 miles per gallon!

This growth has been made possible by developments in hardware technology, which enable the construction of machines of appropriate performance, scale, reliability and cost, and by developments in software technology, which provide cost-effective interfaces for users with a wide variety of requirements and expertise. Some of these advances are outlined briefly below and linked to the development of the discipline of computing.

Early computing systems which developed following the demonstration of the ENIAC system in 1946 (for example, EDSAC in the UK and EDVAC in the US) were simple systems of limited scale. Their simplicity arose from limited knowledge of how computers could be exploited; their limited scale was due to the components used in their construction (electronic valves, with cathode ray tubes or mercury delay lines as storage) and the associated problems of size, power consumption and reliability. They were prototypes which provided the experience from which new methods of exploiting their capabilities could be developed. The advent of the transistor and the use of ferrite magnetic core memory devices made possible significant increases in scale and the incorporation of more sophisticated facilities, such as floating point arithmetic. These also made possible the creation of scaled-down versions of large mainframe machines, which became known as minicomputers; the DEC PDP-11 was a typical example.

The advent of microelectronics, whereby circuits involving a large number of transistors could be constructed by suitable doping of a single chip of a semiconductor, introduced dramatic changes to the scale of hardware systems as well as to their cost and reliability. Integrated circuits replaced discrete components. These could be mass-produced in sufficient quantities for use not only within arithmetic units but also as storage devices, with the result that stores became much larger (by a factor of 10⁵ compared with early systems) and arithmetic units became more complex to improve their capability. Thus, the mainframe computer became more powerful without an increase in cost, and the minicomputer became a maxicomputer such as the VAX-11.

Perhaps more significantly, it became economically feasible to produce microcomputers which were sufficiently simple that an entire processor—complete with fast storage in the form of registers—could be implemented on a single chip. Microprocessors, incorporated in microcomputer systems, also had a profound effect on the market and on the shape of future system development. Their scale allowed computers to be used economically to replace the mechanical control of simple analogue processes, as in clocks, cameras and washing machines. They extended the range of computing systems by supporting desk-top computers (to the “mini” PDP-11 was added the micro version, the LSI-11), paving the way for an explosive increase in computer usage. The impact of the microprocessor on computer system development, however, was a mixed blessing. The limited scale of early microprocessors led to the introduction of very simple architectures, such as the Intel 8080 and the Z-80, with short word lengths and primitive register structures and arithmetic units. In many ways, these were comparable with the early computing systems of the 1950s, and the tools used to exploit them were equally primitive due to limitations of scale.

The rapid growth in microelectronics capability made possible an expansion in the capabilities of microprocessors. The Intel 8080 formed the starting point of a range of processors used to support personal computers and work stations with the same capabilities as earlier minicomputers and “mainframes” (the top-of-the-range Intel 80486 was a 32-bit system with a very wide range of complex features; the MicroVAX II and Motorola 68000 became the equivalent of mini-systems). In the attempt to exploit microelectronic capability, the resulting systems became more complex. A typical example was the Intel 80x86 range, whose complexity was in part due to maintaining compatibility with early microprocessors whose features had never been designed for future expansion. In the case of the Motorola 68000, the complexity arose from attempts to combine sophisticated and primitive facilities with conservative use of resources, such as the store needed to hold instructions.

The trend towards increased complexity of processor architecture, using microcode to enable the decoding and execution of a wide range of basic instructions, was challenged by the introduction of the Reduced Instruction Set Computer (RISC) architecture. It was suggested that complex operations such as multiplication or string handling could be decomposed by a compiler at compilation time instead of by microcode or complex circuitry at execution time. The executing system need then incorporate only simple primitives which could be executed at high speed, and the space saved on the processor chip could be used for fast storage in the form of registers and slave stores. Provided that the hardware-executed primitives are suitable as a target for compilers (rather than for the largely mythical assembler writer), the net effect could be a more cost-effective computing system. The widespread growth of RISC architectures has shown that this was indeed a powerful argument. But the increasing complexity of RISC architectures in pursuit of higher performance—as seen in the DEC Alpha processor—also suggests that the cycle of computer development may well repeat itself. Useful studies of computer development can be found in the work of Siewiorek, Bell and Newell (1982), while a thorough analysis of current trends in computer design has been undertaken by Hennessy and Patterson (1990).
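
As a purely illustrative sketch (not taken from any particular compiler or instruction set), the following C++ function suggests how a compiler targeting a machine without a hardware multiply instruction might decompose multiplication into the shift-and-add primitives such a machine does execute directly; the function name and structure are invented for this example.

    #include <cstdint>
    #include <iostream>

    // Illustrative only: expand x * y into shift and add primitives,
    // as a compiler for a multiply-less RISC-style target might do.
    std::uint32_t multiply_by_shift_add(std::uint32_t x, std::uint32_t y) {
        std::uint32_t result = 0;
        while (y != 0) {
            if (y & 1u) {     // lowest bit of multiplier set: add current partial product
                result += x;
            }
            x <<= 1;          // shift multiplicand left (multiply by 2)
            y >>= 1;          // shift multiplier right (examine next bit)
        }
        return result;
    }

    int main() {
        std::cout << multiply_by_shift_add(13, 11) << "\n";  // prints 143
        return 0;
    }

The point of the decomposition is that each primitive (shift, add, conditional test) is simple enough to execute in a single fast cycle, with the cost of the composite operation paid once at compile time.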

Although the core of any computing system is the hardware of the processor and stores, it became clear that a cost-effective system serving users with a wide range of expertise must incorporate software components. Initially, these took the form of library subroutines which could be copied into a user's deck of cards; the user could then book time to run the task on the computer system. Hardware developments, providing faster processors and bulk storage capacity on magnetic tapes and discs, made the manual procedures of library maintenance and of organizing a queue of users increasingly ineffective; they were therefore replaced by automatic procedures, implemented by resident system software in the form of an operating system, which could organize the maintenance and use of library procedures and the processing of a sequence of user tasks.

Early operating systems, such as the Fortran Monitor System, were designed to process a sequence of user tasks comprising decks of cards separated by “End-of-Job” cards. The mismatch between processor speeds and input/output operations on slow mechanical devices was partially overcome by copying input material to a magnetic tape, which had a faster interface with the main processor. The advent of random-access fast storage devices, such as discs, enabled a more flexible job assembly system to be supported, in the form of “Spooling” systems. The Atlas system was an early example of this.

Such systems were inspired by the replacement of manual procedures by automatic procedures. But the resulting “batch systems” suffered a severe disadvantage in that the user was denied access to the computing system during the execution of a job. User interaction is clearly useful in many applications and essential in others, but its handicap is that the processor becomes synchronized with the reaction time of the user; processing time is then totally dominated by the time taken by the user to analyze output and, in the light of that analysis, to direct further processing. To address this problem, an alternative organization of job processing was introduced in parallel with the development of batch systems: a single processor-store system could be switched to service many users, each interacting via their own terminal, enabling the system to overlap “think time” with the processing of other active tasks. Time-sharing systems supporting multiple users were pioneered in the Multics system, which directly influenced the design of the UNIX and OS/2 systems, both of which are currently in widespread use on a variety of computer systems.

With the creation of microcomputers, which made available low-cost, high-performance processing, further developments in operating procedures have become possible. One approach is to provide multiple “stand-alone” systems in the form of desk-top personal computers, each operated by its user using manual control techniques in much the same way as early computer systems were used. An alternative strategy can be regarded as a development of time-sharing: the terminal is replaced by a work station physically similar to a personal computer, but providing only part of a user's requirements; the rest are supplied by one or more servers which operate on a time-shared basis. Typically, an X-terminal will support the graphical user interface to one or more tasks, with functions such as file store access and maintenance, compilation etc. being carried out by servers (which again may be physically identical to a personal computer or may be mini or mainframe computing systems). The resulting network of computing resources—under the control of operating system software—presents the appearance of a single overall system with multiple “intelligent” access points. Operating systems supporting such configurations, including UNIX, MS-DOS and OS/2, are now commonly available.

It might be supposed that the implementation of distributed processing has made irrelevant the capabilities developed for batch and time-sharing systems on large mainframe computers. This has not been the case, for several reasons. Firstly, workstations are commonly used to support multiple tasks for a single user, as in the Windows system, and the system support required is similar to that developed for multiuser systems. More importantly, such systems are only effective if they conceal some properties of the real machine from user tasks, thus replacing the real machine by a Virtual Machine which reflects the properties of the underlying hardware and software mechanisms. For example, if several tasks share a single physical memory, it is clearly inappropriate to access instructions and data by the real addresses of the corresponding memory cells, for these will vary depending upon the mix of tasks under execution. It becomes more appropriate to support a Virtual Memory, with mechanisms to convert a virtual address into a real address at the time of use. The virtual memory can reflect the logical properties of the information, whilst mapping it onto the real store hierarchy is a system function; for example, there is benefit if the elements of an array occupy contiguous cells of virtual memory, whereas in real store the array can be held as blocks or pages which can be independently located.
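
As an illustrative sketch of the address mapping just described (the page size, table contents and function names are arbitrary assumptions, not taken from any particular system), the following C++ fragment splits a virtual address into a page number and an offset and converts it to a real address through a page table.

    #include <cstddef>
    #include <iostream>
    #include <unordered_map>

    // Illustrative virtual-to-real address translation.
    // A virtual address is split into a page number and an offset; the page
    // table (filled with arbitrary example values) maps each virtual page to
    // the real frame in which it currently resides.
    constexpr std::size_t kPageSize = 4096;

    std::size_t translate(std::size_t virtual_address,
                          const std::unordered_map<std::size_t, std::size_t>& page_table) {
        std::size_t page   = virtual_address / kPageSize;
        std::size_t offset = virtual_address % kPageSize;
        std::size_t frame  = page_table.at(page);  // a real system would fault if the page is absent
        return frame * kPageSize + offset;
    }

    int main() {
        // Example mapping: contiguous virtual pages 0 and 1 held in non-contiguous real frames.
        std::unordered_map<std::size_t, std::size_t> page_table = {{0, 7}, {1, 3}};
        std::cout << translate(4100, page_table) << "\n";  // page 1, offset 4 -> 3*4096 + 4 = 12292
        return 0;
    }

Because only the table changes when pages are moved, the array in the example above can remain contiguous in virtual memory even though its pages are scattered through the real store.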

Support of a “virtual machine” is one of the most important provisions of system software. It is commonplace in all modern systems, although the nature of the virtual machine and its relationship to the characteristics of the real machine differ from system to system. UNIX supports virtual machines which are almost independent of the hardware of the real machine; the VM-370 system supports virtual machines which are almost identical to the real IBM 370 range of machines. Virtual machines contribute significantly to the ease of programming tasks for execution by any computing system—whether a stand-alone personal computer, a network of work stations or a multiuser mainframe system. A useful exposition of the structure of various operating systems is provided by Deitel (1990).

The creation of sequences of instructions which can be stored and subsequently executed by a processor can be achieved in a number of ways. A sequence could be manually created by means of a set of switches; more realistically, the sequence could itself be produced by the computing system, taking as input an expression of the required sequence supplied by a user. A compiler is a program converting a source language expression of a problem into a machine-executable sequence of instructions.
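
The following toy C++ fragment, invented purely for illustration, suggests what such a translation involves: a fixed source expression, a + b * c, held as a tree, is walked to emit a linear instruction sequence for a hypothetical stack machine (the node layout and instruction names are assumptions for this sketch).

    #include <iostream>
    #include <string>

    // Toy illustration of compilation: walk a source-level expression tree
    // and emit instructions for an imagined stack machine.
    struct Expr {
        std::string op;               // "+", "*", or a variable name for a leaf
        const Expr* left  = nullptr;
        const Expr* right = nullptr;
    };

    void emit(const Expr& e) {
        if (e.left == nullptr) {      // leaf: a variable reference
            std::cout << "LOAD " << e.op << "\n";
            return;
        }
        emit(*e.left);                // code for the left operand
        emit(*e.right);               // code for the right operand
        std::cout << (e.op == "+" ? "ADD" : "MUL") << "\n";
    }

    int main() {
        Expr a{"a"}, b{"b"}, c{"c"};
        Expr mul{"*", &b, &c};
        Expr sum{"+", &a, &mul};      // the source expression a + b * c
        emit(sum);                    // LOAD a, LOAD b, LOAD c, MUL, ADD
        return 0;
    }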

Early source languages were a mnemonic form of machine code. This was easily extended to form an Assembly Language, which had a close relationship to machine code but included some higher-level facilities such as naming and macroinstructions (subsequences of machine instructions). This level of problem expression was unsatisfactory in that the task of converting a problem to this form was exceedingly complex and error-prone, and the result was specific to the architecture of a particular system. The growth of computing was therefore accompanied by the development of higher-level languages providing a more natural description of the problem in a largely machine-independent form, while allowing translation to machine code by a compiler executed by the computing system. Fortran and Cobol were two early languages suitable for scientific and commercial problems, respectively. Basic was a simpler language, popular in small-scale systems such as personal computers because of the simplicity of the required translation. These were static languages, in which work space and store for variables were allocated (by the compiler) at the start of the job.

A preferable approach is to allocate space dynamically, as required during execution, and procedural languages such as Algol, Pascal and Modula-2 exhibit this characteristic. The C language is also a procedural language which enjoys current popularity; it is modeled on the UNIX virtual machine—a largely machine-independent system—and can be used at a variety of levels of abstraction.
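
As a brief illustrative contrast with the static allocation described above (the quantities and variable names are invented for the example), the following C++ fragment allocates its working storage only when the required size becomes known during execution, rather than having the compiler fix it at the start of the job.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Storage whose size is unknown at compile time is allocated dynamically
    // as the program runs.
    int main() {
        std::size_t n = 0;
        std::cout << "How many readings? ";
        std::cin >> n;

        std::vector<double> readings(n);   // space for n values allocated now, not at compile time
        for (std::size_t i = 0; i < n; ++i) {
            readings[i] = static_cast<double>(i) * 0.5;   // placeholder data
        }
        std::cout << "allocated " << readings.size() << " elements\n";
        return 0;
    }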

These languages are all “imperative” languages in that they consist of a sequence of statements which, when translated, correspond to sequences of machine instructions. An alternative form of problem description is that found in “declarative” languages. Here, the rules of problem solution are described, and the compiler uses these rules to construct appropriate instruction sequences. For example, consider the problem of evaluating the factorial N!. This could be expressed in imperative form as a sequence of multiplications by 2, 3, 4, ..., N. In declarative form, the functional definition

N! = N × (N − 1)!, with 0! = 1

allows the compiler to create an instruction sequence appropriate to any particular machine configuration. One such declarative language enjoying considerable popularity among knowledge-based systems is PROLOG; the declarations consist of Facts and Rules stored in a Knowledge Base and operated on by an Inference Engine, which applies the rules to a particular problem.
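
The two forms can be sketched side by side in C++ (used here only for consistency with the other examples; C++ is not itself a declarative language): an imperative loop of successive multiplications, and a recursive function that simply mirrors the functional definition given above.

    #include <cstdint>
    #include <iostream>

    // Imperative form: an explicit sequence of multiplications by 2, 3, ..., N.
    std::uint64_t factorial_iterative(unsigned n) {
        std::uint64_t result = 1;
        for (unsigned i = 2; i <= n; ++i) {
            result *= i;
        }
        return result;
    }

    // Form mirroring the functional definition N! = N * (N - 1)!, with 0! = 1.
    std::uint64_t factorial_recursive(unsigned n) {
        return (n == 0) ? 1 : n * factorial_recursive(n - 1);
    }

    int main() {
        std::cout << factorial_iterative(10) << "\n";   // 3628800
        std::cout << factorial_recursive(10) << "\n";   // 3628800
        return 0;
    }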

The ever-increasing scale of computer application has the inevitable consequence of increasing the scale and complexity of problem descriptions. The discipline of Software Engineering addresses the requirement of producing complex problem descriptions which can be generated, tested, maintained and modified in an effective manner. Languages can help in this process; currently, Object-Oriented Languages such as SMALLTALK and C++ are becoming widely used. Their principle of operation is to move away from the customary separation of data structures from the procedures which operate on the data, towards a description of Objects which incorporate data and the procedures that manipulate that data, and which communicate with each other by passing messages. The resulting modularisation of complex problems is one tool that can be used in the quest for manageable production of reliable and maintainable programs. Further discussions of the various types of programming language are given by Horowitz (1987) and Wilson and Clark (1988).
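
As a minimal illustrative sketch of this style (the class and its operations are invented for the purpose, not drawn from any real system), the following C++ fragment holds data and the procedures that manipulate it together in a single object, which other code uses only through the “messages” the object understands.

    #include <iostream>
    #include <string>
    #include <utility>

    // An object bundling data (a balance) with the procedures that manipulate it;
    // outside code interacts with it only by invoking its member functions.
    class Account {
    public:
        explicit Account(std::string owner) : owner_(std::move(owner)) {}

        void deposit(double amount) { balance_ += amount; }   // a "message" the object understands
        bool withdraw(double amount) {
            if (amount > balance_) return false;              // the object protects its own data
            balance_ -= amount;
            return true;
        }
        double balance() const { return balance_; }

    private:
        std::string owner_;     // data hidden inside the object
        double balance_ = 0.0;
    };

    int main() {
        Account account("A. Turing");
        account.deposit(100.0);
        account.withdraw(30.0);
        std::cout << account.balance() << "\n";   // prints 70
        return 0;
    }

Because the balance can only be altered through deposit and withdraw, the object forms a self-contained module of the kind the paragraph above describes.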

Computing systems, whose original objective was to provide a calculation mechanism for use in scientific and engineering research, now pervade the whole of society. Their partially unforeseen expansion in scope has been the result of the increased cost-effectiveness of computing systems themselves over the past 50 years; in turn, this expansion in scope has influenced the development of computing systems.
