

Earliest computers

The earliest calculating machine was the abacus, believed to have been invented in Babylon around 2400 B.C.E. The abacus was used by many different cultures and civilizations; a major later advance was the Chinese abacus of the 2nd Century B.C.E.

The Chinese developed the South Pointing Chariot in 115 B.C.E. This device featured a differential gear, a mechanism later used in the mid-20th Century to build analog computers.

The Indian grammarian Panini wrote the Ashtadhyayi in the 5th Century B.C.E. In this work he created 3,959 rules of grammar for India’s Sanskrit language. This important work is the oldest surviving linguistic text and introduced the ideas of metarules, transformations, and recursion, all of which have important applications in computer science.

The first true computers were made with intricate gear systems by the Greeks. These computers turned out to be too delicate for the technological capabilities of the time and were abandoned as impractical. The Antikythera mechanism, discovered in a shipwreck in 1900, is an early mechanical analog computer from between 150 B.C.E. and 100 B.C.E. The Antikythera mechanism used a system of 37 gears to compute the positions of the sun and the moon through the zodiac on the Egyptian calendar, and possibly also the fixed stars and five planets known in antiquity (Mercury, Venus, Mars, Jupiter, and Saturn), for any time in the future or past. The system of gears added and subtracted angular velocities to compute differentials. The Antikythera mechanism could accurately predict eclipses and could draw up accurate astrological charts for important leaders. It is likely that the Antikythera mechanism was based on an astrological computer created by Archimedes of Syracuse in the 3rd century B.C.E.

The first digital computers were made by the Inca using ropes and pulleys. Knots in the ropes served the purpose of binary digits. The Inca had several of these computers and used them for tax and government records. In addition to keeping track of taxes, the Inca computers held databases on all of the resources of the Inca empire, allowing for efficient allocation of resources in response to local disasters (storms, drought, earthquakes, etc.). Spanish soldiers acting on orders of Roman Catholic priests destroyed all but one of the Inca computers in the mistaken belief that any device that could give accurate information about distant conditions must be a divination device powered by the Christian “Devil” (and many modern Luddites continue to view computers as Satanically possessed devices).

In the 1800s, the first programmable devices controlled the weaving machines in the factories of the Industrial Revolution. Created by Joseph-Marie Jacquard, these early devices used punched cards as data storage (the cards contained the control codes for the various patterns). These cards were very similar to the famous Hollerith cards developed later. The first computer programmer was Lady Ada Lovelace, for whom the Ada programming language is named.

In 1822 Charles Babbage proposed a difference engine for automated calculating. In 1833 he started work on his Analytical Engine, a mechanical computer with all of the elements of a modern computer, including control, arithmetic, and memory, but the technology of the day couldn’t produce gears with enough precision or reliability to make his computer possible. The Analytical Engine would have been programmed with Jacquard’s punched cards. Babbage later designed the improved Difference Engine No. 2. Lady Ada Lovelace wrote a program for the Analytical Engine that would have correctly calculated a sequence of Bernoulli numbers, but she was never able to test her program because the machine was never built.

George Boole introduced what is now called Boolean algebra in 1854. This branch of mathematics was essential for creating the complex circuits in modern electronic digital computers.

In the 1900s, researchers started experimenting with both analog and digital computers using vacuum tubes. Some of the most successful early computers were analog computers, capable of solving advanced calculus problems rather quickly. But the real future of computing was digital rather than analog. Building on the technology and mathematics used for telephone and telegraph switching networks, researchers started building the first electronic digital computers.

The first modern computer was the German Zuse Z3 in 1941. In 1944 Howard Aiken of Harvard University completed the Harvard Mark I, followed by the Mark II in 1947. The Mark I was primarily mechanical, while the Mark II was primarily based on reed relays. Telephone and telegraph companies had been using reed relays for the logic circuits needed for large-scale switching networks.

The first modern electronic computer was the ENIAC in 1946, using 18,000 vacuum tubes. See below for information on Von Neumann’s important contributions.

The first solid-state (or transistor) computer was the TRADIC, built at Bell Laboratories in 1954. The transistor had previously been invented at Bell Labs in 1948.




Von Neumann architecture


John Louis von Neumann, mathematician (born János von Neumann on 28 December 1903 in Budapest, Hungary; died 8 February 1957 in Washington, D.C.), proposed the stored program concept while a professor of mathematics (one of the original six) at the Institute for Advanced Study in Princeton, New Jersey. In this design, programs (code) are stored in the same memory as data. The computer knows the difference between code and data by which it is attempting to access at any given moment. When executing code, the binary numbers are decoded by physical logic circuits (later other methods, such as microprogramming, were introduced), and then the instructions are run in hardware. This design is called the von Neumann architecture and has been used in almost every digital computer ever made.


Von Neumann architecture introduced flexibility to computers. Previous computers had their programming hard wired into the computer. A particular computer could only do one task (at the time, mostly building artillery tables) and had to be physically rewired to do any new task.

By using numeric codes, von Neumann computers could be reprogrammed for a wide variety of problems, with the decode logic remaining the same.
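
To make the decode-and-execute idea concrete, here is a minimal sketch of a stored-program machine written in C. The toy instruction set (LOAD, ADD, STORE, HALT) is invented purely for illustration and does not correspond to any historical machine; the point is that a single memory array holds both the numeric instruction codes and the data they operate on, and a fetch-decode-execute loop interprets them.

    /* Minimal stored-program sketch: one memory array holds both code and
     * data, and a loop fetches, decodes, and executes numeric opcodes.
     * The instruction set is hypothetical, invented for illustration. */
    #include <stdio.h>

    enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

    int main(void) {
        /* Cells 0-6 hold the program, cells 7-9 hold the data. */
        int mem[16] = {
            LOAD,  7,   /* acc = mem[7]   */
            ADD,   8,   /* acc += mem[8]  */
            STORE, 9,   /* mem[9] = acc   */
            HALT,
            20, 22, 0   /* data: 20, 22, and a slot for the result */
        };
        int pc = 0, acc = 0, running = 1;

        while (running) {
            int op = mem[pc++];                      /* fetch */
            switch (op) {                            /* decode + execute */
            case LOAD:  acc = mem[mem[pc++]];   break;
            case ADD:   acc += mem[mem[pc++]];  break;
            case STORE: mem[mem[pc++]] = acc;   break;
            default:    running = 0;            break;
            }
        }
        printf("result = %d\n", mem[9]);   /* prints 42 */
        return 0;
    }

Because the program is just numbers in memory, reprogramming the machine means loading different numbers, which is exactly the flexibility described above.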

As processors (especially supercomputers) get ever faster, the von Neumann bottleneck is starting to become an issue. With data and code both being accessed over the same circuit lines, the processor has to wait for one while the other is being fetched (or written). Well-designed data and code caches help, but only when the requested access is already loaded into the cache. Some researchers are now experimenting with the Harvard architecture to address the von Neumann bottleneck. In the Harvard architecture, named for Howard Aiken’s experimental Harvard Mark I (ASCC) calculator at Harvard University, a second set of data and address lines, along with a second bank of memory, is set aside for executable code, removing part of the conflict with memory accesses for data.

Von Neumann became an American citizen in 1937 to be eligible to help with top-secret work during World War II. There is a story that Oskar Morgenstern coached von Neumann and Kurt Gödel on the U.S. Constitution and American history while driving them to their immigration interview. Morgenstern asked if they had any questions, and Gödel replied that he had no questions, but that he had found some logical inconsistencies in the Constitution that he wanted to ask the immigration officers about. Morgenstern recommended that he not ask questions, but just answer them.

Von Neumann occasionally worked with Alan Turing from 1936 through 1938, when Turing was a graduate student at Princeton. Von Neumann was exposed to the concepts of logical design and the universal machine proposed in Turing’s 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem”.

Von Neumann worked with such early computers as the Harvard Mark I, ENIAC, EDVAC, and his own IAS computer.


Early research into computers involved doing the computations needed to create tables, especially artillery firing tables. Von Neumann was convinced that the future of computing lay in applying computers to specific problems in applied mathematics rather than mere table generation. Von Neumann was the first person to use computers for mathematical physics and economics, proving the utility of a general-purpose computer.

Von Neumann proposed the concept of stored programs in the 1945 paper “First Draft of a Report on the EDVAC”. Influenced by the idea, Maurice Wilkes of the Cambridge University Mathematical Laboratory designed and built the EDSAC, the world’s first operational, production, stored-program computer.

The first stored computer program ran on the Manchester Small-Scale Experimental Machine (the forerunner of the Manchester Mark 1) on June 21, 1948.

Von Neumann foresaw the advantages of parallelism in computers, but because of construction limitations of the time, he worked on sequential systems.

Von Neumann advocated the adoption of the bit as the measurement of computer memory and solved many of the problems regarding obtaining reliable answers from unreliable computer components.

Interestingly, von Neumann was opposed to the idea of compilers. When shown the idea for FORTRAN in 1954, von Neumann asked “Why would you want more than machine language?”. Von Neumann had graduate students hand assemble programs into binary code for the IAS machine. Donald Gillies, a student at Princeton, created an assembler to do the work. Von Neumann was angry, claiming “It is a waste of a valuable scientific computing instrument to use it to do clerical work”.

Von Neumann also did important work in set theory (including measure theory), the mathematical foundation for quantum theory (including statistical mechanics), self-adjoint algebras of bounded linear operators on a Hilbert space closed in weak operator topology, non-linear partial differential equations, and automata theory (later applied to computers). His work in economics included his 1937 paper “A Model of General Economic Equilibrium” on a multi-sectoral growth model and his 1944 book “Theory of Games and Economic Behavior” (co-authored with Morgenstern) on game theory and uncertainty.

I leave the discussion of von Neumann with a couple of quotations:

 “If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.”

 “Anyone who considers arithmetical methods of producing random numbers is, of course, in a state of sin.”


Computer operators


One solution to the problem of wasted and idle computer time was to have programmers prepare their work off-line on some input medium (often punched cards, paper tape, or magnetic tape) and then hand the work to a computer operator. The computer operator would load up jobs in the order received (with priority overrides based on politics and other factors). Each job still ran one at a time with complete control of the computer, but as soon as a job finished, the operator would transfer the results to some output medium (punched cards, paper tape, magnetic tape, or printed paper) and deliver the results to the appropriate programmer. If the program ran to completion, the result would be some end data. If the program crashed, memory would be transferred to some output medium for the programmer to study (because some of the early business computing systems used magnetic core memory, these became known as “core dumps”).

The concept of computer operators dominated the mainframe era and continues today in large scale operations with large numbers of servers.

Device drivers and library functions

Soon after the first successes with digital computer experiments, computers moved out of the lab and into practical use. The first practical application of these experimental digital computers was the generation of artillery tables for the British and American armies. Much of the early research in computers was paid for by the British and American militaries. Business and scientific applications followed.

As computer use increased, programmers noticed that they were duplicating the same efforts.

Every programmer was writing his or her own routines for I/O, such as reading input from a magnetic tape or writing output to a line printer. It made sense to write a common device driver for each input or output device and then have every programmer share the same device drivers rather than each programmer writing his or her own. Some programmers resisted the use of common device drivers in the belief that they could write “more efficient”, faster, or “better” device drivers of their own.

Additionally each programmer was writing his or her own routines for fairly common and repeated functionality, such as mathematics or string functions. Again, it made sense to share the work instead of everyone repeatedly “reinventing the wheel”. These shared functions would be organized into libraries and could be inserted into programs as needed. In the spirit of cooperation among early researchers, these library functions were published and distributed for free, an early example of the power of the open source approach to software development.

Computer manufacturers started to ship a standard library of device drivers and utility routines with their computers. These libraries were often called a runtime library because programs connected up to the routines in the library at run time (while the program was running) rather than being compiled as part of the program. The commercialization of code libraries ended the widespread free sharing of software.
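
The run-time connection described above survives today as dynamic linking. As a rough modern illustration only (assuming a Linux system where the C math library is installed as "libm.so.6"), the POSIX dlopen/dlsym calls let a program bind to a library routine while it is running rather than carrying its own copy:

    /* Sketch of run-time binding to a shared library routine.
     * Assumes Linux with the math library available as libm.so.6.
     * Build with: cc runtime_demo.c -ldl */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void) {
        void *lib = dlopen("libm.so.6", RTLD_LAZY);   /* connect at run time */
        if (!lib) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
        if (cosine)
            printf("cos(0.0) = %f\n", cosine(0.0));   /* prints 1.000000 */
        dlclose(lib);
        return 0;
    }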

Manufacturers were pressured to add security to their I/O libraries in order to prevent tampering or loss of data.


Bare hardware

In the earliest days of electronic digital computing, everything was done on the bare hardware. Very few computers existed and those that did exist were experimental in nature. The researchers who were making the first computers were also the programmers and the users. They worked directly on the “bare hardware”. There was no operating system. The experimenters wrote their programs in machine or assembly language and a running program had complete control of the entire computer. Often programs and data were entered by hand through the use of toggle switches. Memory locations (both data and programs) could be read by viewing a series of lights (one for each binary digit). Debugging consisted of a combination of fixing both the software and hardware, rewriting the object code and changing the actual computer itself.

The lack of any operating system meant that only one person could use a computer at a time. Even in the research lab, there were many researchers competing for limited computing time. The first solution was a reservation system, with researchers signing up for specific time slots. The earliest billing systems charged for the entire computer and all of its resources (whether used or not) and were based on outside clock time, billing from the scheduled start time to the scheduled end time.

The high cost of early computers meant that it was essential that the rare computers be used as efficiently as possible. The reservation system was not particularly efficient. If a researcher finished work early, the computer sat idle until the next time slot. If the researcher’s time ran out, the researcher might have to pack up his or her work in an incomplete state at an awkward moment to make room for the next researcher. Even when things were going well, a lot of the time the computer actually sat idle while the researcher studied the results (or studied memory of a crashed program to figure out what went wrong). Simply loading the programs and data took up some of the scheduled time.


Input output control systems


The first programs directly controlled all of the computer’s resources, including input and output devices. Each individual program had to include code to control and operate each and every input and/or output device used.

One of the first consolidations was placing common input/output (I/O) routines into a common library that could be shared by all programmers. I/O was separated from processing.

These first rudimentary operating systems were called an Input Output Control System or IOCS.

Computers remained single user devices, with main memory divided into an IOCS and a user section. The user section consisted of program, data, and unused memory.

The user remained responsible for both set up and tear down.

Set up included loading the program and data by front panel switches, punched cards, magnetic tapes, paper tapes, disk packs, drum drives, and other early I/O and storage devices. Paper might be loaded into printers, blank cards into card punch machines, and blank or formatted tape into tape drives, or other output devices readied.

Tear down would include unmounting tapes, drives, and other media.

The very expensive early computers sat idle during both set up and tear down.

This waste led to the introduction of less expensive I/O computers. While one I/O computer was being set up or torn down, another I/O computer could be communicating a readied job with the main computer.

Some installations might have several different I/O computers connected to a single main computer to keep the expensive main computer in use. This led to the concept of multiple I/O channels.

Monitors


As computers spread from the research labs and military uses into the business world, the accountants wanted a more accurate accounting of computer time than mere wall clock time.

This led to the concept of the monitor. Routines were added to record the start and end times of work using computer clock time. Routines were added to the I/O library to keep track of which devices were used and for how long.

With the development of the Input Output Control System, these time keeping routines were centralized.

You will notice that the word monitor appears in the name of some operating systems, such as FORTRAN Monitor System. Even decades later many programmers still refer to the operating system as the monitor.

An important motivation for the creation of a monitor was more accurate billing. The monitor could keep track of actual use of I/O devices and record runtime rather than clock time.

For accurate time keeping the monitor had to keep track of when a program stopped running, regardless of whether it was a normal end of the program or some kind of abnormal termination (such as a crash).

The monitor reported the end of a program run or error conditions to a computer operator, who could load the next job waiting, rerun a job, or take other actions. The monitor also notified the computer operator of the need to load or unload various I/O devices (such as changing tapes, loading paper into the printer, etc.).


1950s

Some operating systems from the 1950s include: FORTRAN Monitor System, General Motors Operating System, Input Output System, SAGE, and SOS.


SAGE (Semi-Automatic Ground Environment), designed to monitor weapons systems, was the first real time control system.

Batch systems


Batch systems automated the early approach of having human operators load one program at a time. Instead of having a human operator load each program, software handled the scheduling of jobs. In addition to programmers submitting their jobs, end users could submit requests to run specific programs with specific data sets (usually stored in files or on cards). The operating system would schedule “batches” of related jobs. Output (punched cards, magnetic tapes, printed material, etc.) would be returned to each user.

General Motors Operating System, created by General Motors Research Laboratories in early 1956 (or late 1955) for their IBM 701 mainframe, is generally considered to be the first batch operating system and possibly the first “real” operating system.

The operating system would read in a program and its data, run that program to completion (including outputting data), and then load the next program in series as long as there were additional jobs available.

Batch operating systems used a Job Control Language (JCL) to give the operating system instructions. These instructions included designation of which punched cards were data and which were programs, indications of which compiler to use, which centralized utilities were to be run, which I/O devices might be used, estimates of expected run time, and other details.

This type of batch operating system was known as a single stream batch processing system.

Examples of operating systems that were primarily batch-oriented include: BKY, BOS/360, BPS/360, CAL, and Chios.

 

Early 1960s


The early 1960s saw the introduction of time sharing and multi-processing.

Some operating systems from the early 1960s include: Admiral, B1, B2, B3, B4, Basic Executive System, BOS/360, Compatible Timesharing System (CTSS), EXEC I, EXEC II, Honeywell Executive System, IBM 1410/1710 OS, IBSYS, Input Output Control System, Master Control Program, and SABRE.

The first major transaction processing system was SABRE (Semi-Automatic Business Research Environment), developed by IBM and American Airlines.


Multiprogramming


There is a huge difference in speed between I/O devices and the processor. In a single-stream system, the processor remains idle for much of the time as it waits for the I/O device to be ready to send or receive the next piece of data.

The obvious solution was to load up multiple programs and their data and switch back and forth between programs or jobs.

When one job idled to wait for input or output, the operating system could automatically switch to another job that was ready.
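
As an illustration of that switching idea (the job names and timings below are invented, not taken from any real system), this small C simulation keeps several jobs in memory, runs whichever one is ready, and lets simulated I/O complete while other jobs use the processor:

    /* Toy multiprogramming simulation: run a ready job for one tick;
     * when it blocks for I/O, switch to another ready job while the
     * pending I/O counts down in the background. */
    #include <stdio.h>

    struct job {
        const char *name;
        int cpu_left;   /* ticks of CPU work still needed      */
        int io_wait;    /* ticks until the pending I/O finishes */
    };

    int main(void) {
        struct job jobs[] = {
            { "payroll",   3, 0 },
            { "inventory", 2, 0 },
            { "report",    4, 0 },
        };
        int n = 3, done = 0, tick = 0;

        while (done < n) {
            int ran = 0;
            for (int i = 0; i < n; i++) {
                if (!ran && jobs[i].cpu_left > 0 && jobs[i].io_wait == 0) {
                    /* Run the first ready job for one tick, then have it
                     * issue an I/O request and block for two ticks. */
                    printf("tick %d: running %s\n", tick, jobs[i].name);
                    if (--jobs[i].cpu_left == 0)
                        done++;
                    else
                        jobs[i].io_wait = 2;
                    ran = 1;
                } else if (jobs[i].io_wait > 0) {
                    jobs[i].io_wait--;   /* I/O proceeds while others compute */
                }
            }
            if (!ran)
                printf("tick %d: processor idle, all jobs waiting on I/O\n", tick);
            tick++;
        }
        return 0;
    }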

 

System calls

    
The first operating system to introduce system calls was the University of Manchester’s Atlas I Supervisor.

Time sharing

    
The operating system could have additional reasons to rotate through jobs, including giving higher or lower priority to various jobs (and therefore a larger or smaller share of time and other resources). The Compatible Timesharing System (CTSS), first demonstrated in 1961, was one of the first attempts at timesharing.
    
While most of the CTSS operating system was written in assembly language (all previous OSes were written in assembly for efficiency), the scheduler was written in the programming language MAD in order to allow safe and reliable experimentation with different scheduling algorithms. About half of the command programs for CTSS were also written in MAD.
    
Timesharing is a more advanced version of multiprogramming that gives many users the illusion that they each have complete control of the computer to themselves. The scheduler stops running programs based on a slice of time, moves on to the next program, and eventually returns to the beginning of the list of programs. In little increments, each program gets its work done in a manner that appears simultaneous to the end users.
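
A minimal sketch of that time-slicing behavior (the user names and amounts of work are invented for illustration) is a round-robin loop that gives each job a fixed quantum before preempting it and moving on:

    /* Toy round-robin time slicing: each job receives a fixed quantum of
     * simulated work per pass, so all jobs appear to progress together. */
    #include <stdio.h>

    #define QUANTUM 2   /* units of work per time slice */

    struct job {
        const char *user;
        int work_left;
    };

    int main(void) {
        struct job jobs[] = { { "alice", 5 }, { "bob", 3 }, { "carol", 4 } };
        int n = 3, remaining = n;

        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                if (jobs[i].work_left == 0)
                    continue;
                /* Give this job one quantum, then preempt it. */
                int slice = jobs[i].work_left < QUANTUM ? jobs[i].work_left
                                                        : QUANTUM;
                jobs[i].work_left -= slice;
                printf("%s runs for %d unit(s), %d left\n",
                       jobs[i].user, slice, jobs[i].work_left);
                if (jobs[i].work_left == 0)
                    remaining--;
            }
        }
        return 0;
    }

Real schedulers also account for priorities, I/O waits, and fairness, but the core illusion of simultaneous use comes from cycling through the ready jobs quickly enough that each user sees steady progress.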

 

Mid 1960s

    
Some operating systems from the mid-1960s include: Atlas I Supervisor, DOS/360, Input Output Selector, Master Control Program, and Multics.
    
The Atlas I Supervisor introduced spooling, interrupts, and virtual memory paging (16 pages) in 1962. Segmentation was introduced on the Burroughs B5000. MIT’s Multics combined paging and segmentation.
    
The Compatible Timesharing System (CTSS) introduced email.

 

Late 1960s

    
Some operating systems from the late-1960s include: BPS/360, CAL, CHIPPEWA, EXEC 3, EXEC 4, EXEC 8, GECOS III, George 1, George 2, George 3, George 4, IDASYS, MASTER, Master Control Program, OS/MFT, OS/MFT-II, OS/MVT, OS/PCP, and RCA DOS.

 

Microprocessors

   
In 1968 a group of scientists and engineers from the Mitre Corporation (Bedford, Massachusetts) created the Viatron Computer company and an intelligent data terminal built around an 8-bit LSI microprocessor using PMOS technology. A year later, in 1969, Viatron created the 2140, the first 4-bit LSI microprocessor. At the time MOS was used only for a small number of calculators, and there simply wasn’t enough worldwide manufacturing capacity to build these computers in quantity.
    
Other companies saw the benefit of MOS, starting with Intel’s 1971 release of the 4-bit 4004 as the first commercially available microprocessor. In 1972 Rockwell released the PPS-4 microprocessor, Fairchild released the PPS-25 microprocessor, and Intel released the 8-bit 8008 microprocessor. In 1973 National released the IMP microprocessor.
    
In 1973 Intel released the faster NMOS 8080 8-bit microprocessor, the first in a long series of microprocessors that led to the current Pentium.
    
In 1974 Motorola released the 6800, which included two accumulators, index registers, and memory-mapped I/O. Monolithic Memories introduced bit-slice microprocessing. In 1975 Texas Instruments introduced a 4-bit slice microprocessor and Fairchild introduced the F-8 microprocessor.

 

Early 1970s

    
Some operating systems from the early-1970s include: BKY, Chios, DOS/VS, Master Control Program, OS/VS1, and UNIX.
   
In 1970 Brian Kernighan of AT&T Bell Labs suggested the name “Unix” for the operating system that had been under development since 1969. The name was an intentional pun on the earlier Multics project (uni- means “one”, multi- means “many”).


UNIX takes over mainframes
    
I am skipping ahead to the development and spread of UNIX, not because the early history isn’t interesting, but because I notice that a lot of people are searching for information on UNIX history.
    
UNIX was originally developed in a laboratory at AT&T’s Bell Labs (now an independent corporation known as Lucent Technologies). At the time, AT&T was prohibited from selling computers or software, but was allowed to develop its own software and computers for internal use. A few newly hired engineers were unable to get valuable mainframe computer time because of lack of seniority and resorted to writing their own operating system (UNIX) and programming language (C) to run on an unused mini-computer.
    
The computer game Space Travel was originally written by Ken Thompson for Multics. When AT&T pulled out of the Multics project, Thompson rewrote the game in FORTRAN to run under GECOS on the GE 635. Thompson and Dennis Ritchie then ported the game to DEC PDP-7 assembly language. The process of porting the game to the PDP-7 computer was the beginning of Unix.
    
Unix was originally called UNICS, for Uniplexed Information and Computing Service, a pun on Multics, Multiplexed Information and Computing Service.
    
AT&T’s consent decree with the U.S. Justice Department on monopoly charges was interpreted as allowing AT&T to release UNIX as an open source operating system for academic use. Ken Thompson, one of the originators of UNIX, took UNIX to the University of California, Berkeley, where students quickly started making improvements and modifications, leading to the world famous Berkeley Software Distribution (BSD) form of UNIX.
    
UNIX quickly spread throughout the academic world, as it solved the problem of keeping track of many (sometimes dozens) of proprietary operating systems on university computers. With UNIX all of the computers from many different manufacturers could run the same operating system and share the same programs (recompiled on each processor).
    
When AT&T settled yet another monopoly case, the company was broken up into “Baby Bells” (the regional companies operating local phone service) and the central company (which had the long distance business and Bell Labs). AT&T (as well as the Baby Bells) was allowed to enter the computer business. AT&T gave academia a specific deadline to stop using “encumbered code” (that is, any of AT&T’s source code anywhere in their versions of UNIX).
    
This led to the development of free open source projects such as FreeBSD, NetBSD, and OpenBSD, as well as commercial operating systems based on the BSD code.
    
Meanwhile, AT&T developed its own version of UNIX, called System V. Although AT&T eventually sold off UNIX, this also spawned a group of commercial operating systems known as Sys V UNIXes.
    
UNIX quickly swept through the commercial world, pushing aside almost all proprietary mainframe operating systems. Only IBM’s MVS and DEC’s OpenVMS survived the UNIX onslaught.
    
“Vendors such as Sun, IBM, DEC, SCO, and HP modified Unix to differentiate their products. This splintered Unix to a degree, though not quite as much as is usually perceived. Necessity being the mother of invention, programmers have created development tools that help them work around the differences between Unix flavors. As a result, there is a large body of software based on source code that will automatically configure itself to compile on most Unix platforms, including Intel-based Unix.
    
Regardless, Microsoft would leverage the perception that Unix is splintered beyond hope, and present Windows NT as a more consistent multi-platform alternative.” —Nicholas Petreley, “The new Unix alters NT’s orbit”, NC World


UNIX to the desktop

Among the early commercial attempts to deploy UNIX on desktop computers was AT&T selling UNIX in an Olivetti box built around a Motorola 680x0 processor. Microsoft sold its own version of UNIX, called Xenix (later developed and marketed in partnership with SCO). Apple Computer offered A/UX, its version of UNIX running on Macintoshes. None of these early commercial UNIXes was successful. “Unix started out too big and unfriendly for the PC. … It sold like ice cubes in the Arctic. … Wintel emerged as the only ‘safe’ business choice”, Nicholas Petreley.

 “Unix had a limited PC market, almost entirely server-centric. SCO made money on Unix, some of it even from Microsoft. (Microsoft owns 11 percent of SCO, but Microsoft got the better deal in the long run, as it collected money on each unit of SCO Unix sold, due to a bit of code in SCO Unix that made SCO somewhat compatible with Xenix. The arrangement ended in 1997.)” —Nicholas Petreley, “The new Unix alters NT’s orbit”, NC World
   
To date, the most widely used desktop version of UNIX is Apple’s Mac OS X, combining the groundbreaking object-oriented NeXT operating system with some of the user interface of the classic Macintosh.

 

Mid 1970s

    
Some operating systems from the mid-1970s include: CP/M, Master Control Program.
   
In 1973 the kernel of Unix was rewritten in the C programming language. This made Unix the world’s first portable operating system, capable of being easily ported (moved) to any hardware. This was a major advantage for Unix and led to its widespread use in the multi-platform environments of colleges and universities.

 

Late 1970s

    
Some operating systems from the late-1970s include: EMAS 2900, General Comprehensive OS, VMS (later renamed OpenVMS), OS/MVS.

 

1980s

    
Some operating systems from the 1980s include: AmigaOS, DOS/VSE, HP-UX, Macintosh, MS-DOS, and ULTRIX.
   
The 1980s saw the commercial release of the graphical user interface, most famously with the Apple Macintosh, Commodore Amiga, and Atari ST, followed by Microsoft’s Windows.

 

1990s

    
Some operating systems from the 1990s include: BeOS, BSDi, FreeBSD, NeXT, OS/2, Windows 95, Windows 98, and Windows NT.

 

2000s

    
Some operating systems from the 2000s include: Mac OS X, Syllable, Windows 2000, Windows Server 2003, Windows ME, and Windows XP.

 

Historical Timeline

    
Timeline Notes: In addition to listing the years that various operating systems were introduced, this timeline also includes information on supporting technologies to give better context.

Year: The year that items were introduced.

Operating Systems: Operating systems introduced in that year.

Programming Languages: Programming languages introduced. While only a few programming languages are appropriate for operating system work (such as Ada, BLISS, C, FORTRAN, and PL/I), the programming languages available with an operating system greatly influence the kinds of application programs available for it.

Computers: Computers and processors introduced. While a few operating systems run on a wide variety of computers (such as UNIX and Linux), most operating systems are closely or even intimately tied to their primary computer hardware. Speed listings in parentheses are in operations per second (OPS), floating point operations per second (FLOPS), or clock speed (Hz).

Software: Software programs introduced, including some major application programs that became available. Often the choice of operating system and computer was driven by the need for specific programs or kinds of programs.

Games: Games introduced. It may seem strange to include games in the timeline, but many of the advances in computer hardware and software technologies first appeared in games. As one famous example, UNIX has its roots in the porting of an early computer game to new hardware.

Technology: Major technology advances, which influence the capabilities and possibilities for operating systems.

 

1930s

    
1938:
    
Computers: Zuse Z1 (Germany, 1 OPS, first mechanical programmable binary computer, storage for a total of 64 numbers stored as 22-bit floating point numbers with 7-bit exponent, 15-bit significand [one implicit bit], and sign bit)

 

1940s

    
1941:
    
Computers: Atanasoff-Berry Computer; Zuse Z3 (Germany, 20 OPS, added floating point exceptions, plus and minus infinity, and undefined)
    
1942:
    
Computers: work started on Zuse Z4
    
1943:
    
Computers: Harvard Mark I (U.S.); Colossus 1 (U.K., 5 kOPS)
    
1944:
    
Computers: Colossus 2 (U.K., single processor, 25 kOPS)
    
1945:
    
Programming Languages: Plankalkül (Plan Calculus)  
Computers: Zuse Z4 (relay-based computer, first commercial computer)
    
1946:
    
Computers: UPenn ENIAC (5 kOPS); Colossus 2 (parallel processor, 50 kOPS)
Technology: electrostatic memory
    
1948:

Computers: IBM SSEC; Manchester SSEM
Technology: random access memory; magnetic drums; transistor
    
1949:

Computers: Manchester Mark 1
Technology: registers