Atlas
The Atlas operating system was designed at the University of Manchester in England in the late 1950s and early 1960s. Many of its basic features, novel at the time, have become standard parts of modern operating systems. Device drivers were a major part of the system. In addition, system calls were provided by a set of special instructions called extra codes.
Atlas was a batch operating system with spooling. Spooling allowed the system to schedule jobs according to the availability of peripheral devices, such as magnetic tape units, paper tape readers, paper tape punches, line printers, card readers, or card punches.
The most remarkable feature of Atlas, however, was its memory management. Core memory was new and expensive at that time. Many computers, such as the IBM 650, used a drum for primary memory. The Atlas system used a drum for its main memory, but it had a small amount of core memory that was used as a cache for the drum. Demand paging was used to transfer information between core memory and the drum automatically.
The Atlas system used a British computer with 48-bit words. Addresses were 24 bits but were encoded in decimal, which allowed only 1 million words to be addressed. At that time, this was an extremely large address space. The physical memory for Atlas consisted of a 98K-word drum and 16K words of core. Memory was divided into 512-word pages, providing 32 frames of physical memory. An associative memory of 32 registers implemented the mapping from a virtual address to a physical address.
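With 512-word pages, an address splits into a page number and a word offset within the page, and the 32 associative registers map each resident page number to one of the 32 core frames. The following C sketch illustrates that lookup in the simplest possible form; it assumes ordinary binary arithmetic (the actual Atlas encoding was decimal), and the names are illustrative rather than taken from the Atlas hardware.

#include <stdint.h>

#define PAGE_SIZE  512          /* words per page                      */
#define NUM_FRAMES 32           /* 16K words of core / 512-word pages  */

/* One associative register: which virtual page a core frame holds. */
struct assoc_reg {
    int      valid;             /* frame currently holds a page */
    uint32_t page;              /* virtual page number          */
};

static struct assoc_reg assoc[NUM_FRAMES];

/* Translate a virtual word address to a core word address.
 * Returns -1 to signal a page fault (the page is still on the drum). */
long translate(uint32_t vaddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;

    for (int frame = 0; frame < NUM_FRAMES; frame++) {
        if (assoc[frame].valid && assoc[frame].page == page)
            return (long)frame * PAGE_SIZE + offset;
    }
    return -1;                  /* fault: fetch the page from the drum */
}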
If a page fault occurred, a page-replacement algorithm was invoked. One memory frame was always kept empty, so that a drum transfer could start immediately. The page-replacement algorithm attempted to predict future memory-access behavior on the basis of past behavior. A reference bit for each frame was set whenever the frame was accessed. The reference bits were read into memory every 1,024 instructions, and the last 32 values of these bits were retained. This history was used to define the time since the most recent reference and the interval between the last two references.
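A rough sketch of how such a history might be kept, assuming one 32-bit history word per frame in which bit i records whether the frame was referenced i intervals ago; the two helper functions derive the quantities the text mentions. This is only an illustration of the bookkeeping, not the actual Atlas predictor.

#include <stdint.h>

#define NUM_FRAMES 32

static uint32_t history[NUM_FRAMES];    /* bit i set => referenced i intervals ago */

/* Called every 1,024 instructions with the hardware reference bits. */
void record_interval(const int referenced[NUM_FRAMES])
{
    for (int f = 0; f < NUM_FRAMES; f++)
        history[f] = (history[f] << 1) | (referenced[f] ? 1u : 0u);
}

/* Intervals elapsed since the frame was most recently referenced. */
int time_since_last_use(int f)
{
    for (int i = 0; i < 32; i++)
        if (history[f] & (1u << i))
            return i;
    return 32;                          /* not referenced in the retained history */
}

/* Interval between the last two references, if both are still recorded. */
int last_interval(int f)
{
    int latest = -1;
    for (int i = 0; i < 32; i++) {
        if (history[f] & (1u << i)) {
            if (latest < 0) latest = i;
            else            return i - latest;
        }
    }
    return 32;                          /* fewer than two references retained */
}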
XDS-940
The XDS-940 operating system was designed at the University of California at Berkeley. Like the Atlas system, it used paging for memory management. Unlike the Atlas system, however, the XDS-940 was a time-sharing system.
Paging was used only for relocation; it was not used for demand paging. The virtual memory of any user process was only 16K words, whereas the physical memory was 64K words. Pages were 2K words each, and the page table was kept in registers. Since physical memory was larger than virtual memory, several user processes could be in memory at the same time. The number of users could be increased by sharing pages when the pages contained read-only reentrant code. Processes were kept on a drum and were swapped in and out of memory as necessary.
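A minimal sketch of that arrangement, assuming an eight-entry page table per process (16K-word virtual space divided into 2K-word pages) and two resident processes; the structure and function names are illustrative. Sharing a read-only code page amounts to pointing two processes' entries at the same physical frame.

#define PAGES_PER_PROC 8            /* 16K-word virtual space / 2K-word pages    */
#define PAGE_WORDS     2048
#define NUM_PROCS      2            /* illustrative number of resident processes */

struct pt_entry {
    int valid;                      /* page is in physical memory     */
    int frame;                      /* physical frame number          */
    int read_only;                  /* set for shared reentrant code  */
};

/* One register-resident page table per in-memory process. */
static struct pt_entry page_table[NUM_PROCS][PAGES_PER_PROC];

/* Relocation only: translate, or report that the page is swapped out (-1). */
long translate(int proc, unsigned vaddr)
{
    unsigned page   = vaddr / PAGE_WORDS;
    unsigned offset = vaddr % PAGE_WORDS;

    if (page >= PAGES_PER_PROC || !page_table[proc][page].valid)
        return -1;
    return (long)page_table[proc][page].frame * PAGE_WORDS + offset;
}

/* Sharing: every process maps virtual page p onto the same frame, read-only. */
void share_code_page(unsigned p, int frame)
{
    for (int proc = 0; proc < NUM_PROCS; proc++) {
        page_table[proc][p].valid     = 1;
        page_table[proc][p].frame     = frame;
        page_table[proc][p].read_only = 1;
    }
}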
The XDS-940 system was constructed from a modified XDS-930. The modifications were typical of the changes made to a basic computer to allow an operating system to be written properly. A user-monitor mode was added. Certain instructions, such as I/O and halt, were defined to be privileged. An attempt to execute a privileged instruction in user mode would trap to the operating system.
A system call instruction was added to the user-mode instruction set. This instruction was used to create new resources, such as files, allowing the operating system to manage the physical resources. Files, for example, were allocated in 256-word blocks on the drum. A bit map was used to manage free drum blocks. Each file had an index block with pointers to the actual data blocks. Index blocks were chained together.
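A minimal sketch of that layout in C, keeping only what the text describes: a bit map of free drum blocks, and index blocks whose slots point at 256-word data blocks and chain to the next index block. The drum size and the number of pointers per index block are illustrative assumptions.

#include <stdint.h>

#define BLOCK_WORDS  256             /* words per drum block               */
#define DRUM_BLOCKS  4096            /* illustrative drum capacity         */
#define PTRS_PER_IDX 63              /* illustrative pointers per index    */

/* One bit per drum block: 1 = free, 0 = allocated. */
static uint8_t free_map[DRUM_BLOCKS / 8];

/* Index block: pointers to data blocks, chained to the next index block. */
struct index_block {
    uint32_t data[PTRS_PER_IDX];     /* drum block numbers of file data        */
    uint32_t next;                   /* next index block in the chain, 0 = end */
};

/* Mark every drum block free (done once, when the drum is initialized). */
void format_drum(void)
{
    for (long i = 0; i < DRUM_BLOCKS / 8; i++)
        free_map[i] = 0xFF;
}

/* Find and claim a free drum block; returns its number, or -1 if the drum is full. */
long alloc_block(void)
{
    for (long b = 0; b < DRUM_BLOCKS; b++) {
        if (free_map[b / 8] & (1u << (b % 8))) {
            free_map[b / 8] &= (uint8_t)~(1u << (b % 8));   /* mark allocated */
            return b;
        }
    }
    return -1;
}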
The XDS-940 system also provided system calls to allow processes to create, start, suspend, and destroy subprocesses. A programmer could construct a system of processes. Separate processes could share memory for communication and synchronization. Process creation defined a tree structure, with a process as the root and its subprocesses as nodes below it in the tree. Each of the subprocesses could, in turn, create more subprocesses.
THE
The THE operating system was designed at the Technische Hogeschool in Eindhoven in the Netherlands. It was a batch system running on a Dutch computer with 32K of 27-bit words. The system was mainly noted for its clean design, particularly its layered structure, and its use of a set of concurrent processes employing semaphores for synchronization.
Unlike the XDS-940 system, however, the set of processes in the THE system was static. The operating system itself was designed as a set of cooperating processes. In addition, five user processes were created that served as the active agents to compile, execute, and print user programs. When one job was finished, the process would return to the input queue to select another job.
A priority CPU-scheduling algorithm was used. The priorities were recomputed every 2 seconds and were inversely proportional to the amount of CPU time used recently. This scheme gave higher priority to I/O-bound processes and to new processes.
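A minimal sketch of that recomputation, assuming a priority value inversely proportional to the CPU time used in the most recent 2-second window; the constant and field names are illustrative.

struct proc {
    int recent_cpu_ticks;            /* CPU ticks used in the last 2 seconds */
    int priority;                    /* larger value = scheduled sooner      */
};

/* Called every 2 seconds.  I/O-bound and new processes (few recent ticks)
 * come out with high priority; processes that monopolized the CPU do not. */
void recompute_priorities(struct proc p[], int n)
{
    for (int i = 0; i < n; i++) {
        p[i].priority = 1000 / (1 + p[i].recent_cpu_ticks);
        p[i].recent_cpu_ticks = 0;   /* start a fresh accounting window */
    }
}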
Memory management was limited by the lack of hardware support. However, since the system was limited and user programs could be written only in Algol, memory could be managed in software. The Algol compiler automatically generated calls to system routines, which made sure the requested information was in memory. The backing store was a 512K-word drum. A 512-word page size was used, with an LRU page-replacement strategy.
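A minimal sketch of the LRU choice over the resident pages, assuming a per-page timestamp updated by the compiler-generated access routine; this is generic LRU bookkeeping, not the THE system's actual code, and the number of resident pages is an illustrative assumption.

#define RESIDENT_PAGES 32            /* illustrative number of core frames */

struct page {
    int           in_core;
    unsigned long last_used;         /* logical clock value at last reference */
};

static struct page pages[RESIDENT_PAGES];
static unsigned long now;

/* Called by the compiler-generated routine whenever a page is accessed. */
void touch(int p)
{
    pages[p].last_used = ++now;
}

/* Victim selection: the resident page that has gone unused the longest. */
int lru_victim(void)
{
    int victim = -1;
    for (int p = 0; p < RESIDENT_PAGES; p++) {
        if (pages[p].in_core &&
            (victim < 0 || pages[p].last_used < pages[victim].last_used))
            victim = p;
    }
    return victim;
}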
Another major concern of the THE system was deadlock control. The banker's algorithm was used to provide deadlock avoidance.
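The heart of the banker's algorithm is a safety check: a state is safe if the processes can be ordered so that each one's remaining need can be satisfied from the currently available resources plus those released by the processes ahead of it. The sketch below shows that check for illustrative process and resource counts; it is a generic statement of the algorithm, not the THE system's implementation.

#include <stdbool.h>

#define NPROC 5
#define NRES  3

/* need[i][r] = maximum claim minus current allocation for process i, resource r. */
bool state_is_safe(const int avail[NRES],
                   const int alloc[NPROC][NRES],
                   const int need[NPROC][NRES])
{
    int  work[NRES];
    bool finished[NPROC] = { false };

    for (int r = 0; r < NRES; r++)
        work[r] = avail[r];

    for (int done = 0; done < NPROC; ) {
        bool progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (finished[i])
                continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (need[i][r] > work[r]) { can_run = false; break; }
            if (can_run) {
                /* Assume process i runs to completion and releases everything it holds. */
                for (int r = 0; r < NRES; r++)
                    work[r] += alloc[i][r];
                finished[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress)
            return false;            /* no remaining process can finish: unsafe */
    }
    return true;                     /* a safe completion order exists */
}

A request is granted only if, after tentatively subtracting it from the available resources and adding it to the requester's allocation, the resulting state still passes this check.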
Closely related to the THE system is the Venus system. The Venus system was also a layered design using semaphores to synchronize processes. The lower levels of the design were implemented in microcode, providing a much faster system. Memory management was changed to a paged-segmented memory. The system was also designed as a time-sharing system, rather than a batch system.
RC 4000
The RC 4000 system was designed for the Danish RC 4000 computer by Regnecentralen. The objective was not to design a batch system, a time-sharing system, or any other specific system. Rather, the goal was to create an operating-system nucleus, or kernel, on which a complete operating system could be built. Thus, the system structure was layered, and only the lower levels were provided.
The kernel supported a collection of concurrent processes. A round-robin CPU scheduler was used to schedule the processes. Although processes could share memory, the primary communication and synchronization mechanism was the message system provided by the kernel. Processes could communicate with each other by exchanging fixed-size messages eight words in length. All messages were stored in buffers from a common buffer pool. When a message buffer was no longer required, it was returned to the common pool.
A message queue was associated with each process. It contained all the messages that had been sent to that process, but had not yet been received. Messages were removed from the queue in FIFO order. The system supported four primitive operations, which were executed atomically.
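A minimal sketch of that message system, assuming a common pool of fixed-size eight-word buffers and a FIFO queue per process; the function names send_message and wait_message are illustrative stand-ins for two of the kernel's atomic primitives, and all locking is omitted.

#include <stddef.h>
#include <stdint.h>

#define MSG_WORDS 8                        /* fixed message size          */
#define POOL_SIZE 64                       /* buffers in the common pool  */
#define NPROC     16

struct msg_buf {
    uint32_t        words[MSG_WORDS];
    int             sender;
    struct msg_buf *next;                  /* link in a queue or the free list */
};

static struct msg_buf  pool[POOL_SIZE];
static struct msg_buf *free_list;                        /* common buffer pool */
static struct msg_buf *q_head[NPROC], *q_tail[NPROC];    /* per-process FIFO   */

/* Link every buffer onto the free list at start-up. */
void init_pool(void)
{
    free_list = NULL;
    for (int i = 0; i < POOL_SIZE; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* Take a buffer from the pool, copy the message in, append it to the receiver's queue. */
int send_message(int sender, int receiver, const uint32_t msg[MSG_WORDS])
{
    struct msg_buf *b = free_list;
    if (b == NULL)
        return -1;                         /* pool exhausted */
    free_list = b->next;

    for (int i = 0; i < MSG_WORDS; i++)
        b->words[i] = msg[i];
    b->sender = sender;
    b->next   = NULL;

    if (q_tail[receiver]) q_tail[receiver]->next = b;
    else                  q_head[receiver] = b;
    q_tail[receiver] = b;
    return 0;
}

/* Remove the oldest message (FIFO) and return the buffer to the common pool. */
int wait_message(int receiver, uint32_t msg[MSG_WORDS])
{
    struct msg_buf *b = q_head[receiver];
    if (b == NULL)
        return -1;                         /* the real primitive would block here */
    q_head[receiver] = b->next;
    if (q_head[receiver] == NULL)
        q_tail[receiver] = NULL;

    for (int i = 0; i < MSG_WORDS; i++)
        msg[i] = b->words[i];
    int sender = b->sender;
    b->next   = free_list;                 /* buffer goes back to the common pool */
    free_list = b;
    return sender;
}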
I/O devices were also treated as processes. The device drivers were code that converted the device interrupts and registers into messages. Thus a process would write to a terminal by sending that terminal a message. The device driver would receive the message and output the character to the terminal. An input character would interrupt the system and transfer to a device driver. The device driver would create a message from the input character and send it to a waiting process.
CTSS
The Compatible Time-Sharing System (CTSS) was designed at MIT as an experimental time-sharing system. It was implemented on an IBM 7090 and eventually supported up to 32 interactive users. The users were provided with a set of interactive commands that allowed them to manipulate files and to compile and run programs through a terminal.
The 7090 had a 32K memory made up of 36-bit words. The monitor used 5K words, leaving 27K words for the users. User memory images were swapped between memory and a fast drum. CPU scheduling employed a multilevel feedback queue algorithm. The time quantum for level i was 2^i time units. If a program did not finish its CPU burst in one time quantum, it was moved down to the next level of the queue, giving it twice as much time. The program at the highest level was run first. The initial level of a program was determined by its size, so that its time quantum was at least as long as the swap time.
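A small sketch of the two rules just described, assuming the quantum for level i is 2^i ticks and the initial level is the smallest one whose quantum is at least as long as the program's swap time; the tick unit is illustrative.

/* Time quantum (in ticks) for feedback-queue level i: it doubles at each level. */
unsigned quantum(unsigned level)
{
    return 1u << level;                    /* 2^i */
}

/* Initial level for a program: the first level whose quantum is no shorter than
 * the time needed to swap the program's memory image in from the drum. */
unsigned initial_level(unsigned swap_ticks)
{
    unsigned level = 0;
    while (quantum(level) < swap_ticks)
        level++;
    return level;
}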
CTSS was extremely successful and was in use as late as 1972. Although it was limited, it succeeded in demonstrating that time sharing was a convenient and practical mode of computing. One result of CTSS was increased development of time sharing systems. Another result was the development of MULTICS.
MULTICS
The MULTICS operating system was designed at MIT as a natural extension of CTSS. CTSS and other early time-sharing systems were so successful that they created an immediate desire to proceed quickly to bigger and better systems. As larger computers became available, the designers of CTSS set out to create a time-sharing utility. Computing service would be provided like electrical power. Large computer systems would be connected by telephone wires to terminals in offices and homes throughout a city. The operating system would be a time-shared system running continuously with a vast file system of shared programs and data.
MULTICS was designed by a team from MIT, GE, and Bell Laboratories. The basic GE 635 computer was modified to a new computer system called the GE 645, mainly by the addition of paged-segmentation memory hardware.
A virtual address was composed of an 18-bit segment number and a 16-bit word offset. The segments were then paged in 1K-word pages. The second-chance page-replacement algorithm was used.
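A small sketch of that address decomposition: the 18-bit segment number selects a segment, and the 16-bit word offset splits into a page number and an offset within a 1K-word page. Packing the two fields into a single integer is an assumption made for illustration; the function name is likewise illustrative.

#include <stdint.h>

#define PAGE_WORDS 1024                          /* 1K-word pages */

struct multics_addr {
    uint32_t segment;                            /* 18-bit segment number   */
    uint32_t page;                               /* page within the segment */
    uint32_t offset;                             /* word within the page    */
};

/* Split a packed (segment, word-offset) virtual address into its components. */
struct multics_addr decompose(uint64_t vaddr)
{
    struct multics_addr a;
    uint32_t word = (uint32_t)(vaddr & 0xFFFFu);           /* low 16 bits: word offset */
    a.segment     = (uint32_t)((vaddr >> 16) & 0x3FFFFu);  /* next 18 bits: segment    */
    a.page        = word / PAGE_WORDS;
    a.offset      = word % PAGE_WORDS;
    return a;
}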
The segmented virtual address space was merged into the file system; each segment was a file. Segments were addressed by the name of the file. The file system itself was a multilevel tree structure allowing users to create their own subdirectory structures.
Like CTSS, MULTICS used a multilevel feedback queue for CPU scheduling. Protection was accomplished by an access list associated with each file and a set of protection rings for executing processes. The system, which was written almost entirely in PL/1, comprised about 300,000 lines of code. It was extended to a multiprocessor system, allowing a CPU to be taken out of service for maintenance while the system continued running.
OS/360
The longest line of operating-system development is undoubtedly that of IBM computers. The early IBM computers, such as the IBM 7090 and the IBM 7094, are prime examples of the development of common I/O subroutines, followed by a resident monitor, privileged instructions, memory protection, and simple batch processing. These systems were developed separately, often by each site independently. As a result, IBM was faced with many different computers, with different languages and different system software.
The IBM/360 was designed to alter this situation. The IBM/360 was designed as a family of computers spanning the complete range from small business machines to large scientific machines. Only one set of software would be needed for these systems, which all used the same operating system: OS/360. This arrangement was supposed to reduce the maintenance problems for IBM and to allow users to move programs and applications freely from one IBM system to another.
Unfortunately, OS/360 tried to be all things to all people. As a result, it did none of its tasks especially well. The file system included a type field that defined the type of each file, and different file types were defined for fixed-length and variable-length records and for blocked and unblocked files. Contiguous allocation was used, so the user had to guess the size of each output file.
The memory-management routines were hampered by the architecture. Although a base-register addressing mode was used, the program could access and modify the base register, so that absolute addresses were generated by the CPU. This arrangement prevented dynamic relocation; the program was bound to physical memory at load time. Two separate versions of the operating system were produced: OS/MFT used fixed regions and OS/MVT used variable regions.
The system was written in assembly language by thousands of programmers, resulting in millions of lines of code. The operating system itself required large amounts of memory for its code and tables. Operating-system overhead often consumed one-half of the total CPU cycles. Over the years, new versions were released to add new features and to fix errors. However, fixing one error often caused another in some remote part of the system, so the number of known errors in the system remained fairly constant.
Virtual memory was added to OS/360 with the change to the IBM 370 architecture. The underlying hardware provided a segmented, paged virtual memory. New versions of OS used this hardware in different ways.