See: Vagrant: Up and Running: Create and Manage Virtualized Development Environments 1st Edition
Tag: Virtualization
Return to Timeline of the History of Computers
This article is about the computer memory management technique. For the technique of pooling multiple storage devices, see Storage virtualization.

“In computing, virtual memory, or virtual storage[b] is a memory management technique that provides an “idealized abstraction of the storage resources that are actually available on a given machine”[3] which “creates the illusion to users of a very large (main) memory”.[4]” (WP)
“The computer’s operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.” (WP)
“The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, ability to share memory used by libraries between processes, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging or segmentation.” (WP)
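The mapping the quoted passages describe can be sketched in a few lines of Python. The page size, the page-table contents, and the `translate` helper below are purely illustrative assumptions; a real MMU performs this lookup in hardware and triggers a page fault when a virtual page is not resident in physical memory.

```python
# Toy model of MMU-style address translation (illustrative only).
PAGE_SIZE = 4096  # assume 4 KiB pages

# Hypothetical page table: virtual page number -> physical frame number.
# None means the page is not resident and would have to be fetched from auxiliary storage.
page_table = {0: 7, 1: 3, 2: None, 3: 12}

def translate(virtual_address):
    """Map a virtual address to a physical address via the page table."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table.get(vpn)
    if frame is None:
        # A real OS would now page the data in from disk,
        # evicting another frame if physical memory is full.
        raise RuntimeError(f"page fault: virtual page {vpn} not in memory")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # virtual page 1, offset 0xABC -> frame 3 -> 0x3ABC
```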
1962
Virtual Memory
Fritz-Rudolf Güntsch (1925–2012), Tom Kilburn (1921–2001)
Memory was one of the key limitations of early computers. To deal with this limitation, programmers would split code and data into many different pieces, bring them into memory for processing, and then swap them back out to the auxiliary storage device when the computer’s memory was needed for something else. All of this movement was controlled by the programmer, and it took a lot of effort to get everything correct.
Working on his doctoral thesis at Technical University Berlin, Fritz-Rudolf Güntsch proposed having the computer automatically move data from storage into memory when it was referenced, and automatically save it back when the memory was needed for something else. To make this work, the computer’s data would be assigned to different portions of a larger virtual memory address space, which the computer would map, or assign, to much smaller pages of physical memory as needed.
Even by the standards of the day, the machine that Güntsch designed was tiny: it had just six blocks of core memory, each 100 words in size, that it used to create a virtual address space of 1,000 blocks. But he accurately described the idea.
Meanwhile, at the University of Manchester, a team led by Tom Kilburn was working to build a machine that was massive and fast by the standards of the day. For that machine, they designed and implemented a virtual memory system with 16,384 words of core memory and an auxiliary storage of 98,304 words. Each word had 48 bits, large enough to hold a floating-point number or two integers. Originally called MUSE, for “microsecond engine,” and later renamed Atlas, the transistorized computer took six years to create, with the British electronics firms Ferranti and Plessey joining in.
Today virtual memory is a standard part of every modern operating system: even cell phones use virtual memory. Seymour Cray (1925–1996), known for creating the world’s fastest computers in the 1960s, ’70s, ’80s, and ’90s, famously eschewed virtual memory because moving data between auxiliary storage and main memory takes precious time and slows down a computer. “You can’t fake what you don’t have,” Cray often said.
When it comes to virtual memory, it turns out that you can.
SEE ALSO Floating-Point Numbers (1914), Time-Sharing (1961)
Console of the Atlas 1 computer, used to provide information on the operation of the machine. The computer was installed at the Atlas Computer Laboratory in Britain and was mainly used for particle physics research.
Return to Timeline of the History of Computers
1961
Time-Sharing
John Warner Backus (1924–2007), Fernando J. Corbató (b. 1926)
A computer’s CPU can run only one program at a time. Although it was possible to sit down at the early computers and use them interactively, such personal use was generally regarded as a waste of fantastically expensive computing resources. That’s why batch processing became the standard way that most computers ran in the 1950s: it was more efficient to load many programs onto a tape and run them in rapid succession, and then make the printouts available to the much slower humans in due time.
But while batch processing was efficient for the computer, it was lousy for humans. Tiny programming bugs resulting from a single mistyped letter might not be discovered for many hours—typically not until the next day—when the results of the batch run were made available.
Researchers at MIT realized that a single CPU could be shared among several people at the same time if it switched between their programs, running each for perhaps a tenth of a second. From the users’ point of view, the computer would appear to run more slowly, but the system would still be more efficient for them, because they would find out about their bugs in seconds rather than hours.
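A minimal sketch of that switching scheme, assuming each “program” is modeled as a Python generator that yields control at the end of every time slice (the job names and slice counts below are invented for illustration):

```python
from collections import deque

def job(name, steps):
    """A toy 'program' that needs a given number of time slices to finish."""
    for i in range(steps):
        print(f"{name}: slice {i + 1}/{steps}")
        yield  # give the CPU back to the scheduler

def round_robin(jobs):
    """Run each job for one time slice in turn until all are finished."""
    ready = deque(jobs)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run one time slice
            ready.append(current)  # not finished: back of the queue
        except StopIteration:
            pass                   # job is done; drop it

round_robin([job("alice", 3), job("bob", 2), job("carol", 4)])
```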
John Backus first proposed this method in 1954 at an MIT summer session sponsored by the Office of Naval Research, but it couldn’t be demonstrated until IBM delivered its 7090 computer to MIT—a computer that was large enough to hold several programs in memory at the same time.
MIT professor Fernando J. Corbató demonstrated his Experimental Time-Sharing System in November 1961. The system time-shared between four users. The operating system had 18 commands, including login, logout, edit (an interactive text editor), listf (list files), and mad (an early programming language). Later, this became the Compatible Time-Sharing System (CTSS), so named because it could support both interactive time-sharing and batch processing at the same time. Corbató was awarded the 1990 A.M. Turing Award for his work on CTSS and Multics.
Time-sharing soon became the dominant way of interactive computing and remained so until the PC revolution of the 1980s.
SEE ALSO Utility Computing (1969), UNIX (1969)
Photograph of Fernando Corbató at MIT in the 1960s.
External Links:
Kubernetes (K8s) is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.
Many cloud providers offer Kubernetes as a managed platform as a service (PaaS), or provide infrastructure as a service (IaaS) on which Kubernetes can be deployed. Many vendors also provide their own branded Kubernetes distributions, such as Microsoft Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS) on AWS, DigitalOcean Kubernetes Service, and Google Kubernetes Engine (GKE) on GCP.
What is Kubernetes?
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.
Source: What is Kubernetes
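As a concrete (and purely illustrative) example of working with those logical units (Pods), the sketch below lists the Pods running in a cluster using the official `kubernetes` Python client; it assumes the client is installed (`pip install kubernetes`) and that a valid kubeconfig is available.

```python
# Minimal sketch: list the Pods a cluster is currently running.
# Assumes `pip install kubernetes` and a working kubeconfig (e.g. ~/.kube/config).
from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from the kubeconfig
v1 = client.CoreV1Api()     # client for the core API group (Pods, Services, ...)

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}  "
          f"node={pod.spec.node_name}  phase={pod.status.phase}")
```

This is roughly the listing that `kubectl get pods --all-namespaces -o wide` produces from the command line.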
History
Kubernetes is a descendant of Borg, Google’s internal cluster-management system.
The first unified container-management system developed at Google was the system we internally call Borg. It was built to manage both long-running services and batch jobs, which had previously been handled by two separate systems: Babysitter and the Global Work Queue. The latter’s architecture strongly influenced Borg, but was focused on batch jobs; both predated Linux control groups.
Source: Kubernetes Past
Date of Birth
Kubernetes celebrates its birthday every year on July 21. Kubernetes 1.0 was released on July 21, 2015, after the project was first announced to the public at DockerCon in June 2014.
Starting Point
A place that marks the beginning of a journey
- Kubernetes Community Overview and Contributions Guide by Ihor Dvoretskyi
- Are you Ready to Manage your Infrastructure like Google?
- Google is years ahead when it comes to the cloud, but it’s happy the world is catching up
- An Intro to Google’s Kubernetes and How to Use It by Laura Frank
- Kubernetes: The Future of Cloud Hosting by Meteorhacks
- Kubernetes by Google by Gaston Pantana
- Key Concepts by Arun Gupta
- Application Containers: Kubernetes and Docker from Scratch by Keith Tenzer
- Learn the Kubernetes Key Concepts in 10 Minutes by Omer Dawelbeit
- The Children’s Illustrated Guide to Kubernetes by Deis
- The ‘kubectl run’ command by Michael Hausenblas
- Docker Kubernetes Lab Handbook by Peng Xiao
- Curated Resources for Kubernetes
- Kubernetes Comic by Google Cloud Platform
- Kubernetes 101: Pods, Nodes, Containers, and Clusters by Dan Sanche
- An Introduction to Kubernetes by Justin Ellingwood
- Kubernetes and everything else – Introduction to Kubernetes and it’s context by Rinor Maloku
- Installation on Centos 7
- Setting Up a Kubernetes Cluster on Ubuntu 18.04
- Cloud Native Landscape
- The Kubernetes Handbook by Farhan Hasin Chowdhury