Software Engineering


The YouTube logo is a red rounded-rectangular box with a white "play" button inside and the word "YouTube" written in black.
Type of business: Subsidiary
Type of site: Video hosting service
Founded: February 14, 2005
Headquarters: 901 Cherry Avenue, San Bruno, California, United States
Area served: Worldwide (excluding blocked countries)
Founders: Chad Hurley, Steve Chen, Jawed Karim
Key people: Susan Wojcicki (CEO), Chad Hurley (advisor)
Industry: Internet, video hosting service
Products: YouTube Premium, YouTube Music, YouTube TV, YouTube Kids
Revenue: US$15 billion (2019)[1]
Parent: Google LLC (2006–present)
Advertising: Google AdSense
Registration: Optional; not required to watch most videos, but required for certain tasks such as uploading videos, viewing flagged (18+) videos, creating playlists, liking or disliking videos, and posting comments
Users: 2 billion (October 2020)[2]
Launched: February 14, 2005
Current status: Active
Content license: Uploader holds copyright (standard license); Creative Commons can be selected
Written in: Python (core/API),[3] C (through CPython), C++, Java (through Guice platform),[4][5] Go,[6] JavaScript (UI)

“YouTube is an American online video-sharing platform headquartered in San Bruno, California. The service, created in February 2005 by three former PayPal employees—Chad Hurley, Steve Chen, and Jawed Karim—was bought by Google in November 2006 for US$1.65 billion and now operates as one of the company’s subsidiaries. YouTube is the second most-visited website after Google Search, according to Alexa Internet rankings.[7]

YouTube allows users to upload, view, rate, share, add to playlists, report, and comment on videos, and to subscribe to other users. Available content includes video clips, TV show clips, music videos, short and documentary films, audio recordings, movie trailers, live streams, video blogs, short original videos, and educational videos. Most content is generated and uploaded by individuals, but media corporations including CBS, the BBC, Vevo, and Hulu offer some of their material via YouTube as part of the YouTube partnership program. Unregistered users can watch, but not upload, videos on the site, while registered users can upload an unlimited number of videos and add comments. Age-restricted videos are available only to registered users affirming themselves to be at least 18 years old.

As of May 2019, more than 500 hours of content were uploaded to YouTube each minute, and one billion hours of content were watched on YouTube every day.[8] YouTube and selected creators earn advertising revenue from Google AdSense, a program that targets ads according to site content and audience. The vast majority of videos are free to view, but there are exceptions, including subscription-based premium channels and film rentals, as well as YouTube Music and YouTube Premium, subscription services offering ad-free music streaming and ad-free access to all content (including exclusive content commissioned from notable personalities), respectively. Based on reported quarterly advertising revenue, YouTube is estimated to have US$15 billion in annual revenues.


” (WP)


Fair Use Sources:

Artificial Intelligence GCP History Software Engineering

Google Releases TensorFlow – 2015 AD

Return to Timeline of the History of Computers


Google Releases TensorFlow

Makoto Koike (dates unavailable)

“Cucumbers are a big culinary deal in Japan. The amount of work that goes into growing them can be repetitive and laborious, such as the task of hand-sorting them for quality based on size, shape, color, and prickles. An embedded-systems designer who happens to be the son of a cucumber farmer (and future inheritor of the cucumber farm) had the novel idea of automating his mother’s nine-category sorting process with a sorting robot (that he designed) and some fancy machine learning (ML) algorithms. With Google’s release of its open source machine learning library, TensorFlow®, Makoto Koike was able to do just that.

TensorFlow, a deep learning neural network, evolved from Google’s DistBelief, a proprietary machine learning system that the company used for a variety of its applications. (Machine learning allows computers to find relationships and perform classifications without being explicitly programmed regarding the details.) While TensorFlow was not the first open source library for machine learning, its release was important for a few reasons. First, the code was easier to read and implement than most of the other platforms out there. Second, it used Python, an easy-to-use computer language widely taught in schools, yet powerful enough for many scientific computing and machine learning tasks. TensorFlow also had great support, documentation, and a dynamic visualization tool, and it was as practical to use for research as it was for production. It ran on a variety of hardware, from high-powered supercomputers to mobile phones. And it certainly didn’t hurt that it was a product of one of the world’s behemoth tech companies whose most valuable asset is the gasoline that fuels ML and AI—data.

These factors helped to drive TensorFlow’s popularity. The greater the number of people using it, the faster it improved, and the more areas in which it was applied. This was a good thing for the entire AI industry. Allowing code to be open source and sharing knowledge and data from disparate domains and industries is what the field needed (and still needs) to move forward. TensorFlow’s reach and usability helped democratize experimentation and deployment of AI and ML applications. Rather than being exclusive to companies and research institutions, AI and ML capabilities were now in reach of individual consumers — such as cucumber farmers.”
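The core mechanism that made libraries like TensorFlow practical is reverse-mode automatic differentiation: the library records the operations that produce a result and then propagates gradients backward through them. The following is a minimal pure-Python sketch of that idea; the class and method names (Var, backward) are illustrative inventions, not TensorFlow's actual API.

```python
import math

class Var:
    """A scalar value that records the operations producing it."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents  # (parent_var, local_gradient) pairs

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def tanh(self):
        # A nonlinearity, as used in neural-network layers.
        t = math.tanh(self.value)
        return Var(t, ((self, 1.0 - t * t),))

    def backward(self):
        # Topologically order the graph, then push gradients backward
        # so every node is processed after all its uses.
        topo, visited = [], set()
        def build(node):
            if id(node) not in visited:
                visited.add(id(node))
                for parent, _ in node._parents:
                    build(parent)
                topo.append(node)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            for parent, local_grad in node._parents:
                parent.grad += node.grad * local_grad

# d/dx of (x*y + x) at x=2, y=3 is y + 1 = 4
x, y = Var(2.0), Var(3.0)
z = x * y + x
z.backward()
print(x.grad)  # 4.0
```

Real frameworks apply the same bookkeeping to tensors rather than scalars, and add optimizers, GPU kernels, and serialization on top.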

SEE ALSO: GNU Manifesto (1985), Computer Beats Master at Go (2016), Artificial General Intelligence (AGI) (~2050)

TensorFlow’s hallucinogenic images show the kinds of mathematical structures that neural networks construct in order to recognize and classify images.

Fair Use Sources: B07C2NQSPV

Knight, Will. “Here’s What Developers Are Doing with Google’s AI Brain.” MIT Technology Review, December 8, 2015.

Metz, Cade. “Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine.” Wired online, November 9, 2015.

Data Science - Big Data History

Apache Hadoop Distributed File System (HDFS) with MapReduce Makes Big Data Possible – 2006 AD



Hadoop Makes Big Data Possible

Doug Cutting (dates unavailable)

“Parallelism is the key to computing with massive data: break a problem into many small pieces and attack them all at the same time, each with a different computer. But until the early 2000s, most large-scale parallel systems were based on the scientific computing model: they were one-of-a-kind, high-performance clusters built with expensive, high-reliability components. Hard to program, these systems mostly ran custom software to solve problems such as simulating nuclear-weapon explosions.

Hadoop takes a different approach. Instead of specialty hardware, Hadoop lets corporations, schools, and even individual users build parallel processing systems from ordinary computers. Multiple copies of the data are distributed across multiple hard drives in different computers; if one drive or system fails, Hadoop replicates one of the other copies. Instead of moving large amounts of data over a network to super-fast CPUs, Hadoop moves a copy of the program to the data.

Hadoop got its start at the Internet Archive, where Doug Cutting was developing an internet search engine. A few years into the project, Cutting came across a pair of academic papers from Google, one describing the distributed file system that Google had created for storing data in its massive clusters, and the other describing Google’s MapReduce system for sending distributed programs to the data. Realizing that Google’s approach was better than his, he rewrote his code to match Google’s design.

In 2006, Cutting recognized that his implementation of the distribution systems could be used for more than running a search engine, so he took 11,000 lines of code out of his system and made them a standalone system. He named it “Hadoop” after one of his son’s toys, a stuffed elephant.

Because the Hadoop code was open source, other companies and individuals could work on it as well. And with the “big data” boom, many needed what Hadoop offered. The code improved, and the systems’ capabilities expanded. By 2015, the open source Hadoop market was valued at $6 billion and estimated to grow to $20 billion by 2020.”
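The MapReduce model that Hadoop implements can be sketched in a few lines: a map phase emits (key, value) pairs, a shuffle groups them by key, and a reduce phase combines each group. This toy word count is an illustration of the programming model only; Hadoop additionally distributes these phases across machines and replicated storage.

```python
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) for every word in a document."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts emitted for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["big data needs parallelism", "big clusters process big data"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

Because map and reduce are pure functions over independent chunks, the framework can run thousands of copies in parallel and rerun any chunk whose machine fails.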

SEE ALSO Connection Machine (1985), GNU Manifesto (1985)

Although the big-data program Hadoop is typically run on high-performance clusters, hobbyists have also run it, as a hack, on tiny underpowered machines like these Cubieboards.

Fair Use Sources: B07C2NQSPV

Dean, Jeffrey, and Sanjay Ghemawat. “MapReduce: Simplified Data Processing on Large Clusters.” In Proceedings of the Sixth Symposium on Operating System Design and Implementation (OSDI ’04): December 6–8, 2004, San Francisco, CA. Berkeley, CA: USENIX Association, 2004.

Ghemawat, Sanjay, Howard Gobioff, and Shun-Tak Leung. “The Google File System.” In SOSP ’03: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, 29–43. Vol. 37, no. 5 of Operating Systems Review. New York: Association for Computing Machinery, October 2003.

Artificial Intelligence History

CAPTCHA – Completely Automated Public Turing test to tell Computers and Humans Apart – 2003 AD




“CAPTCHAs are tests administered by a computer to distinguish a human from a bot, or a piece of software that is pretending to be a person. They were created to prevent programs (more correctly, people using programs) from abusing online services that were created to be used by people. For example, companies that provide free email services to consumers sometimes use a CAPTCHA to prevent scammers from registering thousands of email addresses within a few minutes. CAPTCHAs have also been used to limit spam and restrict editing to internet social media pages.

CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. The term was coined in 2003 by computer scientists at Carnegie Mellon; however, the technique itself dates to patents filed in 1997 and 1998 by two separate teams at Sanctum, an application security company later acquired by IBM, and AltaVista that describe the technique in detail.

One clever application of CAPTCHAs is to improve and speed up the digitization of old books and other paper-based text material. The ReCAPTCHA program takes words that are illegible to OCR (Optical Character Recognition) technology when scanned and uses them as the puzzles to be retyped. Licensed to Google, this approach helps improve the accuracy of Google’s book-digitizing project by having humans provide “correct” recognition of words too fuzzy for current OCR technology. Google can then use the images and human-provided recognition as training data for further improving its automated systems.

As AI has improved, the ability of a machine to solve CAPTCHA puzzles has improved as well, creating a sort of arms race, as each side tries to improve. Different approaches have evolved over the years to create puzzles that are hard for computers but easy for people. For example, one of Google’s CAPTCHAs simply asks users to click a box that says “I am not a robot”—meanwhile, Google’s servers analyze the user’s mouse movements, examine the cookies, and even review the user’s browsing history to make sure the user is legitimate. Techniques to break or get around CAPTCHA puzzles also drive the improvement and evolution of CAPTCHA. One manual example of this is the use of “digital sweatshop workers” who type CAPTCHA solutions for human spammers, reducing the effectiveness of CAPTCHAs to limit the abuse of computer resources.”
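The reCAPTCHA scheme described above can be sketched as a grading rule: pair one word whose answer is already known (the control) with one that OCR could not read, trust the unknown transcription only when the control is typed correctly, and accept it once enough independent users agree. All names and the vote threshold below are illustrative assumptions, not the actual reCAPTCHA implementation.

```python
def grade_response(control_answer, user_control, user_unknown, votes, threshold=3):
    """Return the accepted transcription of the unknown word, if any."""
    if user_control.strip().lower() != control_answer.lower():
        return None  # failed the control word: discard this response
    word = user_unknown.strip().lower()
    votes[word] = votes.get(word, 0) + 1
    # Accept a transcription once enough independent users agree on it.
    if votes[word] >= threshold:
        return word
    return None

# Three users each see a different control word plus the same unknown word.
votes = {}
responses = [("morning", "morning", "upon"),
             ("river", "river", "upon"),
             ("stone", "stone", "upon")]
accepted = None
for control, typed_control, typed_unknown in responses:
    accepted = grade_response(control, typed_control, typed_unknown, votes) or accepted
print(accepted)  # upon
```

The control word proves the respondent is (probably) human; the agreement threshold turns many unreliable human readings into one trustworthy label for training OCR systems.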

SEE ALSO The Turing Test (1951), First Internet Spam Message (1978)

CAPTCHAs require human users to enter a series of characters or take specific actions to prove they are not robots.

Fair Use Sources: B07C2NQSPV

GCP History

Google – 1998 AD




Larry Page (b. 1973), Sergey Brin (b. 1973)

“The seed for what would become Google started with Stanford graduate student Larry Page’s curiosity about the organization of pages on the World Wide Web. Web links famously point forward. Page wanted to be able to go in the other direction.

To go backward, Page built a web crawler to scan the internet and organize all the links, named BackRub for the backlinks it sought to map out. He also recognized that being able to qualify the importance of the links would be of great use as well. Sergey Brin, a fellow graduate student, joined Page on the project, and they soon developed an algorithm that would not only identify and count the links to a page but also rank their importance based on quality of the pages from where the links originated. Soon thereafter, they gave their tool a search interface and a ranking algorithm, which they called PageRank. The effort eventually evolved into a full-blown business in 1998, with revenue coming primarily from advertisers who bid to show advertisements on search result pages.

In the following years, Google acquired a multitude of companies, including a video-streaming service called YouTube, an online advertising giant called DoubleClick, and cell phone maker Motorola, growing into an entire ecosystem of offerings providing email, navigation, social networking, video chat, photo organization, and a hardware division with its own smartphone. Recent research has focused on deep learning and AI (DeepMind), gearing up for the tech industry’s next battle—not over speed, but intelligence.

Merriam-Webster’s Collegiate Dictionary and the Oxford English Dictionary both added the word Google as a verb in 2006, meaning to search for something online using the Google search engine. At Google’s request, the definitions refer explicitly to the use of the Google engine, rather than the generic use of the word to describe any internet search.

On October 2, 2015, Google created a parent company to function as an umbrella over all its various subsidiaries. Called Alphabet Inc., the American multinational conglomerate is headquartered in Mountain View, California, and has more than 70,000 employees worldwide.”
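The PageRank idea described above—ranking a page by the quality of the pages that link to it—is commonly computed by power iteration: repeatedly redistribute each page's rank along its outgoing links until the values settle. The tiny link graph and damping factor below are illustrative; this is a sketch of the published algorithm, not Google's production system.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline rank (the "random surfer"
        # jumping to an arbitrary page), plus what its inbound links send.
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank everywhere
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
rank = pagerank(links)
# "c" receives links from both "a" and "b", so it ranks highest.
print(max(rank, key=rank.get))  # c
```

The key property, visible even at this scale, is that a link from a highly ranked page is worth more than a link from an obscure one, which is what made the ranking hard to spam with mere link counts.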

SEE ALSO: First Banner Ad (1994)

Google’s self-described mission is to “organize the world’s information and make it universally accessible and useful.”

Fair Use Sources: B07C2NQSPV

Battelle, John. “The Birth of Google.” Wired, August 1, 2005.

Brin, Sergey, and Lawrence Page. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” In Proceedings of the Seventh International Conference on World Wide Web 7. Brisbane, Australia: Elsevier, 1998, 107–17.

GCP History Software Engineering

Google’s Dart Programming Language Invented by Lars Bak and Kasper Lund – 2011 AD


Google developed the open source web-based Dart programming language, introducing it to the public in October 2011.

Fair Use Sources:

Go Programming Language History Software Engineering

Google’s Go Programming Language (Golang) Invented by Robert Griesemer, Rob Pike, and Ken Thompson – 2009 AD


The Go programming language was developed at Google starting in 2007. It was completed and introduced to the public in 2009.

Go is a statically typed, compiled programming language designed at Google[14] by Robert Griesemer, Rob Pike, and Ken Thompson.[12] Go is syntactically similar to C, but with memory safety, garbage collection, structural typing,[6] and CSP-style concurrency.[15] The language is often referred to as Golang because of its domain name, golang.org, but the proper name is Go.[16]

There are two major implementations: Google's self-hosting compiler toolchain, gc, which targets multiple operating systems; and gofrontend, a frontend for other compilers, used with GCC as gccgo.

A third-party transpiler, GopherJS,[22] compiles Go to JavaScript for front-end web development.

Fair Use Sources:

Cloud DevOps

Kubernetes (K8S) Container-Orchestration System

Kubernetes (K8s) is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.

Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions such as Microsoft Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS) on AWS, DigitalOcean Kubernetes Service, and Google Kubernetes Engine (GKE) on GCP.

What is Kubernetes?

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

Source: What is Kubernetes


Kubernetes is a descendant of Borg, Google's internal cluster-management system.

The first unified container-management system developed at Google was the system we internally call Borg. It was built to manage both long-running services and batch jobs, which had previously been handled by two separate systems: Babysitter and the Global Work Queue. The latter’s architecture strongly influenced Borg, but was focused on batch jobs; both predated Linux control groups.

Source: Kubernetes Past

Date of Birth

Kubernetes celebrates its birthday every year on July 21. Kubernetes 1.0 was released on July 21, 2015, after being first announced to the public at DockerCon in June 2014.


Artificial Intelligence GCP History

Computer Beats Master at Game of Go – 2016 AD

Computer Beats Master at Go

“The path for machine victory over the humans who play the ancient Chinese game of Go was not achieved through mathematical superiority, because Go is a very different game from chess.

Rather than the 8 × 8 grid for chess, Go is played on a 19 × 19 board, with each player having dozens of black or white stones. Each stone has the same value—unlike chess, in which the pieces are not all equal. The rules of Go are fairly straightforward—the two players try to surround each other’s stones and take territory from each other. However, because of the size of the grid, the number of potential positions in Go is staggering—considerably larger than the number of atoms in the Universe.

This sheer complexity is why intuition is so often cited as a key factor in winning the game, and why a computer program beating one of the best Go players that ever lived was considered so significant. As players add more stones to the board, the number of possible countermoves and counter-countermoves grows exponentially. As a result, brute-force “look-ahead” computing approaches to solving Go just can’t look far enough ahead: computers aren’t big enough. The Universe isn’t big enough.

AlphaGo® is the AI program that beat South Korean Go master Lee Sedol (b. 1983) in March 2016, winning four out of five games by adopting the same sort of strategic search strategies a human would. The program was created by the Google DeepMind team, which evolved from Google’s acquisition of DeepMind Technologies, a British AI company that built a neural network to play video games like a human.

Lee Sedol did win once, however, so the computer did not dominate the match. In game four, white move 78, Lee Sedol found AlphaGo’s Achilles’ heel and made a move that so thoroughly confused the system that it started to make rookie mistakes, not recovering in time to save the game. The irony is that Sedol placed the stone where he did because AlphaGo had put him in a position where he saw no alternative move to make.”
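The claim that Go's position count dwarfs the number of atoms in the universe checks out with quick arithmetic. A rough upper bound counts each of the 361 board points as empty, black, or white (this overcounts, since it includes illegal positions, but it is the standard back-of-the-envelope figure):

```python
import math

# Upper bound on Go positions: 3 states per intersection on a 19 x 19 board.
board_points = 19 * 19            # 361 intersections
positions_upper_bound = 3 ** board_points
atoms_in_universe = 10 ** 80      # commonly cited order of magnitude

print(round(math.log10(positions_upper_bound)))        # 172 -> about 10**172
print(positions_upper_bound > atoms_in_universe ** 2)  # True
```

At roughly 10^172 versus 10^80, even the square of the atom count is smaller than the position count, which is why exhaustive look-ahead search is hopeless and AlphaGo had to rely on learned evaluation instead.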

SEE ALSO Computer Is World Chess Champion (1997)

“Go is played on a 19 × 19 board, with one player using black stones and the other using white stones, all possessing the same value.”

Fair Use Source: B07C2NQSPV

Byford, Sam. “Why Is Google’s Go Win Such a Big Deal?” The Verge, March 9, 2016.

House, Patrick. “The Electronic Holy War.” New Yorker online, May 25, 2014.

Koch, Christof. “How the Computer Beat the Go Master.” Scientific American online, March 19, 2016.

Moyer, Christopher. “How Google’s AlphaGo Beat a Go World Champion.” Atlantic online, March 28, 2016.