See: How Google Tests Software 1st Edition
|Type of business||Subsidiary|
|Type of site||Video hosting service|
|Founded||February 14, 2005|
|Headquarters||901 Cherry Avenue, San Bruno, California, United States|
|Area served||Worldwide (excluding blocked countries)|
|Founder(s)||Chad Hurley, Steve Chen, Jawed Karim|
|Key people||Susan Wojcicki (CEO)|
Chad Hurley (advisor)
|Industry||Internet, Video hosting service|
|Revenue||US$15 billion (2019)|
|Parent||Google LLC (2006–present)|
|Registration||Optional; not required to watch most videos, but required for certain tasks such as uploading videos, viewing flagged (18+) videos, creating playlists, liking or disliking videos, and posting comments|
|Users||2 billion (October 2020)|
|Launched||February 14, 2005|
|Content license||Uploader holds copyright (standard license); Creative Commons can be selected.|
YouTube is an American online video-sharing platform headquartered in San Bruno, California. The service, created in February 2005 by three former PayPal employees—Chad Hurley, Steve Chen, and Jawed Karim—was bought by Google in November 2006 for US$1.65 billion and now operates as one of the company’s subsidiaries. YouTube is the second most-visited website after Google Search, according to Alexa Internet rankings.
YouTube allows users to upload, view, rate, share, add to playlists, report, comment on videos, and subscribe to other users. Available content includes video clips, TV show clips, music videos, short and documentary films, audio recordings, movie trailers, live streams, video blogging, short original videos, and educational videos. Most content is generated and uploaded by individuals, but media corporations including CBS, the BBC, Vevo, and Hulu offer some of their material via YouTube as part of the YouTube partnership program. Unregistered users can watch, but not upload, videos on the site, while registered users can upload an unlimited number of videos and add comments. Age-restricted videos are available only to registered users affirming themselves to be at least 18 years old.
As of May 2019, more than 500 hours of content were uploaded to YouTube each minute, and one billion hours of content were watched on YouTube every day. YouTube and selected creators earn advertising revenue from Google AdSense, a program that targets ads according to site content and audience. The vast majority of videos are free to view, but there are exceptions, including subscription-based premium channels, film rentals, and two subscription services: YouTube Music, which offers premium, ad-free music streaming, and YouTube Premium, which offers ad-free access to all content, including exclusive content commissioned from notable personalities. Based on reported quarterly advertising revenue, YouTube is estimated to have US$15 billion in annual revenues.
- Dickey, Megan Rose (February 15, 2013). “The 22 Key Turning Points in the History of YouTube”. Business Insider. Retrieved March 25, 2017.
- Haran, Brady; Hamilton, Ted. “Why do YouTube views freeze at 301?”. Numberphile. Brady Haran. Archived from the original on December 26, 2016. Retrieved April 8, 2013.
- Kelsey, Todd (2010). Social Networking Spaces: From Facebook to Twitter and Everything In Between. Springer-Verlag. ISBN 978-1-4302-2596-6.
- Lacy, Sarah (2008). The Stories of Facebook, YouTube and MySpace: The People, the Hype and the Deals Behind the Giants of Web 2.0. Richmond: Crimson. ISBN 978-1-85458-453-3.
- Walker, Rob (June 28, 2012). “On YouTube, Amateur Is the New Pro”. The New York Times. Retrieved March 26, 2017.
- Official website (Mobile)
- YouTube for Press
- YouTube on Blogger
- YouTube – Google Developers
- Are YouTubers Revolutionizing Entertainment? (June 6, 2013), video produced for PBS by Off Book.
Return to Timeline of the History of Computers
Google Releases TensorFlow
Makoto Koike (dates unavailable)
“Cucumbers are a big culinary deal in Japan. The amount of work that goes into growing them can be repetitive and laborious, such as the task of hand-sorting them for quality based on size, shape, color, and prickles. An embedded-systems designer who happens to be the son of a cucumber farmer (and future inheritor of the cucumber farm) had the novel idea of automating his mother’s nine-category sorting process with a sorting robot (that he designed) and some fancy machine learning (ML) algorithms. With Google’s release of its open source machine learning library, TensorFlow®, Makoto Koike was able to do just that.
TensorFlow, a library for building and training deep neural networks, evolved from Google’s DistBelief, a proprietary machine learning system that the company used for a variety of its applications. (Machine learning allows computers to find relationships and perform classifications without being explicitly programmed regarding the details.) While TensorFlow was not the first open source library for machine learning, its release was important for a few reasons. First, the code was easier to read and implement than most of the other platforms out there. Second, it used Python, an easy-to-use computer language widely taught in schools, yet powerful enough for many scientific computing and machine learning tasks. TensorFlow also had great support, documentation, and a dynamic visualization tool, and it was as practical to use for research as it was for production. It ran on a variety of hardware, from high-powered supercomputers to mobile phones. And it certainly didn’t hurt that it was a product of one of the world’s behemoth tech companies whose most valuable asset is the gasoline that fuels ML and AI—data.
These factors helped to drive TensorFlow’s popularity. The greater the number of people using it, the faster it improved, and the more areas in which it was applied. This was a good thing for the entire AI industry. Allowing code to be open source and sharing knowledge and data from disparate domains and industries is what the field needed (and still needs) to move forward. TensorFlow’s reach and usability helped democratize experimentation and deployment of AI and ML applications. Rather than being exclusive to companies and research institutions, AI and ML capabilities were now in reach of individual consumers — such as cucumber farmers.”
TensorFlow’s hallucinogenic images show the kinds of mathematical structures that neural networks construct in order to recognize and classify images.
Knight, Will. “Here’s What Developers Are Doing with Google’s AI Brain.” MIT Technology Review, December 8, 2015. https://www.technologyreview.com/s/544356/heres-what-developers-are-doing-with-googles-ai-brain.
Metz, Cade. “Google Just Open Sources TensorFlow, Its Artificial Intelligence Engine.” Wired online, November 9, 2015. https://www.wired.com/2015/11/google-open-sources-its-artificial-intelligence-engine.
Return to Timeline of the History of Computers
Hadoop Makes Big Data Possible
Doug Cutting (dates unavailable)
“Parallelism is the key to computing with massive data: break a problem into many small pieces and attack them all at the same time, each with a different computer. But until the early 2000s, most large-scale parallel systems were based on the scientific computing model: they were one-of-a-kind, high-performance clusters built with expensive, high-reliability components. Hard to program, these systems mostly ran custom software to solve problems such as simulating nuclear-weapon explosions.
Hadoop takes a different approach. Instead of specialty hardware, Hadoop lets corporations, schools, and even individual users build parallel processing systems from ordinary computers. Multiple copies of the data are distributed across multiple hard drives in different computers; if one drive or system fails, Hadoop reads the data from one of the other copies and makes a new replica to replace the lost one. Instead of moving large amounts of data over a network to super-fast CPUs, Hadoop moves a copy of the program to the data.
Hadoop got its start at the Internet Archive, where Doug Cutting was developing an internet search engine. A few years into the project, Cutting came across a pair of academic papers from Google, one describing the distributed file system that Google had created for storing data in its massive clusters, and the other describing Google’s MapReduce system for sending distributed programs to the data. Realizing that Google’s approach was better than his, he rewrote his code to match Google’s design.
In 2006, Cutting recognized that his implementation of these distributed systems could be used for more than running a search engine, so he took 11,000 lines of code out of his system and made them a standalone system. He named it “Hadoop” after one of his son’s toys, a stuffed elephant.
Because the Hadoop code was open source, other companies and individuals could work on it as well. And with the “big data” boom, many needed what Hadoop offered. The code improved, and the systems’ capabilities expanded. By 2015, the open source Hadoop market was valued at $6 billion and estimated to grow to $20 billion by 2020.”
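The MapReduce idea from those papers, mapping over local data splits and then grouping and reducing by key, can be sketched as an in-memory word count. This is a toy illustration, not Hadoop's API: real Hadoop runs each map task on the machine that holds the data block and shuffles the emitted pairs across the network before the reduce phase.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// wordCount runs the MapReduce phases in memory: the map phase emits
// (word, 1) pairs for each input split, then the shuffle/reduce phase
// groups the pairs by word and sums the counts.
func wordCount(splits []string) map[string]int {
	type pair struct {
		word  string
		count int
	}
	// Map phase: in Hadoop, each split is processed on the
	// machine that stores it.
	var emitted []pair
	for _, split := range splits {
		for _, w := range strings.Fields(split) {
			emitted = append(emitted, pair{strings.ToLower(w), 1})
		}
	}
	// Shuffle + reduce phase: group by key and sum each group.
	counts := make(map[string]int)
	for _, p := range emitted {
		counts[p.word] += p.count
	}
	return counts
}

func main() {
	splits := []string{"the quick brown fox", "the lazy dog the end"}
	counts := wordCount(splits)
	words := make([]string, 0, len(counts))
	for w := range counts {
		words = append(words, w)
	}
	sort.Strings(words)
	for _, w := range words {
		fmt.Printf("%s %d\n", w, counts[w])
	}
}
```

Because each split is mapped independently, the map phase parallelizes trivially; the only coordination point is the shuffle that brings all pairs with the same key to one reducer.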
Although the big-data program Hadoop is typically run on high-performance clusters, hobbyists have also run it, as a hack, on tiny underpowered machines like these Cubieboards.
Dean, Jeffrey, and Sanjay Ghemawat. “MapReduce: Simplified Data Processing on Large Clusters.” In Proceedings of the Sixth Symposium on Operating System Design and Implementation (OSDI ’04): December 6–8, 2004, San Francisco, CA. Berkeley, CA: USENIX Association, 2004.
Ghemawat, Sanjay, Howard Gobioff, and Shun-Tak Leung. “The Google File System.” In SOSP ‘03: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, 29–43. Vol. 37, no. 5 of Operating Systems Review. New York: Association for Computing Machinery, October, 2003.
Return to Timeline of the History of Computers
“CAPTCHAs are tests administered by a computer to distinguish a human from a bot, or a piece of software that is pretending to be a person. They were created to prevent programs (more correctly, people using programs) from abusing online services that were created to be used by people. For example, companies that provide free email services to consumers sometimes use a CAPTCHA to prevent scammers from registering thousands of email addresses within a few minutes. CAPTCHAs have also been used to limit spam and restrict editing to internet social media pages.
CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. The term was coined in 2003 by computer scientists at Carnegie Mellon; the technique itself, however, dates to patents filed in 1997 and 1998 by two separate teams, one at Sanctum (an application security company later acquired by IBM) and one at AltaVista, that describe the technique in detail.
One clever application of CAPTCHAs is to improve and speed up the digitization of old books and other paper-based text material. The ReCAPTCHA program takes words that are illegible to OCR (Optical Character Recognition) technology when scanned and uses them as the puzzles to be retyped. Licensed to Google, this approach helps improve the accuracy of Google’s book-digitizing project by having humans provide “correct” recognition of words too fuzzy for current OCR technology. Google can then use the images and human-provided recognition as training data for further improving its automated systems.
As AI has improved, the ability of a machine to solve CAPTCHA puzzles has improved as well, creating a sort of arms race, as each side tries to improve. Different approaches have evolved over the years to create puzzles that are hard for computers but easy for people. For example, one of Google’s CAPTCHAs simply asks users to click a box that says “I am not a robot”—meanwhile, Google’s servers analyze the user’s mouse movements, examine the cookies, and even review the user’s browsing history to make sure the user is legitimate. Techniques to break or get around CAPTCHA puzzles also drive the improvement and evolution of CAPTCHA. One manual example of this is the use of “digital sweatshop workers” who type CAPTCHA solutions for human spammers, reducing the effectiveness of CAPTCHAs to limit the abuse of computer resources.”
CAPTCHAs require human users to enter a series of characters or take specific actions to prove they are not robots.
Return to Timeline of the History of Computers
Larry Page (b. 1973), Sergey Brin (b. 1973)
“The seed for what would become Google started with Stanford graduate student Larry Page’s curiosity about the organization of pages on the World Wide Web. Web links famously point forward. Page wanted to be able to go in the other direction.
To go backward, Page built a web crawler to scan the internet and organize all the links, named BackRub for the backlinks it sought to map out. He also recognized that being able to qualify the importance of the links would be of great use as well. Sergey Brin, a fellow graduate student, joined Page on the project, and they soon developed an algorithm that would not only identify and count the links to a page but also rank their importance based on quality of the pages from where the links originated. Soon thereafter, they gave their tool a search interface and a ranking algorithm, which they called PageRank. The effort eventually evolved into a full-blown business in 1998, with revenue coming primarily from advertisers who bid to show advertisements on search result pages.
In the following years, Google acquired a multitude of companies, including a video-streaming service called YouTube, an online advertising giant called DoubleClick, and cell phone maker Motorola, growing into an entire ecosystem of offerings providing email, navigation, social networking, video chat, photo organization, and a hardware division with its own smartphone. Recent research has focused on deep learning and AI (DeepMind), gearing up for the tech industry’s next battle—not over speed, but intelligence.
Merriam-Webster’s Collegiate Dictionary and the Oxford English Dictionary both added the word Google as a verb in 2006, meaning to search for something online using the Google search engine. At Google’s request, the definitions refer explicitly to the use of the Google engine, rather than the generic use of the word to describe any internet search.
On October 2, 2015, Google created a parent company to function as an umbrella over all its various subsidiaries. Called Alphabet Inc., the American multinational conglomerate is headquartered in Mountain View, California, and has more than 70,000 employees worldwide.”
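The ranking idea described above can be sketched as power iteration over a tiny link graph. This is a simplified illustration, not Google's production algorithm: the graph, the damping value, and the function name are all invented for the example, and dangling pages are simply skipped.

```go
package main

import "fmt"

// pageRank runs power iteration: each page splits its current score
// evenly among its outgoing links, and a damping factor models a
// surfer who occasionally jumps to a random page.
func pageRank(links map[int][]int, n int, damping float64, iters int) []float64 {
	rank := make([]float64, n)
	for i := range rank {
		rank[i] = 1.0 / float64(n) // start with a uniform distribution
	}
	for it := 0; it < iters; it++ {
		next := make([]float64, n)
		for i := range next {
			next[i] = (1 - damping) / float64(n) // random-jump share
		}
		for page, outs := range links {
			if len(outs) == 0 {
				continue // dangling page: ignored in this sketch
			}
			share := rank[page] / float64(len(outs))
			for _, dst := range outs {
				next[dst] += damping * share
			}
		}
		rank = next
	}
	return rank
}

func main() {
	// Tiny 3-page web: pages 0 and 2 link to 1; page 1 links to 2.
	links := map[int][]int{0: {1}, 1: {2}, 2: {1}}
	fmt.Println(pageRank(links, 3, 0.85, 50))
}
```

Note how page 0, which nothing links to, ends up with the lowest score, while pages 1 and 2 reinforce each other: a link from a highly ranked page is worth more than a link from an obscure one, which is exactly the "quality of the pages where the links originated" idea.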
SEE ALSO: First Banner Ad (1994)
Google’s self-described mission is to “organize the world’s information and make it universally accessible and useful.”
Batelle, John. “The Birth of Google.” Wired, August 1, 2005. https://www.wired.com/2005/08/battelle.
Brin, Sergey, and Lawrence Page. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” In Proceedings of the Seventh International Conference on World Wide Web 7. Brisbane, Australia: Elsevier, 1998, 107–17.
Return to Timeline of the History of Computers
The Go programming language was developed at Google starting in 2007 and publicly announced in November 2009.
Go is a statically typed, compiled programming language designed at Google by Robert Griesemer, Rob Pike, and Ken Thompson. Go is syntactically similar to C, but with memory safety, garbage collection, structural typing, and CSP-style concurrency. The language is often referred to as Golang because of its domain name, golang.org, but the proper name is Go.
There are two major implementations:
- Google’s self-hosting compiler toolchain targeting multiple operating systems, mobile devices, and WebAssembly.
- gccgo, a GCC frontend.
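The CSP-style concurrency mentioned above can be sketched in a few lines: goroutines communicate over typed channels instead of sharing memory. The worker-pool shape here is a common idiom, not part of the language specification.

```go
package main

import "fmt"

// worker receives jobs on one channel and sends squared results on
// another; the directional channel types (<-chan, chan<-) are checked
// by the compiler.
func worker(jobs <-chan int, results chan<- int) {
	for j := range jobs {
		results <- j * j
	}
}

func main() {
	jobs := make(chan int, 5)
	results := make(chan int, 5)
	// Start three goroutines that all pull from the same jobs channel.
	for w := 0; w < 3; w++ {
		go worker(jobs, results)
	}
	for i := 1; i <= 5; i++ {
		jobs <- i
	}
	close(jobs) // lets each worker's range loop terminate
	sum := 0
	for i := 0; i < 5; i++ {
		sum += <-results
	}
	fmt.Println(sum) // 1+4+9+16+25 = 55
}
```

The results arrive in nondeterministic order because three workers run concurrently, but the sum is deterministic; that separation of scheduling from correctness is the point of the channel-based design.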
Kubernetes (K8s) is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.
Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions such as Microsoft Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS) on AWS, DigitalOcean Kubernetes Service, and Google Kubernetes Engine (GKE) on GCP.
What is Kubernetes?
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.
Source: What is Kubernetes
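As a concrete sketch of such a logical unit, a minimal Deployment manifest asks Kubernetes to keep a fixed number of container replicas running. All names and the image below are placeholders, not from any particular project.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # placeholder name
spec:
  replicas: 3                # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: hello-web         # must match the pod template labels
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # any container image works here
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f` makes the desired state declarative: if a pod dies, the Deployment controller starts a replacement to get back to three replicas.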
Kubernetes is a descendant of Google’s internal cluster-management system, Borg.
The first unified container-management system developed at Google was the system we internally call Borg. It was built to manage both long-running services and batch jobs, which had previously been handled by two separate systems: Babysitter and the Global Work Queue. The latter’s architecture strongly influenced Borg, but was focused on batch jobs; both predated Linux control groups.
Source: Kubernetes Past
Date of Birth
Kubernetes celebrates its birthday every year on July 21. Kubernetes 1.0 was released on July 21, 2015, after being first announced to the public at DockerCon in June 2014.
A place that marks the beginning of a journey
- Kubernetes Community Overview and Contributions Guide by Ihor Dvoretskyi
- Are you Ready to Manage your Infrastructure like Google?
- Google is years ahead when it comes to the cloud, but it’s happy the world is catching up
- An Intro to Google’s Kubernetes and How to Use It by Laura Frank
- Kubernetes: The Future of Cloud Hosting by Meteorhacks
- Kubernetes by Google by Gaston Pantana
- Key Concepts by Arun Gupta
- Application Containers: Kubernetes and Docker from Scratch by Keith Tenzer
- Learn the Kubernetes Key Concepts in 10 Minutes by Omer Dawelbeit
- The Children’s Illustrated Guide to Kubernetes by Deis
- The ‘kubectl run’ command by Michael Hausenblas
- Docker Kubernetes Lab Handbook by Peng Xiao
- Curated Resources for Kubernetes
- Kubernetes Comic by Google Cloud Platform
- Kubernetes 101: Pods, Nodes, Containers, and Clusters by Dan Sanche
- An Introduction to Kubernetes by Justin Ellingwood
- Kubernetes and everything else – Introduction to Kubernetes and its context by Rinor Maloku
- Installation on Centos 7
- Setting Up a Kubernetes Cluster on Ubuntu 18.04
- Cloud Native Landscape
- The Kubernetes Handbook by Farhan Hasin Chowdhury
Computer Beats Master at Go
“The path for machine victory over the humans who play the ancient Chinese game of Go was not achieved through mathematical superiority, because Go is a very different game from chess.
Rather than the 8 × 8 grid for chess, Go is played on a 19 × 19 board, with each player having dozens of black or white stones. Each stone has the same value—unlike chess, in which the pieces are not all equal. The rules of Go are fairly straightforward—the two players try to surround each other’s stones and take territory from each other. However, because of the size of the grid, the number of potential positions in Go is staggering—considerably larger than the number of atoms in the Universe.
This sheer complexity is why intuition is so often cited as a key factor in winning the game, and why a computer program beating one of the best Go players that ever lived was considered so significant. As players add more stones to the board, the number of possible countermoves and counter-countermoves grows exponentially. As a result, brute-force “look-ahead” computing approaches to solving Go just can’t look far enough ahead: computers aren’t big enough. The Universe isn’t big enough.
AlphaGo® is the AI program that beat South Korean Go master Lee Sedol (b. 1983) in March 2016, winning four out of five games by adopting the same sort of strategic search strategies a human would. The program was created by the Google DeepMind team, formed after Google’s acquisition of DeepMind Technologies, a British AI company that had built a neural network to play video games like a human.
Lee Sedol did win once, however, so the computer did not dominate the match. In game four, white move 78, Lee Sedol found AlphaGo’s Achilles’ heel and made a move that so thoroughly confused the system that it started to make rookie mistakes, not recovering in time to save the game. The irony is that Sedol placed the stone where he did because AlphaGo had put him in a position where he saw no alternative move to make.”
SEE ALSO Computer Is World Chess Champion (1997)
“Go is played on a 19 × 19 board, with one player using black stones and the other using white stones, all possessing the same value.”
Byford, Sam. “Why Is Google’s Go Win Such a Big Deal?” The Verge, March 9, 2016. https://www.theverge.com/2016/3/9/11185030/google-deepmind-alphago-go-artificial-intelligence-impact.
House, Patrick. “The Electronic Holy War.” New Yorker online, May 25, 2014. https://www.newyorker.com/tech/elements/the-electronic-holy-war.
Koch, Christof. “How the Computer Beat the Go Master.” Scientific American online, March 19, 2016. https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master.
Moyer, Christopher. “How Google’s AlphaGo Beat a Go World Champion.” Atlantic online, March 28, 2016. https://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611.