Cloud Computing Is Around Us
People outside of the tech industry may have many questions when they so often come across the term cloud computing. What's cloud computing? What kind of services does cloud computing provide? Where and how do I acquire them?
Cloud computing may be a technical term whose meaning is unclear to many, but there is a good chance that many of us are already using cloud services without being aware of it.
Backup & Restore is a default service on Huawei phones. Other brands also have similar services, such as iCloud for iPhone. These services allow you to back up the local files on your phone to a remote data center. After you change to a new phone, you can easily restore your data to your new phone using your account and password configured for this service.
Google Translate is a free service that instantly translates words, phrases, and web pages between English and over 100 other languages. iReader is a popular online reading app that gives you access to a huge library of electronic books. These three apps are all powered by the cloud. Even if you have never used any of them, there is a good chance you are using other cloud-based apps without being aware of it.
Cloud Computing Definition
The National Institute of Standards and Technology (NIST) defines cloud computing as follows:
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
The cloud is a metaphor for the Internet. It is an abstraction of the Internet and the infrastructure that underpins it. Computing refers to computing services provided by a sufficiently powerful computer capable of providing a range of functionalities, resources, and storage. Put together, cloud computing can be understood as the delivery of on-demand, measurable computing services over the Internet.
Note the following key points in this definition:
- Cloud computing is not a technology so much as a service delivery model.
- Cloud computing gives users convenient access to IT services, including networks, servers, storage, applications, and services, like using utilities such as water and electricity.
- The prerequisite for convenient, on-demand access to cloud resources is network connectivity.
- Rapid resource provisioning and reclamation fall under the rapid elasticity characteristic of cloud computing, while minimal management effort and service provider interaction reflect the on-demand self-service characteristic.
The history of cloud computing is intertwined with the histories of the Internet and of computing models. In the next chapter, we will talk about how cloud computing developed into what it is today. But before that, let's hear a story about a failed attempt to popularize cloud computing before the Internet reached maturity.
This story is about Larry Ellison, co-founder, executive chairman, and chief technology officer of Oracle Corporation, and a legend in the IT industry. If you're interested, you can search for him on the Internet. Around the time Oracle was founded, two other legendary US companies, Apple and Microsoft, were also founded. To the general public, Larry Ellison always seemed a bit overshadowed by Bill Gates. At the beginning, Microsoft's business was computer operating systems, and Oracle's was databases. However, in 1988, Microsoft also launched the SQL Server database, which seemed a direct challenge to Oracle. In response, Larry Ellison launched an Internet computer without an OS or hard disk. Instead, the OS, user data, and computer programs all resided on servers in a remote data center. This product also had a price advantage over computers running Microsoft OSs. However, there was a small miscalculation in Larry Ellison's plan: the year was 1995, and the Internet was still in its infancy. At that time, the Internet was still unavailable in most parts of the world and could not provide the bandwidth the Internet computer needed to function properly. This led to poor user experience, and the project was terminated two years later.
The Internet computer launched by Oracle can be seen as an early form of cloud computing. The only problem was that it was way ahead of its time. Plus, the dot-com bubble burst around 2000 also shook people's confidence in cloud-based applications. This situation lasted until Amazon launched AWS in 2006.
A Brief History of the Internet
In the beginning, all computers were separated from each other; data computation and transmission were all done locally. Later, the Internet was born, connecting all of these computers, and with them the world. The following are some of the milestone events in the history of the modern Internet.
1969: The Advanced Research Projects Agency Network (ARPANET) was born, and is widely recognized as the predecessor of today's Internet. Like many technologies underpinning our modern society, the ARPANET was originally developed for military purposes. It is said that the ARPANET was launched by the US military to keep a fault-tolerant communications network active in the US in the event of a nuclear attack. In the beginning, only four nodes joined the ARPANET, all in the western US: the University of California, Los Angeles (UCLA), the Stanford Research Institute (SRI), the University of California, Santa Barbara (UC Santa Barbara), and the University of Utah. The birth of the ARPANET marked the beginning of the Internet era. In the years that followed, more and more nodes joined the ARPANET, the majority of them from non-military fields. In 1983, for security reasons, 45 nodes were removed from the ARPANET to form a separate military network called MILNET. The remaining nodes were used for civilian purposes.
1981: The complete specifications of the TCP/IP protocol suite were released for the first time, signaling the birth of the Internet's common communications language. Why was TCP/IP needed? TCP/IP is in fact a suite of many protocols, including the Transmission Control Protocol (TCP), the Internet Protocol (IP), and others. The earliest protocol used on the ARPANET was the Network Control Protocol (NCP). However, as the ARPANET grew, NCP could not keep up with the demands of large-scale networks. Designed for large and mega-scale networks, TCP/IP replaced NCP on the ARPANET on January 1, 1983.
1983: All three of the original networks, ARPANET, PRNET, and SATNET, switched to TCP/IP, which marked the beginning of the accelerated growth of the Internet.
1984: The Domain Name System (DNS) was invented. Since the adoption of TCP/IP, the development of the Internet picked up speed, and more computers were added to the network. Each computer used TCP/IP-compliant numerical IP addresses to identify each other. As the quantity of connected computers continued to increase, the inconvenience of using IP addresses to identify computers became evident: they are hard to memorize. This is comparable to using people's identity numbers, instead of their names, to identify them. It's difficult to memorize such long numbers. This is where DNS came in. DNS translates between numerical IP addresses and more readily memorized domain names. In this way, computer users can locate their peers simply through domain names, leaving the translation work to domain name servers. The domain name consists of two parts: name, for example, HUAWEI; and category or purpose, for example, .com for commercial. Maintenance personnel can enter the domain name HUAWEI.com to reach the computer with the corresponding IP address. Today, domain names, used in URLs, can be used to identify any web pages across the globe.
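The name-to-address translation DNS performs can be sketched in a few lines of Python. The toy hosts table below is hypothetical and self-contained, so the sketch runs without any network access; a real resolver queries a hierarchy of domain name servers instead.

```python
# A minimal sketch of what a DNS lookup accomplishes: translating a
# human-readable domain name into the numerical IP address computers use.
# The table below is hypothetical; a real resolver queries DNS servers.

TOY_DNS_TABLE = {
    "example.com": "93.184.216.34",
    "huawei.com": "203.0.113.10",   # placeholder address for illustration
}

def resolve(name: str) -> str:
    """Return the IP address registered for a domain name."""
    try:
        # Domain names are case-insensitive, so normalize before lookup.
        return TOY_DNS_TABLE[name.lower()]
    except KeyError:
        raise LookupError(f"no DNS record for {name}")

print(resolve("example.com"))  # -> 93.184.216.34
```

In practice the same translation is a single standard-library call, e.g. `socket.gethostbyname("example.com")`, which delegates the work to the operating system's configured resolver.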
1986: The modern email routing system MERS was developed.
1989: The first commercial network operator PSINet was founded. Before PSINet, most networks were funded by the government or military for military or industrial purposes, or for scientific research. PSINet marked the beginning of the commercial Internet.
1990: The first network search engine, Archie, was launched. As the Internet expanded, the amount of information on it grew at an explosive rate, and a search engine was needed to index information and speed up users' searches. Archie, the earliest search engine, was a tool for indexing FTP archives located on physically dispersed FTP servers. It was developed by Alan Emtage, then a student at McGill University, and allows users to search for files by name.
1991: WWW was officially open to the public. The World Wide Web (WWW), or simply the Web, that most of us now use on a daily basis, became publicly available only in 1991, less than 30 years ago. Tim Berners-Lee, a British scientist, invented the Web while working at CERN, the European Organization for Nuclear Research. The Web allows hypermedia, which can be documents, voice or video, and a lot more, to be transmitted over the Internet. It was only after the popularization of the Web that all the great Internet companies were born and all kinds of Internet applications that have fundamentally changed the lives of ordinary people began to emerge.
1995: E-commerce platforms Amazon and eBay were founded. Many great companies, such as Yahoo and Google, have emerged over the Internet's brief history. Here we will talk about Amazon alone, since it was the first Internet company to make commercial cloud computing a reality. In the early days, Amazon mainly sold books online. To process and store commodity and user information, Amazon built huge data centers. The US has a shopping festival called Black Friday, similar to the "Double Eleven" invented by Tmall of China. On this day, Amazon needed to process huge amounts of information, and all the servers in its data centers were in use. After this day, however, most of the servers sat idle. To improve its return on investment, Amazon decided to lease out these idle servers. This was the reason why, in 2006, Amazon launched its first cloud computing product: Elastic Compute Cloud (EC2).
2000: The dot-com bubble burst. The unprecedented growth of the Internet in the 1990s resulted in the dot-com bubble, which burst around 2000. It was during this period that PSINet, the first commercial network operator mentioned earlier, went bankrupt. The Internet regained rapid growth after the bubble burst. In 2004, Facebook was founded, and with it came the phenomenon of social networking.

The History of Computing
Parallel computing
Traditionally, software has been written for serial computation:
1. Each problem is broken into a discrete series of instructions.
2. Instructions are executed one after another on a single CPU.
3. Only one instruction may execute at any point in time.
| Schematic diagram of serial computing |
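The three steps above can be sketched in a few lines of Python: the problem (here, summing squares) is broken into a discrete series of instructions that a single CPU executes strictly one after another.

```python
# Serial computation: one problem, a discrete series of instructions,
# executed one after another on a single CPU.

def sum_of_squares_serial(n: int) -> int:
    total = 0
    for i in range(n):        # each iteration is one step of the series
        total += i * i        # only one instruction executes at a time
    return total

print(sum_of_squares_serial(10))  # -> 285
```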
With serial computing, a complex problem takes a long time to process. For large-scale applications, especially when computer memory is limited, a single-CPU architecture is impractical or even impossible. For example, search engines and networked databases process millions of requests per second, far beyond the capacity of serial computing.
Limits to serial computing, both in theory and for practical reasons, pose significant constraints to simply building ever faster serial computers:
- Transmission speeds: the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
- Limits to miniaturization: processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
- Economic limitations: it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
- Each problem is broken into discrete parts that can be solved concurrently.
- Each part is further broken down to a series of instructions.
- Instructions from each part execute simultaneously on different CPUs.
- A unified control mechanism is added to control the entire process.
| Schematic diagram of parallel computing |
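The decomposition described above can be sketched with Python's standard `concurrent.futures` module: the problem is split into parts, the parts run concurrently on different CPUs, and a small amount of coordinating code gathers and combines the results. This is an illustrative sketch under simple assumptions, not a tuned parallel program.

```python
# Parallel computation: break the problem into parts that can be solved
# concurrently, run the parts on different CPUs, then combine the results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum_of_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))   # instructions for one part

def sum_of_squares_parallel(n: int, workers: int = 4) -> int:
    # Split [0, n) into roughly equal chunks, one per worker.
    step = max(1, n // workers)
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # The unified control mechanism: scatter the parts, gather the sums.
        return sum(pool.map(partial_sum_of_squares, chunks))

if __name__ == "__main__":
    print(sum_of_squares_parallel(1000))
```

Note the coordination cost: for a problem this small, spawning worker processes takes longer than the computation itself, which is why parallel computing pays off only when the parts are substantial.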
Traditionally, parallel computing has been considered to be "the high end of computing" and has been used for cases such as scientific computing and numerical simulations of complex systems. Today, commercial applications are providing an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways.
The reasons for using parallel computing include the following:
- Time and cost savings: In theory, using more compute resources leads to completing a task faster, saving potential costs. This is even more true considering that the resources can be inexpensive commodity parts, or even out-of-date CPUs clustered together.
- Solving larger problems that cannot be handled using serial computing.
- Making use of non-local compute resources, such as those of other computers residing on the same network.
Distributed Computing
Distributed computing is a field of computer science that studies distributed systems. A distributed system distributes its components across different networked computers, which communicate and coordinate their actions using a unified messaging mechanism. The components work together to achieve a common goal. Distributed computing provides the following benefits:
- Easier resource sharing
- Balanced load across multiple computers
- Running each program on the most suitable computer
| Schematic diagram of distributed computing |
In fact, in distributed computing, each task is independent: the result of one task, whether unavailable or invalid, has virtually no effect on other tasks. Therefore, distributed computing has low real-time requirements and can tolerate errors. (Each problem is divided into many tasks, each of which is solved by one or more computers. The uploaded results are compared, and verified in the case of a large discrepancy.)
In parallel computing, by contrast, there are no redundant tasks. The results of all tasks affect one another, so the correct result must be obtained for each task, preferably in a synchronized manner. In distributed computing, many tasks are redundant, and many useless data blocks are generated. Despite its advantage in speed, the actual efficiency may therefore be low.
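The redundancy-and-verification idea described above can be sketched without any real network: the same task is handed to several independent workers, and the coordinator accepts a result only when a majority of the returned answers agree. The worker functions, including one deliberately faulty node, are hypothetical stand-ins for remote machines.

```python
# Distributed computing sketch: the same task is redundantly assigned to
# several independent workers; the coordinator compares the uploaded
# results and accepts the majority answer, tolerating faulty nodes.
from collections import Counter

def reliable_worker(task):
    return task * task

def faulty_worker(task):
    return task * task + 1   # hypothetical node that returns a wrong result

def coordinate(task, workers):
    results = [w(task) for w in workers]      # in reality: remote calls
    answer, votes = Counter(results).most_common(1)[0]
    if votes <= len(workers) // 2:
        raise RuntimeError("no majority; task must be reassigned")
    return answer

workers = [reliable_worker, reliable_worker, faulty_worker]
print(coordinate(6, workers))  # -> 36, despite one faulty node
```

The redundant computations are wasted work, which is exactly the efficiency trade-off the paragraph above describes.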
Grid Computing
Grid computing is the use of widely distributed computer resources to reach a common goal. It is a special type of distributed computing. According to IBM's definition, a grid aggregates compute resources dispersed across a local network or even the Internet, making end users (or client applications) believe they have access to a single, virtual supercomputer. The vision of grid computing is to create a collection of virtual, dynamic resources so that individuals and organizations can have secure and coordinated access to them. Grid computing is usually implemented as a cluster of networked, loosely coupled computers.
Cloud Computing
Cloud computing is a new way of sharing infrastructure. It pools massive amounts of resources together to support a large variety of IT services. Many factors drive up the demand for such environments, including connected devices, real-time stream processing, adoption of service-oriented architecture (SOA), and the rapid growth of Web 2.0 applications such as search, open collaboration, social networking, and mobile office. In addition, the improved performance of digital components has allowed even larger IT environments to be deployed, which also drives up the demand for unified clouds. Cloud computing is hailed as a revolutionary computing model by its advocates, as it allows the sharing of super computational power over the Internet. Enterprises and individual users no longer need to spend a fortune on expensive hardware. Instead, they purchase on-demand computing power provisioned over the Internet.
Cloud computing in a narrow sense refers to a way of delivering and using IT infrastructure to enable access to on-demand, scalable resources (infrastructure, platform, software, etc.). The network over which the resources are provisioned is referred to as the cloud. To consumers, the resources on the cloud appear to be infinite: they are available and scalable on demand and use a pay-as-you-go (PAYG) billing model. These characteristics allow us to use IT services as conveniently as utilities like water and electricity.
In a broad sense, cloud computing refers to the on-demand delivery and utilization of scalable services over the Internet. These services can be IT, software, Internet, or any other services.
Cloud computing has the following typical characteristics:
- Hyperscale. Clouds are usually large. Google's cloud consists of over 1 million servers, and the clouds of Amazon, IBM, Microsoft, and Yahoo each have hundreds of thousands of servers. The private cloud of an enterprise can have hundreds to thousands of servers. The cloud offers users computational power that was impossible with conventional methods.
- Virtualization. Cloud computing gives users access to applications and services regardless of their location or what device they use. The requested resources are provided by the cloud, rather than any tangible entity. Applications run somewhere in the cloud. The users have no knowledge and do not need to concern themselves about the locations of the applications. A laptop or mobile phone is all they need to access the services they need, or even to perform complex tasks like supercomputing.
- High reliability. With mechanisms such as multi-copy redundancy, fault tolerance, fast and automated failover between homogeneous compute nodes, it is possible for cloud computing to deliver higher reliability than local computers.
- General-purpose. A single cloud is able to run a huge variety of workloads to meet wide-ranging customer needs.
- High scalability. Clouds are dynamically scalable to accommodate changing demands.
- On-demand. A cloud provides a huge resource pool from which on-demand resources can be provisioned. Cloud service usage can be metered similarly to how utilities like water, electricity, and gas are metered.
- Cost savings. With a broad selection of fault tolerance mechanisms available for clouds, service providers and enterprises can use inexpensive nodes to build their clouds. With automated, centralized cloud management, enterprises no longer need to grapple with the high costs of managing a data center. By provisioning hardware-independent, general-purpose resources, the cloud significantly improves resource utilization. All of this gives users quick access to cost-efficient cloud services and resources.
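The utility-style metering mentioned above boils down to simple arithmetic: usage is measured per unit of time or capacity and billed at a unit rate, exactly like a water or electricity bill. The rates and resource names below are invented for illustration; real price lists vary by provider and region.

```python
# Pay-as-you-go metering sketch: bill = sum over resources of
# (metered usage x unit rate). All rates are hypothetical values.

RATES = {
    "vcpu_hours": 0.05,        # per vCPU-hour
    "storage_gb_month": 0.02,  # per GB-month
}

def monthly_bill(usage: dict) -> float:
    """Multiply each metered quantity by its unit rate and total them."""
    return round(sum(usage[k] * RATES[k] for k in usage), 2)

# e.g. 2 vCPUs running for 720 hours, plus 100 GB stored for the month
print(monthly_bill({"vcpu_hours": 2 * 720, "storage_gb_month": 100}))  # -> 74.0
```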
Development of Cloud Computing
There are three phases of cloud computing in terms of transforming the enterprise IT architecture from a legacy non-cloud architecture to a cloud-based one.
Cloud Computing 1.0
This phase deals with the virtualization of IT infrastructure resources, with a focus on compute virtualization. Enterprise IT applications are completely decoupled from the infrastructure. With virtualization and cluster scheduling software, multiple enterprise IT application instances and runtime environments (guest operating systems) can share the same infrastructure, leading to high resource utilization and efficiency. HCIA-Cloud Computing mainly covers the implementation and advantages of cloud computing in this phase.
Cloud Computing 2.0
Infrastructure resources are provisioned to cloud tenants and users as standardized services, and management is automated. These are made possible with the introduction of standard service provisioning and resource scheduling automation software on the management plane, and software-defined storage and networking on the data plane. The request, release, and configuration of infrastructure resources, which previously required the intervention of data center administrators, are now fully automated, as long as the right prerequisites (e.g. sufficient resource quotas, no approval process in place) are met. This transformation greatly improves the speed and agility of infrastructure resource provisioning required by enterprise IT applications, and accelerates time to market (TTM) for applications by shortening the time needed to ready infrastructure resources. It transforms static, rolling planning of IT infrastructure resources into a dynamic, elastic, and on-demand resource allocation process. With it, enterprise IT is able to deliver higher agility for the enterprise's core applications, enabling it to quickly respond and adapt to changing demands. In this phase, the infrastructure resources provisioned to tenants can be virtual machines (VMs), containers (lightweight VMs), or physical machines. This transformation has not yet touched the enterprise's applications, middleware, or database software architectures that are above the infrastructure layer.
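The fully automated request path described here, where provisioning succeeds without administrator intervention as long as prerequisites such as quotas are met, can be sketched as a simple gate. The class, quota numbers, and resource names below are hypothetical illustrations, not any real cloud API.

```python
# Cloud Computing 2.0 sketch: a resource request is granted or denied
# automatically based on prerequisites (here, a tenant quota); no data
# center administrator is in the loop. Names and quotas are hypothetical.

class Tenant:
    def __init__(self, vm_quota: int):
        self.vm_quota = vm_quota
        self.vms = []

    def request_vm(self, name: str) -> str:
        if len(self.vms) >= self.vm_quota:
            # Prerequisite not met: the request is denied automatically.
            raise PermissionError("quota exceeded; request denied")
        self.vms.append(name)        # in reality: a scheduler picks a host
        return f"{name} provisioned"

    def release_vm(self, name: str):
        self.vms.remove(name)        # resources are reclaimed just as automatically

t = Tenant(vm_quota=2)
print(t.request_vm("web-1"))  # -> web-1 provisioned
print(t.request_vm("web-2"))  # -> web-2 provisioned
```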
Cloud Computing 3.0
This phase is characterized by:
- A distributed, microservices-based enterprise application architecture.
- An enterprise data architecture redesigned using Internet technology and intelligence unleashed by big data.
In this phase, the enterprise application architecture gradually transforms from a vertical, hierarchical architecture that:
- Relies on traditional commercial databases and middleware suites
- Is purpose-designed for each application domain, siloed, highly sophisticated, stateful, and large scale
to an architecture characterized by:
- A distributed, stateless design featuring lightweight, fully decoupled functionalities and total separation of data and application logic
- Databases and middleware service platforms that are based on open-source yet enhanced enterprise-class architectures and fully shared across different application domains.
The majority of enterprises and industries have already passed Cloud Computing 1.0. Enterprises in some industries have already commercially deployed Cloud Computing 2.0, though some only on small scales, and are now considering scaling up or continuing to move toward Cloud Computing 3.0. Others are moving from Cloud Computing 1.0 to 2.0, and some are even considering implementing Cloud Computing 2.0 and 3.0 in parallel.
Deployment Model
Public Cloud
Public cloud is the earliest and best-known form of cloud computing. Our previous examples, including Backup & Restore on Huawei phones and Google Translate, are both examples of public cloud. A public cloud offers utility-like IT services over the Internet to the general public. Public clouds are usually built and run by cloud service providers. End users access cloud resources or services on a subscription basis, while the service provider takes on all the O&M and administration responsibilities.
Private Cloud
Private clouds are usually deployed for internal use within enterprises or other types of organizations. All the data of a private cloud is stored in the enterprise or organization's own data center, and the data center's ingress firewalls control access to it. A private cloud can be built on the enterprise's legacy architecture, allowing most of the customer's hardware equipment to be reused. A private cloud may deliver a higher level of data security and allow reuse of legacy equipment. However, this equipment will eventually need to be updated to keep up with growing demands, and doing so may entail high costs. On the other hand, stricter data access control also means less data sharing, even within the organization.
In recent years, a different type of private cloud has emerged that encourages enterprises to deploy core applications on the public cloud: the Dedicated Cloud (DeC) on a public cloud. This model offers dedicated compute and storage resources and reliable network isolation, meeting the high reliability, performance, and security standards of tenants' mission-critical applications.
Hybrid Cloud
Hybrid cloud is a flexible cloud deployment model. It may comprise two or more different types of clouds (public, private, and community, which we will talk about later) that remain distinct entities. Cloud users can switch their workloads between different types of clouds as needed. Enterprises may choose to keep core data assets on-premises for maximum security while keeping other data on public clouds for cost efficiency, hence a hybrid cloud model. With the pay-per-use model, public clouds offer a highly cost-efficient option for companies with seasonal data processing needs. For example, for some online retailers, demand for computing power peaks during holidays. Hybrid cloud also accommodates elasticity demands for other purposes, for example, disaster recovery: a private cloud can use a public cloud as a disaster recovery destination and recover data from it when necessary. Another feasible option is to run applications on one public cloud while selecting another public cloud for disaster recovery.
To sum up, a hybrid cloud allows users to enjoy the benefits of both public and private clouds. It also offers great portability for applications in a multi-cloud environment. In addition, the hybrid cloud model is cost-effective because enterprises can have on-demand access to cloud resources on a pay-per-use basis.
The downside is that hybrid cloud usually requires more complex setup and O&M. A major challenge facing hybrid cloud is integration between different cloud platforms, different types of data, and applications. A hybrid cloud may also need to address compatibility issues between heterogeneous infrastructure platforms.
Community Cloud
A community cloud is a cloud platform whose infrastructure is built and managed by a leading organization of a specific community and shared among several organizations of that community. These organizations typically have common concerns, such as similar security, privacy, performance, and compliance requirements. The level of resource sharing may vary, and the services may be available with or without a fee.
Community cloud is not a new concept. Its difference from public and private clouds lies in its industry attribute. For example, with a cloud built for the healthcare industry, patients' case files and records can be stored in the cloud, and doctors from every hospital can obtain patient information from it for diagnostic purposes. Community cloud can be a huge opportunity as well as a huge challenge. For example, with a community cloud for the healthcare industry, special efforts, including technical and administrative means, must be made to ensure personal information security on the cloud.
Service Model
In cloud computing, all deployed applications use some kind of hierarchical architecture. Typically, there is the user-facing interface, where end users create and manage their own data; the underlying hardware resources; the OS on top of the hardware resources; and the middleware and application runtime environment on top of the OS. We call everything related to applications the software layer; the underlying, virtualized hardware resources (network, compute, and storage) the infrastructure layer; and the part in between the platform layer. IaaS refers to a situation where the cloud service provider provides and manages the infrastructure layer while the consumer manages the other two layers. PaaS refers to a situation where the cloud service provider manages the infrastructure and platform layers while the consumer manages the application layer. SaaS means all three layers are managed by the provider. Let's explain these three cloud service models using the example of a game.
| Computer specifications requirements for a video game |
The figure above shows the required computer specifications for this game. If we buy a computer of the required specifications, install an OS and then this game, this is not cloud computing. If we buy a cloud server of the same specifications from a public cloud provider, use an image to install an OS, download and then install the game, we're using the IaaS model. When installing a large game such as this one, we are likely to encounter the following error:
| .NET Framework initialization error |
This is because .NET Framework is part of the game's runtime environment. If we buy not only the cloud server but also the ready-to-go runtime environment with the .NET Framework already installed, we're using the PaaS model. If we buy the cloud server with the game and all the necessary software already installed and all we need to do to start playing the game is to enter our user name and password, we're using the SaaS model.
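The division of management responsibility in the game example can be captured in a small lookup table: for each service model, which layers the provider manages, and which are left to the consumer. The layer names follow the three-layer split described above; the table itself is an illustrative sketch, not any provider's official definition.

```python
# Who manages what under IaaS, PaaS, and SaaS, using the three layers
# described above: infrastructure, platform (the runtime, such as the
# .NET Framework in the game example), and software (the game itself).

LAYERS = ["infrastructure", "platform", "software"]

PROVIDER_MANAGED = {
    "IaaS": {"infrastructure"},
    "PaaS": {"infrastructure", "platform"},
    "SaaS": {"infrastructure", "platform", "software"},
}

def consumer_managed(model: str) -> list:
    """Layers the consumer must still install and manage themselves."""
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGED[model]]

print(consumer_managed("IaaS"))  # -> ['platform', 'software']
print(consumer_managed("SaaS"))  # -> []
```

Under IaaS we still install the .NET Framework and the game ourselves; under PaaS only the game; under SaaS we just log in and play.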
