Basic Network Concepts (3)

  • In medium to large networks, a modular design is usually used to split network functions. Within each module, consideration must be given to the flexibility and scalability of the network structure. Generally, a hierarchical architecture is used, for example, in a campus network that needs to provide access services for a large number of users.
Traditional networks contain the core, aggregation, and access layers. The core layer provides high-speed data channels, the aggregation layer converges traffic and control policies, and the access layer offers various access modes to devices.
  • OSI model: Open Systems Interconnection Reference Model
    The OSI model was designed to overcome the interconnection difficulties and inefficiencies caused by the use of diverse, incompatible protocols, by defining an open, interconnected network architecture.
  • The OSI reference model forms the basis for computer network communications. Its design complies with the following principles:
    • There are clear boundaries between layers to facilitate understanding.
    • Each layer implements specific functions without affecting the other layers.
    • Each layer is a service provider and a service user. Specifically, each layer provides services to its upper layer and uses services provided by its lower layer.
    • The division of layers encourages the development of standardized protocols.
    • There are enough layers to ensure that the functions of different layers do not overlap.
  • The OSI reference model has the following advantages:
    • Simplifies network operations.
    • Provides standard interfaces that support plug-and-play and are compatible with different vendors.
    • Enables vendors to design interoperable network devices, accelerating the development of data communications networks.
    • Prevents a change in one area of a network from affecting other areas. Therefore, each area can be updated quickly and independently.
    • Simplifies network issues for easier learning and operations.
  • In the OSI model, units of data are collectively called Protocol Data Units (PDUs). However, a PDU has a different name depending on the layer at which it is handled:
    • Application layer (layer 7): data is called an Application Protocol Data Unit (APDU)
    • Presentation layer (layer 6): data is called a Presentation Protocol Data Unit (PPDU)
    • Session layer (layer 5): data is called a Session Protocol Data Unit (SPDU)
    • Transport layer (layer 4): data is called a segment
    • Network layer (layer 3): data is called a packet
    • Data link layer (layer 2): data is called a frame
    • Physical layer (layer 1): data is called a bit stream.
  • Each layer of the OSI model encapsulates data to ensure that the data can reach the destination accurately and be correctly received and processed by the terminal host. A node encapsulates the data to be transmitted by adding a layer-specific protocol header before transmission. At some layers, information is also appended to the tail of the data as a trailer; this is also part of encapsulation.
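The layer-by-layer encapsulation can be illustrated with a short Python sketch. The header contents below are simplified placeholders rather than real protocol headers; the point is only that each layer prepends its own header (and, at the data link layer, appends a trailer) to the data it receives from the layer above.

    # Simplified sketch of layered encapsulation and decapsulation (Python 3.9+).
    def encapsulate(user_data: bytes) -> bytes:
        segment = b"TCP-HDR|" + user_data          # transport layer: segment
        packet = b"IP-HDR|" + segment              # network layer: packet
        frame = b"ETH-HDR|" + packet + b"|FCS"     # data link layer: frame (header + trailer)
        return frame                               # physical layer sends this as a bit stream

    def decapsulate(frame: bytes) -> bytes:
        packet = frame.removeprefix(b"ETH-HDR|").removesuffix(b"|FCS")
        segment = packet.removeprefix(b"IP-HDR|")
        return segment.removeprefix(b"TCP-HDR|")   # the original user data is restored

    frame = encapsulate(b"user data")
    assert decapsulate(frame) == b"user data"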
  • The physical layer involves the transmission of bit streams over a transmission medium and is fundamental in the OSI model. It implements the mechanical and electrical features required for data transmission and focuses only on how to transmit bit streams to the peer end through different physical links. The information contained in each bit stream, for example, address or application data, is irrelevant at this layer. Typical devices used at the physical layer include repeaters and hubs.
  • The main tasks of the data link layer are to build on the physical layer so as to present an error-free link to the network layer, to detect and correct transmission errors, and to perform flow control.
  • The network layer is responsible for forwarding packets and checks the network topology to determine the optimal route for transmission. It is critical to select a route from the source to the destination for data packets. A network layer device calculates the optimal route to the destination by running a routing protocol (such as RIP), identifies the next network device (hop) to which the data packet should be forwarded, encapsulates the data packet by using the network layer protocol, and sends the data to the next hop by using the service provided by the lower layer.
  • The transport layer is responsible for providing effective and reliable services to its users (generally, the applications at the application layer).
  • In the session layer and upper layers, the data transmission unit is a packet. The session layer provides a mechanism for establishing and maintaining communications between applications, including access verification and session management. For example, verification of user logins by a server is completed at the session layer.
  • The presentation layer is generally responsible for how user information is represented. It converts data from a given syntax to one that is suitable for use in the OSI system. That is, this layer provides a formatted representation and data conversion service. In addition, this layer is also responsible for data compression, decompression, encryption, and decryption.
  • The application layer provides interfaces for operating systems or network applications to access network services. 
The Transmission Control Protocol/Internet Protocol (TCP/IP) model is widely used due to its openness and usability. The TCP/IP protocol stack is implemented as standard protocols.

The TCP/IP model is divided into four layers (from bottom to top): link layer, internet layer, transport layer, and application layer. Some documents define a model with five layers, where the link layer is split into a link layer and a physical layer (equivalent to layers 1 and 2 in the OSI model).

Each layer of the TCP/IP protocol stack has corresponding protocols, which are implemented to support network applications. Some protocols cannot be strictly assigned to a single layer. For example, ICMP, IGMP, ARP, and RARP are usually placed at the same layer as IP, the internet layer. However, in some descriptions, ICMP and IGMP are placed above IP, while ARP and RARP are placed below IP.
  • Application layer

    • HyperText Transfer Protocol (HTTP): It is used to access various pages on the web server.
    • File Transfer Protocol (FTP): It is used to transfer data from one host to another.
    • Domain Name System (DNS): It is used to convert the domain name of the host to an IP address.
  • Transport layer

    • TCP: Provides reliable connection-oriented communications services for applications and applies to applications that require responses.
    • User Datagram Protocol (UDP): Provides connectionless communications and does not guarantee reliable transmission of data packets. It is suitable for transmitting a small amount of data at a time, and the application layer is responsible for reliability.
  • Network layer

    • Internet Protocol (IP): The IP protocol and routing protocols work together to find an optimal path for transmitting packets to the destination. The IP protocol is not concerned with the contents of data packets. It provides a connectionless and unreliable service.
    • Address Resolution Protocol (ARP): Resolves known IP addresses into MAC addresses.
    • Reverse Address Resolution Protocol (RARP): Resolves an IP address from a known data link layer MAC address.
    • Internet Control Message Protocol (ICMP): Defines the functions of controlling and transferring messages at the network layer.
    • Internet Group Management Protocol (IGMP): Manages multicast group members.
  • Network access layer
    The network access layer consists of two sub-layers: the Logical Link Control (LLC) sublayer and the Media Access Control (MAC) sublayer.


  • The sender submits the user data to the application, which then sends the data to the destination. The data encapsulation process is as follows:

    • The user data is first transmitted to the application layer, and the application layer information is added.
    • After the application layer processing is complete, the data is transmitted to the transport layer. The transport layer information, such as a TCP or UDP header (the application layer protocol determines whether TCP or UDP is used), is then added.
    • After the processing at the transport layer is complete, the data is transmitted to the Internet layer. The Internet layer information (such as IP address) is then added.
  • After the data is processed at the Internet layer, the data is transmitted to the network access layer. The network access layer information (such as Ethernet, 802.3, PPP, and HDLC) is added. Then, the data is transmitted to the destination as a bit stream. Processing differs based on different devices. For example, a switch processes only the data link layer information, whereas a router processes the network layer information. The original user data can be restored only when the data reaches the destination.

  • After the user data arrives at the destination, the decapsulation process is performed as follows.

    • Data packets are first sent to the network access layer. After the network access layer receives the packets, it parses and removes the data link layer information and identifies the internet layer information (such as IP).
    • After the internet layer receives the packets, it parses and removes the internet layer information and identifies the upper-layer protocol (such as TCP).
    • After the transport layer receives the packets, it parses and removes the transport layer information and identifies the upper-layer protocol (such as HTTP).
    • After the application layer receives the packets, it parses and removes the application layer information. The remaining data is the same as the data originally sent by the source host.

  • The application layer and transport layer provide end-to-end services. The internet layer and network access layer provide segment-to-segment services.

  • Quintuple structure: Source IP address, destination IP address, protocol in use (for example, 6 indicates TCP, and 17 indicates UDP), source port, and destination port.

  • Destination port: Well-known application services, such as HTTP, FTP, and Telnet, have standard port numbers. Less common applications use ports defined by their vendors, which must be unique among the service ports registered on the same server.

  • Source port: Common application services such as HTTP, FTP, and Telnet are assigned well-known port numbers (in the range 0 to 1023) as destination ports, whereas the source port is usually an ephemeral port chosen by the client operating system, typically a higher port number. Because source ports are unpredictable, they are seldom used in ACL policies.

  • The quintuple is a key concept: an application server must register the port numbers and protocol (TCP or UDP) for the services it hosts so that it can respond to service requests. By using the quintuple, the application server can respond to concurrent service requests while ensuring that each connection is unique in the system.
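A quintuple can be modeled as a simple value object. The Python sketch below (the field names are illustrative, not a standard API) shows how two connections that share a destination and even a source port remain distinguishable, and how a well-known destination port can be looked up from the system services database.

    import socket
    from collections import namedtuple

    # Protocol numbers: 6 = TCP, 17 = UDP.
    Quintuple = namedtuple("Quintuple", "src_ip dst_ip protocol src_port dst_port")

    flow1 = Quintuple("192.168.1.10", "203.0.113.5", 6, 50123, 80)  # client 1 -> HTTP server
    flow2 = Quintuple("192.168.1.11", "203.0.113.5", 6, 50123, 80)  # client 2, same ports

    # Different source IP addresses keep the two connections unique in the system.
    sessions = {flow1: "session-A", flow2: "session-B"}
    print(len(sessions))                         # 2

    print(socket.getservbyname("http", "tcp"))   # 80, a well-known destination port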

  • ARP: When a packet is forwarded to a host or gateway on the same network segment, the destination IP address is known and ARP is used to obtain the corresponding MAC address. Within a network segment, communication is based on MAC addresses.
  • ICMP: ICMP is used to test network connectivity. Typical applications are Ping and Tracert.
  • Routing protocol: Used for communications between users in different network segments.
  • SNMP: a network device management protocol
  • NetStream: a traffic sampling and statistics protocol. It is usually used together with other devices, such as AntiDDoS systems.
  • By using the ARP protocol, a network device can establish a mapping between a destination IP address and MAC address. After obtaining the destination IP address at the network layer, the network device needs to determine whether the destination MAC address is known.
In this example, the ARP cache table of Host A does not contain the MAC address of Host C. Therefore, Host A sends an ARP request packet to obtain the destination MAC address. ARP request packets are encapsulated in Ethernet frames. In the frame header, the source MAC address is the MAC address of Host A. In addition, because Host A does not know the MAC address of Host C, the destination MAC address is the broadcast address FF-FF-FF-FF-FF-FF. The ARP request packet contains the source IP address, destination IP address, source MAC address, and destination MAC address. The destination MAC address in the packet is all zeros. ARP request packets are broadcast to all hosts, including gateways, on the network. The gateway will prevent the packet from being sent to other networks.
After receiving the ARP request packet, each host checks whether the target protocol address matches its IP address. If the addresses do not match, the host ignores the ARP request packet. If the addresses match, the host creates an entry in its ARP cache table, recording the source MAC address and source IP address in the ARP request packet. The host then replies with an ARP reply packet.
Host C unicasts an ARP reply packet to Host A. In the ARP reply packet, the sender protocol address is the IP address of Host C, and the target protocol address is the IP address of Host A. In the Ethernet frame header, the destination address is the MAC address of Host A, and the source MAC address is the MAC address of Host C. The operation code is set to Reply. ARP does not provide any security protection measures and therefore authentication cannot be performed. Malicious users may exploit this weakness to launch attacks, such as MAC address spoofing. For details, see the following sections.
After an IP address is assigned to a host, the host must check that the IP address is unique on the network and does not conflict with another address. The host sends ARP request packets to detect address conflicts.

Host A sets the destination IP address in the ARP request packet to its own IP address and broadcasts the packet on the network. If Host A receives an ARP reply, it knows that the IP address is already in use, that is, an IP address conflict has been detected.
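The field layout of an ARP request can be made concrete by packing it manually, as in the hedged Python sketch below. The MAC and IP addresses are placeholders, and actually transmitting the frame would require a raw socket with administrative privileges, so the sketch stops at building the bytes.

    import struct

    def build_arp_request(src_mac: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
        # Ethernet header: broadcast destination, source MAC, EtherType 0x0806 (ARP).
        eth_header = b"\xff" * 6 + src_mac + struct.pack("!H", 0x0806)
        # ARP payload: hardware type 1 (Ethernet), protocol type 0x0800 (IPv4),
        # hardware length 6, protocol length 4, opcode 1 (request);
        # the target MAC address is all zeros because it is unknown.
        arp_payload = struct.pack("!HHBBH6s4s6s4s",
                                  1, 0x0800, 6, 4, 1,
                                  src_mac, src_ip,
                                  b"\x00" * 6, dst_ip)
        return eth_header + arp_payload

    frame = build_arp_request(bytes.fromhex("00163e112233"),   # placeholder MAC of Host A
                              bytes([192, 168, 1, 1]),         # placeholder IP of Host A
                              bytes([192, 168, 1, 3]))         # placeholder IP of Host C
    print(frame.hex())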
ICMP is one of the core protocols in the TCP/IP protocol stack. ICMP is used to send control packets between IP network devices to transmit error, control, and query messages.
A typical ICMP application is the ping command. Ping is a common tool for checking network connectivity and collecting related information. In the ping command, users can assign different parameters, such as the length and number of ICMP packets, and the timeout period for waiting for a reply. Devices construct ICMP packets based on the parameters to perform ping tests.

Common Ping parameters:

  • -a source-ip-address: Specifies the source IP address for sending ICMP Echo Request packets. If the source IP address is not specified, the IP address of the outbound interface is used by default.

  • -c count: Specifies the number of times that ICMP Echo Request packets are sent. The default value is 5.

  • -h ttl-value: Specifies the Time To Live (TTL) for ICMP Echo Request packets. The default value is 255.

  • -t timeout: Specifies the timeout period of waiting for an ICMP Echo Reply packet after an ICMP Echo Request packet is sent.

The ping command output contains the destination address, ICMP packet length, packet number, TTL value, and round-trip time. The packet number is a variable parameter field contained in an Echo Reply message (Type=0). The TTL and round-trip time are included in the IP header of the message.
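The structure of an ICMP Echo Request (Type=8, Code=0) and its checksum can be sketched as follows. Sending the packet would require a raw socket, which normally needs administrative privileges, so only packet construction is shown; the identifier, sequence number, and payload are arbitrary example values.

    import struct

    def icmp_checksum(data: bytes) -> int:
        # Internet checksum: one's-complement sum of 16-bit words.
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
        header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)  # checksum = 0 at first
        checksum = icmp_checksum(header + payload)
        return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

    packet = build_echo_request(identifier=0x1234, sequence=1, payload=b"abcdefgh")
    print(packet.hex())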
Tracert is another typical application of ICMP. Tracert traces the forwarding path of packets hop by hop based on the TTL value in the packet header. To trace the path to a specific destination address, the source end first sets the TTL value of the packet to 1. After the packet reaches the first node, the TTL times out. Therefore, this node sends a TTL timeout message carrying the timestamp to the source end. Then, the source end sets the TTL value of the packet to 2. After the packet reaches the second node, the TTL times out. This node also returns a TTL timeout message. The process repeats until the packet reaches the destination. In this way, the source end can trace each node through which the packet passes according to the information in the returned packets, and can calculate the round-trip time according to the timestamp information. Tracert is an effective method to detect packet loss and delay, and helps administrators discover routing loops on the network.

Common Tracert parameters:
  • -a source-ip-address: Specifies the source address of a tracert packet.
  • -f first-ttl: Indicates the initial TTL. The default value is 1.
  • -m max-ttl: Indicates the maximum TTL. The default value is 30.
  • -name: Displays the host name on each hop.
  • -p port: Specifies the UDP port number of the destination host.
The source end (Router A) sends a UDP packet whose TTL value is 1 and destination UDP port number is larger than 30000 to the destination end (Host B). A UDP port number larger than 30000 is not commonly used by any program.

After receiving the UDP packet, the first-hop host (Router B) determines that the destination IP address of the packet is not its own IP address and decreases the TTL value by one. The TTL value is now 0. Therefore, Router B discards the UDP packet, and sends an ICMP Time Exceeded packet containing its IP address 10.1.1.2 to Router A. In this way, Router A obtains the IP address of Router B.

Upon receiving the ICMP Time Exceeded packet from Router B, Router A sends a UDP packet with a TTL value of 2.

Upon receiving the UDP packet, the second-hop host (Router C) returns an ICMP Time Exceeded packet containing its IP address 10.1.2.2 to Router A.

The preceding steps are repeated until the destination end determines that the destination IP address of the UDP packet is its IP address and processes the packet. The destination end searches for the upper-layer protocol that occupies the UDP port number based on the destination UDP port number in the packet. If the destination end does not use the UDP port number, the destination end returns an ICMP Destination Unreachable packet to the source end.

Upon receiving the ICMP Destination Unreachable packet, the source end determines that the UDP packet has reached the destination end. It then stops running tracert and generates the path of the UDP packet (10.1.1.2; 10.1.2.2; 10.1.3.2).
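The probing side of this process can be sketched in Python by sending UDP packets with increasing TTL values. The destination below is a placeholder documentation address; collecting the ICMP Time Exceeded and Destination Unreachable replies would require a raw socket with administrative privileges, so only the sending loop is shown.

    import socket

    def send_tracert_probes(dest_ip: str, max_ttl: int = 30, base_port: int = 33434) -> None:
        # Each probe targets an uncommonly used high UDP port; every router that
        # decrements the TTL to 0 would answer with ICMP Time Exceeded, and the
        # destination would answer with ICMP Destination (Port) Unreachable.
        for ttl in range(1, max_ttl + 1):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            sock.sendto(b"tracert-probe", (dest_ip, base_port + ttl))
            sock.close()

    send_tracert_probes("192.0.2.1")   # placeholder destination address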
Routes are classified into the following types based on the destination address:
  • Network segment routes: The destination is a network segment. The subnet mask of an IPv4 destination address is less than 32 bits or the prefix length of an IPv6 destination address is less than 128 bits.
  • Host routes: The destination is a host. The subnet mask of an IPv4 destination address is 32 bits or the prefix length of an IPv6 destination address is 128 bits.
Routes are classified into the following types based on whether the destination is directly connected to a router:
  • Direct routes: A router is directly connected to the network where the destination is located.
  • Indirect routes: A router is indirectly connected to the network where the destination is located.
Routes are classified into the following types based on the destination address type:
  • Unicast routes: The destination address is a unicast address.
  • Multicast routes: The destination address is a multicast address.
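The distinction between network segment routes and host routes above depends only on the prefix length, which can be checked with Python's standard ipaddress module (the prefixes below are examples):

    import ipaddress

    def route_type(prefix: str) -> str:
        net = ipaddress.ip_network(prefix, strict=False)
        host_len = 32 if net.version == 4 else 128
        return "host route" if net.prefixlen == host_len else "network segment route"

    print(route_type("10.1.1.0/24"))       # network segment route
    print(route_type("10.1.1.1/32"))       # host route
    print(route_type("2001:db8::1/128"))   # host route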
Differences between static routes and dynamic routes
  • Static routes are easy to configure, have low requirements on the system, and apply to small, simple, and stable networks. However, static routes cannot automatically adapt to network topology changes and manual intervention is required.
  • Dynamic routing protocols have their own routing algorithms. Dynamic routes can automatically adapt to network topology changes and apply to networks with a large number of Layer 3 devices. The configurations of dynamic routes are complex. Dynamic routes have higher requirements on the system than static routes do and consume both network and system resources.
Classifications of dynamic routing protocols
  • According to the application range, dynamic routing protocols are classified into the following types:
  1. Interior Gateway Protocols (IGPs): run within an AS. Common IGPs include RIP, OSPF, and IS-IS.
  2. Exterior Gateway Protocols (EGPs): run between ASs. BGP is the most frequently used EGP.
  • According to the used algorithms, dynamic routing protocols are classified into the following types:
  1. Distance-vector protocols: include RIP and BGP. BGP is also called a path-vector protocol.
  2. Link-state protocols: include OSPF and IS-IS.
OSPF is a link-state routing protocol and ensures that the network topology is loop-free. OSPF supports area division. Routers in an area use the shortest path first (SPF) algorithm to ensure that no loop exists in the area. OSPF also uses inter-area connection rules to ensure that no routing loop exists between areas.

OSPF can trigger an update to rapidly detect and advertise topology changes within an AS.

OSPF can solve common issues caused by network expansion. For example, if additional routers are deployed and the volume of routing information exchanged between them increases, OSPF can divide each AS into multiple areas and limit the range of each area. OSPF is suitable for large and medium-sized networks. In addition, OSPF supports authentication: packets between OSPF routers can be exchanged only after being authenticated.
SNMP is a network management protocol widely used in TCP/IP networks. It enables a network management workstation that runs the NMS to manage network devices.

SNMP supports the following operations:
  • The NMS sends configuration information to network devices through SNMP.
  • The NMS queries and obtains network resource information through SNMP.
  • Network devices proactively report alarm messages to the NMS so that network administrators can quickly respond to network issues.
The NMS is network management software running on a workstation. It enables network administrators to monitor and configure managed network devices.

An agent is a network management process running on a managed device. After the managed device receives a request from the NMS, the agent performs the requested operation and responds. The agent provides the following functions: collecting device status information, enabling the NMS to remotely operate devices, and sending alarm messages to the NMS.

A MIB is a virtual database of device status information maintained on a managed device. An agent searches the MIB to collect device status information.

Multiple versions of SNMP are available. Typically, these versions are as follows:
  • SNMPv1: Easy to implement but has poor security.
  • SNMPv2c: The security is low. It is not widely used.
  • SNMPv3: Defines a management framework to provide a secure access mechanism for users.
SNMPv1: The NMS on the workstation and the Agent on the managed device exchange SNMPv1 packets to manage the managed devices.

Compared with SNMPv1, SNMPv2c has greatly improved its performance, security, and confidentiality.

SNMPv3 has an enhanced security and management mechanism based on SNMPv2. The architecture used in SNMPv3 uses a modular design and enables administrators to flexibly add and modify functions. SNMPv3 is highly adaptable and applicable to multiple operating environments. It can not only manage simple networks and implement basic management functions, but also provide powerful network management functions to meet the management requirements of complex networks.
The eSight NTA provides users with reliable and convenient traffic analysis solutions, monitors network-wide traffic in real time, and provides multi-dimensional traffic analysis reports. This solution helps users detect abnormal traffic in a timely manner and learn about both network bandwidth usage and traffic distribution. In addition, it helps enterprises implement traffic visualization, fault query, and planning.

Features:
  • Traffic visualization: Monitors IP traffic in real time, displays the network traffic trend, and helps administrators detect and handle exceptions in a timely manner.
  • Exception detectability: Through the NTA, users can analyze and audit the original IP traffic to identify the root cause of abnormal traffic.
  • Proper planning: The traffic trend and customized reports provided by the NTA give administrators a reference for planning network capacity.
NetStream provides data that is useful for many purposes, including:
  • Network management and planning
  • Enterprise accounting and departmental charging
  • ISP billing report
  • Data storage
  • Data mining for marketing purposes

NetStream is implemented using the following devices:
  • NetStream Data Exporter (NDE): Samples the traffic and exports the traffic statistics.
  • NetStream Collector (NSC): Parses packets from the NDE and sends statistics to the database for the NDA to parse.
  • NetStream Data Analyzer (NDA): Analyzes and processes the statistics, generates reports, and provides a foundation for various services, such as traffic charging, network planning, and monitoring.
The NetStream system works as follows:
  • The NDE, configured with the NetStream function, periodically sends the collected traffic statistics to the NSC.
  • The NSC processes the traffic statistics and sends them to the NDA.
  • The NDA analyzes the data for applications such as charging and network planning.
To establish a connection, TCP uses a three-way handshake process. This process is used to confirm the start sequence number of the communications parties so that subsequent communications can be performed in an orderly manner. The process is as follows:
  • When the connection starts, the client sends a SYN to the server. The client sets the SYN's sequence number to a random value a.
  • After receiving the SYN, the server replies with a SYN+ACK. The server sets the ACK's acknowledgment number as the received sequence number plus one (that is, a+1), and the SYN's sequence number as a random value b.
  • After receiving the SYN+ACK, the client replies with an ACK. The client sets the ACK's acknowledgment number as the received sequence number plus one (that is, b+1).
To terminate a connection, TCP uses a four-way handshake process. The process is as follows:
  • The client sends a connection release packet (FIN=1) to the server and stops sending data. The client sets the FIN's sequence number as a (seq=a) and enters the FIN-WAIT-1 state.
  • After receiving the FIN, the server replies with an acknowledgement packet (ACK=1). The server sets the ACK's acknowledgement number as the received sequence number plus one (ack=a+1), sets the sequence number as b, and enters the CLOSE-WAIT state.
  • After receiving the ACK, the client enters the FIN-WAIT-2 state and waits for the server to send a FIN.
  • Because the connection is half-closed after the client's FIN, the server may still send remaining data. After the server finishes sending data, it sends a connection release packet (FIN=1, ack=a+1) to the client. Assume that its sequence number is seq=c. The server then enters the LAST-ACK state and waits for acknowledgement from the client.
  • After receiving the connection release packet from the server, the client replies with an acknowledgement packet (ACK=1). The client sets the acknowledgement number to ack=c+1 and sequence number to seq=a+1. The client then enters the TIME-WAIT state.
  • After receiving the ACK from the client, the server enters the CLOSED state immediately and ends the TCP connection.
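From an application's point of view, both the three-way handshake and the four-way termination are carried out by the operating system's TCP stack: connect() triggers the handshake and close() starts the teardown. A minimal Python sketch with a placeholder server address follows; the sequence numbers themselves are not visible at this level.

    import socket

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("192.0.2.10", 80))   # kernel performs SYN, SYN+ACK, ACK
    client.sendall(b"hello")

    # close() sends FIN; the kernel then walks through FIN-WAIT-1, FIN-WAIT-2,
    # and TIME-WAIT as the server's ACK and FIN are received and acknowledged.
    client.close()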
HTTP/HTTPS: refers to the Hypertext Transfer Protocol (and its secure variant, HTTPS), which is used to browse web pages.
FTP: refers to the File Transfer Protocol, which is used to upload and download file resources.
DNS: refers to the Domain Name System, which is used to resolve domain names to IP addresses.
A root server is primarily used to manage the main directory of the Internet. There are only 13 root server addresses in the world. Among the 13 nodes, 10 are in the United States, and the other three are in the UK, Sweden, and Japan. Although the network has no borders, servers still have national boundaries. All root servers are managed by the Internet Corporation for Assigned Names and Numbers (ICANN), which is authorized by the US government.

A top-level domain name server is used to store top-level domain names such as .com, .edu, and .cn.

A recursive server here is an authoritative server: it stores definitive domain name records (the mapping between a domain name and an IP address) for the zone it serves. If every person accessing the Internet were to send requests directly to an authoritative server, the server would be overloaded. Therefore, a cache server is necessary.

A cache server acts as a proxy of the authoritative server and reduces the load on the authoritative server. Each time a user accesses the Internet, a request for domain name resolution is sent to the cache server. Upon receiving such a request for the first time, the cache server requests the domain name-to-IP address mapping from an authoritative server and stores it locally. Subsequently, if a user requests the same domain name, the cache server replies directly. The IP address of a website does not change often; however, entries in the resolution table are valid only for a certain period. When the validity period expires, the entry is automatically aged, and the cache server queries the authoritative server again when the next user request arrives. This aging mechanism ensures that the domain name resolution on the cache server is updated periodically.

The resolution process of DNS is as follows:
  • The DNS client sends its query in recursive mode. The cache server (local DNS server) first checks whether it has a cached resolution for the domain name.
  • If there is no local cache entry, the query is sent to a root server. After receiving the www.vmall.com request, the root server checks which server is authoritative for .com and returns the IP address of the .com top-level domain name server.
  • The cache server then sends the www.vmall.com resolution request to the top-level domain name server. After receiving the request, the top-level domain name server returns the IP address of the next-level (vmall.com) recursive server.
  • The cache server continues to send the www.vmall.com resolution request to the recursive server. After receiving the request, the recursive server returns the resolved address of www.vmall.com. If a domain name has many levels, there may also be multiple levels of recursive servers.
  • After obtaining the IP address of www.vmall.com, the cache server sends the IP address to the client and caches the IP address locally.
  • If a client requests the domain name resolution of www.vmall.com again, the cache server directly responds with the IP address.
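From the client's point of view, all of these steps are hidden behind a single stub-resolver call; the configured cache (local DNS) server performs the recursion described above. A minimal example:

    import socket

    # The query goes to the local (cache) DNS server, which walks the
    # root, top-level, and recursive/authoritative servers as needed.
    print(socket.gethostbyname("www.vmall.com"))   # prints one resolved IPv4 address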
When FTP is used to transfer files, two TCP connections are used. The first is the control connection between the FTP client and the FTP server. The FTP server listens on port 21 and waits for the FTP client to send a connection request. The FTP client opens a random port and sends a connection setup request to the FTP server. The control connection is used to transfer control commands between the server and the client.

The second is the data connection between the FTP client and the FTP server. The server uses TCP port 20 to establish a data connection with the client. Generally, the server actively establishes or interrupts data connections.

Because FTP is a multi-channel protocol, a random port is used to establish the data channel. If a firewall exists, the channel may fail to be set up. For details, see the following sections.
In active mode, if a firewall is deployed, the data connection may fail to be established because it is initiated by the server. Passive mode solves this issue. The active mode facilitates the management of the FTP server but impairs the management of the client. The opposite is true in the passive mode.

By default, port 21 of the server is used to transmit control commands, and port 20 is used to transmit data.

The procedure for setting up an FTP connection in active mode is as follows:
  • The server listens on port 21 and waits to set up a control connection with the client.
  • The client initiates a control connection setup request and the server responds.
  • The client sends a PORT command through the control connection to notify the server of the temporary port number used for the client data connection.
  • The server uses port 20 to establish a data connection with the client.
The procedure for setting up an FTP connection in passive mode is as follows:
  • The server listens on port 21 and waits to set up a control connection with the client.
  • The client initiates a control connection setup request and the server responds.
  • The client sends the PASV command through the control connection to notify the server that the client is in passive mode.
  • The server responds and informs the client of the temporary port number used for the data connection.
  • A data connection is set up between the client and the temporary port of the server.
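Python's standard ftplib client follows this sequence: the control connection is opened to port 21, and set_pasv() chooses between passive (default) and active data connections. A sketch with placeholder host and credentials:

    from ftplib import FTP

    ftp = FTP()
    ftp.connect("ftp.example.com", 21)   # placeholder host; control connection to port 21
    ftp.login("user", "password")        # placeholder credentials
    ftp.set_pasv(True)                   # passive mode: server announces the data port
    print(ftp.nlst())                    # directory listing travels over the data connection
    ftp.quit()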
WWW is short for World Wide Web, also known as 3W or the Web. Hypertext is an information architecture that links different parts of a document through keywords so that information can be browsed interactively. Hypermedia is the integration of hypertext and multimedia.

The Internet uses the combination of hypertext and hypermedia to extend information links across the entire Internet. The Web is a hypertext information system: it allows readers to jump from one position in the text to another instead of reading in a fixed order. Multi-linking is its distinguishing feature.

HTTP relies on TCP for connection-oriented transport but has no encryption or verification mechanism of its own, so its security is insufficient. HTTPS is the secure version of HTTP and supports encryption. However, HTTPS can also be used to hide malicious content that security devices cannot inspect, which poses security risks on a network.
HTTP is the most widely used network protocol on the Internet. HTTP was originally developed to provide a method for publishing and receiving HTML pages. Resources requested by HTTP or HTTPS are identified by Uniform Resource Identifiers (URIs).

HTTP working process:
  • The client (browser) sends a connection request to the web server.
  • The server accepts the connection request and establishes a connection. (Steps 1 and 2 correspond to the TCP three-way handshake.)
  • The client sends HTTP commands such as GET (HTTP request packet) to the server through this connection.
  • The server receives the command and transmits the required data to the client (HTTP response packets) based on the command.
  • The client receives data from the server.
  • The server automatically closes the connection after the data is sent (TCP four-way handshake).
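The same exchange can be reproduced with Python's standard http.client module (the host below is a placeholder); the TCP connection is established before the GET is sent and closed after the response is read.

    import http.client

    conn = http.client.HTTPConnection("www.example.com", 80)   # placeholder web server
    conn.request("GET", "/")              # HTTP request packet
    response = conn.getresponse()         # HTTP response packet
    print(response.status, response.reason)
    print(len(response.read()), "bytes of body")
    conn.close()                          # underlying TCP connection is closed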
The mail sending process is as follows:
  • The PC encapsulates the email content into an SMTP message and sends it to the sender's SMTP server.
  • The sender's SMTP server forwards the email to the recipient's SMTP server for storage.
  • After receiving the request from the user, the POP3 server obtains the email stored on the SMTP server.
  • The POP3 server encapsulates the email into a POP3 message and sends it to the PC.

The SMTP server, POP3 server, and IMAP server are software that provides mail services for users and are deployed on hardware servers.

The differences between IMAP and POP3 are as follows: When POP3 is used, after the client software downloads unread mails to the PC, the mail server deletes the mails. If IMAP is used, users can directly manage mails on the server without downloading all emails to the local PC.
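Python's standard smtplib and poplib modules mirror this send/retrieve split. The servers, addresses, and credentials below are placeholders only:

    import poplib
    import smtplib
    from email.message import EmailMessage

    # Sending: the client hands the message to its SMTP server.
    msg = EmailMessage()
    msg["From"] = "alice@example.com"     # placeholder sender
    msg["To"] = "bob@example.com"         # placeholder recipient
    msg["Subject"] = "Hello"
    msg.set_content("Mail body")
    with smtplib.SMTP("smtp.example.com", 25) as smtp:   # placeholder SMTP server
        smtp.send_message(msg)

    # Retrieving: the client downloads stored mail from its POP3 server.
    pop = poplib.POP3("pop.example.com", 110)            # placeholder POP3 server
    pop.user("bob@example.com")
    pop.pass_("password")                                # placeholder credentials
    count, _ = pop.stat()
    print(count, "messages waiting")
    pop.quit()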

Ref : [1]