Network Fundamentals (5)

 As a NetAcad student, you may already be well-versed in network operations. If you have been paying attention to the changing world of networks, then more of your time recently may have been spent learning about coding. If so, good for you! You might need this module to refresh your memory about network fundamentals.

Why does a developer need to know about the network that all application traffic travels through? Why does a network engineer, who is already familiar with the intricate details and architecture decisions behind network traffic, need to know about application development? One without the other doesn't quite get the job done when the job is managing and deploying applications at scale while meeting performance requirements and expectations. Let's lay the network foundation and study beyond the basics.

Introduction to Network Fundamentals

Overview

End users of a network just want it to work. Developers are more curious and often willing to troubleshoot their own connectivity issues. Network administrators benefit from methods that automatically and programmatically manage and deploy network configurations, including day zero scenarios.

Performance is top-of-mind for everyone, regardless of their perspective. With automation you can deploy faster. With application monitoring you can troubleshoot faster. Knowing how to troubleshoot network connectivity is crucial to both developers and administrators, and quicker resolution of problems is critical for everyone.

This topic looks at the fundamental pieces of a network. You want to know what standards are used for networks to make sure you have the right vocabulary to talk about network problems or solutions with anyone on any team. A high-level understanding of the layers that network traffic goes through gives you a head start on the knowledge you need to work on networks, applications, and automation.

What Is a Network?

A network consists of end devices such as computers, mobile devices, and printers. These devices are connected by networking devices such as switches and routers. The network enables the devices to communicate with one another and share data. There are many ways to connect to the network. The most common local area network (LAN) methods, specified by the Institute of Electrical and Electronics Engineers (IEEE), are wired Ethernet LANs (IEEE 802.3) and wireless LANs (IEEE 802.11). These end-devices connect to the network using an Ethernet or wireless network interface card (NIC).

Ethernet NICs connect to the network via registered jack 45 (RJ-45) ports and twisted pair Ethernet cables. Wireless NICs connect to the network via wireless radio signals in the 2.4 GHz or more commonly 5 GHz frequency bands.

Protocol Suites

A protocol suite is a set of protocols that work together to provide comprehensive network communication services. Since the 1970s there have been several different protocol suites, some developed by a standards organization and others developed by various vendors. During the evolution of network communications and the internet there were several competing protocol suites:

  • Internet Protocol Suite or TCP/IP - The Transmission Control Protocol/Internet Protocol (TCP/IP) protocol model for internetwork communications was created in the early 1970s and is sometimes referred to as the internet model. This is the most common and relevant protocol suite used today. The TCP/IP protocol suite is an open standard protocol suite maintained by the Internet Engineering Task Force (IETF).
  • Open Systems Interconnection (OSI) protocols - This is a family of protocols developed jointly in 1977 by the International Organization for Standardization (ISO) and the International Telecommunications Union (ITU). The OSI protocols include a seven-layer model called the OSI reference model. The OSI reference model categorizes the functions of its protocols. Today OSI is mainly known for its layered model. The OSI protocols have largely been replaced by TCP/IP.
  • AppleTalk - A short-lived proprietary protocol suite released by Apple Inc. in 1985 for Apple devices. In 1995, Apple adopted TCP/IP to replace AppleTalk.
  • Novell NetWare - A short-lived proprietary protocol suite and network operating system developed by Novell Inc. in 1983 using the IPX network protocol. In 1995, Novell adopted TCP/IP to replace IPX.

Today, the OSI model and the TCP/IP model, shown in the figure, are used to describe network operations.


Both the OSI and TCP/IP models use layers to describe the functions and services that occur at each layer. These models provide consistency within all types of network protocols and services by describing what must be done at a particular layer, but not prescribing how it should be accomplished. Each model also describes the interaction of a layer with the layers directly above and below it.

Both models can be used with the following differences:

  • OSI model numbers each layer.
  • TCP/IP model uses a single application layer to refer to the OSI application, presentation, and session layers.
  • TCP/IP model uses a single network access layer to refer to the OSI data link and physical layers.
  • TCP/IP model refers to the OSI network layer as the Internet layer.

The figure shows the OSI model on the left and the TCP/IP model on the right. The OSI model is labeled from the top down with the numbers 7 down to 1 and the following layers: application, presentation, session, transport, network, data link, and physical. The top three OSI layers align with the TCP/IP application layer, the transport layers of each model align with each other, the OSI network layer aligns with the internet layer, and OSI layers 1 and 2 align with the TCP/IP network access layer.

OSI Layer Data Communication

The form that a piece of data takes at any layer is called a protocol data unit (PDU). During encapsulation, each succeeding layer encapsulates the PDU that it receives from the layer above in accordance with the protocol being used. When messages are sent on a network, the encapsulation process works from top to bottom, as shown in the figure.


Data Encapsulation at Each Layer of the TCP/IP model


At each stage of the process, a PDU has a different name to reflect its new functions. Typically, the PDUs are named according to the following layers:

  • Data - The general term for the PDU used at the application layer
  • Segment - Transport layer PDU
  • Packet - Network layer PDU
  • Frame - Data link layer PDU
  • Bits - Physical layer PDU used when physically transmitting data over the medium

At each layer, the upper layer information is considered data within the encapsulated protocol. For example, the transport layer segment is considered data within the internet layer packet. The packet is then considered data within the link layer frame.

An advantage with layering the data transmission process is the abstraction that can be implemented with it. Different protocols can be developed for each layer and interchanged as needed. As long as the protocol provides the functions expected by the layer above, the implementation can be abstracted and hidden from the other layers. Abstraction of the protocol and services in these models is done through encapsulation.

In general, an application uses a set of protocols to send data from one host to another. On the sending host, the data moves down the layers from top to bottom and is encapsulated at each layer. On the receiving host, the data follows the reverse path from the bottom layer all the way to the top layer and is de-encapsulated at each layer.

At each layer, protocols perform the functionality required by that specific layer. The following describes the functionality of each layer of the OSI model, starting from layer 1.

Note: An OSI model layer is often referred to by its number.

Physical Layer (Layer 1)

This layer is responsible for the transmission and reception of raw bit streams. At this layer, the data to be transmitted is converted into electrical, radio, or optical signals. Physical layer specifications define voltage levels, physical data rates, modulation scheme, pin layouts for cable connectors, cable specification, and more. Ethernet, Bluetooth, and Universal Serial Bus (USB) are examples of protocols that have specifications for the physical layer.

Data Link Layer (Layer 2)

This layer provides NIC-to-NIC communications on the same network. The data link layer specification defines the protocols to establish and terminate connections, as well as the flow control between two physically connected devices. The IEEE has several protocols defined for the data link layer. The IEEE 802 family of protocols, which includes Ethernet and wireless LANs (WLANs), subdivide this layer into two sublayers:

  • Medium Access Control (MAC) sublayer – The MAC sublayer is responsible for controlling how devices in a network gain access to the transmission medium and obtain permission to transmit data.
  • Logical Link Control (LLC) sublayer – The LLC sublayer is responsible for identifying and encapsulating network layer protocols, error checking controls, and frame synchronization.

IEEE 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 ZigBee protocols operate at the data link layer. The MAC sublayer within the data link layer is critically important in broadcast environments (such as wireless transmission) in which access to the transmission medium has to be carefully controlled.

Network Layer (Layer 3)

This layer provides addressing and routing services to allow end devices to exchange data across networks. IP version 4 (IPv4) and IP version 6 (IPv6) are the principal network layer addressing protocols. Protocols such as Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP) provide routing services.

To accomplish end-to-end communications across network boundaries, network layer protocols perform two basic functions:

  • Addressing - All devices must be configured with a unique IP address for identification on the network.
  • Routing - Routing protocols provide services to direct the packets to a destination host on another network. To travel to other networks, the packet must be processed by a router. The role of the router is to select the best path and forward packets to the destination host in a process known as routing. A packet may cross many routers before reaching the destination host. Each router a packet crosses to reach the destination host is called a hop.

The network layer also includes the Internet Control Message Protocol (ICMP), which provides messaging services such as verifying connectivity with the ping command or discovering the path between source and destination with the traceroute command.
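As a developer-oriented illustration, the following is a minimal sketch of an ICMP connectivity check performed from a script by calling the operating system's ping command. The target address is a placeholder, and the -c flag is an assumption for Unix-like systems (Windows uses -n instead).

```python
# Minimal connectivity check by invoking the operating system's ping command.
# Assumes a Unix-like "ping" that accepts -c (count); on Windows use -n instead.
import subprocess

def ping(host: str, count: int = 3) -> bool:
    """Return True if the host answers ICMP echo requests."""
    result = subprocess.run(["ping", "-c", str(count), host],
                            capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    target = "198.51.100.10"  # hypothetical address used only for illustration
    print(f"{target} reachable: {ping(target)}")
```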

Transport Layer (Layer 4)

The transport layer defines services to segment, transfer, and reassemble the data for individual communications between the end devices. This layer has two protocols: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).

TCP provides reliability and flow control using these basic operations:

  • Number and track data segments transmitted to a specific host from a specific application.
  • Acknowledge received data.
  • Retransmit any unacknowledged data after a certain amount of time.
  • Sequence data that might arrive in the wrong order.
  • Send data at an efficient rate that is acceptable by the receiver.

TCP is used with applications such as databases, web browsers, and email clients. TCP requires that all data that is sent arrives at the destination in its original condition. Any missing data could corrupt a communication, making it either incomplete or unreadable.

UDP is a simpler transport layer protocol than TCP. It does not provide reliability and flow control, which means it requires fewer header fields. UDP datagrams can be processed faster than TCP segments.

UDP is preferable for applications such as Voice over IP (VoIP). Acknowledgments and retransmission would slow down delivery and make the voice conversation unacceptable. UDP is also used by request-and-reply applications where the data is minimal, and retransmission can be done quickly. Domain Name Service (DNS) uses UDP for this type of transaction.

Application developers must choose which transport protocol type is appropriate based on the requirements of the applications. Video may be sent over TCP or UDP. Applications that stream stored audio and video typically use TCP. The application uses TCP to perform buffering, bandwidth probing, and congestion control, in order to better control the user experience.
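To make the developer's choice concrete, here is a minimal sketch of the difference between the two transport protocols using Python's standard socket module. The server address and port are placeholders, not part of the course material.

```python
# Sketch of the developer-facing difference between TCP and UDP sockets.
import socket

SERVER = ("192.0.2.50", 9000)   # hypothetical server address and port

# TCP: connection-oriented, reliable byte stream
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
    tcp_sock.connect(SERVER)               # three-way handshake happens here
    tcp_sock.sendall(b"hello over TCP")    # delivery and ordering are guaranteed
    reply = tcp_sock.recv(1024)

# UDP: connectionless, best-effort datagrams
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
    udp_sock.sendto(b"hello over UDP", SERVER)   # no handshake, no delivery guarantee
```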

Session Layer (Layer 5)

The session layer provides mechanisms for applications to establish sessions between two hosts. Over these end-to-end sessions, different services can be offered. Session layer functions keep track of whose turn it is to transmit data, make sure two parties are not attempting to perform the same operation simultaneously, pick up a transmission that failed from the point it failed, and end the transmission. The session layer is explicitly implemented in applications that use remote procedure calls (RPCs).

Presentation Layer (Layer 6)

The presentation layer specifies context between application-layer entities. The OSI model layers discussed so far have mostly dealt with moving bits from a source host to a destination host. The presentation layer is concerned with the syntax and the semantics of the transmitted information and how this information is organized. Differentiation is done at this layer between the types of data encoded for transmission, for example text files, binaries, or video files.

Application Layer (Layer 7)

The application layer is the OSI layer that is closest to the end user and contains a variety of protocols usually needed by users. One application protocol that is widely used is HyperText Transfer Protocol (HTTP) and its secure version HTTPS. HTTP/HTTPS is at the foundation of the World Wide Web (WWW). Exchanging information between a client browser and a web server is done using HTTP. When a client browser wants to display a web page, it sends the name of the page to the server hosting the page using HTTP. The server sends the web page back over HTTP. Other protocols for file transfer, email, and more have been developed over the years.

Some other examples of protocols that operate at the application layer include File Transfer Protocol (FTP) used for transferring files between hosts and Dynamic Host Configuration Protocol (DHCP) used for dynamically assigning IP addresses to hosts.
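As a small illustration of an application layer exchange, the following sketch retrieves a page over HTTP using Python's standard library (example.com is a reserved documentation domain used only as an example).

```python
# Minimal HTTP request/response exchange at the application layer.
from urllib.request import urlopen

with urlopen("http://example.com/") as response:
    print(response.status)       # HTTP status code, e.g. 200
    print(response.read(200))    # first bytes of the returned HTML page
```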


Data Flow in Layered Models


End devices implement protocols for the entire "stack" of layers. The source of the message (data) encapsulates the data with the appropriate protocol header/trailer at each layer, while the final destination de-encapsulates each protocol header/trailer to receive the message (data).

The network access layer (shown as "Link" in the figure above) operates at the local network connection to which an end-device is connected. It deals with moving frames from one NIC to another NIC on the same network. Ethernet switches operate at this layer.

The internet layer is responsible for sending data across potentially multiple distant networks. Connecting physically disparate networks is referred to as internetworking. Routing protocols are responsible for sending data from a source network to a destination network. Routers are devices that operate at the internet layer and perform the routing function. Routers are discussed in more detail later in this module. IP operates at the internet layer in the TCP/IP reference model and performs the two basic functions, addressing and routing.

Hosts are identified by their IP address. To identify host computers and locate them on the network, two addressing systems are currently supported. IPv4 uses 32-bit addresses, which means that approximately 4.3 billion devices can be identified. Today there are many more than 4.3 billion hosts attached to the internet, so a new addressing system was developed in the late 1990s. IPv6 uses 128-bit addresses. It was standardized in 1998 and implementation started in 2006. The IPv6 128-bit address space provides 340 undecillion addresses. Both IPv4-addressed and IPv6-addressed hosts are currently supported on the internet.

The second function of the internet layer is routing packets. This function means sending packets from source to destination by forwarding them to the next router that is closer to the final destination. With this functionality, the internet layer makes possible internetworking, connecting different IP networks, and essentially establishing the internet. The IP packet transmission at the internet layer is best effort and unreliable. Any retransmission or error corrections are to be implemented by higher layers at the end devices, typically TCP.


Planes of a Router

The logic of a router is managed by three functional planes: the management plane, control plane, and data plane. Each provides different functionality:

  • Management Plane - The management plane manages traffic destined for the network device itself. Examples include Secure Shell (SSH) and Simple Network Management Protocol (SNMP).
  • Control Plane - The control plane of a network device processes the traffic that is required to maintain the functionality of the network infrastructure. The control plane consists of applications and protocols between network devices, such as routing protocols OSPF, BGP, and Enhanced Interior Gateway Routing Protocol (EIGRP). The control plane processes data in software.
  • Data Plane - The data plane is the forwarding plane, which is responsible for the switching of packets in hardware, using information from the control plane. The data plane processes data in hardware.

Network Interface Layer

Understanding the Network Interface Layer

A network consists of end devices such as computers, mobile devices, and printers that are connected by networking devices such as switches and routers. The network enables the devices to communicate with one another and share data, as shown in the figure.


In the figure above, data from the student computer to the instructor computer travels through the switch to the router (FastEthernet 1/0 interface), then to the next switch (FastEthernet 0/0 interface), and finally to the instructor computer.

All hosts and network devices that are interconnected, within a small physical area, form a LAN. Network devices that connect LANs, over large distances, form a wide area network (WAN).


Ethernet

Connecting devices within a LAN requires a collection of technologies. The most common LAN technology is Ethernet. Ethernet is not just a type of cable or protocol. It is a network standard published by the IEEE. Ethernet is a set of guidelines and rules that enable various network components to work together. These guidelines specify cabling and signaling at the physical and data link layers of the OSI model. For example, Ethernet standards recommend different types of cable and specify maximum segment lengths for each type.

There are several types of media that the Ethernet protocol works with: coaxial cable, twisted-pair copper cable, and single-mode and multimode fiber-optic cable.

Bits that are transmitted over an Ethernet LAN are organized into frames. The Ethernet frame format is shown in the figure.

The figure shows the parts of the Ethernet frame: Preamble, SFD, Destination MAC Address, Source MAC Address, EtherType, Payload, and FCS.

Ethernet Frame


In Ethernet terminology, the container into which data is placed for transmission is called a frame. The frame contains header information, trailer information, and the actual data that is being transmitted.

The figure above shows the most important fields of the Ethernet frame:

  • Preamble - This field consists of seven bytes of alternating 1s and 0s that are used to synchronize the signals of the communicating computers.
  • Start of frame delimiter (SFD) – This is a 1-byte field that marks the end of the preamble and indicates the beginning of the Ethernet frame.
  • Destination MAC Address - The destination address field is six bytes (48 bits) long and contains the address of the NIC on the local network to which the encapsulated data is being sent.
  • Source MAC Address - The source address field is six bytes (48 bits) long and contains the address of the NIC of the sending device.
  • Type - This field contains a code that identifies the network layer protocol. For example, if the network layer protocol is IPv4 then this field has a value of 0x0800, and for IPv6 it has a value of 0x86DD.
  • Data - This field contains the data that is received from the network layer on the transmitting computer. This data is then sent to the same protocol on the destination computer. If the data is shorter than the minimum length of 46 bytes, a string of extraneous bits is used to pad the field.
  • Frame Check Sequence (FCS) - The FCS field includes a checking mechanism to ensure that the packet of data has been transmitted without corruption.
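To relate these fields to actual bytes, here is a minimal sketch that unpacks the destination MAC, source MAC, and EtherType from the first 14 bytes of a frame. The frame bytes shown are fabricated for illustration, and the preamble and SFD are assumed to have already been stripped by the NIC.

```python
# Unpack destination MAC, source MAC, and EtherType from an Ethernet header.
import struct

frame = bytes.fromhex("ffffffffffff" "005056c00001" "0800") + b"payload..."
dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
print("dst:", dst.hex(":"))          # ff:ff:ff:ff:ff:ff (broadcast)
print("src:", src.hex(":"))          # 00:50:56:c0:00:01
print("EtherType:", hex(ethertype))  # 0x800 -> IPv4
```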

MAC addresses are used in transporting a frame across a shared local media. These are NIC-to-NIC communications on the same network. If the data (encapsulated IP packet) is for a device on another network, the destination MAC address will be that of the local router (default gateway). The Ethernet header and trailer will be de-encapsulated by the router. The packet will be encapsulated in a new Ethernet header and trailer using the MAC address of the router's egress interface as the source MAC address. If the next hop is another router, then the destination MAC address will be that of the next hop router. If the router is on the same network as the destination of the packet, the destination MAC address will be that of the end device.

MAC Addresses

All network devices on the same network must have a unique MAC address. The MAC address is the means by which data is directed to the proper destination device. The MAC address of a device is an address that is burned into the NIC. Therefore, it is also referred to as the physical address or burned in address (BIA).

A MAC address is composed of 12 hexadecimal digits, which means it has 48 bits. There are two main components of a MAC address. The first 24 bits constitute the organizationally unique identifier (OUI). The last 24 bits constitute the vendor-assigned, end-station address, as shown in the figure.

  • 24-bit OUI - The OUI identifies the manufacturer of the NIC. The IEEE regulates the assignment of OUI numbers. Within the OUI, there are 2 bits that have meaning only when used in the destination address (DA) field of the Ethernet header.
  • 24-bit, vendor-assigned, end-station address - This portion uniquely identifies the Ethernet hardware.

A MAC address can be displayed in any of the following ways:

  • 0050.56c0.0001
  • 00:50:56:c0:00:01
  • 00-50-56-c0-00-01

The figure shows the MAC address 00-50-56-c0-00-01 across the top. Under it are two equal-sized boxes labeled organizationally unique identifier (OUI) and network interface controller (NIC) specific. A line labeled three bytes appears under the OUI box, and another line labeled three bytes appears under the NIC-specific box.

MAC Address Format
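The three notations listed above can be normalized programmatically. The following sketch (the function name is arbitrary) strips the separators and splits out the OUI and NIC-specific portions.

```python
# Normalize common MAC notations and split out the OUI and NIC-specific parts.
def normalize_mac(mac: str) -> str:
    """Return a MAC address as 12 lowercase hex digits with no separators."""
    digits = mac.lower().replace(":", "").replace("-", "").replace(".", "")
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError(f"not a valid MAC address: {mac}")
    return digits

for mac in ("0050.56c0.0001", "00:50:56:c0:00:01", "00-50-56-c0-00-01"):
    digits = normalize_mac(mac)
    print(mac, "-> OUI:", digits[:6], "NIC-specific:", digits[6:])
```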


Destination MAC addresses include the three major types of network communications:

  • Unicast - Communication in which a frame is sent from one host and is addressed to one specific destination. In a unicast transmission, there is only one sender and one receiver. Unicast transmission is the predominant form of transmission on LANs and within the internet.
  • Broadcast - Communication in which a frame is sent from one address to all other addresses. In this case, there is only one sender, but the information is sent to all of the connected receivers. Broadcast transmission is essential for sending the same message to all devices on the LAN. Broadcasts are typically used when a device is looking for the MAC address of a destination device.
  • Multicast - Communication in which information is sent to a specific group of devices or clients. Unlike broadcast transmission, in multicast transmission, clients must be members of a multicast group to receive the information.

Switching

The switch builds and maintains a table (called the MAC address table) that matches the destination MAC address with the port that is used to connect to a node. The MAC address table is stored in the Content Addressable Memory (CAM), which enables very fast lookups.

The switch dynamically builds the MAC address table by examining the source MAC address of frames received on a port. The switch forwards frames by searching for a match between the destination MAC address in the frame and an entry in the MAC address table. Depending on the result, the switch will decide whether to filter or flood the frame. If the destination MAC address is in the MAC address table, the switch forwards the frame out the specified port. Otherwise, it floods the frame out all ports except the incoming port.

Switching Process


In the figure, four topologies are shown. Each topology has a switch and three hosts (HOST A, HOST B, and HOST C). The following describes the switching process illustrated in the figure as Host A sends a frame to Host B:

  1. In the first topology, top left, the switch receives a frame from Host A on port 1.
  2. The switch enters the source MAC address and the switch port that received the frame into the MAC address table.
  3. The switch checks the table for the destination MAC address. Because the destination address is not known, the switch floods the frame to all of the ports except the port on which it received the frame. In the second topology, top right, Host B, the destination MAC address, receives the Ethernet frame.
  4. In the third topology, bottom left, Host B replies to Host A with a frame whose destination MAC address is the MAC address of Host A.
  5. The switch enters the source MAC address of Host B and the port number of the switch port that received the frame into the MAC table. The destination address of the frame and its associated port is known in the MAC address table.
  6. In the fourth topology, bottom right, the switch can now directly forward this frame to Host A out port 1. Frames between the source and destination devices are sent without flooding because the switch has entries in the MAC address table that identify the associated ports.
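The learn/filter/flood behavior described above can be modeled in a few lines. This is a toy sketch, with port numbers and MAC addresses chosen only for illustration.

```python
# Toy model of a switch's learn/filter/flood decision.
mac_table = {}  # learned MAC address -> switch port

def switch_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port                        # learn the source MAC
    if dst_mac in mac_table:                            # known destination: filter
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != in_port]       # unknown destination: flood

ports = [1, 2, 3]
print(switch_frame("aaaa.aaaa.aaaa", "bbbb.bbbb.bbbb", 1, ports))  # flood -> [2, 3]
print(switch_frame("bbbb.bbbb.bbbb", "aaaa.aaaa.aaaa", 2, ports))  # filter -> [1]
```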

Virtual LANs (VLANs)

A virtual LAN (VLAN) is used to segment different Layer 2 broadcast domains on one or more switches. A VLAN groups devices on one or more LANs that are configured to communicate as if they were attached to the same wire, when in fact they are located on a number of different LAN segments. Because VLANs are based on logical instead of physical connections, they are extremely flexible.

For example, in the figure, the network administrator created three VLANs based on the function of its users: engineering, marketing, and accounting. Notice that the devices do not need to be on the same floor.

VLANs



VLANs define Layer 2 broadcast domains. Broadcast domains are typically bounded by routers because routers do not forward broadcast frames. VLANs on Layer 2 switches create broadcast domains based on the configuration of the switch. Switch ports are assigned to a VLAN. A Layer 2 broadcast received on a switch port is only flooded out onto other ports belonging to the same VLAN.

You can define one or many VLANs within a switch. Each VLAN you create in the switch defines a new broadcast domain. Traffic cannot pass directly to another VLAN (between broadcast domains) within the switch or between two switches. To interconnect two different VLANs, you must use a router or Layer 3 switch.

VLANs are often associated with IP networks or subnets. For example, all of the end stations in a particular IP subnet belong to the same VLAN. Traffic between VLANs must be routed. You must assign a VLAN membership (VLAN ID) to a switch port on a port-by-port basis (this is known as interface-based or static VLAN membership). You can set various parameters when you create a VLAN on a switch, including VLAN number (VLAN ID) and VLAN name.

Switches support 4096 VLANs in compliance with the IEEE 802.1Q standard which specifies 12 bits (2^12=4096) for the VLAN ID.

A trunk is a point-to-point link between two network devices that carries more than one VLAN. A VLAN trunk extends VLANs across an entire network. IEEE 802.1Q defines a "tag" that is inserted in the frame containing the VLAN ID. This tag is inserted when the frame is forwarded by the switch on its egress interface. The tag is removed by the switch that receives the frame. This is how switches know of which VLAN the frame is a member.
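The 4-byte 802.1Q tag itself is easy to construct. The following sketch packs the TPID (0x8100) and the tag control information (3 priority bits, 1 DEI bit, and the 12-bit VLAN ID), showing where the VLAN ID sits in a tagged frame; it is an illustration, not a full frame builder.

```python
# Build the 4-byte 802.1Q tag: TPID (0x8100) + PCP/DEI/VLAN ID.
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits (0-4095)")
    tci = (priority << 13) | (dei << 12) | vlan_id   # tag control information
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(10).hex())   # 8100000a -> VLAN 10
```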

These VLANs are organized into three ranges: reserved, normal, and extended. Some of these VLANs are propagated to other switches in the network when you use the VLAN Trunking Protocol (VTP).

VLANs        Range     Usage
0, 4095      Reserved  For system use only. You cannot see or use these VLANs.
1            Normal    Cisco default. You can use this VLAN, but you cannot delete it.
2 - 1001     Normal    Used for Ethernet VLANs; you can create, use, and delete these VLANs.
1002 - 1005  Normal    Cisco defaults for FDDI and Token Ring. You cannot delete VLANs 1002-1005.
1006 - 4094  Extended  For Ethernet VLANs only.


Internetwork Layer

IPv4 Addresses

Every device on a network has a unique IP address. An IP address and a MAC address are used for access and communication across all network devices. Without IP addresses there would be no internet.

Despite the introduction of IPv6, IPv4 continues to route most internet traffic today. During recent years, more traffic is being sent over IPv6 due to the exhaustion of IPv4 addresses and the proliferation of mobile and Internet of Things (IoT) devices.

An IPv4 address is 32 bits, with each octet (8 bits) represented as a decimal value separated by a dot. This representation is called dotted decimal notation. For example, 192.168.48.64 and 64.100.36.254 are IPv4 addresses represented in dotted decimal notation. The table shows the binary value for each octet.

Value   192        168        48         64
Binary  11000000   10101000   00110000   01000000

Value   64         100        36         254
Binary  01000000   01100100   00100100   11111110
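The conversion in the table above can be reproduced with a small helper:

```python
# Convert a dotted decimal IPv4 address to its binary representation.
def to_binary(ipv4: str) -> str:
    return ".".join(f"{int(octet):08b}" for octet in ipv4.split("."))

print(to_binary("192.168.48.64"))   # 11000000.10101000.00110000.01000000
print(to_binary("64.100.36.254"))   # 01000000.01100100.00100100.11111110
```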

The IPv4 subnet mask (or prefix length) is used to differentiate the network portion from the host portion of an IPv4 address. A subnet mask contains four bytes and can be written in the same format as an IPv4 address. In a valid subnet mask, the most significant (leftmost) bits must be set to 1. These bits are the network portion of the subnet mask. The bits set to 0 are the host portion of the mask.

For this example, look at 203.0.113.0/24. The network's IPv4 address is 203.0.113.0 with a subnet mask of 255.255.255.0. The last octet of the subnet mask has all 8 bits available for host IPv4 addresses, which means that on the network 203.0.113.0/24 there can be up to 2^8 (256) possible addresses.

Two IPv4 addresses are in use by default and cannot be assigned to devices:

  • 203.0.113.0 is the network address
  • 203.0.113.255 is the broadcast address

Therefore, there are 254 (256 - 2) host IP addresses available, and the range of addresses available for hosts would be 203.0.113.1 to 203.0.113.254.
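These numbers can be verified with Python's standard ipaddress module:

```python
# Verify the network, broadcast, and usable host range for 203.0.113.0/24.
import ipaddress

net = ipaddress.ip_network("203.0.113.0/24")
hosts = list(net.hosts())
print(net.network_address)                      # 203.0.113.0
print(net.broadcast_address)                    # 203.0.113.255
print(hosts[0], "to", hosts[-1], len(hosts))    # 203.0.113.1 to 203.0.113.254, 254
```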

There are three types of IPv4 addresses:

  • Network address - A network address is an address that represents a specific network and contains all 0 bits in the host portion of the address.
  • Host addresses - Host addresses are addresses that can be assigned to a device such as a host computer, laptop, smart phone, web camera, printer, router, etc. Host addresses contain at least one 0 bit and at least one 1 bit in the host portion of the address.
  • Broadcast address - A broadcast address is an address that is used when it is required to reach all devices on the IPv4 network. It contains all 1 bits in the host portion of the address.

A network can be divided into smaller networks called subnets. Subnets can be provided to individual organizational units, such as teams or business departments, to simplify the network and potentially make departmental data private. The subnet provides a specific range of IP addresses for a group of hosts to use. Every network is typically a subnet of a larger network.

For example, the IPv4 network address is 192.168.2.0/24. The /24 (255.255.255.0) subnet mask means that the last octet has 8 bits available for host addresses. You can borrow bits from the host portion to create subnets. For example, you need to use three bits to create eight subnets (2^3 = 8). This leaves the remaining five bits for the hosts (2^5 = 32).

This can be more easily visualized when showing the subnet mask in binary format.

  • /24 subnet mask: 11111111.11111111.11111111.00000000
  • Modified /27 subnet mask: 11111111.11111111.11111111.11100000

Because you need to create eight subnets, you designate three bits in the last octet for subnet use. The remaining five bits are for the hosts, and provide each subnet with 32 IP addresses.

The following table lists the network address, broadcast address, and available host address range for each subnet.

Subnet    Network Address  Broadcast Address  Available Host Address Range
Subnet 1  192.168.2.0      192.168.2.31       192.168.2.1 to 192.168.2.30
Subnet 2  192.168.2.32     192.168.2.63       192.168.2.33 to 192.168.2.62
Subnet 3  192.168.2.64     192.168.2.95       192.168.2.65 to 192.168.2.94
Subnet 4  192.168.2.96     192.168.2.127      192.168.2.97 to 192.168.2.126
Subnet 5  192.168.2.128    192.168.2.159      192.168.2.129 to 192.168.2.158
Subnet 6  192.168.2.160    192.168.2.191      192.168.2.161 to 192.168.2.190
Subnet 7  192.168.2.192    192.168.2.223      192.168.2.193 to 192.168.2.222
Subnet 8  192.168.2.224    192.168.2.255      192.168.2.225 to 192.168.2.254

Notice the allocation for subnets and hosts specified for each row. You should now understand how designating bits to create subnets reduces the number of hosts available for each subnet. The number of hosts available per subnet takes into account that network and broadcast addresses each require an IP address. The more bits you use to create subnets, the fewer bits you have for hosts per subnet.
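The eight /27 subnets in the table above can be regenerated with the ipaddress module:

```python
# Enumerate the eight /27 subnets of 192.168.2.0/24.
import ipaddress

for i, subnet in enumerate(ipaddress.ip_network("192.168.2.0/24").subnets(new_prefix=27), 1):
    hosts = list(subnet.hosts())
    print(f"Subnet {i}: network {subnet.network_address}, "
          f"broadcast {subnet.broadcast_address}, "
          f"hosts {hosts[0]} to {hosts[-1]}")
```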

The table below shows the various options if you have a /24 subnet mask.

Subnet Mask      Binary                               CIDR  Subnets  Hosts per Subnet
255.255.255.255  11111111.11111111.11111111.11111111  /32   None     N/A
255.255.255.254  11111111.11111111.11111111.11111110  /31   128      N/A
255.255.255.252  11111111.11111111.11111111.11111100  /30   64       2
255.255.255.248  11111111.11111111.11111111.11111000  /29   32       6
255.255.255.240  11111111.11111111.11111111.11110000  /28   16       14
255.255.255.224  11111111.11111111.11111111.11100000  /27   8        30
255.255.255.192  11111111.11111111.11111111.11000000  /26   4        62
255.255.255.128  11111111.11111111.11111111.10000000  /25   2        126
255.255.255.0    11111111.11111111.11111111.00000000  /24   1        254

Due to the depletion of IPv4 addresses, most internal networks use private IPv4 addresses (RFC 1918). Variable-length subnet masking (VLSM) can also help support more efficient use of IPv4 address space. Originally introduced when IPv4 addresses were classful (Class A, B, C), VLSM is a method of dividing a single network (or subnet) using different subnet masks to provide subnets with different numbers of host addresses.

Network Address and Prefix   RFC 1918 Private Address Range
10.0.0.0/8                   10.0.0.0 - 10.255.255.255
172.16.0.0/12                172.16.0.0 - 172.31.255.255
192.168.0.0/16               192.168.0.0 - 192.168.255.255

Devices using private IPv4 addresses are able to access the internet via Network Address Translation (NAT) and Port Address Translation (PAT). Outgoing data from your device is sent through a router, which maps your device's private IPv4 address to a public IPv4 address. When the data returns to that router, the router translates the public address back to your device's private IPv4 address and forwards the data to your device.

IPv6 Addresses

IPv6 is designed to be the successor to IPv4. IPv6 has a larger 128-bit address space, providing 340 undecillion (i.e., 340 followed by 36 zeroes) possible addresses. However, IPv6 is more than just larger addresses.

When the IETF began its development of a successor to IPv4, it used this opportunity to fix the limitations of IPv4 and included enhancements. One example is Internet Control Message Protocol version 6 (ICMPv6), which includes address resolution and address autoconfiguration features not found in ICMP for IPv4 (ICMPv4).

The depletion of IPv4 address space has been the motivating factor for moving to IPv6. As Africa, Asia, and other areas of the world become more connected to the internet, there are not enough IPv4 addresses to accommodate this growth.

IPv6 is described initially in RFC 2460. Further RFCs describe the architecture and services supported by IPv6.

The architecture of IPv6 has been designed to allow existing IPv4 users to transition easily to IPv6 while providing services such as end-to-end security, quality of service (QoS), and globally unique addresses. The larger IPv6 address space allows networks to scale and provide global reachability. The simplified IPv6 packet header format handles packets more efficiently. IPv6 prefix aggregation, simplified network renumbering, and IPv6 site multihoming capabilities provide an IPv6 addressing hierarchy that allows for more efficient routing. IPv6 supports widely deployed routing protocols such as Routing Information Protocol (RIP), Integrated Intermediate System-to-Intermediate System (IS-IS), OSPF, and multiprotocol BGP (mBGP). Other available features include stateless autoconfiguration and an increased number of multicast addresses.

Private addresses in combination with Network Address Translation (NAT) have been instrumental in slowing the depletion of IPv4 address space. However, NAT is problematic for many applications, creates latency, and has limitations that severely impede peer-to-peer communications. IPv6 address space eliminates the need for private addresses; therefore, IPv6 enables new application protocols that do not require special processing by border devices at the edge of networks.

With the ever-increasing number of mobile devices, mobile providers have been leading the way with the transition to IPv6. The top two mobile providers in the United States report that over 90% of their traffic is over IPv6. Most top ISPs and content providers such as YouTube, Facebook, and Netflix have also made the transition. Many companies like Microsoft, Facebook, and LinkedIn are transitioning to IPv6-only internally. In 2018, broadband ISP Comcast reported a deployment of over 65% and British Sky Broadcasting over 86%.

IPv6 addresses are represented as a series of 16-bit hexadecimal fields (hextet) separated by colons (:) in the format: x:x:x:x:x:x:x:x. The preferred format includes all the hexadecimal values. There are two rules that can be used to reduce the representation of the IPv6 address:

  1. Omit leading zeros in each hextet
  2. Replace a single string of all-zero hextets with a double colon (::)

Leading zeros in each 16-bit hextet can be omitted. For example:

Preferred        2001:0db8:0000:1111:0000:0000:0000:0200
No leading 0s    2001:db8:0:1111:0:0:0:200

IPv6 addresses commonly contain successive hexadecimal fields of zeros. Two colons (::) may be used to compress successive hexadecimal fields of zeros at the beginning, middle, or end of an IPv6 address (the colons represent successive hexadecimal fields of zeros).

A double colon (::) can replace any single, contiguous string of one or more 16-bit hextets consisting of all zeros. For example, the following preferred IPv6 address can be compressed by omitting leading zeros and replacing the string of all-zero hextets with a double colon.

Preferred     2001:0db8:0000:1111:0000:0000:0000:0200
Compressed    2001:db8:0:1111::200

Two colons (::) can be used only once in an IPv6 address to represent the longest successive hexadecimal fields of zeros. Hexadecimal letters in IPv6 addresses are not case-sensitive according to RFC 5952. The table below lists compressed IPv6 address formats:

IPv6 address type   Preferred format                          Compressed format
Unicast             2001:0:0:0:db8:800:200c:417a              2001::db8:800:200c:417a
Multicast           ff01:0:0:0:0:0:0:101                      ff01::101
Loopback            0:0:0:0:0:0:0:1                           ::1
Unspecified         0:0:0:0:0:0:0:0                           ::

The unspecified address listed in the table above indicates the absence of an IPv6 address or when the IPv6 address does not need to be known. For example, a newly initialized device on an IPv6 network may use the unspecified address as the source address in its packets until it receives or creates its own IPv6 address.

An IPv6 address prefix, in the format ipv6-prefix/prefix-length, can be used to represent bit-wise contiguous blocks of the entire address space. The prefix length is a decimal value that indicates how many of the high-order contiguous bits of the address comprise the prefix (the network portion of the address). For example, 2001:db8:8086:6502::/32 is a valid IPv6 prefix.
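Python's ipaddress module applies both compression rules automatically, which is a handy way to check your work; it also normalizes prefixes:

```python
# Compress and expand IPv6 addresses, and normalize a prefix.
import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:1111:0000:0000:0000:0200")
print(addr.compressed)   # 2001:db8:0:1111::200
print(addr.exploded)     # 2001:0db8:0000:1111:0000:0000:0000:0200

prefix = ipaddress.ip_network("2001:db8:8086:6502::/32", strict=False)
print(prefix)            # 2001:db8::/32 (host bits masked off)
```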

IPv6 Unicast Addresses

An IPv6 unicast address is an identifier for a single interface, on a single device. A packet that is sent to a unicast address is delivered to the interface identified by that address. This topic covers the following types of IPv6 addresses:

  • Global unicast addresses
  • Link-local addresses
  • Unique local addresses
  • Multicast addresses

Note: There are other types of IPv6 addresses, but these four are the most significant to our discussion. (Multicast addresses are not unicast addresses; they are covered in their own section below.)

Global Unicast Addresses

A global unicast address (GUA) is an IPv6 address similar to a public IPv4 address. IPv6 global unicast addresses are globally unique and routable on the IPv6 internet. The Internet Corporation for Assigned Names and Numbers (ICANN), the operator of the Internet Assigned Numbers Authority (IANA), allocates IPv6 address blocks to the five Regional Internet Registries (RIRs). Currently, only GUAs that begin with the binary prefix 001, which corresponds to 2000::/3, are being assigned, as shown in the figure.

The figure shows four main textboxes with the number of bits above each one: 001 (3 bits), global routing prefix (45 bits), SLA (16 bits), and interface ID (64 bits). The word provider appears above the 001 and global routing prefix fields, site above the SLA field, and host above the interface ID field.

IPv6 GUA Format


The parts of the GUA in the figure above are as follows:

  • Global Routing Prefix - The global routing prefix is the prefix, or network, portion of the address that is assigned by the provider, such as an ISP, to a customer or site. It is common for ISPs to assign a /48 global routing prefix to their customers, which always includes the first 3 bits (001) shown in the figure. The global routing prefix will usually vary depending on the policies of the ISP.
    For example, the IPv6 address 2001:db8:acad::/48 has a global routing prefix indicating that the first 48 bits (3 hextets, or 2001:db8:acad) form the prefix (network) known to the ISP. The double colon (::) following the /48 prefix length means the rest of the address contains all 0s. The size of the global routing prefix determines the size of the subnet ID.
  • Subnet ID - The Subnet ID field is the area between the Global Routing Prefix and the Interface ID. Unlike IPv4, where you must borrow bits from the host portion to create subnets, IPv6 was designed with subnetting in mind. The Subnet ID is used by an organization to identify subnets within its site. The larger the Subnet ID, the more subnets available.
    For example, if the prefix has a /48 Global Routing Prefix, and using a typical /64 prefix length, the first four hextets are for the network portion of the address, with the fourth hextet indicating the Subnet ID. The remaining four hextets are for the Interface ID.
  • Interface ID - The IPv6 Interface ID is equivalent to the host portion of an IPv4 address. The term Interface ID is used because a single device may have multiple interfaces, each having one or more IPv6 addresses. It is strongly recommended that in most cases /64 subnets should be used, which creates a 64-bit interface ID. A 64-bit interface ID allows for 18 quintillion devices or hosts per subnet. A /64 subnet or prefix (Global Routing Prefix + Subnet ID) leaves 64 bits for the interface ID. This is recommended to allow devices enabled with Stateless Address Autoconfiguration (SLAAC) to create their own 64-bit interface ID. It also makes developing an IPv6 addressing plan simple and effective.
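A minimal sketch of this structure, using the ipaddress module: a /48 global routing prefix with a 16-bit subnet ID yields 65,536 /64 subnets, each with a 64-bit interface ID.

```python
# Enumerate /64 subnets of a /48 prefix (2^16 = 65,536 of them).
import ipaddress

site = ipaddress.ip_network("2001:db8:acad::/48")
subnets = site.subnets(new_prefix=64)
print(next(subnets))                  # 2001:db8:acad::/64
print(next(subnets))                  # 2001:db8:acad:1::/64
print(site.num_addresses // 2**64)    # 65536 possible /64 subnets
```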

The GUA is not a requirement; however, every IPv6-enabled network interface must have a link-local address (LLA).

Link-Local Addresses

An IPv6 Link-local Address (LLA) enables a device to communicate with other IPv6-enabled devices on the same link and only on that link (subnet). Packets with a source or destination LLA cannot be routed beyond the link from which the packet originated.

If an LLA is not configured manually on an interface, the device will automatically create its own without communicating with a DHCP server. IPv6-enabled hosts create an IPv6 LLA even if the device has not been assigned a global unicast IPv6 address. This allows IPv6-enabled devices to communicate with other IPv6-enabled devices on the same subnet. This includes communication with the default gateway (router).

The format for an IPv6 LLA is shown in the figure.


IPv6 LLA Format



IPv6 LLAs are in the fe80::/10 range. The /10 indicates that the first 10 bits are 1111 1110 10. The first hextet has the following range:

1111 1110 1000 0000 (fe80) to

1111 1110 1011 1111 (febf)

IPv6 devices must not forward packets that have source or destination LLAs to other links.
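The fe80::/10 range can be checked programmatically with the standard library:

```python
# Check whether an IPv6 address falls in the link-local fe80::/10 range.
import ipaddress

for a in ("fe80::1", "2001:db8::1"):
    print(a, "link-local:", ipaddress.ip_address(a).is_link_local)
```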

Unique Local Addresses

Unique local addresses (the fc00::/7 range, which spans fc00::/8 and fd00::/8) are not yet commonly implemented. However, unique local addresses may eventually be used to address devices that should not be accessible from the outside, such as internal servers and printers.

The IPv6 unique local addresses have some similarity to RFC 1918 private addresses for IPv4, but there are significant differences:

  • Unique local addresses are used for local addressing within a site or between a limited number of sites.
  • Unique local addresses can be used for devices that will never need to access another network.
  • Unique local addresses are not globally routed or translated to a global IPv6 address.

Note: Many sites also use the private nature of RFC 1918 addresses to attempt to secure or hide their network from potential security risks. However, this was never the intended use of these technologies, and the IETF has always recommended that sites take the proper security precautions on their internet-facing router.

The figure shows the structure of a unique local address.


IPv6 Unique Local Address Format


Multicast Addresses

There are no broadcast addresses in IPv6. IPv6 multicast addresses are used instead of broadcast addresses. IPv6 multicast addresses are similar to IPv4 multicast addresses. Recall that a multicast address is used to send a single packet to one or more destinations (multicast group). IPv6 multicast addresses have the prefix ff00::/8.

Note: Multicast addresses can only be destination addresses and not source addresses.

There are two types of IPv6 multicast addresses:

  • Well-known multicast addresses
  • Solicited node multicast addresses

Well-known IPv6 multicast addresses are assigned. Assigned multicast addresses are reserved multicast addresses for predefined groups of devices. An assigned multicast address is a single address used to reach a group of devices running a common protocol or service. Assigned multicast addresses are used in context with specific protocols such as DHCPv6.

These are two common IPv6 assigned multicast groups:

  • ff02::1 All-nodes multicast group – This is a multicast group that all IPv6-enabled devices join. A packet sent to this group is received and processed by all IPv6 interfaces on the link or network. This has the same effect as a broadcast address in IPv4.
  • ff02::2 All-routers multicast group – This is a multicast group that all IPv6 routers join. A router becomes a member of this group when it is enabled as an IPv6 router with the ipv6 unicast-routing global configuration command. A packet sent to this group is received and processed by all IPv6 routers on the link or network.

The format for an IPv6 multicast address is shown in the figure.


IPv6 Multicast Address Format


A solicited-node multicast address is similar to the all-nodes multicast address. The advantage of a solicited-node multicast address is that it is mapped to a special Ethernet multicast address. This allows the Ethernet NIC to filter the frame by examining the destination MAC address without sending it to the IPv6 process to see if the device is the intended target of the IPv6 packet.
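The mapping itself is simple: a solicited-node multicast address is formed from the well-known prefix ff02::1:ff00:0/104 plus the low-order 24 bits of the unicast address. A minimal sketch (the unicast address below is only an example):

```python
# Derive the solicited-node multicast address (ff02::1:ffxx:xxxx) for a unicast address.
import ipaddress

def solicited_node(unicast: str) -> ipaddress.IPv6Address:
    low24 = int(ipaddress.IPv6Address(unicast)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

print(solicited_node("2001:db8::aabb:ccdd"))   # ff02::1:ffbb:ccdd
```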

The format for an IPv6 solicited-node multicast address is shown in the figure.


IPv6 Solicited-Node Address Format


Routers and Routing

Recall that a router is a networking device that functions at the internet layer of the TCP/IP model, or the Layer 3 network layer of the OSI model. Routing involves forwarding packets between different networks. Routers use a routing table to route between networks. A router generally has two main functions: path determination, and packet routing or forwarding.

Path Determination

Path determination is the process through which a router uses its routing table to determine where to forward packets. Each router maintains its own local routing table, which contains a list of all the destinations that are known to the router and how to reach those destinations. When a router receives an incoming packet on one of its interfaces, it checks the destination IP address in the packet and looks up the best match between the destination address and the network addresses in its routing table. A matching entry indicates that the destination is directly connected to the router or that it can be reached by forwarding the packet to another router. That router becomes the next-hop router towards the final destination of the packet. If there is no matching entry, the router sends the packet to the default route. If there is no default route, the router drops the packet.

Packet Forwarding

After the router determines the correct path for a packet, it forwards the packet through a network interface towards the destination network.

A routing table might look like the following:

Network      Interface or Next Hop
10.9.2.0/24  directly connected: Gi0/0
10.9.1.0/24  directly connected: Gi0/1
10.5.3.0/24  directly connected: Se0/0/1
10.8.3.0/24  via 10.9.2.2 (next-hop router)

As you can see, each row in the routing table lists a destination network and the corresponding interface or next-hop address. For directly connected networks, it means the router has an interface that is part of that network. For example, assume that the router receives a packet on its Serial 0/0/1 interface with a destination address of 10.9.1.5. The router looks up the destination address in its routing table and decides to forward the packet out its interface GigabitEthernet 0/1 towards its destination. Following the same logic, assume the router receives a packet with a destination address in the 10.8.3.0 network on its GigabitEthernet0/1 interface. Doing a routing table lookup, the router decides to forward this packet out its GigabitEthernet0/0 interface that connects it to the next-hop router towards the final destination in the 10.8.3.0 network.
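The lookup logic described above amounts to a longest-prefix match. Here is a minimal sketch over the example routing table (interface names copied from the table above):

```python
# Longest-prefix-match lookup over the example routing table.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.9.2.0/24"): "directly connected: Gi0/0",
    ipaddress.ip_network("10.9.1.0/24"): "directly connected: Gi0/1",
    ipaddress.ip_network("10.5.3.0/24"): "directly connected: Se0/0/1",
    ipaddress.ip_network("10.8.3.0/24"): "via 10.9.2.2 (next-hop router)",
}

def lookup(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    if not matches:
        return "no match: use the default route if one exists, otherwise drop"
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return routing_table[best]

print(lookup("10.9.1.5"))    # directly connected: Gi0/1
print(lookup("10.8.3.77"))   # via 10.9.2.2 (next-hop router)
```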

A routing table may contain the following types of entries:

  • Directly connected networks - These network route entries are active router interfaces. Routers add a directly connected route when an interface is configured with an IP address and is activated. Each router interface is connected to a different network segment.
  • Static routes - These are routes that are manually configured by the network administrator. Static routes work relatively well for small networks that do not change in time, but in large dynamic networks they have many shortcomings.
  • Dynamic routes - These are routes learned automatically when a dynamic routing protocol is configured and a neighbor relationship to other routers is established. The reachability information in this case is dynamically updated when a change in the network occurs. Several routing protocols with different advantages and shortcomings have been developed through the years. Routing protocols are extensively used throughout networks deployed all over the world. Examples of routing protocols include OSPF, EIGRP, IS-IS, and BGP.
  • Default routes - Default routes are either manually entered or learned through a dynamic routing protocol. Default routes are used when no explicit path to a destination is found in the routing table. They act as a gateway of last resort so that packets are forwarded rather than simply dropped.

Network Devices

Ethernet Switches

Earlier in this module, you explored both switching and routing functions in the network layer. In this topic, you will explore in more detail the networking devices that perform the switching and routing functions.

A key concept in Ethernet switching is the broadcast domain. A broadcast domain is a logical division in which all devices in a network can reach each other by broadcast at the data link layer. Broadcast frames must be forwarded by the switch out all its ports except the port that received the broadcast frame. By default, every port on a switch belongs to the same broadcast domain. A Layer 3 device, such as a router, is needed to terminate the Layer 2 broadcast domain. As discussed previously, each VLAN corresponds to a unique broadcast domain.

In legacy shared Ethernet, a device connected to a port on a hub can either transmit or receive data at a given time. It cannot transmit and receive data at the same time. This is referred to as half-duplex transmission. Half-duplex communication is similar to communication with walkie-talkies in which only one person can talk at a time. In half-duplex environments, if two devices do transmit at the same time, there is a collision. The area of the network in which collisions can occur is called a collision domain.

One of the main features of Ethernet switches over legacy Ethernet hubs is that they provide full-duplex communications, which eliminates collision domains. Ethernet switches can simultaneously transmit and receive data. This mode is called full-duplex. Full-duplex communication is similar to the telephone communication, in which each person can talk and hear what the other person says simultaneously.

Switches have the following functions:

  • Operate at the network access layer of the TCP/IP model and the Layer 2 data link layer of the OSI model
  • Filter or flood frames based on entries in the MAC address table
  • Have a large number of high speed and full-duplex ports

The switch dynamically learns which devices and their MAC addresses are connected to which switch ports. It builds the MAC address table and filters or floods frames based on that table. A MAC address table on a switch looks similar to the one below:

VLAN    MAC Address       Type      Ports
1       001b.10a0.2500    Dynamic   Gi0/1
1       001b.10ae.7d00    Dynamic   Gi0/2
10      0050.7966.6803    Dynamic   Gi0/3
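The learn-then-filter-or-flood behavior described above can be sketched in a few lines of Python. The port names and MAC addresses below are the illustrative values from the table; the logic is a simplification of what switch hardware does:

```python
# A minimal sketch of how a switch learns MAC addresses and decides to
# filter or flood a frame. Ports and addresses are illustrative.
mac_table = {}  # (vlan, mac) -> port

def receive_frame(vlan, src_mac, dst_mac, in_port, all_ports):
    # Learn: associate the source MAC with the ingress port.
    mac_table[(vlan, src_mac)] = in_port

    # Forward: filter to a known port, or flood to all other ports in the VLAN.
    out_port = mac_table.get((vlan, dst_mac))
    if out_port is not None and out_port != in_port:
        return [out_port]                              # filter (forward out one port)
    return [p for p in all_ports if p != in_port]      # flood (unknown unicast or broadcast)

ports = ["Gi0/1", "Gi0/2", "Gi0/3"]
print(receive_frame(1, "001b.10a0.2500", "ffff.ffff.ffff", "Gi0/1", ports))  # broadcast -> flood
print(receive_frame(1, "001b.10ae.7d00", "001b.10a0.2500", "Gi0/2", ports))  # known -> ['Gi0/1']
```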

The switching mode determines whether the switch begins forwarding a frame as soon as it has read the destination details in the frame header, or waits until the entire frame has been received and checked for errors by calculating the cyclic redundancy check (CRC) value before forwarding it on the network. The switching mode applies to all frames switched or routed through the hardware and can be saved persistently through reboots and restarts.

The switch operates in either of the following switching modes:

  • Cut-Through Switching Mode - Switches operating in cut-through switching mode start forwarding the frame as soon as the switch has read the destination details in the frame header. A switch in cut-through mode forwards the data before it has completed receiving the entire frame. The switching speed in cut-through mode is faster than the switching speed in store-and-forward switching mode. Fragment-free switching is a modified form of cut-through switching in which the switch only starts forwarding the frame after it has received the first 64 bytes, enough to rule out a collision fragment. Fragment-free switching provides better error checking than cut-through, with practically no increase in latency.
  • Store-and-Forward Switching Mode - When store-and-forward switching is enabled, the switch checks each frame for errors before forwarding it. Each frame is stored until the entire frame has been received and checked. Because it waits to forward the frame until the entire frame has been received and checked, the switching speed in store-and-forward switching mode is slower than the switching speed in cut-through switching mode.

These are some characteristics of LAN switches:

  • High port density - Switches have a large number of ports, from 24 to 48 ports per switch in smaller devices, to hundreds of ports per switch chassis in larger modular switches. Switch ports usually operate at 100 Mbps, 1 Gbps, and 10 Gbps.
  • Large frame buffers - Switches have the ability to store received frames when ports towards servers or other devices in the network are congested.
  • Fast internal switching - Switches have very fast internal switching. They are able to switch user traffic from the ingress port to the egress port extremely fast. Different methods are used to interconnect the ports, including a fast internal bus, shared memory, or an integrated crossbar switch fabric, and the method used affects the overall performance of the switch.

Routers

While switches are used to connect devices on a LAN and exchange data frames, routers are needed to reach devices that are not on the same LAN. Routers use routing tables to route traffic between different networks. Routers are attached to different networks (or subnets) through their interfaces and have the ability to route the data traffic between them.

Routers have the following functions:

  • They operate at the internet layer of TCP/IP model and Layer 3 network layer of the OSI model.
  • They route packets between networks based on entries in the routing table.
  • They have support for a large variety of network ports, including various LAN and WAN media ports, which may be copper or fiber. The number of interfaces on routers is usually much smaller than on switches, but the variety of interfaces supported is greater. IP addresses are configured on the interfaces.

Recall that the functions of a router are path determination and packet forwarding. There are three packet-forwarding mechanisms supported by routers:

  • Process switching - When a packet arrives on an interface, it is forwarded to the control plane where the CPU matches the destination address with an entry in its routing table, and then determines the exit interface and forwards the packet. The router does this for every packet, even if the destination is the same for a stream of packets. This process-switching mechanism is very slow and is rarely implemented in modern networks. Contrast this with fast switching.
  • Fast switching - Fast switching uses a fast-switching cache to store next-hop information. When a packet arrives on an interface, it is forwarded to the control plane where the CPU searches for a match in the fast-switching cache. If it is not there, it is process-switched and forwarded to the exit interface. The flow information for the packet is also stored in the fast-switching cache. If another packet going to the same destination arrives on an interface, the next-hop information in the cache is re-used without CPU intervention.
  • Cisco Express Forwarding (CEF) - CEF is the most recent and default Cisco IOS packet-forwarding mechanism. Like fast switching, CEF builds a Forwarding Information Base (FIB), and an adjacency table. However, the table entries are not packet-triggered like fast switching but change-triggered, such as when something changes in the network topology. Therefore, when a network has converged, the FIB and adjacency tables contain all the information that a router would have to consider when forwarding a packet. Cisco Express Forwarding is the fastest forwarding mechanism and the default on Cisco routers and multilayer switches.

A common analogy used to describe these three different packet-forwarding mechanisms is as follows:

  • Process switching solves a problem by doing math long hand, even if it is the identical problem that was just solved.
  • Fast switching solves a problem by doing math long hand one time and remembering the answer for subsequent identical problems.
  • CEF solves every possible problem ahead of time in a spreadsheet.

Firewalls

A firewall is a hardware or software system that prevents unauthorized access into or out of a network. Typically, firewalls are used to prevent unauthorized internet users from accessing internal networks. Therefore, all data leaving or entering the protected internal network must pass through the firewall to reach its destination, and any unauthorized data is blocked. The role of the firewall in any network is critical. Additional details on how firewalls interact with applications are presented in the Application Development and Security module of the course.

Stateless Packet Filtering

The most basic (and the original) type of firewall is a stateless packet filtering firewall. You create static rules that permit or deny packets, based on packet header information. The firewall examines packets as they traverse the firewall, compares them to static rules, and permits or denies traffic accordingly. This stateless packet filtering can be based on several packet header fields, including the following:

  • Source and/or destination IP address
  • IP protocol ID
  • Source and/or destination TCP or UDP Port number
  • ICMP message type
  • Fragmentation flags
  • IP option settings

This type of firewall tends to work best for TCP applications that use the same static ports every time, or for filtering that is purely based on Layer 3 information such as source or destination IP address.

The static rules are fairly simple, but they do not work well for applications that dynamically use different sets of TCP and/or UDP port numbers. This is because they cannot track the state of TCP or UDP sessions as they transition from initial request, to fulfilling that request, and then the closing of the session. Also, these static rules are built using a restrictive approach. In other words, you write explicit rules to permit the specific traffic deemed acceptable, and deny everything else.
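As a rough illustration of this restrictive, header-based matching, the following Python sketch checks each packet against an ordered list of static rules and applies an implicit deny at the end. The rules, addresses, and ports are hypothetical:

```python
# A minimal sketch of stateless packet filtering: each packet is compared,
# in order, against static rules built from header fields, with an implicit
# "deny everything else" at the end. Rules and addresses are illustrative.
rules = [
    {"action": "permit", "proto": "tcp", "dst_ip": "203.0.113.10", "dst_port": 80},
    {"action": "permit", "proto": "udp", "dst_ip": "203.0.113.53", "dst_port": 53},
    {"action": "deny",   "proto": "tcp", "dst_port": 23},  # explicitly block Telnet
]

def filter_packet(packet: dict) -> str:
    for rule in rules:
        # A rule matches when every field it specifies equals the packet's field.
        if all(packet.get(field) == value
               for field, value in rule.items() if field != "action"):
            return rule["action"]
    return "deny"  # restrictive default: anything not explicitly permitted is dropped

print(filter_packet({"proto": "tcp", "dst_ip": "203.0.113.10", "dst_port": 80}))  # permit
print(filter_packet({"proto": "tcp", "dst_ip": "198.51.100.9", "dst_port": 22}))  # deny
```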

Static rules are transparent to end systems, which are not aware that they are communicating with a destination through a high-performance firewall. However, implementing static rules requires deep understanding of packet headers and application processes.

Stateful Packet Filtering

The stateful packet filtering firewall performs the same header inspection as the stateless packet filtering firewall but also keeps track of the connection state. This is a critical difference. To keep track of the state, these firewalls maintain a state table.

A typical simple configuration works as follows. Any sessions or traffic initiated by devices on trusted, inside networks are permitted through the firewall. This includes the TCP connection request for destination port 80. The firewall keeps track of this outbound request in its state table. The firewall understands that this is an initial request, and so an appropriate response from the server is allowed back in through the firewall. The firewall tracks the specific source port used and other key information about this request. This includes various IP and TCP flags and other header fields. This adds a certain amount of intelligence to the firewall.

It will allow only valid response packets that come from the specific server. The response packets must have all the appropriate source and destination IP addresses, ports, and flags set. The stateful packet filtering firewall understands standard TCP/IP packet flow including the coordinated change of information between inside and outside hosts that occurs during the life of the connection. The firewall allows untrusted outside servers to respond to inside host requests, but will not allow untrusted servers to initiate requests.
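A minimal sketch of this behavior, assuming a simple 5-tuple state table and ignoring the TCP flags, sequence numbers, and timeouts a real firewall also tracks, might look like this in Python:

```python
# A minimal sketch of stateful filtering: outbound connections from the inside
# are recorded in a state table, and only matching return traffic is allowed
# back in. The 5-tuples below are illustrative.
state_table = set()  # (proto, src_ip, src_port, dst_ip, dst_port) of inside-initiated flows

def outbound(proto, src_ip, src_port, dst_ip, dst_port) -> str:
    state_table.add((proto, src_ip, src_port, dst_ip, dst_port))
    return "permit"  # traffic initiated from the trusted inside network is allowed

def inbound(proto, src_ip, src_port, dst_ip, dst_port) -> str:
    # Permit only if this is the reverse of a flow an inside host initiated.
    reverse = (proto, dst_ip, dst_port, src_ip, src_port)
    return "permit" if reverse in state_table else "deny"

outbound("tcp", "10.1.1.10", 51512, "198.51.100.80", 80)        # inside host opens a web session
print(inbound("tcp", "198.51.100.80", 80, "10.1.1.10", 51512))  # permit: valid response
print(inbound("tcp", "198.51.100.80", 80, "10.1.1.10", 51999))  # deny: no matching state entry
```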

Of course, you can create exceptions to this basic policy. Your company might consider certain applications to be inappropriate during work hours. You might want to block inside users from initiating connections to those applications. However, with traditional stateful packet filtering, this capability is limited. These traditional firewalls are not fully application-aware.

Also, you might have a web server hosted on the premises of a corporation. Of course, you would like everyone in the world to access your web server and purchase your products or services. You can write rules that allow anyone on the untrusted internet to form appropriate inbound connections to the web server.

These stateful firewalls are more adept at handling Layer 3 and Layer 4 security than a stateless device. However, like stateless packet filters, they have little to no insight into what happens at OSI model Layers 5–7; they are “blind” to these layers.

Application Layer Packet Filtering

The most advanced type of firewall is the application layer firewall which can perform deep inspection of the packet all the way up to the OSI model’s Layer 7. This gives you more reliable and capable access control for OSI Layers 3–7, with simpler configuration.

This additional inspection capability can impact performance. Limited buffering space can hinder deep content analysis.

The application layer firewall can identify a File Transfer Protocol (FTP) session, just like a stateless or stateful firewall can. However, this firewall can look deeper into the application layer to see that this is specifically an FTP “put” operation to upload a file. You could have rules that deny all FTP uploads. Or you can configure a more granular rule, such as one that denies all FTP uploads except those from a specific source IP and only if the filename is “os.bin”.
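As a hedged illustration of that kind of rule, the following Python sketch inspects a hypothetical FTP control-channel command and applies the policy just described; a real application layer firewall would, of course, parse the full protocol exchange:

```python
# A minimal sketch of application layer inspection for FTP: the firewall reads
# the FTP control-channel command itself, so it can deny a "put" (STOR) of a
# specific filename while permitting other FTP operations. The trusted source
# address and the rule are illustrative.
def inspect_ftp_command(src_ip: str, command_line: str) -> str:
    parts = command_line.strip().split(maxsplit=1)
    command = parts[0].upper() if parts else ""
    argument = parts[1] if len(parts) > 1 else ""

    if command == "STOR":  # FTP upload
        # Deny all uploads except "os.bin" coming from one trusted source.
        if src_ip == "10.1.1.50" and argument == "os.bin":
            return "permit"
        return "deny"
    return "permit"  # other FTP commands pass through

print(inspect_ftp_command("10.1.1.50", "STOR os.bin"))      # permit
print(inspect_ftp_command("10.1.1.99", "STOR report.pdf"))  # deny
print(inspect_ftp_command("10.1.1.99", "RETR report.pdf"))  # permit (download)
```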

The deeper packet inspection capability of the application layer firewall enables it to verify adherence to standard HTTP protocol functionality. It can deny requests that do not conform to these standards, or otherwise meet criteria established by the security team.

Load Balancers

Load balancing improves the distribution of workloads across multiple computing resources, such as servers, clusters of servers, network links, and more. Server load balancing helps ensure the availability, scalability, and security of applications and services by distributing the work of a single server across multiple servers.

The load balancer decides which server should receive a client request such as a web page or a file. The load balancer selects a server that can successfully fulfill the client request most effectively, without overloading the selected server or the overall network.

At the device level, the load balancer provides the following features to support high network availability:

  • Device redundancy — Redundancy allows you to set up a peer load balancer device in the configuration so that if one load balancer becomes inoperative, the other load balancer can take its place immediately.
  • Scalability — Virtualization allows running the load balancers as independent virtual devices, each with its own resource allocation.
  • Security — Access control lists restrict access from certain clients or to certain network resources.

At the network service level, a load balancer provides the following advanced services:

  • High services availability — High-performance server load balancing allows distribution of client requests among physical servers and server farms. In addition, health monitoring occurs at the server and server farm levels through implicit and explicit health probes.
  • Scalability — Virtualization allows the use of advanced load-balancing algorithms (predictors) to distribute client requests among the virtual devices configured in the load balancer. Each virtual device includes multiple virtual servers. Each server forwards client requests to one of the server farms. Each server farm can contain multiple physical servers.
  • Services-level security — This allows establishment and maintenance of a Secure Sockets Layer (SSL) session between the load balancer and its peer, which provides secure data transactions between clients and servers.

Although the load balancer can distribute client requests among hundreds or even thousands of physical servers, it can also maintain server persistence. With some e-commerce applications, all client requests within a session are directed to the same physical server so that all the items in one shopping cart are contained on one server.

You can configure a virtual server to intercept web traffic to a website and allow multiple real servers (physical servers) to appear as a single server for load-balancing purposes.

A virtual server is bound to physical hardware and software resources that run on a real, physical server in a server farm. Virtual servers can be configured to provide client services or to act as backup servers.

Physical servers that all perform the same or similar functions are grouped into server farms. Servers in the same server farm often contain identical content (referred to as mirrored content) so that if one server becomes inoperative, another server can take over its functions immediately. Mirrored content also allows several servers to share the load during times of increased demand.

You can distribute incoming client requests among the servers in a server farm by defining load-balancing rules called predictors using IP address and port information.

When a client requests an application service, the load balancer performs server load balancing by deciding which server can successfully fulfill the client request in the shortest amount of time without overloading the server or server farm. Some sophisticated predictors take into account factors such as server load, response time, or availability, allowing you to adjust load balancing to match the behavior of a particular application.

You can configure the load balancer to allow the same client to maintain multiple simultaneous or subsequent TCP or IP connections with the same real server for the duration of a session. A session is defined as a series of interactions between a client and a server over some finite period of time (from several minutes to several hours). This server persistence feature is called stickiness.

Many network applications require that customer-specific information be stored persistently across multiple server requests. A common example is a shopping cart used on an e-commerce site. With server load balancing in use, it could potentially be a problem if a back-end server needs information generated at a different server during a previous request.

Depending on how you have configured server load balancing, the load balancer connects a client to an appropriate server after it has determined which load-balancing method to use. If the load balancer determines that a client is already stuck to a particular server, then the load balancer sends subsequent client requests to that server, regardless of the load-balancing criteria. If the load balancer determines that the client is not stuck to a particular server, it applies the normal load-balancing rules to the request.
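The interaction between a predictor and stickiness can be sketched in a few lines of Python. The example below combines a least-connections predictor with a simple client-IP stickiness table; the server names and client addresses are illustrative:

```python
# A minimal sketch of a load balancer combining a least-connections predictor
# with client stickiness: a client already "stuck" to a server keeps going to
# it; otherwise the predictor picks the least-loaded server.
connections = {"web1": 0, "web2": 0, "web3": 0}  # active connections per real server
sticky = {}                                      # client IP -> server it is stuck to

def pick_server(client_ip: str) -> str:
    if client_ip in sticky:                                   # stickiness overrides the predictor
        server = sticky[client_ip]
    else:
        server = min(connections, key=connections.get)        # least-connections predictor
        sticky[client_ip] = server
    connections[server] += 1
    return server

for client in ["198.51.100.1", "198.51.100.2", "198.51.100.1"]:
    print(client, "->", pick_server(client))
# The repeated client lands on the same server, even if another is now less loaded.
```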

The combination of the predictor and stickiness enables the application to have scalability, availability, and performance as well as persistence for transaction processing.

SSL configuration in a load balancer establishes and maintains an SSL session between the load balancer and its peer, enabling the load balancer to perform its load-balancing tasks on the SSL traffic. These SSL functions include server authentication, private-key and public-key generation, certificate management, and data packet encryption and decryption. Depending on how the load balancer is configured, it can also perform SSL offloading by terminating the SSL session from the client on the load balancer itself. This way, the resource-intensive SSL processing is offloaded to the load balancer instead of being handled by the backend servers.

Application services require monitoring to ensure availability and performance. Load balancers can be configured to track the health and performance of servers and server farms by creating health probes. Each health probe created can be associated with multiple real servers or server farms.

When the load balancer health monitoring is enabled, the load balancer periodically sends messages to the server to determine the server status. The load balancer verifies the server response to ensure that a client can access that server. The load balancer can use the server response to place the server in or out of service. In addition, the load balancer can use the health of servers in a server farm to make reliable load-balancing decisions.
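As a simple illustration, the following Python sketch implements an explicit HTTP health probe using only the standard library; the server addresses, the /health path, and the 2xx success criterion are assumptions, not a particular load balancer's probe format:

```python
# A minimal sketch of an explicit HTTP health probe like the ones a load
# balancer runs periodically against each real server.
import http.client

def probe(host: str, port: int = 80, path: str = "/health", timeout: float = 2.0) -> bool:
    """Return True if the server answers the probe with a 2xx status."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 300
    except (OSError, http.client.HTTPException):
        return False  # refused, timed out, or malformed: mark the server out of service

servers = ["10.1.2.11", "10.1.2.12"]      # illustrative real-server addresses
in_service = [s for s in servers if probe(s)]
print("Servers in service:", in_service)
```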

Additional details on how load balancers interact with applications, as well as load-balancing algorithms, are covered in the Application Development and Security module of the course.

Network Diagrams

It is very important to document your code, not only to make it easier to understand and follow by other people who will be reading and reviewing it, but also for yourself. Six months down the road, when you come back and look at your code, you might find it very difficult and time consuming to remember what exactly went through your mind when you wrote that amazing and aptly named f() function.

Network diagrams are part of the documentation that goes with a network deployment and play just as important a role as the documentation steps in programming code. Network diagrams typically provide a visual and intuitive representation of the network, depicting how all the devices are connected, in which buildings, floors, and closets they are located, and which interface connects to each device.

Imagine being dropped into a place you have never been to, without GPS, without a map, with the instruction to find the closest grocery store. This is what it feels like to manage a network of devices without a network diagram and network documentation. Instead of finding the grocery store, you have to figure out why a large number of devices are no longer connected to the network. You might be able to find the grocery store eventually, if you set off in the right direction. Similarly, you also might be able to figure out the network problem. But it would take you a lot less time if you had access to a map, a network diagram.

As networks get built and configured and go through their lifecycle of ordering the devices, receiving them on site, bringing them online and configuring them, maintaining and monitoring them, upgrading them, all the way to decommissioning them, and starting the process over again, network diagrams need to be updated and maintained to document all these changes.

There are generally two types of network diagrams:

  • Layer 2 physical connectivity diagrams
  • Layer 3 logical connectivity diagrams

Layer 2, or physical connectivity diagrams are network diagrams representing how devices are physically connected in the network. It is basically a visual representation of which network port on a network device connects to which network port on another network device. Protocols like Cisco Discovery Protocol (CDP) or Link Layer Discovery Protocol (LLDP) can be used to display the physical network port connectivity between two or more devices. This network diagram is useful especially when troubleshooting direct network connectivity issues.

Layer 3, or logical connectivity diagrams are network diagrams that display the IP connectivity between devices on the network. Switches and Layer 2 devices are usually not even displayed in these diagrams as they do not perform any Layer 3 functions and from a routing perspective, they are the equivalent of a physical wire. This type of network diagram is useful when troubleshooting routing problems. Redundant connections and routing protocols are usually present in networks that require high availability.

An example of a simplified Layer 2 network diagram is displayed in the figure. Notice that there is no Layer 3 information documented, such as IP addresses or routing protocols.


Looking at this diagram you can get a general idea of how the clients connect to the network and how the network devices connect to each other so that end to end connectivity between all clients is accomplished. Router RTR1 has two active interfaces in this topology: FastEthernet 0/0 and Serial 0/0. Router RTR2 has three active interfaces: FastEthernet0/0, FastEthernet 1/0, and Serial 0/0.

Most Cisco routers have network slots that support modular network interfaces. This means that the routers are a future proof investment in the sense that when upgrading the capacity of the network, for example from 100 Mbps FastEthernet to 1 Gbps GigabitEthernet and further to 10 Gbps TenGigabitEthernet, you can simply swap between modular interfaces and still use the same router. Modular Ethernet cards for Cisco routers usually have multiple Ethernet ports on each card.

In order to uniquely identify the modular cards and the ports on each one of these cards, a naming convention is used. In the figure above, FastEthernet 0/0 specifies that this FastEthernet modular card is inserted in the first network module on the router (module 0, represented by the first 0 in 0/0) and is the first port on that card (port 0, represented by the second 0 in 0/0). Following this logic, FastEthernet 0/1 references the second FastEthernet port on the first FastEthernet module, and FastEthernet 1/2 references the third FastEthernet port on the second FastEthernet module. Cisco routers support a large number of network modules implementing different technologies, including the following: FastEthernet (rarely used these days), GigabitEthernet, 10GigabitEthernet, 100GigabitEthernet, point-to-point Serial, and more.
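As a small illustration of this naming convention, the following Python sketch parses an interface name into its module and port numbers; the regular expression is a simplification that ignores sub-interfaces and three-part slot/module/port names:

```python
# A minimal sketch that parses the module/port naming convention described
# above, e.g. "FastEthernet 0/1" -> module 0, port 1.
import re

def parse_interface(name: str) -> dict:
    match = re.fullmatch(r"([A-Za-z]+)\s*(\d+)/(\d+)", name.strip())
    if not match:
        raise ValueError(f"Unrecognized interface name: {name}")
    int_type, module, port = match.groups()
    return {"type": int_type, "module": int(module), "port": int(port)}

print(parse_interface("FastEthernet 0/1"))  # second port on the first module
print(parse_interface("FastEthernet 1/2"))  # third port on the second module
```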

Going back to the network diagram above, we see there are two routers, RTR1 and RTR2, connected through a serial network connection. Interface FastEthernet 0/0 on RTR1 connects to a switch that provides network connectivity to a server and 20 hosts in the Administration organization. Interface FastEthernet 0/0 on router RTR2 connects to 4 switches that provide network connectivity to 64 hosts in the Instructor group. Interface FastEthernet 1/0 on router RTR2 connects to 20 switches that provide network connectivity to 460 hosts in the Student group.

Networking Protocols

The internet was built on various standards. You should understand the standard network protocols so you can communicate and troubleshoot effectively.

Each protocol meets a need and uses standard port values. You should know when to use a particular protocol and know the standard port for connections. Many developers have been puzzled by a mismatched port value; therefore, checking these values can be a first line of attack when troubleshooting.
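A quick way to perform that first check is to test whether the expected TCP port even accepts connections. The following Python sketch uses the standard socket module; the hostname device.example.com and the ports tested are placeholders for whatever endpoint you are troubleshooting:

```python
# A minimal sketch of the "check the port first" troubleshooting step: try a
# TCP connection to the well-known port and report whether it is reachable.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in [("device.example.com", 22), ("device.example.com", 830)]:
    state = "open" if port_open(host, port) else "closed, filtered, or unreachable"
    print(f"{host}:{port} is {state}")
```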

Telnet and Secure Shell (SSH)

Telnet and SSH are both used to connect to a remote computer and log in to that system using credentials. Telnet is less prevalent today because SSH uses encryption to protect data going over the network connection. Telnet should only be used in non-production environments.

SSH connections can use a public key for authentication, rather than sending a username and password over the network. This authentication method means that SSH is a good choice to connect to network devices, to cloud devices, and to containers.

By default, SSH uses port 22 and Telnet uses port 23. Telnet can use port 992 when creating a session over Transport Layer Security (TLS) or SSL.

HTTP and HTTPS

HTTP and its secure version, HTTPS, are both protocols recognized by web browsers and are used to connect to web sites. HTTPS uses TLS or SSL to make a secure connection. You can see the http: or https: in the address bar on your browser. Many browsers also recognize ssh: and ftp: protocols and allow you to connect to remote servers in that way as well.

NETCONF and RESTCONF

Later in this course, you will use NETCONF and RESTCONF to manage a Cisco router. NETCONF uses port 830. RESTCONF does not have a reserved port value. You may see various implementations use different values; commonly the port value is in the 8000s.

To support multiple network services at the same time, each protocol is assigned a default port, and using these standard values helps avoid conflicts. TCP and UDP traffic requires that a destination port be specified for each packet. The source port is automatically generated by the sending device. The following table shows some common, well-known port values for protocols used in this course. System port numbers are in the range 0 to 1023, though you may see others in use for different reasons.

Note: For a more complete list of ports, search the internet for TCP and UDP port numbers.

Port value          Protocol
22                  SSH
23, 992             Telnet
53                  DNS
80                  HTTP
443                 HTTPS (HTTP over TLS or SSL)
830                 NETCONF
8008, 8080, 8888    RESTCONF


DHCP

As you have seen previously in this module, IP addresses are needed by all devices connected to a network in order for them to be able to communicate. Assigning these IP addresses manually and one at a time for each device on the network is cumbersome and time consuming. DHCP was designed to dynamically configure devices with IP addressing information. DHCP works within a client/server model, where designated DHCP servers allocate IP addresses and deliver configuration information to devices that are configured to dynamically request addressing information.

In addition to the IP address for the device itself, a DHCP server can also provide additional information, like the IP address of the DNS server, default router, and other configuration parameters. For example, Cisco wireless access points use option 43 in DHCP requests to obtain the IP address of the Wireless LAN Controller that they need to connect to for management purposes.

Some of the benefits of using DHCP instead of manual configurations are:

  • Reduced client configuration tasks and costs - By not having to physically walk up to the device and manually configure the network settings, large cost savings are possible. This especially applies in the case of ISPs that can remotely and dynamically assign IP addresses to the cable or Digital Subscriber Line (DSL) modems of their clients without having to dispatch a person each time a network configuration change is necessary.
  • Centralized management - A DHCP server typically maintains the configuration settings for several subnets. Therefore, an administrator only needs to configure and update a single, central server.

DHCP allocates IP addresses in three ways:

  • Automatic allocation - The DHCP server assigns a permanent IP address to the client.
  • Dynamic allocation - DHCP assigns an IP address to a client for a limited period of time (lease time).
  • Manual allocation - The network administrator assigns an IP address to a client and DHCP is used to relay the address to the client.

DHCP defines a process by which the DHCP server knows the IP subnet in which the client resides, and it can assign an IP address from a pool of available addresses in that subnet. The rest of the network configuration parameters that a DHCP server supplies, like the default router IP address or the IP address of the DNS server, are usually the same for the whole subnet so the DHCP server can have these configurations per subnet rather than per host.

The specifications for the IPv4 DHCP protocol are described in RFC 2131 - Dynamic Host Configuration Protocol and RFC 2132 - DHCP Options and BOOTP Vendor Extensions. DHCP for IPv6 was initially described in RFC 3315 - Dynamic Host Configuration Protocol for IPv6 (DHCPv6) in 2003, but this has been updated by several subsequent RFCs. RFC 3633 - IPv6 Prefix Options for Dynamic Host Configuration Protocol (DHCP) version 6 added a DHCPv6 mechanism for prefix delegation, and RFC 3736 - Stateless Dynamic Host Configuration Protocol (DHCP) Service for IPv6 defined stateless DHCPv6. The main difference between DHCP for IPv4 and DHCP for IPv6 is that DHCP for IPv6 does not include the default gateway address. The default gateway address can only be obtained automatically in IPv6 from the Router Advertisement message.

DHCP Relay

In cases in which the DHCP client and server are located in different subnets, a DHCP relay agent can be used. A relay agent is any host that forwards DHCP packets between clients and servers. Relay agent forwarding is different from the normal forwarding that an IP router performs, where IP packets are routed between networks transparently. Relay agents receive inbound DHCP messages and then generate new DHCP messages on another interface, as shown in the figure.

The figure shows host A on the left, a DHCP relay agent in the middle, and a DHCP server on the right. The relay agent receives the broadcast DHCPDISCOVER and DHCPREQUEST messages from the host and forwards them to the server, and it relays the server's unicast DHCPOFFER and DHCPACK messages back to the host.

DHCP Relay


Clients send DHCP messages to servers on UDP destination port 67. Servers send DHCP messages to clients on UDP destination port 68.

DHCP operations includes four messages between the client and the server:

  • DHCPDISCOVER - Server discovery
  • DHCPOFFER - IP lease offer
  • DHCPREQUEST - IP lease request
  • DHCPACK - IP lease acknowledgment

The figure shows how these messages are sent between the client and server.

DHCP Operations


In the figure, the client broadcasts a DHCPDISCOVER message looking for a DHCP server. The server responds with a unicast DHCPOFFER. If there is more than one DHCP server on the local network, the client may receive multiple DHCPOFFER messages. Therefore, it must choose between them, and broadcast a DHCPREQUEST message that identifies the explicit server and lease offer that the client is accepting. The message is sent as a broadcast so that any other DHCP servers on the local network will know the client has requested configuration from another DHCP server.

A client may also choose to request an address that it had previously been allocated by the server. Assuming that the IPv4 address requested by the client, or offered by the server, is still available, the server sends a unicast DHCP acknowledgment (DHCPACK) message that acknowledges to the client that the lease has been finalized. If the offer is no longer valid, then the selected server responds with a DHCP negative acknowledgment (DHCPNAK) message. If a DHCPNAK message is returned, then the selection process must begin again with a new DHCPDISCOVER message. After the client has the lease, it must be renewed prior to the lease expiration through another DHCPREQUEST message.

The DHCP server ensures that all IP addresses are unique (the same IP address cannot be assigned to two different network devices simultaneously). Most ISPs use DHCP to allocate addresses to their customers.

DHCPv6 has a set of messages that is similar to those for DHCPv4. The DHCPv6 messages are SOLICIT, ADVERTISE, INFORMATION REQUEST, and REPLY.

DNS

In data networks, devices are labeled with numeric IP addresses to send and receive data over networks. Domain names were created to convert the numeric address into a simple, recognizable name.

On the internet, fully qualified domain names (FQDNs), such as www.cisco.com, are much easier for people to remember than 198.133.219.25, which is the actual numeric address for this server. If Cisco decides to change the numeric address of www.cisco.com, it is transparent to the user because the domain name remains the same. The new address is simply linked to the existing domain name and connectivity is maintained.

Note: You will not be able to access www.cisco.com by simply entering that IP address 198.133.219.25 in your web browser.

The DNS protocol defines an automated service that matches domain names to IP addresses. It includes the format for queries, responses, and data. DNS uses a single format called a DNS message. This message format is used for all types of client queries and server responses, error messages, and the transfer of resource record information between servers.

DNS Message Format

The DNS server stores different types of resource records that are used to resolve names. These records contain the name, address, and type of record. Some of these record types are as follows:

  • A – An end device IPv4 address
  • NS – An authoritative name server
  • AAAA – An end device IPv6 address (pronounced quad-A)
  • MX – A mail exchange record

When a client makes a query to its configured DNS server, the DNS server first looks at its own records to resolve the name. If it is unable to resolve the name by using its stored records, it contacts other servers to resolve the name. After a match is found and returned to the original requesting server, the server temporarily stores the numbered address in the event that the same name is requested again.
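From a client's point of view, this whole process is hidden behind a single resolver call. The following Python sketch uses the standard library's socket.getaddrinfo(), which queries the DNS servers configured on the local system:

```python
# A minimal sketch of a DNS lookup from a client's point of view, using the
# resolver configured on the local system (standard library only).
import socket

def resolve(name: str):
    """Return the unique addresses the configured DNS servers give for a name."""
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})

print(resolve("www.cisco.com"))  # the addresses returned may differ by region and over time
```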

The DNS client service on Windows PCs also stores previously resolved names in memory. The ipconfig /displaydns command displays all of the cached DNS entries.

As shown in the table, DNS uses the same message format between servers. It consists of a question, answer, authority, and additional information for all types of client queries and server responses, error messages, and transfer of resource record information.

DNS Message Section    Description
Question               The question for the name server
Answer                 Resource Records answering the question
Authority              Resource Records pointing toward an authority
Additional             Resource Records holding additional information

DNS Hierarchy

DNS uses a hierarchical system based on domain names to create a database to provide name resolution, as shown in the figure.


The naming structure is broken down into small, manageable zones. Each DNS server maintains a specific database file and is only responsible for managing name-to-IP mappings for that small portion of the entire DNS structure. When a DNS server receives a request for a name translation that is not within its DNS zone, the DNS server forwards the request to another DNS server within the proper zone for translation. DNS is scalable because hostname resolution is spread across multiple servers.

The different top-level domains represent either the type of organization or the country of origin. Examples of top-level domains are the following:

  • .com – a business or industry
  • .org – a non-profit organization
  • .au – Australia
  • .co – Colombia

Note: For more examples, search the internet for a list of all the top-level domains.

SNMP

SNMP was developed to allow administrators to manage devices such as servers, workstations, routers, switches, and security appliances. It enables network administrators to monitor and manage network performance, find and solve network problems, and plan for network growth. SNMP is an application layer protocol that provides a message format for communication between managers and agents.

There are several versions of SNMP that have been developed through the years:

  • SNMP Version 1 (SNMPv1)
  • SNMP Version 2c (SNMPv2c)
  • SNMP Version 3 (SNMPv3)

SNMP version 1 is rarely used anymore, but versions 2c and 3 are still extensively used. In comparison to previous versions, SNMPv2c includes additional protocol operations and 64-bit performance monitoring support. SNMPv3 focused primarily on improving the security of the protocol. SNMPv3 includes authentication, encryption, and message integrity.

The SNMP system consists of three elements:

  • SNMP manager: network management system (NMS)
  • SNMP agents (managed device)
  • Management Information Base (MIB)

The figure shows the relationship(s) among SNMP manager, agents, and managed devices.

The figure shows an NMS (the SNMP manager) communicating with three SNMP agents. Each agent maintains a local management database on its managed device, and a two-way communication path runs between the NMS and each agent.

SNMP Components


To configure SNMP on a networking device, it is first necessary to define the relationship between the SNMP manager and the device (the agent).

The SNMP manager is part of a network management system (NMS). The SNMP manager runs SNMP management software. As shown in the figure, the SNMP manager can collect information from an SNMP agent by using the “get” action. It can also change configurations on an agent by using the “set” action. In addition, SNMP agents can forward information directly to the SNMP manager by using “traps”.

SNMP get-requests



SNMP Operation

An SNMP agent running on a device collects and stores information about the device and its operation. This information is stored locally by the agent in the MIB. The SNMP manager then uses the SNMP agent to access information within the MIB and make changes to the device configuration.

There are two primary SNMP manager requests, get and set. A get request is used by the SNMP manager to query the device for data. A set request is used by the SNMP manager to change configuration variables in the agent device. A set request can also initiate actions within a device. For example, a set can cause a router to reboot, send a configuration file, or receive a configuration file.

SNMP Polling

The NMS can be configured to periodically have the SNMP managers poll the SNMP agents that are residing on managed devices using the get request. The SNMP manager queries the device for data. Using this process, a network management application can collect information to monitor traffic loads and to verify the device configurations of managed devices. The information can be displayed via a GUI on the NMS. Averages, minimums, or maximums can be calculated. The data can be graphed, or thresholds can be established to trigger a notification process when the thresholds are exceeded. For example, an NMS can monitor CPU utilization of a Cisco router. The SNMP manager samples the value periodically and presents this information in a graph for the network administrator to use in creating a baseline, creating a report, or viewing real time information.

SNMP Traps

Periodic SNMP polling does have disadvantages. First, there is a delay between the time that an event occurs and the time that it is noticed (via polling) by the NMS. Second, there is a trade-off between polling frequency and bandwidth usage.

To mitigate these disadvantages, it is possible for SNMP agents to generate and send traps to inform the NMS immediately of certain events. Traps are unsolicited messages alerting the SNMP manager to a condition or event on the network. Examples of trap conditions include, but are not limited to, improper user authentication, restarts, link status (up or down), MAC address tracking, closing of a TCP connection, loss of connection to a neighbor, or other significant events. Trap notifications reduce network and agent resources by eliminating the need for some of the SNMP polling requests.

SNMP Community Strings

For SNMP to operate, the NMS must have access to the MIB. To ensure that access requests are valid, some form of authentication must be in place.

SNMPv1 and SNMPv2c use community strings that control access to the MIB. Community strings are plaintext passwords. SNMP community strings authenticate access to MIB objects.

There are two types of community strings:

  • Read-only (ro) - This type provides access to the MIB variables, but does not allow these variables to be changed. Because security is minimal in version 2c, many organizations use SNMPv2c in read-only mode.
  • Read-write (rw) - This type provides read and write access to all objects in the MIB.

To get or set MIB variables, the user must specify the appropriate community string for read or write access.

Management Information Base (MIB)

The agent captures data from MIBs, which are data structures that describe SNMP network elements as a list of data objects. Think of the MIB as a "map" of all the components of a device that are being managed by SNMP. To monitor devices, the SNMP manager must compile the MIB file for each equipment type in the network. Given an appropriate MIB, the agent and SNMP manager can use a relatively small number of commands to exchange a wide range of information with one another.

The MIB is organized in a tree-like structure with unique variables represented as terminal leaves. An Object IDentifier (OID) is a long numeric tag. It is used to distinguish each variable uniquely in the MIB and in the SNMP messages. Variables that measure things such as CPU temperature, inbound packets on an interface, fan speed, and other metrics, all have associated OID values. The MIB associates each OID with a human-readable label and other parameters, serving as a dictionary or codebook. To obtain a metric (such as the state of an alarm, the host name, or the device uptime), the SNMP manager puts together a get packet that includes the OID for each object of interest. The SNMP agent on the device receives the request and looks up each OID in its MIB. If the OID is found, a response packet is assembled and sent with the current value of the object included. If the OID is not found, an error response is sent that identifies the unmanaged object.
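As a hedged example of assembling such a get request, the sketch below uses the third-party pysnmp library (not part of the standard library) to read the sysUpTime object; the device address and community string are illustrative, and the exact API should be verified against the documentation of the pysnmp version you install:

```python
# A hedged sketch of an SNMP get request using the third-party pysnmp library.
# The agent address, community string, and OID (sysUpTime) are illustrative.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),                    # SNMPv2c read-only community
           UdpTransportTarget(("10.0.0.1", 161)),                 # managed device (agent)
           ContextData(),
           ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")))       # sysUpTime OID
)

if error_indication or error_status:
    print("SNMP get failed:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")
```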

SNMP traps are used to generate alarms and events that are happening on the device. Traps contain:

  • OIDs that identify each event and match it with the entity that generated the event
  • Severity of the alarm (critical, major, minor, informational or event)
  • A date and time stamp

SNMP Communities

SNMP community names are used to group SNMP trap destinations. When community names are assigned to SNMP traps, the request from the SNMP manager is considered valid if the community name matches one configured on the managed device. If so, all agent-managed MIB variables are made accessible.

If the community name does not match, however, SNMP drops the request. New devices are often preconfigured with SNMP enabled and provided with basic communities named public for read-only access and private for read/write access to the system. From a security perspective, it is very important to rename these communities, remove them completely (and disable SNMP if it is not used), or apply an access list to the community, limiting access to only the IP address or hostname of the SNMP manager station.

SNMP Messages

SNMP uses the following messages to communicate between the manager and the agent:

  • Get
  • GetNext
  • GetResponse
  • Set
  • Trap

The Get and GetNext messages are used when the manager requests information for a specific variable. When the agent receives a Get or GetNext message it will issue a GetResponse message back to the manager. The response message will contain either the information requested or an error message indicating why the request cannot be processed.

A Set message is used by the manager to request that a change be made to the value of a specific variable. Similar to the Get and GetNext requests, the agent will respond with a GetResponse message indicating either that the change has been made successfully or an error message indicating why the requested change cannot be implemented.

The Trap message is used by the agent to inform the manager when important events take place. An SNMP Trap is a change of state message. This means it could be one of the following:

  • an alarm
  • a clear
  • a status message

Several Requests for Comments (RFCs) have been published throughout the years concerning SNMP. Some notable ones are: RFC 1155 - Structure and Identification of Management Information for TCP/IP-based Internets, RFC 1213 - Management Information Base for Network Management of TCP/IP-based Internets: MIB-II, and RFC 2578 - Structure of Management Information Version 2 (SMIv2).

NTP

Accurate time, and making sure all devices in the network have a uniform and correct view of it, has always been a critical component of ensuring smooth operation of the infrastructure. IT infrastructure, including the network, compute, and storage, has become critical for the success of nearly all businesses today. Every second of downtime or unavailability of services over the network can be extremely expensive. In cases where these issues extend over hours or days, it can mean bankruptcy and going out of business. Service Level Agreements (SLAs) are contracts between parties that consume infrastructure services and parties that provide these services. Both parties depend on accurate and consistent timing from a networking perspective. Time is fundamental to measuring SLAs and enforcing contracts.

The system clock on each device is the heart of the time service. The system clock runs from the second the operating system starts and keeps track of the date and time. The system clock can be set to update from different sources and can be used to distribute time to other systems through various mechanisms. Most network devices contain a battery-powered clock to initialize the system clock. The battery-powered clock tracks date and time across restarts and power outages. The system clock keeps track of time based on Coordinated Universal Time (UTC), equivalent to Greenwich Mean Time (GMT). Information about the local time zone and regional daylight saving time can be configured to enable display of the local time and date wherever the server is located.

NTP Overview

Network Time Protocol (NTP) enables a device to update its clock from a trusted network time source, compensating for local clock drift. A device receiving authoritative time can be configured to serve time to other machines, enabling groups of devices to be closely synchronized.

NTP uses UDP port 123 as source and destination. RFC 5905 contains the definition of NTP Version 4, which is the latest version of the protocol. NTP is used to distribute and synchronize time among distributed time servers and clients. A group of devices on a network that are configured to distribute NTP and the devices that are updating their local time from these time servers form a synchronization subnet. Multiple NTP time masters (primary servers) can exist in the same synchronization subnet at the same time. NTP does not specify any election mechanism between multiple NTP servers in the same synchronization subnet. All available NTP servers can be used for time synchronization at the same time.

An authoritative time source is usually a radio clock, or an atomic clock attached to a time server. Authoritative server in NTP lingo just means a very accurate time source. It is the role of NTP to distribute the time across the network. NTP clients poll groups of time servers at intervals managed dynamically to reflect changing network conditions (primarily latency) and the judged accuracy of each time server consulted (determined by comparison with local clock time). Only one NTP transaction per minute is needed to synchronize the time between two machines.

NTP uses the concept of strata (layers) to describe how far away a host is from an authoritative time source. The most authoritative sources are in stratum 1. These are generally servers connected directly to a very accurate time source, like a rubidium atomic clock. A stratum 2 time server receives time from a stratum 1 server, and so on. When a device is configured to communicate with multiple NTP servers, it will automatically pick the lowest stratum number device as its time source. This strategy builds a self-organizing tree of NTP speakers. NTP performs well over packet-switched networks like the internet, because it makes correct estimates of the following three variables in the relationship between a client and a time server:

  • Network delay between the server and the client.
  • Dispersion of time data exchanges. Dispersion represents a measure of the maximum clock error between the server and the client.
  • Clock offset, which is the correction applied to a client's clock to synchronize it to the current time.

It is not uncommon to see NTP clock synchronization at the 10 millisecond level over long distance WANs with devices as far apart as 2000 miles, and at the 1 millisecond level for LANs.
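As a hedged illustration, the sketch below uses the third-party ntplib package to query a public NTP pool server and print the stratum, delay, and offset values discussed above; the server name is an example, and the attribute names should be checked against the ntplib documentation:

```python
# A hedged sketch that queries an NTP server with the third-party ntplib
# package and reports the stratum, round-trip delay, and clock offset.
import ntplib
from datetime import datetime, timezone

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)   # example public NTP pool server

print("Stratum:", response.stratum)
print("Round-trip delay (s):", response.delay)
print("Clock offset (s):", response.offset)            # correction to apply to the local clock
print("Server time (UTC):", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))
```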

NTP avoids synchronizing with upstream servers whose time is not accurate. It does this in two ways:

  • NTP never synchronizes with an NTP server that is not itself synchronized.
  • NTP compares time reported by several NTP servers, and will not synchronize to a server whose time is an outlier, even if its stratum is lower than the other servers' stratum.

Communications between devices running NTP, also known as associations, are usually statically configured. Each device is given the IP address or hostname of all NTP servers it should associate with, and it connects with them directly to solicit time updates. In a LAN environment, NTP can be configured to use IP broadcast messages instead. Configuration complexity is reduced in this case because each device can simply be configured to send or receive broadcast messages. The downside is that the accuracy of timekeeping is slightly reduced because the information flow is one-way only.

The time kept on a device is a critical resource. It is strongly recommended to use the security features that come with NTP to avoid the accidental or malicious configuration of incorrect time. The two security features usually used are:

  • An access list-based restriction scheme in which NTP traffic is allowed in the network only from specific sources.
  • An encrypted authentication mechanism in which both the clients and the servers authenticate each other securely.

Clients usually synchronize with the lowest stratum server they can access. But NTP incorporates safeguards as well: it prefers to have access to at least three lower-stratum time sources (giving it a quorum), because this helps it determine if any single source is incorrect. When all servers are well synchronized, NTP chooses the best server based on a range of variables: lowest stratum, network distance (latency), and precision claimed. This suggests that while one should aim to provide each client with three or more sources of lower stratum time, it is not necessary that all these sources be of highest quality. For example, good backup service can be provided by a same-stratum peer that receives time from different lower stratum sources.

In order to determine if a server is reliable, the client applies many sanity checks:

  • Timeouts to prevent trap transmissions if the monitoring program does not renew this information after a lengthy interval.
  • Checks on authentication, range bounds, and to avoid use of very old data.
  • Checks warn that the server's oscillator (local clock tick-source) has gone too long without update from a reference source.
  • Recent additions to avoid instabilities when a reference source changes rapidly due to severe network congestion.

If any one of these checks fails, the device declares the source insane.

NTP Association Modes

NTP servers can associate in several modes, including:

  • Client/Server
  • Symmetric Active/Passive
  • Broadcast

Client/Server Mode

Client/server mode is most common. In this mode, a client or dependent server can synch with a group member, but not the reverse, protecting against protocol attacks or malfunctions.

Client-to-server requests are made via asynchronous remote procedure calls, where the client sends a request and expects a reply at some future time (unspecified). This is sometimes described as "polling". On the client side, client/server mode can be turned on with a single command or config-file change, followed by a restart of the NTP service on the host. No additional configuration is required on the NTP server.

In this mode, a client requests time from one or more servers and processes replies as received. The server changes addresses and ports, overwrites certain message fields, recalculates the checksum, and returns the message immediately. Information included in the NTP message lets the client determine the skew between server and local time, enabling clock adjustment. The message also includes information to calculate the expected timekeeping accuracy and reliability, as well as help the client select the best server.

Servers that provide time to many clients normally operate as a three-server cluster, each deriving time from three or more stratum 1 or 2 servers as well as all other members of the group. This protects against situations where one or more servers fail or become inaccurate. NTP algorithms are designed to identify and ignore wrongly-functioning time sources and are even resilient against attacks where NTP servers are subverted deliberately and made to send incorrect time to clients. As backup, local hosts can be equipped with external clocks and made to serve time temporarily, in case normal time sources, or communications paths used to reach them, are disrupted.

Symmetric Active/Passive Mode

In this mode, a group of low stratum peers work as backups for one another. Each peer derives time from one or more primary reference sources or from reliable secondary servers. As an example, a reference source could be a radio clock, receiving a corrected time signal from an atomic clock. Should a peer lose all reference sources or stop working, the other peers automatically reconfigure to support one another. This is called a 'push-pull' operation in some contexts: peers either pull or push time, depending on self-configuration.

Symmetric/active mode is usually configured by declaring a peer in the configuration file, telling the peer that one wishes to obtain time from it and to provide time back if necessary. This mode works well in configurations where redundant time servers are interconnected via diverse network paths, which is a good description of how most stratum 1 and stratum 2 servers on the internet are set up today.

Symmetric modes are most often used to interconnect two or more servers that work as a mutually redundant group. Group members arrange their synchronization paths to minimize network jitter and propagation delay.

Configuring a peer in symmetric/active mode is done with the peer command, and then providing the DNS name or address of the other peer. The other peer may also be configured in symmetric active mode in this way.

If not, a symmetric passive association is activated automatically when a symmetric active message is received. Because intruders can impersonate a peer and inject false time values, symmetric mode should always be authenticated.
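
A minimal Cisco IOS sketch of an authenticated symmetric active association between two hypothetical peers, RouterA at 192.0.2.1 and RouterB at 192.0.2.2, might look like this (the key number and password are placeholders):

RouterA(config)# ntp authentication-key 1 md5 NTPsecret
RouterA(config)# ntp authenticate
RouterA(config)# ntp trusted-key 1
RouterA(config)# ntp peer 192.0.2.2 key 1

RouterB would mirror this configuration, pointing its ntp peer command at 192.0.2.1. If it did not, it would still form a symmetric passive association when RouterA's symmetric active messages arrive, which is why authentication matters in this mode.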

Broadcast and/or Multicast Mode

When only modest requirements for accuracy exist, clients can use NTP broadcast and/or multicast modes, where many clients are configured the same way, and one broadcast server (on the same subnet) provides time for them all. Broadcast messages are not propagated by routers, meaning that this mode cannot be used beyond a single subnet.

Configuring a broadcast server is done using the broadcast command, and then providing a local subnet address. The broadcast client command lets the broadcast client respond to broadcast messages received on any interface. This mode should always be authenticated, because an intruder can impersonate a broadcast server and propagate false time values.
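
On Cisco IOS, for example, the broadcast server and client roles are configured on the interface facing the shared subnet; the interface name below is a placeholder, and the same ntp authentication-key and ntp authenticate commands shown for symmetric mode should be added to authenticate the exchange:

! Broadcast server
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ntp broadcast

! Broadcast client
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ntp broadcast client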

NAT

Although implementation of IPv6 and its 340 undecillion addresses is proceeding, IPv4 is still widely used. IPv4 can only accommodate a maximum of slightly over 4 billion unique addresses (2 to the 32nd power). This creates problems. Given the necessarily limited range of public or external IPv4 addresses an organization (or subnetwork) can control, how can many more devices use these addresses to communicate outside?

Network Address Translation (NAT) helps with the problem of IPv4 address depletion. NAT works by mapping many private internal IPv4 addresses to a range of public addresses or to one single address (as is done in most home networks). NAT identifies traffic to and from a specific device, translating between external/public and internal/private IPv4 addresses.

NAT also hides clients on the internal network behind a range of public addresses, providing a "sense of security" against these devices being directly attacked from outside. As mentioned previously, the IETF does not consider private IPv4 addresses or NAT as effective security measures.

NAT is supported on a large number of routers from different vendors for IPv4 address simplification and conservation. In addition, with NAT you can select which internal hosts are available for NAT and hence external access.

NAT can be configured on hosts and routers requiring it, without requiring any changes to hosts or routers that do not need NAT. This is an important advantage.

Purpose of NAT

By mapping between external and internal IPv4 addresses, NAT allows an organization with non-globally-routable IPv4 addresses to connect to the internet by translating addresses into a globally-routable IPv4 address. Non-globally-routable addresses, or private addresses, are defined by RFC 1918 (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16). These addresses are private and cannot be routed on the internet. With the exception of a few other special-use ranges, all other IPv4 addresses are globally routable on the internet and are known as public IPv4 addresses.

NAT also enables an organization to easily change service providers or voluntarily renumber network resources without affecting their public IPv4 address space. NAT is an IETF standard and is described in RFC 1631 - The IP Network Address Translator (NAT).

NAT is typically configured at the point of connection between the internal network and the outside network or the internet. For each packet exiting the domain, NAT translates the source address into a globally unique address, and vice-versa. Networks with more than one point of entrance/exit require multiple NATs, sharing the same translation table. If NAT runs out of addresses from the pool, it drops the packet and sends an ICMP host unreachable message back to the source.

Used in the context of NAT, the term "inside" usually means networks controlled by an organization, with addresses in one local address space. "Outside" refers to networks to which the network connects, and which may not be under the organization's control. Hosts in outside networks may also be subject to translation, with their own local and global IPv4 addresses.

Types of NAT

NAT typically runs on a router. Before packets are forwarded between networks, NAT translates the private (inside local) addresses within the internal network into public (inside global) addresses. This functionality gives the option to configure NAT so that it advertises only a single address to the outside world, for the entire internal network. By so doing, NAT effectively hides the internal network from the world.

Types of NAT include:

  • Static address translation (static NAT) – This is one-to-one mapping between global and local IPv4 addresses.
  • Dynamic address translation (dynamic NAT) – This maps unregistered (private) IPv4 addresses to registered (public) IPv4 addresses drawn from a pool.
  • Overloading (also called Port Address Translation or PAT) – This maps many unregistered IPv4 addresses to a single registered address (many to one) on different ports. Through overloading, thousands of users can be connected to the internet by using only one real global IP address.
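
As a hedged Cisco IOS sketch (the interface names, access list, pool range, and all addresses below are placeholders, not a definitive configuration), the three types map to commands along these lines:

! Identify inside and outside interfaces
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip nat inside
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip nat outside

! Static NAT: one-to-one mapping for a single host
Router(config)# ip nat inside source static 10.1.1.5 203.0.113.5

! Dynamic NAT: private addresses matched by ACL 1 map to a pool of public addresses
Router(config)# access-list 1 permit 10.1.1.0 0.0.0.255
Router(config)# ip nat pool PUBLIC-POOL 203.0.113.20 203.0.113.30 netmask 255.255.255.0
Router(config)# ip nat inside source list 1 pool PUBLIC-POOL

! PAT (overload): many private addresses share the outside interface address
Router(config)# ip nat inside source list 1 interface GigabitEthernet0/1 overload

In practice you would choose one of these approaches (or combine static NAT for servers with PAT for clients) rather than configure all three at once.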

IPv6 was developed with the intention of making NAT unnecessary. However, IPv6 does include its own IPv6 private address space called unique local addresses (ULAs). IPv6 ULAs are similar to RFC 1918 private addresses in IPv4 but have a different purpose. ULAs are meant for only local communications within a site. ULAs are not meant to provide additional IPv6 address space, nor to provide a level of security.

IPv6 does provide for protocol translation between IPv4 and IPv6. This is known as NAT64. NAT for IPv6 is used in a much different context than NAT for IPv4. The varieties of NAT for IPv6 are used to transparently provide access between IPv6-only and IPv4-only networks. It is not used as a form of private IPv6 to global IPv6 translation.

Four NAT Addresses

NAT includes four types of addresses:

  • Inside local address
  • Inside global address
  • Outside local address
  • Outside global address

When determining which type of address is used, it is important to remember that NAT terminology is always applied from the perspective of the device with the translated address:

  • Inside address – This is the address of the device which is being translated by NAT.
  • Outside address – This is the address of the destination device.

NAT also uses the concept of local or global with respect to addresses:

  • Local address – This is any address that appears on the inside portion of the network.
  • Global address - This is any address that appears on the outside portion of the network.

Inside Source Address Translation

IPv4 addresses can be translated into globally-unique IPv4 addresses when communicating outside the internal network. There are two options to accomplish this:

  • Static translation - This method sets up a one-to-one mapping between an inside local address and an inside global address. This is useful when a host on the inside must be accessed from a fixed outside address.
  • Dynamic translation - This method maps between inside local addresses and a global address pool.

The figure shows a device translating a source address inside a network to a source address outside the network:

NAT


As shown in the above figure, inside source address translation works as follows:

  1. The user at 10.1.1.1 initiates a connection to Host B in the outside network.
  2. When a packet is received from 10.1.1.1, the NAT device checks its NAT table and determines what to do next:
  • If a static mapping is found, the device goes to Step 3, below.

  • If no static mapping is found, the source address (SA) 10.1.1.1 is dynamically translated. A legal, global address is selected from the dynamic address pool, and a translation entry in the NAT table, called a simple entry, is created.

  3. The NAT device swaps the inside local source address of host 10.1.1.1 with the global address of the translation entry, then forwards the packet.
  4. Host B receives the packet and replies to host 10.1.1.1 using the inside global IP destination address (DA) 203.0.113.20.
  5. The NAT device uses the inside global address as a key, performs a NAT table lookup, and then translates it to the inside local address of host 10.1.1.1 before forwarding the packet.
  6. Host 10.1.1.1 receives the packet, and the exchange continues, with Steps 2 through 5 repeated for each packet.
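
On a Cisco IOS NAT device, the simple entry created in Step 2 can be inspected with the show ip nat translations command. The output below is illustrative only, reusing the addresses from the walkthrough:

Router# show ip nat translations
Pro Inside global      Inside local       Outside local      Outside global
--- 203.0.113.20       10.1.1.1           ---                ---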

Overloading of Inside Global Addresses

Using a single global address for multiple local addresses is known as overloading. When overloading is configured, the NAT device gathers information from higher-level protocols (for example, TCP or UDP port numbers) to translate global addresses back to correct local addresses. To map multiple local addresses to one global address, TCP or UDP port numbers are used to distinguish local addresses. This NAT process is called Port Address Translation (PAT). An example is shown in the following figure.

PAT



PAT works as follows. Both Host B and Host C think they are communicating with a single host at address 203.0.113.20. They are actually communicating with different hosts, differentiated by port number:

  1. Host 10.1.1.1 initiates a connection to Host B.
  2. The first packet received from host 10.1.1.1 causes the device to check its NAT table:
  • If no translation entry exists, the device translates the inside local address to a global address from the available pool.

  • If another translation is ongoing (presuming overloading is enabled), the device reuses that translation's global address and saves information in the NAT table that can be used to reverse the process, translating the global address back to the proper local address. This is called an extended entry.

  3. The NAT device swaps source address 10.1.1.1 with the global address, then forwards the packet.
  4. Host B receives the packet and responds by using the inside global IP address 203.0.113.20.
  5. The NAT device then uses the inside and outside addresses and port numbers to perform a NAT table lookup. It translates the packet to the inside local address 10.1.1.1 and forwards it.
  6. Host 10.1.1.1 receives the packet, and the exchange continues, with Steps 2 through 5 repeated for each packet.
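
With overloading, the NAT table holds extended entries that record protocol and port numbers in addition to addresses. An illustrative show ip nat translations output, assuming Host B sits at the hypothetical address 198.51.100.5 and the connection is to TCP port 80, might look like this:

Router# show ip nat translations
Pro Inside global         Inside local        Outside local       Outside global
tcp 203.0.113.20:1025     10.1.1.1:1025       198.51.100.5:80     198.51.100.5:80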

Troubleshooting Application Connectivity Issues


Troubleshooting Common Network Connectivity Issues

There can be many reasons why network connectivity is not functioning the way you might expect. Whenever an application, a website, or any other destination being accessed over a network does not behave as expected, the network takes the blame. While the network might be the culprit in some instances, there are many other reasons why applications stop responding as expected.

Network troubleshooting usually follows the OSI layers. You can start either top to bottom, beginning at the application layer and making your way down to the physical layer, or from the bottom to the top. In this example, we will cover a typical troubleshooting session starting at the physical layer and making our way up the stack toward the application layer.

First and foremost, from a client perspective, it is very important to determine how the client connects to the network. Is it a wired or wireless connection?

If the client connects via an Ethernet cable, make sure the NIC comes online and there are electrical signals being exchanged with the switch port to which the cable is connected. Depending on the operating system that the client is running, the status of the network connection will show as a solid green, or it will display "enabled" or "connected" text next to the network interface card in the operating system settings. If the NIC shows as connected, you know the physical layer is working as expected and can move the troubleshooting up the stack. If the NIC on the client does not show up as connected or enabled, then check the configuration on the switch. The port to which the client is connecting might be shut down, or maybe the cable connecting the client to the network port in the wall is defective, or the cable connecting the network port from the wall all the way to the switch might be defective. Troubleshooting at the physical layer basically boils down to making sure there are four uninterrupted pairs of twisted copper cables between the network client and the switch port.

If the client wirelessly connects to the network, make sure that the wireless network interface is turned on and it can send and receive wireless signals to and from the nearest wireless access point. Also, make sure you stay in the range of the wireless access point as long as you need to be connected to the network.

Moving up to the data link layer, or Layer 2, make sure the client is able to learn destination MAC addresses (using ARP) and also that the switch to which the client is connecting is able to learn the MAC addresses received on its ports. On most operating systems you can view the ARP table with a form of the arp command (such as arp -a on a Windows 10 PC), and on Cisco switches you can verify the MAC address table with the command show mac address-table. If you can verify that both these tables are accurate, then you can move to the next layer. If the client cannot see any MAC addresses in its local ARP table, check for any Layer 2 access control lists on the switch port that might block this traffic. Also make sure that the switch port is configured for the correct client VLAN.
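
For example, on the DEVASC VM you can display the local ARP table, and on a Cisco switch you can display the dynamically learned MAC addresses; the addresses in this output are illustrative:

devasc@labvm:~$ arp -a
_gateway (10.0.2.2) at 52:54:00:12:35:02 [ether] on enp0s3

Switch# show mac address-table dynamic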

At the network layer, or Layer 3, make sure the client obtains the correct IP address from the DHCP server, or is manually configured with the correct IP address and the correct default gateway. If the destination of your traffic is in a different subnet than the subnet you are connected to, that means the traffic will have to be sent to the local router (default gateway). Check Layer 3 connectivity one hop at a time. First check connectivity to the first Layer 3 hop in the path of the traffic, which is the default gateway, to make sure you can reach it. If Layer 3 connectivity can be established all the way from the client to the destination, move on with troubleshooting to the transport layer, or Layer 4. If Layer 3 connectivity cannot be established, check IP access lists on the router interfaces, check the routing table on both the client and the default gateway router and make sure the traffic is routed correctly. Routing protocol issues and access control lists blocking IP traffic are some of the usual problems encountered at this layer.
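
A quick way to work through Layer 3 one hop at a time is to confirm the configured default gateway and then ping it before pinging the final destination. The gateway address below is the one used on the DEVASC VM's NAT network, and 198.51.100.10 is a hypothetical destination:

devasc@labvm:~$ ip route | grep default
devasc@labvm:~$ ping -c 3 10.0.2.2
devasc@labvm:~$ ping -c 3 198.51.100.10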

You have verified end-to-end communication between the source and destination of the traffic in the previous step. This is a major milestone, so give yourself a pat on the back. You have almost cleared the network team of responsibility for that application outage. Before blaming the application support team, there is one more thing to verify. Make sure the client can access the port on which the application is running. If the destination of the traffic is a webpage served by a web server, make sure TCP ports 80 (HTTP) and 443 (HTTPS) are accessible and reachable from the client. It could be that the web server is running on a non-standard port like 8080, so make sure you know the correct port on which the application you are trying to connect to is running. Networking tools like curl, or a telnet command specifying the application port, can be used to ensure the transport layer works end-to-end between the source and destination of the traffic. If a transport connection cannot be established, check firewalls and security appliances in the path of the traffic for rules that block the traffic based on TCP and UDP ports. Verify whether any load balancing is enabled and whether the load balancer is working as expected, or whether any proxy servers intercepting the traffic are filtering and denying the connection.
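
Two quick ways to test the transport layer end to end are an HTTP HEAD request with curl and a TCP connection test with netcat (nc); the destination below is just an example, and nc may not be installed on every system:

devasc@labvm:~$ curl -I https://www.cisco.com
devasc@labvm:~$ nc -zv www.cisco.com 443

If curl returns HTTP response headers, or nc reports the connection as succeeded, the transport layer is working between you and that port.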

So you got this far: you checked end-to-end connectivity and you can connect to the port on which the application is running, so the network team is in the clear, right? Almost. One additional thing to verify is traffic load and network delay. Networking tools like iperf can generate traffic and stress-test the network to verify that large amounts of data can be transported between the source and destination. These issues are the most difficult to troubleshoot because they can be difficult to reproduce. They can be temporarily caused by a spike in network traffic, or could be outside your control altogether. Implementing QoS throughout the network can help with these issues. With QoS, traffic is categorized into different buckets and each bucket gets separate treatment from the network. For example, real-time traffic like voice and video can be classified as such by changing QoS fields in the Layer 2 and Layer 3 packet headers so that when switches and routers process this type of traffic, it gets higher priority and guaranteed bandwidth if necessary.
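
A minimal iperf sketch, assuming iperf3 is installed on both ends of the path and that 198.51.100.10 is a hypothetical host acting as the far end:

# On the remote host (server side)
iperf3 -s

# On the local host (client side), run a 10-second throughput test toward the server
iperf3 -c 198.51.100.10 -t 10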

Or maybe you are lucky to begin with: you are verifying access to a web server or a REST API endpoint and the server returns a 500 status code. In that case, you can skip the network troubleshooting steps and start troubleshooting the web server itself.

If you got this far in your network troubleshooting, there is a good chance that the problem is not with the network and a closer look at the application server is in order. Slow or no responses from the application could also mean an overloaded backend database, or just faulty code introduced through new features. In this case, solutions like Cisco AppDynamics can offer a deeper view into application performance and root cause analysis of application issues.

Networking Tools - Using ifconfig

ifconfig is a software utility for UNIX-based operating systems. There is also a similar utility for Microsoft Windows-based operating systems called ipconfig. The main purpose of this utility is to manage, configure, and monitor network interfaces and their parameters. ifconfig is a command-line tool and comes installed by default with most operating systems.

Common uses for ifconfig are the following:

  • Configure IP address and subnet mask for network interfaces.
  • Query the status of network interfaces.
  • Enable/disable network interfaces.
  • Change the MAC address on an Ethernet network interface.

Issuing the ifconfig --help command in the command line interface will display all the options that are available with this version of ifconfig. The output should look similar to the following.

devasc@labvm:~$ ifconfig --help
Usage:
  ifconfig [-a] [-v] [-s] <interface> [[<AF>] <address>]
  [add <address>[/<prefixlen>]]
  [del <address>[/<prefixlen>]]
  [[-]broadcast [<address>]]  [[-]pointopoint [<address>]]
  [netmask <address>]  [dstaddr <address>]  [tunnel <address>]
  [outfill <NN>] [keepalive <NN>]
  [hw <HW> <address>]  [mtu <NN>]
  [[-]trailers]  [[-]arp]  [[-]allmulti]
  [multicast]  [[-]promisc]
  [mem_start <NN>]  [io_addr <NN>]  [irq <NN>]  [media <type>]
  [txqueuelen <NN>]
  [[-]dynamic]
  [up|down] ...
<output omitted>

From this output we can see that ifconfig gives us the option to add (add) or delete (del) IP addresses and their subnet masks (prefix lengths) on a specific network interface. The hw ether option lets us change the Ethernet MAC address. Take special care when shutting down interfaces: if you are remotely connected to the host via the interface you are shutting down, you have just disconnected your own session and may have to physically walk up to the device to bring it back online. That is not such a big problem when the device is in the room next door, but it can be quite daunting when it is in a data center hundreds of miles away and you have to drive for hours or even take a flight to bring it back online.
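
For example, a few of these options used on a hypothetical interface enp0s3 (the addresses and MAC address are placeholders, most of these commands require root privileges, and some drivers require the interface to be down before its MAC address can be changed):

sudo ifconfig enp0s3 192.168.10.5 netmask 255.255.255.0
sudo ifconfig enp0s3 down
sudo ifconfig enp0s3 hw ether 08:00:27:aa:bb:cc
sudo ifconfig enp0s3 up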

If ifconfig is issued without any parameters, it just returns the status of all the network interfaces on that host. For your DEVASC VM, the output should look similar to the following:

devasc@labvm:~$ ifconfig
dummy0: flags=195<UP,BROADCAST,RUNNING,NOARP>  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::48db:6aff:fe27:4849  prefixlen 64  scopeid 0x20<link>
        ether 4a:db:6a:27:48:49  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12293  bytes 2544763 (2.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fee9:3de6  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:e9:3d:e6  txqueuelen 1000  (Ethernet)
        RX packets 280055  bytes 281957761 (281.9 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 112889  bytes 10175993 (10.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 46014  bytes 14094803 (14.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 46014  bytes 14094803 (14.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

devasc@labvm:~$

MTU stands for Maximum Transmission Unit and specifies the maximum number of bytes that can be carried in a single frame on this medium before fragmentation is required.

The RX packets and RX bytes contain the values of the received packets and bytes respectively on that interface. In the example above there were 280055 packets received on the enp0s3 interface that contained 281.9 MB of data. The TX packets and TX bytes contain the values of the transmit packets and bytes on that specific interface. In the example, there were 112889 packets transmitted on enp0s3 which accounted for 10.1 MB of data.

ifconfig is still used extensively for both configuration and monitoring purposes. There are also GUI clients for most operating systems that take this functionality into a graphical interface and make it more visual for end users to configure network interfaces.

Note: The ifconfig command has been used within Linux for many years. However, some Linux distributions have deprecated the ifconfig command. The ip address command is becoming the new alternative. You will see the ip address command used in some of the labs in this course.
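
For reference, rough ip command equivalents of the ifconfig tasks shown above look like this (same placeholder interface and addresses):

ip address show
sudo ip address add 192.168.10.5/24 dev enp0s3
sudo ip link set enp0s3 down
sudo ip link set enp0s3 up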


Using ping

Similar to ifconfig, ping is a software utility used to test IP network reachability for hosts and devices connected to a specific network. It is also available on virtually all operating systems and is extremely useful for troubleshooting connectivity issues. The ping utility uses Internet Control Message Protocol (ICMP) to send packets to the target host and then waits for ICMP echo replies. Based on this exchange of ICMP packets, ping reports errors, packet loss, round-trip time, time to live (TTL) for received packets, and more.

On Windows 10, enter the ping command to view its usage information. The output should look similar to the following:

C:\> ping

Usage: ping [-t] [-a] [-n count] [-l size] [-f] [-i TTL] [-v TOS]
            [-r count] [-s count] [[-j host-list] | [-k host-list]]
            [-w timeout] [-R] [-S srcaddr] [-c compartment] [-p]
            [-4] [-6] target_name

Options:
    -t             Ping the specified host until stopped.
                   To see statistics and continue - type Control-Break;
                   To stop - type Control-C.
    -a             Resolve addresses to hostnames.
    -n count       Number of echo requests to send.
    -l size        Send buffer size.
    -f             Set Don't Fragment flag in packet (IPv4-only).
    -i TTL         Time To Live.
    -v TOS         Type Of Service (IPv4-only. This setting has been deprecated
                   and has no effect on the type of service field in the IP
                   Header).
    -r count       Record route for count hops (IPv4-only).
    -s count       Timestamp for count hops (IPv4-only).
    -j host-list   Loose source route along host-list (IPv4-only).
    -k host-list   Strict source route along host-list (IPv4-only).
    -w timeout     Timeout in milliseconds to wait for each reply.
    -R             Use routing header to test reverse route also (IPv6-only).
                   Per RFC 5095 the use of this routing header has been
                   deprecated. Some systems may drop echo requests if
                   this header is used.
    -S srcaddr     Source address to use.
    -c compartment Routing compartment identifier.
    -p             Ping a Hyper-V Network Virtualization provider address.
    -4             Force using IPv4.
    -6             Force using IPv6.


C:\>

On MacOS Catalina, enter the ping command to view its usage information. The output should look similar to the following:

$ ping
usage: ping [-AaDdfnoQqRrv] [-c count] [-G sweepmaxsize]
            [-g sweepminsize] [-h sweepincrsize] [-i wait]
            [-l preload] [-M mask | time] [-m ttl] [-p pattern]
            [-S src_addr] [-s packetsize] [-t timeout][-W waittime]
            [-z tos] host
       ping [-AaDdfLnoQqRrv] [-c count] [-I iface] [-i wait]
            [-l preload] [-M mask | time] [-m ttl] [-p pattern] [-S src_addr]
            [-s packetsize] [-T ttl] [-t timeout] [-W waittime]
            [-z tos] mcast-group
Apple specific options (to be specified before mcast-group or host like all options)
            -b boundif           # bind the socket to the interface
            -k traffic_class     # set traffic class socket option
            -K net_service_type  # set traffic class socket options
            -apple-connect       # call connect(2) in the socket
            -apple-time          # display current time

On your DEVASC VM, add the -help option to view its usage information. The output should look similar to the following:

devasc@labvm:~$ ping -help

Usage
  ping [options] <destination>

Options:
  <destination>      dns name or ip address
  -a                 use audible ping
  -A                 use adaptive ping
  -B                 sticky source address
  -c <count>         stop after <count> replies
  -D                 print timestamps
  -d                 use SO_DEBUG socket option
  -f                 flood ping
  -h                 print help and exit
<output omitted>

IPv4 options:
  -4                 use IPv4
  -b                 allow pinging broadcast
  -R                 record route
  -T <timestamp>     define timestamp, can be one of <tsonly|tsandaddr|tsprespec>

IPv6 options:
  -6                 use IPv6
  -F <flowlabel>     define flow label, default is random
  -N <nodeinfo opt>  use icmp6 node info query, try <help> as argument

For more details see ping(8).
devasc@labvm:~$

Running ping without arguments (or ping -help in Linux) displays all the options it has available. Some of the options you can specify include:

  • Count of how many ICMP echo requests you want to send
  • Source IP address in case there are multiple network interfaces on the host
  • Timeout to wait for an echo reply packet
  • Packet size, if you want to send packets larger than the default (56 data bytes, or 64 bytes including the ICMP header, on Linux). This option is very important when determining the MTU along a path; see the example after the basic ping output below.

For example, enter the command ping -c 5 www.cisco.com in your DEVASC VM terminal window to see if the web server is reachable from your PC and responding to ICMP echo-requests. The -c option allows you to specify that only 5 ping packets should be sent.

devasc@labvm:~$ ping -c 5 www.cisco.com
PING e2867.dsca.akamaiedge.net (23.204.11.200) 56(84) bytes of data.
64 bytes from a23-204-11-200.deploy.static.akamaitechnologies.com (23.204.11.200): icmp_seq=1 ttl=57 time=81.4 ms
64 bytes from a23-204-11-200.deploy.static.akamaitechnologies.com (23.204.11.200): icmp_seq=2 ttl=57 time=28.5 ms
64 bytes from a23-204-11-200.deploy.static.akamaitechnologies.com (23.204.11.200): icmp_seq=3 ttl=57 time=31.5 ms
64 bytes from a23-204-11-200.deploy.static.akamaitechnologies.com (23.204.11.200): icmp_seq=4 ttl=57 time=28.8 ms
64 bytes from a23-204-11-200.deploy.static.akamaitechnologies.com (23.204.11.200): icmp_seq=5 ttl=57 time=26.5 ms

--- e2867.dsca.akamaiedge.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4024ms
rtt min/avg/max/mdev = 26.481/39.335/81.372/21.078 ms
devasc@labvm:~$ 

The output of the command in your environment will most probably look a bit different, but the major components should be the same. We specified a count of 5 ICMP echo request packets to be sent to www.cisco.com. The ping utility automatically does the DNS resolution and in this case resolved the www.cisco.com name to the 23.204.11.200 IPv4 address. The packets are sent, and responses are received from 23.204.11.200. TTL values for the received echo replies and round-trip times are calculated and displayed. The final statistics confirm that 5 ICMP echo-request packets were transmitted and 5 ICMP echo-reply packets were received, hence 0% packet loss. Statistics about the minimum, average, maximum, and standard deviation of the time it took for the packets to get to the destination and back are also displayed.
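
As an example of the packet size option mentioned earlier, you can probe the path MTU by sending a large payload with the "don't fragment" flag set. With the Linux iputils ping, 1472 bytes of ICMP data plus 28 bytes of ICMP and IP headers corresponds to a standard 1500-byte Ethernet MTU:

devasc@labvm:~$ ping -c 3 -s 1472 -M do www.cisco.com

If a link along the path has a smaller MTU, the replies are replaced by errors indicating that fragmentation is needed, and you can lower the size until the pings succeed.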

Keep in mind that if you do not receive any replies from the destination you are trying to reach with ping, it does not mean that the host is offline or not reachable. It could simply mean that ICMP echo-request packets are filtered by a firewall and are not allowed to reach the destination host. It is actually a best practice to expose only the services that need to be available on the hosts in the network. For example, a web server would only expose TCP port 443 for secure HTTP traffic and deny all other types of traffic, either through a local firewall on the web server itself or through a network firewall.

For IPv6, Linux and macOS provide a similar utility called ping6, and comparable tools exist on most operating systems. Windows and Cisco IOS use the same ping command for both IPv4 and IPv6.

Using traceroute

You have seen how ping can display host reachability on the network. traceroute builds on top of that functionality and displays the route that the packets take on their way to the destination. The Microsoft Windows alternative is also a command-line utility and is called tracert. Observing the path the network traffic takes from its source to the destination is extremely important from a troubleshooting perspective, as routing loops and non-optimal paths can be detected and then remedied.

traceroute uses ICMP packets to determine the path to the destination. The Time to Live (TTL) field in the IP packet header is used primarily to avoid infinite loops in the network. For each hop or router that an IP packet goes through, the TTL field is decremented by one. When the TTL field value reaches 0, the packet is discarded, avoiding the dreaded infinite loops. Normally, the sending host sets the TTL field to a high value (commonly 64 or 128, up to the maximum of 255) to maximize the chances of the packet reaching its destination. traceroute reverses this logic: it starts with a TTL value of 1 on the first packet it sends and increments the TTL by 1 on each subsequent packet. Setting a TTL value of 1 for the first packet means the packet will be discarded at the first router. By default, most routers send back to the source of the traffic an ICMP Time Exceeded packet informing it that the packet reached a TTL value of 0 and had to be discarded. traceroute uses the information in these replies to learn each router's IP address and hostname, as well as the round-trip times.

Note: Instead of ICMP, by default, Linux uses UDP probes sent to a high port range (33434 - 33534). Intermediate routers still return ICMP Time Exceeded messages, and the final destination responds with an ICMP port unreachable message instead of the echo reply used in ICMP-based traceroutes.

On Windows 10, use tracert to see the available options as shown in the following output:

C:\> tracert

Usage: tracert [-d] [-h maximum_hops] [-j host-list] [-w timeout]
               [-R] [-S srcaddr] [-4] [-6] target_name

Options:
    -d                 Do not resolve addresses to hostnames.
    -h maximum_hops    Maximum number of hops to search for target.
    -j host-list       Loose source route along host-list (IPv4-only).
    -w timeout         Wait timeout milliseconds for each reply.
    -R                 Trace round-trip path (IPv6-only).
    -S srcaddr         Source address to use (IPv6-only).
    -4                 Force using IPv4.
    -6                 Force using IPv6.

C:\>

On MacOS, use traceroute to see the available options as shown in the following output:

$ traceroute
Version 1.4a12+Darwin
Usage: traceroute [-adDeFInrSvx] [-A as_server] [-f first_ttl] [-g gateway] [-i iface]
    [-M first_ttl] [-m max_ttl] [-p port] [-P proto] [-q nqueries] [-s src_addr]
    [-t tos] [-w waittime] [-z pausemsecs] host [packetlen]

On your DEVASC VM, use traceroute --help to see the available options as shown in the following output:

devasc@labvm:~$ traceroute --help
Usage: traceroute [OPTION...] HOST
Print the route packets trace to network host.

  -f, --first-hop=NUM        set initial hop distance, i.e., time-to-live
  -g, --gateways=GATES       list of gateways for loose source routing
  -I, --icmp                 use ICMP ECHO as probe
  -m, --max-hop=NUM          set maximal hop count (default: 64)
  -M, --type=METHOD          use METHOD (`icmp' or `udp') for traceroute
                             operations, defaulting to `udp'
  -p, --port=PORT            use destination PORT port (default: 33434)
  -q, --tries=NUM            send NUM probe packets per hop (default: 3)
      --resolve-hostnames    resolve hostnames
  -t, --tos=NUM              set type of service (TOS) to NUM
  -w, --wait=NUM             wait NUM seconds for response (default: 3)
  -?, --help                 give this help list
      --usage                give a short usage message
  -V, --version              print program version

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.

Report bugs to <bug-inetutils@gnu.org>.
devasc@labvm:~$

Several options are available with traceroute including:

  • Specifying the TTL value of the first packet sent. By default this is 1.
  • Specifying the maximum TTL value. By default, it will increase the TTL value up to 64 or until the destination is reached.
  • Specifying the source address in case there are multiple interfaces on the host.
  • Specifying QoS value in the IP header.
  • Specifying the packet length.
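
For example, to combine several of these options with the GNU inetutils traceroute shown above, the following command forces ICMP probes (which may require root privileges), sends one probe per hop, and limits the trace to 20 hops; the destination is just an example:

traceroute -I -q 1 -m 20 www.cisco.com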

Because of the way VirtualBox implements a NAT network, you cannot trace outside of your DEVASC VM. You would need to change your VM to Bridged mode. But then, you would not be able to communicate with the CSR1000v in other labs. Therefore, we recommend leaving your VM in NAT mode.

However, you can tracert from your Windows device or traceroute from your MacOS device. The following output is from a MacOS device inside the corporate Cisco network tracing the route to one of Yahoo’s web servers.

$ traceroute www.yahoo.com
traceroute: Warning: www.yahoo.com has multiple addresses; using 98.138.219.232
traceroute to atsv2-fp-shed.wg1.b.yahoo.com (98.138.219.232), 64 hops max, 52 byte packets
 1  sjc2x-dtbb.cisco.com (10.1x.y.z)  2.422 ms  1.916 ms  1.773 ms
 2  sjc2x-dt5.cisco.com (12x.1y.1z.1ww)  2.045 ms
    sjc2x-dt5-01.cisco.com (12x.1y.1z.15w)  2.099 ms  1.968 ms
 3  sjc2x-sbb5.cisco.com (1xx.1x.1xx.4y)  1.713 ms  1.984 ms
    sjc2x-sbb5-10.cisco.com (1xx.1x.1y.4w)  1.665 ms
 4  sjc2x-rbb.cisco.com (1xx.1y.zz.yyy)  1.836 ms  1.804 ms  1.696 ms
 5  sjc1x-rbb-7.cisco.com (1xx.zz.y.ww)  68.448 ms  1.880 ms  1.939 ms
 6  sjc1x-corp-0.cisco.com (1xx.yy.z.w)  1.890 ms  2.660 ms  2.793 ms
 7  * * *
 8  * * *
 9  * * *
 ...
 61  * * *
 62  * * *
 63  * * *
 64  * * *

Note: The output above has been altered for security reasons, but your output should actually have both valid hostnames and IP addresses.

From this output, we can see the first 6 hops or routers on the path toward www.yahoo.com. The entries for hops 2 and 3 each show two routers, suggesting there is load balancing implemented on this specific path. Round-trip times are also included in the output. In this output you can also see that the traceroute traffic is not allowed outside of the corporate Cisco network, so the complete path to the destination is not available. By filtering ICMP Time Exceeded messages with firewalls, or by disabling them at the host and router level, visibility of the path with traceroute is greatly limited. Still, even with these limitations, traceroute is an extremely useful utility to have in your tool belt for troubleshooting network-related issues.

For IPv6 there is an alternative for UNIX-based operating systems called traceroute6. tracert is used for both IPv4 and IPv6 on Windows; however, you can use the -4 or -6 parameter to select the Layer 3 protocol.

Using nslookup

nslookup is another command-line utility used for querying DNS to obtain domain name to IP address mappings. Like the other tools mentioned in this section, nslookup is widely available on almost all operating systems. This tool is useful to determine whether the DNS server configured on a specific host is working as expected and actually resolving hostnames to IP addresses. It could be that a DNS server is not configured at all on the host, so make sure you check /etc/resolv.conf on UNIX-like operating systems and confirm that you have at least one nameserver defined.

The DEVASC VM Linux OS does not implement a help option for the nslookup command. However, you can enter man nslookup to learn more about the available options.

In the terminal, execute the command nslookup www.cisco.com 8.8.8.8 to resolve the IP address or addresses for Cisco’s web server and specify that you want to use Google’s DNS server at 8.8.8.8 to do the resolution.

Note: dig is often the preferred tool for querying DNS.

devasc@labvm:~$ nslookup www.cisco.com 8.8.8.8
Server:         8.8.8.8
Address:        8.8.8.8#53

Non-authoritative answer:
www.cisco.com   canonical name = www.cisco.com.akadns.net.
www.cisco.com.akadns.net        canonical name = wwwds.cisco.com.edgekey.net.
wwwds.cisco.com.edgekey.net     canonical name = wwwds.cisco.com.edgekey.net.globalredir.akadns.net.
wwwds.cisco.com.edgekey.net.globalredir.akadns.net      canonical name = e2867.dsca.akamaiedge.net.
Name:   e2867.dsca.akamaiedge.net
Address: 23.204.11.200
Name:   e2867.dsca.akamaiedge.net
Address: 2600:1404:5800:392::b33
Name:   e2867.dsca.akamaiedge.net
Address: 2600:1404:5800:39a::b33

devasc@labvm:~$

The DNS service running on server 8.8.8.8 resolved the www.cisco.com domain to 3 IP addresses as you can see above. This resolution from names to IP addresses is critically important to the functioning of any network. It is much easier to remember www.cisco.com than an IPv4 or IPv6 address every time you are trying to access the Cisco website.
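
As mentioned in the note above, dig is often preferred for DNS troubleshooting. A rough equivalent of the nslookup query above, again using Google's DNS server, is the following, where +short limits the output to the resolved names and addresses:

devasc@labvm:~$ dig @8.8.8.8 www.cisco.com +short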

Networking Fundamentals Summary


What Did I Learn in this Module?

Introduction to Network Fundamentals

A network consists of end devices such as computers, mobile devices, and printers that are connected by networking devices such as switches and routers. The network enables the devices to communicate with one another and share data. A protocol suite is a set of protocols that work together to provide comprehensive network communication services. Both the OSI and the TCP/IP reference models use layers to describe the functions and services that can occur at that layer. The form that a piece of data takes at any layer is called a protocol data unit (PDU). At each stage of the encapsulation process, a PDU has a different name to reflect its new functions: data, segment, packet, frame, and bits.

The OSI reference model layers are described here from bottom to top:

  1. The physical layer is responsible for the transmission and reception of raw bit streams.
  2. The data link layer provides NIC-to-NIC communications on the same network.
  3. The network layer provides services to allow end devices to exchange data across networks.
  4. The transport layer provides the possibility of reliability and flow control.
  5. The session layer allows hosts to establish sessions between them.
  6. The presentation layer specifies context between application-layer entities.
  7. The application layer is the OSI layer that is closest to the end user and contains a variety of protocols usually needed by users.

End devices implement protocols for the entire "stack", all layers. The source of the message (data) encapsulates the data with the appropriate protocols, while the final destination de-encapsulates each protocol header/trailer to receive the message (data).

Network Interface Layer

Ethernet is a set of guidelines and rules that enable various network components to work together. These guidelines specify cabling and signaling at the physical and data link layers of the OSI model. In Ethernet terminology, the container into which data is placed for transmission is called a frame. The frame contains header information, trailer information, and the actual data that is being transmitted. Important fields of an Ethernet frame include the preamble, SFD, destination MAC address, source MAC address, type, data, and FCS. Each NIC has a unique Media Access Control (MAC) address that identifies the physical device, also known as a physical address. The MAC address identifies the location of a specific end device or router on a LAN. The three major types of network communications are: unicast, broadcast, and multicast.

The switch builds and maintains a table (called the MAC address table) that matches the destination MAC address with the port that is used to connect to a node. The switch forwards frames by searching for a match between the destination MAC address in the frame and an entry in the MAC address table. Depending on the result, the switch will decide whether to filter or flood the frame. If the destination MAC address is in the MAC address table, it will send it out the specified port. Otherwise, it will flood it out all ports except the incoming port.

A VLAN groups devices on one or more LANs that are configured to communicate as if they were attached to the same wire, when in fact they are located on a number of different LAN segments. VLANs define Layer 2 broadcast domains. VLANs are often associated with IP networks or subnets. A trunk is a point-to-point link between two network devices that carries more than one VLAN. A VLAN trunk extends VLANs across an entire network. VLANs are organized into three ranges: reserved, normal, and extended.

Internetwork Layer

Interconnected networks have to have ways to communicate, and internetworking provides that "between" (inter) networks communication method. Every device on a network has a unique IP address. An IP address and a MAC address are used for access and communication across all network devices. Without IP addresses there would be no internet. An IPv4 address is 32 bits, with each octet (8 bits) represented as a decimal value separated by a dot. This representation is called dotted decimal notation. There are three types of IPv4 addresses: network address, host addresses, and broadcast address. The IPv4 subnet mask (or prefix length) is used to differentiate the network portion from the host portion of an IPv4 address.

IPv6 is designed to be the successor to IPv4. IPv6 has a larger 128-bit address space, providing 340 undecillion possible addresses. IPv6 prefix aggregation, simplified network renumbering, and IPv6 site multihoming capabilities provide an IPv6 addressing hierarchy that allows for more efficient routing. IPv6 addresses are represented as a series of 16-bit hexadecimal fields (hextet) separated by colons (:) in the format: x:x:x:x:x:x:x:x. The preferred format includes all the hexadecimal values. There are two rules that can be used to reduce the representation of the IPv6 address: 1. Omit leading zeros in each hextet, and 2. Replace a single string of all-zero hextets with a double colon (::).

An IPv6 unicast address is an identifier for a single interface, on a single node. A global unicast address (GUA) (or aggregatable global unicast address) is an IPv6 similar to a public IPv4 address. The global routing prefix is the prefix, or network, portion of the address that is assigned by the provider such as an ISP, to a customer or site. The Subnet ID field is the area between the Global Routing Prefix and the Interface ID. The IPv6 interface ID is equivalent to the host portion of an IPv4 address. An IPv6 link-local address (LLA) enables a device to communicate with other IPv6-enabled devices on the same link and only on that link (subnet). IPv6 multicast addresses are similar to IPv4 multicast addresses. Recall that a multicast address is used to send a single packet to one or more destinations (multicast group). These are two common IPv6 assigned multicast groups: ff02::1 All-nodes multicast group, and ff02::2 All-routers multicast group.

A router is a networking device that functions at the internet layer of the TCP/IP model or Layer 3 network layer of the OSI model. Routing involves the forwarding packets between different networks. Routers use a routing table to route between networks. A router generally has two main functions: Path determination, and Packet routing or forwarding. A routing table may contain the following types of entries: directly connected networks, static routes, default routes, and dynamic routes.

Network Devices

A key concept in Ethernet switching is the broadcast domain. A broadcast domain is a logical division in which all devices in a network can reach each other by broadcast at the data link layer. Switches can now simultaneously transmit and receive data. Switches have the following functions:

  • Operate at the network access layer of the TCP/IP model and the Layer 2 data link layer of the OSI model
  • Filter or flood frames based on entries in the MAC address table
  • Have a large number of high speed and full-duplex ports

The switch operates in either of the following switching modes: cut-through, and store-and-forward. LAN switches have high port density, large frame buffers, and fast internal switching.

Routers are needed to reach devices that are not on the same local LAN. Routers use routing tables to route traffic between different networks. Routers are attached to different networks (or subnets) through their interfaces and have the ability to route the data traffic between them.

Routers have the following functions:

  • They operate at the internet layer of TCP/IP model and Layer 3 network layer of the OSI model.
  • They route packets between networks based on entries in the routing table.
  • They have support for a large variety of network ports, including various LAN and WAN media ports which may be copper or fiber. The number of interfaces on routers is usually much smaller than switches but the variety of interfaces supported is greater. IP addresses are configured on the interfaces.

There are three packet-forwarding mechanisms supported by routers: process switching, fast switching, and CEF.

A firewall is a hardware or software system that prevents unauthorized access into or out of a network. The most basic type of firewall is a stateless packet-filtering firewall. You create static rules that permit or deny packets, based on packet header information. The firewall examines packets as they traverse the firewall, compares them to static rules, and permits or denies traffic accordingly. The stateful packet-filtering firewall performs the same header inspection as the stateless packet-filtering firewall but also keeps track of the connection state. To keep track of the state, these firewalls maintain a state table. The most advanced type of firewall is the application layer firewall. With this type, deep inspection of the packet occurs all the way up to the OSI model’s Layer 7.

Load balancing improves the distribution of workloads across multiple computing resources, such as servers, cluster of servers, network links, etc. Server load balancing helps ensure the availability, scalability, and security of applications and services by distributing the work of a single server across multiple servers. At the device level, the load balancer provides high network availability by supporting: device redundancy, scalability, and security. At the network service level, a load balancer provides advanced services by supporting: high services availability, scalability, and services-level security.

Network diagrams display a visual and intuitive representation of the network: how all the devices are connected, in which buildings, floors, and closets they are located, which interface connects to which end device, and so on. There are generally two types of network diagrams: Layer 2 physical connectivity diagrams and Layer 3 logical connectivity diagrams. Layer 2, or physical connectivity, diagrams represent the port connectivity between the devices in the network. They are basically a visual representation of which network port on a network device connects to which network port on another network device. Layer 3, or logical connectivity, diagrams display the IP connectivity between devices on the network.

Networking Protocols

Telnet and SSH, or Secure Shell, are both used to connect to a remote computer and log in to that system using credentials. Telnet is less prevalent today because SSH uses encryption to protect data going over the network connection, and data security is a top priority. HTTP stands for Hypertext Transfer Protocol, and HTTPS adds the "Secure" keyword to the end of the acronym. This protocol is recognizable in web browsers as the one used to connect to websites. NETCONF has a standardized port value, 830. RESTCONF does not have a reserved port value, so you may see different port values across implementations.

Dynamic Host Configuration Protocol (DHCP) is used to pass configuration information to hosts on a TCP/IP network. DHCP allocates IP addresses in three ways: automatic, dynamic, and manual. DHCP operations includes four messages between the client and the server: server discovery, IP lease offer, IP lease request, and IP lease acknowledgment.

The DNS protocol defines an automated service that matches resource names with the required numeric network address. It includes the format for queries, responses, and data. The DNS protocol communications use a single format called a DNS message. The DNS server stores different types of resource records that are used to resolve names. These records contain the name, address, and type of record.

The SNMP system consists of three elements:

  • SNMP manager: network management system (NMS)
  • SNMP agents (managed node)
  • Management Information Base (MIB)

There are two primary SNMP manager requests, get and set. A get request is used by the NMS to query the device for data. A set request is used by the NMS to change configuration variables in the agent device. Traps are unsolicited messages alerting the SNMP manager to a condition or event on the network. SNMPv1 and SNMPv2c use community strings that control access to the MIB. SNMP community strings (including read-only and read-write) authenticate access to MIB objects. Think of the MIB as a "map" of all the components of a device that are being managed by SNMP.

NTP is used to distribute and synchronize time among distributed time servers and clients. An authoritative time source is usually a radio clock, or an atomic clock attached to a time server. NTP servers can associate in several modes, including: client/server, symmetric active/passive, and broadcast.

Network Address Translation (NAT) helps with the problem of IPv4 address depletion. NAT works by mapping thousands of private internal addresses to a range of public addresses. By mapping between external and internal IPv4 addresses, NAT allows an organization with non-globally-routable addresses to connect to the internet by translating those addresses into globally-routable address space. NAT includes four types of addresses:

  • Inside local address
  • Inside global address
  • Outside local address
  • Outside global address

Types of NAT include: static NAT, dynamic NAT, and port address translation (PAT).

Troubleshooting Application Connectivity Issues

Network troubleshooting usually follows the OSI layers. You can start either top to bottom, beginning at the application layer and making your way down to the physical layer, or you can go from the bottom to the top. If you cannot find any network connectivity issues at any of the OSI model layers, it might be time to look at the application server.

Common uses for ifconfig are the following:

  • Configure IP address and subnet mask for network interfaces.
  • Query the status of network interfaces.
  • Enable/disable network interfaces.
  • Change the MAC address on an Ethernet network interface.

ping is a software utility used to test IP network reachability for hosts and devices connected to a specific network. It is also available on virtually all operating systems and is extremely useful for troubleshooting connectivity issues. The ping utility uses Internet Control Message Protocol (ICMP) to send packets to the target host and then waits for ICMP echo replies. Based on this exchange of ICMP packets, ping reports errors, packet loss, roundtrip time, time to live (TTL) for received packets, and so on.

traceroute uses ICMP packets to determine the path to the destination. The Time to Live (TTL) field in the IP packet header is used primarily to avoid infinite loops in the network. For each hop or router that an IP packet goes through, the TTL field is decremented by one. When the TTL field value reaches 0, the packet is discarded. Normally, the sending host sets the TTL field to a high value (commonly 64 or 128, up to the maximum of 255) to maximize the chances of the packet reaching its destination. traceroute reverses this logic: it starts with a TTL value of 1 on the first packet it sends and increments the TTL by 1 on each subsequent packet. Setting a TTL value of 1 for the first packet means the packet will be discarded at the first router. By default, most routers send back to the source of the traffic an ICMP Time Exceeded packet informing it that the packet reached a TTL value of 0 and had to be discarded.

nslookup is another command-line utility used for querying DNS to obtain domain name to IP address mapping. This tool is useful to determine if the DNS server configured on a specific host is working as expected and actually resolving hostnames to IP addresses. It could be that maybe a DNS server is not configured at all on the host, so make sure you check /etc/resolv.conf on UNIX-like operating systems and that you have at least a nameserver defined.

Module 5: Network Fundamentals Quiz

  1. Which statement describes the ping and tracert commands?

    Topic 5.6.0 - The ping utility tests end-to-end connectivity between the two hosts. However, if the message does not reach the destination, there is no way to determine where the problem is located. On the other hand, the traceroute utility (tracert in Windows) traces the route a message takes from its source to the destination. Traceroute displays each hop along the way and the time it takes for the message to get to that network and back.

  2. Which IPv6 address is the most compressed form of the full FE80:0:0:0:2AA:FF:FE9A:4CA3 address?

    Topic 5.3.0 - When an IPv6 address is being compressed, the :: can be used to replace a recurring set of 0s only once.

  3. Which command can be used on Linux and MAC hosts to get IP addressing information?

    Topic 5.6.0 - Network administrators typically view the IP addressing information on Windows hosts by issuing the ipconfig command, and on Linux and Mac hosts by issuing the ifconfig command. The networksetup -getinfo command is used on Mac hosts to verify IP settings. The ip address command is used on Linux hosts to display IP addresses and properties.

  4. What type of IPv6 address is FE80::1?

    Topic 5.3.0 - Link-local IPv6 addresses start with FE80::/10, which is any address from FE80:: to FEBF::. Link-local addresses are used extensively in IPv6 and allow directly connected devices to communicate with each other on the link they share.
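
    You can confirm this classification with Python's ipaddress module:

      import ipaddress
      # FE80::1 falls inside the FE80::/10 link-local range.
      print(ipaddress.ip_address("FE80::1").is_link_local)   # -> True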

  5. Which two statements are true about NTP servers in an enterprise network? (Choose two.)

    Topic 5.5.0 - Network Time Protocol (NTP) is used to synchronize the time across all devices on the network, ensuring accurate timestamps for managing, securing, and troubleshooting. NTP networks use a hierarchical system of time sources. Each level in this hierarchical system is called a stratum. Stratum 1 devices are directly connected to the authoritative time sources.

  6. A small-sized company has 30 workstations and 2 servers. The company has been assigned a group of IPv4 addresses 209.165.200.224/29 from its ISP. The two servers must be assigned public IP addresses so they are reachable from the outside world. What technology should the company implement in order to allow all workstations to access services over the Internet simultaneously?

    Topic 5.5.0 - The company was allocated only 6 usable public host addresses. Two of those public addresses must be assigned to the two servers. Because the four remaining public addresses are not enough for the 30 workstations, NAT must be implemented for internal workstations to access the Internet. Therefore, the company should use PAT, also known as NAT with overload. DHCP can be used to dynamically assign internal private IP addresses to the workstations, but it cannot provide the required NAT service.
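
    The arithmetic behind the answer can be checked with Python's ipaddress module; a /29 prefix leaves only six usable host addresses:

      import ipaddress
      block = ipaddress.ip_network("209.165.200.224/29")
      hosts = list(block.hosts())
      print(len(hosts))           # -> 6 usable addresses
      print(hosts[0], hosts[-1])  # -> 209.165.200.225 209.165.200.230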

  7. Which statement describes a stateful firewall?

    Topic 5.4.0 - A stateful firewall tracks the state of connections passing through it and filters traffic based on that connection-state information. In contrast, basic packet filtering firewalls can only filter based on Layer 3 and sometimes basic Layer 4 information. An application gateway firewall, or proxy firewall, can filter based on information in the upper layers such as the application layer. A NAT firewall can expand the number of available IP addresses on the network.

  8. Which impact does adding a Layer 2 switch have on a network?

    Topic 5.4.0 - Adding a Layer 2 switch to a network increases the number of collision domains and increases the size of the broadcast domain. Layer 2 switches do not decrease the amount of broadcast traffic, do not increase the amount of network collisions and do not increase the number of dropped frames.

  9. Data is being sent from a source PC to a destination server. Which three statements correctly describe the function of TCP or UDP in this situation? (Choose three.)

    Topic 5.1.0 - Layer 4 port numbers identify the application or service which will handle the data. The source port number is added by the sending device and will be the destination port number when the requested information is returned. Layer 4 segments are encapsulated within IP packets. UDP, not TCP, is used when low overhead is needed. A source IP address, not a TCP source port number, identifies the sending host on the network. Destination port numbers are specific ports that a server application or service monitors for requests.
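
    A small Python example makes the source and destination port roles visible; the destination host and port below are arbitrary examples, not part of the quiz.

      import socket

      # Connect with an explicit IPv4 socket so getsockname()/getpeername()
      # return simple (address, port) tuples.
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as conn:
          conn.settimeout(5)
          conn.connect(("example.com", 80))
          print("source port:", conn.getsockname()[1])       # ephemeral port picked by the OS
          print("destination port:", conn.getpeername()[1])  # well-known service port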

  10. What is the function of the MIB element as part of a network management system?

    Topic 5.5.0 - The Management Information Base (MIB) resides on a networking device and stores operational data about the device. The SNMP manager can collect information from SNMP agents. The SNMP agent provides access to the information.

  11. Which two devices allow hosts on different VLANs to communicate with each other? (Choose two.)

    Topic 5.2.0 - Members of different VLANs are on separate networks. For devices on separate networks to be able to communicate, a Layer 3 device, such as a router or Layer 3 switch, is necessary.

  12. What is obtained when ANDing the address 192.168.65.3/18 with its subnet mask?

    Topic 5.3.0 - The value of the IP address 192.168.65.3 in binary is 11000000.10101000.01000001.00000011. The value of the subnet mask in binary is 11111111.11111111.11000000.00000000. When ANDing the two, the result is 11000000.10101000.01000000.00000000, which converts to 192.168.64.0.
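
    Python's ipaddress module performs the same AND operation when it derives the network from an interface address:

      import ipaddress
      # ANDing 192.168.65.3 with the /18 mask yields the network address.
      print(ipaddress.ip_interface("192.168.65.3/18").network)   # -> 192.168.64.0/18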


