For network communication protocols, questions may cover:
the internet protocol suite and its four abstraction layers (application, transport, internet, and link)
application layer protocols (HTTP / HTTPS)
transport layer protocols (TCP and UDP)
traffic analysis
network optimisation (e.g. queuing theory, predictive maintenance, patterns, anomalies, security threats)
encapsulation and de-encapsulation
security (SSL / TLS).
The Internet Protocol Suite, also known as the TCP/IP model, consists of four abstraction layers that facilitate communication across networks. These layers work together to enable communication across the Internet.
The TCP/IP stack is often considered a simplification of the OSI model, even though the TCP/IP model actually came first. This model is used more in industry because it reflects how real protocols and packets behave, rather than the OSI model's stricter theoretical layering.
Application Layer: Manages communication between software applications and network services.
Transport Layer: Ensures reliable data transfer and manages data segmentation.
Network Layer: Handles routing data packets across networks using IP addresses.
Link Layer: Manages physical connections, local network communication, and hardware-level error handling.
These models are used in industry not only to understand which layer different protocols should operate at, but also as useful problem-solving tools for network diagnosis. For example, if a piece of code requires a connection to the internet but doesn't seem to be working, the TCP/IP model gives a good way of narrowing down where the problem is when we break it down into layers:
Network Interface – Is the computer currently plugged in, and am I able to ping my local router?
Internet – Am I able to ping the Internet/a DNS server (8.8.8.8)?
Transport – Are my packets reaching the desired location?
Application – Can I use my browser to access the internet (and have I restarted my computer)?
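The layer-by-layer troubleshooting above can be sketched as code. This is a minimal illustration, not a real diagnostic tool: the check functions are injected as plain callables so the logic can be shown (and tested) without a live network; in practice each check might wrap `ping` or a socket connection attempt.

```python
# Sketch of layer-by-layer network diagnosis following the TCP/IP model.
# Checks are run bottom-up: the first layer whose check fails is where
# the troubleshooting effort should focus.

def diagnose(checks):
    """Return the name of the first failing layer, or None if all pass."""
    order = ["link", "internet", "transport", "application"]
    for layer in order:
        if not checks[layer]():
            return layer
    return None

# Example scenario: the local router answers, but 8.8.8.8 is unreachable.
fake_checks = {
    "link": lambda: True,        # can ping the local router
    "internet": lambda: False,   # cannot ping 8.8.8.8
    "transport": lambda: False,
    "application": lambda: False,
}
print(diagnose(fake_checks))  # -> internet
```

Because the checks run bottom-up, the answer points at the lowest broken layer, which is usually the real cause even when the higher layers also fail.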
New updates for CSFG: Network Optimisation [new section for CSFG - not yet published]
Network optimization is critical for ensuring that computer networks run efficiently, reliably, and securely. It encompasses a range of strategies and technologies aimed at improving the performance and resilience of network infrastructure. Effective network optimization involves managing data flow, predicting and preventing failures, recognizing efficient design patterns, detecting anomalies, and mitigating security threats.
From Gemini: Network optimization is a critical aspect of modern technology, aiming to enhance efficiency, reliability, and security. This field encompasses a wide range of techniques, including those rooted in queuing theory, predictive maintenance, pattern recognition, anomaly detection, and security threat analysis.
From Gemini: Queuing theory provides a mathematical framework for understanding and optimizing systems where entities (like customers, packets, or jobs) arrive and wait for service. In network contexts, this can involve:
Traffic management: Analyzing and optimizing network traffic patterns to minimize congestion and delay.
Resource allocation: Determining the optimal allocation of resources (like bandwidth or servers) to meet demand.
Service level agreements (SLAs): Ensuring that network performance meets agreed-upon standards.
From Bannari Amman Tech: Queueing theory aids in optimizing data flow within networks, particularly at points like routers and switches. These components manage data packets, which often form queues before being transmitted. By employing queueing theory, network engineers can predict factors such as arrival rates and service rates.
From the new CSFG: Queuing Theory [new section for CSFG - not yet published]
As we talked about in UDP, packets sometimes get lost on the internet. Most dropped packets are dropped by routers because of buffer overflow: the buffer is a limited-size queue of packets waiting to be processed, and if packets arrive faster than they are processed the buffer can fill up completely, so newly arriving packets are ignored and therefore dropped.
Queuing theory is a mathematical analysis that allows us to work out things like how large a buffer should be to allow for the expected rates of packets entering the router. It is crucial in computer networks for optimizing the performance and efficiency of data transmission. In networking, it helps manage how data packets are queued and processed as they travel through routers and switches. By analysing packet arrival rates, service rates, queue lengths, and waiting times, queuing theory aids in predicting network performance and preventing congestion.
For example, in a router, queuing theory can help prioritize traffic to ensure that time-sensitive data, such as video streams, receives higher priority over less critical data, such as file downloads. This ensures smooth streaming without buffering. Similarly, in load balancing, queuing theory can distribute incoming traffic evenly across multiple servers, preventing any single server from becoming a bottleneck and improving overall network efficiency.
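The relationship between arrival rate, service rate, and buffer size described above can be explored with a toy simulation. This is a rough sketch, not real queuing-theory mathematics: the arrival and service probabilities are made-up illustrative numbers, and each tick allows at most one arrival and one departure.

```python
import random

def simulate_router(buffer_size, arrival_prob, service_prob, ticks, seed=1):
    """Toy discrete-time simulation of a router's packet buffer.
    Each tick a packet may arrive and one may be serviced; packets
    arriving to a full buffer are dropped (tail drop)."""
    random.seed(seed)
    queue_len = 0
    served = dropped = 0
    for _ in range(ticks):
        if random.random() < arrival_prob:
            if queue_len < buffer_size:
                queue_len += 1       # room in the buffer: queue the packet
            else:
                dropped += 1         # buffer full: packet is lost
        if queue_len > 0 and random.random() < service_prob:
            queue_len -= 1
            served += 1
    return served, dropped

# Arrivals faster than service: the buffer fills and packets get dropped.
served, dropped = simulate_router(buffer_size=5, arrival_prob=0.8,
                                  service_prob=0.5, ticks=10000)
print(served, dropped)
```

Experimenting with `buffer_size` shows the trade-off queuing theory formalises: a larger buffer absorbs bursts and drops fewer packets, but if arrivals outpace service in the long run, no buffer is big enough.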
In networking and queuing theory, FIFO (First In, First Out) and FCFS (First Come, First Served) are fundamental concepts for managing data packets or tasks in a queue.
FIFO (First In, First Out)
· FIFO is a method where the first packet or task to arrive in the queue is the first one to be processed. Think of it like a line at a grocery store: the first person to get in line is the first person to be checked out. FIFO is used in various networking protocols and buffering strategies where maintaining the order of packets is crucial. For example, in TCP (Transmission Control Protocol), packets must be delivered in the order they were sent to ensure data integrity, so when the buffer is getting full a message is sent back to suspend the transmission of data. If packets are lost, the sequence numbering of packets will trigger them being retransmitted, to ensure that all the packets are received... eventually. For something like a video or audio connection, this could lead to delays that confuse the user, so in this case UDP is likely to be used: when a buffer for a UDP stream is full, the packets are simply discarded. For audio and video this just means some parts are skipped, but the transmission stays in real time; however, this would not be acceptable for something like a file download, as it would leave the file incomplete.
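The FIFO behaviour above, including tail drop when the buffer is full (as happens with UDP), can be sketched in a few lines. The capacity of 3 and the packet names are made up for illustration.

```python
from collections import deque

# Minimal sketch of a FIFO packet buffer with "tail drop": packets are
# served in arrival order, and arrivals to a full queue are discarded.

CAPACITY = 3
buffer = deque()
dropped = []

def enqueue(packet):
    if len(buffer) >= CAPACITY:
        dropped.append(packet)   # buffer full: new arrival is discarded
    else:
        buffer.append(packet)

for pkt in ["p1", "p2", "p3", "p4", "p5"]:
    enqueue(pkt)

print(dropped)            # -> ['p4', 'p5']  (arrived after the buffer filled)
print(buffer.popleft())   # -> p1  (first in is the first to be served)
```

Note that order is preserved for the packets that do make it in; only the latecomers are lost, which is exactly why dropped UDP packets show up as skipped moments in a stream rather than scrambled audio.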
FCFS (First Come, First Served)
FCFS is essentially the same as FIFO. It prioritizes processing tasks in the order they arrive. Similar to FIFO, the first task to arrive is the first to be processed, regardless of its size or complexity. FCFS is often used in scheduling algorithms for operating systems, job scheduling in batch processing, and network packet handling.
- Advantages: Fair and simple to understand and implement. Every task gets a turn based on arrival time.
- Disadvantages: May lead to the "convoy effect," where a single long task can hold up shorter tasks behind it, reducing overall system efficiency.
Key Differences
Context Usage: While FIFO is often specifically used in the context of queue data structures and buffers, FCFS is more broadly used in scheduling and processing systems.
Terminology: Both terms are frequently used interchangeably, but FCFS emphasizes the scheduling aspect, whereas FIFO emphasizes the data handling aspect.
From Gemini: Predictive maintenance leverages data analytics and machine learning to anticipate equipment failures and schedule maintenance proactively. In network optimization, this can be used to:
Identify potential faults: Detect signs of equipment degradation or anomalies in network behavior.
Optimize maintenance schedules: Plan maintenance activities to minimize disruptions and costs.
Improve network reliability: Reduce downtime and enhance overall network performance.
From Rockwell Automation: Predictive maintenance (PdM) uses data analysis to identify operational anomalies and potential equipment defects, enabling timely repairs before failures occur. It aims to minimize maintenance frequency, avoiding unplanned outages and unnecessary preventive maintenance costs.
As depicted by the image on the right, there are different levels of maintenance:
at the bottom we have reactive maintenance - this is where we don't fix something until it's broken. For example, a pipe bursts in the bathroom so we call the plumber to fix it.
Preventative maintenance - We plan a replacement based on how old something is/how much it is used. For example, the particular bathroom was made in the 70s so we decide it's about time to inspect/replace some of the pipes before they burst.
Condition-based maintenance - Here we have some kind of sensors/checks in place to give us information on the current state of a piece of equipment and whether or not it needs to be replaced soon. E.g. phones will sometimes tell us the remaining life of their battery/how much it has deteriorated.
Predictive/prescriptive maintenance - this is what we started talking about above. Using data analysis and other high-level practices to determine how much life a piece of equipment has left.
From the new CSFG: Predictive maintenance [new section for CSFG - not yet published]
Predictive maintenance in computer science involves using data analysis techniques to predict when a system or component is likely to fail. This approach allows for timely maintenance actions before actual failures occur, thereby minimizing downtime and repair costs. By collecting data from sensors and logs, and applying predictive algorithms, systems can detect patterns that indicate potential issues. This technique is widely used in data centres, manufacturing, and network infrastructure to ensure continuous operation and reliability.
For example, in a data centre, monitoring the temperature, power usage, and performance metrics of servers can help anticipate hardware failures, allowing technicians to replace or repair components before they fail and cause system outages.
At the end of the day, data centres need to know when to replace hardware, and when the risk of trusting ageing hardware (and the potential downtime of a server) becomes too costly. Predictive maintenance allows engineers to, say, replace servers every 5 years rather than 6, because if one goes “down” the cost will be more than the replacement.
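A very simple version of the prediction step above is fitting a degradation trend to sensor readings and extrapolating when a health metric will cross a failure threshold. This is a hedged sketch: real predictive maintenance uses richer models and many sensors, and the readings and threshold here are invented illustrative numbers.

```python
# Predict time-to-failure from a linear trend in a health metric,
# e.g. remaining battery capacity as a percentage of original.

def ticks_until_failure(readings, threshold):
    """Estimate how many more ticks (e.g. months) until the metric
    drops below `threshold`, assuming the average observed rate of
    degradation continues. Returns None if no degradation is seen."""
    drops = [a - b for a, b in zip(readings, readings[1:])]
    rate = sum(drops) / len(drops)          # average loss per tick
    if rate <= 0:
        return None                          # stable or improving
    remaining = readings[-1] - threshold
    return max(0, remaining / rate)

# Capacity measured monthly: losing ~2% per month, failure below 80%.
print(ticks_until_failure([100, 98, 96, 94], threshold=80))  # -> 7.0
```

In a data centre this kind of estimate, fed by temperature, power, and performance metrics, is what lets technicians schedule a replacement before the component actually fails.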
From Gemini: Pattern recognition involves identifying recurring patterns or trends in data. Anomaly detection focuses on identifying deviations from normal behavior. Both techniques can be applied to network optimization to:
Detect security threats: Identify suspicious network traffic patterns that may indicate a cyberattack.
Identify performance issues: Detect anomalies in network performance that could signal equipment failures or configuration problems.
Optimize network configuration: Identify patterns in network usage and adjust configurations accordingly.
From the new CSFG: Patterns [new section for CSFG - not yet published]
In computer networking, patterns refer to recurring solutions to common challenges within network design and architecture. These patterns offer a more consistent approach to addressing specific network issues, improving network efficiency, scalability, and maintainability. They also help engineers build a shared understanding of processes, because things are done in a consistent, well-known way.
Examples include:
- Load Balancing pattern - which distributes network traffic across multiple servers to prevent any single server from becoming a bottleneck.
- Proxy pattern - which acts as an intermediary for requests from clients seeking resources from other servers, enhancing security and performance.
Recognizing and applying these patterns helps in developing robust and efficient network infrastructures. For instance, the use of the Cache pattern can significantly reduce latency and bandwidth usage by storing frequently accessed data closer to the end user. In dynamic network environments, the Failover pattern ensures high availability by automatically redirecting traffic to a standby server if the primary server fails.
Pattern recognition is crucial in network management and monitoring. Identifying traffic patterns can lead to insights for optimizing bandwidth allocation and detecting potential security threats. For example, the application of the Micro-segmentation pattern enhances security by dividing the network into smaller, isolated segments, thus limiting the spread of potential breaches.
By leveraging these networking patterns, engineers can design and maintain systems that are more resilient, efficient, and easier to manage.
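Two of the patterns above, Load Balancing and Failover, can be combined in a short sketch. The server names are made up, and a real load balancer would also run health checks and handle weights; this only shows the core rotation-and-skip idea.

```python
import itertools

# Round-robin load balancing with failover: requests rotate across
# servers, and servers marked unhealthy are skipped automatically.

class LoadBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.healthy = set(servers)
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def next_server(self):
        # Failover: walk the rotation until a healthy server is found.
        for _ in range(len(self.servers)):
            s = next(self._cycle)
            if s in self.healthy:
                return s
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["srv-a", "srv-b", "srv-c"])
print([lb.next_server() for _ in range(4)])   # rotates a, b, c, a, ...
lb.mark_down("srv-b")
print([lb.next_server() for _ in range(3)])   # srv-b is now skipped
```

The same skeleton extends naturally to the Failover pattern's standby-server case: mark the primary down and traffic flows to the remaining servers with no change to client code.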
From Gemini: Security threat analysis involves identifying potential vulnerabilities and risks in a network. This can include:
Vulnerability assessment: Identifying weaknesses in network infrastructure and applications.
Threat modeling: Assessing the potential impact of various security threats.
Incident response planning: Developing strategies for responding to security breaches.
From the new CSFG: Anomalies and Security Threats [new section for CSFG - not yet published]
Anomalies in computer science are unusual patterns or behaviours that deviate from the norm and often indicate potential security threats. Detecting anomalies is crucial for maintaining system integrity and security. Techniques such as anomaly detection algorithms and machine learning are employed to identify these irregularities. Anomalies can be indicative of various issues, including malware, intrusions, or hardware malfunctions. By analysing system logs, network traffic, and user behaviour, these techniques help in recognizing suspicious activities early on. For example, a sudden spike in network traffic from an unknown source might suggest a Distributed Denial of Service (DDoS) attack, prompting immediate security measures.
Here are some examples of anomalies and security threats:
Examples of Anomalies:
Unusual Network Traffic Patterns:
A sudden, unexplained spike in network traffic from a single IP address, which could indicate a potential security breach or malfunctioning device.
Unexpected Login Locations:
A user account shows login attempts from geographically distant locations within a short period, suggesting possible account compromise.
Abnormal System Resource Usage:
A server exhibits unexpectedly high CPU or memory usage without a corresponding increase in workload, indicating a possible malware infection or runaway process.
Unusual File Access Patterns:
A user or process accessing a large number of files in a short time, which could be a sign of data exfiltration or a ransomware attack.
Irregular Behaviour in Application Logs:
Application logs contain unusual error messages or activity patterns not typical for regular operations, suggesting potential security issues or bugs.
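The "sudden spike" anomalies listed above are often caught with simple statistics: compare a new sample against a baseline of normal behaviour and flag anything too many standard deviations away. This is a minimal sketch; real systems use rolling windows and more robust statistics, and the traffic numbers here are invented.

```python
import statistics

# Flag a traffic sample as anomalous if it lies more than `z_threshold`
# standard deviations from the mean of a baseline of normal samples.

def is_anomalous(baseline, value, z_threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) / stdev > z_threshold

# Requests per minute during normal operation:
baseline = [120, 130, 115, 125, 118, 122, 119, 124]

print(is_anomalous(baseline, 900))  # -> True  (possible DDoS spike)
print(is_anomalous(baseline, 126))  # -> False (within normal variation)
```

Keeping the baseline separate from the tested value matters: if the spike itself were included in the statistics, it would inflate the standard deviation and could mask its own detection.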
Examples of Security Threats:
Distributed Denial of Service (DDoS) Attack:
Description: A DDoS attack is a coordinated attack where multiple systems flood a targeted system with traffic, overwhelming it and causing a denial of service to legitimate users.
Mitigation: Use firewalls and intrusion detection/prevention systems (IDS/IPS) to filter and block malicious traffic. Implement rate limiting to control the flow of incoming traffic. Employ DDoS protection services that can absorb and mitigate attack traffic.
Phishing Attacks:
Description: Phishing attacks are malicious attempts to acquire sensitive information such as usernames, passwords, and credit card details by disguising as a trustworthy entity in electronic communications.
Mitigation: Educate users about recognizing phishing attempts. Implement email filtering and anti-phishing tools. Use multi-factor authentication (MFA) to provide an additional layer of security.
SQL Injection:
Description: SQL injection is a code injection technique that exploits vulnerabilities in an application's software by inserting malicious SQL queries into input fields, leading to unauthorized access to the database.
Mitigation: Validate and sanitize all user inputs. Use prepared statements and parameterized queries. Implement web application firewalls (WAFs) to detect and block malicious requests.
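The "prepared statements and parameterized queries" mitigation above can be demonstrated directly. The sketch below uses an in-memory SQLite database with a made-up `users` table: string concatenation lets the attacker's input rewrite the query, while a parameter placeholder binds it as pure data.

```python
import sqlite3

# Set up a throwaway in-memory database with one secret row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

malicious = "nobody' OR '1'='1"

# UNSAFE: concatenation turns the input into SQL, so the injected
# OR condition matches every row and leaks alice's secret.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()
print(unsafe)   # leaks [('alice', 'hunter2')]

# SAFE: the ? placeholder binds the input as a plain string value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(safe)     # -> []  (no user is literally named that)
```

The same placeholder mechanism exists in essentially every database library, which is why parameterized queries are the standard first line of defence.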
Ransomware:
Description: Ransomware is malware that encrypts a victim's data and demands payment for the decryption key, effectively holding the data hostage until the ransom is paid.
Mitigation: Regularly back up data and ensure backups are stored offline or on a separate network. Use endpoint protection software to detect and block ransomware. Educate users on avoiding suspicious emails and downloads.
Man-in-the-Middle (MITM) Attack:
Description: In a MITM attack, an attacker intercepts and potentially alters the communication between two parties who believe they are directly communicating with each other, compromising the confidentiality and integrity of the communication.
Mitigation: Use virtual private networks (VPNs) to encrypt data transmitted over untrusted networks.
Useful Links
https://www.youtube.com/watch?v=z611FLLR3_8 (Queuing Theory – Advanced)
https://www.youtube.com/watch?v=2V38iVkQ0r4 (Predictive Maintenance)
From the new CSFG: Encapsulation and Decapsulation [to add to TCP/UDP Section of CSFG]
These processes involve adding and removing headers as data moves through network layers. Encapsulation (adding headers) occurs as data moves from the application layer to the physical layer, while decapsulation (removing headers) happens when the data reaches its destination and travels back up the layers. This ensures proper routing and interpretation of data.
By understanding these concepts and models, developers and network engineers can effectively design, troubleshoot, and optimize network systems, ensuring robust and secure communication in various networking environments.
Sending a Facebook Message
You type "Hello, friend!" on Facebook (Application Layer).
The message is converted to data and passed to the transport layer.
TCP adds a header, creating a segment with port numbers, sequence data and a checksum for error detection.
The network layer adds an IP header, creating a packet with IP addresses.
The data link layer adds a frame header and trailer, creating a frame with MAC addresses.
The physical layer transmits the bits over the network.
Receiving the Facebook Message
The server receives the bits and reassembles the frame (Physical Layer).
The frame header and trailer are removed, leaving the packet (Data Link Layer).
The IP header is removed, leaving the segment (Network Layer).
The TCP header is removed, leaving the data (Transport Layer).
The data is converted back to the message and displayed to your friend (Application Layer).
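The send and receive steps above can be sketched as nested headers: each layer wraps the payload on the way down, and strips its own header on the way up. The port numbers, IP addresses, and MAC addresses below are made-up example values.

```python
# Encapsulation: each layer adds its header around the layer above's data.

def encapsulate(message):
    segment = {"tcp_header": {"src_port": 5000, "dst_port": 443, "seq": 1},
               "payload": message}                       # transport layer
    packet = {"ip_header": {"src": "192.0.2.1", "dst": "198.51.100.7"},
              "payload": segment}                        # network layer
    frame = {"frame_header": {"src_mac": "aa:bb", "dst_mac": "cc:dd"},
             "payload": packet}                          # link layer
    return frame

# Decapsulation: the receiver strips headers in the reverse order.
def decapsulate(frame):
    packet = frame["payload"]     # strip frame header  (link layer)
    segment = packet["payload"]   # strip IP header     (network layer)
    message = segment["payload"]  # strip TCP header    (transport layer)
    return message

frame = encapsulate("Hello, friend!")
print(decapsulate(frame))  # -> Hello, friend!
```

The nesting makes the key property obvious: each layer only ever looks at its own header, and the original message survives the round trip untouched.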
Begins with an explanation of how you might imagine two devices might start the communication process.
Then introduces the problem; the man in the middle
And lastly how it is solved using Digital Certificates
If you are trying to purchase a certificate for a website or to use for encrypting MQTT you will encounter two main types:
Domain Validated Certificates (DVC)
Extended Validation Certificates (EVC)
The difference between the two types is the degree of trust in the certificate, which comes with more rigorous validation.
The level of encryption they provide is identical
A domain-validated certificate (DV) is typically used for Transport Layer Security (TLS) where the identity of the applicant has been validated by proving some control over a DNS domain.
The validation process is normally fully automated, making them the cheapest form of certificate. They are ideal for websites that simply provide content and do not handle sensitive data.
An Extended Validation Certificate (EV) is a certificate used for HTTPS websites and software that proves the legal entity controlling the website or software package. Obtaining an EV certificate requires verification of the requesting entity’s identity by a certificate authority (CA).
They are generally more expensive than domain validated certificates as they involve manual validation.
In September 2018, when Chrome 69 was released, the green padlock was replaced by a grey padlock. In addition to this there is no explicit wording to state that the site is ‘secure’. The end goal being that users should expect all web pages to be secure by default, and to only receive a warning if the site is not secure.
Chrome will be taking even further steps with its security indicators, as Google would like to achieve 100% encryption across all websites. So don't rely on seeing a green padlock as a security indicator, as eventually it won't apply anymore in Chrome.
TCP, also known as Transmission Control Protocol, breaks up data sent over networks into small containers of data known as packets.
Packets can be as large as 1500 bytes. They contain many items of metadata (data that contains information about other data), such as sequence numbers and checksums. [min 20B TCP header]
Sequence numbers allow packets that arrive out of order to be ordered correctly.
Checksums are like a unique fingerprint, used to verify that a received packet and a sent packet are the same.
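The checksum idea above can be made concrete with the 16-bit ones'-complement "Internet checksum" used in TCP, UDP, and IP headers: sum the data as 16-bit words, fold any carries back in, and invert. The sketch below is a simplified standalone version over an arbitrary byte string.

```python
# Simplified Internet checksum (RFC 1071 style) over a byte string.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                              # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # fold carry back in
    return ~total & 0xFFFF                           # ones' complement

packet = b"hello world"
checksum = internet_checksum(packet)
corrupted = b"hellp world"                           # one flipped byte

print(checksum == internet_checksum(packet))     # -> True  (fingerprints match)
print(checksum == internet_checksum(corrupted))  # -> False (corruption detected)
```

The receiver recomputes the checksum over what actually arrived; a mismatch means the packet was corrupted in transit, which is where TCP retransmits and UDP simply discards.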
Email commonly uses TCP. This is because TCP is reliable and secure, whilst sacrificing speed. Email does not require high speed, but reliability and security are important for ensuring the email is sent and that it is hard to read for attackers.
UDP (User Datagram Protocol)
Connectionless (No need to establish a connection prior to data transfer)
Relies on the receiver to sort out the data
Packets can be duplicated or out of order
Minimum security (only using checksums)
Has a larger byte capacity (65535B) [min 8B UDP header]
UDP is used for streaming software such as video games, and for live online communication services such as Zoom and Discord voice chats, since these prioritise speed and timeliness over guaranteed delivery.
Connection Type:
TCP: Connection-oriented. This means a connection is established and maintained until the data exchange is complete.
UDP: Connectionless. Data is sent without establishing a connection.
Reliability:
TCP: Reliable. It ensures that data is delivered accurately and in the correct order. If data is lost, TCP will retransmit it.
UDP: Unreliable. There is no guarantee that the data will reach its destination, and it does not retransmit lost data.
Speed:
TCP: Slower due to the overhead of establishing a connection, error checking, and ensuring data integrity.
UDP: Faster because it has minimal overhead and does not perform error checking or connection establishment.
Use Cases:
TCP: Used for applications where reliability is crucial, such as web browsing (HTTP/HTTPS), email (SMTP), and file transfers (FTP).
UDP: Used for applications where speed is more critical than reliability, such as live streaming, online gaming, and voice or video communications (VoIP).
Header Size:
TCP: Larger header size (20 bytes) due to additional fields for managing connections and ensuring reliability.
UDP: Smaller header size (8 bytes), making it more efficient for quick data transfers.
Flow Control and Congestion Control:
TCP: Implements flow control and congestion control mechanisms to manage data transmission rates and avoid network congestion.
UDP: Does not have flow control or congestion control mechanisms.
These differences make TCP and UDP suitable for different types of applications, depending on the requirements for speed and reliability.
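The 20-byte vs 8-byte header sizes above can be made concrete by packing minimal headers with Python's `struct` module. The field values (ports, sequence numbers, and so on) are illustrative placeholders, and real TCP headers can be longer when options are present; this just shows the minimum layouts.

```python
import struct

# Minimal (option-free) TCP header: src port, dst port, sequence number,
# acknowledgement number, data offset + flags, window, checksum, urgent ptr.
tcp_header = struct.pack("!HHIIHHHH",
                         5000, 443,      # source / destination ports
                         1, 0,           # sequence / acknowledgement numbers
                         (5 << 12),      # data offset (5 words), flags clear
                         65535, 0, 0)    # window, checksum, urgent pointer

# UDP header: src port, dst port, length, checksum -- and that's all.
udp_header = struct.pack("!HHHH", 5000, 443, 8, 0)

print(len(tcp_header), len(udp_header))  # -> 20 8
```

Every field in the TCP header beyond UDP's four exists to manage the connection: ordering, acknowledgement, and flow control, which is exactly the overhead UDP trades away for speed.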
TCP is connection-based, ensuring reliable data transfer between devices. However, it’s the IP (Internet Protocol) that handles the actual routing of packets across the network.
Here’s a bit more detail on how they work together:
TCP: Manages the data transmission between devices, ensuring that packets are delivered accurately and in the correct order. It establishes a connection, handles error checking, and retransmits lost packets.
IP: Responsible for addressing and routing the packets to their destination. It determines the best path for the packets to travel across the network, based on the current network conditions and routing tables.
In essence, TCP ensures the reliability of the data transfer, while IP ensures that the data reaches the correct destination. This combination is often referred to as the TCP/IP protocol suite, which forms the foundation of the internet.
According to my knowledge if an internet application has to be designed, we should use either a connection-oriented service or connection-less service, but not both.
Internet's connection oriented service is TCP and connection-less service is UDP, and both resides in the transport layer of Internet Protocol stack.
Internet's only network layer is IP, which is a connection-less service. So it means whatever application we design it eventually uses IP to transmit the packets.
Connection-oriented services use the same path to transmit all the packets, and connection-less does not.
Therefore my problem is: if a connection-oriented application has been designed, it should transmit the packets using the same path. But IP breaks that rule by using different routes. So how do both TCP and IP work together in this sense? It totally confuses me.
You, my friend, are confusing the functionality of two different layers.
TCP is connection oriented in the sense that there's a connection establishment, between the two ends where they may negotiate different things like congestion-control mechanism among other things.
The transport layer protocols' general purpose is to provide process-to-process delivery meaning that it doesn't know anything about routes; how your packets reach the end system is beyond their scope, they're only concerned with how packets are being transmitted between the two end PROCESSES.
IP, on the other hand, the Network layer protocol for the Internet, is concerned with data-delivery between end-systems yet it's connection-less, it maintains no connection so each packet is handled independently of the other packets.
Leaving your system, each router will choose the path that it sees fit for EACH packet, and this path may change depending on availability/congestion.
How does that answer your question?
TCP will make sure packets reach the other process, it won't care HOW they got there.
IP, on the other hand, will not care if they reach the other end at all, it'll simply forward each different packet according to what it sees most fit for a particular packet.
The TCP 3-Way Handshake is a fundamental process that establishes a reliable connection between two devices over a TCP/IP network. It involves three steps: SYN (Synchronize), SYN-ACK (Synchronize-Acknowledge), and ACK (Acknowledge). During the handshake, the client and server exchange initial sequence numbers and confirm the connection establishment.
The sequence number is one of the most important randomly generated numbers. The very first time the client sends a request to the server, a number is generated in the range 0 to 2^32 − 1. This first number is generated randomly, but afterwards it is supposed to increase in order.
Most importantly, the sequence number of a packet at client side will be different than the seq number at server side. We can say that this sequence number is an identity of a packet. If a client creates a data packet, it will give it a sequence number. Similarly, when a server generates a data packet, it will assign it a sequence number.
Step 1 (SYN): In the first step, the client wants to establish a connection with a server, so it sends a segment with SYN(Synchronize Sequence Number) which informs the server that the client is likely to start communication and with what sequence number it starts segments with
Step 2 (SYN + ACK): Server responds to the client request with SYN-ACK signal bits set. Acknowledgement(ACK) signifies the response of the segment it received and SYN signifies with what sequence number it is likely to start the segments with
Step 3 (ACK): In the final part client acknowledges the response of the server and they both establish a reliable connection with which they will start the actual data transfer
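The three steps above can be simulated as a toy state exchange that tracks just the sequence numbers. This is a sketch only: ports, flags, timers, and the full TCP state machine are deliberately omitted.

```python
import random

# Toy simulation of the TCP 3-way handshake's sequence-number exchange.

def three_way_handshake(seed=42):
    random.seed(seed)
    client_isn = random.randrange(2**32)   # client's initial sequence number
    server_isn = random.randrange(2**32)   # server independently picks its own

    # Step 1 (SYN): client -> server, announcing client_isn.
    # Step 2 (SYN+ACK): server -> client, acknowledging client_isn + 1
    #                   and announcing server_isn.
    syn_ack = {"seq": server_isn, "ack": client_isn + 1}

    # Step 3 (ACK): client -> server, acknowledging server_isn + 1.
    ack = {"seq": client_isn + 1, "ack": server_isn + 1}

    # Established once each side has acknowledged the other's ISN.
    return (syn_ack["ack"] == client_isn + 1 and
            ack["ack"] == server_isn + 1)

print(three_way_handshake())  # -> True
```

Note how each side acknowledges the other's number plus one: that "+1" is what proves the acknowledgement refers to the SYN just received, not to some stale packet.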
From O'Reilly.com: "TCP checksums are identical to UDP checksums, with the exception that checksums are mandatory with TCP (instead of being optional, as they are with UDP). Furthermore, their usage is mandatory for both the sending and receiving systems."
If UDP detects a packet is invalid, it simply ends right there: the packet is discarded and that's it.
If TCP detects a packet is invalid, it will request that the packet is resent.
From Quora:
Streaming apps implement their own end-to-end control on top of UDP. They need more than just packet order; they need timing control (obviously). One of the possible solutions is the Real Time Transport protocol, to be used on top of UDP. But there are other mechanisms, like HTTP Streaming, adaptive streaming (DASH), and a lot of proprietary solutions.
HTTP is a protocol that allows users to interact with web resources like HTML files by sending messages between clients (like web browsers) and servers. These messages are usually sent over TCP connections.
There are a number of different methods that HTTP uses to request data, we are going to focus on five of them:
GET - we use GET to read/retrieve data from a server. This returns a "200" status code if the request is successful. Scenario: The HTTP GET method is commonly used to retrieve data from a server, including fetching web pages. When you enter a URL in your browser and hit enter, your browser sends a GET request to the server to retrieve the content of that webpage.
POST - we use POST when we are sending data to a server. This returns a "201" status code to let us know the resource was created. Scenario: User Registration on a Website - Imagine you have a website where new users can sign up by providing their details such as name, email, and password. When a user submits the registration form, the client application sends a POST request to the server to create a new user account.
PUT - we use PUT when we want to update existing data on the server. This will replace all of the content that it is referring to. Scenario: Updating a Product in an E-commerce System - Imagine you have an e-commerce platform where administrators can update product details. When an admin wants to update the details of a product, such as its price or description, they can use the PUT method to send the updated information to the server.
PATCH - this is similar to PUT but we use PATCH when we want to update only a portion of some content on the server. Scenario: Updating User Profile Information - Imagine you have a web application for a social media platform. Users can update their profile information, such as their bio, profile picture, etc. Instead of sending the entire user profile data every time a user makes a small change, you can use the PATCH method to update only the specific fields that have changed.
DELETE - probably unsurprisingly, we use DELETE when we want to entirely remove some data from a server. Scenario: Deleting a User Account - Imagine you have a web application where users can delete their accounts. When a user decides to delete their account, the client application sends a DELETE request to the server to remove the user’s data.
These methods send, receive and change data. When talking about HTTP requests we sometimes term them as "safe" or "unsafe" - this merely refers to whether or not they will (or might) change the state of the server. Of the above methods only GET is considered safe as it is simply receiving data, all of the rest have the potential to change the state of the server.
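The five methods above can be sketched as a tiny in-memory dispatcher, a stand-in for a real HTTP server that makes the different behaviours (and the status codes mentioned in the text) concrete. The paths and user data are made up, and a real server would return 404 for a missing resource rather than an empty 200.

```python
# In-memory sketch of how GET/POST/PUT/PATCH/DELETE act on resources.

store = {}

def handle(method, path, body=None):
    if method == "GET":
        return (200, store.get(path))        # safe: only reads
    if method == "POST":
        store[path] = body
        return (201, body)                   # resource created
    if method == "PUT":
        store[path] = body                   # full replacement
        return (200, body)
    if method == "PATCH":
        store[path].update(body)             # partial update of fields
        return (200, store[path])
    if method == "DELETE":
        store.pop(path, None)
        return (204, None)

print(handle("POST", "/users/1", {"name": "Ada", "bio": "none"}))
print(handle("PATCH", "/users/1", {"bio": "mathematician"}))
print(handle("GET", "/users/1"))    # only the bio field changed
print(handle("DELETE", "/users/1"))
print(handle("GET", "/users/1"))    # gone (a real server would say 404)
```

Comparing the PUT and PATCH branches shows the distinction in the text directly: PUT overwrites the whole stored object, while PATCH merges in only the fields that changed.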
Further reading:
To overcome TCP and UDP limitations, a combined approach is utilized.
TCP secures connections for tasks like retrieving graphical data (websites).
UDP serves real-time functions such as voice calls and live updates (e.g., Discord).
As networks grow, scalability becomes challenging, straining protocols designed for smaller scales.
Solutions like CIDR improve IP address allocation, and IPv6 adoption expands address space.
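CIDR's effect on address allocation can be illustrated with Python's standard ipaddress module (the address blocks below are documentation/example ranges, chosen for illustration):

```python
import ipaddress

# CIDR lets a block be sized to need rather than to fixed classes.
# The old class C boundary (/24) gave exactly 256 addresses:
block = ipaddress.ip_network("198.51.100.0/24")
print(block.num_addresses)            # 256

# A /22 provides 1024 addresses in a single allocation:
wider = ipaddress.ip_network("198.51.0.0/22")
print(wider.num_addresses)            # 1024

# ...and can be split into four /24s, e.g. one per department:
subnets = list(wider.subnets(new_prefix=24))
print(len(subnets))                   # 4

# IPv6 expands addresses from 32 to 128 bits; even one /32
# allocation contains 2**96 addresses:
v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.num_addresses == 2 ** 96)    # True
```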
Security vulnerabilities can lead to breaches and unauthorized access.
Protocols like TLS and IPSec provide encryption and authentication, while updates address vulnerabilities.
Different protocols may not seamlessly work together, causing communication issues.
Standardization and gateways bridge gaps between protocols, ensuring compatibility.
High data traffic causes network congestion, leading to performance degradation.
Quality of Service (QoS) prioritizes critical traffic, and load balancing prevents overloads.
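Both ideas can be sketched in a few lines - round-robin load balancing with a cycle over servers, and QoS as a priority queue (the server names and priority numbers are invented for illustration):

```python
from itertools import cycle
import heapq

# Load balancing: round-robin spreads requests evenly across servers.
servers = cycle(["server-a", "server-b", "server-c"])
assignments = [next(servers) for _ in range(6)]
print(assignments)
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']

# QoS as a priority queue: lower number = higher priority, so a VoIP
# packet is dequeued before a bulk download even if it arrived later.
queue = []
heapq.heappush(queue, (2, "file download chunk"))
heapq.heappush(queue, (0, "VoIP packet"))
heapq.heappush(queue, (1, "web page request"))
while queue:
    priority, packet = heapq.heappop(queue)
    print(priority, packet)
# 0 VoIP packet
# 1 web page request
# 2 file download chunk
```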
Protocols must ensure reliable data transmission despite network failures.
Redundancy and failover mechanisms like BGP ensure fault tolerance.
Configuring protocols, especially in large networks, can be complex.
Automated tools like DHCP simplify IP allocation, while SDN centralizes management.
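The idea behind DHCP-style automatic allocation can be sketched as a lease pool that hands out addresses on request and takes them back on release (a simplified model - real DHCP also involves discovery, offers, and lease timers; the pool range and MAC addresses are invented):

```python
import ipaddress

class LeasePool:
    """A toy DHCP-like allocator: leases IPs from a pool."""

    def __init__(self, network):
        net = ipaddress.ip_network(network)
        self.free = [str(ip) for ip in net.hosts()]
        self.leases = {}  # MAC address -> leased IP

    def request(self, mac):
        if mac in self.leases:        # renewing client keeps its IP
            return self.leases[mac]
        ip = self.free.pop(0)         # hand out the next free address
        self.leases[mac] = ip
        return ip

    def release(self, mac):
        self.free.append(self.leases.pop(mac))

pool = LeasePool("192.168.1.0/29")        # 6 usable host addresses
print(pool.request("aa:bb:cc:00:00:01"))  # 192.168.1.1
print(pool.request("aa:bb:cc:00:00:02"))  # 192.168.1.2
print(pool.request("aa:bb:cc:00:00:01"))  # 192.168.1.1 again (renewal)
```

The point is that no human ever assigns these addresses by hand - which is exactly the configuration burden DHCP removes in large networks.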
Real-time applications like VoIP face latency challenges.
QoS mechanisms minimize latency for critical applications.
IoT devices' diverse requirements necessitate accommodating various device types and data.
IoT-specific protocols like MQTT and CoAP facilitate lightweight communication.
Evolving technology demands adaptable protocols.
Ongoing research leads to new protocols and enhancements.
Balancing the advantages and challenges of network communication protocols is essential for optimizing modern communication. Integrating the strengths of protocols like TCP and UDP, while addressing issues like security and scalability, contributes to efficient and effective data transmission across diverse contexts.
TCP breaks data into packets to prevent congestion in a single lane.
DoS attacks flood a target with packets, overwhelming the receiver and disabling it.
TCP connections are established through a three-way handshake, which is susceptible to SYN flood attacks.
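Why the handshake is exploitable can be shown with a simplified model: the server must remember every half-open connection while waiting for the final ACK, and that table is finite (the backlog limit and client names below are invented for illustration):

```python
# Simplified model of a SYN flood against the handshake backlog.
BACKLOG_LIMIT = 5
half_open = set()   # connections that have sent SYN but not yet ACKed

def receive_syn(client):
    """Step 1: client sends SYN; server reserves a slot, replies SYN-ACK."""
    if len(half_open) >= BACKLOG_LIMIT:
        return "dropped"          # backlog full: new clients are refused
    half_open.add(client)
    return "SYN-ACK"

def receive_ack(client):
    """Step 3: the client's ACK completes the handshake, freeing the slot."""
    half_open.discard(client)
    return "established"

# An attacker sends SYNs from spoofed addresses and never ACKs back:
for i in range(BACKLOG_LIMIT):
    receive_syn(f"spoofed-{i}")

print(receive_syn("legitimate-user"))   # dropped - the flood wins
receive_ack("spoofed-0")                # (real attackers never send this)
print(receive_syn("legitimate-user"))   # SYN-ACK - a slot freed up
```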
TCP's larger headers and retransmission overhead lead to latency in real-time applications like videoconferencing.
UDP is not designed for reliability, but its smaller headers and lack of connection overhead make it more suitable for fast communication.
Using UDP for email communication risks information leakage and Man-in-the-Middle attacks.
UDP's lower security measures and lack of ordered packets can compromise email security.
Unordered packets in UDP can result in unintelligible messages.
Because UDP does not track or retransmit lost packets, packet loss becomes directly observable in videoconferencing.
Audio and image cuts in videoconferencing indicate packet loss.
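The ordering problem can be demonstrated in a few lines: TCP numbers each segment so the receiver can reassemble the stream, while a UDP-style receiver simply takes packets in whatever order they arrive (a toy model - one character per "packet"):

```python
import random

# Each packet carries a sequence number, as TCP segments do.
message = "MEET AT NOON"
packets = [(seq, ch) for seq, ch in enumerate(message)]
random.shuffle(packets)     # the network delivers packets out of order

# UDP-style: use arrival order; likely scrambled.
udp_style = "".join(ch for _, ch in packets)
print(udp_style)

# TCP-style: sort by sequence number before delivering to the app.
tcp_style = "".join(ch for _, ch in sorted(packets))
print(tcp_style)            # MEET AT NOON - always reconstructed
```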
TCP addresses security but struggles with latency.
Combining the strengths of both protocols could lead to more effective communication strategies.
In gaming environments, network communication protocols play a crucial role.
Low latency and real-time responsiveness are critical for online gaming.
Protocols like UDP are favored for gaming due to their speed, but reliability concerns exist.
The 5-layer network model is an expanded version of the TCP/IP model.
Goes into detail on the UDP and TCP protocols.
From: APNIC - Internet protocols are changing
When the Internet started to become widely used in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named hosts to connect to, and HTTP was often the application protocol using it all.
For many years, there were negligible changes to these core Internet protocols.
As a result, network operators, vendors, and policymakers who want to improve (and sometimes control) the Internet have adopted a number of practices based upon those protocols to tweak things, whether intended to debug issues, improve quality of service, or impose policy.
Now, significant changes to the core Internet protocols are underway. While they are intended to be compatible with the Internet at large (since they won’t get adoption otherwise), they might be disruptive to those who have taken liberties with undocumented aspects of protocols or made an assumption that things won’t change.
There are a number of factors driving these changes.
The limits of the core Internet protocols have become apparent, especially regarding performance. Because of structural problems in the application and transport protocols, the network was not being used as efficiently as it could be, leading to poor end-user perceived performance (in particular, latency).
This translates into a strong motivation to evolve or replace those protocols because there is a large body of experience showing the impact of even small performance gains.
The ability to evolve Internet protocols — at any layer — has become more difficult over time, largely thanks to the unintended uses by networks discussed above. For example, HTTP proxies that tried to compress responses made it more difficult to deploy new compression techniques; TCP optimization in middleboxes made it more difficult to deploy improvements to TCP.
Finally, we are in the midst of a shift towards more use of encryption on the Internet. Encryption is one of the best tools we have to ensure that protocols can evolve, and many protocols have been developed that insist on encryption.
You can read about these further by going to the APNIC site referenced above.
A possible improvement on the UDP and TCP protocols would be the creation of something safe, reliable, and fast, with good carrying capacity. Realistically speaking, this would be almost impossible to achieve with current technology.
Figuratively speaking, we could create a protocol based on UDP in which the header is heavier, allowing for greater security and stability. The number of payload bytes carried per packet would probably decrease, but if balanced correctly we could find a middle ground with good reliability and delivery of packets, decent speed, and a good amount of data carried. This could be used for things that require packets to arrive uncorrupted and in order while maintaining a relatively good speed.
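This trade-off can be made concrete with some rough arithmetic. The header size for the hypothetical hybrid protocol below is invented for illustration (real UDP's header is 8 bytes; sequence numbers, acknowledgements, and a stronger checksum would add more):

```python
MTU = 1500            # typical Ethernet payload limit, in bytes
IP_HEADER = 20        # IPv4 header without options
UDP_HEADER = 8        # real UDP: ports, length, checksum
HYBRID_HEADER = 8 + 12  # hypothetical: UDP fields plus sequence number,
                        # acknowledgement number, and stronger checksum

def payload_fraction(transport_header):
    """Fraction of each packet that carries actual application data."""
    payload = MTU - IP_HEADER - transport_header
    return payload / MTU

print(round(payload_fraction(UDP_HEADER), 3))     # 0.981
print(round(payload_fraction(HYBRID_HEADER), 3))  # 0.973
```

The heavier header costs under one percent of capacity per packet here - which is why the "middle ground" described above is plausible, even if the real engineering challenges (retransmission timers, ordering buffers) are much harder than the byte counting suggests.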
A problem with UDP is that, since it lacks security protocols, it can be spoofed more easily than some other protocols. One example of this form of illegal activity is the distributed denial-of-service (DDoS) attack known as a UDP flood.
Since UDP does not require a handshake process, an attacker can send a huge number of packets to a device to overwhelm it, leading to a forced "server shutdown" as it can't process that much information quickly.
Therefore, the creation of a less secure yet faster method of packet transfer has also created a breeding ground for exploitation.
From NIST - The University of Delhi
Determining the best path involves the evaluation of multiple paths to the same destination network and selecting the optimum or shortest path to reach that network. Whenever multiple paths to the same network exist, each path uses a different exit interface on the router to reach that network.
The best path is selected by a routing protocol based on the value or metric it uses to determine the distance to reach a network. A metric is the quantitative value used to measure the distance to a given network. The best path to a network is the path with the lowest metric.
Dynamic routing protocols typically use their own rules and metrics to build and update routing tables. The routing algorithm generates a value, or a metric, for each path through the network. Metrics can be based on either a single characteristic or several characteristics of a path. Some routing protocols can base route selection on multiple metrics, combining them into a single metric.
The following lists some dynamic protocols and the metrics they use:
Routing Information Protocol (RIP): Hop count
Open Shortest Path First (OSPF): Cisco routers use a cost based on cumulative bandwidth from source to destination
Enhanced Interior Gateway Routing Protocol (EIGRP): Bandwidth, delay, load, reliability
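Best-path selection as described above reduces to "install the route with the lowest metric". A sketch, using an invented topology with RIP-style hop counts, plus an EIGRP-style composite metric whose weights are illustrative rather than Cisco's actual formula:

```python
# Several candidate routes to the same destination network:
routes_to_destination = [
    {"via": "router-a", "hops": 3},
    {"via": "router-b", "hops": 2},
    {"via": "router-c", "hops": 5},
]

# RIP-style selection: the path with the fewest hops wins.
best = min(routes_to_destination, key=lambda r: r["hops"])
print(best["via"])   # router-b

# EIGRP-style composite metric: combine several path characteristics
# (bandwidth, delay) into one value; lower is still better.
def composite_metric(bandwidth_mbps, delay_ms):
    return 10_000 / bandwidth_mbps + delay_ms

# A 100 Mbps link beats a 10 Mbps link at equal delay:
print(composite_metric(100, 5) < composite_metric(10, 5))  # True
```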