Review of "The Design Philosophy of the DARPA Internet Protocols" by Clark, dated 1988

Post date: Jun 19, 2012 3:56:34 PM

Reviewed paper:

D. Clark, "The Design Philosophy of the DARPA Internet Protocols", SIGCOMM '88, pp. 106-114, Palo Alto, CA, Sept. 1988

While the preliminary paper on TCP/IP [1] focuses on the protocol procedures needed to share information across different networks, this paper discusses the issues that challenged the design of the protocol. Specifically, the paper evaluates whether the goals of the original protocol's developers had been achieved, given its features at the time. It was published about fifteen years after the preliminary work. Weaknesses of the then-current protocol are emphasized, and ideas are given on what features should be incorporated to fix the problems. The evaluation finds that while TCP/IP has achieved its main objective, handling messages sent across various networks and keeping connections alive despite multiple packet-switch failures, many issues still need to be addressed, especially in supporting different types of service and in dealing with the shifting priorities among the goals, one of which is accountability for resource usage in order to accommodate commercial networks.

Fifteen years later, the ideas proposed in Cerf and Kahn's work had already been improved upon. First, the designers recognized the network layers by splitting the previously single protocol into multiple protocols: one for the network layer (IP) and one for the transport layer (TCP). They also realized the need for another transport-layer protocol, the User Datagram Protocol (UDP), to handle services that can tolerate unreliable, best-effort delivery of data. (Surprisingly, it is only upon this reading that I realized real-time applications already existed even then, necessitated by the military command-and-control applications where this protocol suite was initially deployed.) Although the paper lays down ideas for supporting such best-effort services, the author does not yet detail how they should be implemented.
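As a purely modern illustration of that split between TCP's reliable stream and UDP's best-effort datagrams (my own sketch, not anything from the paper; the host name, address, and ports are placeholders):

```python
import socket

# TCP: the reliable, connection-oriented transport split out of the original
# single protocol; lost segments are retransmitted by the endpoints.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
reply = tcp.recv(4096)   # data arrives in order, or the connection fails
tcp.close()

# UDP: connectionless, best-effort; the datagram may be lost, duplicated or
# reordered, and the application must be able to tolerate that.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"sample reading", ("192.0.2.1", 9999))  # placeholder address/port
udp.close()
```

The point is simply that both services now sit side by side above the same IP layer, which is the layering the paper credits as an improvement over the original single protocol.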

Among the weaknesses pointed out in the then-current design of the Internet, one key point that captured my attention was the problem of packet retransmission. Clark mentions that the architecture does not provide features for recovering lost packets at the network level. I agree that the situation of frequently having to resend everything from sender to receiver would improve if gateways were permitted to participate in detecting lost packets. However, such a task must not be computationally expensive, because it could cause long delays at the gateways. I also agree that the concept of 'fate-sharing' perhaps forces the Internet to be assumed untrusted and unreliable, resulting in the impression that the power of the network cannot be exploited. It seems that simple techniques such as packet prioritization and broadcasting could be useful in increasing the control available at the network layer. If I recall correctly, techniques such as broadcasting are already being used in that layer nowadays.
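To make the fate-sharing argument concrete, here is a minimal stop-and-wait sketch in Python (the destination, acknowledgment format, and retry policy are my own assumptions, not anything specified by Clark): all the state needed to detect and repair loss lives at the sending host, so no gateway along the path has to remember anything about the connection.

```python
import socket

def send_reliably(dest, payload, retries=5, timeout=1.0):
    """Stop-and-wait over a best-effort datagram service. The endpoint alone
    holds the recovery state (fate-sharing); gateways stay stateless."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):
            sock.sendto(payload, dest)       # best-effort send
            try:
                ack, _ = sock.recvfrom(64)   # wait for the receiver's ACK
                if ack == b"ACK":
                    return True
            except socket.timeout:
                pass                         # lost somewhere in the network: resend
        return False                         # give up after `retries` attempts
    finally:
        sock.close()
```

Moving part of this detection into the gateways would spare the endpoints some retransmissions, but, as noted above, only if the extra per-packet work does not itself become the bottleneck.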

On the issue of how to guide designers of 'realizations' (as mentioned in the paper, a realization is 'a particular set of networks, gateways and hosts which have been connected in the context of the Internet architecture'), it is true that designers cannot simply rely on how faithfully they have stayed true to the protocol, because the protocol was developed to serve differing objectives. However, it seems a difficult job to provide a perfect handbook that would guarantee a well-performing network, because there are many factors to consider. I suspect this problem is still encountered today, although basic guidelines probably already exist.

Lastly, the paper mentions that perhaps the basic building block of the network, the datagram, needs to be improved. Instead of treating packets as isolated entities, Clark encourages the idea of recognizing sequences of packets travelling between a given source and destination through the concepts of 'flow state' and 'soft state'. In my understanding, this seems to be a cross between a dedicated line and a datagram. However, I am not quite sure I fully understood what he means by these terms or how they are planned to be implemented.
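My reading of 'soft state' is something like the toy flow table below (keying on source/destination and the timeout value are my own assumptions): a gateway could keep per-flow hints that are refreshed by the traffic itself and silently expire, so losing or discarding them never breaks the end-to-end connection the way losing hard, per-circuit state would.

```python
import time

FLOW_TIMEOUT = 30.0   # assumed: seconds of silence before a flow is forgotten
flows = {}            # (src, dst) -> time the last packet of that flow was seen

def note_packet(src, dst):
    """Refresh (or create) the soft state for this flow as packets pass by."""
    flows[(src, dst)] = time.monotonic()

def expire_idle_flows():
    """Soft state is only a hint: idle entries simply age out, so no explicit
    teardown is needed when a flow ends or an endpoint crashes."""
    now = time.monotonic()
    for key in [k for k, seen in flows.items() if now - seen > FLOW_TIMEOUT]:
        del flows[key]
```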

[1] V. Cerf and R. Kahn, "A Protocol for Packet Network Intercommunication", IEEE Transactions on Communications, vol. 22, no. 5, pp. 637-648, May 1974.