Optimize and Fix your Network Connection


Audience Level and Prerequisite Knowledge

This article assumes that the reader has a well-grounded understanding of TCP/IP, particularly why sliding windows are necessary to reassemble out-of-order packets into a successful transfer of a block of data.

For all others, YMMV. If you are looking for Microsoft Windows information, the decisions and modifications are relatively easy and less risky on Windows Vista, Windows Server 2008, and later. For earlier Windows versions, and for all operating systems running the Linux kernel, I highly recommend some background research on how TCP/IP works, because your decisions are not as straightforward and simple. With early Windows you must understand how to make Registry changes (and a Registry backup/restore beforehand is recommended). Linux kernel modifications require accurate choices among many options, so you should understand how to gather and analyze test data. A less rigorous, subjective evaluation is possible and may even be appropriate at times, but it carries a greater risk of mistakes.


Of all the computers in use today, the PC is the most widely used form factor; on that form factor, Windows dominates the desktop, Linux dominates Internet servers, and the Macintosh share is smaller but growing. This paper applies to all three operating systems and likely to many close UNIX cousins.

Over the past ten years or so, and at an accelerating pace, maintainers of both the Windows and UNIX kernels have recognized growing problems transferring larger and larger files, especially across the unpredictable Internet and over new and varied networking media such as wireless links, which are much less reliable than traditional wired networks.

The problems of transferring files across this new and emerging Internet (and even across new LAN technologies) were, and still are, multiplying geometrically, in step with massive increases in bandwidth, as larger and larger files move across ever longer distances over unforeseen new infrastructures, many of which are not as reliable as plain old wires.
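To make the scaling problem concrete: the amount of unacknowledged data a sender must keep "in flight" to fill a path is the bandwidth-delay product (BDP), and the TCP window (and therefore the TCP buffer) must be at least that large. A minimal sketch follows; the link speeds and round-trip times are illustrative examples, not measurements from any particular network:

```python
def bdp_bytes(bandwidth_bits_per_sec, rtt_seconds):
    """Bandwidth-delay product: the bytes in flight needed to fill the pipe."""
    return int(bandwidth_bits_per_sec * rtt_seconds / 8)

# A 100 Mbit/s path with an 80 ms round-trip time needs roughly a 1 MB
# window -- far beyond the classic 64 KB limit of TCP without window scaling.
print(bdp_bytes(100e6, 0.080))  # 1000000 bytes
print(bdp_bytes(1e9, 0.080))    # 10000000 bytes: same RTT at 1 Gbit/s
```

Note how the required window grows linearly with both bandwidth and distance (RTT), which is exactly why buffer sizes that were adequate a decade ago now throttle transfers.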

As file transfers become more varied, the traditional assumptions these kernels must still make sometimes become invalid, and this article attempts to provide the information you need to make the changes that maximize the throughput and efficiency of your networks.

Innovation? - Maybe not New, just insight into the Complete Solution.

The "innovation" of this article is not that it is possible to enlarge TCP buffer sizes or to modify the TCP congestion control algorithm; there are many references for the former and a few other good ones for the latter. But, as of today and AFAIK, this may be one of the first articles that treats the two as essentially linked: TCP buffer sizes today and into the future must sometimes be enlarged, but the use of the TCP buffer must also be well managed, and today that is the province of the TCP congestion control algorithm.
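On Linux, conveniently, both knobs live side by side in sysctl. The fragment below is a sketch only: the maximum buffer values are illustrative and should be derived from your own bandwidth-delay product, and `cubic` is shown simply because it is a common kernel default, not as a universal recommendation:

```shell
# /etc/sysctl.conf fragment -- illustrative values; size the maximums from
# your own bandwidth-delay product, not from this example.
# Format: min / default / max buffer sizes, in bytes.
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# The congestion control algorithm that manages how those buffers are used.
# `sysctl net.ipv4.tcp_available_congestion_control` lists what your kernel offers.
net.ipv4.tcp_congestion_control = cubic
```

Apply with `sysctl -p` and re-test. Enlarging the buffers without also reviewing the congestion control algorithm (or vice versa) is exactly the half-solution this article argues against.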

During research into this topic, a series of blog posts by Jim Gettys was brought to my attention; it has rightly earned respect across the Internet and spawned numerous comments and further discussion in many places about proper TCP buffer sizing. My take on his series is that his observation about "overbuffering" (actually, excessively large sliding windows) is consistent with the well-known principle that windows that are too large can be just as detrimental as sliding windows that are too small. But unless I am mistaken, he does not address the selection of an appropriate congestion control algorithm anywhere, which IMO is a significant omission that possibly undermines his most basic contention: that TCP buffer sizes should be set back to tiny sizes.

Then again... I admit that I also may involuntarily have overlooked something that could invalidate parts (hopefully not most) of this article and welcome comment.


Symptoms of poorly sized or poorly managed TCP buffers can include:

  • Machine responsiveness slows, in a way that might look like a memory leak

  • Network throughput slows, then grinds to a halt for no apparent reason
