Generally, the home office will have computers connected by small, locally attached 100 Mb/s Ethernet links. These systems typically have more processing power than storage access capability. Such installations will find value in placing their storage in a central location and dynamically adding it to their personal computer systems with a simple plug-and-play configuration, thereby obtaining additional storage without having to open up the systems themselves.
The home office environment can use standard, low-cost, off-the-shelf networking components. The switches are readily available, and the desktop and laptop processors usually have more processing power than is needed for the types of work they do. These home office systems usually come with Ethernet connections and do not need any special adapter cards. The only things they need are iSCSI software drivers and the low-cost IP storage devices that iSCSI makes possible.
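To make the point concrete, the sketch below (Python, purely illustrative) shows the only transport machinery an iSCSI software driver needs: an ordinary TCP connection over the existing Ethernet adapter to the target's well-known iSCSI port, 3260. The target's address is an assumption for illustration.

```python
import socket

# Hypothetical address of an iSCSI storage unit on the office LAN.
TARGET_IP = "192.168.1.50"
ISCSI_PORT = 3260  # the well-known TCP port registered for iSCSI

# No special adapter card is involved: the software driver simply opens
# an ordinary TCP connection over the existing Ethernet NIC.
conn = socket.create_connection((TARGET_IP, ISCSI_PORT), timeout=5)

# From here a real driver would carry SCSI commands inside iSCSI PDUs
# over this connection, beginning with a Login Request.
conn.close()
```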
The real problem starts when the user has to migrate data from an old file server to a new unit, a difficult and very disruptive process. There are technical solutions, of course. One is to buy another file server or a network-attached storage (NAS) appliance. Thanks to iSCSI there is another, generally less costly approach: pooled storage. Pooled storage may consist of simple iSCSI JBODs (just a bunch of disks) or RAID controllers, all connected to the same network via iSCSI. With iSCSI, users can also begin small and add storage as needed, placing it wherever they have room and a network connection. Regardless of how many units they add, all of them are logically pooled, and yet each individual system in the office can have its own private portion of the storage.
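The pooling idea can be sketched in a few lines of Python. Every unit and host name below is hypothetical; a real deployment would enforce the private portions with access controls (such as LUN masking) on the iSCSI targets themselves.

```python
# A minimal sketch of logically pooled iSCSI storage built from several
# independently added units.

# Physical units added over time, wherever there was room and a network drop.
pool = {
    "jbod-1": 500,    # capacity in GB
    "jbod-2": 500,
    "raid-1": 1000,
}

# Each office system's private portion, carved out of the logical pool.
assignments = {
    "front-desk-pc": [("jbod-1", 250)],
    "design-pc":     [("jbod-1", 250), ("jbod-2", 500)],
    "file-server":   [("raid-1", 1000)],
}

total = sum(pool.values())
private = sum(gb for slices in assignments.values() for _, gb in slices)
print(f"pooled: {total} GB, privately assigned: {private} GB")
```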
With iSCSI, all the major storage placement decisions are made by the various host systems, just as if the storage were directly connected to them. Because of this, iSCSI is fairly simple compared with NAS, which results in low processing requirements in the iSCSI storage controllers. It is therefore expected that iSCSI JBODs and RAID controllers will be significantly less costly, and will support more systems, than the same storage in a NAS appliance. In other words, for the same processor power an iSCSI appliance can support more clients and more storage than a NAS appliance can. Using iSCSI units in this way is not always the right answer; however, it gives the customer another option that may meet their needs for flexibility and price.
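The asymmetry can be sketched in a few lines. Every name in this hedged illustration is hypothetical and merely stands in for real iSCSI-target or file-server code: the block target acts on an address the host has already resolved, while the file server must run a file system of its own before any data moves.

```python
class Disk:
    """Pretend disk addressed in 512-byte blocks."""
    def read(self, lba, count):
        return b"\x00" * (count * 512)

disks = {0: Disk()}  # LUN number -> physical unit

def iscsi_target_read(lun, lba, count):
    # Block target: the host's own file system already decided where the
    # data lives, so the controller only moves the addressed blocks.
    return disks[lun].read(lba, count)

# A NAS server must do the placement work itself before any I/O happens.
namespace = {"/docs/report.txt": {"lba": 100, "blocks": 8, "mode": 0o644}}

def nas_server_read(path, offset, length):
    inode = namespace[path]               # name resolution / metadata walk
    assert inode["mode"] & 0o444          # permission check
    lba = inode["lba"] + offset // 512    # map the file offset to blocks
    count = -(-length // 512)             # ceiling division
    return disks[0].read(lba, count)[:length]

print(len(iscsi_target_read(0, 100, 8)))                   # 4096 bytes
print(len(nas_server_read("/docs/report.txt", 0, 1000)))   # 1000 bytes
```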
A performance analysis carried out by IBM compared a NAS (NFS) server with an iSCSI target of the same basic power and equipment. The comparison showed that the iSCSI target used much less processing power than the NFS server did. (Refer to the discussion of measurements in Chapter 3.)
Small offices are similar to home offices, except that they have more links and switches. In most installations, a switch can be used to attach either a new NAS appliance or an iSCSI storage controller.
Moving up to the midrange company environment, we find multiple server systems. Unlike desktop or laptop systems, which usually have more processing power than they can use, servers are heavily loaded, performance-critical systems that consume all the CPU cycles they have and often need more. They need access to storage with the smallest possible loss of CPU cycles.
In a Fibre Channel (FC) or direct-attach environment, these systems expend approximately 5% processing overhead to read and write data to and from a storage device. If iSCSI is to be competitive in the server environment, it needs a similar overhead profile. This requires that the processor overhead associated with TCP/IP processing be offloaded onto a chip or host bus adapter (HBA).
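A back-of-the-envelope calculation shows why. Assuming the common rule of thumb of the period that software TCP/IP consumes roughly 1 Hz of CPU for every 1 bit/s of sustained throughput (an assumption for illustration, not a measured figure), a fully driven Gigabit Ethernet link would consume a large fraction of a server processor, far beyond the roughly 5% budget that FC sets.

```python
# Back-of-the-envelope CPU budget. The "1 Hz per bit/s" rule of thumb
# and the 2 GHz processor are assumptions used only for illustration.
link_bps = 1_000_000_000        # Gigabit Ethernet, fully driven
cpu_hz = 2_000_000_000          # an assumed 2 GHz server processor
tcp_cost_hz = link_bps * 1      # ~1 GHz consumed by software TCP/IP
print(f"software TCP/IP overhead: {tcp_cost_hz / cpu_hz:.0%}")      # ~50%
print(f"FC-like 5% budget: {0.05 * cpu_hz / 1e9:.1f} GHz of CPU")   # 0.1 GHz
```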
Offloading requires a TCP/IP offload engine (TOE), which can be completely incorporated in an HBA. All key TOE functions can be integrated on a single chip or placed on the HBA via the "pile-on" approach, which puts a normal processor and many discrete components on the HBA (along with appropriate memory and other support chips) and runs a normal TCP/IP software stack there. Both the integrated chip and the pile-on technique permit the host processor to obtain iSCSI storage access without suffering the overhead of host-based TCP/IP processing.
We will see a number of HBAs of all types from a variety of vendors. These will include not only a TOE but, in many cases, full iSCSI offload as well. We will also see pile-on HBAs that can support 70% to 100% of line speed while operating at close to 100% CPU utilization on the HBA itself. The customer who owns the server will not care how hard the HBA is working, only that it can keep up with line speed and offload the iSCSI overhead (including TCP/IP) from the host processor.
One significant difference between the midrange and small office computing environments is that the I/O requirements of the various servers can be as demanding as those found in many high-end servers. Therefore, in the midrange one tends to see more use of iSCSI HBAs and chips in the various servers and storage controllers, and a smaller reliance on software versions of iSCSI.
High-end environments have the same processor offload and performance requirements as midrange environments. However, they will probably be more sensitive to latency, so it is expected that the pile-on type of HBA will not be very popular there. Because of the never-ending throughput demands of high-end servers, it is in this environment that HBAs with multiple 1 Gb/s Ethernet connections, and eventually 10 Gb/s implementations, will find their most fertile ground.
Another important distinction is that the high-end environment will probably have some amount of FC equipment already installed. This means that the cohabitation of iSCSI and Fibre Channel will be required.
Because of the usefulness and flexibility of iSCSI-connected storage, and because high-end servers are probably already connected with Fibre Channel, it is expected that high-end environments will first deploy iSCSI in their campus, satellite, and "at-distance" installations.