Steve Michael Hotz (born 1963)

Born April 4, 1963 - See [HL006E][GDrive]


Companies / Employers:

From Centergate website : Steve Hotz, Chief Scientist

PDF at [HC005L][GDrive]

Steve Hotz serves CenterGate® as Chief Scientist, bringing a depth of expertise in networking, operating systems, and distributed systems, as well as broad skills in the areas of academic research, system evaluation, and project management. His experience in routing protocols, naming and directory services, large-scale and high-performance server systems, and transport protocol design and implementation provides a framework for CenterGate's system development projects. Steve brings eleven years of scientific experience with USC's Information Sciences Institute (ISI) to his leadership position at CenterGate.

Steve's long association with ISI, where he was mentored by [Jon Bruce Postel (born 1943)] and Paul Mockapetris, began in 1988 as a PhD student. He successfully defended his thesis dissertation on Route Computation and Information Organization for Heterogeneous Global Internetworks in 1994, and joined ISI as a research scientist. As a Computer Scientist at ISI, Steve was involved in a number of technical and funding proposals, and was a member of several key research teams.

His first efforts were in the emerging Domain Name System, where he developed tools to evaluate performance and diagnose system misconfigurations. He continued his work on DNS by serving as technical lead of the MINDEX Project, which produced the initial ideas for several DNS mechanisms that were developed within the IETF and are deployed in the 'bind' implementation today, i.e., the notify and the incremental transfer mechanisms. For the past five years, Steve has been the primary technical expert for DNS issues at ISI, including responsibility for technical oversight of the US Domain.

Steve contributed to a diverse set of projects during his tenure at ISI. These included the X-Bone project, an automated system for deploying private overlay networks, the Large Scale Active Middleware (LSAM) project, a system to automatically configure sets of interconnected proxy caches and algorithms to route HTML requests among caches, and the Netstation project, which focused on a system architecture where the system bus is replaced by gigabit LAN technology. As part of Netstation, Steve developed two high-performance transport protocol implementations: a custom Device Transport Protocol (DTP), and a port of TCP to the Netstation operating environment. As part of LSAM, Steve designed a mechanism based on IP multicast to automatically configure a hierarchical system of proxy caches.

Steve worked as a USC Research Assistant with Deborah Estrin from 1991-1994 on projects focusing on inter-domain routing. During this time he was a collaborator in the development of the Unified Routing Architecture, and designed a software simulation tool to generate very large internetwork models for use in research and commercial protocol development.

During his years at USC and ISI, Steve was selected for several visiting researcher and consulting positions. Most recently, he was a Network Systems Consultant to Genuity Inc., where he was the principal architect of their flagship "HopScotch" product and leader of the HopScotch development team. Steve was a Visiting Research Fellow at Bolt, Beranek and Newman, Inc. in the spring of 1994, and spent six months as a research intern with IBM T.J. Watson Research Center, where he designed, evaluated and published a configuration language for administrative policy for the ISO inter-domain routing protocol, IDRP.

Steve has eleven publications and over ten presentations to his credit, as well as several software distributions, including the widely used 'dig' tool for DNS queries. He has taught the senior operating systems class at USC, and has been a teaching assistant for various computer systems classes. He has served as a reviewer for National Science Foundation proposals, and for a number of professional conferences and publications including IEEE InfoCom, ACM SigComm, GlobeComm, Networld+Interop, Supercomputing, and IEEE Selected Areas in Communications.

LinkedIn profile

PDF - [HL006G][GDrive] / Image - [HL006H][GDrive]

  • UltraDNS - Founding CTO ( Oct 1999 – May 2001 )

      • Chief technologist directly involved in founding the company; architected company product platform; worked with CEO and VP Marketing to shape initial product offering and business plan, resulting in completion of an $8 million Series A financing round.

      • Provided technology team oversight and management of system development and operational deployment of the company's first product: an outsourced Internet DNS service platform. Responsible for growing the initial engineering and operational team from three to eighteen employees; grew team to over forty employees as company scaled.

      • Played a significant role in representing the company externally; key contributor to company and product positioning collateral; led analyst and press interviews, strategic partnership discussions, and major customer engagements. Member of business development team that initiated major customer acquisitions including Microsoft Hotmail and Oracle.

      • Maintained ongoing responsibility for defining technical direction, development of intellectual assets, management of the Technical Advisory Board, interaction with external standards organizations, and evaluation of potential technology partnerships. Defined CTO Office mission, developed operating process for CTO activities, and hired team to analyze and develop strategic company initiatives.

  • CenterGate Research Group LLC Chief Scientist and Technology Group Manager (May 1999 – Oct 1999)

      • Principal technologist and leader of Internet and systems infrastructure incubator; responsible for strategic direction, system design, technical oversight, and resource and project management.

      • Principal architect and project manager of UltraDNS software development and service deployment.

      • Co-designer and responsible for technical oversight of network performance monitoring project which eventually became Catbird Networks.

      • Technical lead and project manager for initial version of Whitehat Internet mail campaign system.

  • Information Sciences Institute - Computer Scientist ( Nov 1994 – May 1999 )

      • Member of Xbone research group that developed an automated virtual network deployment system. Collaborated on initial project direction and architecture, responsible for initial resource management daemon, collaborated on initial application protocol definition, and represented Xbone at funding agency conferences and group meetings.

      • Member of Large Scale Active Middleware research group. Responsible for system to automatically configure set of interconnected proxy caches, and algorithms to route HTML requests among caches.

      • Member of Netstation research group that developed a system architecture in which the system bus is replaced by gigabit LAN technology. Designed and implemented high-performance TCP for Netstation display; designed and built display operating environment based on Texas Instruments' executive primitive library. Designed and prototyped Device Transport Protocol, a structured protocol designed to allow efficient implementation; achieved over 30,000 acknowledged pkts/sec.

      • Task leader of Mini-INDEX Project. Set direction and provided leadership for efforts to improve, and provide new, DNS tools and functionality. Recruited and managed graduate student assistants.

      • Technical lead for development of tools to assist automated administration of the .US Domain namespace; set project direction; recruited and managed graduate student assistants.

  • Genuity Networks - Network Systems Consultant, Architect and Team Lead ( Nov 1996 – Jun 1997 )

      • Principal architect of first HopScotch system, Genuity's flagship web service product and one of the Internet's earliest CDN services.

      • Provided evaluations and recommendations about competing and complementary technologies. Architected product to achieve leveragable product differentiation.

      • Managed and mentored HopScotch software development team.

      • Assisted with acquisition of early adopter customers by explaining and positioning product solution.

      • Developed content and provided technical information to legal team for patent application.

  • IBM - Research Intern, High Performance Networking Group ( May 1992 – Nov 1992 ) in TJ Watson Research Center, NY

      • Designed configuration language for administrative policy for ISO inter-domain routing protocol (IDRP); specified language syntax and semantics, analyzed the interaction of policy semantics and the protocol's operational specification, and implemented the prototype version of policy language parser.

  • University of Southern California - Research Assistant ( Sep 1991 – May 1992 )

      • Analyzed inter-domain routing systems; evaluated performance and scaling issues of algorithms and mechanisms for route computation in very large internetworks. Collaborated in the development of the Unified Routing Architecture with primary responsibility for route computation algorithms and information systems for the Source-Demand Routing Protocol.

      • Designed software simulation tool to generate very large internetwork models with configured topology and service characteristics; directed masters student's prototype implementation efforts.

In Omaha from 1997 to circa 2000? - See [HL006F][GDrive]

"Netstation architecture"


NVD Research Issues & Preliminary Models

Gregory G. Finn, Steven Hotz, Rod Van Meter

USC/Information Sciences Institute

March 1995

Version of: 9/1/95

This research was sponsored by the Advanced Research Projects Agency under Contract No. DABT63-93-C-0062. Views and conclusions contained in this report are the authors' and should not be interpreted as representing the official opinion or policies, either expressed or implied, of ARPA, the U.S. Government, or any person or agency connected with them.

Copyright © 1995 by USC/Information Sciences Institute

SOURCE text (Not yet downloaded) - [HE003W][GDrive]




1. Overview

Microprocessors are now often used in peripheral devices. Recent advances in multicomputer research have produced chip-sized gigabit network interfaces that operate across spans from a centimeter to 100s of meters [1][2]. Since a gigabit local-area network (LAN) has roughly the same channel capacity as a system bus, it is now reasonable to consider the substitution of a gigabit network in place of a system bus.

In conventional computer architectures the main processor communicates with its major peripheral devices via a system bus. In a netstation architecture the communication function of the system bus is replaced by a high-speed internetwork [3]. Peripheral devices communicate with the hosts that control them via the network. Netstation architecture blurs the physical boundary of a computer system. It also blurs the definition of what a host is or is not.

The advantages of utilizing a gigabit network rather than a bus are improved scaling and performance combined with vastly greater accessibility. Networks provide symmetric communication and support simultaneous transfers. Data can be exchanged between any two hosts directly. By relying upon Internet protocols, devices attached to the network are directly accessible to any other host on the internetwork.

To access and control devices across a network requires the development of commonly understood network protocols and interfaces. The client/server model can be extended to apply here. The focus of netstation research is the design and implementation of efficient communication and control mechanisms between the device-client, which controls a device, and the device-server, which presents an interface to the physical device via the internet.

2. Problem Outline

The network interface in the past was not seen as a principal part of the system architecture. The network channel was treated as a slow-speed peripheral device. However, with the advent of gigabit networks, the network is no longer a slow-speed device and it operates at speeds that approximate main-system memory bandwidth. To achieve reasonable performance, a gigabit network interface must be tightly coupled to main-system memory.

In a netstation architecture the principal peripheral device is the network interface. The system bus is largely replaced as a key communications medium by the internetwork. To achieve this, peripheral devices such as disks, displays, keyboard with mouse, are made autonomous nodes on the internet. The operating system or application process that accesses and controls peripheral devices does so via messages that are sent across reliable transport protocol connections.

This architectural shift toward the network as the dominant peripheral communications medium requires both an adaptation of existing kernel device-control methods where practical and creation of new control methods that reflect the physical and administrative differences of a message-based distributed-system architecture. Questions of system configuration, naming, resource discovery and access control must be addressed.

Figure 1. Netstation Architecture


2.1 Unit of Transfer

The fundamental unit of data transfer across a bus is a word and the fundamental bus data-transfer operations are load and store. Transfer latency across a bus is a few nanoseconds and the transfer overhead a few tens of nanoseconds typically. The fundamental unit of data transfer across the internet is a packet. The packet is a grossly larger unit of communication than a word. Transfer latency across the internet varies from several microseconds to tens of milliseconds, and the typical transfer overhead at the source and destination is currently at least several microseconds.

It is possible to access and control devices that are non-resident with little or no change in device-control software by using distributed memory. However, the performance obtained using distributed memory in an internet setting would be poor, since the vast majority of hosts provide little or no support for it. A message-passing model of distributed computation is better adapted to the internet operating environment.

Though message passing is better suited to the internet environment, differences in overhead and latency restrict the types of devices that are suitable for network distribution. Transfer latency across gigabit networks is limited by path length. Little can be done to significantly improve that. For some device types, that latency will limit the maximum practical separation between the client and device. Unlike transfer latency, reducing the per-packet overhead is not an intractable problem. In the context of high-speed networking it is being actively studied and remains a fruitful area of research.
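The latency constraint above can be made concrete with the delay-bandwidth product, the number of bits "in flight" on a path, which bounds how much a round-trip-limited protocol can keep the channel busy. A minimal sketch, using illustrative round-number figures (not measurements from the Netstation project):

```python
def delay_bandwidth_bits(bandwidth_bps: float, one_way_latency_s: float) -> float:
    """Bits in flight on a one-way path: bandwidth x latency."""
    return bandwidth_bps * one_way_latency_s

# A gigabit LAN spanning ~100 m: one-way latency on the order of 1 microsecond,
# so only ~1000 bits (125 bytes) fit on the wire at once.
lan_bits = delay_bandwidth_bits(1e9, 1e-6)

# A long-haul internet path at the same rate with ~20 ms one-way latency holds
# ~20 million bits (2.5 MB), so round trips are far costlier.
wan_bits = delay_bandwidth_bits(1e9, 20e-3)

print(int(lan_bits), int(wan_bits))   # 1000 20000000
```

The LAN figure is why per-packet overhead, rather than propagation delay, dominates on short gigabit paths: the path itself holds less than a single small packet.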

2.2 Means of Transfer

In the context discussed here, we assume that data and commands that pass between a device and its controlling application occur via messages that are sent across the internet. The reliability and fidelity of the command transfer across the internet must closely approximate that obtained crossing a system bus. Commands sent to the device sub-system must exhibit execute once (and only once) semantics. A straightforward way to accomplish this is for the application to issue commands using remote procedure calls (RPCs) that are carried via a reliable transport-layer protocol.
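One way to obtain the execute-once semantics described above is for the device-server to remember the last sequence number it executed per client, acknowledging but not re-running a retransmitted command. A minimal sketch under that assumption (the class and method names are ours, for illustration; this excerpt does not specify Netstation's actual mechanism):

```python
class DeviceServer:
    """Toy device-server enforcing execute-once command semantics."""

    def __init__(self):
        self.last_seq = {}   # client_id -> highest sequence number executed
        self.log = []        # commands actually executed, in order

    def handle(self, client_id: str, seq: int, command: str) -> str:
        # A reliable in-order transport delivers commands in sequence, so a
        # seq at or below the last one seen must be a duplicate of a command
        # that already executed; acknowledge it without re-executing.
        if seq <= self.last_seq.get(client_id, 0):
            return "dup-ack"
        self.last_seq[client_id] = seq
        self.log.append(command)   # "execute" the command
        return "ack"

srv = DeviceServer()
print(srv.handle("host-a", 1, "seek(42)"))   # ack
print(srv.handle("host-a", 1, "seek(42)"))   # dup-ack (retransmission)
print(srv.handle("host-a", 2, "read(8)"))    # ack
```

The same bookkeeping is what an RPC layer over a reliable transport gives implicitly: retransmissions are absorbed below the execution boundary.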

The rate at which commands can be issued is limited by this request/response time. Reducing the packet-related overheads between the controlling application and the netstation node that it is using is particularly important. TCP is not well adapted to the task of transferring RPCs to an application. It provides a byte-stream to the source and destination applications, rather than a sequence of objects framed by the source. The delay-bandwidth product across a high-speed LAN is also often less than even a small RPC packet. A specialized transport protocol that is designed to carry RPCs may reduce packet-related overhead by allowing RPCs to be individually framed.
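The framing problem with a byte-stream transport can be shown directly: because TCP preserves no message boundaries, an RPC layer on top must reimpose them, conventionally by prefixing each call with its length. A minimal sketch of that convention (our own illustrative code, not Netstation's DTP):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its length as a big-endian 32-bit integer."""
    return struct.pack(">I", len(payload)) + payload

def deframe(stream: bytes) -> list[bytes]:
    """Recover complete messages from a buffer of concatenated frames."""
    msgs, off = [], 0
    while off + 4 <= len(stream):
        (n,) = struct.unpack_from(">I", stream, off)
        if off + 4 + n > len(stream):
            break                       # partial frame: wait for more bytes
        msgs.append(stream[off + 4 : off + 4 + n])
        off += 4 + n
    return msgs

wire = frame(b"seek(42)") + frame(b"read(8)")
print(deframe(wire))   # [b'seek(42)', b'read(8)']
```

A transport that carries RPCs as individually framed units makes this receiver-side reassembly, and its buffering cost, unnecessary.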

The External Data Representation and Remote Procedure Call standards, as documented in RFC 1014 [4] and RFC 1050 [5], provide a sufficient basis for creating the RPCs used to control devices. The RPC standard in RFC 1050 is not transport-layer-specific; implementations exist for more than one transport protocol.
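As a rough illustration of the XDR conventions from RFC 1014 (big-endian 4-byte integers; variable-length data length-prefixed and zero-padded to a 4-byte boundary), here is a hand-rolled sketch. It is not a full XDR implementation, and the toy "procedure number plus one string argument" layout is ours, not a format from the paper:

```python
import struct

def xdr_uint(n: int) -> bytes:
    """XDR unsigned int: 4 bytes, big-endian."""
    return struct.pack(">I", n)

def xdr_string(s: bytes) -> bytes:
    """XDR string: length word, data, zero padding to a 4-byte boundary."""
    pad = (-len(s)) % 4
    return xdr_uint(len(s)) + s + b"\x00" * pad

# Encode a toy call: procedure number 7 followed by one string argument.
msg = xdr_uint(7) + xdr_string(b"seek")
print(msg.hex())   # 00000007000000047365656b
```

The fixed 4-byte alignment is what lets a receiver parse fields with simple word-aligned reads, which is part of why XDR suited early RPC implementations.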

Industry is moving towards device-command formats that are suitable to the networking domain. An example of this is the SCSI-3 standard for command of devices that is being developed by ANSI/ISO [6]. SCSI-3 defines the syntax framework of commands sent to target devices for execution, their semantics, certain aspects of error reporting and recovery, and the behavior of queueing at the target device for environments where multiple requests may be outstanding. Methods for access ordering are covered, but access control, ownership and authentication issues are not discussed.

The SCSI-3 committee has considered the transporting of commands over a variety of different media, including high-bandwidth point-to-point channels and networks. As a result, SCSI-3 commands adopt a
