Author: Alan C. Dube
Date: 2/12/96
This paper is the first of a five-topic series on client/server systems development. The paper will provide an overview of client/server systems, describing their importance, discussing background and current information, and examining the future direction of client/server application development. The paper will close with some conclusions on client/server development based on the areas covered.
It is difficult to succinctly define exactly what client/server systems are. Different people and organizations have different definitions. However, Bochenski (1994) points out that there are ten commonly accepted characteristics that help to define the meaning of client/server computing:
A client/server architecture consists of a client process and a server process that can be distinguished from each other, yet that can interact seamlessly.
The client and server portions can operate on separate computer platforms - but do not have to.
Either the client or the server platform can be upgraded without having to upgrade the other platform.
The server is able to service multiple clients concurrently; clients can access multiple servers.
The client/server system includes a networking capability.
A significant portion of the application logic resides on the client end.
Action is usually initiated at the client end. However, database servers can perform tasks based on triggers, business rules, and stored procedures.
A user-friendly graphical user interface (GUI) generally resides on the client end.
A structured query language (SQL) capability is characteristic of the majority of client/server systems.
The database server should provide data protection and security (p. 8).
Regardless of the definition used, the computer industry is going through a period of rapid and dynamic change -- and client/server computing is a key component of this evolution. In a recent survey on information systems (IS) spending for 1996, 100% of the corporate respondents reported budgeting significant funds for client/server development and deployment (Wilder, 1996). Whitten, Bentley, and Barlow (1994) point out that there are several reasons for the intense interest in client/server computing:
Clients are increasingly powerful and cheaper than mainframes and minicomputers.
Servers are becoming powerful enough to handle the workload of many clients at low cost.
Data storage can be moved closer to the end user where it becomes a more valuable business resource.
Client/server applications are generally easier to construct and maintain (p. 408).
The change in emphasis from host/terminal systems to client/server applications occurred gradually. Traditionally, most serious application systems were developed using mainframe computers, host languages, databases, operating systems, and dumb-terminal configurations. Character-based input and data manipulation were the rule. IS departments were the central point for handling a large backlog of requests for reports and systems development. The mainframes were, in essence, both server and client. The terminal, not capable of doing independent processing, was merely an input/output device.
The advent and distribution of the personal computer and workstation in the 1980's, along with significant advances in networking, server, peripheral, and software products through the 1990's, have changed everything. Corporations are now looking to client/server technologies as the foundation of their IS development efforts in the future. Systems are becoming more distributed, data is being replicated, and development itself is no longer within the sole domain of the IS department. Additionally, many organizations are undertaking efforts to rewrite or bridge their legacy mainframe systems to a client/server platform to generate cost savings and put a graphical front end on their applications (Bowen, 1996).
Many vendors have emerged to take their place as "best of class" for products within the modular and open constructs of client/server designs. Novell Netware has emerged as the dominant network operating system, and Sybase and Oracle have captured a large share of the relational database market. Hardware vendors like Compaq, Sun, and Hewlett Packard have introduced an impressive array of products to corporate consumers, forcing older companies like IBM and Digital to rethink their strategies and adapt to the demands of the dynamic client/server market. Microsoft maintains a strong presence on the corporate desktop with their Office suite of applications. Client/server developers have a wide range of visual development tools to choose from: Visual Basic, PowerBuilder, SQL Windows, and Delphi (Gallagher, 1996).
In the future, IS organizations and business units will continue to grapple with the complexities of developing and deploying client/server environments within their organizations, while seeking the "promised" user, data, and cost benefits of doing so. Hard choices will need to be made after careful needs and requirements analyses have been conducted. Standards for client/server performance and design will continue to emerge as vendors agree on the best approaches for product segmentation and interaction. Hardware advances will continue to outpace those in the software area. Databases will continue to grow in size, and new channels for application deployment (e.g., the Internet) will have to be explored if companies want to remain competitive and use information technology as a resource for adapting to market influences and demands.
Client/server systems should be approached with a cautious optimism. Client/server systems can yield many financial and usability rewards when done correctly. To date, however, IS management has had to deal with the harsh realities of client/server development: the time and staff required for development is often underestimated, the technology is not as stable as it should be, project benefits are often overrated, client/server systems can be expensive to implement, and performance problems have been numerous (Hamilton, 1994). Laudon and Laudon (1994) note that companies have not experienced the cost savings they were expecting in moving to a client/server environment because the hardware savings are often offset by the high cost of specialized labor needed to develop and implement the systems. Vendors have yet to provide tools for comprehensive management of distributed systems. Performance analysis, software coordination, troubleshooting, change management, archiving, batch processing -- all known and mastered tools and techniques in the mainframe world -- need to be implemented effectively for client/server systems to flourish.
As with any computing architecture, education, discipline, and planning are key to successfully developing and deploying client/server systems. Organizations must understand all of the components or "sum of the parts" involved with client/server development, select a standard approach, and train their staffs on the components. Modular and object-oriented approaches to development must be utilized. Systems should be tested thoroughly before deployment -- not only for their performance, but also for their interaction with other systems. Above all, proper consideration must be given to user interface design, toolset selection, and relational database design and management systems.
This paper, part two of a five-topic series on subjects related to client/server systems development, will report on user interface design. The paper will provide an overview of user interface design, describing what it is and its importance to client/server systems, and providing background and current information. The future direction of how user interface design will impact client/server applications will be explored. The paper will close with some conclusions on user interface design.
Laudon and Laudon (1994) define a user interface as "the part of the information system through which the end user interacts with the system; the type of hardware and the series of on-screen commands and responses required for a user to work with the system" (p. 475). Whitten, Bentley, and Barlow (1994) further state that a user interface design is "the specification of a conversation between the system user and the computer that results in the input of new data to the information system, the output of information to the user, or both" (p. 693). As Shneiderman (1992) observes, human performance in the use of computer and information systems will remain a rapidly expanding research and development topic in the coming decades -- both hardware and software developers are exploring how best to present information to users, and enhance their use of a system.
It is clearly evident that the design of user interfaces for conversational, query, and transaction-based systems has taken on a greater importance due to the recent gains in client/server technologies, distributed computing, and graphical operating systems (OS). Systems developers are now depending on graphical user interfaces (GUIs) to facilitate user satisfaction with systems, reduce the amount of keystrokes required for input, limit the chances for error, and provide greater data display and presentation capabilities. Parker (1994) notes that to be considered a GUI, an application should have the following characteristics:
A "desktop" environment from which application "windows" can receive user focus.
A title bar that indicates the window's name.
A menu bar that shows commands which apply to the current window.
Sizing buttons that allow a window to be expanded to fit the entire desktop or minimized to an icon.
Scroll arrows and bars that enable scrolling through data within a window.
A window border that is used for moving or resizing windows.
Icons that represent choices and can be invoked using an input device (p. 224).
Until the recent shift to client/server development, most business systems were character-based (with the exception being kiosk systems). This was primarily due to the lack of computing power needed to effectively maintain a graphical system. Mainframe systems and their toolsets were not inherently suited to developing and deploying graphical applications due to the bandwidth constraints of their host/terminal environments. A "smart" client, or one that was capable of doing most of the display and video manipulation required for a graphical system, did not exist.
As a result, great care was taken to translate a user's system needs through a series of structured menu choices, prompts, and function-key assignments. Color has played, and continues to play, a key role in character-based environments because of its usefulness in getting a user's attention or highlighting an exception condition. Reverse video, bolding, and underlining of text were also used effectively.
However, these character-based systems often required users to remember a series of operational codes and commands in order to use them effectively. Menu choices and function key assignments were limited. Screen clutter was common, as developers struggled to fit as much information as possible onto a screen to avoid additional transactions. The applications, often a product of COBOL code and screen-mapped templates, were not event-driven. Users were forced to complete a series of sequential events, and more importantly, were not able to multitask with different applications due to the limitations of the mainframe OS and terminal environment.
The introduction of a powerful and intelligent client (i.e., the PC workstation) into the client/server mix allowed developers to exploit the increased hardware capabilities by introducing graphical techniques and standards. As major vendors such as Microsoft, IBM, and Apple improved their OS offerings, graphical applications continued to arrive on the market and be adopted by corporate consumers eager to implement a better approach to their user interfaces.
Indeed, a GUI and an OS are currently viewed as synonymous: Windows, Windows NT, Windows 95, OS/2, Macintosh, and Unix (X Windows) all sport graphical front ends to their environments. While the various OSs differ in their syntactical approaches, each GUI exists for the main goal of doing away with (or at least drastically reducing) the complexity of using the system and the dreaded "command-line" prompt. A plethora of development tools and application generators are now on the market that assist in developing graphical, client/server applications that work on one (and in some cases several) of the aforementioned OSs. The move to standardize GUI applications and how they interact with each other (e.g., the "suite" approach used by Microsoft Office) has gained momentum.
The future for user interface design appears to be both exciting and challenging. The use of hypertext applications, through their widespread deployment on the World Wide Web as well as in groupware products like Lotus Notes and Collabra Share, will continue to flourish. Current GUI toolset designers, as well as Internet software companies, will likely adopt a standard for markup-language scripting. This will require that client/server developers adopt a structured design approach for multiple environments as they are tasked with merging access to these dissimilar systems (Nanard & Nanard, 1995).
A well planned user interface design is critical to the acceptance and success of any system. A system that has superior hardware and response times behind it is doomed to failure if no one wants to use it because of bad interface design. Developers need to keep human factors in mind at all times when creating and deploying applications. Users should be involved in the design process by jointly working on prototypes, transition diagrams, and screen layout charts with the development team.
Developers also need to refrain from cluttering applications with excessive graphics and functionality. Johnson (1994) notes that designers often work under the misconception that the more graphical an interface is, the better it performs. The proliferation of icons and button bars in many GUI applications leads to information overload. The end result is user confusion -- with so many options available, few of them are actually used. To keep things simple, application designers should not be afraid to create separate dialogue windows, keep screens clean, be consistent, use naming standards, and keep the keyboard involved. Angus (1994) observes that a beautiful user interface can actually slow an experienced user down by displaying too much detail. This is especially true for data entry and keypunched transactions.
Client/server systems are about balanced processing. User interface design should also adhere to this paradigm. When graphical application generators are used to create systems, developers should balance keyboard and mouse input, along with what is actually being displayed to the user.
This paper is part three of a five-topic series on client/server systems development, and discusses the state of application generation toolkits. The paper will provide an overview of what application generators are, describing their importance to the client/server environment, and discussing how they evolved to their present functionality. Current information on and the future direction of how application generators may impact client/server development will be covered. The paper will close with some conclusions on how application generators should be used to affect and shape client/server systems.
As with the definition of a "client/server" architecture, there is no single and exact definition for an application generator. Categorizing client-based construction tools is difficult because there is much overlap of features among them, with new features being continuously added. Some ad hoc query and database products, like Gupta's Quest and Microsoft Access, are considered end-user tools but have sophisticated form, reporting, and scripting capabilities. As Bochenski (1994) notes, they can produce impressive stand-alone results -- and therefore should be viewed as end-user application development tools.
However, a high-end group of application generators (e.g., Visual Basic, PowerBuilder, SQL Windows, and Delphi), can now be considered the "market" leaders because they target themselves primarily to client/server developers who are rapidly creating and deploying transaction-intensive application systems (Gallagher, 1996). These products usually contain several of the following features that are desirable to a client/server development environment:
Multiple operating system support.
Support for team development via a LAN-based repository.
Support for popular GUIs and back-end database systems via a standard SQL dialect.
A graphically-oriented code generator with Windows-supported debugging.
Object-oriented capabilities with class libraries.
An "industrial-strength" programming language.
Support for portability and interoperability.
Inherent or supported transaction management.
Thus, the products aim to allow for total application generation. The designer(s), given the tools, must effectively learn how to create systems that balance the benefits of a graphical (or character-based where appropriate) presentation with the performance characteristics of client/server systems.
Information systems (IS) departments are trying to cope with the new client/server frontier by transitioning away from third-generation languages (3GLs) to those application toolsets which best meet their needs and objectives. Choosing the correct tool remains a difficult process. Like COBOL on the mainframe, the C programming language evolved as the dominant engine for developing GUI systems in a client/server environment. However, the use of C poses significant challenges to traditional IS organizations that want to migrate to a client/server platform: C is difficult to learn and master, and requires a large amount of code for GUI applications. An easier, more intuitive development platform was needed if client/server GUI applications were to be written and distributed in a rapid and efficient manner.
Enter Visual Basic, PowerBuilder, SQL Windows, and Delphi. Visual Basic (VB), from Microsoft, is a derivative of Quick Basic that was modified to handle graphical and event-driven constructs for the Windows platform. VB has a function rich library that supports object linking and embedding and a database library that allows VB applications to access a variety of external data sources. VB's development environment is somewhat unstructured in that it lacks a built-in methodology, but with experimentation a developer can quickly create impressive applications. VB also has hundreds of custom controls (i.e., VBXs) available for it from a variety of third party vendors.
PowerBuilder, from Powersoft, is built on event-driven, object-based technology. PowerBuilder retains characteristics that are familiar to the developer, via its easy-to-use graphical "painter" environment. It supports team development by providing a built-in repository and a shared library management system. PowerBuilder, like VB, is a fairly open toolset in that it supports many back-end databases and links to third-party products (Darling, 1993). A key component of PowerBuilder is its "data window" object: a reusable data retrieval entity. The data window can be shared across applications, allowing developers to standardize database screens. PowerBuilder uses PowerScript, a BASIC-like combination of object-based and procedural programming concepts, to link together application objects and events. PowerBuilder is expensive ($3,000+), but its features, methodology, and flexibility make it economical in many ways for those IS groups that need to quickly transition into GUI development.
SQL Windows, from Gupta Technologies, provides predefined objects like forms, fields, menus, list boxes, and push buttons that are associated with event-driven logic. SQL Windows provides a flowchart interface to its toolset that walks programmers through the event-driven process. Like VB and PowerBuilder, SQL Windows offers add-in products to assist in the construction and delivery of client/server applications. SQL Windows has a good reputation for allowing programmers to create industrial-strength transaction processing front ends that are quite robust.
Delphi, from Borland International, contains an object repository, data library, and integration with library control systems (i.e., PVCS). Delphi was touted by Borland during its initial development as a VB killer. While VB still exists, Delphi has won over many Microsoft customers and established a niche for building fast desktop-database applications. Delphi balances the flexibility and power of an object-oriented programming language with the speed of a rapid development tool, and has strong database integration features (Gallagher, 1996).
The future for client/server application generators looks promising. As operating systems continue to improve, and client/server architectures stabilize, application generators will exploit the advances. One of the biggest complaints about GUI toolsets is that their stability and performance pale in comparison to those of third-generation languages (e.g., C++). With the addition of compilers, direct programming hooks into the operating systems, and a continued trend toward an open programming environment, these concerns should be alleviated.
Application generators will also become more tightly integrated with transaction processors (e.g., CICS and Tuxedo), allowing fast and economical access to legacy data. This will allow mainframe data to be treated no differently (from the user's perspective) than data stored on a local server. Also, toolsets will become cross platform (PowerBuilder is already to a degree), allowing developers to create applications that operate on multiple environments.
No application generator can anticipate every possible requirement, so extensibility is an important factor -- especially in a client/server environment. Generators by themselves cannot create a good user interface -- this is the realm of the designer and user. The developer(s) must master the toolset in order to reap the benefits of its capabilities. Most importantly, the relational data accessed by the generated application must be well designed and implemented to ensure adequate system performance.
This paper is part four of a five topic series on client/server systems development. The paper will focus on relational database design, discussing background information, describing its methodology, and examining the future direction of relational database design. The paper will close with some conclusions on design and implementation methods used for relational databases on client/server systems.
The relational model of data was introduced by Codd in 1970. It is based on a simple data structure -- the relation -- and has its foundation in theoretical and algebraic principles. The database itself is a collection of relations. Each relation (table) consists of rows (tuples) that in turn consist of related data values (column attributes) that describe a real-world relationship. The set of values that can appear for each attribute is called its domain. Each relation within a database may be related to others with a given cardinality: one-to-many, many-to-many, many-to-one, or one-to-one.
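For illustration only, the following SQL data definition statements sketch two hypothetical relations, DEPARTMENT and EMPLOYEE (the table and column names are invented for this example, not drawn from any system discussed in this paper). Each column is an attribute, each column's data type defines its domain, each stored row is a tuple, and the shared DEPT_NO attribute expresses a many-to-one cardinality between employees and departments:

    -- Hypothetical relations used only to illustrate the terminology above.
    CREATE TABLE DEPARTMENT (
        DEPT_NO    INTEGER   NOT NULL,   -- key attribute
        DEPT_NAME  CHAR(30)  NOT NULL,   -- attribute drawn from a character domain
        PRIMARY KEY (DEPT_NO)
    );

    CREATE TABLE EMPLOYEE (
        EMP_NO     INTEGER   NOT NULL,
        EMP_NAME   CHAR(40)  NOT NULL,
        DEPT_NO    INTEGER   NOT NULL,   -- many employee rows may reference one department
        PRIMARY KEY (EMP_NO)
    );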
Codd further defined the relational model in 1972 by introducing the concept of "normalization." Normalization is the process during which unsatisfactory relations are decomposed by breaking up their attributes into smaller tables that possess desirable properties. There are three common tests of normal form. First normal form (1NF) requires that the value of any attribute in a tuple be a single, atomic value from the domain of the attribute. Second normal form (2NF) demands that every nonprime attribute in a 1NF relation be fully dependent on every key of the relation. Third normal form (3NF) requires that a relation be in 2NF and that no nonprime attribute be transitively dependent on the primary key (that is, dependent on it only by way of another nonprime attribute).
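As a brief, hypothetical illustration of this decomposition (the table and column names are invented), an order relation that stores a multi-valued item list in one column violates 1NF, and one that carries the customer's name alongside the customer number is not in 3NF; normalization might break it apart as follows:

    -- Unsatisfactory relation: ITEM_LIST is not atomic (violates 1NF), and
    -- CUST_NAME depends on CUST_ID rather than on the key ORDER_ID (violates 3NF).
    CREATE TABLE ORDER_FLAT (
        ORDER_ID   INTEGER  NOT NULL,
        CUST_ID    INTEGER  NOT NULL,
        CUST_NAME  CHAR(40),
        ITEM_LIST  CHAR(200),
        PRIMARY KEY (ORDER_ID)
    );

    -- Decomposition into smaller relations with desirable properties: every
    -- attribute holds a single value, and every nonprime attribute depends
    -- fully and non-transitively on its relation's key.
    CREATE TABLE CUSTOMER   (CUST_ID   INTEGER NOT NULL PRIMARY KEY,
                             CUST_NAME CHAR(40));
    CREATE TABLE CUST_ORDER (ORDER_ID  INTEGER NOT NULL PRIMARY KEY,
                             CUST_ID   INTEGER NOT NULL);
    CREATE TABLE ORDER_ITEM (ORDER_ID  INTEGER NOT NULL,
                             ITEM_NO   INTEGER NOT NULL,
                             QUANTITY  INTEGER,
                             PRIMARY KEY (ORDER_ID, ITEM_NO));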
Elmasri and Navathe (1994) note that the relational database design process consists of six phases: requirements collection and analysis, conceptual database design, relational database management system (RDBMS) selection, logical database design, physical database design, and database system implementation. Database design is often done in a team environment, with database administrators, programmer analysts, business analysts, and end users all working together to understand all aspects of, and set expectations for, the database.
Requirements collection and analysis is the process of identifying and analyzing the intended use of a database. This includes identifying the major application areas and user groups that will use the database. Documentation on existing systems, the current operating environment (including transaction frequency and flow), and responses to user questionnaires are all studied to determine their influence on the requirements collection process. Data collected during this phase is often put into a structured form using techniques such as HIPO (hierarchical input process output) charts, data flow diagrams, and Warnier-Orr diagrams. This phase is time-consuming, but crucial to the future success of the relational database.
Conceptual database design examines the data requirements resulting from phase one and produces a conceptual schema. A schema is an RDBMS-independent, high-level data model. The goal of the schema design is a complete and normalized understanding of the database structure, meaning, relationships, and constraints. A diagrammatic description of the conceptual model serves as an excellent device for communication among database users, designers, and analysts. One of the most commonly used diagrams for conceptual design is the entity-relationship (E-R) approach, developed by Chen (1991). The E-R approach allows designers to graphically diagram entities (relations), relationship types, and dependencies. This graphical approach helps to resolve conflicts (naming, type, domain, and constraints) that can occur within a database schema.
Choosing the proper RDBMS on which to implement a conceptual design is governed by technical, economic, and political factors within an organization. The technical factors concern the suitability of the RDBMS for the processing tasks at hand, the storage and access methods it supports, the user and programming interfaces available for it, and the standard SQL (structured query language) release level it supports. Economic factors include costs for software acquisition, maintenance, hardware, database creation and conversion, personnel, training, and on-going operations. Political and organizational factors that need to be considered are the familiarity of personnel with the system, availability of vendor services, and the hardware/software platforms that the RDBMS supports. The supported platforms must fit within the company's current and proposed client/server architectures.
The next phase of relational database design is to map the conceptual schema to the data model of the selected RDBMS. The result of this logical design phase is a set of DDL (data definition language) statements that can be used to define the database and table structures on the target RDBMS. The logical schema may have to be adjusted to conform to the specific implementation features of the data model as used in the RDBMS.
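To make this concrete, the following hypothetical DDL fragment (invented names, generic SQL) shows a table generated from the logical design, together with the kind of dialect adjustment that may be needed for a particular engine:

    -- Generic DDL produced from the logical design.
    CREATE TABLE INVOICE (
        INVOICE_NO  INTEGER        NOT NULL,
        AMOUNT      DECIMAL(11,2)  NOT NULL,  -- "money" domain from the conceptual schema
        PRIMARY KEY (INVOICE_NO)
    );
    -- The same column may need adjustment for a specific RDBMS; on Oracle,
    -- for example, the AMOUNT column would typically be declared NUMBER(11,2).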
The results of the logical phase then drive the requirements and planning for the next phase: physical database design. In physical design, system planners choose the specific storage structures and access paths for the database system in order to achieve optimum performance for any applications that access the data. Indexes, clustering schemes, response times, space utilization, referential integrity, and transaction throughput are all taken into consideration. The RDBMS is then closely monitored and tuned to facilitate a successful implementation.
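As a hedged example of such physical design choices, indexes might be added to the hypothetical EMPLOYEE table introduced earlier to shorten the access paths used by frequent queries; each index speeds retrieval at the cost of slower inserts and additional storage, so the transaction mix determines which are worthwhile:

    -- Index supporting frequent lookups by employee name.
    CREATE INDEX EMP_NAME_IDX ON EMPLOYEE (EMP_NAME);

    -- Composite index supporting joins from DEPARTMENT to EMPLOYEE.
    CREATE INDEX EMP_DEPT_IDX ON EMPLOYEE (DEPT_NO, EMP_NO);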
The final phase, database system implementation, deals with executing the DDL from the logical phase, initializing the empty database, and populating the database with data. The data may be converted from an earlier system and loaded via batch routines or scripts, or may be loaded by direct-application input. Programmers at this stage are greatly concerned with transaction management on the RDBMS, and will make changes to their front-end code or RDBMS triggers as needed.
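A minimal sketch of the population step, again using the hypothetical tables from the earlier examples, might load converted legacy rows with ordinary INSERT statements; in practice, a vendor bulk-load utility or batch script would usually be used for large volumes:

    INSERT INTO DEPARTMENT (DEPT_NO, DEPT_NAME) VALUES (10, 'ACCOUNTING');
    INSERT INTO EMPLOYEE (EMP_NO, EMP_NAME, DEPT_NO)
           VALUES (1001, 'SMITH, J.', 10);
    COMMIT;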
The future of relational database design should see an automation of the design/implementation process and a convergence of design toolsets and the application generators for the client/server platform(s) of choice. Products that automate, or at least make the design process easier to manage, include S-Designor, ER-WIN, LBMS, and CASE (computer aided software engineering) products such as IEF and Bachman. These RDBMS design products will also have to account for cross-platform database access, distributed relational models, and database replication. The design products will also need to be open in their architecture, allowing front-end tools like VB and PowerBuilder to take advantage of the schema definition generated in order to enhance their own data dictionary capabilities.
When designed properly, a relational database, coupled with a dynamic client/server environment, can yield enormous benefits. Relational systems reduce the redundancy of data, represent complex data relationships, enforce integrity constraints, provide a consistent platform for data access, and increase application security.
The successful installation of a relational database in a corporate client/server environment is highly dependent upon the skills of the design team and the stability of the database engine selected for implementation. Companies need to start small and be willing to experiment with prototype applications to determine the best "fit" within their client/server architectures and user base.
This paper is the last of a five-topic series on client/server systems development, and discusses RDBMS technology. The paper will provide an overview of what RDBMSs are, describing their features and importance to the client/server environment, and discussing their evolution. The future direction of how RDBMSs will affect client/server development will be covered. The paper will close with some conclusions on the role of the RDBMS in client/server application development.
An RDBMS is software designed to control and manage databases using relational structures. RDBMSs use a data repository or dictionary to store the structure and definitions of data. Advanced RDBMSs also control the relationships between tables, transaction management for updating the data, space management for the data itself, and the methods and utilities used for accessing the data. RDBMS software is often referred to as the "back-end" system because it is usually separate from the client "front-end" and is dedicated solely to processing database queries and functions as requested.
Bochenski (1994) notes that there are some other general characteristics that are found in most database servers: locking mechanisms, multi-user access control, security, query optimization, and caching. Additional characteristics that are desirable include referential integrity controls (via triggers) and the ability to process a two-phase commit. Referential integrity is a mechanism in databases which ensures that every foreign key matches a primary key value. A two-phase commit guarantees the validity of distributed updates in instances of hardware or software failure.
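For illustration, referential integrity between the hypothetical EMPLOYEE and DEPARTMENT tables used earlier could be declared as follows; where declarative constraints are not available, the equivalent rule can also be enforced procedurally with vendor-specific triggers:

    -- Every EMPLOYEE.DEPT_NO (foreign key) must match an existing
    -- DEPARTMENT.DEPT_NO (primary key) value.
    ALTER TABLE EMPLOYEE
        ADD FOREIGN KEY (DEPT_NO) REFERENCES DEPARTMENT (DEPT_NO);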
SQL is often used by RDBMSs as a standard language for accessing relational data. As Martin (1994) observes, it is the existence of the SQL standard which has most strongly shaped the current client/server database systems. The existence of SQL established a clean dividing line which cuts across all relational applications. The division makes it relatively easy to split processing between the client (which generates the SQL and displays the results) and the server (which produces results in response to an SQL statement). Most major RDBMS vendors allow application and query products to send SQL statements across the network; the client need not know much about the database, and the database does not have to know anything about the client. Additionally, some databases allow the creation and invocation of stored procedures (pre-compiled SQL statements) to improve processing time, separate processing, and reduce network traffic.
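As a minimal sketch of this division of labor, written in a Transact-SQL-style dialect such as Sybase's (the procedure and table names are hypothetical, carried over from the earlier examples), the client sends only a one-line request while the pre-compiled SQL executes on the server:

    -- Defined once on the server:
    CREATE PROCEDURE get_employees @dept_no int
    AS
        SELECT EMP_NO, EMP_NAME
          FROM EMPLOYEE
         WHERE DEPT_NO = @dept_no

    -- Invoked by the client across the network:
    EXECUTE get_employees @dept_no = 10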
The current state of RDBMSs evolved from earlier research and definition. After Codd introduced the relational model, a flurry of experimentation with relational ideas occurred. IBM led the development efforts for commercial relational platforms, announcing two products in the 1980s: SQL/DS for its minicomputer environments, and DB2 for its mainframe operating system (Schur, 1994). Other relational database management systems that were also released for the minicomputer and client/server platforms include Ingres, Oracle, Sybase, and Informix.
At first, performance of these relational systems was poor -- and IS organizations were hesitant to switch to them because of it. As the shift to client/server deployment unfolded and the operating systems improved, the market for relational database management systems grew dramatically. By 1990, products like Ingres, Oracle, Sybase, and Informix had achieved a high enough level of performance that all but the simplest, most time-critical, and most stable information system applications had switched to relational systems. Approximately 80% of all database packages sold today for client/server installations are relational (Shasha, 1992).
Ingres is known for its technical excellence. Ingres was first released as a database server for the Unix platform and was subsequently ported to OS/2. A major innovation of Ingres is its cost-based optimization technique: a procedure that analyzes the various ways of resolving a database query and statistically estimates which approach would be the most efficient. Ingres also provides referential integrity and event triggers based on attribute values in a given table.
Oracle is one of the current leaders in the RDBMS market due to its portability, extensibility, and front-end support. Oracle has evolved over time into a solid product, offering support for referential integrity, stored procedures, and triggers. Oracle has some of the best product and customer support in the business. Oracle also provides SQL extensions that allow developers to code error handling, looping, and branching constructs into SQL statements.
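A minimal sketch of such an extension, in the style of Oracle's PL/SQL (the table names are the hypothetical ones used earlier), shows branching and error handling wrapped around ordinary SQL statements:

    DECLARE
        v_count  INTEGER;
    BEGIN
        SELECT COUNT(*) INTO v_count
          FROM DEPARTMENT
         WHERE DEPT_NO = 10;
        IF v_count = 0 THEN                      -- branching construct
            INSERT INTO DEPARTMENT (DEPT_NO, DEPT_NAME)
            VALUES (10, 'ACCOUNTING');
        END IF;
        COMMIT;
    EXCEPTION
        WHEN OTHERS THEN                         -- error-handling construct
            ROLLBACK;
    END;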
Like Oracle, Sybase SQL Server is also one of the leaders in the RDBMS market. It has a large base of third-party vendor support due to its open environment, and it runs under many operating systems, including OS/2, Netware, and several variations of Unix. Sybase is often the RDBMS of choice for large, high-volume applications that require high-speed processing. It fully supports referential integrity, stored procedures, triggers, and exceptional transaction management.
Informix is considered another top performer and is often used with mission-critical transaction processing systems. Informix operates primarily under the Unix operating system, and provides high performance and high data availability. It uses a cost-based optimizer, shared memory, and direct disk input/output. Additional features include on-line archiving, disk mirroring, and fast database recovery.
As the competition heats up in the client/server RDBMS market, vendors will continue to release products in the future that contain enhanced functionality: centralized management and control, interoperability, replication, distributed access, and object orientation. Vendors are working furiously to improve their query optimizers (Ricciuti, 1995), take advantage of hardware advances (Cole, 1995), link their systems to transaction processing monitors (Ricciuti, 1994), and partner with OS vendors for packaged product delivery (Nash, 1994).
Ingres, Oracle, Sybase, and Informix are all excellent client/server RDBMSs. Each offers the robust environment needed to design and deliver top-notch databases. All are supported by a wide variety of front-end application development tools. All provide facilities for data management, access, and recovery. To make the choice even harder, most can be installed on a variety of operating systems and hardware platforms.
The relational database system and the server it resides on are the keys to and the beginning of any successful client/server implementation. Therefore, the selection of the RDBMS platform, both hardware and software, should not be made lightly. IS organizations need to invest the time and resources needed to test candidate RDBMSs and prototype with them under realistic production environments in order to determine the best performance and technical match. Companies need to balance economic constraints with the need to purchase quality hardware, operating systems, application tools and RDBMSs. Corporations also need to consider the human element when creating client/server systems. Only with the help of structured design techniques and knowledgeable systems professionals will client/server systems deliver upon the promise.
Angus, J. (1994). Coding a masterpiece. InfoWorld, Vol. 16, No. 44, 85-90.
Bochenski, B. (1994). Implementing production-quality client/server systems. New York: John Wiley & Sons.
Bowen, B. (1996). What's the right tool? Information Week, Issue 561, 40-44.
Chen, P. (1991). The entity-relationship approach to logical database design. Wellesley, MA: QED Information Sciences.
Cole, B. (1995). Database vendors taking alternate routes to support parallel processing. Network World, Vol. 12, No. 27, p. 24.
Darling, C. (1993). Two SQL front ends. InfoWorld, Vol. 15, No. 9, 51-59.
Elmasri, R. & Navathe, S. (1994). Fundamentals of database systems (2nd Ed.). Reading, MA: Benjamin/Cummings Publishing.
Gallagher, S. (1996). Constructing better visuals. Information Week, Issue 560, 50-53.
Hamilton, D. (1994). Don't let client/server performance gotchas getcha. Datamation, Vol. 40, No. 21, 39-40.
Johnson, S. (1994). Tour the software chamber of horrors. Computer Technology Review, Vol. 14, No. 6, p. 6.
Laudon, K. & Laudon, J. (1994). Management information systems: Organization and technology. Englewood Cliffs, NJ: Macmillan Publishing.
Martin, R. (1994). Servers vs. clients. Journal of Systems Management, Vol. 45, No. 4, 26-27.
Nanard, J. & Nanard, M. (1995). Hypertext design environments and the hypertext design process. Communications of the ACM, Vol. 38, No. 8, 49-56.
Nash, K. (1994). Sybase/Novell deal one-ups Oracle. Computerworld, Vol. 28, No. 43, p. 73.
Parker, C. (1994). Understanding computers and information processing (5th ed.). New York: Dryden Press.
Ricciuti, M. (1994). Oracle, Sybase take on OLTP. InfoWorld, Vol. 16, No. 30, p. 24.
Ricciuti, M. (1995). Breaking the query barrier. InfoWorld, Vol. 17, No. 33, p. 20.
Schur, S. (1994). The database factory: Active database for enterprise computing. New York: John Wiley & Sons.
Shasha, D. (1992). Database tuning: A principled approach. Englewood Cliffs, NJ: Prentice Hall.
Shneiderman, B. (1992). Designing the user interface: Strategies for effective human-computer interaction (2nd ed.). Reading, MA: Addison Wesley.
Whitten, J., Bentley, L., & Barlow, V. (1994). Systems analysis & design methods (3rd ed.). Boston: Irwin.
Wilder, C. (1996). The Money Machine. Information Week, Issue 561, 28-35.