Chapter 2 Software Components
A Ph.D. Thesis by Andrew Le Gear
“Architecture starts when you carefully put two bricks together. There it
begins.”
-Ludwig Mies van der Rohe (German-born American architect, 1886-1969).
Though many definitions of components exist, no officially recognised standard definition that is sufficiently constrained exists at present (Hamlet, 2001). The word component has been used to describe procedures (Wilde and Scully, 1995), collections of reusable code (Mii and Takeshita, 1993), identified modules within systems (Cimitile and Visaggio, 1995; Girard and Koschke, 1997), class libraries (Zweben et al., 1995) and, more recently, black-box units of composition in software (Eddon, 1999; Ran et al., 2001). These are but a few understandings of components from a much wider list of contradictory viewpoints (Bachmann et al., 2000; McGurren, 2004; Szyperski, 2003; Johnson, 2002; Wang et al., 1999; Stevens and Pooley, 2000; Girard and Koschke, 1997; Chiricota et al., 2003; Cimitile and Visaggio, 1995; Cheesman and Daniels, 2001; Allen and Frost, 1998; Wilde and Scully, 1995; Ran et al., 2001; Mii and Takeshita, 1993).
More recently, Clemens Szyperski took a different approach to component definition (Szyperski, 2003). Rather than attempting to adopt a single interpretation that serves as a panacea, he instead categorised existing viewpoints on components into a four-tiered classification framework based upon their use and history of inception. This is summarised by table 2.1. The first two tiers in table 2.1 describe reuse only.
Terms that relate to more recent component-based development are introduced in tiers 3 and 4. The entries in this table are core issues of consideration when designing a state-of-the-art component (Woodman et al., 2001) and should be addressed when implementing component recovery. For example, the designer must decide whether the system needs to be able to undergo dynamic alteration (Buckley et al., 2003), be deployable, or is likely to be regarded as reusable again.
Szyperski refined this categorization to describe four tiers of maturity among component technologies (table 2.2), defining where the elements of reuse are available and whether they can be introduced dynamically.
Table 2.1: Motivations for software components described by four tiers.
Table 2.2: Levels of maturity among CBD Technologies.
The component technologies referred to in this report are concerned primarily with tier 3 of table 2.1 at maturity level 3.a.i, closed composition. That is, this thesis focuses on reuse identification and replaceability of components from the existing systems of an organisation. In doing so, it is likely that the reusable entities identified will be of most use in generating varietal systems for that organisation, or organisations in a similar domain, or alternatively they could be used in evolving the organisation’s current systems.
By confining the definition of a component in this report to tier 3 of table 2.1 at maturity level 3.a.i, our ability to precisely define the nature of the components relevant to this thesis is considerably clarified. Three core criteria are identified to characterise the nature of components:
• A black-box implementation of some functionality (Bass et al., 2000; Bachmann et al., 2000; Wallnau, 2003).
• May be reused “as-is” by a third-party consumer (Washizaki et al., 2002).
• Conforms to some component model (Councill, 2001).
This definition can be further expanded to account for the maintenance life cycle in evolving systems by additionally describing components as units of versioning and replacement (Szyperski, 2003; Cheesman and Daniels, 2001).
This definition should not be taken as firmly established yet. For example, debate surrounds the requirement that a component must be a black-box implementation (Cho et al., 2001). However, for the purposes of this thesis, we will consider these characteristics as defining components.
Core to components and the assembly of components into a component architecture is the concept of an interface (Cheesman and Daniels, 2001; Lau, 2001). The IEEE describes an interface as,
“A shared boundary across which information is passed ... To connect two or more components for the purpose of passing information from one to the other.” (IEEE, 1990)
This sixteen-year-old definition still carries weight in describing the underlying principles of interfaces. Expanding upon this, the shared boundary described is essentially a formalism for controlling dependencies between software implementations, where a software implementation could be an operating system’s modules, class libraries or even procedure libraries (Bachmann et al., 2000).
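To make the idea of an interface as a controlled dependency boundary concrete, the following hedged Java sketch (the names Logger, ConsoleLogger and Client are illustrative, not taken from the thesis) shows a client that depends only on the shared boundary, so the implementation behind it can be replaced without changing the client:

```java
// The interface is the shared boundary: information passes across it,
// and the Client depends on nothing but the boundary itself.
interface Logger {
    void log(String message);
}

// One concrete implementation behind the boundary; it could be swapped
// for another Logger without any change to Client.
class ConsoleLogger implements Logger {
    public void log(String message) {
        System.out.println("LOG: " + message);
    }
}

class Client {
    private final Logger logger;  // dependency expressed via the interface

    Client(Logger logger) {
        this.logger = logger;
    }

    String doWork() {
        logger.log("working");
        return "done";
    }
}
```

The design choice mirrors the IEEE wording: the boundary exists purely so that information can pass between the two parties while each remains ignorant of the other’s internals.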
Beugnard et al. (Beugnard et al., 1999) describe how complete interface specifications can be formed by categorising interface properties into four distinct levels (figure 2.1) (McGurren, 2004; Bachmann et al., 2000):
Figure 2.1: Levels of interface specification adapted from Beugnard et al. (Beugnard et al., 1999)
Syntactic Level: The format of method and function signatures, as prescribed by the grammar of a programming language, caters for this interface level. APIs already adequately cater for syntactic level interfaces.
Behavioral Level: A behavioral specification is a formal description of what should happen when a software artifact executes, and is often called a contract (Cicalese and Rotenstreich, 1999). Languages such as Eiffel (Eiffel Software, 2004) and OCL (Clark, 2002) support behavioral level interfaces.
Synchronisation Level: At this level properties describing component synchronisation, mutual exclusion, atomicity and transactions are specified. Java already implements a lightweight version of synchronisation through its “synchronised” keyword.
Quality-of-Service Level: The previous three levels reasoned about properties that could be precisely defined. The quality-of-service level, however, is concerned with quantifying component properties such as “average response” and “quality of result.” These are a measure of one’s trust in a component (Councill, 2001) and are usually specified by third-party certification.
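As a rough illustration, the first three levels can be mapped onto a single Java class (a hypothetical BoundedBuffer, not an example from Beugnard et al.): the signature supplies the syntactic level, runtime checks in the spirit of Eiffel contracts stand in for the behavioral level, and the synchronized keyword covers basic synchronisation; the quality-of-service level can only live in documentation:

```java
// Hypothetical sketch of the four interface levels on one class.
class BoundedBuffer {
    private final int capacity;
    private int size = 0;

    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    // Syntactic level: the method signature itself.
    // Synchronisation level: "synchronized" gives mutual exclusion.
    public synchronized void put(int item) {
        // Behavioral level: a precondition checked at runtime,
        // in the spirit of an Eiffel contract.
        if (size >= capacity)
            throw new IllegalStateException("buffer full");
        size++;
        // Quality-of-service level: a claim such as "put completes in
        // under 1 ms on average" cannot be expressed in the language;
        // it belongs in documentation or third-party certification.
    }

    public synchronized int size() {
        return size;
    }
}
```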
In section 1.3 component-based development was specifically cited as a means of curbing the problems associated with monolithic software development by explicitly placing reuse as core to the process. Component-based development can be simply defined as
“... the building of software systems out of prepackaged generic elements” (Meyer and Mingins, 1999), where “[this] involves the technical steps for designing and implementing software components, assembling systems from pre-built software components, and deploying assembled systems into their target environments.” (Bass et al., 2000)
A graphical description of the component architectural style, adapted from (Bachmann et al., 2000), can be seen in figure 2.2. It includes (Bachmann et al., 2000):
• Clearly defined interfaces on the components to describe the services that each component offers.
• The components assembled in accordance with clearly defined contracts that describe the interaction between component instances.
• Multiple instances of component types, which describe families of component instances in the same way that an object is an instance of a class. Each instance of these component types can be deployed either statically or dynamically, forming a component-based piece of software. A statically deployed component is deployed at implementation time; a dynamically deployed component is deployed at runtime.
• The combination of component types, their interfaces and an explicit description of their valid patterns of interaction, which forms a component model.
• The component model is supported by a component framework. A framework consists of a set of supporting services and other components that are useful, and sometimes necessary, in building the component-based application. The component framework provides an array of runtime services that enforce the component model.
Figure 2.2: A Generic Component Architecture adapted from Bachmann et al. (Bachmann et al., 2000).
With this architecture comes the potential to take software development from being a practiced craft to a fully fledged engineering discipline (Johnson, 2002; Whittaker and Voas, 2002) that includes the predictable assembly (Wallnau, 2003) of software systems. This component-based software engineering approach can be defined as
“... the practices needed to perform [component-based development] in a repeatable way to build systems that have predictable properties.” (Bass et al., 2000)
While this “holy grail” is yet to be achieved, positive research towards component-based software engineering (Wallnau, 2003; Cheesman and Daniels, 2001; Hamlet, 2001) and software support for component-based development principles (Eddon, 1999; Ran et al., 2001; Sadd, 2003) are emerging. Two examples are Progress Dynamics (Sadd, 2003) and the .NET framework (Microsoft, 2006a).
Several subtly different approaches to component-based development exist (Allen and Frost, 1998; Cheesman and Daniels, 2001; Wallnau, 2003), and these are often supported by existing component technologies (Ran et al., 2001; Eddon, 1999; Progress Software, 2003). Here, Cheesman and Daniels’ process of specifying component-based software is discussed to contextualise the research (Cheesman and Daniels, 2001). Built upon the UML notation, the process is portable to a wide variety of platforms and component technologies. (Cheesman and Daniels also extend the current UML notation to explicitly handle components; however, the focus of this discussion is on the process.)
All projects follow two processes simultaneously: a management process with a subservient development process (Cheesman and Daniels, 2001, page 25). The management process accounts for time constraints and the setting of milestones and goals.
Figure 2.3: The component-based development process adapted from (Cheesman and Daniels, 2001).
The development process, which we are concerned with, creates working software from requirements. The diagram in figure 2.3 describes Cheesman and Daniels’ component-based development process.
The process is driven by five workflows, with a workflow being a sequence of actions that produce an output of observable value (Kruchten, 1999):
Requirements: The requirements of the system are gathered and organised in a useful way. Two new artifacts are output by this workflow: the business concept model and the use case model. The business concept model is a conceptual model of the business domain that provides a common vocabulary to be used by software engineers and project managers in relation to the project. The use case model is a set of use cases describing all identified functional requirements of the system.
Specification: A complete set of requirements, a business concept model and the set of use case models are taken as input and combined with other existing information regarding software assets. This extra information could include existing documentation, designs, or recovered or existing software components. These are used to produce a complete set of component specifications and a component architecture as output. The component specifications describe, in detail, what component types will be required. The component architecture shows how instances of these types will interact. The specification workflow can be subdivided further into three major tasks:
Component identification: Taking the business concept model and the use case model as input, the component identification stage identifies an initial set of component interfaces and an architecture.
Component interaction: This stage examines how the system’s operations will be achieved using the identified component architecture, thus refining the output of the component identification stage.
Component specification: Detailed specifications for components are created along with an interface information model artifact. The interface information model describes operations, states and constraints that are enforced on the component.
Provisioning: The component specifications and architecture, taken as input, are used to determine the components that are available, the ones that must be built and the ones that must be bought. It is the job of the provisioning workflow to make available the required components for subsequent workflows. The reuse of components is explicitly catered for here as can be seen in figure 2.4. Furthermore, the reuse is not confined to components, and can include any existing software assets. The potential of reengineering towards components, from existing legacy applications, to supplement the provisioning workflow is the focus of this thesis.
Assembly: The components, a suitable user interface and existing software assets such as recovered components or components from a repository, are combined to form an application.
Testing and Deployment: During this workflow, standard testing and roll out of the new application occurs. Individual components will be unit tested and the entire assembly will be functionally tested.
The previous section introduced the concepts of Cheesman and Daniels’ component-based development process. In particular, it is suggested as a means of introducing the widespread reuse of software across systems. However, component-based development remains a relatively new concept. This implies that the majority of existing software is written using different or even obsolete development paradigms. This existing source code should somehow be exploited for reuse in modern component-based development processes, since it is prohibitively difficult to reimplement the implicit business rules in the existing system (Verhoef, 2000). The provisioning workflow (figure 2.3), whose task it is to make components available for subsequent development, caters explicitly for just such exploitation.
Figure 2.4 expands the provisioning workflow presented in figure 2.3, showing that components may be acquired from three sources:
- Components may be bought from external vendors.
- Components may be built.
- Components may be reused from a repository of existing components.
Figure 2.4: A detailed description of the provisioning workflow.
Of particular interest is the ability to take existing software components from a repository for reuse. This repository may be established in advance, particularly as part of a development philosophy such as product line software development (Bergey et al., 2000; Simon and Eisenbarth, 2002; Eisenbarth and Simon, 2001). Alternatively, this repository could be populated by recovering components from existing software: legacy source code can be reengineered towards components that then populate the repository. Reengineering legacy systems towards components for reuse in existing or new systems is the primary concern of this thesis.
Encapsulation is a means of reducing interdependence between parts of a software system (Snyder, 1986). By applying encapsulation to portions of software appropriately, increased ease of development can be afforded to software engineers (Zweben et al., 1995). This section explores the evolution of encapsulation in software development, describes encapsulation in detail, discusses the core quality measures for encapsulation - coupling and cohesion - and finally discusses why component-based development is yet another improved means of development that better supports encapsulation in software.
2.5.1.1 Monitors
During the early 1970s Hoare introduced the concept of monitors as a means of controlling access to procedures and local data in a running program (Hoare, 1974). A monitor can be declared according to the following template:

monitorname: monitor
begin
    ... declarations of data local to the monitor
    ... procedure declarations
end
The monitor construct allows any number of processes in the operating system to request access to the monitor source code. However, no more than one process is ever allowed to be executing the source code, or accessing the local data, of the monitor at any particular time. In this fashion a monitor achieves process encapsulation by grouping operations and data that should only be executed together.
The grouping of data and procedures would be referred to simply by the monitor name, hence achieving the desired abstraction effect afforded by encapsulation.
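Hoare’s template maps almost directly onto Java’s synchronized methods. The sketch below (a hypothetical counter, not taken from Hoare’s paper) shows local data hidden behind the monitor name, with at most one thread executing inside the monitor at a time:

```java
// A monitor in the Hoare style: the data is local to the monitor and
// the synchronized methods ensure at most one thread executes inside
// the monitor at any particular time.
class CounterMonitor {
    private int count = 0;  // local data, reachable only via the monitor

    public synchronized void increment() {
        count++;
    }

    public synchronized int read() {
        return count;
    }
}
```

As in Hoare’s template, clients refer to the grouping of data and procedures only through the monitor’s name, giving the abstraction effect described above.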
2.5.1.2 Information Hiding
At roughly the same time as Hoare published his work on monitors (Hoare, 1974), Parnas introduced the concept of information hiding (Parnas, 1971, 1972, 2002). Information hiding, similar to the process encapsulation afforded by Hoare’s monitors, advocates hiding portions of a program’s data, and the operations associated with that data, behind a defined interface (Parnas, 2002, 1972). Unlike Hoare’s monitors, information hiding advocates encapsulation and abstraction of the static structure of the program, rather than encapsulation in terms of the running processes of the operating system.
The principles of information hiding provide the necessary basis for dividing a software system into modules, hiding the complexity of the system and interacting through well defined interfaces (Wikipedia, 2006b). This form of encapsulation would eventually form the basis for mainstream software development and design.
2.5.1.3 Object Oriented Programming
Object oriented programming is a style of programming that supports the concepts of information hiding as first class language constructs. The first object oriented programming languages emerged during the 1960s with Simula (Simula, 2006). Object oriented programming introduces modern programming concepts such as inheritance, polymorphism and, most relevant to this discussion, information hiding through data encapsulation. This encapsulation is afforded through the class construct in object oriented languages.
As research began to highlight the importance of information hiding (Parnas, 1972; Zweben et al., 1995) and the need to promote these concepts during software development, object oriented languages, with their explicit use of data encapsulation, began to rise in popularity. By the early 1990s object oriented programming had gained widespread acceptance in software development, and it has been shown to provide significant benefits in ease and manageability of development (Zweben et al., 1995).
Object oriented languages achieve better encapsulation over their non-object oriented counterparts by providing several key language concepts to the programmer:
• The ability to group related operations and data using the class construct.
• The ability to limit access to methods and data to a given scope. This is achieved using keywords such as public, private and protected.
• The ability to abstract over related class types using inheritance and polymorphism.
One possible interpretation of encapsulation is that its purpose is to protect portions of a system from operations and data that are irrelevant to those portions. In the non-object oriented code example in figure 2.5 no such protection is put in place. All four procedures potentially have access to all the data of the code fragment, in spite of the fact that “procedure1” and “procedure2” only access variables r, s, t and u, while “procedure3” and “procedure4” only access v, w, x, y and z. A clear division between the data and the operations that act over that data exists in the code fragment; however, no first class entity of the programming language exists that makes explicit, or enforces, this encapsulation of data. Furthermore, within our two divisions of the code fragment, “procedure2” and “procedure4” are only ever accessed via “procedure1” and “procedure3” respectively, and never from “main”. Yet, as the fragment currently stands, the potential to access “procedure2” and “procedure4” from “main” does exist. The language offers no means of encapsulation that would “hide” “procedure2” and “procedure4” from “main”.
int r, s, t, u, v, w, x, y, z;

void procedure1(void);
void procedure2(void);
void procedure3(void);
void procedure4(void);

int main()
{
    procedure1();
    procedure3();
    return 0;
}

void procedure1() {
    r = s + t;
    procedure2();
}

void procedure2() {
    u = r + s + t;
}

void procedure3() {
    v = w + x;
    procedure4();
}

void procedure4() {
    x = y;
    y = v;
    z++;
    z += v;
}
Figure 2.5: A code fragment from a C program.
Object oriented languages provide these needed language constructs. The code example in figure 2.7 is a revised version of the code example in figure 2.5 that makes appropriate use of the class and scoping constructs of C++. Using the class construct, the separate variable groupings mentioned above have been placed into separate classes and grouped with the procedures that operate on those variables. The variables of these classes have been scoped as private, since we wish to encapsulate this data within the scope of the class and deny access to it from outside the class. Likewise, one procedure within each of the classes has been marked as private (“procedure2” and “procedure4”) because no procedures outside of their respective classes access these procedures. Notice, in particular, how even tighter scoping can be achieved by declaring certain variables as local variables within the methods. In this case u is declared locally in “procedure2,” w is declared locally in “procedure3” and y and z are declared locally in “procedure4.” This is because those variables are used exclusively by the procedures that they are now declared in. The classes themselves, and the procedures within them that we wish to provide access to, are marked with the “public” modifier. This allows access to these elements program wide. The result of these measures is a reduction, through information hiding, in the list of operations and data that classes and procedures can access. This encapsulation results in a reduction in complexity when creating and maintaining the application, by separating the concerns of the program into explicitly scoped groupings of data and associated operations, accessible only through clearly defined interfaces. In figure 2.6 a visualisation of this encapsulation is shown to help clarify what has been achieved.
Further encapsulation benefits are provided by object oriented languages through effective use of the object oriented concepts of inheritance and polymorphism. Inheritance allows one to define a hierarchy of class types in a program. The inheriting type inherits the characteristics (data and operations) of the type it inherits from. Take the “Animal” inheritance hierarchy in figure 2.8, which is visualised in figure 2.9, this time using Java syntax (Sun Microsystems, 2006).

Figure 2.6: A visualisation of the encapsulation exercise shown in figure 2.7.

The hierarchy describes a set of animals that share some characteristics and become more specialised as we descend the hierarchy. Note carefully the use of the “protected” modifier. “private,” as we saw, limits access to the enclosing class. “protected,” however, limits access to the enclosing class and any classes that inherit from that class. Classes outside of the inheritance hierarchy still have no access to the protected members of the class.
Polymorphism is a feature of object orientation that operates on inheritance hierarchies and provides the ability to treat a derived class just like its parent class, sometimes to the extent that the derived class’s use becomes invisible to the programmer. This encapsulation effect is demonstrated in the code example in figure 2.10. The code example models a scenario where an animal is caught in the “Wild,” brought to a “Clinic” to be treated and then put into captivity in a “Zoo.” This example makes use of the inheritance hierarchy in figure 2.8. Notice how the “capture” method will capture a specific type of animal depending on the circumstances. However, the type of the animal is not of any concern to the “Clinic” class, as the clinic will treat any type of animal and place it in the “Zoo.” Using polymorphism, this form of information hiding can be achieved. All operations in the “Clinic” occur on the type “Animal,” and the “Clinic” class remains agnostic to the actual type of the instance it is dealing with. In this way encapsulation over an inheritance hierarchy can be achieved, shielding, where possible, portions of the program from the complexities of the type hierarchies.
class MyClass1 {
private:
    int r, s, t;

    void procedure2() {
        int u;
        u = r + s + t;
    }

public:
    void procedure1() {
        r = s + t;
        procedure2();
    }
};

class MyClass2 {
private:
    int v, x;

    void procedure4() {
        int y, z;
        x = y;
        y = v;
        z++;
        z += v;
    }

public:
    void procedure3() {
        int w;
        v = w + x;
        procedure4();
    }
};

int main()
{
    MyClass1 cl1;
    MyClass2 cl2;
    cl1.procedure1();
    cl2.procedure3();
    return 0;
}
Figure 2.7: A revised version of the code fragment in figure 2.5.
class Animal {
protected int morale = 0;
public void raiseMorale() {
morale++;
}
public void decreaseMorale() {
morale--;
}
}
class Biped extends Animal { }
class Quadruped extends Animal { }
class Monkey extends Biped { }
class Orangutan extends Biped { }
class Dog extends Quadruped { }
class Cat extends Quadruped { }
Figure 2.8: An inheritance hierarchy code sample.
Figure 2.9: A visualisation of the inheritance hierarchy in figure 2.8.
By the late 1970s researchers had begun to arrive at a consensus regarding the merits of encapsulation and abstraction during software development and design. The focus next began to shift to how to assess the quality of encapsulation. This led to two commonly accepted measures of encapsulation quality - coupling and cohesion. These indicators of “good” design were conceived over thirty years ago (Stephens et al., 1974). Coupling is the degree of interdependence between components or modules, and cohesion is the extent to which an individual component or module’s individual parts are needed to perform the same task (Yourdon and Constantine, 1979). Low coupling and high cohesion often indicate a more replaceable (and reusable) component and, by measuring coupling and cohesion, we can get an indirect measure of replaceability and reusability. We define and measure coupling between two modules in terms of the type and degree of communication between them (Fenton, 1991).
Figure 2.11 is an example of a recovered component (“Transforms”) from later in the thesis. Notice the high number of internal connections within the components relative to the interconnections between the components. In line with the use of encapsulation suggested in the previous section (section 2.5.1.3), it could be said that this component is well encapsulated, since many calls that are irrelevant to clients are encapsulated in the component, and the number of calls between the components is minimised.
public class Zoo {
    public void incarcerate(Animal animal) {
        if (animal instanceof Dog) {
            Dog dog = (Dog) animal;
            dog.raiseMorale();
        } else {
            Monkey monkey = (Monkey) animal;
            monkey.decreaseMorale();
        }
    }
}

public class Wild {
    String loc;

    Wild(String location) {
        loc = location;
    }

    public Animal capture() {
        if (loc.equals("Europe"))
            return new Dog();
        else
            return new Monkey();
    }
}

public class Clinic {
    public static void main(String[] args) {
        Wild theWild = new Wild("Europe");
        Zoo theZoo = new Zoo();
        Animal capturedAnimal = theWild.capture();
        treatAnimal(capturedAnimal);
        theZoo.incarcerate(capturedAnimal);
    }

    static void treatAnimal(Animal theAnimal) {
        theAnimal.raiseMorale();
    }
}
Figure 2.10: Using polymorphism for encapsulation code example.
Figure 2.11: An example of a loosely coupled and highly cohesive component.
Correspondingly, these components are loosely coupled due to the low interdependence between them, and they display high cohesion by virtue of the fact that there is a large number of internal, hidden calls relative to the inter-component calls.
Fenton (Fenton, 1991) describes a classification of six types of coupling between two modules x and y, which can be arranged by increasing strength (0-5) and are based upon the type of communication between the two modules:
• 0: x and y have no communication; that is they are totally independent of one another.
• 1: x and y communicate by parameters, where each parameter is either a single data element or a homogeneous set of data items that incorporate no control element. This type of coupling is necessary for any meaningful communication between modules.
• 2: x and y accept the same record type as a parameter. This type of coupling may cause interdependency between otherwise unrelated modules.
• 3: x passes a parameter to y with the intention of controlling its behavior; that is the parameter is a flag.
• 4: x and y refer to the same global data. This type of coupling is undesirable; if the format of the global data must be changed, then all common coupled modules must also be changed.
• 5: x refers to the inside of y; that is, it branches into, changes data in, or alters a statement in y.
He continues to provide a calculable metric for coupling:

c(x, y) = i + n / (n + 1)

where c is the coupling between the two modules x and y, i is the level of coupling on the six-part scale and n is the number of interconnections between x and y.
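Assuming Fenton’s usual formulation c(x, y) = i + n/(n + 1), the metric can be sketched in a few lines of Java; the integer part records the strongest coupling type present, while the fractional part grows with the number of interconnections:

```java
// A sketch of Fenton's coupling metric, assuming the formulation
// c(x, y) = i + n / (n + 1): i is the strongest coupling level (0-5)
// between the two modules and n is the number of interconnections.
class CouplingMetric {
    static double coupling(int i, int n) {
        if (i < 0 || i > 5 || n < 0)
            throw new IllegalArgumentException("need 0 <= i <= 5, n >= 0");
        return i + (double) n / (n + 1);
    }
}
```

For example, two modules whose strongest link is control coupling (level 3) with one interconnection score 3.5, while fully independent modules score 0.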
To measure cohesion, Yourdon and Constantine (Yourdon and Constantine, 1979) provided a seven-point scale of decreasing cohesion. Functional cohesion, where the module performs a single well defined function, is the best; subsequent items are presented here in order of decreasing cohesion:
• Functional: the module performs a single well defined function.
• Sequential: the module performs more than one function, but they occur in an order prescribed in the specification.
• Communicational: the module performs multiple functions, but all on the same body of data (which is not organised as a single type or structure).
• Procedural: the module performs more than one function, and they are related only to a common procedure of the software.
• Temporal: the module performs more than one function and they are only related by the fact that they must occur within the same timespan.
• Logical: the module performs more than one function, and they are related only logically.
• Coincidental: the module performs more than one function and they are unrelated.
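The two ends of the scale can be contrasted with a small hypothetical Java example (the class names are illustrative, not from Yourdon and Constantine): a functionally cohesive module in which every line serves one function, against a coincidentally cohesive module whose functions merely share a file:

```java
// Functional cohesion: the module performs a single well defined
// function; every part contributes to computing an absolute value.
class AbsoluteValue {
    static int of(int x) {
        return x < 0 ? -x : x;
    }
}

// Coincidental cohesion: the functions are unrelated and share the
// module only by accident of packaging.
class MiscUtils {
    static int stringLength(String s) {
        return s.length();
    }

    static int doubleIt(int x) {
        return 2 * x;
    }
}
```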
As stated earlier in this chapter, software components and component-based development claim to provide a better means of software development than the current state of the practice in the software industry. Software components are intended to build upon existing object oriented technologies (Meyer and Mingins, 1999) by adding to and improving the means of encapsulation during development (Meyer and Mingins, 1999; Cheesman and Daniels, 2001).
A software component may constitute any number of classes. Thus, encapsulation can be implemented on a much larger scale. This is an important feature of encapsulation that becomes desireable as an application or domain being modeled grows large. Take, for example, the classes in the diagram in figure 2.12. The edges in the diagram represent dependencies in the program between the classes. After brief examination of the diagram you will notice that two distinct groupings of classes exist (d,e,f,g,h,u,v and w,x,y,z,a,b,c) and that all communication between these two groupings only occurs via three classes (u,v,w). Similar to the problem posed in figure 2.5 in the previous section, a mechanism for aggregating classes and hiding complexity through encapsulation would be useful (see figure 2.13). Software components provide this explicit construct as a first class entity. Unlike packages in Java, for example, which also may be considered in this light, typical component technologies provide one or several explicit,
Figure 2.12: Many classes without an encapsulation policy.
localised interfaces that declare the public services of the component, thus increasing encapsulation. When the two groupings are encapsulated, as in figure 2.13, the interface of the component (the classes public with respect to the component) becomes u, v and w. By making explicit which classes are public and which are private to a component, one removes the potential for breaking the desired encapsulation. Earlier, in section 2.3, it was noted how a component framework offers an array of runtime services to the programmer. One such service is an event-based model of programming. In such a model, client code may register with the component to listen for a specific event that occurs in the component. When such an event is fired the client
code can respond to the event by invoking a specified procedure. Figure 2.14 shows a sample C#.Net code fragment (Microsoft, 2006a) that demonstrates client code registering to listen for a “ComponentShutDown” event. In this circumstance, when the component is shut down, the component (“Component”) will raise an event. Because the client code has registered to listen for this event, it will notice that the event has occurred and respond by invoking its own procedure (“ClientProcedure”). Better encapsulation of the state of the component is achieved by hiding more of the component’s internal data behind the event-based model. The alternative to using such a model
Figure 2.13: Many classes from figure 2.12 encapsulated by a component.
public class ClientCode {
    public ClientCode() {
        // Registers client to listen for an event with the component.
        Component.ComponentShutDownEvent +=
            new EventHandler(ClientProcedure);
    }

    public void ClientProcedure(object sender, EventArgs e) {
        // Some operations that respond to the event here.
    }
}
Figure 2.14: Event handling code sample.
would have been to have the client source code continually poll the component to check for a change of state. Instead, with the event-based model, the client code becomes a passive entity and the relationship between client and component is inverted. The state of the component is no longer a concern of the client code until it is informed by an event. The onus is on the component to provide clients with notification of an event and information about that event (including changes in state) via the “EventArgs e” argument passed by the component to the client code in figure 2.14.
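The same inversion can be expressed outside the C#.Net event model. The following Java sketch (all names hypothetical) achieves it with a listener interface: the component, not the client, drives the notification, and the client remains passive until called back:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface; the String argument plays the role of
// the "EventArgs" data passed from component to client.
interface ShutDownListener {
    void onShutDown(String reason);
}

// Hypothetical component that notifies registered listeners when it shuts
// down, so clients never need to poll its internal state.
class NotifyingComponent {
    private final List<ShutDownListener> listeners = new ArrayList<>();

    void addShutDownListener(ShutDownListener l) {
        listeners.add(l);
    }

    void shutDown(String reason) {
        // The component drives the notification; clients stay passive.
        for (ShutDownListener l : listeners) {
            l.onShutDown(reason);
        }
    }
}
```

A client simply registers a callback, e.g. `component.addShutDownListener(r -> log(r));`, and is only invoked when the event actually occurs.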
Figure 2.15: A deployment diagram of two distributed components.
A form of geographic or topological encapsulation is also provided by a component framework. Component frameworks such as J2EE or .Net provide a mechanism for components on different machines, potentially in very different geographic locations, to register with the framework (the specific names for these types of component are Enterprise Java Beans (J2EE) and Web Services (.Net)) (Microsoft, 2006a; Inc, 2005). This makes a component available for use in a distributed fashion; however, calls to the component may be made as though it were on the same machine. Take, for example, two components on different machines, as shown in the deployment diagram in figure 2.15. Once properly registered with the framework, a method call between the two components could be as simple as that shown in figure 2.16. In this way, information regarding the location of components, and the information required to reach them over a network, is encapsulated by the component and the framework and completely hidden from clients of that component.
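The registration idea underlying this transparency can be sketched in miniature. Real frameworks (e.g. JNDI lookups in J2EE) are far richer, but the following Java fragment, with hypothetical names and an in-process map standing in for the framework's distributed registry, shows the shape of register-then-lookup:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical registry: components register under a name, and clients look
// them up by name without knowing where the implementation actually runs.
// A real framework would resolve the name to a network location; here a
// simple in-process map stands in for that machinery.
class ComponentRegistry {
    private static final Map<String, Object> registry = new HashMap<>();

    static void register(String name, Object component) {
        registry.put(name, component);
    }

    static Object lookup(String name) {
        return registry.get(name); // null if no component registered
    }
}
```

A client asks the registry for "payroll" and receives a usable reference; whether the component lives in the same process or on another continent is the framework's concern, not the client's.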
Also noted in section 2.3 was that components communicate with each other through clearly defined interfaces. The intention is that the only knowledge we have about a component should come through its interface, and that all other information about the component should be encapsulated, including the language in which the component was originally implemented. This suggests that a component-based system could potentially be composed of many components written in many different languages. The .Net component framework, for example, supports the definition of components in over 50 different languages (Ritchie, 2006).
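Programming against an interface rather than an implementation is what makes this language-independence possible. The following Java sketch (all names hypothetical) shows a client that depends only on an interface; everything behind it, including which language actually implements it in a cross-language framework such as .Net, is encapsulated:

```java
// Hypothetical interface: the client's entire knowledge of the component.
interface Spellchecker {
    boolean isCorrect(String word);
}

// One possible implementation; the client never names this class directly,
// so it could be replaced by one written in another supported language.
class SimpleSpellchecker implements Spellchecker {
    public boolean isCorrect(String word) {
        // Toy rule: a word is "correct" if it contains only letters.
        return !word.isEmpty() && word.chars().allMatch(Character::isLetter);
    }
}

// The client receives an implementation through the interface alone.
class SpellcheckClient {
    private final Spellchecker checker;

    SpellcheckClient(Spellchecker checker) {
        this.checker = checker;
    }

    boolean check(String word) {
        return checker.isCorrect(word);
    }
}
```

Swapping the implementation requires no change to `SpellcheckClient`, mirroring how a component's clients are insulated from its internals.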
//Component1 definition
.
.
.
// definition of class within the component
public class MyClass {
public void someMethod() {
Component2.Component2Class component2Class
= new Component2.Component2Class();
component2Class.component2Method();
}
}
.
.
.
// remainder of component.
Figure 2.16: A call between the distributed components shown in figure 2.15.
Component Reconn-exion 2006 by Andrew Le Gear