INTANGIBILITY OF SOFTWARE
The instability and intangible nature of software have heightened perceptions of risk in software project organizations [5]. An intangible system is one whose services are not visible and therefore cannot be touched. The consequences of intangibility for clients and managers are numerous; in particular, it makes the software very difficult to evaluate [6]. Uncertainty in a software project is the main cause of the problems associated with software project management [7]. The degree of intangibility of a system determines the level of associated risk: if the performance of the software cannot be specified quantitatively, it is very hard for developers to move in the right direction, precisely because software is intangible [8]. Software engineers have therefore developed ways of making software tangible. Some of the methods developed for this conversion are:
· Use of modeling tools
· Use of diagrams
· Use of documents
The use of these methods ensures that the project can be visualized to an extent. Since a software system is intangible in nature, various problems are faced in its management. Some of these problems are:
· Buggy software products and customers who are unsatisfied with the products they receive.
· Customers' inability to know all the requirements in advance.
Buggy software products are partly due to software's intangibility: testing an intangible product is not an easy or trivial task. This nature of software also leads to confused communication of requirements between customers and developers.
The main reason why there is a managerial problem with the software process is that software is a unique intangible entity with which the human mind finds it difficult to cope. Therefore, our ability to deal with such intangible artifacts is limited by the extent to which we can model the project. Software is not only an intangible product; it is also executed on a computer. This means that the developed artifact is not the one that is actually used by its users. The same argument that highlights software's uniqueness applies equally well to other intangible artifacts, such as knowledge and the emerging discipline of knowledge management.
Because software development is human-resource intensive, it requires careful management of workers to ensure that everything works well. It is always very hard to know the actual number of workers needed to develop a project. For that reason, the amount of human capital needed to execute a project poses a challenge to management, since the intangibility of software makes the project difficult to estimate.
Another challenge facing management in software projects is how to reduce communication overhead between workers. Because software is an intangible entity, it increases the need for verbal communication among teammates and between customers and developers. The time spent communicating normally occupies a large percentage of work time and therefore reduces productivity.
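The growth of this overhead is often illustrated with the number of pairwise communication channels in a team, n(n-1)/2; the sketch below uses this well-known formula, which is illustrative and not drawn from the cited references:

```python
def communication_channels(n: int) -> int:
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

# Overhead grows quadratically: doubling the team far more than
# doubles the number of channels that must be kept in sync.
for team_size in (3, 6, 12):
    print(team_size, communication_channels(team_size))  # 3, 15, 66
```

This is one reason adding people to a software project does not reduce communication load proportionally.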
Another problem is that when a team works on an intangible entity, it is harder to build a relationship of trust. The reason lies in the common belief that knowledge is power, with both its positive and negative aspects: it is easier to hide knowledge when the subject being dealt with is an intangible entity.
Another managerial implication of the nature of software is the difficulty of estimating the resources needed to carry out a project. Since software is intangible, it is very difficult to accurately estimate the cost of a project from the beginning. Estimating software is hard for several reasons: big differences in individual productivity; the fact that creative processes are difficult to plan; the fact that software is intangible and therefore difficult to measure; and the fact that, during the life of the project, anything can change: the scope of the product, the budget, deadlines or, as often happens in the software world, the requirements. For these reasons, estimates always carry a degree of uncertainty. Uncertainty is usually described and managed using probabilities: an estimate typically comes with a best, a worst, and a likely outcome, with some probabilities attached. Single-point estimates always have a probability of less than 100% (usually closer to 0% than to 100%). Therefore, software development, given the intangible nature and uniqueness of software, is inherently difficult to estimate and schedule [5].
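The best/worst/likely idea above is commonly operationalized as a three-point (PERT-style) estimate; the formula and the effort figures below are a standard illustration, not prescribed by the cited sources:

```python
def three_point_estimate(best: float, likely: float, worst: float) -> float:
    """PERT-style expected value: a weighted mean favouring the likely outcome."""
    return (best + 4 * likely + worst) / 6

# Hypothetical effort figures in person-days.
expected = three_point_estimate(best=20, likely=30, worst=70)
print(expected)  # 35.0
```

Note how the wide worst case pulls the expected value above the likely one, making the estimate's uncertainty explicit rather than hiding it in a single point.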
Another implication is the uncertainty of the requirements, due to the nature of software. The customer supplies requirements as they occur to him, which makes planned management nearly impossible. This uncertainty leads to excess personnel turnover and therefore presents risk [5]. Since software is intangible, as the project progresses more and more features emerge that were not identified at the beginning of the project and that threaten estimates and timelines. This compounds the problem of managing software projects.
SOFTWARE SCALABILITY
The term scalability applies to many areas of human endeavour: software engineering, computer engineering, academics, and so on. It is not strange to see a student scaling up to achieve better results. Many scholars have attempted to define scalability, and the definition given by a scholar depends on his perspective. According to Ian Sommerville [1], a system is scalable if its capacity can be increased by adding new resources, so that the system can cope with new demands; he based his definition on distributed systems. According to Progress Software Corporation [2], however, scalability is a desirable property of a system, a network, or a process, indicating its ability either to handle growing amounts of work gracefully or to be readily enlarged. For example, it can mean the capability of a system to increase total throughput under an increased load when resources (typically hardware or software) are added. The term can also be used in a commercial context, where the scalability of a company implies that its underlying business model offers the potential for economic growth within the company. Scaling a system is also called upgrading.
Dimensions of scalability
The concept of scalability has two dimensions: load scalability and administrative scalability.
Load scalability
Load scalability deals with a system’s ability to handle increasing levels of demand and throughput effectively, in a non-disruptive way. By non-disruptive we mean that additional resources (servers, processors, RAM, disk space, bandwidth, etc.) can be added in a predefined way, so that as demand increases, the environment can be expanded to handle the increase. By contrast, a system that has not been designed to be scalable may need to go through a re-configuration or a disruptive re-architecture when demand reaches a certain point. It is therefore good practice to think through how a solution will scale from the beginning, to avoid unexpected future bottlenecks.
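The idea of expanding the environment in a predefined way as demand grows can be sketched with a simple capacity calculation; the request rates, per-node capacity, and 70% utilisation target below are illustrative assumptions:

```python
import math

def nodes_needed(demand_rps: float, capacity_per_node_rps: float,
                 target_utilisation: float = 0.7) -> int:
    """Nodes required so that each node runs at or below the target utilisation."""
    return math.ceil(demand_rps / (capacity_per_node_rps * target_utilisation))

# If demand grows from 400 to 1000 requests/s, capacity is expanded
# in a predefined way instead of re-architecting the system.
print(nodes_needed(400, 250))   # 3
print(nodes_needed(1000, 250))  # 6
```

Planning against a utilisation target rather than raw capacity leaves headroom for bursts, which is what makes the expansion non-disruptive.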
Administrative scalability
Administrative scalability has to do with the ability of company management to manage multiple customers effectively in a single environment. The ability to deliver consistent levels of service to each customer as the customer base grows is a function of how effective the administrative systems are that the staff use to manage the customers. It is therefore useful to think about how billing, operations, and support will grow over time to meet customer demand. An example of administrative scalability in companies is adding more staff to an existing company to serve customers better.
Scalability methods
The methods of scaling generally fall into two categories: vertical scaling and horizontal scaling. A clear understanding of the distinction between these two methods will help in making the right choice when considering alternatives for scalable implementations.
Vertical scaling
Scaling vertically, sometimes referred to as scaling up, means adding more resources to the existing resources in a system to boost its capacity. In terms of an application infrastructure, this might involve adding processors, RAM, or disk space to a server [2]. Planning for vertical scaling of hardware usually includes adding redundancy for all components, to ensure that the machine keeps running if a given component fails. Another way to look at vertical scaling is to add more software servers to a single hardware server.
Horizontal scaling
Scaling horizontally, sometimes referred to as scaling out, means adding more nodes to a system; for example, adding another server to an application server cluster [2]. A cluster is a group of systems coupled together, generally via a fast network, so that they are aware of each other. Work is distributed to the individual systems in the cluster, and if one becomes unavailable, the others can handle its processing seamlessly.
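Distributing work across cluster nodes can be sketched with a toy round-robin dispatcher; the node names and the `Cluster` class are hypothetical, and real clusters use far more sophisticated load balancing and failover:

```python
import itertools

class Cluster:
    """Toy round-robin dispatcher: work rotates over the available nodes."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def scale_out(self, node):
        # Horizontal scaling: capacity grows by adding a node,
        # without touching the existing ones.
        self.nodes.append(node)

    def dispatch(self, n_requests):
        rr = itertools.cycle(self.nodes)
        return [next(rr) for _ in range(n_requests)]

cluster = Cluster(["app1", "app2"])
cluster.scale_out("app3")
print(cluster.dispatch(6))  # ['app1', 'app2', 'app3', 'app1', 'app2', 'app3']
```

The key property is that `scale_out` changes only the node list: the dispatch logic, and the clients behind it, are unaware of how many nodes exist.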
Scalability metrics
Scalability is a many-sided property which can be captured in a scalability metric. A scalability metric balances cost, volume, timeliness, and other attributes of value in the system as a function of its size [3]. Because of the cost of scaling most systems, attention is always paid to scaling the module with the greatest benefit. Studies of typical metrics can reveal which parts of the infrastructure are most critical for scalability. The difficult part of a scaling metric is the performance calculation needed to predict QoS; the throughput and cost put it into perspective, as one can often improve QoS by enhancing the system. The metric emphasizes that scalability is an economic concept as well as a technical challenge. The analysis may include optimization of the configuration at each scale factor, by adjusting the investment in different components, the distribution of the load, or any other parameters of the configuration. It is a very general framework, but so far it has been applied only to statically configured layered systems of servers for telecom applications.
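One common way to formalize such a metric is to define a productivity F as throughput times a QoS value divided by cost, and to judge scalability by how F changes with the scale factor. The sketch and the measurement figures below are an illustrative assumption, not necessarily the exact metric of [3]:

```python
def productivity(throughput: float, qos_value: float, cost_rate: float) -> float:
    """F(k) = lambda(k) * f(k) / C(k): value delivered per unit cost at scale k."""
    return throughput * qos_value / cost_rate

def scalability(f_small: float, f_large: float) -> float:
    """Ratio of productivities: > 1 means the system scales well,
    < 1 means returns diminish as the system grows."""
    return f_large / f_small

# Hypothetical measurements at scale factors k=1 and k=2:
# doubling the cost less than doubles throughput and slightly degrades QoS.
f1 = productivity(throughput=100, qos_value=0.90, cost_rate=10)  # 9.0
f2 = productivity(throughput=190, qos_value=0.85, cost_rate=20)  # 8.075
print(round(scalability(f1, f2), 3))  # 0.897: sub-linear scaling
```

This makes the economic point concrete: even when raw throughput grows, a metric that folds in cost and QoS can reveal that scaling the system is losing value.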
Examples of scalability
1. Adding processors, RAM, or disk space to a computer to improve its performance.
2. Adding more staff to an existing company to serve customers better.
3. Adding more capacity to a website to accommodate new users.
COTS AND BESPOKE SOFTWARE
Most application software in use today consists of commercial off-the-shelf (COTS) packages, which are produced in large quantities and sold to consumers by software vendors. With COTS software, the same application is run by thousands or even millions of users, unlike bespoke software, which is custom-made for a given client. The number of people using the same application determines the cost of the software and the degree of attention given to users when a problem is encountered. A bespoke application is produced when a client identifies a problem to be solved and approaches a software developer to develop a solution. There are various advantages and disadvantages to both off-the-shelf and bespoke applications.
COMPARISON BETWEEN COTS AND BESPOKE SOFTWARE
Large software companies such as Microsoft and Adobe produce a massive range of off-the-shelf software for a variety of purposes and audiences. Bespoke software provides business and commercial benefits for clients by conferring significant competitive advantage, and bespoke applications are generally easier to use and to work with; however, they are more expensive compared to COTS. COTS and bespoke software have individual characteristics which differentiate or relate them. The two types of application software can be compared and contrasted using the following properties:
Quality
COTS software is generally of lower quality than bespoke software, at least for typical challenges and typical solutions, because the users are not consulted before development. In the long run, however, the developer will receive enough feedback to help improve the quality of the product. Custom software development by a reputable supplier, by contrast, is often a matter of building a house upon a solid foundation and, if managed properly, it is possible to do this quickly and to a high standard.
Number of customers
Bespoke software is specially developed for a specific organization or set of users, whereas COTS software is developed for the mass market. Custom software can be produced as free software, developed by an in-house software development group, or commissioned from an external software development company [1]. Because custom software is made for a single customer, it does not require marketing activities from the developers.
Customer satisfaction
Since custom software is developed for a single customer, it can accommodate that customer's particular preferences and expectations. Custom software may be designed stage by stage, allowing all nuances and possible hidden dangers to be taken into account, including issues that were not mentioned in the specifications. COTS software, meanwhile, cannot accommodate every customer's requirements.
Security
The degree of security offered by software is very important when it is needed for sensitive transactions such as banking activities. Most COTS software has low security compared to bespoke software, which is why most security-sensitive organizations prefer custom-made software.
Complexity
COTS software is usually more complex in functionality and usage compared to bespoke software. Because of the number of customers expected to use COTS software, many functionalities and settings are included to suit each user to some extent, unlike bespoke software, which only has to satisfy one customer's requirements [2].
Control
A major advantage of bespoke components is that the customer can control the functionality, interfaces, schedule, upgrade path, and so on. Conversely, with COTS items, the customer has little influence on the quality of the product; at times, the feedback given by a customer is simply ignored by the developer. By its nature, COTS software is subject to much more variation in functionality and interfaces than a relatively constrained item such as a processor chip [2].
Visibility
Bespoke software can be examined to ensure it does not include any feature that would breach the security and safety of the client organization. In contrast, much COTS software has hidden loopholes that can only be identified in the long run. It is generally accepted these days that obscurity is not the same as security and that, on balance, the interests of security are best served by transparency. Again, visibility is an issue that is particularly relevant for software, since software is generally much more complex and flexible than hardware.
Cost
COTS software is cheap to buy and own compared to bespoke software: with a large user community, the cost of development and maintenance is spread across all users. It will typically have better help pages, and may well have plenty of web content publicly and freely available to help users get the most out of the software [3].
Feature richness
COTS software is more feature-rich than bespoke software. It contains many features intended to approximately accommodate the requirements of most expected users, ensuring that there are lots of things an end user can do and thereby making it appeal to different types of end user. This is made possible by the large user community paying licence fees [4].
Usability
COTS software is harder to use than bespoke software. It takes a long time to learn how to use off-the-shelf software efficiently, because of redundant features: before COTS software can be used efficiently, the user needs to work out which features to use and how to get the most out of them. Moreover, once the user is comfortable with a small subset of the features, the more advanced features are often perceived as daunting and so are left unused.
Software maintenance
Software maintenance is the act of keeping software in a state of efficiency and validity to protect it from failure [4]. According to the IEEE [5], “Software maintenance is the process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment.” Software maintenance is expensive, difficult, and time-consuming. This is largely because it is difficult for a software maintenance engineer to understand software he did not develop [6]. Even the original developer of the software may forget the details of the design if the documentation is not effective. There are many reasons for maintaining software. Some of the reasons are:
a. To correct errors in the software (corrective maintenance)
b. To make the software more maintainable (preventive maintenance)
c. To evolve a final product from already existing software (evolutionary maintenance)
d. To enable the software to operate on different platforms (adaptive maintenance)
e. To improve the performance or other attributes of the software (perfective maintenance)
Preventive maintenance
Preventive maintenance is very necessary in any reliable software design. It covers the series of proactive measures that help detect software degradation and mitigate it [7]. It is the pivot of all software maintenance: without preventive maintenance, it would be very difficult to carry out other maintenance activities. Preventive maintenance can be defined as all the modifications made to a piece of software to make it more maintainable [5]. It involves making programs easier to understand, hence facilitating future maintenance work. When software is well documented and commented, it is maintainable, since the maintenance engineer will find it easy to work with.
Preventive maintenance is usually time-driven: it depends on the amount of time that has elapsed since the software was put into use [8]. The actual implementation of preventive maintenance varies greatly. Some programmes are extremely limited and consist of only minor adjustments, while comprehensive preventive maintenance programmes schedule repairs and software re-engineering for all critical program modules. The common denominator of all these preventive maintenance programmes is the scheduling guideline: time. All preventive maintenance management programmes assume that a system will degrade within a time frame typical of its particular classification, and therefore needs to be repaired before failure. The problem with this approach is that the mode of operation and system-specific variables directly affect the normal operating life of the system.
More generally, maintenance is the care and servicing by personnel for the purpose of keeping equipment and facilities in satisfactory operating condition, by providing for systematic inspection, detection, and correction of incipient failures either before they occur or before they develop into major defects. The primary goal of maintenance is to avoid or mitigate the consequences of equipment failure. This may be achieved by preventing failure before any sign of imminent failure appears, which is called planned maintenance, or by acting on signs of imminent failure, which is what condition-based maintenance helps to achieve. Preventive maintenance is designed to preserve and restore equipment reliability by replacing worn components before they actually fail. Preventive maintenance activities include partial or complete overhauls at specified periods, oil changes, lubrication, and so on. In addition, workers can record equipment deterioration so they know to replace or repair worn parts before they cause system failure. The ideal preventive maintenance programme would prevent all equipment failure before it occurs.
Classes of preventive maintenance
Preventive maintenance can be described as maintenance of equipment or systems performed before a fault occurs. It can be divided into two groups:
· Planned maintenance
· Condition-based maintenance
The main difference between the two groups is the determination of the maintenance time, that is, of the moment when maintenance should be performed. In planned maintenance, the time at which the software will be maintained is predetermined and fixed, while in condition-based maintenance, the time of maintenance depends on the working condition of the software.
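The contrast between the two triggers can be sketched as two small predicates; the dates, interval, and error-rate threshold below are illustrative assumptions:

```python
from datetime import date, timedelta

def planned_due(installed: date, interval_days: int, today: date) -> bool:
    """Planned maintenance: triggered purely by elapsed calendar time."""
    return today >= installed + timedelta(days=interval_days)

def condition_due(error_rate: float, threshold: float) -> bool:
    """Condition-based maintenance: triggered by an observed health indicator."""
    return error_rate > threshold

# Planned: a year has elapsed since installation, so maintenance is due.
print(planned_due(date(2023, 1, 1), 365, date(2024, 2, 1)))  # True
# Condition-based: the error rate is still below the threshold, so it is not.
print(condition_due(error_rate=0.02, threshold=0.05))        # False
```

The planned predicate ignores the system's actual state, and the condition-based predicate ignores the calendar, which is exactly the distinction drawn above.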
Problems of preventive maintenance
While preventive maintenance is generally considered worthwhile, it carries risks, such as equipment failure or human error, just as any maintenance operation does. Preventive maintenance, in the form of scheduled overhaul or scheduled replacement, provides two of the three proactive failure-management policies available to the maintenance engineer. It is conducted to keep equipment working and/or to extend the life of the equipment.
Example of preventive maintenance
An individual buys an incandescent light bulb whose manufacturer states that its life span is three years. Just before the three years elapse, the individual replaces the bulb with a new one. This is preventive maintenance.