Analysis of the Current System:
Research Methods:
Observation:
Characteristics: Directly watching how the current system operates, either in real time or through recorded data.
Uses: Helps identify system workflows, bottlenecks, or inefficiencies.
Advantages: Provides firsthand, real-world data.
Disadvantages: Observer bias, potential for disruption, limited scope.
Interviews:
Characteristics: One-on-one or group conversations with stakeholders to gather their feedback on the system.
Uses: Gathers qualitative data from users, managers, and others involved with the system.
Advantages: In-depth insights and feedback, allows for clarification of responses.
Disadvantages: Time-consuming, potential interviewer bias, not always representative.
Questionnaires:
Characteristics: A set of questions designed to collect responses from a larger group.
Uses: Efficient for gathering data from many people.
Advantages: Can collect data from a wide range of users quickly.
Disadvantages: Limited depth, responses may lack context, low response rates.
Examination of Existing Documents:
Characteristics: Reviewing current documentation, reports, and records to understand how the system operates.
Uses: Provides factual data on system operations and history.
Advantages: Objective data, can help uncover hidden issues.
Disadvantages: Documents may be outdated or incomplete.
Identifying Inputs, Outputs, and Processing:
Inputs: What data enters the system.
Outputs: What data is produced or delivered by the system.
Processing: How data is handled and transformed within the system (a minimal sketch follows).
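As a minimal illustration of this split, the Python sketch below takes hypothetical order quantities as input, processes them into a total, and outputs the result; the quantities and unit price are invented for the example.

    # Input: raw data entering the system (hypothetical order quantities).
    order_quantities = [3, 5, 2]
    unit_price = 4.50  # assumed unit price for the example

    # Processing: transform the input data into the information required.
    total = sum(quantity * unit_price for quantity in order_quantities)

    # Output: information produced and delivered by the system.
    print(f"Order total: £{total:.2f}")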
Identifying Problems with the Current System:
This involves pinpointing areas where the current system fails to meet user needs or business goals.
User and Information Requirements for the New System:
Identifying what the new system must achieve and understanding the needs and expectations of the users.
System Specification:
Identify and Justify Suitable Hardware and Software for the New System:
Choosing the right technology for the new system, based on performance, cost, and user needs.
Design:
Design File/Data Structures: Organizing how data will be stored and structured in the system; a sketch follows the list below.
Field Length: The maximum amount of data (e.g., number of characters) each field can hold.
Field Name: Descriptive names for data fields (e.g., "CustomerName").
Data Type: Defines the type of data (e.g., integer, string, date).
Coding of Data: How data is represented (e.g., "M/F" for gender).
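One way to record such a design is as a field table. The Python sketch below is a minimal version of one; the Customer fields, lengths, and coding schemes are assumptions made for illustration.

    # Hypothetical field definitions for a Customer file: each entry records
    # the field name, data type, maximum field length, and any coding of data.
    customer_fields = [
        {"name": "CustomerID",   "type": "integer", "length": 6,  "coding": None},
        {"name": "CustomerName", "type": "string",  "length": 30, "coding": None},
        {"name": "Gender",       "type": "string",  "length": 1,  "coding": "M/F"},
        {"name": "DateJoined",   "type": "date",    "length": 10, "coding": "DD/MM/YYYY"},
    ]

    for field in customer_fields:
        print(f"{field['name']:<14}{field['type']:<9}{field['length']:<4}{field['coding']}")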
Validation Routines: Automatic checks that data entered into the system is reasonable and acceptable (validation cannot confirm the data is actually correct); a sketch of several checks follows this list.
Range Check: Ensures that input values fall within a specified range.
Character Check: Ensures that input contains only valid characters.
Length Check: Ensures that input data fits within a specified length.
Type Check: Ensures the correct data type is entered (e.g., a date).
Format Check: Ensures that data is in the correct format (e.g., phone number format).
Presence Check: Ensures that a value is entered.
Check Digit: An extra digit calculated from the other digits and appended to them, used to detect entry errors (e.g., the final digit of a credit card number or an ISBN).
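The Python sketch below implements a few of these checks. The limits and patterns (an age between 0 and 120, a five-then-six-digit phone format) are assumptions made for the example; the check digit routine uses the Luhn algorithm, which card numbers are commonly validated with.

    import re

    def range_check(value, low=0, high=120):
        # Range check: the value must fall within assumed limits.
        return low <= value <= high

    def length_check(text, max_length=30):
        # Length check: the input must fit within an assumed maximum length.
        return len(text) <= max_length

    def presence_check(text):
        # Presence check: some value must actually be entered.
        return text.strip() != ""

    def format_check(phone):
        # Format check: assumed phone pattern of 5 digits, a dash, 6 digits.
        return re.fullmatch(r"\d{5}-\d{6}", phone) is not None

    def check_digit_ok(number):
        # Check digit (Luhn): double every second digit from the right,
        # subtract 9 from results over 9, and require the sum to end in 0.
        total = 0
        for i, d in enumerate(int(ch) for ch in reversed(str(number))):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(range_check(25), presence_check(""), check_digit_ok("79927398713"))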
Input Formats:
Data Capture Forms: Designing forms for entering data into the system (e.g., user registration forms).
Output Formats:
Screen Layouts: The design of how data is displayed on the screen.
Report Layouts: The format and design of reports generated by the system (a sketch follows).
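As a small illustration of a report layout, the sketch below uses Python string formatting to produce a titled, columnar report; the regions and sales figures are invented for the example.

    # Hypothetical sales figures for the report body.
    rows = [("North", 120, 5400.00), ("South", 98, 4410.00)]

    # Report layout: a title, aligned column headings, then the data rows.
    print("MONTHLY SALES REPORT")
    print(f"{'Region':<10}{'Units':>8}{'Revenue':>12}")
    for region, units, revenue in rows:
        print(f"{region:<10}{units:>8}{revenue:>12.2f}")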
Testing:
Need to Test the System:
Testing is essential to ensure that the system works as expected and meets requirements before it’s implemented.
Test Designs, Test Strategies, and Test Plan:
Test Data: Data used in testing the system (e.g., normal, extreme, and abnormal test cases).
Expected Outcomes: The anticipated results of the tests.
Actual Outcomes: The results generated by the system during testing.
Remedial Action: Actions taken to correct issues found during testing.
Test Designs:
Testing different elements of the system, including data structures, file structures, input formats, and output formats.
Test Strategies:
Module Testing: Testing individual parts of the system.
Function Testing: Testing specific functions within the system.
System Testing: Testing the entire system as a whole; a module-testing sketch follows this list.
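A minimal sketch of module testing using Python's unittest framework follows. It reuses the hypothetical range_check routine from the validation sketch above: each test records an expected outcome, and the framework reports any actual outcome that differs so remedial action can be taken.

    import unittest

    def range_check(value, low=0, high=120):
        # Module under test: the assumed age range check from earlier.
        return low <= value <= high

    class TestRangeCheck(unittest.TestCase):
        def test_normal_value_accepted(self):
            self.assertTrue(range_check(25))    # expected outcome: accepted

        def test_abnormal_value_rejected(self):
            self.assertFalse(range_check(-5))   # expected outcome: rejected

    if __name__ == "__main__":
        unittest.main()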
Definition and Use of Test Data:
Normal Data: Data that fits within the expected range or parameters.
Abnormal Data: Data that falls outside expected parameters, used to test error handling.
Extreme Data: Data that is at the edge of acceptable limits.
Live Data: Real data from the existing system, with known expected results, used for final testing; example values for the other categories follow.
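For the same hypothetical age field (accepting 0 to 120 inclusive), the categories might look like the sketch below; the specific values are assumptions for illustration.

    # Test data for an assumed age field that accepts 0-120 inclusive.
    normal_data   = [25, 67]           # well inside the accepted range
    extreme_data  = [0, 120]           # on the boundaries; should be accepted
    abnormal_data = [-5, 150, "ten"]   # out of range or wrong type; should be rejected

    for value in normal_data + extreme_data:
        assert 0 <= value <= 120       # all expected to pass validation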
System Implementation:
Implementation Methods:
Direct Changeover: Switching from the old system to the new one immediately.
Advantages: Simple and quick.
Disadvantages: High risk if the new system fails.
Parallel Running: Running the old and new systems simultaneously for a period of time.
Advantages: Reduces risk since both systems are operational.
Disadvantages: Can be costly and time-consuming.
Pilot Running: Implementing the new system in one part of the organization before full-scale deployment.
Advantages: Low risk, easy to monitor.
Disadvantages: Limited scope may not identify all issues.
Phased Implementation: Gradually rolling out the new system in stages.
Advantages: Each stage can be checked before the next is introduced, and a failure affects only part of the system.
Disadvantages: Can take longer to fully implement.
Technical Documentation:
Purpose: Describes the system’s functions, design, and implementation.
Program Listing: The full code used in the system.
Program Flowcharts: Visual representations of the program’s logic.
Hardware and Software Requirements: Specifications for the system.
Input and Output Format: Describes how data is entered and displayed.
User Documentation:
Purpose: Helps end-users understand how to use the system effectively.
How to Use the System: Instructions on how to operate the software, save files, print data, etc.
Troubleshooting Guide: Helps users resolve common issues.
Evaluate a Solution:
Efficiency of the Solution: Assessing how well the system performs its tasks.
Ease of Use: Measuring how user-friendly the system is.
Appropriateness of the Solution: Ensuring that the system meets the needs and goals it was intended for.
Comparison with Original Task Requirements: Verifying if the solution meets the defined project objectives.
Identifying Limitations and Improvements: Highlighting areas where the system could be improved.
Evaluate User Responses to Test Results: Gathering feedback from users to ensure that the system performs as expected in real-world conditions.
The Systems Life Cycle runs from analyzing the current system, through designing a new one, developing and testing the solution, and implementing it, to documenting and evaluating the finished system. Each stage plays a critical role in ensuring that the system is developed efficiently, tested thoroughly, and implemented successfully to meet user needs.