What is regression testing?
Re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Regression testing is done in the following cases:
1. If the bugs reported in the previous build are fixed
2. If a new functionality is added
3. If the environment changes
Regression testing is done to ensure that the functionality that worked in the previous build has not been disturbed by the modifications in the current build.
It is done to check that the code changes did not introduce any new bugs or break existing functionality.
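The idea can be sketched in a few lines. The discount function and its baseline values below are hypothetical; the point is that a saved baseline of expected results is rerun unchanged after every fix, and any mismatch flags a regression:

```python
# Minimal regression-check sketch (hypothetical function under test):
# after a bug fix, rerun the same baseline cases and compare against
# the previously recorded expected results.

def apply_discount(price, percent):
    """Function under test (assume it was just bug-fixed)."""
    return round(price * (1 - percent / 100.0), 2)

# Baseline cases recorded when the build last passed.
BASELINE = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20.0, 50), 10.0),
]

def run_regression():
    failures = []
    for args, expected in BASELINE:
        actual = apply_discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print(run_regression())  # an empty list means no regressions
```

In practice the baseline would be a recorded script or test suite rather than an in-code table, but the compare-against-last-known-good principle is the same.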

When do you start developing your automation tests?
First, the application has to be tested manually. Once manual testing is over and a baseline of test cases has been established, automation development can begin.

What is a successful product?
A bug-free product that meets the expectations of the user would make the product successful.

What will you do during the first day of the job?
Get acquainted with my team and the application.

Who should test your code?
QA Tester

How do we do regression testing?
Various automated testing tools can be used to perform regression testing, such as WinRunner, Rational Robot, and SilkTest.

Why do we do regression testing?
When new functionality is added to an application, the application has to be tested to see whether the addition has affected the existing functionality. Instead of manually retesting all the existing functionality, the baseline scripts created for it can be rerun.

In a calculator built specifically for an accountant, what major functionality are you going to test? Assume that all basic functions like addition and subtraction are supported.
Check the maximum number of digits it supports.
Check the memory functions.
Check for accuracy loss due to truncation.
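The truncation point lends itself to a quick illustration. The values below are invented; the sketch uses Python's decimal module to show why an accountant's calculator should be verified against exact decimal arithmetic rather than binary floating point:

```python
# Accuracy-check sketch for an accountant's calculator: binary floats
# cannot represent most decimal fractions exactly, so monetary sums are
# verified against exact Decimal results.
from decimal import Decimal

def float_total(values):
    return sum(values)

def decimal_total(values):
    return sum(Decimal(v) for v in values)

entries = ["0.10", "0.20", "0.30"]
print(float_total(float(v) for v in entries))  # may show 0.6000000000000001
print(decimal_total(entries))                  # exactly 0.60
```

A test case built this way catches exactly the class of truncation error that matters most to an accountant.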

Difference between Load Testing & Stress Testing?
Load Testing: the application is tested within its normal limits to identify the load that the system can withstand. In load testing, the number of users is varied.
Stress Testing: stress tests are designed to confront programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
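The distinction can be shown with a toy model. The CAPACITY figure and serve function below are invented purely for illustration; real load and stress testing would use a dedicated tool such as JMeter or LoadRunner:

```python
# Illustrative sketch only: a toy "system" with a capacity limit, driven
# first at normal loads and then at an abnormal (stress) load.

CAPACITY = 100  # assumed maximum concurrent users the system can handle

def serve(concurrent_users):
    """Returns how many requests succeed at a given load."""
    return min(concurrent_users, CAPACITY)

# Load test: stay within expected limits, varying the number of users.
for users in (10, 50, 100):
    assert serve(users) == users  # all requests served under normal load

# Stress test: push beyond the limit and observe the degradation.
overload = serve(250)
print(overload)  # only CAPACITY requests are served; the rest are shed
```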

If you have a shortage of time, how would you prioritize your testing?
1) Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Considerations can include:

•Which functionality is most important to the project's intended purpose?
•Which functionality is most visible to the user?
•Which functionality has the largest safety impact?
•Which functionality has the largest financial impact on users?
•Which aspects of the application are most important to the customer?
•Which aspects of the application can be tested early in the development cycle?
•Which parts of the code are most complex, and thus most subject to errors?
•Which parts of the application were developed in rush or panic mode?
•Which aspects of similar/related previous projects caused problems?
•Which aspects of similar/related previous projects had large maintenance expenses?
•Which parts of the requirements and design are unclear or poorly thought out?
•What do the developers think are the highest-risk aspects of the application?
•What kinds of problems would cause the worst publicity?
•What kinds of problems would cause the most customer service complaints?
•What kinds of tests could easily cover multiple functionalities?
•Which tests will have the best high-risk-coverage to time-required ratio?

2) We work on the major functionalities first: the functionality most visible to the user, the functionality most important to the project, the aspects of the application most important to the customer, and the highest-risk aspects of the application.

Who in the company is responsible for Quality?
Both development and quality assurance departments are responsible for the final product quality

2) The quality assurance team, on both the development and the testing side.

Should we test every possible combination/scenario for a program?
Ideally, yes, we should test every possible scenario, but this may not always be possible. It depends on many factors, such as deadlines, budget, and the complexity of the software. In such cases, we have to prioritize and thoroughly test the critical areas of the application.

2) Yes, we should test every possible scenario, but sometimes the same functionality occurs again and again (like a LOGIN window), so there is no need to test that functionality repeatedly. There are some more factors:

Priority of the application.
Time or deadline.
Budget.
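A quick sketch shows why exhaustive testing is usually impractical: even a handful of configuration factors multiply into a large scenario count. The factors and values below are hypothetical:

```python
# Combinatorial-explosion sketch: four modest, invented configuration
# factors already yield well over a hundred distinct test scenarios.
from itertools import product

browsers = ["Chrome", "Firefox", "Edge"]
locales = ["en", "fr", "de", "ja"]
user_types = ["guest", "member", "admin"]
payment = ["card", "paypal", "invoice", "wire"]

combos = list(product(browsers, locales, user_types, payment))
print(len(combos))  # 3 * 4 * 3 * 4 = 144 scenarios for just four factors
```

This is why prioritization (or techniques such as pairwise selection) replaces exhaustive coverage on real projects.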

How will you describe testing activities?
Testing planning, scripting, execution, defect reporting and tracking, regression testing.

What is the purpose of the testing?
Testing provides information whether or not a certain product meets the requirements.

When should testing be stopped?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

-Deadlines (release deadlines, testing deadlines, etc.)

-Test cases completed with certain percentage passed

-Test budget depleted

-Coverage of code/functionality/requirements reaches a specified point

-Bug rate falls below a certain level

-Beta or alpha testing period ends

Do you have a favorite QA book? Why?
Effective Methods for Software Testing - Perry, William E.

It covers the whole software lifecycle, starting with testing the project plan and estimates and ending with testing the effectiveness of the testing process. The book is packed with checklists, worksheets and N-step procedures for each stage of testing.

What are the roles of glass-box and black-box testing tools?
Glass-box testing, also called white-box testing, refers to testing with detailed knowledge of a module's internals. These tools therefore concentrate on the algorithms and data structures used in the development of the modules, and they are more likely to test individual modules than the whole application. Black-box testing tools refer to testing the interface, functionality, and performance of a system module and of the whole system.
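The contrast can be sketched on a single function. The is_leap function and the specific cases are illustrative only; the point is that white-box cases target the branches inside the code, while black-box cases come only from the specification:

```python
# White-box vs. black-box sketch on one hypothetical function.

def is_leap(year):
    # Internal structure a white-box test exercises branch by branch.
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

# White-box cases: chosen to cover each branch of the implementation.
assert is_leap(2000) is True    # % 400 branch
assert is_leap(1900) is False   # % 100 branch
assert is_leap(2024) is True    # % 4 branch
assert is_leap(2023) is False   # fall-through

# Black-box cases: derived only from the specification ("leap years"),
# with no knowledge of the branches inside.
for known_leap in (1996, 2004, 2400):
    assert is_leap(known_leap)
print("all checks passed")
```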

What is the value of a testing group? How do you justify your work and budget?
All software products contain defects/bugs, despite the best efforts of their development teams. It is important for an outside party (one who is not the developer) to test the product from a viewpoint that is more objective and representative of the product's user.
The testing group tests the software from the requirements point of view, i.e. against what is required by the user. The tester's job is to examine a program and see whether it fails to do what it is supposed to do, and also whether it does what it is not supposed to do.

At what stage of the SDLC does testing begin in your opinion?
The QA process starts from the second phase of the Software Development Life Cycle, i.e. Define the System. Actual product testing is done in the Test the System phase (Phase 5). During this phase the test team verifies the actual results against the expected results.

Explain the software development lifecycle.
There are seven stages in the software development lifecycle:

1. Initiate the project – The users identify their business requirements.

2. Define the project – The software development team translates the business requirements into system specifications and puts them together into a System Specification Document.

3. Design the system – The system architecture team designs the system and writes the Functional Design Document. During the design phase, general solutions are hypothesized and data and process structures are organized.

4. Build the system – The system specifications and design documents are given to the development team, who code the modules by following the requirements and design documents.

5. Test the system – The test team develops the test plan following the requirements. The software is built and installed on the test platform after the developers have completed development and unit testing. The testers test the software by following the test plan.

6. Deploy the system – After user-acceptance testing and certification of the software, it is installed on the production platform. Demos and training are given to the users.

7. Support the system – After the software is in production, the maintenance phase of the life cycle begins. During this phase the development team works with the development documentation staff to modify and enhance the application, and the test team works with the test documentation staff to verify and validate the changes and enhancements to the application software.

FREQUENTLY ASKED QUESTIONS

1) What are your roles and responsibilities as a tester?

2) Explain the software development life cycle.

3) What is a master test plan? What does it contain? Who is responsible for writing it?

4) What is a test plan? Who is responsible for writing it? What does it contain?

5) What different types of test cases did you write in the test plan?

6) Why is the test plan a controlled document?

7) What information do you need to formulate a test plan?

8) What template did you use to write the test plan?

9) What is an MR?

10) Why do you write an MR?

11) What information does it contain?

12) Give me a few examples of the MRs you wrote.

13) What is White Box/Unit testing?

14) What is integration testing?

15) What is black box testing?

16) What knowledge do you require to do white box, integration, and black box testing?

17) How many testers were in the test team?

18) What was the test team hierarchy?

19) Which MR tool did you use to write MRs?

20) What is regression testing?

21) Why do we do regression testing?

22) How do we do regression testing?

23) What are the different automation tools you know?

24) What is the difference between a regression automation tool and a performance automation tool?

25) What is client-server architecture?

26) What are three-tier and multi-tier architectures?

27) What is the Internet?

28) What is an intranet?

29) What is an extranet?

30) How is an intranet different from client-server?

31) What is different about web testing compared to client-server testing?

32) What is a byte code file?

33) What is an applet?

34) How is an applet different from an application?

35) What is the Java Virtual Machine?

36) What is ISO-9000?

37) What is QMO?

38) What are the different phases of the software development cycle?

39) How do you help developers track the faults in the software?

40) What are positive scenarios?

41) What are negative scenarios?

42) What are individual test cases?

43) What are workflow test cases?

44) If we have executed individual test cases, why do we do workflow scenarios?

45) What is the object-oriented model?

46) What is the procedural model?

47) What is an object?

48) What is a class?

49) What is encapsulation? Give one example.

50) What is inheritance? Give an example.

51) What is polymorphism? Give an example.

52) What are the different types of MRs?

53) What are test metrics?

54) What is the use of metrics?

55) How do we decide which automation tool we are going to use for regression testing?

56) If you have a shortage of time, how would you prioritize your testing?

57) What is the impact of the environment on the actual results of performance testing?

58) What are stress testing, performance testing, security testing, recovery testing, and volume testing?

59) What criteria will you follow to assign severity and a due date to an MR?

60) What is user acceptance testing?

61) What is manual testing and what is automated testing?

62) What are a build, a version, and a release?

63) What are the entrance and exit criteria in the system test?

64) What are the roles of the Test Team Leader?

65) What are the roles of the Sr. Test Engineer?

66) What are the roles of the QA Analyst/QA Tester?

67) How do you decide which functionalities of the application are to be tested?

68) If there are no requirements, how will you write your test plan?

69) What is smoke testing?

70) What is soak testing?

71) What is pre-condition data?

72) What are the different documents in QA?

73) How do you rate yourself in software testing?

74) With all your skills, do you prefer to be a developer or a tester? And why?

75) What are the best web sites that you frequently visit to upgrade your QA skills?

76) Do the words "Prevention" and "Detection" sound familiar? Explain.

77) Is defect resolution a technical skill or an interpersonal skill from a QA viewpoint?

78) Can you automate all the test scripts? Explain.

79) What is end-to-end business logic testing?

80) Explain the most critical defect you found in your last project.


What is integration testing, and how will you execute it?
A. Integrated System Testing (IST) is a systematic technique for validating the construction of the overall software structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and test the overall software structure that has been dictated by the design. IST can be done either as top-down integration (using stubs) or bottom-up integration (using drivers).
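The stub idea can be sketched briefly. The modules below are hypothetical; a stub stands in for a lower-level module so that the higher-level module can be integration-tested top-down before the real module exists:

```python
# Top-down integration sketch with a stub (invented modules).

def price_lookup_stub(item_id):
    """Stub replacing the not-yet-integrated pricing module."""
    return {"A1": 5.0, "B2": 3.5}.get(item_id, 0.0)

def order_total(items, price_lookup):
    """Higher-level module under integration test."""
    return sum(price_lookup(i) for i in items)

# Drive the top-level module through the stub and check the interface.
assert order_total(["A1", "B2"], price_lookup_stub) == 8.5
print("integration (top-down, stubbed) check passed")
```

Bottom-up integration is the mirror image: a driver calls the real low-level module directly, before the higher-level callers exist.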

Suppose there are 1000 bugs and only 10 days to go before the product release. The developers say they cannot all be fixed within this period. What will you do?
A. In this case, the most critical bugs should be fixed first, such as Severity 1 and 2 bugs, and the rest of the bugs can be fixed in the next release. Ultimately, it depends on the business people.

In stored procedure (SP) testing, don't you think you are doing unit testing?
A. From the developer's point of view, yes, it is a kind of unit testing. But a tester tests the stored procedure in more detail than a developer does.

What is regression testing, and how does it start and end? Assume that in one module you found a bug and sent it to the developers to fix. After the bug is fixed, how will you do the regression testing, and how will you end it?

A. Regression testing is re-testing unchanged segments of the application system. It normally involves re-running tests that have been previously executed, to ensure that the same results can be achieved now as were achieved when the segment was last tested. For example, the tester finds a bug in a module and sends that part to the developers to fix. After the bug is fixed, the module comes back to the tester, who tests it again to find out whether all the bugs are fixed. The tester then has to check that fixing these bugs has not introduced new bugs in other modules, so the tester has to regression-test the related modules as well.

Understandability
The more information we have, the smarter we will test.

•The design is well understood
•Dependencies between internal, external, and shared components are well understood.
•Changes to the design are communicated.
•Technical documentation is instantly accessible
•Technical documentation is well organized
•Technical documentation is specific and detailed
•Technical documentation is accurate

Stability
The fewer the changes, the fewer the disruptions to testing

•Changes to the software are infrequent
•Changes to the software are controlled
•Changes to the software do not invalidate existing tests
•The software recovers well from failures

Simplicity
The less there is to test, the more quickly it can be tested

•Functional simplicity
•Structural simplicity
•Code simplicity

Decomposability
By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be performed.

•The software system is built from independent modules
•Software modules can be tested independently

Controllability
The better the software is controlled, the more the testing can be automated and optimized.

•All possible outputs can be generated through some combination of input
•All code is executable through some combination of input
•Software and hardware states can be controlled directly by testing
•Input and output formats are consistent and structured
•Tests can be conveniently specified, automated, and reproduced.

Observability
What is seen is what is tested

•Distinct output is generated for each input
•System states and variables are visible or queriable during execution
•Past system states and variables are visible or queriable (e.g., transaction logs)
•All factors affecting the output are visible
•Incorrect output is easily identified
•Incorrect input is easily identified
•Internal errors are automatically detected through self-testing mechanisms
•Internal errors are automatically reported
•Source code is accessible

Software Testing Requirements

Software testing is not an activity to take up only when the product is ready. Effective testing begins with a proper plan from the user requirements stage itself. Software testability is the ease with which a computer program can be tested. Metrics can be used to measure the testability of a product. The requirements for effective testing are given in the following sub-sections.

Testing Principles
The basic principles for effective software testing are as follows:

•A good test case is one that has a high probability of finding an as-yet undiscovered error.
•A successful test is one that uncovers an as-yet-undiscovered error.
•All tests should be traceable to the customer requirements
•Tests should be planned long before testing begins
•Testing should begin “in the small” and progress towards testing “in the large”
•Exhaustive testing is not possible

Testing Objectives
Testing is a process of executing a program with the intent of finding an error.
Software testing is a critical element of software quality assurance and represents the ultimate review of system specification, design, and coding. Testing is the last chance to uncover errors/defects in the software, and it facilitates delivery of a quality system.

Who will attend the User Acceptance Tests?
The MIS Development Unit is working with relevant Practitioner Groups and managers to identify the people who can best contribute to system testing. Most of those involved in testing will also have been involved in earlier discussions and decision making about the system set-up. All users will receive basic training to enable them to contribute effectively to the test.

What are the objectives of a User Acceptance Test?
Objectives of the User Acceptance Test are for a group of key users to:
•Validate system set-up for transactions and user access
•Confirm use of the system in performing business processes
•Verify performance of business-critical functions
•Confirm integrity of converted and additional data, for example values that appear in a look-up table
•Assess and sign off go-live readiness

What does the User Acceptance Test cover?
The scope of each User Acceptance Test will vary depending on which business process we are testing. In general however, all tests will cover the following broad areas:
•A number of defined test cases using quality data to validate end-to-end business processes
•A comparison of actual test results against expected results
•A meeting/discussion forum to evaluate the process and facilitate issue resolution

What is a User Acceptance Test?
A User Acceptance Test is:
•A chance to completely test business processes and software
•A scaled-down or condensed version of the system
•The final UAT for each module will be the last chance to perform the above in a test situation

What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, and such functionality would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If it is not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk.

What can be done if requirements are changing continuously?
A common problem and a major headache.

•Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans and strategies can be worked out in advance, if possible.

•It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the application from scratch.

•If the code is well-commented and well-documented this makes changes easier for the developers.

•Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.

•The project's initial schedule should allow for some extra time commensurate with the possibility of changes.

•Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.

•Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.

•Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.

•Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with changes.

•Try to design some flexibility into automated test scripts.

•Focus initial automated testing on application aspects that are most likely to remain unchanged.

•Devote appropriate effort to risk analysis of changes to minimize regression testing needs.

•Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans)

•Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
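The data-driven suggestion above can be sketched as follows. The validation rule and cases are hypothetical; the design point is that when requirements change, only the data table changes, not the test logic:

```python
# Data-driven test sketch: test data lives outside the test logic, so a
# requirement change means editing a table, not rewriting scripts.

# Hypothetical validation rule under test.
def is_valid_username(name):
    return 3 <= len(name) <= 20 and name.isalnum()

# Data-driven cases: easy to extend or amend as requirements shift.
CASES = [
    ("bob", True),
    ("ab", False),           # too short
    ("a" * 21, False),       # too long
    ("user name", False),    # whitespace not allowed
    ("user123", True),
]

for value, expected in CASES:
    assert is_valid_username(value) is expected, value
print("all data-driven cases passed")
```

Most test frameworks support this directly (e.g. parameterized tests), which is one practical way to build the flexibility described above into automated scripts.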

What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.)

Considerations can include:

•Which functionality is most important to the project's intended purpose?

•Which functionality is most visible to the user?

•Which functionality has the largest safety impact?

•Which functionality has the largest financial impact on users?

•Which aspects of the application are most important to the customer?

•Which aspects of the application can be tested early in the development cycle?

•Which parts of the code are most complex, and thus most subject to errors?

•Which parts of the application were developed in rush or panic mode?

•Which aspects of similar/related previous projects caused problems?

•Which aspects of similar/related previous projects had large maintenance expenses?

•Which parts of the requirements and design are unclear or poorly thought out?

•What do the developers think are the highest-risk aspects of the application?

•What kinds of problems would cause the worst publicity?

•What kinds of problems would cause the most customer service complaints?

•What kinds of tests could easily cover multiple functionalities?

•Which tests will have the best high-risk-coverage to time-required ratio?

What steps are needed to develop and run software tests?
The following are some of the steps to consider:

•Obtain requirements, functional design, and internal design specifications and other necessary documents.

•Obtain budget and schedule requirements

•Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)

•Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests

•Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.

•Determine test environment requirements (hardware, software, communications, etc.)

•Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)

•Determine test input data requirements

•Identify tasks, those responsible for tasks, and labor requirements

•Set schedule estimates, timelines, milestones

•Determine input equivalence classes, boundary value analyses, error classes

•Prepare test plan document and have needed reviews/approvals

•Write test cases

•Have needed reviews/inspections/approvals of test cases

•Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data

•Obtain and install software releases

•Perform tests

•Evaluate and report results

•Track problems/bugs and fixes

•Retest as needed

•Maintain and update test plans, test cases, test environment, and testware through life cycle.
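The equivalence-class and boundary-value step in the list above lends itself to a small sketch. The age range 18–65 and the is_accepted check are invented for illustration:

```python
# Equivalence-class and boundary-value sketch for a hypothetical input
# field that accepts ages 18 through 65 inclusive.

LOW, HIGH = 18, 65

def is_accepted(age):
    return LOW <= age <= HIGH

# One representative per equivalence class: below, inside, above range.
equivalence_cases = {10: False, 40: True, 70: False}

# Boundary values: each edge and its immediate neighbours.
boundary_cases = {LOW - 1: False, LOW: True, HIGH: True, HIGH + 1: False}

for age, expected in {**equivalence_cases, **boundary_cases}.items():
    assert is_accepted(age) is expected, age
print("equivalence and boundary checks passed")
```

Seven cases derived this way typically replace the many thousands of raw input values in the range, which is why the technique appears at the test-design step.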

What's the role of documentation in QA?
Critical. (Note that documentation can be electronic, not necessarily paper.) QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible.