Wednesday, September 26, 2007

Software QA and Testing Resource Center

Table of Contents
Software QA and Testing Frequently-Asked-Questions Part 1, covers the following:
· What is 'Software Quality Assurance'?
· What is 'Software Testing'?
· What are some recent major computer system failures caused by software bugs?
· Does every software project need testers?
· Why does software have bugs?
· How can new Software QA processes be introduced in an existing organization?
· What is verification? validation?
· What is a 'walkthrough'?
· What's an 'inspection'?
· What kinds of testing should be considered?
· What are 5 common problems in the software development process?
· What are 5 common solutions to software development problems?
· What is software 'quality'?
· What is 'good code'?
· What is 'good design'?
· What is SEI? CMM? CMMI? ISO? Will it help?
· What is the 'software life cycle'?
Software QA and Testing Frequently-Asked-Questions Part 2, covers the following:
· What makes a good Software Test engineer?
· What makes a good Software QA engineer?
· What makes a good QA or Test manager?
· What's the role of documentation in QA?
· What's the big deal about 'requirements'?
· What steps are needed to develop and run software tests?
· What's a 'test plan'?
· What's a 'test case'?
· What should be done after a bug is found?
· What is 'configuration management'?
· What if the software is so buggy it can't really be tested at all?
· How can it be known when to stop testing?
· What if there isn't enough time for thorough testing?
· What if the project isn't big enough to justify extensive testing?
· How does a client/server environment affect testing?
· How can World Wide Web sites be tested?
· How is testing affected by object-oriented designs?
· What is Extreme Programming and what's it got to do with testing?
Software QA and Testing Less-Frequently-Asked-Questions, covers the following:
· Why is it often hard for organizations to get serious about quality assurance?
· Who is responsible for risk management?
· Who should decide when software is ready to be released?
· What can be done if requirements are changing continuously?
· What if the application has functionality that wasn't in the requirements?
· How can QA processes be implemented without reducing productivity?
· What if an organization is growing so fast that fixed QA processes are impossible?
· Will automated testing tools make testing easier?
· What's the best way to choose a test automation tool?
· How can it be determined if a test environment is appropriate?
· What's the best approach to software test estimation?
Other Software QA and Testing Resources
· Top 5 List
· Software QA and Testing-related Organizations and Certifications
· Links to QA and Testing-related Magazines/Publications
· General Software QA and Testing Resources
· Web QA and Testing Resources
· Web Security Testing Resources
· Web Usability Resources
Software QA and Test Tools
· Test tools
· CM tools and PM tools
· Web site test and management tools
Web Site Test Tools and Site Management Tools
· Load and performance test tools
· Java test tools
· HTML Validators
· Link Checkers
· Free On-the-Web HTML Validators and Link Checkers
· PERL and C Programs for Validating and Checking
· Web Functional/Regression Test Tools
· Web Site Security Test Tools
· External Site Monitoring Services
· Web Site Management Tools
· Log Analysis Tools
· Other Web Test Tools
Jobs and News
· Web Job Boards useful to QA and Test Engineers
· Latest News Headlines -- Technology, Software Development, Computer Security, Tech Stocks, more...
Software QA and Testing Bookstore
· Software Testing Books
· Software Test Automation Books
· Software Quality Assurance Books
· Software Requirements Engineering Books
· Software Metrics Books
· Configuration Management Books
· Software Risk Management Books
· Software Engineering Books
· Software Project Management Books
· Technical Background Basics Books
· Other Books
Software QA and Testing Frequently-Asked-Questions, Part 1
Software testing
From Wikipedia, the free encyclopedia
Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness, and security, but it can also include more technical requirements, as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding errors. Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.
There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following routine procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing is also used to connote the dynamic analysis of the product—putting the product through its paces. Sometimes one therefore refers to reviews, walkthroughs, or inspections as "static testing", whereas actually running the program with a given set of test cases in a given development stage is often referred to as "dynamic testing", to emphasize the fact that formal review processes form part of the overall testing scope.
Introduction
In general, software engineers distinguish software faults from software failures. In case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure. A fault can also be described as an error in the semantics of a computer program. A fault will become a failure if the exact computation conditions are met, one of them being that the faulty portion of computer software executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.
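The distinction can be made concrete with a minimal sketch (hypothetical `average` function, Python): the fault is present in the code at all times, but it only becomes a failure when an input exercises the faulty condition.

```python
def average(values):
    # Fault: dividing by len(values) fails when the list is empty.
    # The fault is latent until the triggering condition is executed.
    return sum(values) / len(values)

# No failure: the faulty line runs, but the triggering condition
# (an empty list) is absent.
print(average([2, 4, 6]))  # 4.0

# Failure: the same fault manifests because the exact computation
# conditions are now met.
try:
    average([])
except ZeroDivisionError:
    print("fault manifested as a failure")
```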
Software testing may be viewed as a sub-field of Software Quality Assurance but typically exists independently (and there may be no SQA areas in some companies). In SQA, software process specialists and auditors take a broader view on software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the code, or to deliver it faster.
Regardless of the methods used or level of formality involved, the desired result of testing is a level of confidence in the software so that the organization is confident that the software has an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.
A problem with software testing is that the number of defects in a software product can be very large, and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a system that is expected to function without faults for a certain length of time must have already been tested for at least that length of time. This has severe consequences for projects to write long-lived reliable software, since it is not usually commercially viable to test over the proposed length of time unless this is a relatively short period. A few days or a week would normally be acceptable, but any longer period would usually have to be simulated according to carefully prescribed start and end conditions.
A common practice of software testing is that it is performed by an independent group of testers after the functionality is developed but before it is shipped to the customer. This practice often results in the testing phase being used as project buffer to compensate for project delays, thereby compromising the time devoted to testing. Another practice is to start software testing at the same moment the project starts and it is a continuous process until the project finishes.
This is highly problematic in terms of controlling changes to software: if faults or failures are found part way into the project, the decision to correct the software needs to be taken on the basis of whether or not these defects will delay the remainder of the project. If the software does need correction, this needs to be rigorously controlled using a version numbering system, and software testers need to be accurate in knowing that they are testing the correct version, and will need to re-test the part of the software wherein the defects were found. The correct start point needs to be identified for retesting. There are added risks in that new defects may be introduced as part of the corrections, and the original requirement can also change part way through, in which instance previous successful tests may no longer meet the requirement and will need to be re-specified and redone (part of regression testing). Clearly the possibilities for projects being delayed and running over budget are significant.
Another common practice is for test suites to be developed during technical support escalation procedures. Such tests are then maintained in regression testing suites to ensure that future updates to the software don't repeat any of the known mistakes.
It is commonly believed that the earlier a defect is found the cheaper it is to fix it. This is reasonable based on the risk of any given defect contributing to or being confused with further defects later in the system or process. In particular, if a defect erroneously changes the state of the data on which the software is operating, that data is no longer reliable and therefore any testing after that point cannot be relied on even if there are no further actual software defects.
Relative cost to fix a defect (columns: time detected; rows: time introduced) [1]:

                 Requirements  Architecture  Construction  System Test  Post-Release
Requirements     1             3             5-10          10           10-100
Architecture     -             1             10            15           25-100
Construction     -             -             1             10           10-25
In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then as code is written it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed.
Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
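The test-first cycle described above can be sketched as follows (hypothetical `slugify` function; Python's built-in unittest). The tests are written before the implementation and would fail until the code beneath them is filled in.

```python
import unittest

# Step 1: write the tests first. Before slugify exists, running this
# suite fails -- exactly as the test-driven model expects.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trimmed  "), "trimmed")

# Step 2: write just enough code to make the suite pass.
def slugify(text):
    return "-".join(text.split()).lower()

# Step 3: run the suite (exit=False so it can run inside a larger script).
unittest.main(argv=["tdd-sketch"], exit=False)
```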
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
History
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[2] Although his attention was on breakage testing, it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Drs. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing as follows:[3]
Until 1956 was the debugging-oriented period, when testing was often associated with debugging and there was no clear difference between the two. From 1957 to 1978 came the demonstration-oriented period, in which debugging and testing were now distinguished; in this period the aim was to show that software satisfies the requirements. The time between 1979 and 1982 is known as the destruction-oriented period, when the goal was to find errors. 1983 to 1987 is classified as the evaluation-oriented period: the intention here is that during the software life cycle a product evaluation is provided and quality is measured. From 1988 on it was seen as the prevention-oriented period, in which tests were to demonstrate that software satisfies its specification, to detect faults, and to prevent faults.
Dr. Gelperin chaired the IEEE 829-1989 (Test Documentation Standard) effort, and Dr. Hetzel wrote the book The Complete Guide to Software Testing. Both works were pivotal to today's testing culture and remain a consistent source of reference. Dr. Gelperin and Jerry E. Durant also went on to develop High Impact Inspection Technology, which builds upon traditional inspections but utilizes a test-driven additive.
White-box, black-box, and gray-box testing
White box and black box testing are terms used to describe the point of view a test engineer takes when designing test cases. Black box testing assumes an external view of the test object; one inputs data and one sees only outputs from the test object. White box testing provides an internal view of the test object and its processes.
In recent years the term gray box testing has come into common usage. The typical gray box tester is permitted to set up or manipulate the testing environment, such as by seeding a database, and can view the state of the product after his actions, such as performing a SQL query on the database to be certain of the values of columns.
Gray box testing is used almost exclusively by client-server testers or others who use a database as a repository of information, but can also apply to a tester who has to manipulate input or configuration files directly, or perform testing like SQL injection. It can also be used by testers who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results. For example, testing a data warehouse implementation involves loading the target database with information, and verifying the correctness of data population and loading of data into the correct tables.
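A gray-box check of the kind described might look like this sketch (Python's built-in sqlite3; the `accounts` table and `transfer` operation are hypothetical stand-ins for the system under test): the tester seeds the database, drives the product's operation, then queries the tables directly to verify the resulting state.

```python
import sqlite3

# Seed the test environment directly -- something a strictly
# black-box tester would not be permitted to do.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])

def transfer(db, src, dst, amount):
    # Stand-in for the application operation being tested.
    db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
    db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))

transfer(conn, 1, 2, 30)

# Gray-box verification: bypass the application and run a SQL query
# to be certain of the values of the columns.
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
assert balances == {1: 70, 2: 80}, balances
print("state verified:", balances)
```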
Verification and validation
Software testing is used in association with verification and validation (V&V). Verification is the checking of or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, inspections, and walkthroughs. Validation is the process of checking what has been specified is what the user actually wanted.
· Verification: Are we doing the job right?
· Validation: Have we done the right job?
Levels of testing
· Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented.
· Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a whole.
· System testing tests an integrated system to verify that it meets its requirements.
· System integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.
· Acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed after the testing and before the implementation phase. See also Development stage
o Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.
o Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
It should be noted that although both Alpha and Beta are referred to as testing, they are in fact forms of use immersion: the rigors applied are often unsystematic, and many of the basic tenets of the testing process are not used. The Alpha and Beta periods nevertheless provide insight into environmental and utilization conditions that can impact the software.
After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven't unintentionally caused a regression of previous functionality. Regression testing can be performed at any or all of the above test levels. These regression tests are often automated.
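In automated form, a regression suite simply accumulates tests: when a defect is fixed, a test reproducing it joins the previously passing tests so the fix cannot be silently undone. A sketch (hypothetical `parse_price` helper, Python, plain asserts for brevity):

```python
def parse_price(text):
    # Fixed implementation: an earlier version crashed on a leading
    # currency symbol -- the defect the regression test below guards.
    return float(text.lstrip("$").replace(",", ""))

def test_basic_price():
    # Pre-existing, previously passing test.
    assert parse_price("19.99") == 19.99

def test_regression_currency_symbol():
    # Added when the currency-symbol defect was fixed.
    assert parse_price("$1,249.50") == 1249.50

# Re-run the whole suite after every modification.
test_basic_price()
test_regression_currency_symbol()
print("regression suite passed")
```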
Test cases, suites, scripts, and scenarios
A test case is a software testing document that consists of an event, action, input, output, expected result, and actual result. Clinically defined (IEEE 829-1998), a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe in more detail the input scenario and what results might be expected. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or outcome. Optional fields include a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate them. These past results would usually be stored in a separate table.
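The fields above can be captured in a simple structured record; a sketch (Python dataclass; the particular field selection is an illustrative assumption, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Clinical core (IEEE 829): an input and an expected result.
    case_id: str
    requirement: str                            # related requirement(s)
    input_data: str
    expected_result: str
    steps: list = field(default_factory=list)   # optional ordered steps
    automated: bool = False
    actual_result: str = ""                     # filled in at execution time

tc = TestCase(
    case_id="TC-042",
    requirement="REQ-7: login rejects invalid passwords",
    input_data="user=alice, password=wrong",
    expected_result="error message shown; no session created",
    steps=["open login page", "submit credentials", "observe response"],
)
tc.actual_result = "error message shown; no session created"
print(tc.case_id, "passed:", tc.actual_result == tc.expected_result)
```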
The term test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They might correctly be called a test specification. If sequence is specified, it can be called a test script, scenario, or procedure.
A sample testing cycle
Although testing varies between organizations, there is a cycle to testing:
1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle.
During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests will work.
2. Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
3. Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.
4. Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
5. Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
6. Retesting the Defects
Not all errors or defects reported must be fixed by a software development team. Some may be caused by errors in configuring the test software to match the development or production environment. Some defects can be handled by a workaround in the production environment. Others might be deferred to future releases of the software, or the deficiency might be accepted by the business user. There are yet other defects that may be rejected by the development team (of course, with due reason) if they deem them not to be genuine defects.
Code coverage
Main article: Code coverage
Code coverage is inherently a white box testing activity. The target software is built with special options or libraries and/or run under a special environment such that every function that is exercised (executed) in the program(s) is mapped back to the function points in the source code. This process allows developers and quality assurance personnel to look for parts of a system that are rarely or never accessed under normal conditions (error handling and the like) and helps reassure test engineers that the most important conditions (function points) have been tested.
Test engineers can look at code coverage test results to help them devise test cases and input or configuration sets that will increase the code coverage over vital functions. Two common forms of code coverage used by testers are statement (or line) coverage, and path (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches, or code decision points were executed to complete the test. They both report a coverage metric, measured as a percentage.
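The gap between the two metrics shows up even on a tiny function (hypothetical example, Python; coverage here is reasoned about by hand rather than with a particular tool):

```python
def clamp(x, low, high):
    if x < low:
        x = low      # statement A
    if x > high:
        x = high     # statement B
    return x

# clamp(-5, 0, 10) executes statement A but not B, so statement
# coverage is incomplete. Adding clamp(15, 0, 10) reaches 100%
# statement (line) coverage. Branch (edge) coverage additionally
# requires a call like clamp(5, 0, 10), which takes the *false*
# side of both conditions.
for args in [(-5, 0, 10), (15, 0, 10), (5, 0, 10)]:
    print(clamp(*args))
```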
Generally code coverage tools and libraries exact a performance and/or memory or other resource cost which is unacceptable to normal operations of the software. Thus they are only used in the lab. As one might expect there are classes of software that cannot be feasibly subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing.
There are also some sorts of defects which are affected by such tools. In particular some race conditions or similar real time sensitive operations can be masked when run under code coverage environments; and conversely some of these defects may become easier to find as a result of the additional overhead of the testing code.
Code coverage may be regarded as a more up-to-date incarnation of debugging in that the automated tools used to achieve statement and path coverage are often referred to as "debugging utilities". These tools allow the program code under test to be observed on screen whilst the program is executing, and commands and keyboard function keys are available to allow the code to be "stepped" through literally line by line. Alternatively it is possible to define pinpointed lines of code as "breakpoints" which will allow a large section of the code to be executed, then stopping at that point and displaying that part of the program on screen. Judging where to put breakpoints is based on a reasonable understanding of the program indicating that a particular defect is thought to exist around that point. The data values held in program variables can also be examined and in some instances (with care) altered to try out "what if" scenarios. Clearly use of a debugging tool is more the domain of the software engineer at a unit test level, and it is more likely that the software tester will ask the software engineer to perform this. However, it is useful for the tester to understand the concept of a debugging tool.
Testing certifications
· CSTE offered by the Quality Assurance Institute (QAI)
· CSTP offered by the International Institute for Software Testing
· ISEB offered by the Information Systems Examinations Board
· ISTQB offered by the International Software Testing Qualification Board
Quality assurance certifications
· CSQE offered by the American Society for Quality (ASQ)
· CSQA offered by the Quality Assurance Institute (QAI)
Roles in software testing
Software testing can be done by software testers. Until the 1950s the term software tester was used generally, but later it was also seen as a separate profession. Corresponding to the different periods and goals in software testing (see D. Gelperin and W.C. Hetzel above), different roles have been established: test lead/manager, tester, test designer, test automator/automation developer, and test administrator.
Participants of a testing team:
1. Tester
2. Developer
3. Business Analyst
4. Customer
5. Information Service Management
6. Senior Organization Management
7. Quality team
Frequently asked interview questions:
1. How do you implement a new testing process in a company where it must replace an existing process?
2. How do you implement a new testing process in a company?
3. What is the difference between a bug and an error? (An error is a human action that produces an incorrect result; a bug, or defect, is the resulting flaw in the software.)
4. What is the difference between positive and negative testing? Which do you prefer as a tester, and why?
5. Why is WinRunner mostly used rather than LoadRunner to test applications? (WinRunner is for testing functionality; LoadRunner is for performance testing.)
6. Given that time is short and you have to test and release the application, what approach do you follow? (Test the areas not covered in the previous build, and retest the fixed bugs.)
7. What are quality metrics? (Quality metrics are used to measure various parameters in software engineering.)
8. What are severity and priority, and what is the difference? (Severity defines the importance of a defect from a functional point of view; priority defines the order in which defects should be fixed.)

Interviews are a two-way street. Therefore, you need to participate in the discussion in a very active way, which includes asking questions. Many job seekers feel hesitant about seeming too "forward"; however, it’s imperative that you gain as much understanding as possible about the position and the business. After all, if you’re hired, you’ll be spending roughly 40 hours of each week there.
Below are some of the multitude of questions you may want to ask your interviewer(s). But don’t limit yourself to these queries; depending upon the job for which you’re vying, you may need to ask field- and/or managerial level-specific ones.

"What kind of company culture do you have?"
Obviously, any interviewer worth his or her salt is going to sugar-coat a description of the company. Yet you can still glean much information from the answer to this question by checking out the body language of the respondent.
For instance, does he or she hesitate? Is there a sudden nervousness in his or her voice? Do physical mannerisms such as paper shuffling, foot tapping, or knuckle cracking suddenly appear? If so, he or she might not like the company and may have difficulty in painting an attractive picture.
On the other hand, if the interviewer is quick to answer and seems quite sincere in his or her immediate response, you may have found a very happy employee.
"How long have you worked for the company?"
If your interviewer has been with the company less than six months, you may want to try and get company culture information from someone else. Although he or she may be an excellent worker, it’s unlikely that within half a year he or she could have gained a thorough understanding of the operations of the business.
However, if the person interviewing you has been around for a number of years, that will tell you that this might be a good place to work. After all, low turnover is a good sign of a productive, healthy corporation.
"Why did the last person leave this position?"
This is another question that might be met with a blank stare at first, so expect some umming and ahhing before the answer is stated. Most interviewers feel a little awkward when answering it, but how they respond will be extremely telling.
For example, if they badmouth the former employee, it’s an indication that this is a negative place to work. After all, pessimism during an interview is always a red flag that should warn the interviewee of danger.
Alternately, they may tell you that the employee left to start his or her own business or to take a more lucrative position elsewhere. Or, he or she may have moved away from the area, thus resulting in a resignation. These are all acceptable reasons to leave a job, and the way in which your interviewer speaks about the departed worker will speak volumes. Look for signs that the person who left will be missed by his or her former colleagues; that shows a certain amount of corporate loyalty.
"How long has this position been vacant?"
If the position you’re trying for has been open a long time, it may be that this company moves incredibly slowly or is extremely picky. Depending upon your style of working, those characteristics can be either positive or negative. For instance, do you like to get things done as soon as possible? If this company’s executives and managers cannot fill a position in a timely manner, this may be a place where paperwork gets stuck for weeks in a chain of command.
"Can you describe your ideal candidate for this position?"
Listen to the way in which your interviewer(s) responds to this question. Does he or she echo what you’ve said about yourself? Or does he or she simply pull out the job description and read it verbatim? What you’re looking for here are specifics in terms of reasonable expectations.
As an example, if the interviewer says, "I would like to see someone who regularly comes in early and works late to finish assignments," this company may not value family time. The same could be said for an answer such as, "I would like to hire someone who doesn’t question authority but jumps right in and does what he or she is expected to do." Again, this could indicate a dictatorial style of managing employees. Depending upon your preferred way of being supervised, it may or may not be to your liking.
"Could I see a little more of the office?"
This question may throw your interviewer for a bit of a loop, but getting to see more of the office will be a huge benefit for you if you’re offered the position.
If your interviewer declines your request, citing confidentiality reasons, don’t make a fuss. Some companies deal with information and data that shouldn’t be seen by those who are not employees.
If, on the other hand, your interviewer walks you around, make note of how the people you meet appear. Are they cheerful or working hard? Or do they seem bored and unmotivated? Do they greet you or avert their eyes? Are you welcomed or ignored? Met with skepticism or a handshake? Again, you’ll learn much about the culture of the company from a simple walk around the building or department.
"When do you expect to be making a decision?"
Chances are, your interviewer won’t have a specific answer to this question, but may be able to provide you with a timeframe, such as two weeks or a month. This way, you’ll know that if you haven’t heard anything by that point in time, it’s likely you will not. If that’s the case, make sure you move on to your next job possibility and prepare for future interviews.
Answer :
There are several interview techniques one can use during the actual interview.
Attention-Aware Interview Technique:
Limit the amount of talking you do. Interviewers have only a limited attention span; to be specific, you have only about 80 seconds of the interviewer's attention after you start replying to an interview question.
The background to this interview technique is:
As you start your reply to the interview question, your interviewer gives you full attention. As more time passes, that attention decreases rapidly; after 60 seconds, you have basically lost him or her. So aim to deliver the answer in less than 60 seconds. Delivering your highlight after 60 seconds will not necessarily reach the interviewer. If you are not sure you have given enough detail, ask: "Do you want me to expand more on this?"
Ask Questions Interview Technique :
Engage the interviewer by asking questions. Asking questions improves your rapport with the interviewer, and you will be more easily remembered after the interview. Interviewers are impressed by the interest you show in the job, sometimes even more than by the selling points you offer. If you can manage to get your interviewer talking about himself or herself, you are doing great!
Do you have any questions for us?
Answer :
It is important that you do have questions for the following reasons:
· In order to make your own assessment of the job, you need to find out as much as possible about what the job is really like and more about the organisation;
· To show your serious interests in the position and preparation for the interview;
· To further outline achievements and skills not covered so far in the interview. This is a good time to ask the employer what skills they consider to be the most critical for the position, and whether they see a gap in the skills you have to offer. This will give you an opportunity to identify skills and/or experiences which have not yet come up during the interview.
Body language?
Answer :
Handshake: A dry, firm handshake reflects a strong personality and is what most employers are looking for. Limp, sweaty hands are definitely a no. This is the first piece of body language in the interview that your interviewer will "read".
Hands: Do not exaggerate hand gestures when you are talking. Try answering an interview question in front of a mirror to help you understand how much you move your hands while talking.
Eye Contact: Maintain eye contact but do not stare. If you are uncomfortable with this kind of body language, look at the interviewer’s nose, which has the same effect. Do not let your eyes wander away from your interviewer.
Posture: Posture reflects energy, enthusiasm and self-control. Stand and sit erect. Slouching does not reflect a positive attitude in interview body language.
Fidgeting: Simple - do not fidget. Avoid playing with your hair, clicking pens and the like.
How much do you know about our organization?
Answer :
Your answer will reveal the amount of homework you have done before the interview. For example, if the company has products in the market place look for these at points of sale. Use your initiative to find out as much as you can about the organisation and during the interview cite ways in which you have gone about finding out this information.
1. What is a test bed?
Answer :
An execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The test plan for a project should enumerate the test bed(s) to be used.
2. What is a software requirements specification?
Answer :
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
3. What is soak testing?
Answer :
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear only after a large number of transactions have been executed.
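The idea above can be sketched in a few lines. This is a minimal illustration, not a real soak harness: `process_transaction` is a hypothetical stand-in for real work against the system under test, and the run is shortened for demonstration (a real soak run lasts hours or days).

```python
import time

def process_transaction(i):
    # Hypothetical stand-in for a real transaction against the system
    # under test (database write, HTTP request, etc.).
    return i * 2

def soak(n_transactions=100_000):
    # Push far more transactions through than a busy day would,
    # timing the run so throughput degradation can be spotted.
    start = time.perf_counter()
    for i in range(n_transactions):
        process_transaction(i)
    return n_transactions, time.perf_counter() - start

count, elapsed = soak()
print(count)  # 100000
```

In a real soak test you would also sample memory use and response times periodically, since the defects this technique targets only surface after sustained load.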
4. What is smoke testing?
Answer :
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
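A quick sketch of the idea, using a made-up shopping-cart module: each check exercises one major function just enough to prove it runs at all, leaving detailed behaviour to deeper test suites.

```python
# Hypothetical module under test -- stand-ins for real application code.
def create_cart():
    return []

def add_item(cart, item):
    cart.append(item)
    return cart

def total(cart):
    return sum(price for _name, price in cart)

def smoke_test():
    # One shallow pass through the major functions: if any of these
    # raise, the build "catches fire" and deeper testing is pointless.
    cart = create_cart()             # does the module load and start?
    add_item(cart, ("book", 10.0))   # does the core operation work?
    assert total(cart) == 10.0       # does it produce a sane result?
    return "PASS"

print(smoke_test())  # PASS
```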
5. What is scalability testing?
Answer :
Performance testing focused on ensuring the application under test gracefully handles increases in work load.
6. What is a release candidate?
Answer :
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
7. What is ramp testing?
Answer :
Continuously raising an input signal until the system breaks down.
8. What is a race condition?
Answer :
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
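The definition above can be shown concretely. In this sketch, the unguarded `counter += 1` is the race: it is a read-modify-write, so two threads can read the same value and each write back value+1, losing an increment. A lock moderates the simultaneous access; the demo runs only the locked version, whose result is deterministic.

```python
import threading

counter = 0        # shared resource, unprotected -- subject to the race
safe_counter = 0   # shared resource, guarded by `lock`
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read, add, write back -- not atomic

def safe_increment(n):
    global safe_counter
    for _ in range(n):
        with lock:            # only one thread inside at a time
            safe_counter += 1

threads = [threading.Thread(target=safe_increment, args=(50_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe_counter)  # 200000 -- always correct with the lock
```

Running four threads through `unsafe_increment` instead may or may not lose updates on a given run, which is exactly why race conditions are so hard to catch in testing.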
9. What is a quality system?
Answer :
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
10. What is a quality policy?
Answer :
The overall intentions and direction of an organization as regards quality as formally expressed by top management.
11. What is quality management?
Answer :
That aspect of the overall management function that determines and implements the quality policy.
12. What is a quality circle?
Answer :
A group of individuals with related interests who meet at regular intervals to consider problems or other matters related to the quality of the outputs of a process, to the correction of problems, or to the improvement of quality.
14. What is a quality audit?
Answer :
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
15. What is quality assurance?
Answer :
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
What is monkey testing?
Answer :
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.
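A minimal sketch of the idea, assuming a hypothetical stack-like API as the target: random operations are fired at it with no expected results beyond "it does not crash". The seed makes the run repeatable, which matters when a crash does turn up.

```python
import random

def monkey_test(iterations=1000, seed=42):
    # Fire random operations at the target; the only check is that
    # nothing raises. A fixed seed makes failures reproducible.
    rng = random.Random(seed)
    stack = []
    for _ in range(iterations):
        op = rng.choice(["push", "pop", "peek"])
        if op == "push":
            stack.append(rng.randint(-100, 100))
        elif op == "pop" and stack:
            stack.pop()
        elif op == "peek" and stack:
            _ = stack[-1]
    return "no crash"

print(monkey_test())  # no crash
```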

What is metric?
Answer :
A standard of measurement. Software metrics are statistics describing the structure or content of a program. A metric should be an objective measurement of something, such as the number of bugs per thousand lines of code.
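The bugs-per-lines-of-code example works out like this; the figures are made up purely for illustration.

```python
def defect_density(bugs, lines_of_code):
    # Defect density: bugs per thousand lines of code (KLOC) --
    # an objective, comparable measurement across modules or releases.
    return bugs / (lines_of_code / 1000)

print(defect_density(12, 8000))  # 1.5 bugs per KLOC
```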

What is localization testing?
Answer :
This term refers to testing software that has been adapted for a specific locality.

What is independent test Group?
Answer :
A group of people whose primary responsibility is software testing.
What is gorilla testing?
Answer :
Testing one particular module, functionality heavily.

What is gray box testing?
Answer :
A combination of Black Box and White Box testing methodologies, testing a piece of software against its specification but using some knowledge of its internal workings.

What is functional specification?
Answer :
A document that describes in detail the characteristics of the product with regard to its intended features.

What is functional decomposition?
Answer :
A technique used during planning, analysis and design; creates a functional hierarchy for the software.

What is exhaustive testing?
Answer :
Testing which covers all combinations of input values and preconditions for an element of the software under test.
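Exhaustive testing is only feasible when the input domain is tiny. As an illustration, a three-input majority-vote function (a made-up example) can be checked against all 2³ = 8 boolean combinations, so the test really does cover every possible input.

```python
from itertools import product

def majority(a, b, c):
    # True when at least two of the three inputs are True.
    return (a and b) or (a and c) or (b and c)

# Every combination of the three boolean inputs -- the whole domain.
cases = list(product([False, True], repeat=3))
for a, b, c in cases:
    expected = [a, b, c].count(True) >= 2
    assert majority(a, b, c) == expected

print(f"all {len(cases)} combinations checked")  # all 8 combinations checked
```

For realistic inputs (a 32-bit integer alone has over four billion values) exhaustive testing is impossible, which is why techniques like equivalence partitioning exist.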

What is equivalence partitioning?
Answer :
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

What is an equivalence class?
Answer :
A portion of a component’s input or output domains for which the component’s behavior is assumed to be the same from the component’s specification.
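The two definitions above fit together like this. In this sketch the component is a hypothetical age validator whose specification says valid ages are 18-65: the input domain splits into three equivalence classes, and one representative per class is assumed to stand for the whole class.

```python
def is_eligible(age):
    # Hypothetical component under test: ages 18-65 inclusive are valid.
    return 18 <= age <= 65

# One representative value per equivalence class, with expected result.
partitions = {
    "below range (invalid)": (10, False),  # stands for all ages < 18
    "in range (valid)":      (30, True),   # stands for all ages 18..65
    "above range (invalid)": (70, False),  # stands for all ages > 65
}

for name, (representative, expected) in partitions.items():
    assert is_eligible(representative) == expected

print(f"{len(partitions)} classes covered")  # 3 classes covered
```

Three test cases cover behaviour that would otherwise take an unbounded number of inputs to check exhaustively; boundary values (17, 18, 65, 66) are usually added on top of this.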

What is endurance testing?
Answer :
Checks for memory leaks or other problems that may occur with prolonged execution.

What is an emulator?
Answer :
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

What is depth testing?
Answer :
A test that exercises a feature of a product in full detail.

What is dependence testing?
Answer :
Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
What is a defect?
What is debugging?
What is data driven testing?
What is a data flow diagram?
What is cyclomatic complexity?
What is conversion testing?
What is context driven testing?
