Tuesday, September 23, 2008

Software Testing Dictionary



Acceptance Test. Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.

Accessibility testing. Testing that determines if software will be usable by people with disabilities.

Ad Hoc Testing. Testing carried out using no recognised test case design technique. [BCS]


Algorithm verification testing. A software development and test phase focused on the validation and tuning of key algorithms using an iterative experimentation process.[Scott Loveland, 2005]

Alpha Testing. Testing of a software product or system conducted at the developer's site by the customer.

Aperiodic bug. A transient bug that becomes active periodically (sometimes referred to as an intermittent bug). Because of their short duration, transient faults are often detected through the anomalies that result from their propagation. [Peter Farrell-Vinay 2008]

Artistic testing. Also known as Exploratory testing.

Assertion Testing. (NBS) A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
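A minimal sketch of the idea, using a hypothetical `transfer` function: assertions about the relationship between program variables are inserted into the code and evaluated as the program executes.

```python
def transfer(balance_a, balance_b, amount):
    """Hypothetical function under test; assertions encode its invariants."""
    total_before = balance_a + balance_b
    assert amount >= 0, "amount must be non-negative"
    balance_a -= amount
    balance_b += amount
    # Assertion about the relationship between program variables,
    # checked as the program executes:
    assert balance_a + balance_b == total_before, "money must be conserved"
    return balance_a, balance_b

print(transfer(100, 50, 30))  # -> (70, 80); both assertions hold
```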

Automated Testing. Software testing which is assisted with software technology that does not require operator (tester) input, analysis, or evaluation.

Audit.

  • (1) An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria. (IEEE)
  • (2) To conduct an independent review and examination of system records and activities in order to test the adequacy and effectiveness of data security and data integrity procedures, to ensure compliance with established policy and operational procedures, and to recommend any necessary changes. (ANSI)

    ABEND (Abnormal END). A mainframe term for a program crash. It is always associated with a failure code, known as an ABEND code. [Scott Loveland, 2005]



    Background testing. The execution of normal functional testing while the system under test (SUT) is exercised by a realistic workload. This workload is processed "in the background" as far as the functional testing is concerned. [Load Testing Terminology by Scott Stirling]

    Bandwidth testing. Testing a site with a variety of link speeds, both fast (internally connected LAN) and slow (externally, through a proxy or firewall, and over a modem); sometimes called slow link testing if the organization typically tests with a faster link internally (in that case, they are doing a specific pass for the slower line speed only).[Lydia Ash, 2003]

    Basis path testing. Identifying tests based on flow and paths of the program or system. [William E. Lewis, 2000]

    Basis test set. A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved. [BCS]

    Bug. Glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, elision [B. Beizer, 1990]; also defect, issue, problem.

    Beta Testing. Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

    Benchmarks. Programs that provide performance comparisons for software, hardware, and systems.

    Benchmarking. A specific type of performance test with the purpose of determining performance baselines for comparison. [Load Testing Terminology by Scott Stirling]

    Big-bang testing. Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system.[BCS]

    Black box testing. A testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behaviors of the program are evaluated and analyzed.

    Blink testing. What you do in blink testing is plunge yourself into an ocean of data-- far too much data to comprehend. And then you comprehend it. Don't know how to do that? Yes you do. But you may not realize that you know how.[James Bach's Blog]

    Bottom-up Testing. An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested. [BCS]

    Boundary Value Analysis (BVA). BVA is different from equivalence partitioning in that it focuses on "corner cases", values at or just outside the range defined by the specification. For example, if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. The values derived through BVA are also often used in stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.
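Using the -100 to +1000 range from the definition above, a boundary value test sketch (the `in_range` function is hypothetical) probes values just inside and just outside each edge:

```python
def in_range(value):
    """Hypothetical function under test: accepts -100..1000 inclusive."""
    return -100 <= value <= 1000

# Boundary value analysis: one case just inside and one just outside
# each boundary of the specified range.
boundary_cases = {-101: False, -100: True, 1000: True, 1001: False}
for value, expected in boundary_cases.items():
    assert in_range(value) == expected, value
print("all boundary cases pass")
```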

    Branch Coverage Testing. - Verify each branch has true and false outcomes at least once. [William E. Lewis, 2000]

    Breadth test. - A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail [Dorothy Graham, 1999]

    BRS - Business Requirement Specification



    Capability Maturity Model (CMM). - A description of the stages through which software organizations evolve as they define, implement, measure, control and improve their software processes. The model is a guide for selecting the process improvement strategies by facilitating the determination of current process capabilities and identification of the issues most critical to software quality and process improvement. [SEI/CMU-93-TR-25]

    Capture-replay tools. - Tools that give testers the ability to move some GUI testing away from manual execution by "capturing" mouse clicks and keyboard strokes into scripts, and then "replaying" that script to re-create the same sequence of inputs and responses on subsequent tests. [Scott Loveland, 2005]

    Cause Effect Graphing. (1) [NBS] Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. (2)A systematic method of generating test cases representing combinations of conditions. See: testing, functional.[G. Myers]

    Clean test. A test whose primary purpose is validation; that is, tests designed to demonstrate the software's correct working. (syn. positive test) [B. Beizer 1995]

    Clear-box testing. See White-box testing.

    Code audit. An independent review of source code by a person, team, or tool to verify compliance with software design documentation and programming standards. Correctness and efficiency may also be evaluated. (IEEE)

    Code Inspection. A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items. [G.Myers/NBS] Syn: Fagan Inspection

    Code Walkthrough. A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.[G.Myers/NBS]

    Coexistence Testing. Coexistence isn't enough. It also depends on load order, how virtual space is mapped at the moment, hardware and software configurations, and the history of what took place hours or days before. It's probably an exponentially hard problem rather than a square-law problem. [from Quality Is Not The Goal. By Boris Beizer, Ph. D.]

    Comparison testing. Comparing software strengths and weaknesses to competing products.

    Compatibility bug. A revision to the framework breaks a previously working feature: a new feature is inconsistent with an old feature, or a new feature breaks an unchanged application rebuilt with the new framework code. [R. V. Binder, 1999]

    Compatibility Testing. The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

    Composability testing. Testing the ability of the interface to let users do more complex tasks by combining different sequences of simpler, easy-to-learn tasks. [Timothy Dyck, 'Easy' and other lies, eWEEK April 28, 2003]

    Condition Coverage. A test coverage criteria requiring enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. Contrast with branch coverage, decision coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]

    Configuration. The functional and/or physical characteristics of hardware/software as set forth in technical documentation and achieved in a product. (MIL-STD-973)

    Configuration control. An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. (IEEE)

    Conformance directed testing. Testing that seeks to establish conformance to requirements or specification. [R. V. Binder, 1999]

    Cookbook scenario. A test scenario description that provides complete, step-by-step details about how the scenario should be performed. It leaves nothing to chance. [Scott Loveland, 2005]

    Coverage analysis. Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program. Tools that capture this data and provide reports summarizing relevant information have this feature. (NIST)

    CRUD Testing. Build CRUD matrix and test all object creation, reads, updates, and deletion. [William E. Lewis, 2000]
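A sketch of the idea with a hypothetical in-memory `UserStore`: every create, read, update, and delete cell of the CRUD matrix for the object is exercised.

```python
class UserStore:
    """Hypothetical object under test with Create/Read/Update/Delete operations."""
    def __init__(self):
        self._rows = {}
    def create(self, uid, name):
        self._rows[uid] = name
    def read(self, uid):
        return self._rows.get(uid)
    def update(self, uid, name):
        if uid in self._rows:
            self._rows[uid] = name
    def delete(self, uid):
        self._rows.pop(uid, None)

store = UserStore()
store.create(1, "Ada")
assert store.read(1) == "Ada"      # Create, then Read
store.update(1, "Grace")
assert store.read(1) == "Grace"    # Update
store.delete(1)
assert store.read(1) is None       # Delete
print("CRUD matrix covered for the user object")
```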



    Data-Driven testing. An automation approach in which the navigation and functionality of the test script is directed through external data; this approach separates test and control data from the test script. [Daniel J. Mosley, 2002]
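A minimal sketch, with a hypothetical `discount` function: the test data lives outside the script logic (here an in-memory CSV string stands in for an external data file) and drives the checks.

```python
import csv
import io

def discount(total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# Test and control data are separated from the test script; in practice
# this CSV would be an external file maintained independently of the code.
test_data = io.StringIO("total,expected\n50,50\n100,90\n200,180\n")
for row in csv.DictReader(test_data):
    actual = discount(float(row["total"]))
    assert actual == float(row["expected"]), row
print("all data-driven cases pass")
```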

    Data flow testing. Testing in which test cases are designed based on variable usage within the code.[BCS]

    Database testing. Check the integrity of database field values. [William E. Lewis, 2000]

    Defect. The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.

    Defect. Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures.[Robert M. Poston, 1996.]

    Defect. A flaw in the software with potential to cause a failure. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Defect Age. A measurement that describes the period of time from the introduction of a defect until its discovery. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Defect Density. A metric that compares the number of defects to a measure of size (e.g., defects per KLOC). Often used as a measure of defect quality. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Defect Discovery Rate. A metric describing the number of defects discovered over a specified period of time, usually displayed in graphical form. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Defect Removal Efficiency (DRE). A measure of the number of defects discovered in an activity versus the number that could have been found. Often used as a measure of test effectiveness. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Defect Seeding. The process of intentionally adding known defects to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of defects still remaining. Also called Error Seeding. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Defect Masked. An existing defect that hasn't yet caused a failure because another defect has prevented that part of the code from being executed. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Depth test. A test case that exercises some part of a system to a significant level of detail. [Dorothy Graham, 1999]

    Decision Coverage. A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Syn: branch coverage. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.[G.Myers]
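A small sketch of decision coverage (the `classify` function is hypothetical): three inputs are enough to give every decision both a true and a false outcome.

```python
def classify(x):
    """Hypothetical function under test with two decisions."""
    if x < 0:          # decision 1
        return "negative"
    if x == 0:         # decision 2
        return "zero"
    return "positive"

# Decision (branch) coverage: each `if` must evaluate both True and False.
# classify(-1): decision 1 True.
# classify(0):  decision 1 False, decision 2 True.
# classify(5):  decision 1 False, decision 2 False.
assert classify(-1) == "negative"
assert classify(0) == "zero"
assert classify(5) == "positive"
print("both outcomes of each decision exercised")
```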

    Design-based testing. Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms). [BCS]

    Dirty testing. Negative testing. [Beizer]

    Dynamic testing. Testing, based on specific test cases, by execution of the test object or running programs [Tim Koomen, 1999]



    End-to-End testing. Similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

    Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of input, but rather a single representative value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If the function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this would be considered a negative test assertion or condition.
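A sketch with a hypothetical `parse_age` function: one representative value is chosen per input class rather than testing every possible value.

```python
def parse_age(raw):
    """Hypothetical function under test: expects a non-negative integer string."""
    if not raw.isdigit():
        raise ValueError("not an integer")
    return int(raw)

# One representative per equivalence class:
assert parse_age("42") == 42        # class: valid integer (positive assertion)
try:
    parse_age("forty-two")          # class: non-integer input (negative assertion)
except ValueError:
    pass
else:
    raise AssertionError("non-integer input should be rejected")
print("one representative tested per input class")
```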

    Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests.[Robert M. Poston, 1996.]

    Errors: The amount by which a result is incorrect. Mistakes are usually the result of a human action. Human mistakes (errors) often result in faults contained in the source code, specification, documentation, or other product deliverables. Once a fault is encountered, the end result may be a program failure; the severity of that failure may be high, medium, or low.

    Error Guessing: Another common approach to black-box validation. Black-box testing is when anything other than the source code may be used for testing; it is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes a value either produced by a computerized random number generator, or an ad hoc value or test condition provided by an engineer.

    Error guessing. A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specially to expose them [from BS7925-1]

    Error seeding. The purposeful introduction of faults into a program to test effectiveness of a test suite or other quality assurance program. [R. V. Binder, 1999]
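The seeded faults also support a classic estimate of residual defects: if the suite finds a certain fraction of the seeded faults, roughly the same fraction of the real defects has probably been found. A hedged sketch (function name and numbers are illustrative):

```python
def estimate_total_defects(seeded, found_seeded, found_real):
    """Estimate total real defects from seeded-fault recapture:
    if the suite caught found_seeded of `seeded` planted faults and
    found_real genuine defects, estimate found_real * seeded / found_seeded."""
    if found_seeded == 0:
        raise ValueError("no seeded faults found; cannot estimate")
    return found_real * seeded / found_seeded

# Suite caught 8 of 10 seeded faults and 24 real defects:
estimate = estimate_total_defects(seeded=10, found_seeded=8, found_real=24)
print(estimate)  # 30.0 -> roughly 6 real defects likely remain undetected
```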

    Exception Testing. Identify error messages and exception-handling processes and the conditions that trigger them. [William E. Lewis, 2000]

    Exhaustive Testing.(NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.

    Exploratory Testing: An interactive process of concurrent product exploration, test design, and test execution. The heart of exploratory testing can be stated simply: The outcome of this test influences the design of the next test. [James Bach]



    Failure: A failure is a deviation from expectations exhibited by software and observed as a set of symptoms by a tester or user. A failure is caused by one or more defects. The Causal Trail. A person makes an error that causes a defect that causes a failure.[Robert M. Poston, 1996]

    Fix testing. Rerunning of a test that previously found the bug in order to see if a supplied fix works. [Scott Loveland, 2005]

    Follow-up testing. We vary a test that yielded a less-than-spectacular failure. We vary the operation, data, or environment, asking whether the underlying fault in the code can yield a more serious failure or a failure under a broader range of circumstances. [Measuring the Effectiveness of Software Testers, Cem Kaner, STAR East 2003]

    Formal Testing. (IEEE) Testing conducted in accordance with test plans and procedures that have been reviewed and approved by a customer, user, or designated level of management. Antonym: informal testing.

    Framework scenario. A test scenario definition that provides only enough high-level information to remind the tester of everything that needs to be covered for that scenario. The description captures the activity’s essence, but trusts the tester to work through the specific steps required.[Scott Loveland, 2005]

    Free Form Testing. Ad hoc or brainstorming using intuition to define test cases. [William E. Lewis, 2000]

    Functional Decomposition Approach. An automation method in which the test cases are reduced to fundamental tasks, navigation, functional tests, data verification, and return navigation; also known as Framework Driven Approach. [Daniel J. Mosley, 2002]

    Functional testing Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

    Function verification test (FVT). Testing of a complete, yet containable functional area or component within the overall software package. Normally occurs immediately after Unit test. Also known as Integration test. [Scott Loveland, 2005]



    Gray box testing. Tests involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of scope of view of the tester.[Cem Kaner]

    Gray box testing. Test designed based on the knowledge of algorithm, internal states, architectures, or other high -level descriptions of the program behavior. [Doug Hoffman]

    Gray box testing. Examines the activity of back-end components during test case execution. Two types of problems that can be encountered during gray-box testing are:

  •  A component encounters a failure of some kind, causing the operation to be aborted. The user interface will typically indicate that an error has occurred.
  •  The test executes in full, but the content of the results is incorrect. Somewhere in the system, a component processed data incorrectly, causing the error in the results.
    [Elfriede Dustin. "Quality Web Systems: Performance, Security & Usability."]

    Grooved Tests. Tests that simply repeat the same activity against a target product from cycle to cycle. [Scott Loveland, 2005]



    Heuristic Testing: An approach to test design that employs heuristics to enable rapid development of test cases.[James Bach]

    High-level tests. These tests involve testing whole, complete products [Kit, 1995]

    HTML validation testing. Specific to Web testing. This certifies that the HTML meets specifications and internal coding standards. 
    W3C Markup Validation Service, a free service that checks Web documents in formats like HTML and XHTML for conformance to W3C Recommendations and other standards.



    Incremental integration testing. Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

    Inspection. A formal evaluation technique in which software requirements, design, or code are examined in detail by person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

    Integration. The process of combining software components or hardware components or both into overall system.

    Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

    Integration Testing. Testing conducted after unit and feature testing. The intent is to expose faults in the interactions between software modules and functions. Either top-down or bottom-up approaches can be used. A bottom-up method is preferred, since it leads to earlier unit testing (step-level integration). This method is contrary to the big-bang approach, where all source modules are combined and tested in one step. The big-bang approach to integration should be discouraged.

    Interface Tests. Programs that provide test facilities for external interfaces and function calls. Simulation is often used to test external interfaces that currently may not be available for testing or are difficult to control. For example, hardware resources such as hard disks and memory may be difficult to control. Therefore, simulation can provide the characteristics or behaviors for a specific function.

    Internationalization testing (I18N) - testing related to handling foreign text and data within the program. This would include sorting, importing and exporting text and data, correct handling of currency and date and time formats, string parsing, upper and lower case handling, and so forth. [Clinton De Young, 2003]

    Interoperability Testing. Measures the ability of your software to communicate across the network on multiple machines from multiple vendors, each of whom may have interpreted a design specification critical to your success differently.

    Inter-operability Testing. True inter-operability testing concerns testing for unforeseen interactions with other packages with which your software has no direct connection. In some quarters, inter-operability testing labor equals all other testing combined. This is the kind of testing that I say shouldn't be done because it can't be done. [from Quality Is Not The Goal. By Boris Beizer, Ph. D.]


    Install/uninstall testing. Testing of full, partial, or upgrade install/uninstall processes.



    Keyword-Driven Testing. The approach developed by Carl Nagle of the SAS Institute that is offered as freeware on the Web; Keyword-Driven Testing is an enhancement to the data-driven methodology. [Daniel J. Mosley, 2002]


    Latent bug. A bug that has been dormant (unobserved) in two or more releases. [R. V. Binder, 1999]

    Lateral testing. A test design technique based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]

    Limits testing. See Boundary Condition testing.

    Load testing. Testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

    Load stress test. A test designed to determine how heavy a load the application can handle.

    Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.

    Load isolation test. The workload for this type of test is designed to contain only the subset of test cases that caused the problem in previous testing.

    Longevity testing. See Reliability testing.

    Long-haul Testing. See Reliability testing.



    Master Test Planning. An activity undertaken to orchestrate the testing effort across levels and organizations.[Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Memory leak testing. Testing the server components to see if memory is not properly referenced and released, which can lead to instability and the product's crashing.

    Model-Based Testing. Model-based testing takes the application and models it so that each state of each input, output, form, and function is represented. Since this is based on detailing the various states of objects and data, this type of testing is very similar to charting out states. Many times a tool is used to automatically go through all the states in the model and try different inputs in each to ensure that they all interact correctly.[Lydia Ash, 2003]
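A minimal sketch of the idea: a hypothetical login dialog is modeled as a finite-state machine, and a test case for every (state, event) transition is derived from the model rather than written by hand.

```python
class LoginDialog:
    """Hypothetical implementation under test."""
    def __init__(self):
        self.state = "logged_out"
    def handle(self, event):
        if self.state == "logged_out" and event == "login_ok":
            self.state = "logged_in"
        elif self.state == "logged_in" and event == "logout":
            self.state = "logged_out"
        # any other event leaves the state unchanged
        return self.state

# The model: expected next state for every (state, event) pair.
model = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_out", "login_bad"): "logged_out",
    ("logged_in", "logout"): "logged_out",
    ("logged_in", "login_ok"): "logged_in",
}

# Test cases are generated from the model, not written by hand.
for (state, event), expected in model.items():
    dialog = LoginDialog()
    dialog.state = state
    assert dialog.handle(event) == expected, (state, event)
print("implementation agrees with the model on all transitions")
```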

    Monkey Testing. Testers use the term monkey when referring to a fully automated testing tool. This tool doesn't know how to use any application, so it performs mouse clicks on the screen or keystrokes on the keyboard randomly. The test monkey is technically known to conduct stochastic testing, which is in the category of black-box testing. There are different types of monkey testing. [Visual Test 6 Bible by Thomas R. Arnold, 1998]

    Monkey Testing. (smart monkey testing) Input are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. That is, a given test requires an input vector with five components. In low IQ testing, these would be generated independently. In high IQ monkey testing, the correlation (e.g., the covariance) between these input distribution is taken into account. In all branches of smart monkey testing, the input is considered as a single event.[Visual Test 6 Bible by Thomas R. Arnold, 1998 ]

    Monkey Testing. (brilliant monkey testing) The inputs are created from a stochastic regular expression or stochastic finite-state machine model of user behavior. That is, not only are the values determined by probability distributions, but the sequence of values and the sequence of states in which the input provider goes is driven by specified probabilities.[Visual Test 6 Bible by Thomas R. Arnold, 1998 ]

    Monkey Testing. (dumb-monkey testing) Inputs are generated from a uniform probability distribution without regard to the actual usage statistics. [Visual Test 6 Bible by Thomas R. Arnold, 1998]

    Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.

    Migration Testing. Testing to see if the customer will be able to transition smoothly from a prior version of the software to a new one. [Scott Loveland, 2005]

    Mutation testing. A testing strategy where small variations to a program are inserted (a mutant), followed by execution of an existing test suite. If the test suite detects the mutant, the mutant is 'retired.' If undetected, the test suite must be revised. [R. V. Binder, 1999]
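A tiny sketch of a single mutation: flipping `>=` to `>` in a hypothetical `is_adult` function and checking whether an existing suite notices the difference.

```python
def is_adult(age):           # original program
    return age >= 18

def is_adult_mutant(age):    # mutant: '>=' changed to '>'
    return age > 18

def suite(fn):
    """Existing test suite; True if every check passes."""
    return fn(17) is False and fn(18) is True and fn(30) is True

assert suite(is_adult)             # suite passes on the original
assert not suite(is_adult_mutant)  # boundary case age=18 kills the mutant
print("mutant killed; the suite exercises the boundary")
```

Had the suite lacked the age=18 case, the mutant would survive and the suite would need revision, which is exactly the feedback the technique provides.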

    Multiple Condition Coverage. A test coverage criteria which requires enough test cases such that all possible combinations of condition outcomes in each decision, and all points of entry, are invoked at least once.[G.Myers] Contrast with branch coverage, condition coverage, decision coverage, path coverage, statement coverage.



    Negative test. A test whose primary purpose is falsification; that is tests designed to break the software[B.Beizer1995]


    Noncritical code analysis. Examines software elements that are not designated safety-critical and ensures that these elements do not cause a hazard. (IEEE)



    Orthogonal array testing: A technique that can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. Note that this is an old and proven technique: the orthogonal array was introduced by Plackett and Burman in 1946 and applied to testing by G. Taguchi, 1987.



    Orthogonal array testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]
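A sketch of why the technique saves effort: the classic L4 orthogonal array covers three two-level factors in four rows, yet every pair of columns still contains all four value combinations.

```python
from itertools import combinations, product

# The L4 orthogonal array for three two-level factors: four rows
# instead of the 2**3 = 8 rows exhaustive testing would need.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Every pair of columns covers all four value combinations:
for c1, c2 in combinations(range(3), 2):
    seen = {(row[c1], row[c2]) for row in L4}
    assert seen == set(product([0, 1], repeat=2)), (c1, c2)
print("4 rows give full pairwise coverage of 3 two-level factors")
```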

    Oracle. Test Oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test [from BS7925-1]
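A minimal sketch: a trusted independent computation (here Python's `math.isqrt` on perfect squares) acts as the oracle, producing the predicted outcomes to compare with the actual outcomes of a hypothetical `fast_sqrt` under test.

```python
import math

def fast_sqrt(x):
    """Hypothetical implementation under test."""
    return x ** 0.5

# The oracle is an independent, trusted mechanism that predicts the
# outcome; the test compares prediction against actual result.
for n in range(50):
    predicted = math.isqrt(n * n)   # oracle's predicted outcome
    actual = fast_sqrt(n * n)       # software under test
    assert actual == predicted, n
print("actual outcomes match the oracle's predictions")
```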



    Parallel Testing. Testing a new or an alternate data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison. Syn: parallel run.[ISO]

    Penetration testing. The process of attacking a host from outside to ascertain remote security vulnerabilities. A professional penetration tester is also responsible for recommending countermeasures for certain types of known attacks and vulnerabilities.

    Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]

    Performance testing can be undertaken to: 1) show that the system meets specified performance objectives, 2) tune the system, 3) determine the factors in hardware or software that limit the system's performance, and 4) project the system's future load-handling capacity in order to schedule its replacements. [Software System Testing and Quality Assurance, Beizer, 1984, p. 256]

    Postmortem. Self-analysis of interim or fully completed testing activities with the goal of creating improvements to be used in the future. [Scott Loveland, 2005]

    Preventive Testing. Building test cases based upon the requirements specification prior to the creation of the code, with the express purpose of validating the requirements. [Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]

    Prior Defect History Testing. Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]



    Qualification Testing. (IEEE) Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements. See: acceptance testing.

    Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.

    Quality Assurance (QA) Consists of planning, coordinating and other strategic activities associated with measuring product quality against external requirements and specifications (process-related activities).

    Quality Control (QC). Consists of monitoring, controlling, and other tactical activities associated with the measurement of product quality goals.

    Our definition of Quality: achieving the target (not conformance to requirements, as used by many authors) and minimizing the variability of the system under test.



    Race condition defect. Many concurrent defects result from data-race conditions. A data-race condition may be defined as two accesses to a shared variable, at least one of which is a write, with no mechanism used by either to prevent simultaneous access. However, not all race conditions are defects.
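The definition above can be sketched in Python; the counter and worker names are illustrative, not from any cited source. The unsafe worker performs a non-atomic read-modify-write on a shared variable, while the locked version prevents simultaneous access:

```python
import threading
import time

# Illustrative shared counter updated by several threads. The unsafe worker
# performs a non-atomic read-modify-write, so two threads can read the same
# old value and one update is lost -- a data race.
counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        tmp = counter        # read the shared variable
        time.sleep(0)        # yield to other threads, widening the race window
        counter = tmp + 1    # write back -- may overwrite another thread's update

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:           # the lock serializes the read-modify-write
            counter += 1

def run(worker, threads=4, n=1000):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

# run(safe_increment) always yields 4000; run(unsafe_increment) typically
# loses updates and yields less.
```

Note that the unsafe version may occasionally produce the correct total anyway, which is exactly why race conditions are hard to detect by testing alone.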

    Random-input testing. The process of testing a program by randomly selecting a subset of all possible input values. [Glenford J. Myers, 2004]
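A minimal sketch of the idea, using a hypothetical function under test (my_sort) and property checks rather than exact expected outputs:

```python
import random
from collections import Counter

# Sketch of random-input testing: generate random inputs and check properties
# that must hold for any input. my_sort is a stand-in for the code under test.
def my_sort(xs):
    return sorted(xs)

def random_input_test(trials=100):
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        out = my_sort(xs)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the output is a permutation of the input.
        assert Counter(out) == Counter(xs)
    return True
```

Because the inputs are random rather than designed, checking general properties of the output is usually more practical than predicting exact expected results.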

    Recovery testing. Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

    Regression Testing. Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure. Regression testing is often accomplished through the construction, execution and analysis of product and system tests.

    Regression Testing. Testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine whether the change has regressed other aspects of the program. [Glenford J. Myers, 1979]



    Reengineering. The process of examining and altering an existing system to reconstitute it in a new form. May include reverse engineering (analyzing a system and producing a representation at a higher level of abstraction, such as design from code), restructuring (transforming a system from one representation to another at the same level of abstraction), recommendation (analyzing a system and producing user and support documentation), forward engineering (using software products derived from an existing system, together with new requirements, to produce a new system), and translation (transforming source code from one language to another or from one version of a language to another).

    Reference testing. A way of deriving expected outcomes by manually validating a set of actual outcomes. A less rigorous alternative to predicting expected outcomes in advance of test execution. [Dorothy Graham, 1999]

    Reliability testing. Testing to verify the probability of failure-free operation of a computer program in a specified environment for a specified time.

    Reliability of an object is defined as the probability that it will not fail under specified conditions, over a period of time. The specified conditions are usually taken to be fixed, while the time is taken as an independent variable. Thus reliability is often written R(t) as a function of time t, the probability that the object will not fail within time t.
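As an illustration, under the commonly assumed exponential failure model with a constant failure rate (an assumption, not the only possible model), R(t) = exp(-lambda * t):

```python
import math

# Sketch of the exponential reliability model (a modeling assumption): with a
# constant failure rate lam, R(t) = exp(-lam * t) is the probability that the
# object has not failed by time t.
def reliability(lam, t):
    return math.exp(-lam * t)

# At t = 0 nothing has had a chance to fail, so R(0) = 1.
# At t = 1/lam (the mean time to failure), R is 1/e, about 0.368.
```

Other models (e.g., Weibull) allow the failure rate itself to change over time; the exponential model is simply the most common starting point.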

    Any computer user would probably agree that most software is flawed, and the evidence for this is that it does fail. All software flaws are designed in -- the software does not break, rather it was always broken. But unless conditions are right to excite the flaw, it will go unnoticed -- the software will appear to work properly. [Professor Dick Hamlet. Ph.D.]

    Range Testing. For each input identifies the range over which the system behavior should be the same. [William E. Lewis, 2000]

    Risk-Based Testing: Any testing organized to explore specific product risks.[James Bach website]

    Risk management. An organized process to identify what can go wrong, to quantify and assess the associated risks, and to implement/control the appropriate approach for preventing or handling each risk identified.

    Robust test. A test that compares a small amount of information, so that unexpected side effects are less likely to affect whether the test passes or fails. [Dorothy Graham, 1999]



    Sanity Testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is often crashing systems, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

    Scalability testing is a subtype of performance test where performance requirements for response time, throughput, and/or utilization are tested as load on the SUT is increased over time. [Load Testing Terminology by Scott Stirling ]

    Scenario-Based Testing. Scenario-based testing is one way to document the software specifications and requirements for a project. Scenario-based testing takes each user scenario and develops tests that verify that a given scenario works. Scenarios focus on the main goals and requirements. If the scenario is able to flow from the beginning to the end, then it passes.[Lydia Ash, 2003]

    (SDLC) System Development Life Cycle - the phases used to develop, maintain, and replace information systems. Typical phases in the SDLC are: Initiation Phase, Planning Phase, Functional Design Phase, System Design Phase, Development Phase, Integration and Testing Phase, Installation and Acceptance Phase, and Maintenance Phase. 
    The V-model describes the SDLC phases and maps them to various test levels. 

    Security Audit. An examination (often by third parties) of a server's security controls and, possibly, its disaster recovery mechanisms.

    Sensitive test. A test that compares a large amount of information, so that it is more likely to detect unexpected differences between the actual and expected outcomes of the test. [Dorothy Graham, 1999]

    Server log testing. Examining the server logs after particular actions or at regular intervals to determine if there are problems or errors generated or if the server is entering a faulty state.

    Service test. Test software fixes, both individually and bundled together, for software that is already in use by customers. [Scott Loveland, 2005]

    Skim Testing A testing technique used to determine the fitness of a new build or release of an AUT to undergo further, more thorough testing. In essence, a "pretest" activity that could form one of the acceptance criteria for receiving the AUT for testing [Testing IT: An Off-the-Shelf Software Testing Process by John Watkins]

    Smoke test describes an initial set of tests that determine if a new version of application performs well enough for further testing.[Louise Tamres, 2002]

    Sniff test. A quick check to see if any major abnormalities are evident in the software.[Scott Loveland, 2005 ]

    Specification-based test. A test whose inputs are derived from a specification.

    Spike testing. Testing performance or recovery behavior when the system under test (SUT) is stressed with a sudden and sharp increase in load; it should be considered a type of load test. [Load Testing Terminology by Scott Stirling]


    STEP (Systematic Test and Evaluation Process) Software Quality Engineering's copyrighted testing methodology.

    Stability testing. Testing the ability of the software to continue to function, over time and over its full range of use, without failing or causing failure. (see also Reliability testing)

    State-based testing. Testing with test cases developed by modeling the system under test as a state machine. [R. V. Binder, 1999]

    State Transition Testing. Technique in which the states of a system are first identified, and test cases are then written to test the triggers that cause a transition from one state to another. [William E. Lewis, 2000]
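A minimal sketch, using a hypothetical order-processing state machine: each test case names a starting state, a trigger, and the expected resulting state:

```python
# Sketch of state transition testing. The states and triggers below
# (hypothetical order states) model the system under test as a state machine.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
    ("new", "cancel"): "cancelled",
    ("paid", "cancel"): "cancelled",
}

def next_state(state, trigger):
    try:
        return TRANSITIONS[(state, trigger)]
    except KeyError:
        raise ValueError(f"illegal trigger {trigger!r} in state {state!r}")

# Each (start state, trigger, expected state) triple is one test case.
TEST_CASES = [
    ("new", "pay", "paid"),
    ("paid", "ship", "shipped"),
    ("new", "cancel", "cancelled"),
]

def run_state_tests():
    for start, trigger, expected in TEST_CASES:
        assert next_state(start, trigger) == expected
    return True
```

A thorough state-based test suite would also exercise illegal triggers (e.g., attempting to pay for a delivered order) and confirm the system rejects them.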

    Static testing. Analysis of source code, without executing it, to expose potential defects.

    Statistical testing. A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases. [BCS]

    Stealth bug. A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]

    Storage test. Study how memory and space are used by the program, either in resident memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the program will exceed them. [Cem Kaner, 1999, p55]

    Streamable Test cases. Test cases which are able to run together as part of a large group. [Scott Loveland, 2005]

    Stress / Load / Volume test. Tests that provide a high degree of activity, either using boundary conditions as inputs or multiple copies of a program executing in parallel as examples.

    Stress Test. A stress test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time the users would need for interacting with their Web browsers is ignored. A stress test helps determine, for example, the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down.[Load Testing by S. Asbock]

    Structural Testing. (1)(IEEE) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function. Contrast with functional testing. Syn: white-box testing, glass-box testing, logic-driven testing.

    System testing. Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

    System verification test. (SVT). Testing of an entire software package for the first time, with all components working together to deliver the project's intended purpose on supported hardware platforms. [Scott Loveland, 2005]


    Table testing. Test access, security, and data integrity of table entries. [William E. Lewis, 2000]

    Test Artifact Set. Captures and presents information related to the tests performed.

    Test Bed. An environment containing the hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test [IEEE 610].

    Test Case. A set of test inputs, executions, and expected results developed for a particular objective.

    Test conditions. The set of circumstances that a test invokes. [Daniel J. Mosley, 2002]

    Test Coverage. The degree to which a given test or set of tests addresses all specified test cases for a given system or component.


    Test Criteria. Decision rules used to determine whether a software item or software feature passes or fails a test.

    Test data. The actual (sets of) values used in the test or that are necessary to execute the test. Test data instantiates the condition being tested (as input or as pre-existing data) and is used to verify that a specific requirement has been successfully implemented (comparing actual results to the expected results). [Daniel J. Mosley, 2002]

    Test Documentation. (IEEE) Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.

    Test Driver. A software module or application used to invoke a test item and, often, provide test inputs (data), control and monitor execution. A test driver automates the execution of test procedures.

    Test-driven development (TDD). An evolutionary approach to development that combines test-first development, where you write a test before you write just enough production code to fulfill that test, with refactoring. [Beck 2003; Astels 2003] 
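The test-first rhythm can be sketched as follows; the leap-year function is a hypothetical example, not taken from the cited sources:

```python
# Minimal TDD sketch: the test is written first, then just enough production
# code to make it pass, then the code is refactored with the test as a safety net.

# Step 1: write a failing test for a function that does not exist yet.
def test_leap_year():
    assert is_leap(2000)       # divisible by 400
    assert not is_leap(1900)   # divisible by 100 but not by 400
    assert is_leap(2024)       # divisible by 4
    assert not is_leap(2023)

# Step 2: write just enough production code to make the test pass.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the test; it now passes, so refactoring can proceed safely.
test_leap_year()
```

In practice the cycle repeats: each new requirement starts as a new failing test, keeping the production code no larger than what the tests demand.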

    Test environment bug. Bug class indicating that some test environment is found to be insufficient to support some test type or inconsistent with its specification. [DOD STD 2167A].

    Test Harness. A system of test drivers and other tools to support test execution (e.g., stubs, executable test cases, and test drivers). See: test driver.

    Test Inputs. Artifacts from work processes that are used to identify and define actions that occur during testing. These artifacts may come from development processes that are external to the test group. Examples include Functional Requirements Specifications and Design Specifications. They may also be derived from previous testing phases and passed to subsequent testing activities.[Daniel J. Mosley, 2002]

    Test Idea: an idea for testing something.[James Bach]

    Test Item. A software item which is the object of testing.[IEEE]

    Test Log. A chronological record of all relevant details about the execution of a test. [IEEE]

    Test logistics: the set of ideas that guide the application of resources to fulfilling the test strategy.[James Bach]

    Test Plan. A high-level document that defines a testing project so that it can be properly measured and controlled. It defines the test strategy and organized elements of the test life cycle, including resource requirements, project schedule, and test requirements.

    Test Procedure. A document, providing detailed instructions for the [manual] execution of one or more test cases. [BS7925-1] Often called - a manual test script.

    Test Results. Data captured during the execution of test and used in calculating the different key measures of testing.[Daniel J. Mosley, 2002]

    Test Rig A flexible combination of hardware, software, data, and interconnectivity that can be configured by the Test Team to simulate a variety of different Live Environments on which an AUT can be delivered.[Testing IT: An Off-the-Shelf Software Testing Process by John Watkins ]

    Test Script. The computer-readable instructions that automate the execution of a test procedure (or portion of a test procedure). Test scripts may be created (recorded) or automatically generated using test automation tools, programmed using a programming language, or created by a combination of recording, generating, and programming. [Daniel J. Mosley, 2002]

    Test strategy. Describes the general approach and objectives of the test activities. [Daniel J. Mosley, 2002]

    Test Status. The assessment of the result of running tests on software.

    Test Stub. A dummy software component or object used (during development and testing) to simulate the behaviour of a real component. The stub typically provides test output.
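A minimal sketch of a stub, with hypothetical names: a dummy payment gateway stands in for the real component so the unit under test runs in isolation and its collaborator's behaviour is fully controlled:

```python
# Hypothetical stub for a payment gateway. It returns canned output instead of
# performing a real charge, and records the calls made to it for inspection.
class PaymentGatewayStub:
    def __init__(self, canned_response="approved"):
        self.canned_response = canned_response
        self.calls = []                      # record of every charge attempted

    def charge(self, amount):
        self.calls.append(amount)
        return self.canned_response          # canned output, no real side effects

# Unit under test: depends on a gateway, which may be real or stubbed.
def place_order(gateway, amount):
    return "order confirmed" if gateway.charge(amount) == "approved" else "order rejected"

stub = PaymentGatewayStub()
assert place_order(stub, 9.99) == "order confirmed"
assert stub.calls == [9.99]                  # the stub also verifies the interaction
```

Swapping the canned response lets the test drive error paths that would be awkward or impossible to trigger through the real component.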

    Test Suites. A test suite consists of multiple test cases (procedures and data) that are combined and often managed by a test harness.
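Using Python's unittest module as an illustration, several test cases can be combined into a suite and executed by the framework's runner, which here plays the role of a simple harness:

```python
import unittest

# Two test cases combined into a suite and run together by unittest's
# text runner.
class TestAbs(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(abs(5), 5)

    def test_negative(self):
        self.assertEqual(abs(-5), 5)

suite = unittest.TestSuite()
suite.addTest(TestAbs("test_positive"))
suite.addTest(TestAbs("test_negative"))

result = unittest.TextTestRunner(verbosity=0).run(suite)
# result.wasSuccessful() is True when every test in the suite passed.
```

Managing tests as suites lets the same cases be regrouped for different purposes, e.g., a quick smoke suite versus a full regression suite.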

    Test technique: test method; a heuristic or algorithm for designing and/or executing a test; a recipe for a test. [James Bach]

    Test Tree. A physical implementation of Test Suite. [Dorothy Graham, 1999]

    Testability. Attributes of software that bear on the effort needed for validating the modified software [ISO 8402]

    Testability Hooks. Those functions, integrated in the software that can be invoked through primarily undocumented interfaces to drive specific processing which would otherwise be difficult to exercise. [Scott Loveland, 2005]

    Testing. The execution of tests with the intent of demonstrating that the system and application under test does or does not perform according to the requirements specification.

    (TPI) Test Process Improvement. A method for baselining testing processes and identifying process improvement opportunities, using a static model developed by Martin Pol and Tim Koomen.

    Test Suite. The set of tests that when executed instantiate a test scenario.[Daniel J. Mosley, 2002]

    Test Workspace. Private areas where testers can install and test code in accordance with the project's adopted standards in relative isolation from the developers.[Daniel J. Mosley, 2002]

    Thread Testing. A testing technique used to test the business functionality or business logic of the AUT in an end-to-end manner, in much the same way a User or an operator might interact with the system during its normal use.[Testing IT: An Off-the-Shelf Software Testing Process by John Watkins ]

    Timing and Serialization Problems. A class of software defect, usually in multithreaded code, in which two or more tasks attempt to alter a shared software resource without properly coordinating their actions. Also known as Race Conditions.[Scott Loveland, 2005]

    Transient bug. A bug which is evident for a short period of time. see aperiodic bug. [Peter Farrell-Vinay 2008]

    Translation testing. See internationalization testing.

    Thrasher. A type of program used to test for data integrity errors on mainframe systems. The name is derived from the first such program, which deliberately generated memory thrashing (the overuse of a large amount of memory, leading to heavy paging or swapping) while monitoring for corruption. [Scott Loveland, 2005]




    Unit Testing. Testing performed to isolate and expose faults and failures as soon as the source code is available, regardless of the external interfaces that may be required. Oftentimes, the detailed design and requirements documents are used as a basis to compare how and what the unit is able to perform. White and black-box testing methods are combined during unit testing.

    Usability testing. Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer.



    Validation. The comparison between the actual characteristics of something (e.g., a product of a software project) and the expected characteristics. Validation is checking that you have built the right system.

    Variance. A variance is an observable and measurable difference between an actual result and an expected result.

    Verification. The comparison between the actual characteristics of something (e.g., a product of a software project) and the specified characteristics. Verification is checking that we have built the system right.

    Volume testing. Testing where the system is subjected to large volumes of data.[BS7925-1]



    Walkthrough. In the most usual form of the term, a walkthrough is a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc. 


    White Box Testing (glass-box). Testing done under a structural testing strategy; it requires complete access to the object's structure, that is, the source code. [B. Beizer, 1995, p8]

  • Software Testing-

    1. What is Software Testing ?

      Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine whether things happen when they shouldn't or don't happen when they should. It is oriented to 'detection'. 

    2. What is black box, white box, gray box testing?

      Black-box and white-box are test design methods. Black-box test design treats the system as a “black-box”, so it doesn’t explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the “box”, and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.

      While black-box and white-box are terms that are still in popular use, many people prefer the terms 'behavioral' and 'structural'. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this 'gray-box' or 'translucent-box' test design, but others wish we'd stop talking about boxes altogether.

    3. What are unit and integration testing?

      Unit - The smallest compilable component. A unit typically is the work of one programmer (at least in principle). As defined, it does not include any called sub-components (for procedural languages) or communicating components in general.
      Unit Testing: in unit testing called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.

      Integration - Two components (actually one or more) are said to be integrated when:
      a. They have been compiled, linked, and loaded together.
      b. They have successfully passed the integration tests at the interface between them.

      Thus, components A and B are integrated to create a new, larger component (A,B). In integration testing, we check that the output of A is correct, that the communication path between A and B works correctly, and that the input received by B is correct.
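The A-to-B check described above can be sketched with two hypothetical components: A parses, B formats, and the integration test verifies A's output, the handoff, and B's result:

```python
# Two hypothetical components: A parses "k=v;k=v" strings into a record,
# B renders a record as a report line.
def component_a(raw):
    return dict(pair.split("=") for pair in raw.split(";"))

def component_b(record):
    return ", ".join(f"{k}: {v}" for k, v in sorted(record.items()))

def test_integration():
    parsed = component_a("name=ada;role=tester")
    # Check that A's output is correct...
    assert parsed == {"name": "ada", "role": "tester"}
    # ...and that B correctly consumes what A actually produced.
    assert component_b(parsed) == "name: ada, role: tester"
    return True
```

Each component may pass its own unit tests and still fail here, e.g., if A emits keys in a format B does not expect; that interface mismatch is precisely what integration testing targets.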

    4. What's the difference between load and stress testing ?

      One of the most common but unfortunate misuses of terminology is treating "load testing" and "stress testing" as synonymous. The consequence of this semantic abuse is usually that the system is neither properly load tested nor subjected to a meaningful stress test.

      Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips, interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

      Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and in performance testing. The term 'load testing' by itself is too vague and imprecise to warrant use. For example, do you mean 'representative load,' 'overload,' or 'high load'? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

      A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, 'load testing' is merely testing at the highest transaction arrival rate in performance testing.

    5. What's the difference between QA and testing? 

      QA is more a preventive thing, ensuring quality in the company and therefore the product rather than just testing the product for software bugs.

      TESTING means 'quality control'
      QUALITY CONTROL measures the quality of a product 
      QUALITY ASSURANCE measures the quality of processes used to create a quality product.

    6. What is Software Quality Assurance? 

      Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

    7. Why does Software have bugs? 

      • Miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements). 
      • Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Windows-type interfaces, client-server and distributed applications, data communications, enormous relational databases, and sheer size of applications have all contributed to the exponential growth in software/system complexity. And the use of object-oriented techniques can complicate instead of simplify a project unless it is well-engineered. 
      • Programming errors - programmers, like anyone else, can make mistakes. 
      • Changing requirements - the customer may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of keeping track of changes may result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing business environments, continuously modified requirements may be a fact of life. In this case, management must understand the resulting risks, and QA and test engineers must adapt and plan for continuous extensive testing to keep the inevitable bugs from running out of control.
      • Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made. 
      • Poorly documented code - it's tough to maintain and modify code that is badly written or poorly documented; the result is bugs. In many organizations management provides no incentive for programmers to document their code or write clear, understandable code. In fact, it's usually the opposite: they get points mostly for quickly turning out code, and there's job security if nobody else can understand it ('if it was hard to write, it should be hard to read'). 
      • Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

    8. What is verification & validation? 

      Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

    9. What is a 'walkthrough'? 

      A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

    10. What's an 'inspection'? 

      An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. The result of the inspection meeting should be a written report.

    11. What kinds of testing are there? 

      • Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality. 
      • White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions. 
      • Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code. 
      • Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. 
      • Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.) 
      • System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system. 
      • Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state. 
      • Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing. 
      • Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. 
      • Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc. 
      • Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes. 
      • Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques. 
      • Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment. 
      • Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it. 
      • Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers. 
      • Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers. 

    12. What are 5 common problems in the software development process? 

      • Poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.
      • Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable. 
      • Inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash. 
      • Featuritis - requests to pile on new features after development is underway; extremely common. 
      • Miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed. 

    13. What are 5 common solutions to software development problems? 

      • Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements. 
      • Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out. 
      • Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing. 
      • Stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes later on. 
      • Proper communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; insure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified. 

    14. What is software 'quality'? 

      Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free. 

    15. What is SEI & CMM ? 

      • SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes. 

      • CMM = 'Capability Maturity Model', developed by the SEI. It's a model of 5 levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMM ratings by undergoing assessments by qualified auditors. 

        Level 1 (Initial) - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
        Level 2 (Repeatable) - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
        Level 3 (Defined) - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
        Level 4 (Managed) - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.
        Level 5 (Optimizing) - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

    16. What is the 'software life cycle'? 

      The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.

    17. Will automated testing tools make testing easier? 

      Possibly. For small projects, the time needed to learn and implement them may not be worth it. For larger projects, or on-going long-term projects they can be valuable.
      A common type of automated tool is the 'record/playback' type. For example, a tester could click through all combinations of menu choices, dialog box choices, buttons, etc. in an application GUI and have them 'recorded' and the results logged by a tool. The 'recording' is typically in the form of text based on a scripting language that is interpretable by the testing tool. If new buttons are added, or some underlying code in the application is changed, etc. the application can then be retested by just 'playing back' the 'recorded' actions, and comparing the logging results to check effects of the changes. The problem with such tools is that if there are continual changes to the system being tested, the 'recordings' may have to be changed so much that it becomes very time-consuming to continuously update the scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a difficult task. 

    18. What makes a good test engineer? 

      A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

    19. What's a 'test plan'? 

      A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
      • Title 
      • Identification of software including version/release numbers 
      • Revision history of document including authors, dates, approvals 
      • Table of Contents 
      • Purpose of document, intended audience 
      • Objective of testing effort 
      • Software product overview 
      • Relevant related document list, such as requirements, design documents, other test plans, etc. 
      • Relevant standards or legal requirements 
      • Traceability requirements 
      • Relevant naming conventions and identifier conventions 
      • Overall software project organization and personnel/contact-info/responsibilities 
      • Test organization and personnel/contact-info/responsibilities 
      • Assumptions and dependencies 
      • Project risk analysis 
      • Testing priorities and focus 
      • Scope and limitations of testing 
      • Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable 
      • Outline of data input equivalence classes, boundary value analysis, error classes 
      • Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems 
      • Test environment validity analysis - differences between the test and production systems and their impact on test validity. 
      • Test environment setup and configuration issues 
      • Software migration processes 
      • Software CM processes 
      • Test data setup requirements 
      • Database setup requirements 
      • Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
      • Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs 
      • Test automation - justification and overview 
      • Test tools to be used, including versions, patches, etc. 
      • Test script/test code maintenance processes and version control 
      • Problem tracking and resolution - tools and processes 
      • Project test metrics to be used 
      • Reporting requirements and testing deliverables 
      • Software entrance and exit criteria 
      • Initial sanity testing period and criteria 
      • Test suspension and restart criteria 
      • Personnel allocation 
      • Personnel pre-training needs 
      • Test site/location 
      • Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues 
      • Relevant proprietary, classified, security, and licensing issues. 
      • Open issues 
      • Appendix - glossary, acronyms, etc.

    20. What's a 'test case'? 

      A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. 
      Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible. 

    21. What should be done after a bug is found? 

      The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available. 

    22. What is 'configuration management'? 

      Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.

    23. How can it be known when to stop testing? 

      This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
      • Deadlines (release deadlines, testing deadlines, etc.) 
      • Test cases completed with certain percentage passed 
      • Test budget depleted 
      • Coverage of code/functionality/requirements reaches a specified point 
      • Bug rate falls below a certain level 
      • Beta or alpha testing period ends 

    Thursday, September 18, 2008

    Unix Commands

    1. Files

    1.1. Filename Substitution

    Wild Cards ? *

    Character Class (c is any single character) [c…]
    Range [c-c]
    Home Directory ~
    Home Directory of Another User ~user
    List Files in Current Directory ls [-l]
    List Hidden Files ls -a [-l]

    1.2. File Manipulation

    Display File Contents cat filename
    Copy cp source destination
    Move (Rename) mv oldname newname
    Remove (Delete) rm filename
    Create or Modify file pico filename

    1.3. File Properties

    Seeing Permissions ls -l filename
    Changing Permissions chmod nnn filename
    chmod c=p…[,c=p…] filename
    n, a digit from 0 to 7, sets the access level for the user
    (owner), group, and others (public), respectively. c is one of:
    u–user; g–group, o–others, or a–all. p is one of: r–read
    access, w–write access, or x–execute access.
    Setting Default Permissions umask ugo
    ugo is a (3-digit) number. Each digit restricts the default
    permissions for the user, group, and others, respectively.
    Changing Modification Time touch filename
    Making Links ln [-s] oldname newname
    Seeing File Types ls -F
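The permission commands above can be sketched on a throwaway file (filenames here are hypothetical):

```shell
# Scratch directory so nothing real is touched
cd "$(mktemp -d)"
touch notes.txt
chmod 640 notes.txt              # numeric: rw- r-- --- (owner, group, others)
chmod u=rwx,g=rx,o= notes.txt    # symbolic form: rwx r-x --- replaces the above
perms=$(ls -l notes.txt | cut -c1-10)
echo "$perms"                    # -rwxr-x---
```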

    1.4. Displaying a File with less

    Run less less filename
    Next line RETURN
    Next Page SPACE
    Previous line k
    Previous Page b

    1.5. Directories

    Change Directory cd directory
    Make New Directory mkdir directory
    Remove Directory rmdir directory
    Print Working (Show Current) Directory pwd

    2. Commands

    2.1. Command-line Special Characters

    Quotes and Escape
    Join Words "…"
    Suppress Filename, Variable Substitution '…'
    Escape Character \
    Separation, Continuation
    Command Separation ;
    Command-Line Continuation (at end of line) \

    2.2. I/O Redirection and Pipes

    Standard Output >
    (overwrite if exists) >!
    Appending to Standard Output >>
    Standard Input <
    Standard Error and Output >&
    Standard Error Separately
    ( command > output ) >& errorfile
    Pipes/ Pipelines command | filter [ | filter]
    Filters
    Word/Line Count wc [-l]
    Last n Lines tail [-n]
    Sort lines sort [-n]
    Multicolumn Output pr -t
    List Spelling Errors ispell

    2.3. Searching with grep

    grep Command grep "pattern" filename
    command | grep "pattern"
    Search Patterns
    beginning of line ^
    end of line $
    any single character .
    single character in list or range […]
    character not in list or range [^…]
    zero or more of preceding char. or pattern *
    zero or more of any character .*
    escapes special meaning \
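A short sketch of these search patterns in action, on a made-up log file:

```shell
cd "$(mktemp -d)"
printf 'error: disk full\nwarning: low memory\nerror at line 10\n' > app.log
grep '^error' app.log            # anchored at beginning of line: 2 matches
grep 'full$' app.log             # anchored at end of line: 1 match
grep 'l.w' app.log               # . matches any single character ("low")
n=$(grep -c '^error' app.log)    # -c counts matching lines
echo "$n"                        # 2
```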

    3. C-Shell Features.

    3.1 History Substitution

    Repeat Previous Command !!
    Commands Beginning with str !str
    Commands Containing str !?str[?]
    All Arguments to Prev. Command !*
    Word Designators
    All Arguments :*
    Last Argument :$
    First Argument :^
    n'th Argument :n
    Arguments x Through y :x-y
    Modifiers
    Print Command Line :p
    Substitute Command Line :[g]s/l/r/

    3.2 Aliases

    alias Command alias name 'definition'
    definition can contain escaped history substitution event
    and
    word designators as placeholders for command-line arguments.

    3.3. Variable Substitution

    Creating a Variable set var
    Assigning a Value set var = value
    Expressing a Value $var
    Displaying a Value echo $var
    value is a single word, an expression in quotes, or an
    expression that results in a single word after variable,
    filename and command substitution takes place.
    Assigning a List set var = (list)
    list is a space-separated list of words, or an expression that
    results in a space-separated list.
    Selecting the n'th Item $var[n]
    Selecting all Items $var
    Selecting a Range $var[x-y]
    Item Count $#var

    3.4 foreach Lists

    Start foreach Loop foreach var (list)
    foreach prompts for commands to repeat for each item in
    list (with >), until you type end. Within the loop, $var
    stands for the current item in list.

    3.5. Command Substitution

    Replace Command with its Output on Command Line `…`

    3.6 Job Control

    Run Command in the Background &
    Stop Foreground Job CTRL-Z
    List of Background Jobs jobs
    Bring Job Forward %[n]
    Resume Job in Background %[n] &
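A rough sketch of background execution (the CTRL-Z and %n keys need an interactive shell, but & and wait work in scripts too):

```shell
sleep 1 &                        # run the command in the background
pid=$!                           # PID of the most recent background job
jobs                             # list background jobs
wait "$pid"                      # block until that job finishes
rc=$?
echo "exit:$rc"                  # exit:0
```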

    4. Processes

    Listing ps [-[ef]]
    Terminating kill [-9] PID
    Timing time command
    time reports the elapsed (real), user, and system time consumed by
    command.

    5. Users

    Seeing Who is Logged In who
    w
    Seeing Your User Name whoami
    Standard Commands whereis file
    Aliases and Commands which command
    Describe Command whatis command
    Searching Out Files find dir -name name -print
    dir is a directory name within which to search.
    name is a filename to search for.

    6.2. Finding Changes

    Comparing Files diff leftfile rightfile
    diff prefixes a less-than (<) to selected lines from leftfile
    and a greater-than (>) to lines from rightfile.

    6.3. Automating Tasks

    Create a Makefile pico Makefile
    A makefile consists of macro definitions and targets.
    Test Makefile make -n [target]
    Run make make [target]

    6.4. Managing Disk Usage

    Check Quota quota -v
    Seeing Disk Usage df
    du -s

    6.5. Combining and Compressing Files

    Create a tarfile tar cf file.tar file1 file2 … fileN
    tar combines files but does not compress
    Create a zipfile zip archive.zip file1 … fileN
    Unzip a zipfile unzip archive.zip
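A quick sketch of combining and extracting with tar (scratch files, made up for the example):

```shell
cd "$(mktemp -d)"
echo one > a.txt; echo two > b.txt
tar cf files.tar a.txt b.txt     # c = create, f = archive filename; no compression
mkdir out
tar xf files.tar -C out          # x = extract, -C = into this directory
cat out/b.txt                    # two
```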

    7. Printing

    7.1 Formatting Output for Printing

    Paginate with Page Headers pr filename
    in n columns pr -n filename
    Format for Laser Printer tex document

    7.2 The Printer Queue

    Print a File lp [-dpr] filename
    lpcae filename
    Check Quota lpquot
    List Queue lpq
    Stop Job lprm

    8. Miscellaneous

    8.1 Miscellaneous Commands

    List Commands for Subject man -k subject
    Display Current Date and Time date
    Log off exit
    Electronic Mail pine
    Display Documentation man command

    8.2 Control Keys

    Abort Program CTRL-C
    Backspace (Delete Last Character) CTRL-H
    Pause Display on Screen CTRL-S
    Resume Display after CTRL-S CTRL-Q
    Send Job to Background CTRL-Z followed by bg

    Questions and Answers - Linux / Unix

    1. Q. How do you list files in a directory?

       A. ls  - list directory contents

       ls -l(-l  use a long listing format)

     

    2. Q. How do you list all files in a directory, including the hidden files?

       A. ls -a  (-a, do not hide entries starting with .)

     

    3. Q. How do you find out all processes that are currently running?

       A. ps -f (-f does full-format listing.)

     

    4. Q. How do you find out the processes that are currently running for a particular user?

       A. ps -au Myname   (-u by effective user ID (supports names)) (a - all users)

     

    5. Q. How do you kill a process?

       A. kill -9  8 (process_id 8) or kill -9  %7  (job number 7)

          kill -9 -1  (Kill all processes you can kill.)

          killall - kill processes by name (most useful: killall java)

     

     

    6. Q. What would you use to view contents of the file?

       A. less filename

          cat filename

          pg filename

          pr filename

          more filename

          Most useful is the command tail file_name, which shows the end of the log file.

     

    7. Q. What would you use to edit contents of the file?

       A. vi  screen editor or jedit, nedit or ex  line editor

     

    8. Q. What would you use to view contents of a large error log file?

       A. tail -10 file_name   (last 10 lines)

     

    9. Q. How do you log in to a remote Unix box?

       A. Using telnet server_name or ssh -l username server_name (ssh - OpenSSH SSH client (remote login program))

     

    10.Q. How do you get help on a UNIX terminal?

       A. man command_name

          info command_name (more information)

     

    11.Q. How do you list contents of a directory including all of its

      subdirectories, providing full details and sorted by modification time?

       A. ls -laRt

          -a all entries

          -R recurse into subdirectories

          -t sort by modification time

     

    12.Q. How do you create a symbolic link to a file (give some reasons of doing so)?

       A. ln -s /path/to/file1 link_name

    Links create pointers to the actual files, without duplicating the contents of

    the files. That is, a link is a way of providing another name to the same file.

    There are two types of links to a file:Hard link, Symbolic (or soft) link;
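Both link types can be sketched on a scratch file (names here are hypothetical):

```shell
cd "$(mktemp -d)"
echo hello > file1
ln file1 hardlink                # hard link: a second name for the same inode
ln -s file1 softlink             # soft link: a small file pointing at the name "file1"
cat softlink                     # hello
links=$(ls -l file1 | awk '{print $2}')
echo "$links"                    # 2: file1 and hardlink share one inode
```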

     

    13.Q. What is a filesystem?

       A. A filesystem is the hierarchy of directories, and the files within them, used to organize and store data.

          A file system is the primary means of file storage in UNIX. 

          File systems are made of inodes and superblocks.

     

    14.Q. How do you check a filesystem's usage?

       A. Use the df command, which reports used and available space on mounted filesystems.

     

    15.Q. How do you check the sizes of all users' home directories (one command)?

       A. du -s /home/*

     

    The du command summarizes disk usage by directory. It recurses through all subdirectories and shows disk usage by each subdirectory with a final total at the end.
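This can be sketched with a made-up /home-like layout (on a real system the command would be du -s /home/*, run as root so every directory is readable):

```shell
home=$(mktemp -d)                # stand-in for /home
mkdir "$home/alice" "$home/bob"
echo data > "$home/alice/file"
du -s "$home"/*                  # one size-and-path line per home directory
count=$(du -s "$home"/* | wc -l | tr -d ' ')
echo "$count"                    # 2
```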

     

       Q. And in the current directory?

       A. ls -ps   (-p marks directories with a trailing /; -s prints the size of each file)

     

    16.Q. How do you check for processes started by user 'pat'?

     

       A. ps -fu pat   (-f -full_format u -user_name )

     

    17.Q. How do you start a job on background?

     

       A. bg  %4  (job 4)

     

    18 Q. What utility would you use to replace a string '2001' for '2002' in a text file?

     

       A. sed, e.g. sed 's/2001/2002/g' file.txt (works on Linux and Unix)
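The standard tool for in-stream replacement is sed; a minimal sketch on a made-up file:

```shell
cd "$(mktemp -d)"
printf 'year=2001\ncopyright 2001-2001\n' > dates.txt
sed 's/2001/2002/g' dates.txt > updated.txt   # g = replace every occurrence on a line
head -1 updated.txt              # year=2002
first=$(head -1 updated.txt)
```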

     

    19. Q. What utility would you use to cut off the first column in a text file?

        A. cut (e.g. cut -d' ' -f2-) or awk
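Both approaches, sketched on a throwaway file:

```shell
cd "$(mktemp -d)"
printf '1 alpha x\n2 beta y\n' > cols.txt
cut -d' ' -f2- cols.txt                        # keep field 2 through the end
awk '{$1=""; sub(/^ /, ""); print}' cols.txt   # same result with awk
out=$(cut -d' ' -f2- cols.txt | head -1)
echo "$out"                                    # alpha x
```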

     

    20. Q. How to copy file into directory?

        A. cp  /tmp/file_name . (dot mean in the current directory)

     

    21. Q. How to remove directory with files?

        A. rm -rf directory_name

     

    22. Q. What is the difference between internal and external commands?

        A. Internal commands are built into the shell itself, while external

    commands are separate executables stored on the hard disk among the other utility programs.

     

    23. Q. List the three main parts of an operating system command:

        A. The three main parts are the command, options and arguments.

     

    24  Q. What is the difference between an argument and an option (or switch)?

        A. An argument is what the command should act on: it could be a filename,

    directory or name. An option is specified when you want to request additional

    information over and above the basic information each command supplies.

     

    25. Q. What is the purpose of online help?

        A. Online help provides information on each operating system command, the

    syntax, the options, the arguments with descriptive information.

    26.  Q. Name two forms of security.

        A.  Two forms of security are Passwords and File Security with permissions specified.

     

    27. Q. What command do you type to find help about the command who?

        A. $ man who

     

    28. Q. What is the difference between home directory and working directory?

        A. Home directory is the directory you begin at when you log into the

    system. Working directory can be anywhere on the system and it is where you are currently

    working.

     

    29. Q. Which directory is closer to the top of the file system tree, parent directory or current directory?

        A. The parent directory is above the current directory, so it is closer to

    the root or top of the

    file system.

     

    30. Q. Given the following pathname:

    $ /business/acctg/payable/supplier/april

    a) If you were in the directory called acctg, what would be the relative

    pathname name for the file called april?

    b) What would be the absolute pathname for april?

        A.

    a) $ payable/supplier/april

    b) $ /business/acctg/payable/supplier/april

     

    31. Q. Suppose your directory had the following files:

    help.1 help.2 help.3 help.4 help.01 help.02

    aid.01 aid.02 aid.03 back.1 back.2 back.3

    a) What is the command to list all files ending in 2?

    b) What is the command to list all files starting in aid?

    c) What is the command to list all "help" files with one character extension?

         A.

    a) ls *2

    b) ls aid.*

    c) ls help.?

     

    32. Q. What are two subtle differences in using the more and the pg commands?

        A.  With the more command you display another screenful by pressing

     the spacebar, with pg you press the return key.

          The more command returns you automatically to the UNIX

    shell when completed, while pg waits until you press return.

     

    33. Q. When is it better to use the more command rather than cat command?

        A. It is sometimes better to use the more command when you are viewing

     a file that will display over one screen.

     

    34. Q. What are two functions the move mv command can carry out?

        A. The mv command moves files and can also be used to rename a file or directory.

     

    35. Q. Name two methods you could use to rename a file.

        A. Two methods that could be used:

    a. use the mv command

    b. copy the file and give it a new name and then remove the original file if no longer needed.
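Both methods can be sketched on a scratch file:

```shell
cd "$(mktemp -d)"
echo data > old.txt
mv old.txt renamed.txt            # method a: rename in place
cp renamed.txt copy.txt           # method b, step 1: copy under the new name
rm renamed.txt                    #            step 2: remove the original
ls                                # copy.txt
```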

     

    36. The soccer league consists of boy and girl teams. The boy file names begin

    with B, the girl teams begin with G. All of these files are in one directory

    called "soccer", which is your current directory:

    Bteam.abc Bteam.001 Bteam.002 Bteam.004

    Gteam.win Gteam.001 Gteam.002 Gteam.003

    Write the commands to do the following:

    a) rename the file Bteam.abc to Bteam.003

    b) erase the file Gteam.win after you have viewed the contents of the file

    c) make a directory for the boy team files called "boys", and one for the girl team files called "girls"

    d) move all the boy teams into the "boys" directory

    e) move all the girl teams into the "girls" directory

    f) make a new file called Gteam.004 that is identical to Gteam.001

    g) make a new file called Gteam.005 that is identical to Bteam.002

      A.

    a) mv Bteam.abc Bteam.003

    b) cat Gteam.win -or- more Gteam.win

    rm Gteam.win

    c) mkdir boys

    mkdir girls

    d) mv Bteam* boys

    e) mv Gteam* girls

    f) cd girls

    cp Gteam.001 Gteam.004

    g) There are several ways to do this. Remember that we are currently in the directory

    /soccer/girls.

    cp ../boys/Bteam.002 Gteam.005

    or

    cd ../boys

    cp Bteam.002 ../girls/Gteam.005

     

     

    37. Q. Draw a picture of the final directory structure for the "soccer"

    directory, showing all the files and directories.

     

     

    38. Q. What metacharacter is used to do the following:

    1.1 Move up one level higher in the directory tree structure

    1.2 Specify all the files ending in .txt

    1.3 Specify one character

    1.4 Redirect input from a file

    1.5 Redirect the output and append it to a file

        A.

    1.1 double-dot or ..

    1.2 asterisk or *

    1.3 question mark or ?

    1.4 the less-than sign: <

    1.5 double greater-than sign: >>

     

    39. Q. List all the files beginning with A

        A. To list all the files beginning with A command: ls A*

     

     

    40. Q. Which of the quoting or escape characters allows the dollar sign ($) to retain its special meaning?

        A. The double quote (") allows the dollar sign ($) to retain its special meaning.

    Both the backslash (\) and single quote (') would remove the special meaning of the dollar sign.

     

    41. Q. What is a faster way to do the same command?

    mv file0.txt newdir

    mv file1.txt newdir

    mv file2.txt newdir

    mv file3.txt newdir

        A. A shortcut method would be:   mv file?.txt newdir

     

     

    42. Q. List two ways to create a new file:

        A.

    a. Copy a file to make a new file.

    b. Use the output operator e.g. ls -l > newfile.txt

     

    43. Q. What is the difference between > and >> operators?

        A. The operator > either overwrites the existing file (WITHOUT WARNING) or creates a new file.

    The operator >> either adds the new contents to the end of an existing file or creates a new file.
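The difference, demonstrated on a scratch file:

```shell
cd "$(mktemp -d)"
echo first  > log.txt            # > creates log.txt
echo second > log.txt            # > truncates: only "second" remains
echo third >> log.txt            # >> appends after "second"
cat log.txt
lines=$(wc -l < log.txt | tr -d ' ')
echo "$lines"                    # 2
```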

     

    44. Write the command to do the following:

    44.1 Redirect the output from the directory listing to a printer.

    44.2 Add the file efg.txt to the end of the file abc.txt.

    44.3 The file testdata feeds information into the file called program

    44.4 Observe the contents of the file called xyz.txt using MORE.

    44.5 Observe a directory listing that is four screens long.

         A.

    44.1 ls | lpr

    44.2 cat efg.txt >> abc.txt

    44.3 program < testdata

    44.4 more < xyz.txt

    44.5 ls | more

     

     

     

    45. Q. How do you estimate file space usage

        A. Use du command (Summarize disk usage of each FILE, recursively for

    directories.) It is good to use the arguments du -hs

    (-h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G)  

    (-s, --summarize display only a total for each argument)

     

    46. Q. How can you see all mounted drives?

        A. mount -l

     

    47. Q. How can you find a path to the file in the system?

        A. locate file_name (locate - list files in databases that match a pattern)

     

    48. Q. What Linux  HotKeys do you know?

        A. Ctrl-Alt-F1          Exit to command prompt

           Ctrl-Alt-F7 or F8    Takes you back to the KDE desktop from the command prompt

           Ctrl-Alt-Backspace   Restart the X server

           Ctrl-Alt-D           Show desktop

     

    49. Q. What can you tell about the tar Command?

        A. The tar program is an immensely useful archiving utility. It can combine

    an entire directory tree into one large file suitable for transferring or

    compression.

     

    50.  Q. What types of files you know?

         A. Files come in eight flavors:

    Normal files

    Directories

    Hard links

    Symbolic links

    Sockets

    Named pipes

    Character devices

    Block devices

     

    51.  Q. How to copy files from one PC to another on the same network?

         A. Use the following command: scp your_file your_login@your_IP:

    example: copy the .conf file from your PC to the computer alex -

     scp /etc/X11/xorg.conf alex@10.0.10.169:

     

    52.  Q. Please describe information below:

     

    -rw-rw-r--    1 dotpc    dotpc         102 Jul 18  2003 file.buf

    drwxr-xr-x    9 dotpc    dotpc        4096 Oct 21 09:34 bin

    lrwxrwxrwx    1 dotpc    dotpc          20 Mar 21 15:00 client -> client-2.9.5

    drwxrwxr-x   11 dotpc    dotpc        4096 Sep  2  2005 client-2.8.9

    drwxrwxr-x    7 dotpc    dotpc        4096 Dec 14 12:13 data

    drwxr-xr-x   12 dotpc    dotpc        4096 Oct 21 09:41 docs

    drwxr-xr-x    5 dotpc    dotpc        4096 Dec  7 14:22 etc

    drwxr-xr-x   11 dotpc    dotpc        4096 Mar 21 15:54 client-2.9.5

    -rw-r--r--    1 dotpc    dotpc      644836 Mar 22 09:53 client-2.9.5.tar.gz

     

         A. This is the output of the command $ ls -l.

    There are two regular files, six directories, and one symbolic link to the client-2.9.5 directory.

    Each line also shows the link count, owner, group, size in bytes, and date of last modification.

     

     

    53. Q. If you would like to run two commands in sequence what operators you can use?

     

         A. ; or && - the difference is:

    if you separate commands with ; the second command runs regardless of whether the first succeeded.

    if you separate commands with && the second command runs only if

     the first succeeded.
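The two separators side by side, using false (always fails) and true (always succeeds):

```shell
out1=$(false; echo ran)          # ; ignores the failure: "ran" is printed
out2=$(false && echo ran)        # && skips the second command: nothing printed
out3=$(true && echo ran)         # && runs it after a success: "ran" is printed
echo "1:$out1 2:$out2 3:$out3"   # 1:ran 2: 3:ran
```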

     

    54.  Q. How you will uncompress the file?

          A. Use tar command (The GNU version of the tar archiving utility):

    tar -zxvf file_name.tar.gz
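A round trip with invented paths shows the matching create step (c) next to the extract step (x):

```shell
# Sketch: archive a directory, then extract it somewhere else.
rm -rf /tmp/tar_demo && mkdir -p /tmp/tar_demo/src /tmp/tar_demo/dst
echo "hello" > /tmp/tar_demo/src/note.txt
tar -czf /tmp/tar_demo/src.tar.gz -C /tmp/tar_demo src   # c=create, z=gzip, f=file
tar -xzf /tmp/tar_demo/src.tar.gz -C /tmp/tar_demo/dst   # x=extract
cat /tmp/tar_demo/dst/src/note.txt
```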

     

    55. Q. How do you execute a program or script named my_script in your current directory?

        A. ./my_script

     

    56. Q. How do you find the current time configuration in the file my_new.cfg?

        A. grep time my_new.cfg

    Grep searches the named input files (or standard input if no files are

    named, or the file name - is given) for lines containing a match to

    the given pattern.
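A small sketch with an invented my_new.cfg (the contents below are made up) shows the plain match plus two common options:

```shell
# Create a throwaway config file and search it.
printf 'host=db1\ntime=12:00\nretries=3\n' > /tmp/my_new.cfg
grep time /tmp/my_new.cfg       # prints the matching line: time=12:00
grep -n time /tmp/my_new.cfg    # -n prefixes each match with its line number
grep -c time /tmp/my_new.cfg    # -c prints the count of matching lines
```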

     

    Q. What does grep stand for?

    A. Global Regular Expression Print, from the ed editor command g/re/p (globally search for a regular expression and print the matching lines).

     

    57. Q. What does the top command display?

         A. Top provides an ongoing, real-time look at processor activity.

           It displays a listing of the most CPU-intensive tasks on the

           system, and can provide an interactive interface for

           manipulating processes. (Press q to quit.)

     

    58. Q. How can you find the network configuration on Linux?

        A. By using /sbin/ifconfig

    If no arguments are given, ifconfig displays the status of the currently

    active interfaces. If a single interface argument is given, it displays

    the status of the given interface only; if a single -a argument is

    given, it displays the status of all interfaces, even those that are

    down. Otherwise, it configures an interface.

     

    59.  Q. How do you find the difference between two configuration files on the same server?

         A. Use the diff command, which compares files line by line:

    diff -u /usr/home/my_project1/etc/ABC.conf /usr/home/my_project2/etc/ABC.conf
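With two invented files that differ in one line, -u (unified format) marks removed lines with - and added lines with +:

```shell
# diff exits with status 1 when the files differ, so || true keeps going.
printf 'port=80\nmode=dev\n'  > /tmp/ABC1.conf
printf 'port=80\nmode=prod\n' > /tmp/ABC2.conf
diff -u /tmp/ABC1.conf /tmp/ABC2.conf || true
```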

     

    60.  Q. What is the best way to see the end of a log file such as logfile.log?

         A. Use the tail command, which outputs the last part of a file:

    tail -n N logfile.log (prints the last N lines instead of the default 10)
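A sketch with an invented log file; the -f (follow) option appears only as a comment because it runs until interrupted:

```shell
# Build a 20-line file and show its tail.
seq 1 20 | sed 's/^/line /' > /tmp/logfile.log
tail -n 3 /tmp/logfile.log         # prints line 18, line 19, line 20
# tail -f /tmp/logfile.log         # would follow the file as it grows (Ctrl-C quits)
```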

     

    61.  Q. Please write a loop for removing all files in the current directory whose names contain the word 'log'.

         A. for i in *log*; do rm "$i"; done
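A quick check in a scratch directory (file names are invented) shows why quoting "$i" matters when a name contains a space:

```shell
rm -rf /tmp/rm_demo && mkdir -p /tmp/rm_demo && cd /tmp/rm_demo
touch "app log.txt" error.log keep.txt
for i in *log*; do rm -- "$i"; done   # quoted, so "app log.txt" is one argument
ls                                    # only keep.txt remains
```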

     

    62.  Q. How do you switch to the previously used directory?

         A. cd -