Integration testing using a real project as an example.

Join the “koon.ru” community!

From my university course on programming technologies I learned the following classification of testing types (the criterion being the degree of code isolation):

  • Unit testing - testing one module in isolation.
  • Integration Testing - testing a group of interacting modules.
  • System Testing - testing the system as a whole.
The classification is good and clear. In practice, however, each type of testing has its own characteristics, and if they are not taken into account, testing becomes burdensome and is not done as well as it should be. Here I have collected approaches to the practical application of the various types of testing. Since I write in .NET, the links point to the corresponding libraries.

Unit testing

Unit (module) testing is the most understandable kind for a programmer. In essence, it is testing the methods of one program class in isolation from the rest of the program.

Not every class is easy to cover with unit tests. When designing, you need to take testability into account and make class dependencies explicit. To guarantee testability you can use the TDD methodology, which prescribes writing a test first and only then the implementation of the method under test; the architecture then comes out testable. Untangling dependencies can be done with Dependency Injection: each dependency is then explicitly associated with an interface, and it is explicitly determined how the dependency is injected - into a constructor, into a property, or into a method.
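Constructor injection can be sketched as follows. The article's context is .NET, but the idea is language-neutral, so this is a minimal Python sketch; all class names here are invented for illustration.

```python
import datetime

class SystemClock:
    """Production dependency: the real clock."""
    def now(self):
        return datetime.datetime.now()

class Greeter:
    def __init__(self, clock):
        # The dependency is explicit and injected through the constructor,
        # so a test can substitute a controllable fake.
        self._clock = clock

    def greeting(self):
        return "Good morning" if self._clock.now().hour < 12 else "Good afternoon"

class FixedClock:
    """Test double returning a predetermined time."""
    def __init__(self, fixed):
        self._fixed = fixed
    def now(self):
        return self._fixed

# A unit test controls time instead of depending on the real clock:
assert Greeter(FixedClock(datetime.datetime(2024, 1, 1, 9, 0))).greeting() == "Good morning"
```

Because the dependency is injected rather than created inside `Greeter`, the test never touches the real system clock.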

There are special frameworks for unit testing, for example NUnit or the test framework from Visual Studio 2008. To test classes in isolation there are special mock frameworks, for example Rhino Mocks. Given an interface, they automatically create stubs for dependency classes and give them the required behavior.
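To show what a mock framework does without reproducing the Rhino Mocks API, here is the same idea with Python's standard `unittest.mock` standing in; `OrderService` and `items_of` are invented names for illustration.

```python
# unittest.mock plays the role the article assigns to Rhino Mocks: it
# fabricates a stub for a dependency and gives it the required behavior.
from unittest.mock import Mock

class OrderService:
    def __init__(self, repository):
        self._repository = repository
    def total(self, order_id):
        return sum(item["price"] for item in self._repository.items_of(order_id))

# The repository is never implemented; the mock supplies the behavior.
repo = Mock()
repo.items_of.return_value = [{"price": 10}, {"price": 32}]

assert OrderService(repo).total(order_id=7) == 42
# The mock also records how it was used, so the interaction can be verified:
repo.items_of.assert_called_once_with(7)
```

The test thus exercises `OrderService` in complete isolation from any real data store.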

Many articles have been written on unit testing. I really like the MSDN article Write Maintainable Unit Tests That Will Save You Time And Tears, which explains well and clearly how to create tests that do not become burdensome to maintain over time.

Integration testing

Integration testing is, in my opinion, the most difficult to understand. There is a definition - it is testing the interaction of several classes performing some work together. However, the definition does not say how to test. You can, of course, build on the other types of testing, but each such route has pitfalls.

If we approach it as unit testing in which dependencies are not replaced by mock objects, we run into problems. Good coverage requires a great many tests, since the number of possible combinations of interacting components grows polynomially. In addition, such unit tests check exactly how the interaction is carried out (see white-box testing), so after a refactoring in which some interaction is extracted into a new class, the tests fail. A less invasive method is needed.

Nor can integration testing be approached as a more detailed form of system testing. In that case, on the contrary, there will be too few tests to check all the interactions used in the program. System testing is simply too high-level.

I have come across a good article on integration testing only once - Scenario Driven Tests. After reading it and Ayende's book DSLs in Boo: Domain-Specific Languages in .NET, I got an idea of how to arrange integration testing.

The idea is simple. We have input data, and we know how the program should behave on it. Let's write this knowledge into a text file. This will be a specification of the test data, stating what results are expected from the program. Testing then consists of checking what the program actually produces against the specification.

I will illustrate with an example. The program converts one document format into another. The conversion is tricky and involves a lot of math. The customer provided a set of typical documents that he needed converted. For each such document we write a specification recording the various intermediate results that our program should reach during conversion.

1) Let’s say the documents sent have several sections. Then in the specification we can state that the document being parsed must have sections with the given names:

$SectionNames = Introduction, Article text, Conclusion, Literature

2) Another example. During conversion, geometric figures need to be split into primitives. The split is considered successful if the primitives together completely cover the original figure. From the documents sent we select various figures and write specifications for them. The fact that a figure is covered by its primitives can be recorded as follows:

$IsCoverable = true

Clearly, checking such specifications requires an engine that reads them and verifies that the program's behavior complies. I wrote such an engine and was pleased with the approach. I will release the engine as open source soon. (UPD: released)
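The author's engine is not shown, but its core can be sketched in a few lines of Python. This is an assumption-laden toy: the `$Name = value` format is taken from the examples above, and the comparison against a plain dictionary of results is invented for illustration.

```python
def parse_spec(text):
    """Read '$Name = value' lines from a specification file."""
    spec = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("$") and "=" in line:
            name, _, value = line[1:].partition("=")
            spec[name.strip()] = value.strip()
    return spec

def check(spec, actual):
    """Return a list of mismatches between the spec and actual results."""
    failures = []
    for name, expected in spec.items():
        got = str(actual.get(name))
        if got != expected:
            failures.append(f"{name}: expected {expected!r}, got {got!r}")
    return failures

spec = parse_spec("""
$SectionNames = Introduction, Article text, Conclusion, Literature
$IsCoverable = true
""")
# 'actual' stands for whatever the conversion program really produced:
actual = {"SectionNames": "Introduction, Article text, Conclusion, Literature",
          "IsCoverable": "true"}
assert check(spec, actual) == []
```

An empty failure list means the program matched the specification; each mismatch names the offending value, which keeps the test output readable.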

This type of testing is integration testing, since the interaction code of several classes is exercised. Moreover, only the result of the interaction matters, not the details and order of the calls, so the tests are not affected by refactoring. There is no over- or under-testing: only the interactions that occur when processing real data are tested. The tests themselves are easy to maintain, because the specification is easy to read and easy to change to suit new requirements.

System testing

System testing is testing the program as a whole. For small projects this is, as a rule, manual testing: launch it, click around, make sure it (doesn't) work. It can be automated, and there are two approaches to automation.

The first approach is to use a variation of the MVC pattern - Passive View (here is another good article on the variations of the MVC pattern) - and formalize the user's interaction with the GUI in code. System testing then comes down to testing the Presenter classes and the logic of transitions between Views. There is a nuance, though: if you test Presenter classes in the context of system testing, you should replace as few dependencies as possible with mock objects, and then the problem arises of initializing the program and bringing it into the state required to start the test. The Scenario Driven Tests article mentioned above discusses this in more detail.
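Testing a Presenter in the Passive View style can be sketched like this (a Python sketch; the article's context is .NET, and all class and method names here are invented). The View is reduced to an interface, so the test drives the Presenter through a fake view.

```python
class LoginPresenter:
    """Holds the UI logic; the View is passive and only exposes state."""
    def __init__(self, view, auth):
        self._view, self._auth = view, auth

    def login_clicked(self):
        if self._auth.check(self._view.username, self._view.password):
            self._view.navigate_to("home")
        else:
            self._view.show_error("Invalid credentials")

class FakeView:
    """Stands in for the GUI; just records what the Presenter did."""
    username, password = "alice", "secret"
    def __init__(self):
        self.page, self.error = None, None
    def navigate_to(self, page):
        self.page = page
    def show_error(self, msg):
        self.error = msg

class FakeAuth:
    def check(self, user, pwd):
        return (user, pwd) == ("alice", "secret")

view = FakeView()
LoginPresenter(view, FakeAuth()).login_clicked()
# The test asserts on the transition between Views, not on pixels:
assert view.page == "home" and view.error is None
```

No real windowing toolkit is involved; the transition logic is exercised exactly as the paragraph describes.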

The second approach is to use special tools that record user actions. In the end the program itself is launched, but the buttons are clicked automatically. For .NET, an example of such a tool is the White library; WinForms, WPF and several other GUI platforms are supported. The rule is this: for each use case, a script is written that describes the user's actions. If all use cases are covered and the tests pass, the system can be handed over to the customer and the acceptance certificate signed.

No software development can do without testing the executable code. Testing takes up about half of the total development time and more than half of the project's cost, yet it is an integral part of creating new applications, programs, and systems.

Integration testing as part of a big job

One way to control software quality is integration testing, whose input is the set of individual modules already tested at the previous stage.

Unlike the modular variant, which finds errors localized in each individual function or class, integration testing searches for defects in the implementation of the interaction between separate parts of the product being created. Integration functional testing uses the white-box method: the quality engineer has access to, and knowledge of, the source text of each individual module as well as the principles of interaction between the modules.

Module assembly methods

The monolithic method means that all modules subject to integration testing are assembled together at the same time. Situations almost inevitably arise in which part of the complex under test is not yet ready.

In this case, it is replaced with additionally developed “stubs”, or drivers.
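A stub of this kind can be sketched as follows (an illustrative Python sketch; the module names are invented). The real pricing module is not ready yet, so a stub with canned answers lets integration of the ordering module proceed.

```python
class PricingStub:
    """Stands in for the not-yet-ready pricing module."""
    def price(self, item):
        # Canned data instead of the real pricing logic.
        return {"book": 10, "pen": 2}.get(item, 0)

class OrderModule:
    """The module actually under test; it calls into pricing."""
    def __init__(self, pricing):
        self._pricing = pricing
    def invoice(self, items):
        return sum(self._pricing.price(i) for i in items)

# Integration of OrderModule proceeds against the stub:
assert OrderModule(PricingStub()).invoice(["book", "pen", "pen"]) == 14
```

When the real pricing module arrives, it replaces the stub without any change to `OrderModule` or to the tests' structure.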

Along with the monolithic method there is the incremental method (also called step-by-step), in which the volume of tested code is increased gradually, making it possible to localize defects in the interfaces between individual parts.

The incremental method includes two ways to add modules:

  • top-down (descending);
  • bottom-up (ascending).

Features of monolithic and incremental testing

The main disadvantage of the monolithic type of assembly is the large amount of time and labor spent simulating the missing parts of the complex under test. Stubs may seem a handy enough testing tool, but situations arise in which the simulated parts of the program have to be re-created along the way, for example when the set of modules under test changes. In addition, defect-finding efficiency is lower when the work is done not with the real product but with a fictitious component. The same drawback also affects incremental testing with the bottom-up build method.

At the same time, one of the disadvantages of the step-by-step method is the need to organize and maintain an environment for executing the modules in a given sequence. It is also practically impossible to develop the upper and lower levels in parallel.

Of course, both assembly methods, monolithic and incremental, have advantages as well as disadvantages. The first offers great opportunities for parallel development of all the classes and functions involved in testing, both at the initial stage and after modification. The step-by-step method is less labor-intensive: modules are added gradually, and errors and defects are likewise discovered gradually, which is known to reduce the time spent searching for them.

Benefits of Integration Testing

At this stage a great deal of work is done to check the relationships between all the levels, without which further testing is impossible.

Software integration testing has a number of advantages:

  • checking the interaction interface between individual program modules;
  • control of relationships between the tested complex and third-party software solutions;
  • testing the operation of external components of the solution;
  • checking the implementation against the project documentation describing the interaction of individual modules.

Correction of defects

Integration testing is complete, but that's not all. Errors found are recorded and sent to the developer for correction, after which the process begins again.

First, it is necessary to check that the identified defects have been eliminated. Second, changes to the source code may have introduced new errors in the operation of the program and in its interaction with third-party software.

Although a large number of quality-control methods now exist, integration testing still plays an important role. An example of this type of verification can clearly reveal bottlenecks in software development and documentation.

Test automation

Depending on the volume of the initial data set and the subject area of the development, testing time and the overall labor intensity of the effort can become a problem.

The most effective verification of a development requires a huge amount of input data and conditions, which is impossible to handle manually. Test automation solves this problem. Like the other types, integration testing can be automated, reducing overall development time and increasing the efficiency of error detection.

However, testing automation cannot completely replace the work of a quality engineer, but only supplement it.

So, integration testing is an integral part of the development of any software and one of the stages of the entire process of checking the quality of the product. Like any method, it has a number of advantages and disadvantages, but without its use, high-quality software development becomes impossible.

Classes and types of tests.

There are two main classes of tests: traditional and non-traditional.

A test has composition, integrity and structure. It consists of:

  • assignments;
  • rules for their application;
  • grades for completing each task;
  • recommendations for interpreting test results.

Test integrity means the interrelation of the tasks and their belonging to a common measured factor. Each test task fulfills its assigned role, so none of them can be removed from the test without loss of measurement quality.

Test structure is the way the tasks are connected with each other. Basically this is the so-called factor structure, in which each task is related to the others through common content and common variation of test results.

A traditional test is a unity of at least three systems:

  • a meaningful system of knowledge described in the language of the academic discipline being tested;
  • a formal system of tasks of increasing difficulty;
  • statistical characteristics of tasks and test subjects’ results.

The traditional pedagogical test must be viewed in two significant ways: as a method of pedagogical measurement and as a result of test application.

A test is a system of tasks forming a methodological whole. The integrity of the test is the stable interaction of the tasks, which form the test as a developing system.

Homogeneous tests

Traditional tests are divided into homogeneous and heterogeneous.

A homogeneous test is a system of tasks of increasing difficulty, specific form and specific content, created as an objective, qualitative and efficient method of assessing the structure and measuring the level of students' preparedness in a single academic discipline.

Homogeneous tests are more common than others. In pedagogy they are created to control knowledge in one academic discipline or in one section of a large academic discipline such as physics. In a homogeneous pedagogical test, tasks that reveal other properties are not allowed: their presence violates the requirement of disciplinary purity of a pedagogical test. After all, every test measures something predetermined.



Heterogeneous tests

A heterogeneous test is a system of tasks of increasing difficulty, specific form and specific content, created as an objective, qualitative and efficient method of assessing the structure and measuring the level of students' preparedness in several academic disciplines.

Often such tests also include psychological tasks to assess the level of intellectual development.

Typically, heterogeneous tests are used for comprehensive assessment of school graduates, for personality assessment when applying for a job, and for selecting the best-prepared applicants for admission to universities. Since each heterogeneous test consists of homogeneous tests, results are interpreted from the answers to the tasks of each constituent test (here called scales); in addition, various methods of aggregating scores are used to attempt an overall assessment of the test subject's preparedness.

Interpretation of test results is carried out primarily in the language of testology, based on the arithmetic mean, the mode or the median, and on so-called percentile norms, which show what percentage of subjects scored worse than the subject taken for analysis. This interpretation is called normative-oriented.

Integrative tests

Integrative can be called a test consisting of systems of tasks that meet the requirements of integrative content, test form, increasing difficulty of tasks aimed at a generalized final diagnosis of the preparedness of a graduate of an educational institution.

Diagnostics is carried out by presenting such tasks, the correct answers to which require integrated (generalized, clearly interrelated) knowledge in the field of two or more academic disciplines. The creation of such tests is given only to those teachers who have knowledge of a number of academic disciplines, understand the important role of interdisciplinary connections in learning, and are able to create tasks, the correct answers to which require students to have knowledge of various disciplines and the ability to apply such knowledge.

Integrative testing must be preceded by the organization of integrative learning. Unfortunately, the current class-lesson form of instruction, combined with excessive fragmentation of academic disciplines and the tradition of teaching individual disciplines rather than generalized courses, will for a long time hinder the implementation of an integrative approach in learning and in monitoring preparedness.

The advantage of integrative tests over heterogeneous ones lies in the greater informative content of each task and in the smaller number of tasks themselves.

Adaptive tests

The feasibility of adaptive control arises from the need to rationalize traditional testing.

Every teacher understands that there is no point in giving a well-prepared student easy and very easy tasks, because the probability of a correct answer is too high. Besides, easy materials have no noticeable developmental potential. Symmetrically, there is no point in giving difficult tasks to a weak student, because of the high probability of a wrong answer. Difficult and very difficult tasks are known to reduce the learning motivation of many students.

The most important characteristic of the tasks in an adaptive test is their empirically obtained level of difficulty: before entering the task bank, each task undergoes empirical trials on a sufficiently large number of typical students from the population of interest. The phrase "population of interest" stands here for the more rigorous scientific concept of "general population".

Testing has the following advantages over other methods of pedagogical control:

· increased speed of checking the quality of the knowledge and skills acquired by students;

· complete, if somewhat superficial, coverage of all the educational material;

· reduced negative influence on the results of factors such as the mood, qualification level and other characteristics of a particular teacher, i.e. minimization of the subjective factor in evaluating answers;

· high objectivity and, as a result, a greater positive stimulating effect on the student's cognitive activity;

· orientation toward modern technical means and use within computer-based training and monitoring systems;

· the possibility of mathematical and statistical processing of control results and, as a result, increased objectivity of pedagogical control;

· implementation of the principle of individualization and differentiation of training through the use of adaptive tests;

· the possibility of increasing the frequency and regularity of control by reducing the time required to complete tasks and by automating the checking;

· easier integration of the country's education system into the European one.

Tests can be classified on the following grounds:

1. Subject area of application of tests: single-subject, multi-subject, integrative.
An integrative test consists of tasks whose correct answers require integrated (interrelated, generalized) knowledge of two or more academic disciplines. The use of such tests in school, both for monitoring and for teaching, is an excellent means of implementing interdisciplinary connections in education.

2. General orientation of the test design: normative-oriented or criterion-oriented (subject-oriented).
In the normative-oriented approach, tests are developed to compare subjects by level of educational achievement.
The main hallmark of subject-oriented testing is the interpretation of test performance in terms of its semantic content. The emphasis is on a strictly defined content area (what test takers know and can do), not on how they compare with others.

3. Didactic-psychological orientation of the test: an achievement test to control knowledge of theory; an achievement test to control abilities and skills of varying complexity in a given subject; a learning-ability test (diagnosis of real learning capabilities in a given range of subject or cycle knowledge - mathematical, linguistic, etc.).

4. Orientation to a specific stage of control: preliminary control tests, current control tests, final control tests.

5. Dominant activity of the subject when performing the tests: oral, written, computer-based.

6. Number of control objects: tests that have one object of control (for example, the number of operations performed at the proper level) or several (quality, quantity, speed, strict sequence, awareness of the same operations).

7. Degree of homogeneity of test items: tests with homogeneous or heterogeneous forms of constructing tasks.

8. Speed factor: speed tests (with mandatory recording of execution time) and non-speed tests.

9. Test organization form: mass, individual, group.

Separately, there are so-called adaptive tests, based on the principle of individualization of learning. Every teacher understands that there is no point in giving easy and very easy tasks to a good student, just as there is no point in giving difficult tasks to a weak one. In the theory of pedagogical measurement, a measure of task difficulty and a measure of the level of knowledge were found that are comparable on the same scale. With the advent of computers, this measure formed the basis of the method of adaptive knowledge control, in which the difficulty and number of tasks presented are regulated depending on the students' answers.
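The feedback loop just described can be sketched as a toy Python function (purely illustrative; real adaptive testing uses calibrated item banks and statistical models, not a unit step): the difficulty of the next task moves up after a correct answer and down after a wrong one, so each student converges to tasks near their own level.

```python
def run_adaptive(answer_is_correct, n_tasks=5, start=5, lo=1, hi=10):
    """answer_is_correct(difficulty) -> bool; returns difficulties presented."""
    difficulty, presented = start, []
    for _ in range(n_tasks):
        presented.append(difficulty)
        if answer_is_correct(difficulty):
            difficulty = min(hi, difficulty + 1)   # harder after a success
        else:
            difficulty = max(lo, difficulty - 1)   # easier after a failure
    return presented

# A student who can solve anything up to difficulty 6 oscillates around 6-7:
assert run_adaptive(lambda d: d <= 6) == [5, 6, 7, 6, 7]
```

The sequence of presented difficulties hovers around the student's actual level, which is exactly the rationalization of traditional testing the section describes.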

12 answers

Integration testing is when you test multiple components and how they work together: for example, how another system interacts with your system, or how a database interacts with the data abstraction layer. This usually requires a fully installed system, although in its purest forms it does not.

Functional testing is when you test a system against the functional requirements of the product. Product/project management typically records this data, and QA formalizes the process of what the user should see and experience, and what the end result of these processes is. Depending on the product, this may or may not be automated.

Functional Testing: yes, we test the product or software as a whole to see whether it works functionally or not (testing buttons, links, etc.).

For example: Login page

You provide a username and password and check whether it takes you to the home page or not.

Integration Testing: you test the integrated software as a whole, checking where the data flow happens and whether any changes happen in the database.

For example: Sending an email

You send a message to someone; there is a data flow, and also a change in the database (the count in the sent table increases by 1).
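The database side of that example can be sketched as an integration check (an illustrative Python sketch with an in-memory SQLite database; the `sent` schema and `send_email` function are invented, and the actual mail delivery is omitted).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sent (user TEXT PRIMARY KEY, count INTEGER)")
db.execute("INSERT INTO sent VALUES ('alice', 0)")

def send_email(db, user, recipient, body):
    # ... hand the message to the mail module (omitted in this sketch) ...
    db.execute("UPDATE sent SET count = count + 1 WHERE user = ?", (user,))

send_email(db, "alice", "bob@example.com", "hello")
# The integration test verifies the data flow reached the database:
assert db.execute("SELECT count FROM sent WHERE user='alice'").fetchone()[0] == 1
```

The assertion checks the side effect in the database rather than the screen, which is what distinguishes this from a purely functional check of the send button.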

Hope this helped you.

This is an important distinction, but unfortunately you will never find agreement. The problem is that most developers define them from their own point of view. This is very similar to the Pluto debate. (If it were closer to the Sun, would it be a planet?)

Unit testing is easy to define. It tests the CUT (Code Under Test) and nothing else (well, as little else as possible). That means mocks, fakes, and fixtures.

On the other end of the spectrum is what many people call system integration testing. This is testing as much as possible, but still looking for bugs in your own CUT.

But what about the vast space in between?

  • For example, what if you test a little more than the CUT? What if you included a real Fibonacci function instead of a fixture you injected? I would call that functional testing, but the world doesn't agree with me.
  • What if you included time() or rand()? Or what if you call http://google.com? I would call this system testing, but again, I'm alone.

Why does this matter? Because system tests are unreliable: they are necessary, but sometimes they fail for reasons beyond your control. Functional tests, on the other hand, should always pass, never randomly; if they also happen to be fast, they can be used from the very beginning for Test-Driven Development without writing too many tests against your internal implementation. In other words, I think unit tests can sometimes be more trouble than they're worth, and I'm in good company.

I place tests on 3 axes, with all of them at zero for unit testing:

  • Functional testing: Using real code deeper and deeper into your call stack.
  • Integration-testing: higher and higher your call stack; in other words, testing your CUT by running code that will use it.
  • System testing: more and more unmocked, environment-dependent operations (O/S scheduler, clock, network, etc.).

The test could easily be all 3 to varying degrees.

Functional Testing: a testing process in which each component of a module is tested. For example, if a web page contains a text field, radio buttons, checkboxes, buttons and dropdowns, they all need to be checked.

Integration testing: A process in which the flow of data between two modules is tested.

Integration testing. Integration testing is nothing but testing the various modules together: you check the relationship between modules. For example, you open Facebook and see the login page; after entering a login id and password you see the Facebook home page. The login page is one module and the home page is another. You check the connection between them: only when you are logged in should the home page open, not a message box or anything else. There are two main approaches to integration testing: top-down and bottom-up.

Functional testing. In functional testing you should think only about input and output; you have to think like a real user. Testing what you gave and what result you got is functional testing: you need to look only at the output. With functional testing you do not need to test the code of the application or software.

A functional test focuses on the functionality and supporting features of the application: whether the application's functionality works correctly or not.

In an integration test you check the dependency between modules or submodules. For example, records from one module must be passed on correctly and displayed in another module.

Integration Test: when unit testing is done and the problems with the individual components are resolved, all the required components are integrated into one system so that it can perform its operation. Checking whether the system then works correctly is called integration testing.

Functional Testing: testing is mainly divided into two categories: 1. Functional Testing - checking whether the software works according to the user's requirements; 2. Non-Functional Testing - checking whether the software meets quality criteria such as stress tests, security tests, etc.

Usually the client provides requirements only for functional testing; for non-functional testing the requirements may not be stated explicitly, but the application is still required to fulfill them.

I would say that both of them are closely related to each other and it is very difficult to differentiate between them. In my opinion, integration testing is a subset of functional testing.

Functional checking is based on the initial requirements you receive: you test that the application behaves as expected against the requirements.

Integration testing, in turn, is about the interaction between modules: if module A sends input, can module B process it or not?

Integration testing

Here you see how the different modules of the system work together. We refer mainly to the integrated functionality of the various modules rather than to the separate components of the system. For any system or software product to work efficiently, each component must be in sync with the others. In most cases, the tool used for integration testing is the same one used for unit testing. It is used in complex situations where unit testing is not sufficient to test the system.

Functional testing

It can be defined as testing the functionality of individual modules. It refers to testing a software product at the individual level to check its functionality. Test cases are designed to check the software for expected and unexpected results. This type of testing is done more from the user's perspective; that is, it takes into account what the user expects for a given kind of input. It is also called black-box or closed-box testing.

This lecture is the second of three covering the levels of the verification process. Its topic is the process of integration testing, its tasks and goals. The organizational aspects of integration testing are considered: the structural and temporal classifications of integration-testing methods and the planning of integration testing. The purpose of this lecture is to give an idea of the integration-testing process and its technical and organizational components.

Objectives and goals of integration testing

The result of testing and verifying the individual modules that make up a software system should be a conclusion that these modules are internally consistent and meet the requirements. However, individual modules rarely function on their own, so the next task after testing the individual modules is to test the correct interaction of several modules combined into a single whole. This type of testing is called integration testing. Its purpose is to verify the correct cooperation of the system's components.

Integration testing is also called system architecture testing. On the one hand, this name reflects the fact that integration tests include checks of all possible kinds of interaction between the software modules and elements defined in the system architecture; integration tests thus check the completeness of the interactions in the implementation under test. On the other hand, the results of integration tests are one of the main sources of information for improving and refining the system architecture and the intermodule and intercomponent interfaces. From this point of view, integration tests check the correctness of the interaction of the system's components.

An example of checking the correctness of interaction is a pair of modules, one of which accumulates protocol messages about received files, while the second displays this protocol on the screen. The functional requirements for the system state that messages must be displayed in reverse chronological order. However, the storage module stores messages in forward order, and the output module uses a stack to output them in reverse. Unit tests that touch each module separately will detect nothing here: the opposite situation, in which messages are stored in reverse order and output using a queue, is equally possible. The potential problem can be detected only by checking the interaction of the modules with integration tests. The key point is that the system as a whole must output messages in reverse chronological order; having checked the output module alone and found that it outputs messages in forward order, we cannot conclude that we have found a defect. (Sinitsyn S.V., Nalyutin N.Yu., Software Verification)
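The lecture's example can be sketched as an integration test (an illustrative Python sketch; the module names are invented). Each module is plausible on its own; only their combination is required to yield reverse chronological order.

```python
class Storage:
    """Accumulates protocol messages in forward (chronological) order."""
    def __init__(self):
        self.messages = []
    def add(self, msg):
        self.messages.append(msg)

class Display:
    """Outputs messages via a stack, i.e. in reverse of storage order."""
    def render(self, storage):
        stack = list(storage.messages)
        return [stack.pop() for _ in range(len(stack))]

# Integration test: the REQUIREMENT is reverse chronological output.
s = Storage()
for m in ["09:00 received a.txt", "09:05 received b.txt", "09:10 received c.txt"]:
    s.add(m)
assert Display().render(s) == ["09:10 received c.txt",
                               "09:05 received b.txt",
                               "09:00 received a.txt"]
```

Swapping either module for its mirror image (reverse-order storage plus a queue) would pass the same unit tests but only one combination satisfies this integration test, which is exactly the lecture's point.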

As a result of carrying out integration testing and eliminating all the defects found, a consistent and integral architecture of the software system is obtained; that is, integration testing can be viewed as testing the architecture and the low-level functional requirements.

Integration testing is typically an iterative process that tests the functionality of an increasingly larger set of modules.
