Integration testing using a real project as an example


Pedagogical test

A pedagogical test is defined as a system of tasks of specific content and form, arranged in order of increasing difficulty, which makes it possible to qualitatively and effectively measure the level of students' preparedness and to evaluate its structure. In a pedagogical test, tasks are arranged in order of increasing difficulty, from the easiest to the most difficult.

Integrative test

An integrative test is a test consisting of a system of tasks that meet the requirements of integrative content, test form, and increasing difficulty, aimed at a generalized final diagnosis of the preparedness of a graduate of an educational institution.

Diagnostics is carried out by presenting tasks whose correct answers require integrated (generalized, clearly interrelated) knowledge in the field of two or more academic disciplines. Creating such tests is possible only for teachers who know a number of academic disciplines, understand the important role of interdisciplinary connections in learning, and are able to create tasks whose correct answers require students to have knowledge of various disciplines and the ability to apply such knowledge. Integrative testing is preceded by the organization of integrative training. Unfortunately, the current class-lesson form of conducting classes, combined with excessive fragmentation of academic disciplines and the tradition of teaching individual disciplines (rather than generalized courses), will for a long time hinder the implementation of an integrative approach in learning and in the monitoring of preparedness.

The advantage of integrative tests over heterogeneous ones lies in the greater informative content of each task and in the smaller number of tasks themselves.

The methodology for creating integrative tests is similar to the methodology for creating traditional tests, with the exception of the work of determining the content of tasks. To select the content of integrative tests, the use of expert methods is mandatory.

Adaptive test

The adaptive test works like a good examiner. First, it "asks" a question of moderate difficulty, and the answer is immediately evaluated. If the answer is correct, the assessment of the test taker's capabilities is raised and a more difficult question is asked next; if the answer is wrong, the next question is chosen to be easier.
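As a rough illustration, here is a minimal sketch of that selection loop (the item pool, the 1-10 difficulty scale, and the helper names are invented for this example and are not taken from any real testing engine):

    // Minimal sketch of adaptive item selection (hypothetical item pool).
    // Each item has the form { question: ..., difficulty: 1..10 }.
    function runAdaptiveTest(items, askQuestion, maxQuestions) {
      let difficulty = 5   // start with a question of moderate difficulty
      let estimate = 0     // running assessment of the test taker's level

      for (let i = 0; i < maxQuestions; i++) {
        const item = pickClosestByDifficulty(items, difficulty)
        const correct = askQuestion(item)
        if (correct) {
          estimate += 1                               // raise the assessment
          difficulty = Math.min(10, difficulty + 1)   // ask a harder question
        } else {
          difficulty = Math.max(1, difficulty - 1)    // ask an easier question
        }
      }
      return estimate
    }

    function pickClosestByDifficulty(items, target) {
      return items.reduce((best, item) =>
        Math.abs(item.difficulty - target) < Math.abs(best.difficulty - target)
          ? item
          : best)
    }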

The main advantage of an adaptive test over a traditional one is efficiency. An adaptive test can determine the test taker's knowledge level with fewer questions (sometimes the test length is reduced by up to 60%).

In an adaptive test, on average, you have more time to think about each question than in a regular test. For example, instead of taking 2 minutes per question, an adaptive test taker might end up with 3 or 4 minutes (depending on how many questions they need to answer).

The reliability of adaptive test results coincides with the reliability of fixed-length tests: both types of tests assess the level of knowledge equally accurately.

However, it is widely believed that the adaptive test is a more accurate assessment of knowledge. This is not true.

Unit tests cover individual units, and E2E tests cover whole applications. But between these two extremes of testing lie other kinds of tests. I, like many others, call such tests integration tests.

A few words about terminology

Having talked a lot with test-driven development enthusiasts, I came to the conclusion that they use a different definition of the term "integration tests." From their point of view, an integration test tests the "external" code, that is, the code that interacts with the "outside world," the world outside the application.

So if their code uses Ajax, localStorage, or IndexedDB and therefore cannot be tested with unit tests, they wrap that functionality in an interface and mock that interface for the unit tests, and they call testing the actual implementation of the interface an "integration test." From this point of view, an "integration test" simply tests code that interacts with the "real world" outside of the units that operate without regard to that real world.

I, like many others, tend to use the term "integration tests" to refer to tests that test the integration of two or more units (modules, classes, etc.). It doesn't matter whether you hide the real world behind mocked interfaces.

My rule of thumb on whether to use real implementations of Ajax and other I/O (input/output) operations in integration tests is this: if you can do it and the tests still run quickly and don't behave strangely, then test with the real I/O. If the I/O is complex, slow, or just weird, then use mock objects in your integration tests.
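To make this rule concrete, here is a minimal sketch (the userApi interface and its mock are invented for illustration) of how Ajax can be hidden behind an interface so an integration test can substitute a mock when the real I/O is slow or flaky; the style matches the mocha/chai tests used later in this article:

    // Real implementation: performs actual Ajax, so it is slow/flaky in tests.
    const realUserApi = {
      fetchUserName: (id) =>
        fetch(`/api/users/${id}`)
          .then((res) => res.json())
          .then((user) => user.name),
    }

    // Mock implementation for integration tests: fast and deterministic.
    const mockUserApi = {
      fetchUserName: (id) => Promise.resolve(`user-${id}`),
    }

    // The code under test receives the interface instead of calling fetch itself.
    function makeGreeter(userApi) {
      return (id) => userApi.fetchUserName(id).then((name) => `Hello, ${name}!`)
    }

    it("greets the user by name", async function () {
      const greet = makeGreeter(mockUserApi)
      expect(await greet(42)).to.equal("Hello, user-42!")
    })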

In our calculator, fortunately, the only real I/O is the DOM. There are no Ajax calls or other reasons to write mocks.

Fake DOM

The question arises: should we fake the DOM in integration tests? Let's apply my rule: will using the real DOM make the tests slow? Unfortunately, yes: using the real DOM means using a real browser, which makes tests slow and unpredictable.

So do we separate most of the code from the DOM, or do we test everything together in E2E tests? Neither option is optimal. Luckily, there is a third solution: jsdom. This wonderful and amazing package does exactly what you expect from it: it implements the DOM in NodeJS.

It works, it's fast, and it runs in Node. If you use this tool, you can stop treating the DOM as "I/O." And this is very important, because separating the DOM from front-end code is difficult, if not impossible. (I, for one, don't know how to do it.) I'm guessing that jsdom was written specifically for running frontend tests under Node.

Let's see how it works. As usual, there is initialization code and there is test code, but this time we will start with the test code. But before that, a digression.

A digression

This part is the only one in the series that focuses on a specific framework. And the framework I chose is React. Not because it's the best framework; I firmly believe there is no such thing. I don't even think there are better frameworks for specific use cases. The only thing I believe is that people should use the framework they are most comfortable working in.

And the framework I'm most comfortable working with is React, so the following code is written in it. But as we will see, frontend integration tests using jsdom should work in all modern frameworks.

Let's go back to using jsdom.

Using jsdom

    const React = require("react")
    const e = React.createElement
    const ReactDom = require("react-dom")
    const CalculatorApp = require("../../lib/calculator-app")

    describe("calculator app component", function () {
      ...

      it("should work", function () {
        ReactDom.render(e(CalculatorApp), document.getElementById("container"))

        const displayElement = document.querySelector(".display")

        expect(displayElement.textContent).to.equal("0")

The interesting lines are 10 through 14. On line 10 we render the CalculatorApp component, which (if you've been following the code in the repository) also renders the Display and Keypad components.

Then, on lines 12 and 14, we check that the DOM element for the calculator's display shows the initial value of 0.

And this code, which runs under Node, uses document! The global variable document belongs to the browser, yet here it is in NodeJS. A very large amount of code is required to make these lines work, and that very large amount of code, which lives in jsdom, is essentially a complete implementation of everything in the browser, minus the rendering itself!

Line 10, which calls ReactDom to render the component, also uses document (and window), since ReactDom uses them frequently in its code.

So who creates these global variables? The test does. Let's look at the code:

    before(function () {
      global.document = jsdom(`
        <div id="container"></div>`)
      global.window = document.defaultView
    })

    after(function () {
      delete global.window
      delete global.document
    })

On line 3 we create a simple document that just contains a div.

On line 4 we create the global window object. React needs it.

The cleanup function removes these global variables so that they do not take up memory.

Ideally, the document and window variables should not be global: otherwise we cannot run these tests in parallel with other integration tests, because they would all overwrite the global variables.

Unfortunately, they have to be global: React and ReactDom need document and window to be exactly that, since there is no way to pass them in.

Event Handling

What about the rest of the test? Let's take a look:

    ReactDom.render(e(CalculatorApp), document.getElementById("container"))

    const displayElement = document.querySelector(".display")

    expect(displayElement.textContent).to.equal("0")

    const digit4Element = document.querySelector(".digit-4")
    const digit2Element = document.querySelector(".digit-2")
    const operatorMultiply = document.querySelector(".operator-multiply")
    const operatorEquals = document.querySelector(".operator-equals")

    digit4Element.click()
    digit2Element.click()
    operatorMultiply.click()
    digit2Element.click()
    operatorEquals.click()

    expect(displayElement.textContent).to.equal("84")

The rest of the test tests a scenario where the user presses "42 * 2 =" and should get "84".

And it does so in a beautiful way: it gets the elements using the well-known querySelector function and then uses click to press them. You could even create an event and trigger it manually using something like:

    var ev = new Event("keyup", ...)
    document.dispatchEvent(ev)

But the built-in click method works, so we use it.

So simple!

The astute reader will notice that this test checks exactly the same thing as the E2E test. That is true, but note that this test is about 10 times faster and is synchronous in nature. It's much easier to write and much easier to read.

Why do we need an integration test if the two tests are the same? Well, only because this is an educational project, not a real one. Two components make up the entire application, so integration and E2E tests do the same thing. But in a real application, an E2E test exercises hundreds of modules, while an integration test covers a few, maybe up to 10. So a real application will have about 10 E2E tests but hundreds of integration tests.

Annotation: This lecture is the second of three covering the levels of the verification process. Its topic is the process of integration testing, its tasks and goals. Organizational aspects of integration testing are considered: the structural and temporal classifications of integration testing methods, and the planning of integration testing. The purpose of this lecture is to give an idea of the process of integration testing and its technical and organizational components.

20.1. Objectives and goals of integration testing

The result of testing and verification of the individual modules that make up the software system should be a conclusion that these modules are internally consistent and comply with the requirements. However, individual modules rarely function on their own, so the next task after testing individual modules is testing the correct interaction of several modules combined into a single whole. This type of testing is called integration. Its purpose is to ensure that the system components work together correctly.

Integration testing is also called system architecture testing. On the one hand, this name reflects the fact that integration tests include checks of all possible types of interactions between software modules and elements defined in the system architecture; thus, integration tests check the completeness of interactions in the system implementation being tested. On the other hand, the results of integration tests are one of the main sources of information for improving and clarifying the system architecture and the intermodule and intercomponent interfaces. From this point of view, integration tests check the correctness of the interaction of system components.

An example of checking the correctness of interaction is two modules, one of which accumulates protocol messages about received files, while the second displays this protocol on the screen. The functional requirements for the system state that messages must be displayed in reverse chronological order. However, the storage module stores messages in forward order, and the output module uses a stack to output them in reverse order. Unit tests that touch each module individually will find nothing here: the opposite situation is quite possible, in which messages are stored in reverse order and output using a queue. A potential problem can only be detected by checking the interaction of the modules with integration tests. The key point is that the system as a whole must output messages in reverse chronological order; so if we check the output module alone and find that it outputs messages in forward order, we cannot conclude that we have found a defect.
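A minimal sketch of this example (the module and function names are invented): each module would pass its own unit tests, and only the integration test pins down the end-to-end ordering requirement:

    // Storage module: accumulates protocol messages in forward (arrival) order.
    function createLog() {
      const messages = []
      return {
        add: (msg) => messages.push(msg),
        all: () => messages.slice(),
      }
    }

    // Output module: pops messages off a stack, emitting them in reverse order.
    function render(log) {
      const stack = log.all()
      const lines = []
      while (stack.length > 0) {
        lines.push(stack.pop())
      }
      return lines
    }

    // Integration test: only the combination of the two modules is required
    // to produce reverse chronological order.
    it("displays messages in reverse chronological order", function () {
      const log = createLog()
      log.add("first.txt received")
      log.add("second.txt received")
      expect(render(log)).to.deep.equal([
        "second.txt received",
        "first.txt received",
      ])
    })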

As a result of carrying out integration testing and eliminating all identified defects, a consistent and holistic architecture of the software system is obtained, i.e. we can consider integration testing to be testing of the architecture and of low-level functional requirements.

Integration testing is, as a rule, an iterative process in which the functionality of an ever-growing set of modules is tested.

20.2. Organization of integration testing

20.2.1. Structural classification of integration testing methods

As a rule, integration testing is carried out after completion of unit testing for all integrated modules. However, this is not always the case. There are several methods for conducting integration testing:

  • bottom-up testing;
  • monolithic testing;
  • top-down testing.

All of these techniques rely on knowledge of the system's architecture, which is often depicted as structure diagrams or function call diagrams. Each node in such a diagram represents a software module, and the arrows between them represent call dependencies between modules. The main difference between integration testing techniques is the direction of movement along these diagrams and the breadth of coverage per iteration.

Bottom-up testing. This method implies that all software modules included in the system are first tested individually and only then combined for integration testing. With this approach, error localization is greatly simplified: if the modules have been tested separately, then an error in their joint operation is most likely a problem in their interface. The tester's search area is thus quite narrow, and the probability of correctly identifying the defect is much higher.


Fig. 20.1.

However, the bottom-up testing method has a significant drawback: the need to develop drivers and stubs for unit testing before integration testing begins, and the need to develop drivers and stubs during integration testing of part of the system's modules (Fig. 20.1).

On the one hand, drivers and stubs are a powerful testing tool; on the other hand, developing them requires significant resources, especially when the composition of the integrated modules changes. In other words, one set of drivers may be required for unit testing of each module, a separate driver and stubs for testing the integration of two modules from the set, yet another for testing the integration of three modules, and so on. This is primarily because integrating modules eliminates the need for some stubs and also requires changing the driver to support new tests that involve multiple modules.
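For illustration, here is a minimal sketch of a driver and a stub (the currency-conversion modules are invented) as they might look in a bottom-up integration step, written in the same JavaScript test style as the rest of this page:

    // Stub: stands in for a module that is not part of this integration step.
    // It returns a fixed exchange rate instead of querying a real rate service.
    const rateServiceStub = {
      getRate: (from, to) => 1.25,
    }

    // The two real modules being integrated in this step.
    function convert(amount, from, to, rateService) {
      return amount * rateService.getRate(from, to)
    }
    function formatPrice(amount) {
      return amount.toFixed(2) + " USD"
    }

    // Driver: test-only code that exercises the integrated pair of modules
    // and checks the combined result.
    it("converts and formats a price", function () {
      const converted = convert(100, "EUR", "USD", rateServiceStub)
      expect(formatPrice(converted)).to.equal("125.00 USD")
    })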

Monolithic testing assumes that the individual components of the system have not undergone serious testing. The main advantage of this method is that there is no need to develop a test environment, drivers, or stubs. After all modules have been developed, they are integrated and the system is tested as a whole. This approach should not be confused with system testing, which is the subject of the next lecture. Although monolithic testing checks the operation of the entire system, its main task is to identify problems in the interaction of individual system modules. The task of system testing, by contrast, is to evaluate the qualitative and quantitative characteristics of the system in terms of their acceptability to the end user.

Monolithic testing has a number of serious disadvantages.

  • It is very difficult to identify the source of an error (to localize the erroneous piece of code). An error has to be suspected in most of the modules, and the problem comes down to determining which of the errors in all the modules involved led to the observed result. Errors may also mask one another. In addition, an error in one module may block the testing of another.
  • It is difficult to organize the fixing of errors. As a result of testing, the tester records the problem found; the defect in the system that caused it will be fixed by a developer. But since the modules under test are usually written by different people, a question arises: who is responsible for finding and eliminating the defect? With such "collective irresponsibility," the speed of eliminating defects can drop sharply.
  • The testing process is poorly automated. The apparent advantage (no additional software accompanying the testing process) turns into a disadvantage: every change made requires all the tests to be repeated.

Top-down testing assumes that the integration testing process follows the development. First, only the topmost control level of the system is tested, without the lower-level modules. Then, gradually, the lower-level modules are integrated with the higher-level ones. With this method there is no need for drivers (the role of the driver is played by a higher-level module of the system), but the need for stubs remains (Fig. 20.2).
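A minimal sketch of the top-down direction (the report modules are invented): the real top-level module needs no separate driver, but the lower level it calls is still a stub:

    // Real top-level module under test: it orchestrates the report.
    function buildReport(loadData) {
      const rows = loadData()
      return `Report: ${rows.length} rows`
    }

    // Stub for the lower-level data module that has not been integrated yet.
    const loadDataStub = () => [{ id: 1 }, { id: 2 }]

    it("the top level works before the data layer exists", function () {
      expect(buildReport(loadDataStub)).to.equal("Report: 2 rows")
    })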

Different testing experts hold different opinions about which method is more convenient for the real testing of software systems. Jordan argues that top-down testing is the most appropriate in real-life situations, while Myers believes that each approach has its own advantages and disadvantages, but that overall the bottom-up method is better.

The literature often mentions a method of integration testing of object-oriented software systems based on identifying clusters of classes that together provide some closed and complete functionality. At its core, this approach is not a new type of integration testing; only the minimum element resulting from integration changes. When integrating modules in procedural programming languages, you can integrate any number of modules, provided that the necessary stubs are developed. When integrating classes into clusters, there is a rather loose restriction requiring completeness of the cluster's functionality. However, even in the case of object-oriented systems, it is possible to integrate any number of classes using stub classes.

Regardless of the integration testing method used, it is necessary to take into account the degree to which integration tests cover the system's functionality. One published work proposed a method for assessing the degree of coverage based on control calls between functions and on data flows. Under this assessment, the code of all modules in the system structure diagram must be executed (all nodes must be covered), all calls must be executed at least once (all connections between nodes in the structure diagram must be covered), and all call sequences must be executed at least once (all paths in the structure diagram must be covered).
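To illustrate the three levels, here is a hypothetical sketch: the structure diagram is encoded as an adjacency list, and the observed calls would come from instrumenting the integration tests (all names are invented):

    // Structure diagram: each module maps to the modules it calls.
    const diagram = {
      main: ["parse", "report"],
      parse: ["readFile"],
      report: [],
      readFile: [],
    }

    // Calls actually observed while the integration tests were running.
    const observedCalls = [["main", "parse"], ["parse", "readFile"]]

    const allEdges = Object.entries(diagram)
      .flatMap(([from, tos]) => tos.map((to) => `${from}->${to}`))
    const coveredEdges = new Set(observedCalls.map(([f, t]) => `${f}->${t}`))
    const coveredNodes = new Set(observedCalls.flat())

    console.log(`node coverage: ${coveredNodes.size}/${Object.keys(diagram).length}`)
    console.log(`call coverage: ${coveredEdges.size}/${allEdges.length}`)
    // Path coverage would additionally require every call *sequence* in the
    // diagram (main->parse->readFile and main->report) to occur at least once.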

12 answers

Integration testing is when you test multiple components together and how they work with each other: for example, how another system interacts with your system, or how the database interacts with your data abstraction layer. This usually requires a fully installed system, although in its purest forms it does not.

Functional testing is when you test the system against the functional requirements of the product. Product/project management typically writes these down, and QA formalizes the process of what a user should see and experience, and what the end result of those processes is. Depending on the product, this may or may not be automated.

Functional Testing: yes, we test the product or software as a whole, functionally, checking whether it works or not (testing buttons, links, etc.).

For example: Login page

You provide a username and password, and you check whether it takes you to the home page or not.
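As a minimal sketch (the selectors are invented; the setup assumes a DOM environment such as the jsdom one shown earlier on this page), that check might look like:

    it("takes the user to the home page after login", function () {
      document.querySelector(".username").value = "alice"
      document.querySelector(".password").value = "secret"
      document.querySelector(".login-button").click()
      // After a successful login we expect the home page to be shown.
      expect(document.querySelector(".home-page")).to.not.equal(null)
    })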

Integration Testing: yes, you test only the integrated software, but you check where the data flow happens and whether any changes happen in the database.

For example: Sending an email

You send a message to someone; there is a data flow, and also a change in the database (the "sent" table increases its count by 1).
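A minimal sketch of that check (sendEmail and db.countSent are invented helpers; the assumption is a test database the test is allowed to query):

    it("sending a message adds a row to the sent table", async function () {
      const before = await db.countSent()   // e.g. SELECT COUNT(*) FROM sent
      await sendEmail("bob@example.com", "hello")
      const after = await db.countSent()
      // The data flow crossed the module boundary: the sent table grew by 1.
      expect(after).to.equal(before + 1)
    })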

Hope this helped you.

This is an important distinction, but unfortunately you will never find agreement. The problem is that most developers define these terms from their own point of view. It is very similar to the Pluto debate. (If it were closer to the Sun, would it be a planet?)

Unit testing is easy to define. It tests the CUT (Code Under Test) and nothing else. (Well, as little else as possible.) Hence all the mocks, fakes, and fixtures.

At the other end of the spectrum is what many people call system integration testing. That means testing as much as possible, while still looking for bugs in your own CUT.

But what about the vast space in between?

  • For example, what if you test a little more than the CUT? What if you include a real Fibonacci function instead of the fixture you injected? I would call that functional testing, but the world doesn't agree with me (see the sketch after this list).
  • What if you include time() or rand()? Or what if you call http://google.com? I would call that system testing, but again, I'm in the minority.
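Here is a minimal sketch of the Fibonacci example from the first bullet (the function names are invented): the same code under test, once with an injected fixture and once with the real collaborator:

    // Code under test: sums fib(0..n) using whatever fib implementation it gets.
    function sumOfFib(n, fibImpl) {
      let total = 0
      for (let i = 0; i <= n; i++) total += fibImpl(i)
      return total
    }

    // Real collaborator, included only in the "functional" variant of the test.
    function fib(n) {
      return n < 2 ? n : fib(n - 1) + fib(n - 2)
    }

    // Unit test: the collaborator is a fixture, so nothing but the CUT runs.
    it("sums Fibonacci numbers (unit)", function () {
      const fibFixture = (i) => [0, 1, 1, 2, 3, 5][i]
      expect(sumOfFib(5, fibFixture)).to.equal(12)   // 0+1+1+2+3+5
    })

    // Functional test (in this answer's terminology): the real fib() runs too.
    it("sums Fibonacci numbers (functional)", function () {
      expect(sumOfFib(5, fib)).to.equal(12)
    })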

Why does this matter? Because system tests are unreliable. They are necessary, but sometimes they fail for reasons beyond your control. Functional tests, on the other hand, should always pass and never fail at random; if they also happen to be fast, they can be used from the very beginning for test-driven development, without writing too many tests against your internal implementation. In other words, I think unit tests can be more trouble than they're worth, and I'm in good company.

I place tests on 3 axes, with all of them at zero for unit testing:

  • Functional testing: using real code deeper and deeper down your call stack.
  • Integration testing: higher and higher up your call stack; in other words, testing your CUT by running the code that will use it.
  • System testing: more and more unmocked operations (OS scheduler, clock, network, etc.).

A test can easily be all 3 to varying degrees.

Functional Testing: a testing process in which each component of a module is tested. For example: if a web page contains a text field, you also need to check the radio buttons, checkboxes, buttons, dropdowns, etc.

Integration testing: A process in which the flow of data between two modules is tested.

Integration testing. Integration testing is nothing but testing the connections between different modules. You should check the relationships between modules. For example, you open Facebook and see the login page; after entering a login id and password you see the Facebook home page. The login page is one module and the home page is another module. You should check the connection between them: only when you are logged in should the home page open, and not the message box or anything else. There are two main types of integration testing: the TOP-DOWN approach and the BOTTOM-UP approach.

Functional testing. In functional testing, you should only think about input and output. You have to think like a real user: testing what you gave and what result you got is functional testing. You only need to look at the output. With functional testing, you do not need to test the internal code of the application or software.

Functional testing focuses on the functionality and supporting features of the application: whether the application's functionality works correctly or not.

In integration testing, you need to check the dependencies between modules or submodules. For example, records from one module must be correctly rendered and displayed in another module.

Integration Test: when unit testing is done and the problems with the related components are resolved, all the required components are integrated into one system so that it can perform its operations. Checking whether the system works correctly after its components have been combined is called integration testing.

Functional Testing: testing is mainly divided into two categories: 1. Functional testing. 2. Non-functional testing. Functional testing checks whether the software works according to the user's requirements or not. Non-functional testing checks whether the software meets quality criteria such as stress tests, security tests, etc.

Usually the client provides requirements only for functional testing; non-functional requirements may not be stated explicitly, but the application is still expected to meet them.

I would say that both of them are closely related to each other and it is very difficult to differentiate between them. In my opinion, integration testing is a subset of functional testing.

The functional check is based on the initial requirements you receive: you test whether the application behaves as expected against those requirements.

Integration testing, by contrast, is about the interaction between modules: if module A sends input, can module B process it?

Integration testing

Here you see how the different modules of the system work together. We are mainly concerned with the integrated functionality of the various modules rather than with individual components of the system. For any system or software product to work efficiently, every component must be in sync with the others. In most cases, the tool we use for integration testing will be the same one we used for unit testing. It is used in complex situations when unit testing is not sufficient to test the system.

Functional testing

It can be defined as testing the functionality of individual modules. It refers to testing a software product at an individual level to check its functionality. Test cases are designed to check the software for expected and unexpected results. This type of testing is done more from the user's perspective: that is, it takes into account the user's expectations for a given type of input. It is also called black-box testing, as well as closed-box testing.
