Continuous Integration With Jenkins

Quality and stability are important characteristics of software, and as development progresses, more and more attention must be paid to these two factors. This article is about quality assurance through continuous integration with Jenkins.

Besides providing the required functionality, software must also run stably and reliably. Over time, not only the functionality of a program grows but also the number of possible error sources, with every line of code that is added. Finding and correcting as many errors as possible is an important aspect of high-quality software, and testing for errors can only be done manually to a limited extent.

Module tests

Software applications are usually divided into individual modules, i.e. self-contained units. To test the functionality of an application, module tests are therefore written, which are then executed automatically at regular intervals. Module tests are often called unit tests. They can be written against different interfaces; how they are implemented depends on the software. Usually, there is a group of module tests that checks the direct behavior of methods: a single test verifies that a method returns exactly what is expected, using some exemplary data.
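A minimal sketch of such a test could look as follows. It assumes JUnit 4 and uses a hypothetical Calculator class that exists purely for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class under test; it exists only for this illustration.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {

    @Test
    public void addReturnsExactlyTheExpectedSum() {
        Calculator calculator = new Calculator();
        // The method is called with some exemplary data and the result
        // is compared against the expected value.
        assertEquals(7, calculator.add(3, 4));
        assertEquals(0, calculator.add(-2, 2));
    }
}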

As an example of a module test, the following application scenario can be imagined: the object of a feed generator is filled with data, for example with articles from a blog. When a method is called, the data is written to an RSS feed as an XML file. Several cases must be checked here: on the one hand, whether all entered data has been written to the XML file as expected; on the other, whether the resulting XML file is well-formed and valid, i.e. whether it complies with the rules of the format. These checks are performed by a test framework.
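The following sketch shows roughly how such a test could look. It again assumes JUnit 4 and uses a heavily simplified, hypothetical FeedGenerator class so that the example remains self-contained; parsing the output with the standard Java XML parser checks well-formedness, whereas full validation against the RSS rules would additionally require a schema-aware parser:

import static org.junit.Assert.assertEquals;

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;
import org.w3c.dom.Document;

// Hypothetical, heavily simplified feed generator: it collects article
// titles and writes them into a minimal RSS-like XML structure.
class FeedGenerator {
    private final StringBuilder items = new StringBuilder();

    void addArticle(String title) {
        items.append("<item><title>").append(title).append("</title></item>");
    }

    String toXml() {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<rss version=\"2.0\"><channel>" + items + "</channel></rss>";
    }
}

public class FeedGeneratorTest {

    @Test
    public void feedContainsAllEntriesAndIsWellFormed() throws Exception {
        FeedGenerator generator = new FeedGenerator();
        generator.addArticle("First blog article");
        generator.addArticle("Second blog article");

        String xml = generator.toXml();

        // Parsing the output checks that the XML is well-formed; the
        // parser throws an exception (and the test fails) otherwise.
        Document document = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // Check that all entered data ended up in the feed as expected.
        assertEquals(2, document.getElementsByTagName("item").getLength());
        assertEquals("First blog article",
                document.getElementsByTagName("title").item(0).getTextContent());
    }
}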

The previous example tested the direct functionality of a method, but this is not the only interface that can be used. Depending on the application, there are other interfaces whose testing makes sense. For example, if you start from a GUI application, it is not necessarily sufficient to test only the methods of a class. The graphical user interface should also be tested. In the case of an editor, you could test whether the “Undo” function really does what is expected. When programming web pages, you can test if the integrated search function works properly. The search index can be fed with test data during the test, which is then searched for afterward. If something fails there, it will be noticed early.
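The search scenario can be sketched in the same way. The SearchIndex class below is a hypothetical in-memory stand-in for a real search component, again assuming JUnit 4:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// Hypothetical in-memory search index standing in for the search
// function of a web page; a real project would test its actual
// search component in the same way.
class SearchIndex {
    private final List<String> documents = new ArrayList<>();

    void add(String document) {
        documents.add(document);
    }

    boolean contains(String term) {
        return documents.stream()
                .anyMatch(d -> d.toLowerCase().contains(term.toLowerCase()));
    }
}

public class SearchIndexTest {

    @Test
    public void searchFindsPreviouslyIndexedTestData() {
        SearchIndex index = new SearchIndex();
        // Feed the index with test data during the test ...
        index.add("Continuous integration with Jenkins");
        index.add("Writing module tests with JUnit");

        // ... and then search for it afterward.
        assertTrue(index.contains("jenkins"));
        assertFalse(index.contains("nonexistent term"));
    }
}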

For module tests, it is important that they are executed and evaluated automatically. This is also related to the concept of continuous integration.

The concept of continuous integration consists of two principles, namely integration and its continuity. Integration means merging new or modified program code into the original project. Continuity means that this integration happens at many short intervals; the frequency varies from project to project, but changes are usually integrated at least once a day. It is important that many small changes can be adopted. The advantage is the early detection of errors in the program code, especially when module tests are used.

The current state of development is tested using software for continuous integration. This includes, first of all, compiling the project and then performing the written module tests.

Jenkins

To develop a software project with the help of continuous integration, many software products support the developer. One of them is Jenkins, which is discussed in this article. Jenkins was developed under the umbrella of Sun Microsystems by Kohsuke Kawaguchi and was called “Hudson” at that time. Kawaguchi left the company after Sun was acquired by Oracle. Oracle claimed the rights to the name “Hudson” and continued to develop Hudson itself; since Kawaguchi carried on development under the name “Jenkins”, Jenkins is therefore a fork. Jenkins is written in Java and runs platform-independently as a web application on a server. The source code is released under the MIT license.

In Jenkins, jobs can be defined, which are to be understood as recurring workflows. Usually, a job includes several steps that are executed automatically. Within the configuration of a Jenkins job, different workflows can be defined, which under Linux usually consist of shell scripts. Jenkins can therefore also be understood as a front end for such scripts. The advantage of Jenkins is that jobs can be created easily and intuitively, while still offering developers many configuration options.

In general, Jenkins jobs are executed in two different ways. The first is the concept of daily builds, which are run once a day. The second is the already mentioned continuous integration of source code: here, a Jenkins job is started whenever changes have been made to the repository. With daily builds, on the other hand, all changes made on a given day are merged and built together.

A Jenkins job represents a certain workflow, which can be divided into four sub-items. The first step is the triggering of the job. This can be time-controlled, event-controlled, or triggered by a change in the source code. For time-controlled triggering, a specific time can be defined; for example, if 6 pm is specified, Jenkins automatically starts the job at that time. A job can likewise be triggered in an event-controlled way: it can be configured so that a certain Jenkins job is only executed if a predecessor project has run successfully. The third possibility is to trigger the job after a change in the source code: Jenkins scans the source code archive (repository) at regular intervals and triggers the job when a change is registered. The last trigger is the simplest: manual execution.

The second step is a very short one: the Jenkins job downloads the current source code from the repository and then moves on to the third step, the build process.

The build process can be designed very individually, since it is specified in shell or Windows batch scripts, so users of Jenkins have a wide range of possibilities. In general, the build process compiles the project and runs the tests. If no serious errors, such as program crashes, occur in the defined scripts, Jenkins moves on to the fourth step. If something does go wrong, however, the build process aborts completely and the build failure is reported to the developers.

The post-build process is the fourth and final step that Jenkins performs. It consists of actions that are all executed after the build process, and several of them can be defined. For example, it makes sense to package the project; depending on the target system, a DEB or RPM package can be built right away. In addition, the post-build process evaluates the executed tests. The module tests write their results to log files, whose format depends on the test framework used; XML files are common and are evaluated by the Jenkins job at the end. Jenkins supports the Java test framework JUnit by default. During the evaluation, the log files are read and interpreted according to a defined schema, and the number of failed module tests is counted and output as the result of the Jenkins job.
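To illustrate the principle of this evaluation (this is only a sketch, not Jenkins' actual implementation), the following example reads a JUnit-style XML report and counts the failed test cases; the file path is merely an assumed example of where such a report might be located:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Reads a JUnit-style XML report and counts failed test cases,
// roughly mirroring what the post-build evaluation does.
public class ResultCounter {

    public static void main(String[] args) throws Exception {
        // Assumed location of a report from a Maven Surefire run.
        File report = new File("target/surefire-reports/TEST-CalculatorTest.xml");
        Document document = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(report);

        NodeList testCases = document.getElementsByTagName("testcase");
        int failed = 0;
        for (int i = 0; i < testCases.getLength(); i++) {
            Element testCase = (Element) testCases.item(i);
            // A test case counts as failed if it contains a <failure>
            // or <error> child element.
            if (testCase.getElementsByTagName("failure").getLength() > 0
                    || testCase.getElementsByTagName("error").getLength() > 0) {
                failed++;
            }
        }
        System.out.println(failed + " of " + testCases.getLength() + " module tests failed.");
    }
}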

Jenkins then uses the test results to display the status of the current build in the form of colors and a so-called “weather report”. The status is “red” when too many errors have occurred, “yellow” when a small number of errors have occurred, and “blue” when none of the module tests have failed. Often the “traffic light colors” are used instead, so that a successful build is marked with green rather than blue. What counts as a “small number of errors” can be configured. If a Jenkins job has been run several times, i.e. the source code has been changed repeatedly, a weather report is generated from the results. It shows the trend of the last five runs: bright sunshine if all builds were successful, thunderstorms if all of the last builds failed, and intermediate states such as clouds if only a few of the last jobs failed. The thresholds from which the state of a Jenkins job is derived can also be configured here, so that development teams have full control over their Jenkins jobs.

Finally, at the end of the post-build process, and thus of the entire Jenkins job, the developers must be informed. They can either use the built-in e-mail notification or draw on the large pool of plug-ins; for example, plug-ins can notify developers via the XMPP protocol or an IRC bot. If another Jenkins job should run directly afterward, a trigger can be set that starts it.

Plug-ins

In the standard version, Jenkins already offers a wide range of functions that make continuous integration easier for developers, and the numerous available extensions can increase this functional range significantly. While Jenkins mainly ships with tools that are interesting for Java developers, such as the evaluation of JUnit tests, there are also plug-ins for developers of other programming languages. This gives C++ developers, for example, the opportunity to get to know the advantages of Jenkins as well: Jenkins supports the Boost Test Framework, which belongs to the C++ Boost Libraries, and tools like cppcheck can be run to perform C++ code analysis. Also interesting is the possibility of generating documentation, so that up-to-date documentation is available every day. A long plug-in list can be found in the Jenkins Wiki.

Public Jenkins servers

Since Jenkins is a web application, some projects operate publicly viewable Jenkins servers. Such public servers are available from at least two major open-source projects, Ubuntu and KDE. For interested readers, a look at KDE's Jenkins job “akonadi_master” is worthwhile: there you can follow the build history of the last weeks and months, as well as further graphs that show the test results and outline the warnings of the GNU compiler.

Conclusion

For developers, continuous integration with Jenkins brings several advantages. The source code is regularly integrated into the project at short intervals, and both major and minor errors are noticed quickly, provided that enough good test cases are written, in both quality and quantity. Jenkins makes it easier for developers to keep track of possible errors and supports quality assurance with a wide range of functions.

Besides Jenkins, there is of course other software for continuous integration that can be used instead. Alternatives include Travis CI for the open-source community, Apache Gump, and BuildBot.

Interested in deploying DevOps application management services in your company for better project deployment? Heyooo will be happy to support you with the implementation!
