This blog post is an attempt to showcase what goes into setting up a minimal CI (Continuous Integration) stage of a delivery pipeline.
A lot of content is available on the web explaining the benefits that a DevOps culture brings to the table, but when it comes to implementing and adopting these practices, the first challenge people usually face is figuring out what exactly needs to be done on the ground to set up the tooling and processes that enable DevOps.
As we all know, CI/CD (Continuous Integration & Continuous Deployment) is a key principle of successful DevOps, with automation playing a starring role. Whether the goal is delivering new features to internal customers faster or bringing new products to market before the competition, setting up a CI/CD pipeline helps speed up time to delivery and generates greater value for the business.
Typical Stages of a Delivery Pipeline
Source control, also known as a Version Control System, is used to keep track of changes in the codebase that developers work on. It helps with collaboration, sharing, and versioning of code files. Examples of popular source control tools and platforms: GitHub, GitLab, Bitbucket, etc.
The process of building the code typically involves compiling it, creating a binary distribution, and packaging the software for distribution. Different programming languages require specific build tools. In a delivery pipeline, build tools are usually automated to build and package the source code. Examples of build tools: Maven, Gradle, Ant, Make, Rake, etc.
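With Maven (the build tool used in the demo), a minimal project descriptor is enough to compile and package the code; the coordinates below are placeholders:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>      <!-- placeholder coordinates -->
  <artifactId>demo-app</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>jar</packaging>
</project>
```

Running `mvn clean package` against this descriptor compiles the sources, runs the tests, and produces a versioned jar under `target/`.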
Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. In essence, source control checkout, build, and testing are automated by a CI server. Examples of popular CI servers: Jenkins, TeamCity, CircleCI, Bamboo, etc.
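On Jenkins (the CI server used in the demo), such a pipeline can be described in a declarative Jenkinsfile; this is only a sketch, and the stage names and shell commands are illustrative:

```groovy
pipeline {
  agent any
  stages {
    stage('Checkout') { steps { checkout scm } }          // pull the latest code
    stage('Build')    { steps { sh 'mvn -B clean package' } }
    stage('Test')     { steps { sh 'mvn -B verify' } }    // run the automated tests
  }
  post {
    failure { echo 'Build failed - notify the team' }
  }
}
```

Because the pipeline runs on every check-in, a broken build surfaces within minutes of the offending commit.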
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Testing can be done using custom-built frameworks based on requirements. Both white-box and black-box testing can be automated using a CI server.
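As an illustration of how even a tiny custom-built check can gate a build, here is a minimal, hypothetical smoke-test class (not the framework from the demo): the CI server runs it after the build, and a non-zero exit code fails the pipeline.

```java
// Minimal smoke-test sketch: each check prints PASS/FAIL, and the
// process exit code tells the CI server whether to fail the build.
public class SmokeTest {
    static int failed = 0;

    static void check(String name, boolean condition) {
        System.out.println((condition ? "PASS: " : "FAIL: ") + name);
        if (!condition) failed++;
    }

    public static void main(String[] args) {
        // Hypothetical checks; a real suite would exercise the application itself.
        check("arithmetic", 2 + 2 == 4);
        check("string-handling", "ci".toUpperCase().equals("CI"));
        System.out.println(failed == 0 ? "BUILD OK" : "BUILD FAILED");
        System.exit(failed == 0 ? 0 : 1);
    }
}
```

In practice a framework such as JUnit plays this role, but the contract with the CI server is the same: exit status decides pass or fail.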
Code coverage is a measurement of how many lines/blocks of your code are executed while the automated tests are running. It is collected by using a specialized tool to instrument the binaries with tracing calls and then running a full set of automated tests against the instrumented product. A good tool will give you not only the percentage of code executed, but will also allow you to drill into the data and see exactly which lines of code were executed during a particular test. After automated testing of the build, a code coverage report can be generated to understand actual code usage. Examples of code coverage tools: EMMA, DevelCover, JCov, etc.
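With Maven, coverage instrumentation is typically wired in as a build plugin. A minimal configuration for JaCoCo (the coverage library used in the demo; the version number shown is illustrative) looks like:

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.8</version> <!-- pick a current version -->
  <executions>
    <execution>
      <goals><goal>prepare-agent</goal></goals> <!-- instruments the test JVM -->
    </execution>
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals><goal>report</goal></goals> <!-- writes the report under target/site/jacoco -->
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn verify` produces a browsable report showing exactly which lines each test run covered.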
Static analysis, also called static code analysis, is a method of debugging a computer program by examining the code without executing it. The process provides an understanding of the code structure and can help ensure that the code adheres to industry standards. Static analysis can be automated and should be part of the delivery pipeline to catch code defects early. Example tools: SonarQube, Fortify, Checkstyle, etc.
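As a concrete example, Checkstyle (used in the demo) is driven by an XML rules file; this minimal sketch enforces just two illustrative rules:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
  "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
  "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <!-- Flag lines longer than 120 characters. -->
  <module name="LineLength">
    <property name="max" value="120"/>
  </module>
  <module name="TreeWalker">
    <!-- Flag imports that are never used. -->
    <module name="UnusedImports"/>
  </module>
</module>
```

A real ruleset would be far larger (the bundled Sun and Google style configurations are common starting points), and the CI server can be set to fail the build on violations.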
Performance testing is, in general, a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage. These kinds of tests can be automated using cloud platforms to launch load-generator and target servers programmatically before releasing the build to production. Examples of load testing tools: LoadRunner, JMeter, etc.
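For instance, JMeter can replay a saved test plan headlessly from a CI job; the plan and results file names below are placeholders:

```shell
# Non-GUI (-n) run of a saved test plan (-t); -l records per-sample results.
jmeter -n -t load-test-plan.jmx -l results.jtl
```

The resulting `.jtl` file can then be parsed to fail the build if response times or error rates exceed agreed thresholds.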
An artifact manager is a software tool designed to optimize the download and storage of binary files used and produced in software development. It centralizes the management of all the binary artifacts generated and used by the organization, taming the complexity arising from the diversity of binary artifact types, their position in the overall workflow, and the dependencies between them. The artifact manager hosts the deployable binary files and their versions, which eases the rollback process. Example tools: Artifactory, Archiva, Nexus, etc.
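With Maven, publishing versioned artifacts to a manager such as Artifactory is mostly a matter of pointing `distributionManagement` at the repositories and running `mvn deploy`; the host name and repository names below are placeholders:

```xml
<distributionManagement>
  <repository>
    <id>company-releases</id> <!-- must match a <server> credentials entry in settings.xml -->
    <url>https://artifactory.example.com/artifactory/libs-release-local</url>
  </repository>
  <snapshotRepository>
    <id>company-snapshots</id>
    <url>https://artifactory.example.com/artifactory/libs-snapshot-local</url>
  </snapshotRepository>
</distributionManagement>
```

Because every deployed build is stored under its version, rolling back is simply a matter of redeploying an earlier artifact.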
Continuous Deployment is a software development practice in which every code change goes through the entire pipeline and is put into production, automatically, resulting in many production deployments every day. Deployments can be done using multiple strategies based on the infrastructure and environment where the application is deployed. Deployment types: Blue-Green, Rolling deployments, etc. Example tools: Ansible, Spinnaker, AWS CodeDeploy, Custom scripts, etc.
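With AWS CodeDeploy (the deployment tool used in the demo), the deployment steps are described in an `appspec.yml` file at the root of the deployment bundle; the artifact path and script names here are illustrative:

```yaml
version: 0.0
os: linux
files:
  - source: target/demo-app.jar      # placeholder path to the built artifact
    destination: /opt/demo-app
hooks:
  ApplicationStop:
    - location: scripts/stop.sh      # hypothetical scripts bundled with the revision
      timeout: 60
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 60
```

CodeDeploy runs the lifecycle hooks in order on each target instance, which is what makes strategies like rolling deployments possible without manual intervention.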
Given the sheer number of releases in a continuous delivery setup, it becomes important to measure the performance and availability of the software to improve stability. Continuous monitoring helps identify the root causes of issues quickly, proactively prevent outages, and minimize user impact. Example tools: Datadog, New Relic, AppDynamics, etc.
It’s Show Time!
As promised, this blog is meant to showcase the Continuous Integration stages in action. I have tried to demo all the CI stages of the continuous delivery pipeline in the YouTube video at the end of this blog.
A few quick points to note before watching the demo:
- Application framework used for the Demo: Spring Boot (Java)
- Source Control: GitLab
- CI Server: Jenkins
- Build Tool: Maven
- Automated Testing: Custom Built Testing Framework (more on this in next blog post)
- Code Coverage: JaCoCo library
- Static Analysis: CheckStyle
- Performance Testing: Custom Built Scripts (more on this in next blog post)
- Artifact Management: JFrog Artifactory
- Continuous Deployment: AWS CodeDeploy (more on this in next blog post)
- Continuous Monitoring: Datadog (more on this in next blog post)