The cultural movement that is DevOps — which, in short, encourages close collaboration among developers, IT operations, and system admins — also encompasses a set of tools, techniques, and practices. As part of DevOps, the CI/CD process incorporates automation into the SDLC, allowing teams to integrate and deliver incremental changes iteratively and at a quicker pace. Together, these human- and technology-oriented elements enable smooth, fast, and quality software releases. This Zone is your go-to source on all things DevOps and CI/CD (end to end!).
This article identifies some basic trends in the software industry. Specifically, we will explore how some well-known organizations implement and benefit from early and continuous testing, faster software delivery, reduced costs, and increased collaboration. While it is clear that activities like breaking down silos, shift-left testing, automation, and continuous delivery are interrelated, it is beneficial to look at how companies strive to achieve such goals in practice.

Companies try to break down the traditional silos that separate development, operations, and testing teams. This eliminates barriers and builds collaboration, where all teams share responsibility for quality throughout the software development lifecycle. This collaborative approach leads to improved problem-solving, faster issue resolution, and ultimately, higher-quality software.

The concept of "shifting left" emphasizes integrating testing activities earlier into the development process. This means conducting tests as code is written (unit tests) and throughout development stages (integration tests), instead of waiting until the end. By detecting and fixing defects earlier, the overall development cycle becomes more efficient, as issues are addressed before they become complex and expensive to fix. This proactive approach ultimately leads to higher-quality software and faster releases.

Embracing automation is another core trend. By utilizing automated testing tools and techniques, such as unit testing frameworks and continuous integration pipelines, organizations can significantly accelerate the testing process. This frees up valuable human resources, allowing testers to focus on more complex tasks like exploratory testing, test strategy development, and collaborating with other teams. Beyond increasing efficiency, automation enables faster feedback loops and earlier identification of defects, ultimately leading to higher-quality software and faster releases.

Continuous delivery, which ensures that high-quality software is delivered frequently and reliably, is another key trend. This is achieved through several key practices: automation of repetitive tasks, integration and testing throughout development, and streamlined deployment pipelines. By catching and addressing issues early, fewer defects reach production, enabling faster and more reliable releases of high-quality software that meets user expectations. This continuous cycle of delivery and improvement ultimately leads to increased innovation and a competitive edge.

Early and Continuous Testing

Early and continuous testing may lead to better defect detection and faster resolution, resulting in higher-quality software. Let's take a look at a few specific cases:

1. Netflix

Challenge: Releasing new features regularly while maintaining a high level of quality across various devices and platforms.

Solution: Netflix adopted a DevOps approach with extensive automated testing. They utilize unit tests that run on every code commit, catching bugs early. Additionally, they have automated testing frameworks for various functionalities like UI, API, and performance.

Impact: This approach allows them to identify and fix issues quickly, preventing them from reaching production and impacting user experience.

2. Amazon

Challenge: Ensuring the reliability and scalability of their massive e-commerce platform to handle unpredictable traffic spikes.

Solution: Amazon employs a "chaos engineering" practice.
They intentionally introduce controlled disruptions into their systems through automated tools, simulating real-world scenarios like server failures or network outages. This proactive testing helps them uncover potential vulnerabilities and weaknesses before they cause customer disruptions.

Impact: By identifying and addressing potential issues proactively, Amazon can ensure their platform remains highly available and reliable, providing a seamless experience for millions of users.

3. Spotify

Challenge: Maintaining a seamless music streaming experience across various devices and network conditions.

Solution: Spotify heavily utilizes continuous integration and continuous delivery (CI/CD) pipelines, integrating automated tests at every stage of the development process. This includes unit tests, integration tests, and performance tests.

Impact: Early detection and resolution of issues through automation allow them to maintain a high level of quality and deliver frequent app updates with new features and bug fixes. This results in a more stable and enjoyable user experience for music lovers globally.

These examples highlight how various organizations across different industries leverage early and continuous testing to:

Catch defects early: Automated tests identify issues early in the development cycle, preventing them from cascading into later stages and becoming more complex and expensive to fix.
Resolve issues faster: Early detection allows for quicker bug fixes, minimizing potential disruptions and ensuring a smoother development process.
Deliver high-quality software: By addressing issues early and continuously, organizations can deliver software that meets user expectations and performs reliably.

By embracing early and continuous testing, companies can achieve a faster time to market, reduced development costs, and ultimately, a more satisfied customer base.

Faster Software Delivery

Emphasizing automation and continuous integration empowers organizations to achieve faster software delivery. Here are some examples showcasing how:

1. Netflix

Challenge: Maintaining rapid release cycles for new features and bug fixes while ensuring quality.

Solution: Netflix utilizes a highly automated testing suite encompassing unit tests, API tests, and UI tests. These tests run automatically on every code commit, providing immediate feedback on potential issues. Additionally, they employ a continuous integration and delivery (CI/CD) pipeline that automatically builds, tests, and deploys code to production environments.

Impact: Automation reduces the need for manual testing, significantly reducing testing time and allowing for faster feedback loops. The CI/CD pipeline further streamlines deployment, enabling frequent releases without compromising quality. This allows Netflix to deliver new features and bug fixes to users quickly, keeping them engaged and satisfied.

2. Amazon

Challenge: Scaling deployments and delivering new features to their massive user base quickly and efficiently.

Solution: Amazon heavily invests in infrastructure as code (IaC) tools. These tools allow them to automate infrastructure provisioning and configuration, ensuring consistency and repeatability across different environments. Additionally, they leverage a robust CI/CD pipeline that integrates automated testing with infrastructure provisioning and deployment.
Impact: IaC reduces manual configuration errors and streamlines infrastructure setup, saving significant time and resources. The integrated CI/CD pipeline allows for automated deployments, reducing the time required to move code from development to production. This enables Amazon to scale efficiently and deliver new features and services to their users at an accelerated pace.

3. Spotify

Challenge: Keeping up with user demand and delivering new features and updates frequently.

Solution: Spotify utilizes a containerized microservices architecture, breaking its application down into smaller, independent components. This allows for independent development, testing, and deployment of individual services. Additionally, they have invested heavily in automated testing frameworks and utilize a continuous integration and delivery pipeline.

Impact: The microservices architecture enables individual teams to work on and deploy features independently, leading to faster development cycles. Automated testing provides rapid feedback, allowing for quick identification and resolution of issues. The CI/CD pipeline further streamlines deployment, allowing for frequent releases of new features and updates to the Spotify platform, keeping users engaged with fresh content and functionalities.

These examples demonstrate how companies across various sectors leverage automation and continuous integration to achieve:

Reduced testing time: Automated testing reduces the need for manual efforts, significantly reducing the time it takes to test and identify issues.
Faster feedback loops: Automated tests provide immediate feedback on code changes, allowing developers to address issues quickly and iterate faster.
Streamlined deployment: Continuous integration and delivery pipelines automate deployments, minimizing manual intervention and reducing the time it takes to move code to production.

By leveraging automation and continuous integration, organizations can enjoy faster time to market, increased responsiveness to user needs, and a competitive edge in their respective industries.

Reduced Costs

Automating repetitive tasks and shifting left can reduce the overall cost of testing. There are three main areas to highlight here.

1. Reduced Manual Effort

Imagine a company manually testing a new e-commerce website across different browsers and devices. This would require a team of testers and significant time, leading to high labor costs. By automating these tests, the company can significantly reduce the need for manual testing, freeing up resources for more complex tasks and strategic testing initiatives.

2. Early Defect Detection and Resolution

A software company traditionally performed testing only toward the end of the development cycle. This meant that bugs discovered late in the process were more expensive to fix, because more code had already been built on top of them. By shifting left and implementing automated unit tests early on, the company can identify and fix bugs early in the development cycle, minimizing the cost of rework and reducing the chance of defects cascading into later stages.

3. Improved Test Execution Speed

A software development team manually ran regression tests after every code change, causing lengthy delays and hindering development progress. By automating these tests, the team can run them multiple times a day, providing faster feedback and enabling developers to iterate more quickly. This reduces overall development time and associated costs.
Examples

Capgemini: Implemented automation for 70% of their testing efforts, resulting in a 50% reduction in testing time and a 20% decrease in overall project costs.
Infosys: Embraced automation testing, leading to a 40% reduction in manual effort and a 30% decrease in testing costs.
Barclays Bank: Shifted left by introducing unit and integration testing, achieving a 25% reduction in defect escape rate and a 15% decline in overall testing costs.

These examples showcase how companies across different sectors leverage automation and shifting left to achieve the following:

Reduced labor costs: Automating repetitive testing tasks reduces the need for manual testers, leading to significant cost savings.
Lower rework costs: Early defect detection and resolution minimize the need for rework later in the development cycle, saving time and money.
Increased development efficiency: Faster test execution speeds through automation allow developers to iterate more quickly and reduce overall development time, leading to cost savings.

By embracing automation and shifting left, organizations can enjoy improved resource utilization, reduced project overruns, and a better return on investment (ROI) for their software development efforts.

Increased Collaboration

Another major trend is increased collaboration between development (Dev), operations (Ops), and testing teams, achieved by creating a shared responsibility for quality throughout the software development lifecycle. Here's how it works:

Traditional Silos vs. Collaborative Approach

Traditional silos: In a siloed environment, each team operates independently. Developers write code, testers find bugs, and operations manage the production environment. This often leads to finger-pointing, delays, and a disconnect between teams.

Collaborative approach: DevOps, QAOps, and agile practices, among others, break down these silos and promote shared ownership of quality. Developers write unit tests, operations implement automated infrastructure testing, and testers focus on higher-level testing and test strategy. This nurtures collaboration, communication, and a shared sense of accountability.

Examples

Netflix: Utilizes a cross-functional team structure with members from development, operations, and testing working together. This allows them to share knowledge, identify and resolve issues collaboratively, and ensure a smooth delivery process.
Amazon: Employs a "blameless post-mortem" culture where teams analyze incidents collaboratively without assigning blame. This builds openness, encourages shared learning, and ultimately improves system reliability.
Spotify: Implements a "one team" approach where developers, operations engineers, and testers work together throughout the development cycle. This facilitates open communication, allows for shared decision-making, and promotes a sense of collective ownership of the product's success.

Benefits of Increased Collaboration

Improved problem-solving: By working together, teams can leverage diverse perspectives and expertise to identify and resolve issues more effectively.
Faster issue resolution: Open communication allows for quicker sharing of information and faster identification of the root cause of problems.
Enhanced quality: Increased collaboration creates a culture of ownership and accountability, leading to higher-quality software.
Improved team morale: Collaborative work environments are often more enjoyable and motivating for team members, leading to increased productivity and job satisfaction.
Strategies for Fostering Collaboration

Cross-functional teams: Encourage collaboration by forming teams with members from different disciplines.
Shared goals and metrics: Align teams around shared goals and success metrics that promote collective responsibility for quality.
Open communication: Create open communication channels and encourage information sharing across teams.
Knowledge sharing: Facilitate knowledge sharing across teams through workshops, training sessions, and collaborative problem-solving activities.

By adopting DevOps, QAOps, and agile principles, organizations can break down silos, embrace shared responsibility, and cultivate a culture of collaboration. This leads to a more efficient, innovative, and, ultimately, successful software development process.

Wrapping Up

Many organizations are embarking on a transformative journey toward faster, more reliable, and higher-quality software delivery. By breaking down silos and forging shared responsibility, teams can leverage automation and shift-left testing to enhance continuous delivery. This collaborative and efficient approach empowers organizations to deliver high-quality software more frequently, reduce costs, and ultimately gain a competitive edge in the ever-evolving technology landscape.
Why Is Securing the Pipeline Important?

CI/CD stands for Continuous Integration/Continuous Delivery, the process of automating the tasks of software development. Securing CI/CD is a multi-stage process designed to identify and mitigate potential risks at different stages of the pipeline. The CI/CD pipeline includes stages such as source code maintenance, build, testing, and deployment. Each of these stages is vulnerable unless we implement a solid risk mitigation system. If we add feature branches to the picture, the pipeline becomes even more exposed. As such, securing the CI/CD process across all the tools and at every stage of the pipeline should be a top priority for every organization. No matter what tools you are using to secure the pipeline, make sure you mitigate all potential risk factors along the path code takes as it moves across the pipeline.

What Is DevOps?

Large-scale and highly elastic application services come with a requirement of automatic validation, infrastructure upgrading, development and deployment, quality assurance, and infrastructure administration. Traditional infrastructure management is being replaced by building CI/CD pipelines for all phases of the product development lifecycle.

DevOps is a union of software development and operations. It is a culture that a company evolves from the Agile development process. The methods of Continuous Integration, Continuous Delivery, and Continuous Deployment have risen with DevOps, which focuses on:

Communication, collaboration, and cohesion between teams
Applying best practices for change, configuration, and deployment automation
Delivering solutions faster
Monitoring and planning high-speed product updates

Figure 1: DevOps Model

CI/CD gets rid of the manual gate and implements fully automated verification of the acceptance environment to determine whether the pipeline can continue to production. Continuous Integration focuses on the software development cycle of the individual developer in the code repository. It can be executed multiple times a day with the primary purpose of enabling early detection of integration bugs, tighter cohesion, and more development collaboration. Major activities are static code analysis, unit tests, and automated reviews. Continuous Delivery focuses on automated code deployment in testing, staging, or production environments, taking the approval of updates to achieve an automated software release process and pre-emptively discovering deployment issues.

Figure 2: DevOps Phases

Benefits of DevOps

Improved collaboration, operational support, and faster fixes
Increased flexibility, agility, and reliability
Infrastructure security and data protection
Faster maintenance and upgrades
Transformation of projects with digitalization strategies
Increased speed and productivity of business and IT teams

AWS CI/CD Pipelines

AWS provides a set of developer tools that can be used to achieve DevOps CI/CD in a secure, scalable, and maintainable environment that integrates easily with existing CI/CD tools like Ansible, Chef, Puppet, and Terraform. AWS provides CI/CD for virtual machine or container-based services, along with options to manage (create, update, and delete) all other services like databases, storage, compute, machine learning, etc.
Figure 3: AWS CI/CD Tools

AWS Services for DevOps Integration

AWS provides a bundle of DevOps services designed to enable organizations to build and deliver their products faster and more reliably. These services simplify the process of provisioning and managing infrastructure, automating the software release process, and monitoring application and infrastructure performance.

Figure 4: Sample pipeline using AWS and other CI tools

AWS provides services that can help your organization practice DevOps more efficiently. We will discuss some of the important tools here, categorized by their roles as depicted in the following sections.

Infrastructure as Code

Treat infrastructure the same way a developer treats code, with all the same best practices and tests. AWS provides a DevOps-focused way of creating and maintaining infrastructure. Some of the Infrastructure as Code tools are:

AWS CloudFormation: Provides the facility to prepare templates for infrastructure and services. Templates can be written in JSON or YAML and can be managed with versioning. These templates can be executed on Jenkins or any other CI server with the AWS CLI.
Terraform: Provides an option for managing AWS resources with rich controls and extensions built on state management.
AWS OpsWorks: Provides an even higher level of automation with additional features like integration with configuration management software (Chef) and application lifecycle management.
AWS Config: An audit tool that monitors existing AWS account resources and triggers an alarm upon any change in infrastructure.
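For instance, a CloudFormation template checked into the repository can be validated and deployed from any CI server with the AWS CLI. A minimal sketch, assuming a hypothetical template file named template.yaml and a stack named demo-stack:

Shell
# Validate the template before attempting a deployment
aws cloudformation validate-template --template-body file://template.yaml
# Create the stack, or update it in place if it already exists
aws cloudformation deploy --template-file template.yaml --stack-name demo-stack

Both commands are part of the standard AWS CLI; in a Jenkins job, they would typically run as a build step after the template passes review.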
Continuous Deployment

Continuous Deployment is the core concept of a DevOps strategy. Its primary goal is to enable the automated deployment of production-ready application code. The following are the CI/CD tools provided by AWS:

AWS CodeCommit: A secure, highly scalable, managed source control service that hosts private Git repositories
AWS CodeDeploy: Provides the ability to deploy applications across an Amazon EC2 fleet with minimal downtime, centralizing control and integrating with your existing software release or continuous delivery process. There are also third-party tools like Claudia and Serverless that deploy to AWS Lambda and Elastic Beanstalk.
AWS Elastic Beanstalk: Supports automation and numerous other DevOps best practices, including automated application deployment, monitoring, infrastructure configuration, and version management. Application and infrastructure changes can be easily rolled back as well as forward.
Amazon ECS: A highly scalable and secure container orchestration service for running Docker containers (the images themselves are typically stored in Amazon ECR)
AWS CodePipeline: A continuous delivery and release automation service that aids smooth deployments. Design a development workflow for checking in code, building the code, deploying your application into staging, testing it, and releasing it to production.

Automation and Monitoring

Automation and monitoring focus on the setup, configuration, deployment, and support of infrastructure and applications. Communication and collaboration are fundamental in a DevOps strategy, and AWS provides flexible tools to facilitate this. We are listing some of the frequently used ones here:

AWS CloudWatch: Monitors all AWS resources and applications in real time; provides metrics for managed services to design dashboards, alarms, and triggers
AWS X-Ray: Records and traces the communication between services and detects issues in performance and application behavior
AWS CloudTrail: Enables governance, compliance, operational auditing, and risk auditing
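As a small illustration of codifying monitoring alongside your pipeline, the sketch below creates a CloudWatch alarm from the CLI; the alarm name, threshold, and SNS topic ARN are illustrative placeholders:

Shell
# Alarm when average EC2 CPU utilization exceeds 80% for two consecutive 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts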
In order to make the cloud software solution journey smooth, efficient, and effective, one must follow DevOps principles and practices. DevOps has become an integral part of any cloud solution in today's technology world. Many organizations offer DevOps as a service to automate your product delivery lifecycle and improve collaboration, monitoring, management, and reporting. It helps to accelerate new services through CI/CD to achieve operational flexibility, deliver cost-effectively, and avoid issues in production.

Takeaways

CI/CD security is a necessity for organizations to build and deploy applications in a reliable, efficient, and secure way. The strategies and practices described in this article lay a strong foundation for securing CI/CD pipelines. Nonetheless, achieving a scalable and secure pipeline is a continuous process that requires you to go beyond the basics of business flow. We would like to recommend a few next steps that will help you implement the discussed solution:

Training and assessment: Regularly educate and train development and DevOps teams on emerging security best practices.
Security audits: Schedule regular security assessments of your CI/CD pipeline to detect and mitigate potential vulnerabilities or security risks.
Always be informed: Read up on the latest security trends, vulnerability reports, security patches, etc. to keep your organization's software delivery process secure and reliable.

Security testing is an essential part of testing. Every organization wants to do at least basic security testing before releasing code to production. Security testing is like an ocean; it might be difficult to perform complete security testing without the help of trained professionals. Some open-source tools provide automated basic scanning of a website. Once we add it to pipelines like any other test, such as smoke or regression, the security tests can also run as part of deployment and report issues.

What Is OWASP ZAP?

ZAP is a popular open-source security testing tool. It helps find vulnerabilities in applications or API endpoints, including cross-site scripting, SQL injection, broken authentication, sensitive data exposure, broken access control, security misconfiguration, insecure deserialization, etc. The beauty of this tool is that it provides both a UI and a command-line interface to run the tests. Since it provides a command-line interface, we can integrate it into our pipeline. The pipeline can be triggered when we release code into production, which helps find potential security issues.

What Are We Going To Learn?

How to configure and set up an OWASP ZAP security test in an Azure Release Pipeline
How to run OWASP ZAP security tests on websites in an Azure DevOps pipeline using Docker
How to perform API security testing using the OWASP ZAP security testing tool in Azure DevOps pipelines with Docker images
How to publish OWASP ZAP security testing results in an Azure DevOps pipeline
How to publish OWASP ZAP HTML test results to Azure Artifacts by creating a feed and packages
How to download artifacts containing OWASP ZAP HTML test results using the Azure CLI tool

What Are the Prerequisites?

Create a Repository

Create a repository inside your organization (preferred), download the file OWASPToNUnit3.xslt, and keep it inside the repository. This file is needed to convert the OWASP ZAP security test result XML file in order to publish results in Azure DevOps.

Create a Feed in Azure DevOps Artifacts

This feed is helpful for publishing OWASP ZAP HTML results. The steps are as follows:

Step 1: Navigate to Azure DevOps > Click on Artifacts > Click on Create Feed.

Step 2: In the "Create new feed" form, enter the correct text and click on Create. Note: We will be using the feed name while configuring tasks. You need to choose the same from the drop-down, so note down the feed name.

Step 3: Create a sample package inside the feed using the command line. Install the Azure CLI. After installation, run the command below to create a sample package:

PowerShell
az artifacts universal publish --organization https://dev.azure.com/[Your_Org_Name] --feed SecurityTesting --name security_testing --version 1.0.0 --description "Your description" --path .

Upon completion of Step 3, navigate to Azure DevOps > Artifacts > and select the SecurityTesting feed. You should see the newly created package.

We have completed all initial setup and prerequisites and are good to start with pipelines now. Refer to the Microsoft documentation for more details.
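Before wiring ZAP into the pipeline, it can be helpful to run a quick scan locally to confirm that the Docker image works and the target is reachable. A minimal sketch using ZAP's passive baseline scan (https://example.com is a placeholder target):

Shell
# Passive baseline scan; faster and less intrusive than the full scan used in the pipeline below
docker run --rm -t owasp/zap2docker-stable zap-baseline.py -t https://example.com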
How to Configure OWASP ZAP Security Tests in an Azure DevOps Pipeline

Let's discuss, step by step, setting up the OWASP ZAP security tests pipeline using a Docker image.

Step 1: Create a New Release Pipeline

1. Navigate to Azure DevOps > Pipeline > click on Releases.
2. Click on New, and choose New Release Pipeline.
3. Choose Empty job when the template window prompts.
4. Name the stage Security Testing (or any other name you wish).

Step 2: Add Artifact to Release Pipeline

Click on Add an artifact. In the popup window, choose Azure Repository. Choose your Project. Choose the Source repository (this is where you created the XSLT file in the prerequisites section). Choose the default branch as master. Click Add.

Step 3: Add Tasks to Pipeline

We need to add tasks to the pipeline. In our case, we have created only one stage, which is security testing.

Step 4: Configure Agent Job Details

Display Name: Agent Job or anything you wish
Agent pool: Choose Azure Pipelines.
Agent Specification: Choose any Ubuntu agent from the dropdown.

Step 5: Add Docker Installer Task

In the search box, search for Docker CLI, add the task, and configure the Docker CLI task.

Step 6: Add Bash Script Task

Step 7: Configure Bash Script Task

Display name: Security Test Run
Type: Click on the Inline radio button.
Script: Copy and paste the code below (don't forget to replace your URL).

Example:

Shell
chmod -R 777 ./
docker run --rm \
  -v $(pwd):/zap/wrk/:rw \
  -t owasp/zap2docker-stable \
  zap-full-scan.py \
  -t https://dzone.com \
  -g gen.conf \
  -x OWASP-ZAP-Report.xml \
  -r scan-report.html

How To Run the OWASP ZAP Security Test for an API

The above-mentioned script works well with websites and webpages, but if your target is an API, you need a different inline script. The rest remains the same.

Script for OWASP ZAP API Security Scan

Shell
chmod -R 777 ./
docker run --rm \
  -v $(pwd):/zap/wrk/:rw \
  -t owasp/zap2docker-weekly \
  zap-api-scan.py \
  -t [your-api-url] \
  -f openapi \
  -g api-scan.conf \
  -x OWASP-ZAP-Report.xml \
  -r api-scan-report.html true

Example:

Shell
chmod -R 777 ./
docker run --rm \
  -v $(pwd):/zap/wrk/:rw \
  -t owasp/zap2docker-weekly \
  zap-api-scan.py \
  -t https://dzone.com/swagger/v1/swagger.json \
  -f openapi \
  -g api-scan.conf \
  -x OWASP-ZAP-Report.xml \
  -r api-scan-report.html true

Thanks to sudhinsureshr for this.

Step 8: Add PowerShell Task To Convert the ZAP XML Report to Azure DevOps NUnit Report Format

Add a PowerShell task using the Azure DevOps add tasks window, then configure it to convert the ZAP XML report to NUnit XML:

Display Name: Anything you wish
Type: Inline
Script: Inline

Sample inline script below. Note: This script contains a relative path to the repository and folder. The content of the script may change based on the names you specified in your project.

PowerShell
$XslPath = "$($Env:SYSTEM_DEFAULTWORKINGDIRECTORY)/_Quality/SecurityTesting/OWASPToNUnit3.xslt"
$XmlInputPath = "$($Env:SYSTEM_DEFAULTWORKINGDIRECTORY)/OWASP-ZAP-Report.xml"
$XmlOutputPath = "$($Env:SYSTEM_DEFAULTWORKINGDIRECTORY)/Converted-OWASP-ZAP-Report.xml"
$XslTransform = New-Object System.Xml.Xsl.XslCompiledTransform
$XslTransform.Load($XslPath)
$XslTransform.Transform($XmlInputPath, $XmlOutputPath)

Step 9: [Optional] Publish OWASP ZAP Security Testing HTML Results to Azure Artifacts

Add a Universal Package task and configure it:

Display Name: Anything you wish
Command: Publish (choose from the dropdown)
Path to Publish: $(System.DefaultWorkingDirectory), or you can choose from the selection panel (…) menu
Feed Location: This organization's collection
Destination Feed: SecurityTesting (the feed you created in prerequisite Step 2)
Package Name: security_testing (the package you created in prerequisite Step 3)
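The learning objectives above also promised downloading these artifacts with the Azure CLI. A hedged sketch that mirrors the publish command from the prerequisites (the organization, feed, package name, and version are the ones you created there):

Shell
# Download the published HTML results package into the current directory
az artifacts universal download --organization https://dev.azure.com/[Your_Org_Name] --feed SecurityTesting --name security_testing --version 1.0.0 --path .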
Step 10: Publish OWASP ZAP Results Into the Azure DevOps Pipeline

Add a Publish Test Results task and configure it:

Display Name: Any name
Test Result Format: NUnit
Test Result Files: The output file name from Step 8; in our case, it's Converted-OWASP-ZAP-Report.xml.
Search Folder: $(System.DefaultWorkingDirectory)

After completing Step 10, trigger the Azure OWASP ZAP release. The release starts running and shows the progress in the command line.

Step 11: Viewing OWASP ZAP Security Testing Results

Once the release is completed, navigate to the completed tasks and click on the Publish Test Results task. A window with the link to the result opens; once you click the link, you can see the results.

Final Thoughts

ZAP is an acronym for Zed Attack Proxy, formerly known as OWASP ZAP. It is primarily used as a web application security scanner. The goal is to find vulnerabilities in an application or API endpoint that are prone to various types of attacks. ZAP is actively maintained by a dedicated team of volunteers and is used extensively by professional penetration testers. As this article shows, with the detailed configuration steps above, security testing can be added to the DevOps pipeline just like any other test, run as part of deployment, and report issues.
Software applications are typically connected to externalities such as databases, SFTP sites, secured web APIs, etc. We often have to store the secrets used to access these externalities in the code we write and share these secrets with other developers on our team. These secrets can include things such as user IDs, passwords, private key files, or anything else that should not be seen by unauthorized persons. While the decision to include such secrets in a code repository is often highly debated, there can be some use cases in which this approach may be necessary.

What Is Git Crypt?

git-crypt provides a security mechanism for Git repositories. It allows you to encrypt whatever files you wish within a repository. The encryption keys it uses can then be exported and securely shared among other developers, and they can be imported into tools such as Jenkins for testing and deployment.

Getting Started With Git Crypt

To get started with git-crypt, you will need to build it from source or install it through your operating system's preferred package manager. Once that is done, you will need to initialize your (existing) repository to work with git-crypt:

Plain Text
$ git-crypt init

You then need to tell git-crypt which files it needs to encrypt. Say you have a file containing your secrets in a directory called secretdir with the name i-want-this-to-be-private.txt. You would need to configure a .gitattributes file to tell git-crypt to encrypt this file:

Plain Text
# You can use the standard syntax of .gitattributes to configure this file;
# that could include things like wildcards or other directories.
secretdir/i-want-this-to-be-private.txt filter=git-crypt diff=git-crypt

Once you commit the .gitattributes file, you will need to make and save a change to secretdir/i-want-this-to-be-private.txt so that it needs to be committed. Once you have committed the updated version of this file, it will be encrypted for the next developer who clones the repository. You can use the git-crypt status command to verify that your file has been encrypted. Another user who clones the repository and attempts to view the file without decrypting it will see gibberish.

One way to allow authorized users to work with the repository is to securely share with them a key file that gives them access. You can export this key with the following command; just make sure not to store it in the same directory as your repository:

Plain Text
$ # You can specify any file name or path here.
$ git-crypt export-key ../git-crypt.key

Once another authorized user has the key, he or she can use it to decrypt the files and use the repository:

Plain Text
$ git-crypt unlock ~/git-crypt.key
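After unlocking, git-crypt status can confirm which files are being handled by encryption. The output below is an illustrative sketch for the example file used in this article:

Plain Text
$ git-crypt status
    encrypted: secretdir/i-want-this-to-be-private.txt
not encrypted: README.md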
Now that we have the export key, how do we integrate it into a Jenkins pipeline?

How To Use Git Crypt in a Jenkins Pipeline

Creating Credentials in Jenkins

Log into your Jenkins web UI. Typically, this runs on port 8080 of the server on which Jenkins is installed. Within Jenkins, access the dashboard, go to "Manage Jenkins," and then choose "Credentials." Upload the key you generated previously using the interface, then use the added key file in the Jenkins pipeline. Here, "git-crypt-export-key" is the ID given when you add the Jenkins credentials.

Plain Text
pipeline {
    agent {
        node {
            label 'my-test-node'
        }
    }
    environment {
        mySecret = credentials("git-crypt-export-key")
    }
    stages {
        stage("Decrypt the files") {
            steps {
                sh """
                    cd /opt/my-secret-repo
                    git-crypt unlock '$mySecret'
                """
            }
        }
    }
}

You may get a warning about data being passed insecurely by using this method.

Conclusion

This article shows us both how to use git-crypt to protect secrets in a Git repository and how to use the keys provided by the same for CD tools such as Jenkins.

Further Reading

How to Integrate Your GitHub Repository to Your Jenkins Project
Working with PHP, Git, and Azure DevOps
How to Use Azure DevOps' Work Items and PHP
Jenkins is a Continuous Integration (CI) server that can fetch the latest code from the version control system (VCS), build it, test it, and notify developers. Jenkins can do many things apart from just being a Continuous Integration server. Originally known as Hudson, Jenkins is an open-source project written by Kohsuke Kawaguchi. As Jenkins is a Java-based project, before installing and running Jenkins on your machine, you need to install Java 8. The Multibranch Pipeline allows you to automatically create a pipeline for each branch in your Source Code Management (SCM) repository with the help of a Jenkinsfile.

What Is a Jenkinsfile?

Jenkins pipelines can be defined using a text file called a Jenkinsfile. You can implement pipeline as code using a Jenkinsfile, which is written in a domain-specific language (DSL). With a Jenkinsfile, you can describe the steps needed for running a Jenkins pipeline.

What Is a Multibranch Pipeline?

The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages, and executes pipelines for branches that contain a Jenkinsfile in source control.

Architecture Diagram

5 Steps To Create a Multibranch Pipeline Project

1. Open the Jenkins homepage in the local environment (such as http://localhost:8080).
2. Click New Item in the top left corner of the Jenkins dashboard.
3. Enter the name of your project in the Enter an item name field, scroll down, select Multibranch Pipeline, and click the OK button.
4. On the configure page, we need to configure the GitHub repo source. Scroll down to Branch sources and select the source from the Add Source dropdown. We will be using GitHub in this demonstration example, so select GitHub from the dropdown.
5. Enter the location of the repository using the following steps: Select the Add button to add credentials and click Jenkins. Enter the GitHub username, password, ID, and description. Select the dropdown to add credentials in the credentials field. Click the Save button.

On saving, Jenkins automatically scans the designated repository and does some indexing for organization folders. Organization folders enable Jenkins to monitor an entire GitHub Organization or Bitbucket Team/Project and automatically create new Multibranch Pipelines for repositories whose branches and pull requests contain a Jenkinsfile. Currently, this functionality exists only for GitHub and Bitbucket, provided by the GitHub Organization Folder and Bitbucket Branch Source plugins. Once jobs are created, the build gets triggered automatically.
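For reference, Jenkins only discovers a branch if it contains a Jenkinsfile. A minimal, hypothetical example follows, written as a shell snippet you could run at the repository root (the stage names and echo steps are placeholders for your real build and test commands):

Shell
# Create a minimal declarative Jenkinsfile and push it so the branch gets discovered
cat > Jenkinsfile <<'EOF'
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'Building...' }
        }
        stage('Test') {
            steps { echo 'Running tests...' }
        }
    }
}
EOF
git add Jenkinsfile
git commit -m "Add Jenkinsfile"
git push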
Configuring Webhooks for a Multibranch Pipeline Project

In the next step, we have to configure our Jenkins machine so it can communicate with our GitHub repository. For that, we need to get the hook URL of the Jenkins machine. Below are the steps to set up Jenkins webhooks on the GitHub repo:

1. Go to Manage Jenkins and select the Configure System view.
2. Find the GitHub Plugin Configuration section and click on the Advanced button.
3. Select Specify another hook URL for GitHub configuration.
4. Copy the URL in the text box field and unselect it. Then click on Save. It will redirect to the Jenkins dashboard.
5. Now navigate to the GitHub tab in the browser and select your GitHub repository.
6. Click on Settings. It will navigate to the repository settings.
7. Under Settings, click on the Webhooks option and then click on the Add Webhook button.
8. Paste the hook URL in the Payload URL field. Make sure the trigger webhook field has the Just the push event option selected.
9. Click Add webhook, and it will add the webhook to your repository.

Once you've added a webhook correctly, you can see the webhook with a green tick. Now go back to the repository, change the branch, and update any of the files. In this scenario, we will update the README.md file; we can then see that the Jenkins job is triggered automatically. After pipeline execution is completed, we can verify the history of the executed build under Build History by clicking the build number. On clicking the build number, select Console Output. From there, you can see the output of each step.

Conclusion

With this, we have learned the process of creating a Jenkins Multibranch Pipeline project and configuring it with a Git repo. You saw how easy it is to create a Multibranch Pipeline project, where Jenkins automatically creates a new independent job every time you create a new branch. Jenkins even takes care of branch maintenance: it can remove the job automatically when you remove the branch. Hope you found this article useful.

Further Reading

How To Build an Effective CI/CD Pipeline: Practical Steps for Creating Pipelines That Accelerate Deployments
Nowadays, it's critical to get your releases out fast. Gone are the days when developers could afford to wait weeks for their code to be deployed to a testing environment. More than ever, there is great demand for rapid deployment cycles that seamlessly take code from development to production without any hiccups. Yet the reality is that developers often find themselves bogged down by the complexities of infrastructure management and the tediousness of manual deployment processes. They crave a solution that allows them to focus solely on their code, leaving the intricacies of deployment to automation.

That's where Continuous Integration and Continuous Deployment (CI/CD) pipelines come in. These automated workflows streamline the entire deployment process, from code compilation to testing to deployment, enabling developers to deliver updates at lightning speed. However, implementing a robust CI/CD pipeline has historically been challenging, particularly for organizations with legacy applications.

Why Kubernetes for Deployment?

This is where Kubernetes, the leading container orchestration platform, shines. Kubernetes has revolutionized the deployment landscape by providing a scalable and flexible infrastructure for managing containerized applications. When combined with Helm, the package manager for Kubernetes, developers gain a powerful toolkit for simplifying application deployment and management.

In this article, we delve into the intricacies of setting up a fully automated CI/CD pipeline for containerized applications using Jenkins, Helm, and Kubernetes. We'll walk you through the process of configuring your environment and optimizing your pipeline for efficiency, and we'll provide a practical template for customizing your own deployment workflows. By the end of this guide, you'll be equipped with the knowledge and tools necessary to accelerate your software delivery cycles and stay ahead in today's competitive landscape. Let's dive in!

Automating CI/CD Pipeline Setup

This 6-step workflow will easily automate your CI/CD pipeline for quick and easy deployments using Jenkins, Helm, and Kubernetes. In order to get familiar with the Kubernetes environment, I have mapped the traditional Jenkins pipeline to the main steps of my solution. Note: This workflow is also applicable when implementing other tools or for partial implementations.

Setting Up the Environment

Configure the Software Components

Before you create your automated pipeline, you need to set up and configure your software components according to the following recommended configuration:

A Kubernetes cluster: Set up the cluster in your data center or in the cloud.
A Docker registry: Find a solution for hosting a private Docker registry. Consider requirements like privacy, security, latency, and availability when choosing a solution.
A Helm repository: Find a solution for hosting a private Helm repository, with the same considerations of privacy, security, latency, and availability.
Isolated environments: Create different namespaces or clusters for development and staging, and a dedicated, isolated cluster for production.
Jenkins master: Set up the master with a standard Jenkins configuration. If you are not using slaves, the Jenkins master needs to be configured with Docker, kubectl, and Helm.
Jenkins slave(s): It is recommended to run the Jenkins slave(s) in Kubernetes to be closer to the API server, which promotes easier configuration.
Use the Jenkins Kubernetes plugin to spin up the slaves in your Kubernetes clusters.

Prepare Your Applications

Follow these guidelines when preparing your applications:

Package your applications in a Docker image according to Docker best practices.
To run the same Docker container in any of the environments (development, staging, or production), separate the processes and the configurations. For development, create a default configuration. For staging and production, create a non-default configuration using one or more of the following: configuration files that can be mounted into the container during runtime, or environment variables that are passed to the Docker container.

The 6-Step Automated CI/CD Pipeline in Kubernetes in Action

General Assumptions and Guidelines

These steps are aligned with best practices for running Jenkins agent(s). Assign a dedicated agent for building the app and an additional agent for the deployment tasks; this is up to your good judgment. Run the pipeline for every branch. To do so, use the Jenkins Multibranch Pipeline job.

The Steps

1. Get code from Git: The developer pushes code to Git, which triggers a Jenkins build webhook. Jenkins pulls the latest code changes.
2. Run build and unit tests: Jenkins runs the build. The application's Docker image is created during the build. Tests run against a running Docker container.
3. Publish the Docker image and Helm chart: The application's Docker image is pushed to the Docker registry. The Helm chart is packaged and uploaded to the Helm repository.
4. Deploy to development: The application is deployed to the Kubernetes development cluster or namespace using the published Helm chart. Tests run against the deployed application in the Kubernetes development environment.
5. Deploy to staging: The application is deployed to the Kubernetes staging cluster or namespace using the published Helm chart. Tests run against the deployed application in the Kubernetes staging environment.
6. [Optional] Deploy to production: The application is deployed to the production cluster if it meets the defined criteria. Note that you can set this up as a manual approval step. Sanity tests run against the deployed application. If required, you can perform a rollback.

Create Your Own Automated CI/CD Pipeline

Feel free to build a similar implementation using the following sample framework that I have put together just for this purpose:

A Jenkins Docker image for running on Kubernetes
A 6-step CI/CD pipeline for a simple static website application based on the official nginx Docker image

Conclusion

Automating your CI/CD pipeline with Jenkins, Helm, and Kubernetes is not just a trend but a necessity in today's fast-paced software development landscape. By leveraging these powerful tools, you can streamline your deployment process, reduce manual errors, and accelerate your time to market. As you embark on your journey to implement a fully automated pipeline, remember that continuous improvement is key. Regularly evaluate and optimize your workflows to ensure maximum efficiency and reliability. With the right tools and practices in place, you'll be well-equipped to meet the demands of modern software development and stay ahead of the competition.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures.

Thirty years later, I still love being a software engineer. In fact, I've recently read Will Larson's "Staff Engineer: Leadership beyond the management track," which has further ignited my passion for solving complicated problems programmatically. Knowing that employers continue to accommodate the staff, principal, and distinguished job classifications provides a breath of fresh air for technologists who want to thrive as engineers. Unfortunately, with the good sometimes comes the not-so-good. For today's software engineer, the reality isn't quite so ideal, as toil continues to find a way to disrupt productivity on a routine basis. One common example is deploying our artifacts, especially into production environments. It's time to place a higher priority on deployment automation.

The Traditional Deployment Lifecycle

The development lifecycle for a software engineer typically centers around three simple steps: develop, review, and merge. Building upon these steps, the following flowchart illustrates a traditional deployment lifecycle:

Figure 1. Traditional development lifecycle

In Figure 1, a software engineer introduces an update to the underlying source code. Once a merge request is created, the continuous integration (CI) tooling executes unit tests and performs static code analysis. If these steps complete successfully, a second software engineer performs a code review of the changes. If those changes are approved, the original software engineer merges the source code changes into the main branch. At this point, the software engineer starts a deployment to the development environment (DEV), which is handled by the continuous delivery (CD) tooling. In this example, the release candidate is deployed to DEV, and additional tests (like regression tests) are executed. If both steps pass, the software engineer initiates a deployment into the QA environment via the same CD tooling. Next, the software engineer creates a change ticket to release the source code update into the production environment (PROD). Once the approving manager approves the change ticket, the software engineer initiates a deployment into PROD. This step instructs the CD tooling to perform the PROD deployment. Unfortunately, there are several points in the flow where human-based tasks are involved.

Time To Focus on Toil Elimination

Google Site Reliability Engineering's Eric Harvieux defined toil as noted below:

"Toil is the kind of work that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows."

Software engineers should alter their mindset to become cognizant of identifying toil in their roles and responsibilities. Once toil has been acknowledged, tasks should be established to eliminate the items that do not foster productivity. Most agile teams reserve 20% of sprint capacity for backlog tasks, and toil elimination is always a perfect candidate for such work. In Figure 1, the following tasks were handled manually and should be viewed as toil:

Start DEV deployment
Start QA deployment
Create change ticket
Manager approves change ticket
Start PROD deployment

In order to drive toward next-gen deployment lifecycles, it is important to become toil-free.
DevOps Lifecycle and Deployment Automation

While toil elimination is an important aspect of next-gen deployment lifecycles, deployment automation via DevOps is equally important. Using DevOps pipelines, we can automate the deployment flow as noted below (a rough sketch in script form follows the list):

1. Create the release candidate image when the merge-to-main event is completed.
2. Automate the deployment to DEV when a new release candidate is created.
3. Continue to deploy to QA upon successful deployment to DEV.
4. Create the change ticket programmatically once QA deployment is successful.
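As a rough sketch of that flow in script form (every image name and helper script here is hypothetical and would map onto your actual CI/CD tooling):

Shell
# Triggered by the merge-to-main event
docker build -t registry.example.com/app:"$GIT_SHA" .    # create the release candidate image
docker push registry.example.com/app:"$GIT_SHA"
./deploy_env.sh dev && ./run_regression_tests.sh dev     # automated DEV deployment and tests
./deploy_env.sh qa && ./run_regression_tests.sh qa       # continue to QA on success
./create_change_ticket.sh --release "$GIT_SHA"           # change ticket created programmatically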
In implementing the automation noted above, three of the five human-based tasks are eliminated. In order to mitigate the remaining two tasks, the observability platform can be leveraged. Service owners often rely on their observability platform to support and maintain applications running in production. By extending the coverage to include the lower environments (like DEV and QA), it is possible for DevOps pipelines to interact with metrics being emitted during the deployment lifecycle using an open-source tool such as Ansible. This means that as the DevOps pipelines are making changes to an environment, an Ansible playbook can be created to monitor a given set of metrics in order to know whether the deployment is running as expected. If no anomalies or errors surface, the pipeline will continue running. Otherwise, the current task will abort, and the prior state of the deployment will be restored.

As a result, using a collection of metrics defined by the service owner and the observability platform, the need for manager approval becomes diminished. This is because the approval of the merge request is where the change was analyzed. Additionally, the approving-manager step was often added because a better alternative did not exist. With the manager approval step replaced, the deployment to PROD can be triggered by the same DevOps pipeline. In taking this approach, the status of the change ticket can reflect the actual status as tasks are completed by the automation. Example statuses include Created, To Be Reviewed, Approved, Started, In Progress, and Completed (or Completed With Errors).

Next-Gen Deployment Lifecycle

By eliminating toil and introducing DevOps automation via pipelines, a next-gen deployment lifecycle can be created.

Figure 2. Next-gen deployment lifecycle

In Figure 2, the deployment lifecycle becomes much smaller and no longer requires the approving-manager role. Instead, the observability platform is leveraged to monitor the DevOps pipelines. With the next-gen deployment lifecycle, the software engineer performs the merge-to-main step after the merge request has been approved. From this point forward, the remainder of the process is completely automated. If any errors occur during the CD pipeline steps, the pipeline will stop and the prior state will be restored. Compared to Figure 1, all of the existing toil has been completely eliminated, and teams can adopt the mindset that a merge-to-main event is the entry point to the next production release. What's even more exciting is the improvement that teams will see in their commit-to-deploy ratios in adopting this strategy.

Shattering Unjustified Blockers

When considering next-gen deployment lifecycles, three common concerns are often raised:

1. We Need To Let the Business Know Before We Can Deploy

Software engineers should strive to enhance or update services in a manner where business-level approval is not a requirement. The use of feature flags and versioned URIs are examples of how automated releases can be achieved without impacting existing customers. However, it is always a great idea to communicate what features and fixes are planned, along with the expected time frames.

2. The Manager Should Know What Is About To Be Deployed

While this is a fair statement, the approving manager's knowledge of the update should be established during the sprint planning stage (or similar). Once a given set of work begins, the expectation is that the work will be completed and deployed during the given development iteration. Like software engineers, managers should adopt the mindset that merge-to-main ultimately results in a deployment to production.

3. At Least One Person Should Approve Changes Before They Are Pushed to Production

This is a valid statement, and it actually occurs during the merge request stage. In fact, the remaining approval in the next-gen deployment lifecycle is where it is for a very good reason. When one or more approvers review a merge request, they are in the best position, at the best point in time, to review and challenge the work that is being completed. Thereafter, it makes far better sense for the observability platform to monitor the DevOps pipelines for any unexpected issues.

Conclusion

The traditional development lifecycle often includes human-based approvals and an unacceptable amount of toil. This toil not only becomes a source of frustration but also impacts the productivity and mental health of the software engineer over time. Teams should make it a priority to eliminate toil in their roles and responsibilities and drive toward next-gen development lifecycles using DevOps pipelines and integrating with existing observability platforms. Taking this approach will allow teams to adopt a "merge-to-main equals deploy-to-PROD" mindset. In doing so, commit-to-deploy ratios will improve as a nice side effect. Thirty years ago, I found my passion as a software engineer, and 30 years later, I still love being a software engineer. In fact, I am even more excited for the path ahead, free from human-based approvals thanks to DevOps automation and toil elimination. Have a really great day!

Resources:

"Staff Engineer: Leadership beyond the management track" by Will Larson, 2021
"Identifying and Tracking Toil Using SRE Principles" by Eric Harvieux, 2020
"Monitoring as code with Sensu + Ansible" by Jef Spaleta, 2021
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures.
The era of digital transformation has brought about the need for faster, more efficient, and more secure software development processes. Enter DevSecOps: a philosophy that integrates security practices into DevOps processes and aims to embed security into every stage of the development lifecycle, from the writing of code to application deployment in production. The incorporation of DevSecOps can lead to numerous benefits such as early identification of vulnerabilities, cost savings, and faster delivery times.
Shift-Left Principle
The term "shift left" refers to shifting the focus on security checks and controls toward the beginning, or "left," of the software development lifecycle (SDLC). Traditionally, security checks were performed toward the end, or "right," of the SDLC, often leading to vulnerabilities being detected late in the process, when the application is already deployed in production and such vulnerabilities are more expensive and time-consuming to fix. The shift-left principle offers numerous benefits:
Early detection of vulnerabilities – By integrating security checks earlier in the SDLC, vulnerabilities can be detected and addressed sooner. This reduces the risk of security breaches and ensures a more secure product.
Reduced costs – Addressing security issues late in the development process can be costly. By shifting left, these issues are identified and repaired early, reducing the associated costs and resources required.
Improved compliance – With security integrated from the outset, it's easier to ensure compliance with industry regulations and standards.
Enhanced product quality – A product built with security in mind from the beginning is likely to be of higher quality, with fewer bugs and vulnerabilities.
Faster time to market – By reducing the time spent on fixing security issues at later stages, products can be delivered to the market faster.
This integration ensures that testing becomes an intrinsic part of the development organization's DNA, fostering a culture where software is meticulously crafted with quality considerations ingrained from the inception of the project.
Figure 1. Shifting security controls to the left
Key Considerations for DevSecOps Implementation
Implementing DevSecOps successfully requires careful consideration of key factors that contribute to a secure and efficient development pipeline. This integration of DevSecOps into the CI/CD pipeline allows for early detection of security issues, reducing the likelihood of vulnerabilities making their way into production while also allowing developers to quickly fix these issues and learn how to avoid reproducing them in the future.
Automated Security Testing Tools
Because applications come in different forms (e.g., mobile, web, thick client, containerized), you may need to set up different types of controls, and even different types of tooling, to secure each component of your application. Let's review the main types of tests you should use.
Static Application Security Testing
Static application security testing (SAST) tools analyze an application's source code (the code written by your developers) for potential vulnerabilities without executing the program. By scanning the codebase during the development phase, SAST provides developers with insights into security flaws and coding errors. A good SAST tool can detect code smells as well as bad practices that could lead to vulnerabilities such as SQL or path injection, buffer overflows, XSS, and missing input validation.
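As one hedged way to run SAST on every commit, the job below adds the open-source semgrep scanner to a GitLab CI-style pipeline. The image tag and ruleset selection are assumptions, and any comparable SAST tool could be substituted.
YAML
# sketch: a SAST job that fails the pipeline when findings are reported
sast-scan:
  stage: test
  image: semgrep/semgrep:latest        # assumed public scanner image
  script:
    # --config auto selects rules based on the languages detected;
    # --error makes the job exit nonzero when findings exist
    - semgrep scan --config auto --error .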
Software Composition Analysis
Software composition analysis (SCA) is critical for identifying and managing the security risks associated with the open-source components used in software development, which generally come from external packages (e.g., npm packages for JavaScript, NuGet for .NET, Maven for Java, gems for Ruby). Most developers load a package when they need one but never check whether the package has a known vulnerability. An SCA tool will warn you when your application is using a vulnerable package, as well as when a fix already exists but you are not using the fixed version of the dependency.
Dynamic Application Security Testing
Dynamic application security testing (DAST) tools assess applications in their running state, simulating real-world attacks to identify vulnerabilities. By incorporating DAST into the testing process, DevSecOps teams can uncover security weaknesses that may not be apparent during static analysis. A DAST tool acts like a fully automated penetration testing tool that tests for major known vulnerabilities (e.g., the OWASP Top 10) and for many other bad practices such as information leaks and exposure.
Interactive Application Security Testing
Interactive application security testing (IAST) tooling combines a DAST tool and a SAST tool: By allowing access to the source code ("gray-box" testing), it helps the DAST component perform better while also limiting the number of false positives. IAST is highly effective but more challenging to set up because it tends to test each application deeply.
Container Scanner
Containers offer agility and scalability, yet they also introduce unique security challenges, so if your application is containerized, you must perform additional controls. Mainly, scanners will analyze your Dockerfile to check whether the base image contains known vulnerabilities, and they will also look for bad practices such as running as root, using the "latest" tag, or exposing dangerous ports. The following Dockerfile example contains at least three bad practices (marked in the comments), and it may contain a vulnerability in the Node.js base image:
Shell
# Bad practice 1: the mutable "latest" tag is not reproducible, and the base
# image it resolves to may contain known vulnerabilities
FROM node:latest
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
# Bad practice 2: exposing port 22 (SSH) in an application container is dangerous
EXPOSE 3000 22
HEALTHCHECK CMD curl --fail http://localhost:3000 || exit 1
# Bad practice 3: no USER directive appears anywhere, so the container runs as root
CMD ["node","app.js"]
Infrastructure-as-Code Scanner
Infrastructure as Code (IaC) allows organizations to manage and provision infrastructure through code, bringing the benefits of version control and automation to the infrastructure layer. IaC scanning ensures that infrastructure code undergoes rigorous security controls, such as validating configurations, following best practices, scanning for security misconfigurations, and enforcing security policies throughout the infrastructure deployment process.
Secrets Scanner
A secret (e.g., an API key, a password, a connection string for a database) should never be stored in the source code (hard-coded) or in a configuration file within the code repository, because a hacker gaining access to the code could then access production and/or other critical environments. Secrets scanners can detect 150+ types of secrets that developers could leave in the code, and once a secret has been stored in the code (committed), it should be considered compromised and revoked immediately.
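Because a committed secret must be treated as compromised, it also pays to block secrets before they ever reach the repository. Below is a minimal sketch using the open-source gitleaks pre-commit hook; the pinned version is illustrative.
YAML
# .pre-commit-config.yaml (sketch): scan staged changes for secrets on every commit
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.2        # illustrative pin; use a current release
    hooks:
      - id: gitleaks
Run pre-commit install once per clone so the hook fires on every commit; a server-side scan in CI should still back this up, since local hooks can be skipped.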
Criteria to select the right third-party product:
SAST – the number of languages supported (ideally one tool for all code); accuracy of detection; a dashboard to customize analysis with sets of rules
SCA – the number of packages recognized; automated remediation (can create a pull request with the updated package)
DAST – should be able to cover APIs as well as GUI apps; covers more than just the OWASP Top 10
IAST – capable of covering rich applications (e.g., with microservices); offers remediation advice to fix detected issues
Container scanner – an up-to-date CVE database for the base image; can lint a Dockerfile and check best practices
IaC scanner – finds issues in template files; supports the format of your cloud provider (e.g., ARM and Bicep for Azure, CloudFormation for AWS, Deployment Manager for Google Cloud) or Terraform if you are using it
Secrets scanner – the number of credential types recognized; a dashboard that allows security teams to monitor detected secrets and ensure they have been revoked; custom rules to prevent false positives and/or add new formats
Establishing Security Gates in the CI/CD Pipeline
Analysis tools are a good start, but they are useless if they are not part of a global governance model. This governance must be built on well-defined security policies and on mandatory controls that ensure the organization's data and systems are consistently protected against potential threats and vulnerabilities.
Defining and Enforcing Security Policies
Effective security in a CI/CD pipeline begins with the definition of clear and project-specific security policies. These policies should be tailored to the unique requirements and risks associated with each project. Whether it's compliance standards, data protection regulations, or industry-specific security measures (e.g., PCI DSS, HDS, FedRAMP), organizations need to define and enforce policies that align with their security objectives. Once security policies are defined, automation plays a crucial role in their enforcement. Automated tools can scan code, infrastructure configurations, and deployment artifacts to ensure compliance with established security policies. This automation not only accelerates the security validation process but also reduces the likelihood of human error, ensuring consistent and reliable enforcement.
Integration of Security Gates
In the DevSecOps paradigm, the integration of security gates within the CI/CD pipeline is pivotal to ensuring that security measures are an inherent part of the software development lifecycle. If users can bypass your security scans or controls, those controls become useless; you want them to be mandatory. Security gates act as checkpoints throughout the CI/CD pipeline, ensuring that each stage adheres to predefined security standards. By integrating automated security checks at key points, such as code commits, build processes, and deployment stages, organizations can identify and address security issues in a systematic and timely manner. These gated controls can take different forms:
Automated security controls (e.g., SAST, SCA, CredScan)
Manual approval (e.g., code review)
Manual testing (e.g., pen testing by specialized teams)
Performance testing
Quality (e.g., a query that monitors the number of defects opened in your quality tracking tool)
A hedged sketch of such gates follows.
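To sketch how gates can be made mandatory rather than advisory, the GitLab CI-style example below combines an automated gate with a manual approval gate. The trivy invocation and job names are assumptions; in GitLab, a manual job with allow_failure: false blocks the pipeline until a human triggers it.
YAML
# sketch: security gates that cannot be bypassed on the way to deployment
stages: [security-gates, deploy]

dependency-gate:
  stage: security-gates
  script:
    # fail this job (and therefore the pipeline) on high/critical findings
    - trivy fs --exit-code 1 --severity HIGH,CRITICAL .

pen-test-signoff:
  stage: security-gates
  when: manual               # a human must trigger this job...
  allow_failure: false       # ...and deployment cannot proceed until they do
  script:
    - echo "manual security review recorded"

deploy:
  stage: deploy
  script:
    - ./deploy.sh production   # hypothetical deploy helper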
Figure 2. Standard DevSecOps pipeline with gated security controls
Continuous Monitoring and Feedback
In the fast-paced world of software development, the importance of real-time monitoring for security and quick remediation cannot be overstated, because even with gated controls, vulnerabilities can be found after an application has been deployed in production.
Real-Time Monitoring for Security
Real-time monitoring allows teams to proactively detect and respond to security threats as they emerge. By leveraging automated tools and advanced analytics, organizations can continuously monitor their applications, infrastructure, and networks for potential vulnerabilities or suspicious activities. This proactive approach not only enhances security but also minimizes the risk of security breaches and data compromises. It also gives teams comprehensive visibility across the entire technology stack: DevSecOps teams can track and analyze security metrics at every layer, from application code to production environments. This visibility enables quick identification of security gaps and facilitates the implementation of targeted remediation measures, ensuring a robust defense against evolving cyber threats. One hedged example of such a signal appears below.
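What real-time security monitoring looks like depends heavily on the stack. As one hedged example, assuming a Prometheus-based observability platform, a rule like the following can surface a spike in failed authentications; the metric name and threshold are illustrative.
YAML
# sketch: a Prometheus alerting rule that flags a possible credential-stuffing attempt
groups:
  - name: security-signals
    rules:
      - alert: AuthFailureSpike
        # assumes the gateway exports request counts labeled by HTTP status
        expr: sum(rate(http_requests_total{status="401"}[5m])) > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Unusual rate of failed authentications detected"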
Addressing Security Findings and Adapting Processes
Identifying security findings is only the first step; effective DevSecOps requires a proactive approach to addressing and remediating these issues promptly. When security findings are identified, cross-functional teams work together to assess the impact, prioritize remediation tasks, and implement corrective measures. This collaborative effort ensures that security is everyone's responsibility and not just confined to a specific silo within the organization. Adaptability is a core tenet of DevSecOps. Organizations must foster a culture of continuous learning, where security teams regularly update their knowledge, processes, and tools based on evolving threats and industry best practices. This adaptive mindset ensures that security measures remain effective in the face of new challenges and that the DevSecOps pipeline is continually refined for optimal security outcomes.
Conclusion
As software development processes continue to evolve, the need for robust security measures within the CI/CD pipeline becomes more critical. Embracing a DevSecOps approach can help organizations create a secure, efficient, and reliable CI/CD pipeline. By prioritizing security from the get-go and implementing security gates, organizations can save resources, reduce risk, and ultimately deliver better, safer products to the market. Go and make security the foundation of your products!
This is an excerpt from DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. For more: Read the Report
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures.
DevOps — ✓
DevSecOps — ✓
Platform engineering — ?
Is platform engineering just another term for a specialization of DevOps, or is it something different? The truth is probably somewhere in the middle. DevOps and its associated DevXOps flavors have a strong cultural spice that puts the individual teams at the center. Unfortunately, in many places, DevOps has led to new problems like tool proliferation and a lack of harmonization across the enterprise. One could say that in response to the very strict silos and strong centralization of the past, DevOps has pushed the pendulum too far toward federation, and hence toward suboptimization at the team level, to the detriment of the organization. This has been felt most by the larger, more complex enterprises that have to deal with different technology stacks and differing levels of maturity across the organization. Platform engineering has evolved as a response to this enterprise-wide challenge. Platform engineering is not a replacement for DevOps. Instead, platform engineering complements DevOps to address enterprise-wide challenges and provide a tooling platform that makes it easier for individual teams to do the right thing, rather than break things, while maintaining consistency across the organization. IT delivery has increased in complexity over the last few years, given that more applications are moving at a faster pace. This means organizations cannot rely on individuals to control the complexity; they require systemic answers supported by the proper tooling. This is the problem statement that platform engineering has the ambition to address. With this, platform engineers have become crucial for organizations, as their role holds the keys to enabling security and engineering standards.
What Is a Platform Engineer?
The role of the platform engineer has three different parts.
Figure 1. Role of the platform engineer
The most obvious one is the role of a technical architect, as they have to build an engineering platform that connects all tools and enables processes. The second aspect is a community enabler, which is similar to developer relations roles at technical tooling companies. The third part is a product manager; the competing interests and demands from the developer community need to be prioritized against the technical needs of the platform (consider things like security hardening and patching of outdated components).
Platform Engineer as Technical Architect
In organizations with moderate or high complexity within their technology stack, the number of tools required to build, release, and maintain software is at least a dozen, sometimes more. Integrating these tools and enabling the measurement of meaningful metrics is about as tricky as integrating business applications. After all, the challenges are very similar: Different processes need to be aligned, data models need to be transformed to make them usable, and integration points need to be connected to enable the end-to-end process. The systems that run the software side of the business have become similarly challenging. The role of the platform engineer here is to look after the architecture of the tools that run the software side, the goal being to make the tools "disappear" and make the build and release of software appear easy.
Platform Engineer as Community Enabler
Software engineers tend to think their solutions are better than anyone else's. As such, the adoption of engineering platforms is a challenge to overcome. Telling engineers to use a specific tool has often been met with resistance. The platform engineer must be a community enabler who works with the engineers to promote the platform and convince them of its benefits. Communication goes both ways in this part of the role, as the platform engineer must also listen to the problems and challenges of the platform and identify new features that are in high demand. This leads to the third part of the role.
Platform Engineer as Product Manager
Demands on the platform come from the organization's engineers, from other stakeholders like security, and, of course, from the platform engineers themselves. Prioritizing these demands in a meaningful way is a difficult task: You have to find a balance between all the competing interests, and because funding for the platform is often a challenge in itself, speed to value is critical for the platform's ongoing support. The platform engineer requires good negotiation skills to navigate these challenges.
Overview of Platform Engineering Architecture
We spoke about the role of the platform engineer, but what is in the platform that the platform engineer is building and maintaining? It is easiest to think about three layers and one target environment:
The top layer is the developer experience. These are the tools the developer directly engages with; tools that drive the overall workflow, like an Agile lifecycle management tool, a service management tool, and the developer IDE, fit into this layer (a hedged sketch of one such tool follows the figure below).
The bottom layer comprises the infrastructure components that must be combined to build application environments. This can be from the public or private cloud and includes traditional data center technologies.
In the middle is where most of the complexity sits: the software engineering platform. Here, all the processes that are required to create and deliver software are orchestrated: CI/CD, security scanning, environment provisioning, and release management.
Figure 2. Platform structure
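The article does not prescribe specific tools, but as one hedged illustration of the developer experience layer, an internal developer portal such as the open-source Backstage project lets every service register itself with a small descriptor that the platform can then enrich with build, deployment, and ownership information. The names below are illustrative.
YAML
# catalog-info.yaml (sketch): registers a service in a Backstage developer portal
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service            # illustrative service name
  description: Handles payment processing
spec:
  type: service
  lifecycle: production
  owner: team-payments              # illustrative owning team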
Making the Switch: How to Adopt Platform Engineering Across DevOps Teams
So where should you start? One successful adoption pattern focuses on identifying developer journeys to define a minimum viable platform. Which capabilities are required to enable a developer journey to achieve an outcome? Think of a task like provisioning an environment, deploying a new API to production, or running a performance test suite. Each is a valid developer journey with multiple touchpoints that potentially require numerous tools. Once you have created the minimum viable platform for the first set of applications or technologies, adoption follows three dimensions: more applications (once the required capabilities are available), more capabilities, and more maturity, thus increasing the levels of automation and/or performance. Besides building out the platform with a reasonable approach, three other aspects should be addressed early on:
Community engagement
Funding
Measuring outcomes from the platform
Defining a community engagement strategy can be very helpful. This strategy should describe how information will be shared with the developer community, how feature requests can be made, and how the platform's benefits will be communicated. Defining the forums, the communications, and their respective frequency is also helpful. Funding can quickly become a bottleneck, so a funding strategy should be agreed upon early in the platform engineering adoption. This can be one of several strategies, such as dedicated funding, funding for the services provided, or a service tax on all software development. Each has its own benefits and challenges, a discussion of which is beyond the scope of this article. What is essential is to have a sustainable, long-term funding strategy that does not depend on stakeholders' goodwill. Last but not least, the platform engineer needs to be able to show results, which means measuring meaningful metrics that showcase why the company is better off with the platform in place. This is often forgotten or treated as an afterthought. Understanding the organization's priorities and aligning the measurement framework to them can help achieve ongoing support. Unfortunately, this usually requires data alignment across multiple tools and is easiest to accomplish when thought about upfront; it becomes increasingly difficult the longer the data models of individual tools remain isolated.
Conclusion
Platform engineering is still quite new, yet there is already a lot of content on it, which shows how quickly it has gained interest from organizations. There is even a dedicated conference for it, which began in 2022 and has thousands of participants. It's early days, but current indications show that platform engineering has quickly found market adoption and a passionate community. And while this is happening, the role of the platform engineer will steadily increase in importance, which is already showing up in salaries, too. Hopefully, platform engineering will continue to help organizations reduce complexity for their developers while delivering on the DevOps promise: to provide better solutions faster and more securely.
This is an excerpt from DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. For more: Read the Report
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures.
Software supply chains (SSCs) have become a prevalent topic in the software development world, and for good reason. As software development has matured, so has our understanding of the dependencies that can affect the security and the legal standing of our products. We only have to hear names like Log4Shell to remember how crippling a single vulnerability can be. SSCs are a blend of development and operations, and as we start to take SSCs more seriously, it is important that we strike an effective balance of responsibilities in our existing DevOps culture. In this article, we will take a look at how DevOps plays a critical role in SSC management (SSCM) and how we can effectively manage our SSCs with our existing DevOps structures.
The Software Supply Chain
A supply chain is a network of resources that are required to procure a product. In software, this means all the software artifacts that our product depends on and all the artifacts we publish, including:
Binaries
Configurations
Scripts
Licenses
For SSCs, there are generally three parts:
Upstream – the dependencies our product relies on
Build system – the infrastructure used to build our product
Downstream – the artifacts we publish
This conceptual SSC is illustrated below in Figure 1:
Figure 1. SSCs comprise upstream artifacts, a build system, and downstream artifacts
Image source: Software Supply Chain Security, DZone Refcard
The first step in managing an SSC is to answer some fundamental questions, including:
What artifacts, including transitive ones, does our product use?
What artifacts does our product publish?
Is our build infrastructure secure?
Do we verify that our dependencies are trustworthy?
Do we provide verifiably secure forms of trust for our downstream artifacts?
Answering these questions is not a simple task, and it quickly becomes evident that both development and operations play a critical role in managing a supply chain.
Integrating DevOps and Software Supply Chain Management
Managing an SSC can be a difficult task since there are a lot of pieces to consider. Responsibility for these puzzle pieces is also spread across different groups. For example, if a dependency is vulnerable, it will likely fall on development to upgrade or mitigate the dependency, but changing our deployment infrastructure so that it verifies the trustworthiness of our downstream artifacts will likely fall on operations. We must be careful not to recreate the silos that our DevOps culture has succeeded in breaking down, but at the same time, we must recognize that development and operations are not interchangeable. To effectively manage an SSC, we must leverage the strengths of each team and apply them where they are needed, while still maintaining visibility and trust between the teams. Striking this balance requires three parts:
Visibility
Input
Accountability
Visibility
Both development and operations must have access to the supply chain configuration. As the SSC for a product grows, both development and operations will need to see the chain. In the past, the build and deployment configuration would reside in a completely different repository, and possibly require a completely different set of tools, from the code itself. In order to effectively manage an SSC, we want our SSC configurations to reside in the same area as our code.
That way, both development and operations have access to the configurations, and just as importantly, they can both see the changes that the other makes to the configuration. Some tips include:
Keeping repositories tidy and lean so that configurations and scripts are easy to find
Following established conventions so that everyone knows where to find scripts and configuration in any repository
Ensuring Software Bills of Materials (SBOMs) are published where development and operations can see them (a sketch of automating this appears at the end of this section)
Input
Both development and operations must be able to see and approve changes. Stemming from this shared visibility, it is critical that both teams have input into changes that will affect the SSC. It becomes clear early on in SSCM that development and operations have common interests, but sometimes they have competing ideas on how to achieve those goals. For example, if our product has a vulnerability in a downstream artifact that makes our deployment vulnerable, it may not be feasible to simply remove that artifact from deployment since it may contain code that is critical for our product to operate correctly. Instead, development and operations need to work together (and both provide input) to see if the vulnerability can be fixed by development or remediated by operations. Some tips include:
Adding both development and operations personnel as approvers on pull requests (PRs)
Fostering a close working relationship between development and operations personnel so that each can see from the other's perspective
Ensuring that development and operations are continually in sync ("on the same page")
Accountability
Both development and operations must be accountable for the SSC. Accountability by both teams is essential for trust to grow and for DevOps to succeed. A simple "What do you guys think?" can go a long way in building and maintaining the trust and relationship between the development and operations teams. It is equally important that both teams take shared responsibility for SSC outcomes. For example, if a vulnerability makes it into production, development should be asking, "How did we let that vulnerability slip into our code and get deployed?" And operations should be asking, "How did we let a vulnerable artifact make it into a production environment?" This shared accountability ensures that both teams have the mindset of "What can we do to secure our SSC?" rather than pointing the finger at each other. Some tips include:
Fostering an environment of personal responsibility
Ensuring that development and operations are both praised for successes in the SSC
Expecting both development and operations to handle SSC issues that come in (not just one or the other)
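To make the SBOM tip concrete, here is a hedged sketch of a CI job that generates a CycloneDX SBOM with the open-source syft tool and publishes it as a pipeline artifact, where both development and operations can inspect it. The job name and artifact handling are assumptions.
YAML
# sketch: generate an SBOM on every build and publish it with the pipeline artifacts
generate-sbom:
  stage: build
  script:
    # assumes the syft CLI is available on the runner
    - syft . -o cyclonedx-json > sbom.cdx.json
  artifacts:
    paths:
      - sbom.cdx.json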
DevOps Tips for Effective Software Supply Chains
Visibility, input, and accountability are the cornerstones of effective SSCM, especially in a DevOps culture, but sometimes these concepts can be too abstract. Below are three specific tips that both development and operations can use to implement these concepts on a daily basis.
1. Place Code and Configuration in the Same Repository
A key, although sometimes overlooked, aspect of visibility is adding configuration to a place that development and operations both have access to. The ideal location is the repository that contains the code. Not only does this ensure that both development and operations can see the changes that will affect the SSC for our product, but it also ensures that neither team has to go "hunting" for the configurations in another location or use another set of tools. The same tools that development and operations already use to develop and deploy our product are the ones used to manage the SSC.
2. Utilize Pull Requests Whenever Possible
There are usually two ways to make a change to a repository: commit directly to the repository, or create a PR and merge it. Creating a PR has the benefit of adding approvers who can approve or reject the change. This can be a very useful tool when one team is making a change and requests the input of the other. For example, if the development team wants to add a new testing stage to the build pipeline, it is important that the operations team provides their input. This not only creates buy-in from both teams, but it also builds trust. Furthermore, it creates accountability for both teams, since the development team created the change and the operations team approved it (or vice versa).
3. Automate as Much as Possible
Manual steps can inadvertently create artificial barriers between teams. For example, if there is a manual step in the deployment process that only operations staff know how to perform, the visibility of the development team is greatly reduced. Instead, we should automate as many of the development and operations steps as possible. This not only increases the agility of the team but also documents the procedure and allows both development and operations to request changes and scrutinize the process.
Conclusion
SSCM has become a crucial part of software development, and for good reason. An ineffective management process can lead to an ineffective supply chain, which can result in significant financial and reputational damage to both a product and a company. Enacting visibility, input, and accountability for both the development and operations teams not only ensures that our SSCs are secure, but it also provides a crucial opportunity to strengthen our DevOps culture and create trust between our development and operations teams.
This is an excerpt from DZone's 2024 Trend Report, The Modern DevOps Lifecycle: Shifting CI/CD and Application Architectures. For more: Read the Report
Contributors:
Boris Zaikin, Lead Solution Architect, CloudAstro GmbH
Pavan Belagatti, Developer Evangelist, SingleStore
Alireza Chegini, DevOps Architect / Azure Specialist, Coding As Creating
Lipsa Das, Content Strategist & Automation Developer, Spiritwish