Complete your Software Supply Chain with GitLab CI/CD and JFrog
https://jfrog.com/blog/software-supply-chain-with-gitlab-cicd-jfrog/ (27 Feb 2023)

Software is more than building code. Developing software and ensuring quality builds requires managing a complete software supply chain. With the many security threats across the supply chain, managing each and every aspect of the software you deliver to your customers, including the entire process of how it was made, is critical to your organization. This means setting up your software release cycle to include DevOps and security best practices. The challenge is doing this as a continuous flow that is a seamless part of your software delivery.

If you’re already using GitLab as your CI workflow engine, you probably know that’s not where delivery stops. You need to complete the software supply chain feedback loop through continuous security, provenance, software distribution, edge management and more. You can accomplish this by integrating your GitLab processes with the JFrog Platform to deliver a full software supply chain management solution.

JFrog Template Gallery for GitLab CI/CD

The JFrog GitLab templates repository makes it easy to integrate and set up the JFrog Platform into your existing GitLab CI/CD, and achieve a complete software supply chain.

The templates gallery includes ready-to-use templates for popular build-tools such as: .NET, go, Gradle, Maven, npm, NuGet, Pip, Pipenv, and Yarn. Each template provides JFrog functionalities for setting up your security and build integrations.

For example, the “audit” templates provide you with the ability to scan your source code for security vulnerabilities and license compliance issues.

default:
  image: maven:3.8.6-openjdk-11-slim

include:
  - remote: "https://releases.jfrog.io/artifactory/jfrog-cli/gitlab/v2/.setup-jfrog-unix.yml"
  # For Windows agents:
  #- remote: "https://releases.jfrog.io/artifactory/jfrog-cli/gitlab/v2/.setup-jfrog-windows.yml"


jfrog-maven-audit:
  script:
    - !reference [.setup_jfrog, script]

    # Configure JFrog Artifactory repositories
    - jf mvn-config --repo-resolve-releases $ARTIFACTORY_VIRTUAL_RELEASE_REPO --repo-resolve-snapshots $ARTIFACTORY_VIRTUAL_SNAPSHOT_REPO

    # Audit Maven project
    - jf audit

  after_script:
    # Cleanup
    - !reference [.cleanup_jfrog, script]


How to use the templates

Copy the template to your GitLab repository, modify as needed, set the GitLab CI/CD variables (as described in the installation section) and you’re ready to run the pipeline!
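
The repository variables that the templates reference can be defined in your pipeline YAML, while anything sensitive, such as your JFrog access token, belongs in masked CI/CD variables in the GitLab UI (Settings > CI/CD > Variables). Here is a minimal sketch with illustrative repository names; check the template's installation section for the exact variable names it expects:

variables:
  ARTIFACTORY_VIRTUAL_RELEASE_REPO: "libs-release"      # illustrative virtual repo name
  ARTIFACTORY_VIRTUAL_SNAPSHOT_REPO: "libs-snapshot"    # illustrative virtual repo name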

How it works

The include statement, at the beginning of each template, adds an initialization script to your pipeline that enables quick and easy access to many JFrog Platform features. Referencing the .setup_jfrog script in a pipeline job does the following:

  • Installs JFrog CLI
  • Configures JFrog CLI to work with the JFrog Platform
  • Sets the build name and build number values to allow publishing build-info to Artifactory
  • Optionally replaces the default Docker Registry with an Artifactory Docker Registry
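
Building on those pieces, a build job can follow the same pattern as the audit job shown earlier. The following is a hedged sketch rather than an official template; the job name is illustrative, and the repository variables are the same ones referenced above:

jfrog-maven-build:
  script:
    - !reference [.setup_jfrog, script]

    # Configure JFrog Artifactory repositories
    - jf mvn-config --repo-resolve-releases $ARTIFACTORY_VIRTUAL_RELEASE_REPO --repo-resolve-snapshots $ARTIFACTORY_VIRTUAL_SNAPSHOT_REPO

    # Build with the JFrog CLI so dependencies resolve through Artifactory and build-info is collected
    - jf mvn clean install

    # Publish the build-info, using the build name and number set by the setup script
    - jf rt build-publish

  after_script:
    # Cleanup
    - !reference [.cleanup_jfrog, script]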

Read a customer’s own account of how they seamlessly integrated the JFrog CLI with their GitLab CI.

Discover the JFrog Platform

Whether you’re working with containers, packages, libraries, or any other type of binary file, JFrog Artifactory has you covered. As an industry standard, Artifactory serves as the backbone of your DevOps environment, providing software developer teams with a centralized place to store, manage and distribute binary artifacts.

Fully integrated with popular CI/CD tools, including GitLab CI, Artifactory makes it easy to manage all your binary artifacts in a single, centralized, universal repository. This eliminates the need to use multiple tools for different types of binary files, streamlines your software release process, and reduces the risk of errors. Critically, Artifactory is built for scale.

When it comes to securely managing the lifecycle of software artifacts, you need a true supply chain management solution that focuses on the asset that runs in production – the software binary.

Unique JFrog Platform capabilities include proxying and caching 3rd party packages for consistent, reliable access even across remote locations, as well as enterprise grade support for over 30 package types, multi-site support, continuous security monitoring focused both on source code and binaries, prioritization of long lists of vulnerabilities, actionable policies, and a guaranteed uptime SLA in the cloud that you can rely on.

More Developer Tools

We’re excited to make the JFrog Templates Gallery and the following open source tools available for developers to use and get started with:

  • JFrog Frogbot – Automated security vulnerability scanning of pull requests in Git.
  • JFrog IDE integrations – Developer plugins and extensions, including VS Code, IntelliJ IDEA, Eclipse and more, enabling developers to discover and remediate security vulnerabilities early in the development stage.
  • JFrog Build Integrations – Plugins and extensions for CI systems, including Jenkins, GitHub Actions, Azure DevOps, Bamboo and more, enabling developers to integrate their builds with the JFrog Platform.

As always we’re happy to help! The JFrog GitLab templates repository is open source; your contribution is always welcome. Submit your pull requests and engage with us by opening issues.

Proceed With Care: How to Use Approval Gates in Pipelines
https://jfrog.com/blog/proceed-with-care-how-to-use-approval-gates-in-pipelines/ (7 Oct 2021)

While DevOps automation aims to eliminate most human intervention in the CI/CD DevOps pipeline, you can’t always cut people completely out of the process. There are still times when you’ll want an expert, hands-on review to assure that everything is as it should be before allowing your pipeline to proceed further.

That’s why JFrog Pipelines empowers DevOps teams to include approval gates in their CI/CD pipelines, to give key personnel the power to prevent mistakes from cascading into production binaries.

Use Cases for Approval Gates

When are approval gates necessary? That depends on your organization and its particular concerns. Here are some possible requirements where approval gates in Pipelines might be used: 

  • A security team is required to audit and approve applications before they are released to production by the SRE team. 
  • A build must be approved by the product, security, and quality assurance teams before being deployed into production.
  • A production engineer must verify that all the dependent components of an application are ready/compatible before promotion or deployment. 
  • Security and QA teams must validate updated artifacts before they are released for consumption by other development teams. 
  • A quality assurance team needs the development team to approve artifacts before running tests. 

Creating Approval Gates

You can configure a manual approval gate for any step in your pipeline configuration YAML. 

When a step has an approval gate configured, it will suspend execution and set its status to Pending. The required user(s) must then manually approve (or reject) for the step to complete (or cancel).

If the step is cancelled, then Pipelines will treat it as failed and no subsequent steps will be executed.

Simple Approval Gate 

An approval gate can be specified in the configuration section of any Pipelines step, using the requiresApproval key.

In its simplest usage, you can just set the value of requiresApproval to true.

steps:
      - name: approvalGatesStep
        type: Bash
        configuration:
          requiresApproval: true

In this mode, any user with execute permissions for the pipeline can approve or reject the step. If no action is taken within 24 hours, the step will automatically be cancelled and no subsequent steps will be executed.

Complex Approval Gate

In a simple approval gate, no user is notified outside of the Pipelines UI that approval is required; the user must watch the pipeline execute to see the step’s pending status.

This isn’t very practical for most real-world circumstances. Approvers need to be notified, through the collaboration tools that they use every day, that their action is required.

It’s also likely that approvals will need to be made by a specific person, or by multiple people.

For these reasons, the requiresApproval key can be configured with any or all of these additional properties:

  • approvers – List of users who can approve or reject the step.
  • notifications – List of notifications sent through SMTP and/or Slack when the step enters Pending status. 
  • timeoutSeconds – Maximum time the step can hold Pending status before being cancelled.

For example:

steps:
      - name: npm_publish_step
        type: npmPublish
        configuration:
          requiresApproval:
            approvers:
              - mtwain                        # Artifactory user
              - jcheever                      # Artifactory user
            notifications:
              - integrationName: mySlack_Int  # Slack integration
            timeoutSeconds: 43200             # 12 hours

Running Approval Gates

So you have your approval gate set up in a step. What happens when the pipeline runs?

When our pipeline executes, the approval gate configured in our npmPublish step will suspend execution, enter Pending status, and send notification (in this case, to Slack).

JFrog Pipelines approval gate notification in Slack

The link in the notification will display the Pipeline History view for the current run, which reflects the currently Pending status of the step.

JFrog Pipelines approval gate pending status

When you view the log for the Pending step, an Approve/Reject button is available.

JFrog Pipelines approval gate approve or reject

Clicking Approve/Reject reveals options to approve or reject, along with an opportunity to register a comment about the action.

JFrog Pipelines approval gate

When you Approve, you are asked to confirm.

JFrog Pipelines approval gate confirmation

Stop, Look, Click

An approval gate in Pipelines empowers DevOps engineers to use the expertise of the people who make the software development lifecycle work an integral part of their CI/CD pipelines. Through automated notifications and an easy-to-follow UI, you can make sure that any needed manual oversight gets done by those authorized to perform it.

This feature is only the latest way that JFrog Pipelines helps your organization to practice and enforce CI/CD the way that you have decided you need to. Whether it’s creating your own custom extensions or templates, or out-of-the-box integrations with the many tools that you use, Pipelines enables you to build the working patterns that suit you best.

Have you tried Pipelines CI/CD yet? If not, start for free!

GitLab vs JFrog: Who Has the Right Stuff?
https://jfrog.com/blog/gitlab-vs-jfrog-who-has-the-right-stuff-2/ (18 Oct 2021)

Like the historic space race, the competition to plant the flag of DevOps is blasting off, which makes it an exciting moment for the community. According to market intelligence firm IDC, global businesses will invest $6.8 trillion in digital transformation by 2023. Yet research also suggests that 70 percent of them will fail to meet their goals due to inefficient collaboration and poor cross-team alignment.

As many of today’s F1000 companies and development organizations tell us, scale is the next great challenge for DevOps – and you can’t scale if your people and tools don’t work well together. As all code-repo companies suggest, and GitLab themselves note: “[organizations] want faster cycle time, improved efficiency, and reduced risk.” This is increasingly apparent in a modern world of building and releasing software sometimes multiple times daily, requiring scalable, secure, flexible tools across a company’s DevOps infrastructure to meet demands.

This means that all aspects of the software lifecycle must be addressed in order to meet the demands of today and tomorrow. Code management systems, package management, continuous security and software distribution must be integrated, highly scalable and work completely in sync to drive transformation. Here at JFrog, many of our customers are accomplishing exactly that: marrying their chosen tools for VCS (like GitHub, GitLab, Atlassian and others) with the binary management and software delivery expertise the JFrog Platform provides.

But with both JFrog and GitLab sporting the “DevOps Platform” label, let’s take a look at how you can get the DevOps job done to rapidly, securely, and fearlessly bring software to the market.

Platform or Portfolio

GitLab makes an excellent VCS — as even some of our developers at JFrog who have used it will attest. GitLab’s expertise at the leftmost side of the software development lifecycle (SDLC) has helped them produce innovative features — such as codebase projects — to support design and coding.

Through a uniquely transparent, open-source development model, GitLab has rapidly expanded its subscriptions to include tools that help automate some rightward tasks of the SDLC such as CI/CD, package management, static testing, dynamic testing, and more. 

Maturity?

While impressive in scope, most of GitLab’s offerings are currently far from mature. Of the more than 40 rightward items not related to source code or CI/CD in GitLab’s maturity chart, only 3 are “complete.” Nearly half of those remaining are still “minimal.”

GitLab Maturity Status as of September, 2021 (source: GitLab)

Unified?

The GitLab suite provides a common user interface, and they stress as a core value that no feature is considered complete unless it can be controlled through the UI.

While this provides the appearance of a unified platform, it obscures the reality that many of the GitLab tools are not genuinely integrated with each other. Behind the UX curtain, few of the pieces interoperate smoothly — often not even sharing a common metadata model. Even as DevOps seeks to break down silos, many of GitLab’s DevOps and DevSecOps solutions remain siloed from each other.

Vision?

Successful DevOps doesn’t come from solving individual problems one at a time. Success comes from applying the right methodologies that have been proven to work by hundreds of other organizations. Your DevOps tools need to enable the best practices that help you accelerate quality software releases and deliver them safely to customers’ desktops and devices.

While GitLab seeks to provide end-to-end tools for DevOps, they do not offer an end-to-end method for DevOps success.

When it comes to DevOps, a portfolio is not a platform. And our long experience in DevOps has shown that digital transformation success lies in managing your binaries, not just your code.

The Right Stuff: Development vs Delivery

Our thousands of successful customers have proved one truth: Achieving DevOps success is all about your binaries. 

Software development is about source code, and quality code comes from empowering smart people who know how to write it. 

Software delivery is about binaries — ensuring quality builds and getting them swiftly into your customers’ devices. Quality binaries come from smart systems that know how to manage and distribute them.

As the vendor of a leading VCS, GitLab is expert at the necessary task of enabling software developers to create and manage source code. But the aim of DevOps is to achieve delivery of high-quality releases from developers at top speeds.

A binaries-centric approach — JFrog’s expertise — is the only way to successfully automate a modern organization’s software lifecycle that ensures trust and speed of delivery.

To get that done, you’ll need to be able to achieve these critical milestones:

Release Faster

Accelerating releases requires an accelerated binaries workflow that creates trust. 

                                           JFrog    GitLab
Package Types                              30+      11+
Proxy Repository Caching                   Yes      Docker Hub only
Automation: Natively integrated CI/CD      Yes      Yes
Build Metadata for SBOM and Traceability   Yes      No
Build Promotion for Release Staging        Yes      No
Advanced Query Language                    Yes      No
Distribution Solution                      Yes      No


Package Types

JFrog internal data shows that, on average, an Artifactory installation maintains repositories for at least 7 distinct package types. Enterprise-level users demand even more, with half of those using 12 or more package types.

By these measures, GitLab’s current local repository support for 11 package types offers a solid start — although far fewer than the over 30 package types natively supported by Artifactory. Like Artifactory, GitLab also provides a Generic repo type, enabling users to centrally manage additional file types that are part of their releases – such as images, zip files, docs, and more.

Language or OS   JFrog Artifactory (partial)   GitLab Package Registry (entire)
Java             Apache Maven                  Apache Maven
Java             Gradle                        Gradle
Javascript       npm/Yarn                      npm/Yarn
Javascript       Bower                         –
Python           pip/twine                     pip/twine
.NET             NuGet                         NuGet
Golang           Go modules                    Go modules
Docker           Docker                        Docker
Helm             Helm charts                   –*
C/C++            Conan                         Conan
iOS              CocoaPods                     –
PHP              PHP Composer                  PHP Composer
Ruby             RubyGems                      RubyGems
Rust             Cargo                         –
Linux, Yum       RPM                           –
GNU Linux        Debian                        –
Hashicorp        Terraform (soon)              Terraform

* With Helm v3, GitLab users can use the Container Registry to store Helm Charts. However, due to the way metadata is passed and stored by Docker, it is not possible for GitLab to parse this data and meet performance standards.

Proxy Repository Caching

Over 92 percent of applications use open source code from public repositories, and that code is commonly estimated to make up 60-90% of each app.

Artifactory’s remote repositories enable you to cache those open source packages in a proxy repo, enforcing version immutability, providing local speed, and insulating you from any connection outage. You can logically combine these with local Artifactory repos as a single virtual repository, providing convenience to developers and governance by administrators over what can be accessed.
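
As a small illustration, a developer can point npm at such a virtual repository with the JFrog CLI, so installs resolve through Artifactory's cache instead of going straight to the public registry (the repository name here is an assumption):

jf npm-config --repo-resolve npm-virtual   # resolve packages through the virtual repository
jf npm install                             # installs now go through Artifactory's proxy cache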

GitLab’s Dependency Proxy provides a similar function — but it’s currently available only for the Container Registry, and only proxies Docker Hub.

Build Metadata for SBOM and Traceability

Every build of an application is composed of many, many artifacts — the building blocks of software — from your packages and configuration files to the binaries that are the deployable runtime components of your application. 

When you achieve the DevOps goal of multiple builds per day, it adds up quickly. Our enterprise customers each maintain an average 20 million unique artifacts, adding 130% more each year.

Artifactory stores extended metadata — what we call “build info” — with every build you make, from any build tool, linking to the package metadata of your open source and proprietary dependencies along with build artifacts and environment settings. With detailed build info, you can trace every deployable binary back to where it came from and out to every place it’s been staged for service.
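
As a rough sketch of how that build-info is typically assembled with the JFrog CLI (the build name, number, and project are placeholders, not a prescribed workflow):

jf rt build-collect-env my-app 42      # capture the build environment variables
jf rt build-add-git my-app 42          # record the VCS commit and repository URL
jf mvn clean install --build-name=my-app --build-number=42
jf rt build-publish my-app 42          # publish the aggregated build-info to Artifactory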

Your build info is also the basis of a Software Bill of Materials (SBOM) — a machine-readable inventory detailing all the items included in an application and their origin — for every release put into production or delivered to a customer.  With a growing number of governments and regulated industries requiring an SBOM to help combat cyberattacks, Artifactory is your rapid turnkey solution for compliance.

GitLab has no comparable function to Artifactory’s build info. GitLab CI/CD offers some analog to a “build” which they call deployments. This enables you to store a record of the build event with your source code, along with where it was (or, for manual deployments, will be) deployed. But this data describes the event, not the deployed binary, and does not include any of the metadata required to produce an SBOM or replicate a deterministic build. Moreover, by being an exclusive feature of GitLab CI/CD, this tracking mechanism cannot be extended to your legacy CI/CD pipelines, such as Jenkins.

Instead of metadata, GitLab’s dependency list function works only with its dependency scanning tool to produce dependency data by parsing the source in the GitLab VCS. Consequently, GitLab cannot produce an absolutely reliable SBOM from your deployable binary — they can only reconstruct what is likely to be in your binary as deduced from the source code. 

Build Promotion and Release Staging

Every new software version must pass several quality gates in an SDLC. But what passes through these gates makes the difference between a speedy or a plodding path to release.

Artifactory enables build promotion, in which an immutable binary runs through the entire SDLC. With a repository for each SDLC stage, a build with its metadata can be promoted simply by shifting it to the next repo in sequence.

In this “build once and promote” method, the same build is evaluated at every stage, assuring absolute consistency through the DevOps pipeline. Once free of having to perform their own deterministic builds, teams can apply the hours they recover to conducting more exhaustive tests and delivering feedback more quickly.

Build promotion is built into Artifactory’s DNA — a simple API action can promote a build from one repo to the next, and Artifactory’s checksum-based storage guarantees that a given build and its metadata are identical in all repos where it’s stored.
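
For example, promoting a build between repositories is a one-line JFrog CLI call (an equivalent REST endpoint exists as well); the build name, number, and target repository below are illustrative:

jf rt build-promote my-app 42 libs-prod-local   # promote the build and its metadata to the next repo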

While GitLab provides repos for generic artifacts like release binaries and container registries for Docker images, GitLab simply doesn’t provide built-in support for build promotion. While GitLab CI/CD does support recording deployments to defined environments in your GitLab projects, this feature is unavailable to use with other automation tools.

Distribution Solution

From JFrog Cloud repositories, you can distribute high volumes of software across multiple locations through Amazon’s CloudFront software distribution CDN solution. JFrog Distribution provides enterprise-ready solutions for the secure delivery of validated applications and updates to the servers, desktops, and devices where they can be put to use. From the binaries in your Artifactory repositories, create signed release bundles for delivery to Artifactory edge nodes.

GitLab offers no distribution solution at all, falling far short of their claim to be an end-to-end DevOps platform.

Connect and Automate

To be versatile, your DevOps platform needs to be able to support all the technologies that your development teams use, to merge seamlessly with the tools you need.

                                              JFrog   GitLab
CI/CD Automation
  Pre-built (native) steps & extensions       Yes     No
  Templates                                   Yes     Yes
  Simple CI/CD Creation                       Yes     Yes
  Conditional Execution & Complex Pipelines   Yes     Yes
  Pipeline Editor                             Yes     No
  Signed Pipelines                            Yes     No
  Linux Build                                 Yes     Yes
  Windows Server Build                        Yes     Partial
  Auto-Scaling Build Infrastructure Support   3       2
REST APIs                                     Yes     Yes

CI/CD Automation

Much like their VCS offering for source code, GitLab’s CI/CD is mature and widely used. GitLab pipelines are created through descriptions in YAML, where developers can define pipeline stages and actions. 

Similarly, JFrog Pipelines also uses YAML to define pipelines. But there are some important differences:

  • Native Steps
    While GitLab’s CI/CD is strongly integrated with GitLab source code repositories, each tool for build, test, and deployment must be invoked through its own command line interface in shell scripts.

    JFrog Pipelines is naturally integrated with all mission-critical parts of the JFrog Platform — not just Artifactory, but also Xray and Distribution. Pipelines’ native steps reduce out-of-the-box effort by enabling many common actions (such as Docker image builds, security scans, or release bundling) through pre-built steps defined in declarative YAML.

    Pipelines native steps can be mixed with general-purpose steps to execute shell commands, or extended by creating your own Pipelines Extensions that can hide low-level complexity and be shared across a team, department, or organization.
  • Simple CI/CD Creation
    GitLab’s Auto DevOps feature helps to get started with CI by automatically setting up pipelines and integrations for you based on your source code. JFrog Pipelines reduces complexity significantly through declarative native steps and custom extensions, so you can focus on what you want done rather than how to get it done. Additionally, you can jumpstart creation of CI/CD with Pipelines Templates to quickly create pipelines for common operations. We provide several built-in, and you can add your own to help ensure your teams follow best CI/CD practices.
  • Complex Pipelines
    GitLab’s Parent-Child pipelines enable pipelines to behave more dynamically, automatically choosing to start (or not start) sub-pipelines based on the outcome of another.

    Similarly, in Pipelines any pipeline or step can be configured to trigger on a variety of events, including the successful completion of another pipeline’s output, or a personnel-controlled approval gate. Pipelines’ Graph view provides a combined, real-time status view of all interconnected pipelines and resources — your “Pipeline of Pipelines” — to understand dependencies between them.
  • Signed Pipelines
    As part of the unified JFrog Platform, Pipelines creates a blockchain-like, cryptographically-signed ledger for each run of a pipeline. It also adds enhanced metadata to builds and release bundles including a link to the run that generated it. Customers can then block downstream actions such as build promotion, release bundle creation, deployments, etc … if the builds/release bundles cannot be validated as being created by the linked run. This “zero trust” approach provides an additional layer of security by guaranteeing the authenticity of packages and making pipelines tamper-proof.

APIs, CLI, and Integrations

GitLab and JFrog each provide REST APIs to enable integration. Review them both — we think you’ll find JFrog offers a more comprehensive set for versatile automation. 

The JFrog CLI offers a ready way to access your Artifactory repositories from the command line or to automate from a shell script. GitLab’s subscriptions provide no CLI at all.

GitLab CI/CD is tightly coupled with GitLab source control repositories. While the JFrog Platform provides end-to-end DevOps with Pipelines, you can also use Artifactory with CI/CD tools you might prefer — whether that’s Jenkins, CircleCI, or even GitLab CI/CD. JFrog provides several integrations out-of-the-box, or choose from a large family of technology partner integrations.

Protect Your Business

Practicing strong DevSecOps means having the best risk data and being able to interpret results to maintain safety and regulatory compliance. It also means enabling everyone along the software production process to be aware of security. 

                                               JFrog   GitLab
Software Composition Analysis (SCA) Scanning   Yes     Limited
Container Scanning                             Yes     Limited
License Compliance                             Yes     Yes
Automated Policy Enforcement                   Yes     No
Impact Analysis                                Yes     No
IDE Integrations to shift-left security        Yes     No

SCA Scanning

The GitLab Dependency Scanning tool is tightly integrated — and can only be used — with GitLab source control repositories and GitLab CI/CD to identify vulnerable open-source dependency references in source code. It scans source code from within a CI/CD pipeline; information about vulnerabilities found and their severity is reported in the merge request, so a developer can act to remediate. GitLab does not scan packages in the GitLab Package Registry.

JFrog Xray performs deep-recursive SCA scans on your packages and binaries, leveraging your Artifactory metadata to identify the open-source components directly from your builds. This provides greater certainty, ongoing vigilance, and the ability to flag zero-day vulnerabilities in binaries already deployed.

To help shift-left security, developers can also invoke Xray to scan dependencies in the source code in their local directory, enabling them to remediate even before committing code to a branch.
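
In practice that can be as simple as running the JFrog CLI's audit command from the project directory, assuming the CLI is already configured against your JFrog Platform instance:

cd my-project      # illustrative project path
jf audit           # scan the project's dependencies for vulnerabilities and license issues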

Container Scanning

GitLab container scanning uses the open source Trivy engine as of GitLab 14.0. Container scanning is currently not integrated into the GitLab Container Registry flow – Docker images can only be scanned through a separate job in GitLab CI/CD. Images pushed to the registry are not automatically scanned.

JFrog Xray can be configured to scan Docker images (including OCI-compliant images and Google Distroless images) in a registry continuously for both vulnerabilities and license policy violations, just as it can any of the 18 package types Xray supports. 

Automated Enforcement Policies

GitLab’s Dependency Scanning, Container Scanning, License Compliance and other security tools all provide reports that must be read, evaluated, and acted upon by a human operator — a set of high-friction manual steps that inhibit speedy software delivery.

The JFrog Xray SCA tool empowers security teams to configure rules and policies for vulnerability severity and license conflicts, and to set up automated watches that detect violations and enforce those policies after a scan. Through JFrog partner integrations, you can also report violations through analytic tools such as Splunk or DataDog, or alert teams through Slack* or MS Teams*.

*Currently in Beta

Impact Analysis

JFrog Xray reveals the risk impact to your entire binaries ecosystem through impact graphs that show the full scope of any vulnerability or policy violation throughout your inventory of builds. GitLab provides no comparable facility.

IDE Integrations

With any of the Xray plugins for popular IDEs, you can shift left security awareness to developers, flagging and remediating vulnerabilities in OSS dependencies even before code commit.

As noted above, GitLab Dependency Scanning is driven through GitLab CI/CD. No IDE plugins for GitLab are currently available.

Scale to Infinity

Your business can’t stand still; it must be able to seize every opportunity to grow globally. There are no limits and, no matter where you’re starting from, your tools have to keep up. Your critical path operations must fit your needs today, but also enable business agility to meet your needs of tomorrow without interruption.

                               JFrog   GitLab
Expandable High Availability   Yes     Yes
Regional Geo-Replication       Yes     Limited
Multicloud Offering            Yes     No
Hybrid Solution                Yes     Limited
Unlimited number of users      Yes     No
Private Distribution Network   Yes     No


Expandable High Availability

Both GitLab and JFrog support high availability (HA, also known as “clustered”) deployments using multiple, load-balanced instances to help assure swift response time while enabling failover protection and zero downtime when performing upgrades. 

Regional Geo-Replication

GitLab Geo supports limited site replication through unidirectional mirroring from a single primary GitLab site to read-only secondary sites. We consider this inadequate to support the way that global development teams collaborate.

The JFrog Platform supports a variety of replication topologies, most readily through Federated Repositories, an innovative bidirectional mirroring technology that empowers geographically distributed teams to collectively produce and share artifacts with their metadata. Within each federated repository, changes made to artifacts or to the repository configuration on one site are automatically synchronized to other member sites (up to 10).

Multicloud and Hybrid

GitLab hosts all cloud (SaaS) services on a single cloud platform, GCP in the U.S., making multi-cloud redundancy impossible. Although GitLab has both a SaaS service and a self-managed installation option, these are separate and users cannot work between and across them.

JFrog Cloud (SaaS) is available for managed hosting on all major cloud providers (AWS, GCP, and Azure), empowering you to choose your cloud platform or maintain more than one for a multi-cloud strategy. All subscription levels of JFrog Platform (Pro, Team, Enterprise, Enterprise+) are available for self-hosting or on-premises and can be combined with any JFrog Cloud account to build an enterprise-ready hybrid system through repository geo-replication.

Unlimited Users

When your license fees are by the number of user accounts, that can significantly add to the expense of an expansion or acquisition. With JFrog’s unlimited user licensing, that’s not something you’ll need to ever think about; add as many user accounts as your installation can practically support with no added cost.

Private Distribution Network

You can distribute releases from your Artifactory single-source-of-truth to Artifactory edge nodes, which can be connected into a multi-node topology to form a secure, high-speed, private distribution network (PDN) that spans the globe. Through JFrog’s innovative peer-to-peer networking technology, you can overcome poor network connectivity, latency, and cross-border obstacles to deliver large, signed release bundles safely at top speed worldwide.

As noted above, GitLab provides no delivery or distribution solution.

Once You Leap Forward, You Won’t Go Back

The strength of GitLab as a VCS and project manager is clear from the many organizations who rely on them for source code management. In fact, many of JFrog’s customers do, using JFrog’s universal CLI and APIs to integrate with the CI/CD of their choice, whether that’s GitLab, Jenkins, or another tool. Of the customers listed in GitLab’s recent Form S-1 filing, over half are JFrog customers as well!

With their broad portfolio of tools, GitLab’s offering asks you to replace a best-of-breed toolchain with one from a single vendor. But unless that vendor can offer you a better chance to succeed, that’s a risky bargain.

At JFrog, our mission is simple: To make every software creator successful by providing the best solution to deliver releases fast, securely, and continuously. 

GitLab’s ambitions are certainly large, according to their mission statement. “Our BHAG over the next 30 years is to become the most popular collaboration tool for knowledge workers in any industry.”

Why wait three decades? The JFrog DevOps Platform, powered by the industry’s most popular tool for binaries management, is available today and you can start for free!

Six Simple Steps to Your First CI/CD DevOps Pipeline in JFrog Pipelines
https://jfrog.com/blog/six-simple-steps-to-your-first-ci-cd-devops-pipeline-in-jfrog-pipelines/ (30 Mar 2021)

See how easy it is to get started, and start working with a simple “Hello World” DevOps pipeline. Along the way, you’ll learn some fundamental Pipelines concepts.

Here’s what you’ll need:

  1. A JFrog Cloud account. If you don’t have one, start for free!
  2. A GitHub account for your personal repositories

Step 1 – LOGIN TO THE JFROG PLATFORM

Log in to your JFrog Cloud account with the JFrog Platform credentials provided to you by email.

The JFrog DevOps Platform in your cloud account is automatically set up with a default Dynamic Node Pool on your cloud provider, so you already have build nodes available for your pipeline to execute in!

JFrog Pipelines CI/CD Node Pools

Step 2 – ADD A GITHUB INTEGRATION 

For Pipelines to connect to other services, such as GitHub, Artifactory, or Kubernetes, you must add Pipelines integrations for those services. You must provide the integration with the URL endpoint for those services and credentials for a user account on that service, along with any other parameters.

Did You Know?

Pipelines stores all of your credentials for an integration in an encrypted vault to keep your secrets secure.

To add an integration for your GitHub account, from the Application tab click on Pipelines | Integrations, then click Add an Integration.

Use the following information for the integration:

Name  my_github
Integration Type GitHub
url  No change (this is hardcoded to https://api.github.com)
Token  Your GitHub account personal access token that has repo and admin OAuth scopes.


Click the Create button.

JFrog Pipelines CI/CD Integrations

Step 3 – FORK THE PIPELINES SIMPLE EXAMPLE GITHUB REPO

Fork the simple “Hello World” example for JFrog Pipelines to your own GitHub repository.

This repo contains a simple example of a pre-defined pipeline described in the YAML file named jfrog-pipelines-hello-world.yml.

Example CI/CD JFrog Pipelines DSL

Step 4 – UPDATE THE VALUES.YML FILE

Edit the values.yml file in your forked repo, and change the path value from jfrog/jfrog-pipelines-simple-example to <your_github>/jfrog-pipelines-simple-example.

JFrog Pipelines Example Values File

Commit your changes.

Step 5 – ADD PIPELINE SOURCE 

To add the pipeline source, from the Application tab click on Pipelines | Pipeline Sources, then click Add a Pipeline Source and select From YAML.

Use the following settings for the pipeline source: 

SCM Provider Integration Select my_github 
Repository Full Name Select the jfrog-pipelines-simple-example repository
Branch  Select master
Pipeline Config File Filter jfrog-pipelines-hello-world.yml 


Wait for the new Pipeline Source to sync.

Step 6 – MANUALLY TRIGGER THE PIPELINE 

From the Application tab click on Pipelines | My Pipelines menu item to see your newly created pipeline, then click on its name: my_first_pipeline.

JFrog Pipelines CI/CD My Pipelines

The resulting Pipeline History view displays a diagram of your pipeline and its steps. Note there is no information available in the Runs history since this pipeline has not yet been triggered. 

The example pipeline is configured to trigger whenever there is any new commit to the GitHub repo. For this demo, you can manually trigger and run your pipeline:

  1. Click the p1_s1 step in the pipeline diagram. This will display the information box for that step.
  2. In the step’s information box, click the Trigger this step button. 

You will see your pipeline diagram change as each step changes state. As the pipeline executes, its status is shown in the Runs history until it completes with Success.

JFrog Pipelines CI/CD Run Execution

That’s it! You’ve now loaded and run your first example workflow in Pipelines!

You can examine the results of each of your steps in the Pipeline Run Logs.

Now that you’ve experienced how easy it is, explore what else you can do with JFrog Pipelines through the Pipelines Developer Guide Quickstart in the JFrog Platform documentation.

 

You might also enjoy the eBook “6 Obstacles to Successful DevOps” 

My Build, My Way | JFrog Pipelines Extensions
https://jfrog.com/blog/my-build-my-way-jfrog-pipelines-extensions/ (17 Mar 2021)

TL;DR

Once my new projects are almost ready to share with the team and I can build and test them locally, I’ll need a CI automation tool to test and deploy each release. As a Principal Consultant at Declarative Systems, I’ve been recommending JFrog Artifactory to clients looking to bullet-proof their deployments since 2016. After considering different CI solutions, we found that JFrog Pipelines has the best integration with Artifactory, which made choosing this platform a no-brainer. From day one, it also addresses security and license compliance goals that stretch years into the future.

Designing your CI/CD Environment

One of the most important decisions you will need to make in designing your CI/CD pipeline is how to match your Continuous Integration environment to your local one. JFrog Pipelines is a DevOps CI/CD automation solution for building, testing and deploying software as part of your CI/CD pipeline. Pipelines includes native steps to get up and running quickly. For example, you could use DockerBuild to build a Docker image and then push to a registry with DockerPush:


Basic Docker Pipeline in JFrog Pipelines
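
As a rough sketch of what that basic pipeline can look like in the Pipelines YAML DSL (the resource, integration, and image names are illustrative, and configuration keys may differ slightly between Pipelines versions, so treat this as an outline rather than a drop-in file):

resources:
  - name: app_gitRepo
    type: GitRepo
    configuration:
      gitProvider: my_github          # a GitHub integration
      path: myorg/my-app              # illustrative repository
      branches:
        include: master

pipelines:
  - name: docker_build_push
    steps:
      - name: build_image
        type: DockerBuild
        configuration:
          dockerFileLocation: .
          dockerFileName: Dockerfile
          dockerImageName: myorg.jfrog.io/docker-local/my-app
          dockerImageTag: ${run_number}
          inputResources:
            - name: app_gitRepo
          integrations:
            - name: my_artifactory    # an Artifactory integration

      - name: push_image
        type: DockerPush
        configuration:
          targetRepository: docker-local
          inputSteps:
            - name: build_image
          integrations:
            - name: my_artifactory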

If the native steps do everything you want, they can be assembled together like Lego bricks to make a pipeline and you’re done. Here’s how I used JFrog Pipelines to align our existing local environments with the CI delivery process.

My Build, My way

I needed more flexibility for using existing build scripts, since these had already been created and I wanted a repeatable process between CI and workstation. To make my build scripts run in Pipelines, I needed a way to set up the containerized build environment so that I had:

  1. Extra tools
    Getting hold of the extra tools I needed was easy. I built a custom image and configured the pipeline to use it.
  2. Tools configured to use Artifactory
    I needed tools like yarn and podman to be logged in to Artifactory so that my builds could publish artifacts.

Every tool out there has its own way of configuring Artifactory, and while JFrog Pipelines has a Bash step that lets me do anything in Bash, I knew this approach would quickly result in a lot of hard-to-manage, completely custom build scripts.

We can do better than this.

JFrog Pipelines Extensions

JFrog Pipelines Extensions let me abstract away common tasks and re-use them across projects. They come in two flavors:

  1. Resources
    Things you need to make your step work. They are a great place to do setup, such as configuring a tool to work with Artifactory. The details of resources are invisible when you view your pipeline and each resource name must be unique across builds.
  2. Steps
    Discrete things you want to do as part of your build and are typically more constrained in how they work. Each step is clearly shown when you view your pipeline.

Resources or Steps?

Which one to use depends on what you are trying to achieve. Steps let you build a workflow that does one thing and does it well. Resources, on the other hand, are the right place to set up your environment so you can build software in Pipelines the same way you build locally.

The challenge: Setup JFrog Pipelines environment to replicate local build

Resources were the right way to set up the environment for my build.

Extensions Registry

I got all of my builds working in JFrog Pipelines just the way I wanted them to and I’ve shared my Extensions Registry on GitHub for the benefit of the DevOps community. This registry is evolving to focus on using resources instead of steps where possible to give developers the greatest flexibility. It was technically reviewed with Tal Yitzhak, a Solution Engineer from the JFrog DevOps Acceleration team. Here are some of the example repositories it includes (see links in the following section).

OCI Container images

Name: declarativesystems/ContainerEnv

Description:

  • Pull images from Artifactory
  • Push images to Artifactory
  • Full control of image building

Tools configured:

  • buildah
  • podman

Custom Docker image should provide:

  • buildah
  • podman

Node.js

Name: declarativesystems/NpmEnv
Description:

  • Resolve dependencies from Artifactory
  • Publish to Artifactory

Tools configured:

  • npm
  • yarn

Custom Docker image should provide:

  • Node.js
  • npm
  • yarn

Python

Name: declarativesystems/PythonEnv
Description:

  • Resolve dependencies from Artifactory
  • Publish to Artifactory

Tools configured:

  • pip
  • poetry

Custom Docker image should provide:

  • Python
  • pip
  • poetry

Summary

JFrog Pipelines extensions make it simple to create new pipelines by configuring your environment and then running your existing build scripts. To use these extensions within your own organization, fork the repository and you can start using the resources it provides in your own pipelines.

Leap forward with JFrog Pipelines. Start for free >

SwampUP Leap: Splunk’s DevOps and CI/CD Journey With JFrog
https://jfrog.com/blog/splunk-devops-and-ci-cd-journey/ (21 Jul 2020)

At swampUP 2020 we were delighted to have a keynote interview with Splunk CTO & SVP Tim Tully. In this fireside chat, Tim talks at length about how Splunk has evolved over the last two decades, and how he is driving technical innovation to equip customers far beyond log analytics.

Being in Charge

Tim oversees everything tech at Splunk, including Product, Engineering, IT and Security. This includes a distributed team of 1,000 developers, spread across 100 or so scrum teams, supporting over 2,000 Splunk applications. Tim talks about his passion for coding and how he actually prefers sitting in the corner of a room of developers with his headphones on, staring at a terminal. 

The Road Ahead

As CTO, Tim owns the Splunk roadmap, a challenge he enjoys and grabs with both hands, as you’ll see when he delves into their new acquisitions and technologies and how they are applying them to expand the capabilities of their platform. 

Hear Tim describe some of the advances in machine learning that are breaking boundaries for real-time data processing, which now feature in their new Data Stream Processor. Plus, get a heads-up on some of their upcoming releases. 

Listen to Tim explain Splunk’s latest advances in observability and monitoring, including their innovations for monitoring microservices-oriented cloud applications running in a containerized Kubernetes environment.

Immerse yourself in this talk and learn some interesting facts about Splunk, the world of DevOps, and how JFrog and Splunk are aligned in supporting open source and giving back to the community.

JFrog Pipelines 1.6: Overcoming CI/CD Obstacles to Scaling DevOps
https://jfrog.com/blog/jfrog-pipelines-1-6-overcoming-ci-cd-obstacles-to-scaling-devops/ (9 Jul 2020)

Long release cycles are no longer viable in the world of software development. The promise of DevOps has been to materially shrink time to value. Like most meaningful transitions, this one hasn’t always been a simple flip of a switch. For many organizations, development teams have become complex and unwieldy. So, the custodians of DevOps have found it difficult to achieve broader adoption of DevOps principles across engineering teams.

We heard about some of these challenges from you. We heard about approaches to DevOps that created a complex tangle of technology. This complexity only got in the way of broad adoption. You told us that you need a simple, central, cloud-native, Kubernetes-enabled way to solve the problem. You asked for consistency, standardization, and reusability.

We created JFrog Pipelines with the goals you set forth for us. We have now added new features to JFrog Pipelines 1.6 with the same goals in mind. We’re committed to helping you achieve broader adoption of DevOps principles across engineering teams so that you can deliver a continuous stream of value for your customers.

Build Your Own Library of Custom Steps

We’re extending the concept of reusability to make it even easier for you to build your pipelines. Workflows in JFrog Pipelines are built by combining discrete steps, each of which performs a task. We have pre-built several common steps our users need – we call these Native Steps. We currently provide a robust set of over 20 Native Steps that cover your core DevOps tasks.

In addition to commonly occurring steps across all our customers, each of you will have commonly occurring steps that are unique to you. So, we’re now taking Native Steps to the next level by empowering you to build your own library of steps.  Now you can encapsulate frequent common actions in your own pipelines. This is much simpler than other approaches involving plugins that conflict with each other, creating a ‘plugin hell’ for users. You can define your custom steps in YAML files in your source control system and import these steps into pipelines.

Being able to assemble pipelines, rather than scripting them, makes pipeline creation scalable – every incremental pipeline becomes easier to write. The effort of a few experts, who can create your custom steps, can go a long way towards empowering others who do not need the same level of DevOps skills.

Our vision for pre-built steps is wide-ranging, with an eye towards enabling our partners and our community to create and share pre-built steps in the future. Your ability to define your own custom step library is a huge step forward. Moreover, we have a huge runway with what we can accomplish with this. Watch this space!

Cloud-Native Efficiencies With Kubernetes Build Nodes

We were early entrants into the Kubernetes game by extending Artifactory to serve as your Docker Container Registry as well as your Helm Chart Repository, making Artifactory your full-featured K8s registry. K8s enables you to quickly and predictably deploy your applications using containerized microservices and is rapidly becoming the de facto standard platform for managing the orchestration of containers.

If you’re consolidating your workloads on K8s, it’s likely that you’ll also want to run your Pipelines build nodes on K8s. Your pipelines can then inherit all the platform automation that comes with K8s. You can now specify node pools that run on K8s pods for your pipelines as an alternative to running your pipelines on VMs. So your pipelines will be able to build anywhere K8s runs – whether on EKS, GKE, Azure’s AKS, or your on-prem clusters. This helps you avoid cloud vendor lock-in, and empowers you to leverage the strengths of each cloud provider as appropriate for your workloads.

Kubernetes node pools are dynamic. Pipelines can spin up or down build nodes on demand. Pipelines delivers the scale efficiencies associated with cloud-native computing, minimizing cloud service charges by demanding resources only as they are needed.

Matrix Builds – Define and Run Your Builds Fast

Test suites are growing in complexity, involving various permutations of environments, languages, tools, and versions of runtimes. It can be tedious to define each permutation, and the performance of your builds can really suffer from having to sequentially test every permutation. You’re likely to grow tired of waiting for your builds to complete. Enter the Matrix Build.

Matrix Builds enable you to execute the same step action in a variety of configurations and runtime environments, with each variant executing as an independent “steplet.” Currently, all steplets run on a single node, but future releases will support parallel execution of steplets across multiple nodes or node pools. Spreading your build workload can dramatically amp up performance. For instance, you can have various build steplets running in parallel, each one running against a specific version of Node.js, and a specific runtime version of Linux, with its own set of environment variables. On completion of all steplets, Pipelines aggregates the status and results, giving the appearance of a single step.

So how does this magic work? Within the YAML configuration for the step, you can specify the set of environment variables, build images, and language versions you want to test against. Pipelines will then build out the full matrix of tests to be run, and run each combination in parallel. That’s it!

More Improvements

Along with these functional, performance and infrastructure improvements, we’re also enhancing Pipelines for ease of use. 

Easy Integration with Jenkins – New Native Step

We have improved our integration with Jenkins by adding a new native step. This is significant because of the sheer number of development teams that use Jenkins for building and integrating software. With our Jenkins integration, JFrog Pipelines can subsume Jenkins pipelines, empowering you to create pipelines of pipelines. Hence, the adoption of JFrog Pipelines is non-disruptive, enabling you to continue leveraging your investment in Jenkins.

The Jenkins native step empowers you to represent a Jenkins job as a step in your JFrog pipelines. A member of your team creates the integration with Jenkins only once. Developers can then add a Jenkins build step to their JFrog pipelines without being concerned with the particulars of the integration. The Jenkins native step transfers execution to a Jenkins pipeline. Once the Jenkins pipeline completes, it returns control to the JFrog pipeline.

Quickly Find Your Pipelines

We’ve introduced several UI navigation improvements, including the ability to quickly find your pipelines by tagging your favorites. 

We’ve also made it much easier to monitor your multibranch pipelines, with an expanding/collapsing UI to group them and view the status of each.

CI/CD for Fast Forward

These changes mark a huge step forward, establishing a great cadence for enhancements to Pipelines. We’ll keep pushing on these fronts to make pipelines increasingly frictionless and keep you posted on our progress as it occurs.

Now is the time to try out JFrog Pipelines. You can easily sign up for a trial and build your first pipeline by recreating the examples in our set of Pipelines Quickstarts.

CI/CD In Confidence: How Pipelines Keeps Your Secrets
https://jfrog.com/blog/ci-cd-in-confidence-how-pipelines-keeps-your-secrets/ (21 May 2020)

A friend that can’t keep a secret isn’t one you’ll rely on. The same is true for your mission critical CI/CD tool that you have to entrust with credentials for each integrated component.

Keeping your secrets safe can be a challenge for CI/CD tools, since they need to connect to such a variety of other services. Each one needs its own password or token that must be kept hidden from prying eyes. Revealing this sensitive data in the plaintext files that define your workflows is a huge security risk.

JFrog Pipelines was designed for secrecy from the start. Unlike many CI solutions that provide plugins or add-ons that need to be specially installed and maintained, secrets management is built into the way that Pipelines works.

Here’s how Pipelines integrations combine central secrets management with fine-grained access permissions of JFrog Platform to provide convenience, security, and administrator control.

Pipelines Integrations

Pipelines comes with the right variety of out-of-the-box integrations for the tools you’re likely to use most, so connecting to services is a snap. Adding an integration is often just giving it a friendly name, providing an API endpoint, and entering user credentials. Integrations ready to connect include GitHub, Bitbucket, Docker, Kubernetes, and Slack, as well as cloud services like AWS, GCP, and Azure.

Central Secret Storage

Pipelines stores the secrets you provide to all your integrations in a central storage vault, encrypted to keep them safe from any digital intruder.

For example, if Jasmine’s developer team uses a private Docker registry for container images, a Docker Registry integration is provided the username and password.

When Jasmine’s Pipelines DSL references this Docker Registry integration, it reveals only that integration’s friendly name (Jasmine_Docker) in the plaintext file. The Pipelines integration does the work to connect, and the secrets stay securely hidden from view.

integrations:
  - name: Jasmine_Docker          # Our private Docker registry

In this way, all a developer needs to know is the integration’s friendly name to access the service. They never have to directly use the secrets that authorize connecting to the service. When a separate administrator configures integrations, developers on a team don’t need to manage or even know the secrets for the integrations they’re permitted to use, so those services can be safely shared without sharing secrets.

Administrator Control

For control, only a JFrog Platform administrator user can add, edit, or delete integrations. And Pipelines follows security best practice by masking vital secrets such as passwords or tokens in the UI with disc symbols.

In a large, multi-team organization you probably don’t want every user to be able to access every service. When an admin adds or edits an integration, they can restrict access to only certain pipeline sources. In this way, operations can limit a service to use only by some pipelines and, by extension, to the users and groups that have permission to use those pipelines. 

So while it’s easy for a developer to use and share integrations, each can only connect to the services they’re permitted to see.

For example, Sanjay isn’t on Jasmine’s team, so he shouldn’t be able to push images to her private Docker registry. Here’s how Kim, the administrator, might restrict use of that integration to Jasmine’s team:

  1. Kim adds Jasmine’s project repository, e.g. jasmine/pipelines, as a pipeline source.
  2. In the Docker Registry integration Jasmine_Docker, Kim assigns only the pipeline source jasmine/pipelines.
  3. In Administration | Permissions, Kim adds the pipeline source jasmine/pipelines to the permissions target for Jasmine’s team.

Developer Availability

Although Pipelines holds the integration’s secrets centrally, the Pipelines DSL can still access its particulars. When an integration is specified in a step’s integrations block, it can be used in that step’s shell scripts.

For example, if a pipeline needs to send a notification email when a build completes, it can use the built-in utility function and reference the SMTP integration that an administrator has added and named “TeamJasmine”:

integrations:
  - name: TeamJasmine    # SMTP integration
execution:
  onSuccess:
    - send_notification TeamJasmine --body "built docker image docker-local/demo:$pipeline_name.$run_number"

Similarly, if a shell script needs the values held by an integration, it can access them through environment variables. For example, to issue a Secure Shell (SSH) command using the private key in an SSHKeys integration:

integrations:
  - name: mySSHKeys    # SSH Keys integration
execution:
  onExecute:
    - echo "$int_mySSHKeys_privateKey" > key.txt
    - chmod 400 key.txt
    - ssh -i key.txt user@host 'do some work'

Neither of these examples ever exposes keys or passwords in the plaintext Pipelines DSL file, so the file is safe to store in an online source code repository. None of this critical information ever leaves the secure confines of your Pipelines environment.

Tops in Top Secret

Pipelines integrations enable you to share secure resources, while keeping the secrets that authorize their use extra safe. Using the JFrog Platform’s unified permissions model, you can grant access to those who need it and block access to everyone else.

That’s just one detail of Pipelines’ design for cloud native, enterprise-scale CI/CD, but a very important one. It reflects our comprehensive approach to building a system for one-stop DevOps.

Give it a try for yourself, and see how Pipelines can help speed your releases safely.

Jenkins and JFrog Pipelines: CI/CD Working Together to Release Your Software https://jfrog.com/blog/jenkins-and-jfrog-pipelines-ci-cd/ Wed, 13 May 2020 14:31:27 +0000 https://jfrog.com/?p=58177

As a software producer, you need to keep releases moving, even as you need to move your technology ahead. Transitioning your Jenkins continuous integration (CI) pipelines to a newer, optimized system can’t be a roadblock, and your enterprise can’t afford the work stoppage a rip-and-replace rework would require.

We understood that deeply when we built our CI/CD solution, JFrog Pipelines. That’s why we made it very easy to connect your current Jenkins pipelines to ones in JFrog Pipelines, so that you can extend your existing toolchain, not disrupt it.

Like many organizations, you’ve already invested hundreds of developer hours building Jenkins pipelines that have become integral to your software build process. Those Jenkins workhorses can continue to drive essential parts of your CI while handing off to new workflows in JFrog Pipelines.

Let’s take a look at how that’s done, to enable Jenkins and Pipelines to work together.

Pipelines and the Platform Difference

Pipelines is the CI/CD component of the JFrog DevOps Platform, an end-to-end set of solutions for “one-stop DevOps.” Powered by Artifactory, the JFrog Platform provides everything you need to manage your organization’s software delivery, from artifact repositories and binary distribution to security scanning and CI/CD automation.

Chances are good that your Jenkins pipelines are already pushing artifacts and builds to Artifactory repositories for things like Go, Docker, and Helm. That’s because Artifactory’s universal repository management enables connection with the DevOps tools you choose, including the most popular CI servers. That’s helped make Artifactory the accepted industry standard for binary repository management.

JFrog Pipelines is the automation glue that helps unify all the tools in the JFrog DevOps Platform. Like Jenkins, it can move your software through each stage, from code to build to binaries and all the way to distribution. But as part of the JFrog Platform, it’s naturally integrated with Artifactory, Xray, and Distribution, and can be administered through a unified permissions model for fine-grained access control.

Pipelines also operates at enterprise scale, able to support hundreds of CI/CD pipelines through a single, central platform for all administrators and teams.

Even with these compelling reasons to migrate your CI from Jenkins to Pipelines, it might not be practical to do so all at once. 

From Jenkins to Pipelines

For this example, we have built a Go REST application that Jenkins will build, run unit tests against, and then push to a staging Docker repository. Next, JFrog Pipelines will deploy the containerized Go application from the staging repository to a Kubernetes cluster; we will use Google Kubernetes Engine (GKE). We will also use Artifactory as our Docker registry, which makes it easy to promote the build to release without pushing the same build to another release registry.

The code repository for this example contains our Go REST application, the Jenkins pipeline Jenkinsfile and the JFrog Pipeline YAML file. Per best practices, the pipeline infrastructure is defined in these files, too.

Jenkins Pipeline

Our Jenkins pipeline performs the initial build and testing of our application, pushing a Docker container to a repository with build information.

Let’s take a look at our Jenkins pipeline. The important sections of the following Jenkinsfile are the Publish Build Info stage and the post block. After Jenkins builds and tests our Go application image, we publish the build info to Artifactory.

stages {
 stage('Build') {
     steps {
         container('golang'){
             sh 'go build'
         }
     }
 }
 stage('Unit Tests') {
     steps {
         container('golang'){
             sh 'go test ./... -run Unit'
         }
     }
 }
 stage('Docker Build') {
   steps {
     container('docker'){
         sh "docker build -t partnership-public-images.jfrog.io/goci-example:latest ."
     }
   }
 }
 stage('Docker Push to Repo') {
   steps {
     container('docker'){
         script {
           docker.withRegistry( 'https://partnership-public-images.jfrog.io', 'gociexamplerepo' ) {
             sh "docker push partnership-public-images.jfrog.io/goci-example:latest"
           }
        }
     }
   }
 }
 stage('Publish Build Info') {
   environment {
     JFROG_CLI_OFFER_CONFIG = false
   }
   steps {
     container('jfrog-cli-go'){
         withCredentials([usernamePassword(credentialsId: 'gociexamplerepo', passwordVariable: 'APIKEY', usernameVariable: 'USER')]) {
             sh "jfrog rt bce $JOB_NAME $BUILD_NUMBER"
             sh "jfrog rt bag $JOB_NAME $BUILD_NUMBER"
             sh "jfrog rt bad $JOB_NAME $BUILD_NUMBER \"go.*\""
             sh "jfrog rt bp --build-url=https://jenkins.openshiftk8s.com/ --url=https://partnership.jfrog.io/artifactory --user=$USER --apikey=$APIKEY $JOB_NAME $BUILD_NUMBER"
         }
     }
   }
 }
}

 

Then in the post stage, we trigger JFrog Pipelines by referencing that build info in a special webhook call to the Pipelines REST API. We will talk about how this webhook is set up in JFrog Pipelines next. 

post {
   success {
     script {
        sh "curl -XPOST -H \"Authorization: Basic amVmabcdefM25rMW5z=\" \"https://partnership-pipelines-api.jfrog.io/v1/projectIntegrations/17/hook\" -d '{\"buildName\":\"$JOB_NAME\",\"buildNumber\":\"$BUILD_NUMBER\",\"buildInfoResourceName\":\"jenkinsBuildInfo\"}' -H \"Content-Type: application/json\""
     }
   }
}

JFrog Pipelines

Our JFrog Pipeline will trigger through the build info pushed by the Jenkins pipeline, and perform the remaining deployment and staging actions to release.

To connect Jenkins to JFrog Pipelines, we must first create a Jenkins integration in our Pipelines deployment, here called jenkins_openshiftk8s_com. The integration’s UI provides the curl webhook command shown above, enabling our Jenkins pipeline to trigger our JFrog pipeline.

JFrog Pipelines defines its pipeline steps in YAML. The first section of this file is our resources. These are sources and destinations of data that are used by the pipeline. In our case, we are defining our GitHub repo, a BuildInfo resource connected to the jenkins_openshiftk8s_com Jenkins integration, and a final BuildInfo resource to promote our release. The BuildInfo resource is used to store metadata for our build.

resources:
  - name: gociexampleGithubRepo
    type: GitRepo
    configuration:
      gitProvider: myGithub
      path: myaccount/goci-example
  - name: jenkinsBuildInfo
    type: BuildInfo
    configuration:
      sourceArtifactory: MyArtifactory
      buildName: goci-example/master
      buildNumber: 1
      externalCI: jenkins_openshiftk8s_com
  - name: releaseBuildInfo
    type: BuildInfo
    configuration:
      sourceArtifactory: MyArtifactory
      buildName: goci-example/master
      buildNumber: 1

 

Our first step is a Bash step that receives our Jenkins trigger through jenkinsBuildInfo.

- name: start_from_jenkins
  type: Bash
  configuration:
    inputResources:
      - name: jenkinsBuildInfo
  execution:
    onExecute:
      - echo 'Jenkins job triggered Pipelines'

 

If all goes well, we then deploy our Go REST application to our staging environment, which in this case is a GKE cluster. We reference this cluster through a Kubernetes integration named gociexampleClusterCreds. We can integrate with any Kubernetes cluster by providing our kubeconfig data as an Integration object.

We use the HelmDeploy step to deploy our application using a Helm chart directory in our repo.

- name: deploy_staging
  type: HelmDeploy
  configuration:
    inputSteps:
      - name: start_from_jenkins
    inputResources:
      - name: gociexampleGithubRepo
        trigger: false
    integrations:
      - name: gociexampleClusterCreds
    releaseName: goci-example
    chartPath: chart/goci-example/

 

Then we have a Bash step that waits for the Go REST application to become available.

- name: wait_for_server
  type: Bash
  configuration:
    inputSteps:
      - name: deploy_staging
  execution:
    onExecute:
      - timeout 60 bash -c 'while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' https://goci-example.35.238.177.209.xip.io)" != "200" ]]; do sleep 5; done' || true

 

Once our Go REST application comes up in our staging environment, we execute our staging tests.

- name: staging_test
  type: Bash
  configuration:
    inputSteps:
      - name: wait_for_server
    inputResources:
      - name: gociexampleGithubRepo
        trigger: false
    runtime:
      type: image
      image:
        auto:
          language: go
          versions:
            - "1.13"
    environmentVariables:
      STAGING_URL: "https://goci-example.35.238.177.209.xip.io"
  execution:
    onExecute:
      - cd ../dependencyState/resources/gociexampleGithubRepo
      - go mod download
      - go test ./test -run Staging

 

Finally, if our staging tests pass, we promote the build to release.

- name: promote_release
  type: PromoteBuild
  configuration:
    targetRepository: partnership-public-images.jfrog.io
    status: Released
    comment: Passed staging tests.
    inputResources:
      - name: jenkinsBuildInfo
    outputResources:
      - name: releaseBuildInfo

 

Our JFrog Pipeline can be further extended to provide continuous delivery (CD) operations using JFrog Distribution to publish your software to end systems and JFrog Edge systems. But we will leave that to a future blog post.

Keep DevOps Moving

As you can see, it’s a straightforward process to connect your Jenkins pipeline to one in JFrog Pipelines. If needed, you can also trigger a Jenkins pipeline from JFrog Pipelines.
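For the reverse direction, here is a minimal sketch of a step that kicks off a Jenkins job from a JFrog pipeline, assuming the Pipelines DSL's native Jenkins step type and hypothetical names for the Jenkins integration (myJenkins) and the job (goci-example-build):

- name: trigger_jenkins_job
  type: Jenkins
  configuration:
    jenkinsJobName: goci-example-build    # hypothetical Jenkins job to trigger
    integrations:
      - name: myJenkins                   # Jenkins integration added by an administrator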

Building your software delivery toolchain from a patchwork of tools can be a time-consuming and frustrating task. Using a unified toolchain like the JFrog Platform allows you to focus on your software instead of the tools. And if you have existing tools, you can easily plug them into the JFrog Platform and still take advantage of its features.

Parallel Maven Deployment with Jenkins and Artifactory https://jfrog.com/blog/parallel-maven-deployment-with-jenkins-and-artifactory/ Mon, 30 Mar 2020 13:29:09 +0000 https://jfrog.com/?p=56183

There are many reasons why you may want to use Artifactory as your Maven repository. For example, it allows tagging Maven artifacts with custom properties, so that they can later be found based on specific criteria. It stores build metadata about your artifacts, and allows controlling the repositories used by the Maven build, without modifying the pom file. In this post, I’d like to focus on one specific advantage, Maven deployments in Jenkins.

Reduce Maven Build Time with Parallel Deployment

JFrog has recently released version 3.6.1 of the Jenkins Artifactory Plugin. This release includes a significant enhancement when it comes to Maven deployments – it is now parallel! You now have the option of setting the number of threads assigned to the deployment of Maven artifacts. 

This should dramatically reduce build time, especially if your build creates and deploys a large number of artifacts.

If you’re already using the plugin to build your code, then after upgrading the Artifactory Plugin, and without changing any of your build configuration, deployment time should drop to roughly a third of what it used to be. By default, Jenkins uses three threads for deployment, but you can change this default from within your pipeline code.

If you’re already using the Artifactory pipeline APIs, then your script should include this section, which defines the Maven deployment using declarative syntax. Notice the new threads property, which is now supported.

rtMavenDeployer (
    id: 'deployer-unique-id',
    serverId: 'Artifactory-1',
    releaseRepo: 'libs-release-local',
    snapshotRepo: 'libs-snapshot-local',
    threads: 6 // The default value is 3
)

If you’re using the scripted syntax, you can set the threads count on the deployer as follows:

rtMaven.deployer.threads = 6

Read more about Maven builds with Jenkins and Artifactory, and get started with the latest version of the Jenkins Artifactory Plugin.
