Best Practices Archives | JFrog: Release Fast Or Die

Top JFrog Security Research Blogs of the Year https://jfrog.com/blog/top-jfrog-security-research-blogs-of-the-year/ Fri, 12 Jan 2024 14:51:19 +0000 https://jfrog.com/?p=123915

With over 29,000 CVEs and 5.5 billion malware attacks recorded in the past year, it’s no wonder that software supply chain security is a top priority for enterprise developers on a global scale. That is also why JFrog Security Research has been instrumental in identifying and analyzing the biggest threats and devising methods to protect software from exploitable vulnerabilities and malware.

Preventing attacks before they happen can save companies from significant financial damage, loss of reputation and, even worse, deterioration of customer trust. DevSecOps professionals who want to stay on top of industry threats should review this year’s top JFrog Security Research analysis and findings on the biggest vulnerabilities and malware attacks, along with recommended best practices for remediation.

Here are this year’s reported threats, discoveries and security analysis:

  1. SSH protocol Terrapin Attack
  2. N-Day Package Hijacking Threat
  3. Plexus-Archiver Vulnerability
  4. Curl and Libcurl Vulnerabilities
  5. Spring WebFlux Vulnerability
  6. .NET WhiteSnake Malware Payload
  7. Malicious Code in NuGet packages
  8. Analyzing the Impala Stealer Payload
  9. OpenSSH Sandboxing & Privilege Separation
  10. The Most Insecure Docker Application
  11. OpenSSH Pre-Auth Double Free Vulnerability
  12. Detecting Malicious Packages and Hidden Code
  13. DoS Threat when using Rust’s Hyper Package

1. SSH Protocol Flaw – Terrapin Attack CVE-2023-48795: All You Need to Know

The Terrapin attack is a recently discovered, high-profile attack that affects multiple well-known software packages implementing the SSH protocol, both SSH clients and SSH servers. This post includes a technical analysis of the vulnerability, discusses its exploitation impact, and explains how it was fixed in OpenSSH. It also shows how to mitigate the vulnerability without upgrading. JFrog Xray and JFrog Advanced Security can identify this and similar vulnerabilities across the entire codebase.

JFrog Xray and JFrog Advanced Security can identify Terrapin and similar vulnerabilities

2. N-Day Hijack: Analyzing the Lifespan of Package Hijacking Attacks

The consequences of a software package hijacking attack can be severe, ranging from data theft and data corruption to installing malicious software, so organizations must take steps to reduce the risk of package hijacking attacks. This post analyzes the typical time before a package hijacking attack is detected and concludes that waiting at least 14 days before upgrading to a new package version would have mitigated all of the hijacking instances outlined in the post. JFrog Curation can be used with a specific rule to enforce this 14-day waiting period in your organization.


GitHub Security Advisory regarding the node-ipc hijacking vulnerability

3. Arbitrary File Creation Vulnerability in Plexus-Archiver – CVE-2023-37460

CVE-2023-37460 in plexus-archiver, an archive creation and extraction package, can be exploited by extracting a malicious archive that contains a symlink to a path outside of the extraction directory. By triggering the vulnerability, an attacker can create an arbitrary file that did not previously exist, which can lead to remote code execution. JFrog Security researchers found that even after the ZipSlip fix back in 2018, plexus-archiver is still vulnerable to this attack.

The plexus-archiver can be exploited by extracting a malicious archive containing a symlink to an outside directory

4. CVE-2023-38545 & CVE-2023-38546 Curl and Libcurl Vulnerabilities: All You Need to Know

The release of Curl v8.4.0 included fixes for two discovered vulnerabilities, one rated low severity and the other high severity. The high-severity vulnerability affects both the Curl command-line tool and libcurl. This blog post details the applicability conditions of the high-severity CVE, the vulnerability details, how likely it is to be exploited in the wild, and possible mitigations without upgrading.

Demonstration of a DoS attack via an arbitrary-read primitive

5. Spring WebFlux – CVE-2023-34034: Write-Up and Proof-of-Concept

Spring is a widely used Java-based application framework that provides infrastructure support for the development of enterprise-level Java applications. The official fix for the WebFlux CVE provided few details, and that, along with all the hype surrounding the CVSS score, compelled the JFrog team to publish its own analysis. This blog post provides details on the vulnerability, the applicability conditions, a remediation, and a proof-of-concept demonstrating in which cases the vulnerability can be triggered to cause an authentication bypass.

Proof-of-concept against a simple Spring WebFlux application

6. New .NET Malware “WhiteSnake” Targets Python Developers, Uses Tor for C&C Communications

The JFrog Security Research team discovered a unique malware payload in the PyPI repository, written in C#. Although it’s common to see native binary payloads in malicious packages, it’s fairly uncommon to see this type of cross-language attack. The C# payload was rather sophisticated and among other things used Tor as a way to communicate with its C&C server.

An example of manually deobfuscated code

7. Attackers are starting to target .NET developers with malicious-code NuGet packages

The security research team identified a sophisticated, highly malicious attack targeting .NET developers via the NuGet repository; the malicious packages were downloaded 150K times over the course of a month before they were removed from the NuGet repository. This first-ever documented attack of its kind in the NuGet repository deployed its payload in two stages and proved that no open-source repository can be considered safe.

Azetap.API package’s author defined as Microsoft with a false description

8. Analyzing Impala Stealer – Payload of the first NuGet attack campaign

This post provides a detailed analysis of a malicious payload dubbed “Impala Stealer”, a custom crypto stealer that was used as the payload for the NuGet malicious packages campaign we exposed in a previous post. Impala Stealer was designed to steal credentials from the Exodus wallet. The post describes how the campaign targeted .NET developers via malicious NuGet packages, provides a detailed analysis of the payloads used in the campaign, and explains how they maintained persistence and located the Exodus wallet installed on the device in order to steal its credentials.

Impala Stealer and Updater flow chart

9. Examining OpenSSH Sandboxing and Privilege Separation – Attack Surface Analysis

The OpenSSH double-free vulnerability CVE-2023-25136 has created a lot of confusion regarding OpenSSH’s custom security mechanisms – Sandbox and Privilege Separation. Neither of these security mechanisms is well known, and both have limited documentation. This blog post provides an in-depth analysis of OpenSSH’s attack surface and security measures.

OpenSSH’s Privilege Separation Mechanism

10. Testing the Actual Security of the Most Insecure Docker Application

WebGoat is a well-known insecure web application, written mostly in Java, used publicly for application security training and benchmarking tools. In this post, JFrog Security researchers analyzed the WebGoat Docker image and used it to demonstrate the power of the automatic CVE applicability detection feature in Xray – “Contextual Analysis”. Contextual Analysis can automatically analyze code and differentiate between exploitable vulnerabilities and those that are not exploitable.

JFrog Xray Contextual Analysis results for CVE-2013-7285
An example of CVE Contextual Analysis report

11. OpenSSH Pre-Auth Double Free CVE-2023-25136 – Writeup and Proof-of-Concept

OpenSSH version 9.2p1 contained a fix for a double-free vulnerability. Given the vulnerability’s severe potential impact and OpenSSH’s popularity in the industry, this security fix prompted the JFrog Security Research team to investigate the vulnerability. This post provides details on the vulnerability, the applicability conditions, its impact, and a proof-of-concept that triggers it to cause a Denial of Service (DoS).

12. Detecting Malicious Packages and How They Obfuscate Their Malicious Code

In this fourth post of the malicious packages analysis series, the security research team analyzes techniques for hiding and obfuscating malicious code in software packages, along with how a malicious package can be detected and prevented by security teams. This includes how to identify known and unknown malicious packages and best practices for secure development to avoid infection by one.

13. Watch out for DoS when using Rust’s popular Hyper package

The JFrog Security Research team discovered and disclosed multiple vulnerabilities in popular Rust projects such as Axum, Salvo and conduit-hyper that stem from the same root cause: forgetting to check the Content-Length header of requests before calling a function in the Hyper library. This post elaborates on the root cause of the issue and provides guidance on how to remediate it to avoid potential DoS exploitation.

Without any length checks, it is possible to cause a DoS attack using a small packet size

Stay up-to-date with JFrog Security Research

The security research team’s findings and research play an important role in improving the JFrog Platform’s application software security capabilities.

Follow the latest discoveries and technical updates from the JFrog Security Research team on our research website, and on X @JFrogSecurity.

How to Onboard to a Federated Repository https://jfrog.com/blog/how-to-onboard-to-a-federated-repository/ Wed, 08 Mar 2023 19:35:44 +0000 https://jfrog.com/?p=111291

Scaling up your development organization typically involves spreading development across multiple locations around the globe. One of the key challenges with multisite development is ensuring reliable access to required software packages and artifacts for teams collaborating across time zones. The JFrog Software Supply Chain Platform solves this challenge with federated repositories in JFrog Artifactory.

What are federated repositories?

Federated repositories are created when two or more repositories of the same package type are connected via federation to enable automatic, full bi-directional mirroring across different JFrog Platform Deployments (JPDs) or JFrog Artifactory instances (Figure 1). This type of sync is also used for geo-synchronized environments or for an active-active Disaster Recovery (DR) environment.

Figure 1. Bidirectional synchronization with federated repositories.

Benefits of continuously synced repositories

Federated repositories make it easy for distributed teams to work together by giving them access to the same set of artifacts, builds, and packages. This configuration eliminates the need for complex replication setups and rules around when artifacts are pushed or pulled from one repository to another. All-in-all, it makes the process of sharing components across multiple JPDs or development sites much easier to manage and maintain.

Before getting started with federated repositories

There are certain infrastructure elements to consider before starting to use federated repositories. They include:

  • Network speed: Federation relies heavily on the network infrastructure, so network speed should be taken into account when configuring federation between two JPDs.
  • Disk size: While a federation is being triggered, temporary files are created on the root disk, so having enough disk space is necessary.
  • Managing load: Federation adds extra load on Artifactory, so admins must consider what needs to be federated, the size of the artifacts, and the number of artifacts to be federated. Regular monitoring of database resources (e.g., CPU, connections, long-running queries) and infrastructure resources (e.g., CPU, memory, storage, JVM parameters) is recommended.

There are also a handful of prerequisites before being able to connect repositories in a federation:

  • The appropriate subscription/license level (i.e., Enterprise X and above).
  • For Artifactory versions earlier than 7.49.3, the Artifactory version must be identical between federated members.
  • Before creating federated repositories, it’s mandatory to configure a custom base URL for Artifactory (see the example after this list).
  • A circle of trust must be established between the Artifactory instances.
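
For reference, the custom base URL can be set in the UI or via Artifactory’s Update Custom URL Base REST API. Here is a minimal sketch; the hostname and credentials are placeholders:

# Set the custom base URL on a JPD (hypothetical host and credentials)
$ curl -u admin:<password> \
       -X PUT "https://jpd1.example.com/artifactory/api/system/configuration/baseUrl" \
       -H "Content-Type: text/plain" \
       -d "https://jpd1.example.com"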

Once you’ve reviewed your infrastructure and ensured you have the right subscription level, you’re ready to start setting up your repository federation.

Best practices for setting up federated repositories

Let’s take a look at onboarding best practices for federated repositories in three scenarios:

  1. Newly created federated repositories
  2. Moving from push replication to federated repositories
  3. Federating a large repo to an existing member site

Newly created federated repositories

In a large-scale enterprise, start slow by configuring the federation for small repositories in order to validate federation speed and watch for any system performance anomalies.

After confirming that the limitations and resource usage are satisfactory, start the federation for the remaining repositories in batches.

The federation can be scheduled based on technologies or based on team structure. For example, the federation can be performed batch-wise, starting with generic repositories and then moving on to Maven, Docker and so on. Alternatively, the federation can be organized around individual teams: start by federating all of the repositories for Team 1 before moving on to Team 2, and then Team 3.

If time is a constraint and the aim is to have an active sync of all repositories from JPD1 to JPD2, the following four steps can be followed:

  1. Copy the filestore of JPD1 to JPD2.
  2. Perform the migration of the configurations from JPD1 to JPD2 by following the methods in this KB article.
  3. This migration will create duplicate service IDs for Artifactory and Access in JPD2, so reach out to JFrog Support for assistance changing the service IDs.
  4. Once the migration is completed and the service IDs are changed, convert the local repositories to federated repositories and establish the federation for the delta data synchronization.

Moving from push/pull replication to federated repositories

In scenarios where a push or pull replication is already configured to keep repositories in JPD1 and JPD2 in sync, converting to federation is relatively easy since the data should already be in sync.

Again, start slow by converting the least-used local repositories to federated repositories and monitor the above-mentioned infrastructure resources.

If a global DNS is being used between JPD1 and JPD2, and the custom base URL for both JPD1 and JPD2 is the same, configure the federated base URL (rather than the regular base URL setting) in the config descriptor, as described in this wiki.

Federating a large repository to an existing member site

Another use case is setting up a completely new federation of a single large repository from JPD1 to JPD2.

Start by setting up a push replication of this large repository to the JPD2 local repository, so that the push replication copies all of the repository’s binaries from JPD1 to JPD2.

Push replication is suggested here because it is unidirectional and will not add as much load as a federated repository would, given that federated repository sync is bi-directional.

Once the push replication has copied all of the data for this large repository from JPD1 to JPD2, convert the large local repository to a federated repository and monitor the infrastructure resources mentioned above.

Items to be aware of during the federation process and tuning parameters

When leveraging federated repositories, there are some potential concerns to be aware of:

  • Metadata files such as maven-metadata.xml are not federated, so the number of files federated will differ between JPD1 and JPD2.
  • For Docker repositories, the files under the _uploads folder are not federated, so the artifact counts will also differ between JPD1 and JPD2.
  • When a federation for a large repository fails and a read timeout is seen in the logs, the timeout should be increased as mentioned here in this wiki by adding the artifactory.mirror.http.client.socket.timeout.mili=200000 parameter to the artifactory.system.properties file.
  • Similarly, the timeouts on the reverse proxy/load balancer should also be increased, considering the federation traffic is redirected through them.
  • As the federation adds some load to Artifactory, it’s recommended to tune your Artifactory instance as indicated in this KB article.

Upon successfully tuning Artifactory, the federated repository settings can also be tuned by adding the following:

  • Increase the federated repository configs in the binarystore.xml
    <provider id="federated-repo" type="federated-repo">
    <numberOfRemoteImporters>40</numberOfRemoteImporters>
    <numberOfLocalImporters>10</numberOfLocalImporters>
    <errorRecoveryInterval>35000</errorRecoveryInterval>
    <maxRetry>10</maxRetry>
    </provider>
  • Tune the federated repository parameters in the artifactory.system.properties
    artifactory.federated.repo.executor.poolMaxQueueSize=20000 (default is 10000)
    artifactory.federated.max.config.threads=20 (default is 5)
    artifactory.federated.repo.max.total.http.connections=70 (default is 50)
    artifactory.federated.repo.max.threads.percent=20 (default is 10)

Monitoring the federated repository sync

Federated repository synchronization can be monitored by getting the status of the federation with the Federated Repository sync status REST API:

$ curl -u admin -XGET https://myartifactory.jfrog.com/artifactory/api/federation/status/repo/<example-repo-local>

Monitor the lag since the last federation event using this REST API:

$ curl -u admin -XGET https://myartifactory.jfrog.com/artifactory/api/federation/status/mirrorsLag

Get started with federated repositories today

If you’re looking for a way to keep multiple instances of JFrog Artifactory in sync as part of the JFrog Platform, consider using Artifactory’s federated repository functionality. The tips above should get you started, but if you need more guidance, our support team is here to help.

If you don’t use Artifactory today, you can give it a try for free.

The Peopleware Running Cloud DevOps https://jfrog.com/blog/the-peopleware-running-cloud-devops/ Wed, 21 Jul 2021 12:32:07 +0000 https://jfrog.com/?p=78907

Early this year, we set out on a journey to onboard a new cloud engineering team at JFrog. Many can relate to the challenges involved with onboarding a new team, and these were amplified even more during the pandemic. However, this blog post is not about COVID-19; it is about sharing our experience of fine-tuning the onboarding path for this unbeatable group.

TL;DR: What it takes to build and onboard a team of junior engineers into the existing JFrog Cloud engineering team, including a tailored bootcamp with academy training, real-world experience and much more.

But wait, what is Production Engineering @JFrog?

JFrog’s Production Engineering team is responsible for the efficiency, scalability, performance and reliability of all our production services on AWS, Azure and GCP. Combining software and systems engineering, Production Engineers develop tools, processes and technologies that operate on hyper-scale environments.

Production Engineering new team members

JFrog Production Engineering Team

JFrog CloudOps Academy

With an increasing need to hire tech superstars, we decided to bulletproof our tech stack and build an internal “JFrog CloudOps Academy” as part of the Production Engineering group. The academy was designed to introduce new engineers from diverse backgrounds and experiences to the JFrog team.

Over a timeframe of about 10 weeks, our new engineers were able to build up their cloud DevOps skill-set. We designed customized learning paths that included guided, self-paced theory lessons, as well as much needed hands-on experience with our live cloud environments.

The program allowed the new team to fast-track into a working environment where they work together, complement each other, and take a concept forward as a group. Essentially, this represents the human side of things, just like Jayne Groll talks about in her DevOps for Humans talk. The participants not only acquired a technical skill set, but also day-to-day work methodologies with iterative work and short agile feedback loops.

Ongoing Feedback Loop

Ongoing Feedback Loop – Learning/Teaching Methodology

Here are the highlights of what our academy enabled our team to do.

Build a solid agile foundation

The way each team practices agile is unique to their needs and culture. We created JIRA Epics, which are large bodies of work that can be broken down into a number of tasks (called stories). We work in sprints (short, time-boxed periods) to make the onboarding stories more manageable. Each story was marked as completed only after the engineer completed a final required technical task.

Gain meaningful experience

One of the main purposes of Cloud Engineering is to continuously deliver value to everyone in the organization and everyone using the product that the organization delivers.

Throughout the program, the new engineers got hands-on training with systems and infrastructures. For example, the team got to design automation for existing manual testing processes for our cloud environments. To do this, they broke down simple tasks such as deploying new JFrog Platform cloud environments, creating JFrog Pipelines source/integrations with GitHub, and JFrog Xray policies/watches, into automation scripts. The automation of these deployments and configurations has become one of the most used pipeline jobs.

Our training program was based on engineering fundamentals with a proven path to cloud fluency. Here are the details on what it included.

Download the complete reference guide of our learning journey >

Build services at scale with expert guidance

While the new engineers gained technical experience directly supporting and maintaining the many services that power JFrog Cloud, they also had a dedicated mentor to lean on. This provided them with technical guidance and support to navigate through their challenges.

So how did it work for us?

The academy streamlined the onboarding process, enabling the new engineers to join our globally expanding production group and establish a 24/7 production reliability routine by maintaining tools that automate operational processes. They also actively participate in, and now own, some of the cloud maintenance tasks.

“Talent without working hard is nothing.” –
Cristiano Ronaldo

We’re still in the process of blending the juniors with the seniors, but this academy has resulted in a significant increase in knowledge-sharing docs and sessions, and has brought passion and curiosity to the workplace.

There is also a positive impact on other channels. For example, our junior engineers can spot new solutions to old problems. Mentoring is a rewarding opportunity for both the mentee and the mentor, and the relationship does not have to be limited to one mentor and one mentee: multiple cross-team members, or even the company as a whole, can play a mentorship role.

Want to run a similar training program?

Do you want to take a similar leap to find and train cloud engineers? Here are our main takeaways from this incredible program:

  • Identify the right audience
    Our excellent HR process at JFrog enabled us to find amazing frogs. We were able to build a team from professionals with diverse backgrounds, skill sets and experience levels (IT, solutions, DevOps, and systems engineers) and nurture them into the new role.
  • Define the required skills and experience
    For our program, we set out to find good coders with experience in at least one non-shell language, mostly around automation and tooling, along with knowledge of real-world Linux, networking, distributed systems, design and debugging.
  • Build a learning path
    You will need to build your own program. Start by making sure the syllabus includes the things that are important to you and represent the core components that you would like to achieve. Continue to drill down the learning according to your actual day-to-day operation and needs.
    For core learning resources you can utilize a 3rd party learning platform like A Cloud Guru, which gives you an easy way to continuously develop modern tech skills.
  • Take a CloudOps training approach
    Our team focused on learning, took their time with the courses, and understood the importance of hands-on experience. Special emphasis was given to practical knowledge. Everyone worked really hard on the training, and we expect the engineers to put in equal effort: they are expected to practice as much as possible and to raise questions and doubts.

The process allowed us to identify the gaps and any additional training required for each individual participant. The program is generic, but the approach is personalized. Here are the soft and technical skills the team acquired.

What Makes a Superstar DevOps/Production Engineer

LEAP and the rest will follow.

Developer, Transform Yourself: Digital Transformation Starts with You https://jfrog.com/blog/digital-transformation-starts-with-you/ Thu, 17 Jun 2021 14:29:39 +0000 https://jfrog.com/?p=77486

As technical professionals we spend a lot of time developing technical skills. Checking the right boxes of experience with languages, tools, and technologies is what typically lands us a job interview for our specialty.

But what wins the job in DevOps — and carries you to success in it — are your human skills. Even more than technical chops, personal traits like mindset, communication skills, and work habits are your strongest assets in making DevOps work.

To succeed at DevOps, cultivating these traits must be part of any ambitious technical professional’s never-ending learning and growth plan, along with broad technical know-how. A digital transformation of any organization starts with a personal transformation journey.

Jayne Groll – The Human Behind the Humans in DevOps

If you’re researching the human side of DevOps, you’ll immediately encounter Jayne Groll, CEO of DevOps Institute. A frequent presenter at our annual swampUP DevOps conference, Jayne was honored for her swampUP 2021 session on the human side of digital transformation, earning top ratings for both content and speaker quality.
Carl Quinn Award Winner Jayne Groll

Continuous Personal Transformation

DevOps isn’t just a set of procedures, it’s a culture. Just as iterative and incremental improvements drive agile development and DevOps, your personal change is a continuous, ongoing process. 

This means fostering a culture that embraces change, both among teams and as individuals. For everyone involved, personal transformation follows the same path as DevOps – in increments. 

Accelerating releases requires shedding old practices as you adopt new, better ones. Automation drives DevOps, so you have to be comfortable with automating away parts of your current job. Don’t worry, there are plenty of new, higher value contributions to replace the things that you leave behind. Transformation will bring opportunities for growth.

Here are some of the things today’s DevOps engineers should consider in their professional development.

Full-Stack DevOps 

In his swampUP 2021 conference keynote, JFrog CEO Shlomi Ben Haim talked about the full-stack DevOps engineer as somebody whose skill set spans several aspects of delivering software. Job descriptions for front-end or back-end engineers are giving way to a need for full-stack engineers, and demand is growing for site reliability experts who can solve delivery obstacles from end to end.

But this broadening goes beyond tools and tech. As Jayne Groll has observed, DevOps needs T-shaped people. Engineers need to become experts in the core skills of their specialty, but they also need breadth of knowledge. With extended skills comes the ability and willingness to think beyond one’s own silo.

T-Shaped Skills

Fully Rounded People

Over several years of conducting engineer surveys, DevOps Institute has seen the reported set of top 5 skill domains change and shift in importance. Yet the one most consistently at or near the top every year is the set of human skills that enable teams to work together.

Top 5 Must-Have Skill Domains

It takes a tribe to make digital transformation a reality, from the service desk to security testing, process reengineering, and architects. To break down silos between those stakeholders takes human-centered skills that include:

  • Collaboration and cooperation
  • Sharing and knowledge transfer
  • Communication skills
  • Empathy
  • Personal value commitment
  • Diversity and inclusion

15 Top DevOps Skills

Reinvent Yourself

Developing your skills for DevOps engineering means seeking opportunities to upskill beyond your daily work duties. In the latest DevOps Institute survey, 39% of respondents say that their organization doesn’t have an upskilling program. So you’ll have to take the initiative and make the investment in your future.

When choosing, figure out what skills will have an impact. But also honestly assess your willingness to explore change. 

Successful DevOps requires an end-to-end transformation, which is why an end-to-end solution like the JFrog DevOps Platform can provide the DevOps tool infrastructure to change how you build software. As with siloed teams, a complicated and disjointed DevOps tool chain is a huge impediment to the broad visibility and control DevOps professionals need. Support for the human skills of sharing, collaboration, and communication is built into Artifactory’s access to artifacts, fine-grained permissions, and traceability.

Your transformation journey will be different from anyone else’s, but you can start with a free JFrog Cloud subscription to learn how DevOps is done. Even after you’ve become an “expert,” change will keep you hopping; we’re here to help keep you leaping forward.

For more information on the human side of DevOps, watch Jayne Groll’s award-winning session:

Want to learn more about DevOps? Watch the DevOps 101 webinar – Introduction to CI/CD

10 Helm Tutorials to Start your Kubernetes Journey https://jfrog.com/blog/10-helm-tutorials-to-start-your-kubernetes-journey/ Fri, 30 Apr 2021 17:26:53 +0000 https://jfrog.com/?p=61864

The growth of Kubernetes has been stellar, and K8s applications have grown in importance and complexity. Today, even configuring a single application can require creating many interdependent K8s resources, each of which depends on a detailed YAML manifest file. With this in mind, Helm, as a package manager for Kubernetes, is a major way users can make their K8s configurations reusable.

Helm for Beginners

Helm is the go-to application package manager for Kubernetes that enables you to describe the structure of your application through Helm charts. Through the Helm command-line interface you can roll back a deployment, monitor the state of your application, and track the history of each deployment. Helm represents a major change in the way that server-side applications are defined, stored and managed. In April 2019, the CNCF graduated Helm from incubation into a full project, meaning that Helm now receives access to more resources than in the past.
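
For example, a typical release lifecycle through the Helm CLI looks something like this (a minimal sketch assuming Helm 3 syntax; the release and chart names are placeholders):

$ helm install my-release ./my-chart      # install a release from a local chart
$ helm status my-release                  # check the current state of the release
$ helm history my-release                 # view the revision history of the release
$ helm rollback my-release 1              # roll back to revision 1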

Helm’s main features include:

  • Find and use popular K8s software packaged as Helm charts
  • Share K8s applications as Helm charts
  • Create reproducible builds of your Kubernetes applications
  • Manage Kubernetes manifest files
  • Manage Helm package releases

Helm is for everyone: Get a free Guide to Helm


Why Helm Charts?

Helm configuration files are referred to as charts and consist of a few YAML files with metadata and templates rendered into Kubernetes manifest files. The basic directory structure of a chart includes:

package-name/
  charts/
  templates/
  Chart.yaml
  LICENSE
  README.md
  requirements.yaml
  values.yaml

 

Using the helm command, you can install a chart from a local directory or from a `.tar.gz` packaged version of the above-mentioned directory structure. These packaged charts can also be downloaded and installed automatically from chart repositories.
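
As a quick illustration (a sketch assuming Helm 3 syntax; the release, chart and repository names are placeholders):

$ helm install my-release ./package-name                 # install from the chart's local directory
$ helm package ./package-name                            # produces package-name-0.1.0.tgz
$ helm install my-release ./package-name-0.1.0.tgz       # install from the packaged archive
$ helm repo add myrepo https://example.com/charts        # register a chart repository
$ helm install my-release myrepo/package-name            # install from the chart repository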

Become a Helm Champ

There are a vast number of resources available to help you learn how to successfully use Helm to deploy your Kubernetes applications. Many of these resources are tutorials aimed at aiding beginners in understanding Helm and how it works. 

Here are some of my favorite video tutorials that can help explore basic to advanced Helm concepts and practices.

1. What is Helm?

This introductory video tutorial about Helm was created by David Okun from IBM Cloud. The quick tutorial walks through a typical scenario of using Helm to quickly define, manage and easily deploy applications and services in Kubernetes.

 

2. An Introduction to Helm

This video is hosted by the CNCF (Cloud Native Computing Foundation) and covers the basics of Helm and the makeup of charts. They also explain ways to share and consume Helm charts.

 

3. What is Helm in Kubernetes?

This video from Techworld covers the basics of Helm, templating engines and even the downsides of Helm. In the description of the video are timestamps which make it easy to find the portion of the tutorial that you need!

 

4. Helm and Kubernetes Introduction

Matthew Palmer introduces Node.js, Ruby and PHP developers to Helm for Kubernetes. This video covers an overview of Helm’s charts and releases as well as delving into the Helm Architecture. There is also a code example exercise of converting a regular Node.js and MongoDB web application into a Helm Chart.

 

5. Helm Chart Creation

Bitnami has a full Helm Chart tutorial available on YouTube. The tutorial is intended for Helm beginners and teaches how to create a Helm chart, deploy a sample application, add a dependency, and package and share the chart.

 

6. Helm Chart Patterns

This video tutorial is from the CNCF and explains in-depth Helm Chart patterns and best practices for reviewing and maintaining the charts in the public Helm Chart repo.

 

7. Building Helm Charts from the Ground Up

This video tutorial is from the CNCF and gives a much more detailed explanation of key Kubernetes concepts when building Helm charts. This is a comprehensive guide for Helm charting.

 

8. Helm Security – A Look Below Deck

Matt Farina explains some of the basics of Helm security and provides a great overview of how the community is working together to build and improve upon many processes to keep your Kubernetes applications safe.

 

9. Helm 3 Deep Dive

This video is hosted by the CNCF (Cloud Native Computing Foundation). Taylor Thomas of Microsoft Azure and Martin Hickey of IBM discuss the changes that occur in Helm v3. They talk about new features and the architecture that support those features. Topics covered range from changes to the CLI library to chart additions and new client security models.

 

10. Delve into Helm: Advanced DevOps

This advanced Helm tutorial dives deeply into Helm and focuses on lifecycle management and continuous delivery of Kubernetes-native applications in different environments. They show how to extend Helm’s capabilities with plugins and add-ons.

 

Helm On!

 


SwampUP Leap: AppsFlyer Transforms Its Artifact Management with Artifactory’s Single Source of Truth https://jfrog.com/blog/appsflyer-transforms-its-artifact-management/ Thu, 16 Jul 2020 15:39:54 +0000 https://jfrog.com/?p=60932

At swampUP 2020, DevOps platform engineer Roman Roberman spoke about AppsFlyer’s need to gain control over their development environment and automate it.

AppsFlyer’s mobile app Attribution Analytics platform helps marketers measure and optimize their user acquisition funnel. Headquartered in San Francisco, AppsFlyer operates 18 global offices, and its platform is integrated with over 2,000 ad networks, including Yahoo, Google, and Bing.

AppsFlyer’s Operations

Prior to JFrog Artifactory, AppsFlyer’s projects were what Roberman called “a real mess,” using a mix of internal and external repositories.

As a result of this collection of solutions, AppsFlyer encountered issues such as unavailable external sources, and failing builds due to deleted dependencies. Configuring unique repositories for each project cost valuable development time. With so many independent account systems, credentials were hard to track, so all users tended to have the same overly generous permissions. And of course it was difficult to track where the artifacts were located and pulled from.

Accelerating Software Releases With Best Practices

Needing to improve the speed and reliability of their deployments as well as wanting a single location to access all their artifacts, AppsFlyer chose Artifactory as their single source of truth to manage their binaries. Artifactory accelerates AppsFlyer’s software deployments and improves the stability and reliability of their software releases. 

As a JFrog Enterprise subscriber, AppsFlyer uses JFrog Mission Control to manage their main production cluster in Europe and automatically replicates all repositories to their U.S. cluster. They utilize Artifactory’s three repository types (local, remote, and virtual) to manage their artifacts.

With Artifactory in place, AppsFlyer was able to resolve all of their issues and are in control of their artifact management and development processes. 

Learn More About Managing Artifacts in a Distributed Landscape

To learn more about AppsFlyer’s development process before and after installing Artifactory, watch the video.

SwampUP Leap: Salesforce’s Last Mile Delivery at Scale https://jfrog.com/blog/salesforces-last-mile-delivery-at-scale/ Tue, 14 Jul 2020 19:43:27 +0000 https://jfrog.com/?p=60825

At this year’s swampUP 2020 conference, we were fortunate to have several customers who showcased their skillful use of our products for achieving ambitious goals. One such session presentation was delivered by Navin Ramineni, Director of Infrastructure Engineering at Salesforce.

For Navin, JFrog Artifactory is not just a repository manager. It’s also a sophisticated distribution mechanism for transferring artifacts across the globe to all of Salesforce’s data centers. Navin and the Infrastructure team support an army of engineers working on product development, database design, IT, quality, and security. For all of these groups, Navin needed a way to handle their challenging need for scale, security, and compliance, with the flexibility to manage multiple roll-out strategies on diverse infrastructures. What’s more, Navin needed to satisfy all these requirements while assuring a friendly developer experience.

Salesforce’s Need for Scale: By The Numbers

The JFrog Platform helps enable Salesforce to operate at global enterprise scale:

  • More than 200 instances of Artifactory, distributed across the globe
  • Up to 92 million requests for artifacts per day
  • Replication of 4TB of data across the globe through Artifactory’s multi-site replication
  • Ability to support 20,000 builds per day
  • Ability to promote and consume 150 artifacts in production per day

The Infrastructure engineering team supports a diverse set of development teams. So, they needed extensive support for a wide range of package types. They also saw an explosion in the number of artifacts, as the engineering teams evolved away from monolithic apps towards a microservices based architecture. They needed a tool that could support this evolution, and they found Artifactory’s scalability and support for over 27 package types to be the right fit.

Security and Compliance

Salesforce required a high degree of isolation between their R&D and production environments. Communication between these environments is governed by some strict rules. They also have special considerations for their government environments, which have to be both physically and logically isolated from the infrastructure used for Salesforce’s other customers. 

To support their security and compliance needs, Salesforce demanded a delivery mechanism that included a “wall” between their R&D and production environments. This wall consists of an intermediary staging repository which they call their “DMZ zone.” The movement of artifacts from the R&D to production via the DMZ is facilitated by Artifactory’s support for various styles of replication: push replication, pull replication, event-based replication, and scheduled replication. Navin explains details of this elaborate delivery mechanism in his session presentation.

Multiple Rollout Strategies

Salesforce’s R&D teams are highly distributed and some wanted to host their binary repositories directly on bare metal infrastructures in their on-prem data centers, while others preferred to keep them in a public cloud service. They also required their artifacts related to canary deployments to be exhaustively tested before changes were rolled out to production data centers. Rigorous quality validations and schedules meant that releases had to be staggered across data centers. Geolocation-based releases during off-peak hours are also important for ensuring minimal customer impact. These different roll-out strategies were easy to handle with Artifactory.

Hear the Story, From the Trenches

You can hear about Navin’s experience with Artifactory directly from him. His conference session from JFrog’s SwampUP 2020 conference was recorded and is available for on-demand access. If you want to know how to go big with Artifactory, you will definitely want to hear directly from him:

JFrog at Capital One: Approved, Compliant Software Distribution at Enterprise DevOps Scale https://jfrog.com/blog/jfrog-at-capital-one-approved-compliant-software-distribution-at-enterprise-devops-scale/ Thu, 09 Jul 2020 17:30:53 +0000 https://jfrog.com/?p=60658

JFrog User conference 2020


Capital One continuously innovates when it comes to enterprise DevOps patterns and compliance at scale.
During the recent swampUP 2020 conference, Wayne Chatelain, Sr. Manager, Software Engineer at Capital One, shared how they use the JFrog DevOps Platform to standardize on a central, production-approved software library – which Capital One calls the definitive library. This is the master copy of all software artifacts, both those developed within the company and those sourced from third parties.

Compliant and Ready for Software Distribution

The definitive software library needs to be:

  1. Secured and compliant – passing Xray’s OSS security vulnerability and license compliance checks, as well as Capital One’s extensive battery of further compliance checks.
  2. Vetted and approved for production use – ensuring only approved artifacts are used in the higher environments.
  3. Drift-resistant – Capital One went the extra mile to eliminate drift by making sure all production releases and environments use ONLY these approved artifacts.
  4. Distributed to runtime infrastructure edges across the globe for faster deployments and easy consumption at the last mile.
  5. Kept fresh and up-to-date – validating that all artifacts have recently passed the compliance checks and have not gone stale.
  6. Tightly integrated with their CI/CD pipelines and automatic compliance checks and processes.

Best Practices for a Production-Approved Software Library

During his talk, Wayne shared the capabilities of the JFrog Platform that Capital One leverages (Artifactory, Xray, JFrog Distribution, and Hybrid Edges), along with their architecture and the API calls that they use (taking advantage of JFrog’s extensive REST API, AQL and metadata capabilities) to achieve the fully automated, end-to-end process below for all builds:

  1. JFrog Distribution creates a Release Bundle (BOM) with all artifacts that need to be distributed.
  2. Capital One created a custom Certification API as part of their Distribution workflow. This API invokes custom rules and automated approval gates to determine if an artifact/build is approved for use. 
  3. All CI/CD pipelines automatically trigger these rules to certify every artifact/build. Certification checks can happen in parallel to other pipeline steps – such as performance/other tests. 
  4. Once an artifact has been certified, it is automatically published to Distribution Edge nodes, with validation that artifacts have reached their destination(s) and are available for download.
  5. Certified artifacts on production Edge nodes then get pulled by the deployment automation pipelines.
  6. To ensure curated artifacts remain in an approved state, Capital One automated the process of expiring old artifacts. This is done by automatically adding custom metadata to all artifacts during the certification process that indicates when they expire and need to be recertified (see the example after this list).
  7. An automated process removes expired artifacts from Edge nodes, notifies artifact owners, or runs a new build cycle to produce a new artifact that goes through the certification process. The process also detects when new versions of libraries are available and updates old versions.
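
For illustration, this kind of expiry metadata can be attached to an artifact with Artifactory’s Set Item Properties REST API. The sketch below uses hypothetical repository paths and property names, not Capital One’s actual schema:

# Mark an artifact as certified until a given date (hypothetical host, path and properties)
$ curl -u user:<password> -X PUT \
  "https://artifactory.example.com/artifactory/api/storage/libs-release-local/com/acme/app/1.0.0/app-1.0.0.jar?properties=certified=true;certified.expiry=2020-12-31"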

Watch the Capital One Talk

Watch the recording of the swampUP talk to learn how you too can take advantage of the pattern shared by Capital One to create your own definitive software library for your organization.

5 Best Practices for GoLang CI/CD https://jfrog.com/blog/5-best-practices-for-golang-ci-cd/ Wed, 24 Jul 2019 15:25:32 +0000 https://jfrog.com/?p=47295

For developers programming in long-established languages like Java, JavaScript or Python, the best way to build continuous integration and continuous delivery (CI/CD) workflows with Artifactory is pretty familiar. A mature set of dependency management systems for those languages and container solutions like Docker provide a clear roadmap.

But if you’re programming your applications in GoLang, how hard is it to practice CI/CD with the same kind of efficiency?

As it turns out, it’s gotten a lot easier, especially with some of the latest innovations in Go. With Artifactory’s native support for GoLang, the path to quality CI/CD is much clearer.




Best Practices

At JFrog, we’re big fans of GoLang, using it as the language for several of our flagship solutions. And we practice what we promote, too, using Artifactory at the heart of our CI/CD. Here are some of the practices we can recommend:

1. Use Go Modules

Unlike many established programming languages, initial releases of GoLang didn’t provide a common mechanism for managing versioned dependencies. Instead, the Go team encouraged others to develop add-on tools for Go package versioning.

That changed with the release of Go 1.11 in August 2018, which added support for Go modules. Now the native dependency management solution for GoLang, Go modules are collections of related Go packages that are versioned together as a single unit. This enables developers to share code without repeatedly downloading it.

A Go module is defined by a go.mod file in the project’s root directory, which specifies the module name along with its module dependencies. The module dependency is represented by module name and version number.

If you haven’t adopted Go modules yet, then you’ll need to follow the steps below:

  1. Use go mod init to generate a go.mod file if previous package managers were used.
  2. Use go mod tidy if other package managers were not used. This command will generate a populated go.mod file.
  3. If the module version is v2 or above, you will need to change the module name to add the corresponding version suffix (e.g., /v2), update import paths, use module-aware static analysis tools, and finally update code-generator files such as .proto files to reflect the new import path.

For example, here is a go.mod file for a publicly available Go module for a structured logger:

module github.com/sirupsen/logrus

require (
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/konsorten/go-windows-terminal-sequences v1.0.1
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/stretchr/objx v0.1.1 // indirect
	github.com/stretchr/testify v1.2.2
	golang.org/x/sys v0.0.0-20190422165155-953cdadca894
)

Note that the version numbers must conform to the semver convention (for example, v1.2.1 instead of 20190812, or 1.2.1) as required by the go command. You should also avoid using pseudo-versions like the one shown above (v0.0.0-yyyymmddhhmmss-abcdefabcdef); although commit-hash pseudo-versions were introduced to bring Go modules support to untagged projects, they should only be used as a fallback mechanism. If the dependencies you need have release tags, use those tags in your require statements.

In your own code, you can import this Go module along with other dependencies:

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"os"

// Public Go Module for logging
	log "github.com/sirupsen/logrus"
)

Then you can reference the module functions in your GoLang code:

   // Send text to the log
   log.Printf("Hello Log!")

2. Use GOPROXY to Ensure Immutability and Availability

Once you maintain your GoLang dependencies as versioned modules, you can keep them as immutable entities by setting up a GOPROXY. In this way, you can always guarantee what a specific version of a module contains, so your builds are always repeatable.

Will you be using Go modules that are publicly available open-source packages? Ones you create and share with the OSS community? Modules you keep private to just your team? Here are ways to do each, or all at once, and ensure that those unchanging versioned modules are always available for your builds.

GOLANG.ORG

The Go team at Google maintains a set of GOPROXY services for publicly available modules. As of Go 1.13, the module mirror proxy.golang.org is automatically set as the default GOPROXY when Go is installed or updated. This service is also supported by an index service for discovering new modules (index.golang.org), and a global go.sum database for authenticating module content (sum.golang.org).

The go.dev site is also maintained by the Go team and is the hub for Go users providing centralized and curated resources from across the Go ecosystem. Within this site, you can explore pkg.go.dev to search among thousands of open-source Go packages through a friendly UI.

Artifactory Go Registries

While it’s good to share, you’ll likely also need to restrict the use of some Go modules you create to within your organization. With Artifactory, you can set up both local and remote Go Registries, making public and private modules equally available to your builds.

You can proxy the Go mirror service as a remote Go registry in Artifactory, providing your build system with a local cache that further speeds up your builds and helps protect against network connection outages. Setting one up is easy.

Artifactory Go Registry

You can also configure one or more local Go registries for modules you need to maintain privately within your organization.

When you combine local and remote registries into a Virtual Repository, your builds can resolve Go module dependencies from both a public source and the modules you create and maintain privately.

You can learn more about how to configure your GOPROXY for Artifactory repositories through this blog post on Choosing Your GOPROXY for Go Modules.
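
For reference, pointing the go command at an Artifactory registry is a matter of setting GOPROXY. This is a minimal sketch; the server URL and the go-virtual repository name are placeholders, and depending on your setup you may also need to supply credentials (for example via a .netrc file):

# Persistently set GOPROXY (Go 1.13+)
$ go env -w GOPROXY="https://myartifactory.jfrog.io/artifactory/api/go/go-virtual,direct"

# Or scope it to a single shell session or CI job
$ export GOPROXY="https://myartifactory.jfrog.io/artifactory/api/go/go-virtual,direct"
$ go build ./...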

3. Use Artifactory Repository Layouts

When you build your app (using go build), where will the generated binary artifacts be stored? Your build process can push those intermediate results to repositories in a binary repository manager like Artifactory.

For these intermediate Go artifacts, you’ll use Artifactory’s generic repositories. Structuring those repositories in a smart way can help you control the flow of your binaries through development, test, and production with separate repositories for each of those stages. To help you do this, use the Custom Repository Layout feature of Artifactory with those generic repositories.

Create a custom layout that is similar to the following:

[org]/[name<.+>]/[module]-[arch<.+>]-[baseRev].[ext]

When you configure the custom layout, you should test the artifact path resolution to confirm how Artifactory will build module information from the path using the layout definitions.
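
For example, here is the kind of mapping such a layout is meant to produce; the path and token values below are purely illustrative, and because layout tokens are regex-based you should confirm the actual resolution with Artifactory’s layout tester:

# Path in the generic repository:
#   acme/go-services/orders-linux_amd64-1.4.2.zip
#
# Intended resolution against [org]/[name<.+>]/[module]-[arch<.+>]-[baseRev].[ext]:
#   org = acme, name = go-services, module = orders,
#   arch = linux_amd64, baseRev = 1.4.2, ext = zip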

4. Build Once and Promote

With your repositories properly set up for your Go builds, you can start to move them through your pipeline stages efficiently. 

Many software development procedures require a fresh complete or partial build at each staging transition of development, testing, and production. But as developers continue to change shared code, each new build introduces new uncertainties; you can’t be certain what’s in it. Even with safeguards that help ensure deterministic builds, this approach may still require repeating the same quality checks in each stage.

Instead, build your Go-based microservices once, then promote them to the next stage once promotion criteria such as tests or scans are met. If you plan to containerize your Go microservice, the same principle applies: build each Docker image once and promote it through a series of staging repositories. In this way, you can guarantee that what was tested is exactly what is being released to production.

Build Promotion
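
As a sketch of what promotion can look like with the JFrog CLI (the build name, repository names and tags below are placeholders, not a prescribed setup):

# Promote a tested build from a dev repository to a release repository
$ jfrog rt build-promote my-build 1 go-release-local --status="released" --comment="passed integration tests"

# Promote a Docker image between repositories instead of rebuilding it
$ jfrog rt docker-promote my-service docker-staging-local docker-release-local --copy --source-tag="1.0.0" --target-tag="1.0.0"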

5. Avoid Monolithic Pipelines

Instead of a single, monolithic pipeline for your app, it’s better to have several, with each one building, testing, scanning and promoting a different layer. This helps make your CI/CD process more flexible, with different teams responsible for each layer, and fosters a “fail fast” system that helps catch errors early.

For example, deploying a containerized application can typically be composed of five pipelines:

  1. Build the Go application using the JFrog CLI. This pipeline pulls the source code; builds the application with Artifactory repositories for dependencies and output binaries; and tests and promotes from a dev repo to a staging repository in Artifactory.
  2. Build a base layer of the containerized app, e.g. a Docker framework. Static or dynamic tags can be used based on the company’s risk and upgrade policies. For example, if a dynamic tag such as alpine:3.10 is used as the base layer of the Docker framework, then all patch updates will be included each time the Docker framework is built. This pipeline will include build, test and promote stages.
  3. Pull the promoted artifacts produced by the prior pipelines and build a containerized app. This will also have build, test, scan and promote stages.
  4. Build a Helm chart that points to a statically tagged & promoted version of a containerized app that was produced by prior pipeline.
  5. Deploy the containerized Go app to Kubernetes using the Helm chart.

Artifactory acts as your “source of truth” for your Go builds, providing a GOPROXY for both public and private modules, as well as storing compiled binaries. Using the JFrog CLI to build your Go app helps the build process interact with Artifactory, and capture the build info that makes your builds fully traceable. Here is a sample snippet:

// Configure Artifactory
jfrog rt c

// Configure the project's repositories
jfrog rt go-config

// Build the project with go and resolve the project dependencies
// from Artifactory.
jfrog rt go build --build-name=my-build --build-number=1

// Publish the package we build to Artifactory.
jfrog rt gp go v1.0.0 --build-name=my-build --build-number=1

// Collect environment variables and add them to the build info.
jfrog rt bce my-build 1

// Publish the build info to Artifactory.
jfrog rt bp my-build 1
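For the container pipelines (steps 2 and 3 in the list above), a similar hedged sketch applies; the registry host, repository names, and build names below are illustrative assumptions:

# Build the Docker image for the containerized Go app
docker build -t myregistry.jfrog.io/docker-dev-local/my-go-app:1.0.0 .

# Push the image to Artifactory and record it in the build info
jfrog rt docker-push myregistry.jfrog.io/docker-dev-local/my-go-app:1.0.0 docker-dev-local --build-name=my-docker-build --build-number=1

# Publish the build info, then promote once tests and scans pass
jfrog rt bp my-docker-build 1
jfrog rt build-promote my-docker-build 1 docker-staging-local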


To explore this further, take a look at our example demonstration files for GoLang CI/CD that we'll be using at GopherCon.

Boldly Go

As you can see, some simple best practices in managing your GoLang applications can ease the way to effective CI/CD. And Artifactory, as the essential manager of your software artifact supply chain, can play a central role in helping to bring those methods into your software development pipeline.

Any questions? We'll be happy to answer them at GopherCon; dive in with us to explore all the best ways to release your Go applications fast, and at top quality. Or start putting these practices to work with a free trial of Artifactory!

Steering Straight with Helm Charts Best Practices https://jfrog.com/blog/helm-charts-best-practices/ Thu, 28 Mar 2019 12:03:30 +0000 https://jfrog.com/?p=44886

Kubernetes, the popular orchestration tool for container applications, is named for the Greek word for “pilot,” or the one who steers the ship. But as in any journey, the navigator can only be as successful as the available map.

An application's Helm chart is that map: a collection of files, deployable from a Helm chart repository, that describes a related set of K8s resources. Crafting your Helm charts effectively will help Kubernetes maneuver through the shoals when it deploys containers into your production environment.

But there are other ways to go adrift too, as I found while developing publicly available K8s charts to deploy products. With every pull request, feedback from the Helm community helped steer me to the Helm charts best practices that offered the strongest results for both operating and updating containers.


Learn more: 10 Helm tutorials to start your Kubernetes journey


Here are some things to consider when writing K8s charts that will be used by the community or customers in production. Among the things you need to think about are:

  • What dependencies do you need to define?
  • Will your application need a persistent state to operate?
  • How will you handle security through secrets and permissions?
  • How will you control running kubelet containers?
  • How will you assure your applications are running and able to receive calls?
  • How will you expose the application’s services to the world?
  • How will you test your chart?

This guide offers some best practices to structure and specify your Helm charts that will help K8s deliver your container applications smoothly into dock.


Helm is for everyone: Get a free Guide to Helm


Getting Started

Before you start, make sure you are familiar with the essential procedures for developing Helm charts.

In this guide, we will create a Helm chart that follows the best practices we recommend to deploy a two-tier create, read, update, and delete (CRUD) application for the Mongo database using Express.js.

You can find the source code of our example application in the express-crud repository on GitHub.

Creating and filling the Helm chart

Let’s create our template helm chart using the helm client’s create command:

$ helm create express-crud

This will create a directory structure for an express-crud Helm chart.
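The generated skeleton looks roughly like this (the exact file list depends on your Helm version):

express-crud/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── ingress.yaml
    ├── service.yaml
    └── NOTES.txt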

To start, update the chart metadata in the Chart.yaml file that was just created. Make sure to add proper information for appVersion (the application version to be used as docker image tag), description, version (a SemVer 2 version string), sources, maintainers and icon.

apiVersion: v1
appVersion: "1.0.0"
description: A Helm chart for express-crud application
name: express-crud
version: 0.1.0
sources:
- https://github.com/jainishshah17/express-mongo-crud
maintainers:
- name: myaccount
  email: myaccount@mycompany.com
icon: https://github.com/mycompany17/mycompany.com/blob/master/app/public/images/logo.jpg
home: https://mycompany.com/

Defining Dependencies

If your application has dependencies, then you must create a requirements.yaml file in the Helm chart’s directory structure that specifies them. Since our application needs the mongodb database, we must specify it in the dependencies list of the requirements.yaml file we create.

A requirements.yaml for this example contains:

dependencies:
- name: mongodb
  version: 3.0.4
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: mongodb.enabled

Once the requirements.yaml file is created, run the dependency update command in the Helm client:

$ helm dep update

Creating deployment files

The deployment files of your Helm chart reside in the templates/ subdirectory and specify how K8s will deploy the container application.

As you develop your deployment files, there are some key decisions that you will need to make.

Deployment Object vs StatefulSet Object

The deployment file you create will depend on whether the application requires K8s to manage it as a Deployment Object or a StatefulSet Object.

A Deployment object is for stateless applications; it is declared in the file deployment.yaml and specifies the kind parameter as Deployment.

A StatefulSet object is for applications that are stateful, as used in distributed systems. It is declared in the file statefulset.yaml and specifies the kind parameter as StatefulSet.

Deployment:
  • Deployments are meant for stateless usage and are rather lightweight.
  • If your application is stateless, or if its state can be rebuilt from backend systems at startup, use a Deployment.

StatefulSet:
  • StatefulSets are used when state has to be persisted. They use volumeClaimTemplates on persistent volumes to keep state across component restarts.
  • If your application is stateful, or if you want to deploy stateful storage on top of Kubernetes, use a StatefulSet.
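For comparison, here is a minimal, hedged sketch of the stateful alternative, showing how volumeClaimTemplates request persistent storage per replica; the names and sizes are illustrative only, since our example sticks with a Deployment:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-stateful-app
spec:
  serviceName: my-stateful-app
  replicas: 1
  selector:
    matchLabels:
      app: my-stateful-app
  template:
    metadata:
      labels:
        app: my-stateful-app
    spec:
      containers:
      - name: app
        image: my-stateful-app:1.0.0        # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/app
  # One PersistentVolumeClaim is created per replica from this template
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi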


As this application does not need its state to be persisted, I am using a Deployment object. The deployment.yaml file has already been created by the helm create command.

We will use the chart's appVersion as the Docker image tag for our application. That allows us to upgrade the Helm chart to a new version of the application just by changing the value in Chart.yaml:

image: "{{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}"

Secret Versus ConfigMap

You will need to determine which of the credentials or configuration data is appropriate to store as secrets, and which can be in a ConfigMap.

Secrets are for sensitive information, such as passwords, that K8s stores and manages separately from your pod specs and container images (and can encrypt at rest when configured to do so).

A ConfigMap is a file that contains configuration information that may be shared by applications. The information in a ConfigMap is not encrypted, so should not contain any sensitive information.

Secret:
  • Used for confidential data.
  • Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a Docker image.
  • Example uses: API keys, passwords, tokens, and SSH keys.

ConfigMap:
  • Used for non-confidential data.
  • A ConfigMap lets you decouple configuration artifacts from image content to keep containerized applications portable.
  • Example uses: log rotators, configuration without confidential data.
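As a quick, hedged illustration of the difference (the object names and values here are made up):

# Non-sensitive settings belong in a ConfigMap...
$ kubectl create configmap app-config --from-literal=LOG_LEVEL=info

# ...while credentials belong in a Secret
$ kubectl create secret generic app-secrets --from-literal=API_KEY=changeme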


In this example, we will allow Kubernetes to pull Docker images from private Docker registries using image pull secrets.

This procedure relies on having a secret available to the Kubernetes cluster that specifies the login credentials for the repository manager. This secret can be created by a kubectl command line such as:

$ kubectl create secret docker-registry regsecret --docker-server=$DOCKER_REGISTRY_URL --docker-username=$USERNAME --docker-password=$PASSWORD --docker-email=$EMAIL

In the values.yaml file of your Helm chart, you can then pass the secret name to a value:

imagePullSecrets: regsecret

You can then make use of the secret to allow Helm to access the docker registry through these lines in deployment.yaml:

{{- if .Values.imagePullSecrets }}
 imagePullSecrets:
 - name: {{ .Values.imagePullSecrets }}
{{- end }}

For secrets available to the application, you should add that information directly to values.yaml.

For example, to configure our application to access mongodb with a pre-created user and database, add that information in values.yaml

mongodb:
 enabled: true
 mongodbRootPassword:
 mongodbUsername: admin
 mongodbPassword: 
 mongodbDatabase: test

Note that we do not hardcode default credentials in our Helm chart. Instead, we use logic to randomly generate a password when one is not provided via the --set flag or values.yaml.
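Here is a hedged sketch of what such logic can look like in a chart's own secret template, using Helm's built-in randAlphaNum and b64enc functions (this particular template is illustrative; in our example the bundled mongodb chart handles password generation):

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-app-secret
type: Opaque
data:
  {{- if .Values.mongodb.mongodbPassword }}
  # Use the password supplied via values.yaml or --set
  mongodb-password: {{ .Values.mongodb.mongodbPassword | b64enc | quote }}
  {{- else }}
  # Otherwise generate a random 10-character password
  mongodb-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}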

We will use a secret to pass mongodb credentials to our application, through these lines in deployment.yaml.

env:
- name: DATABASE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Release.Name }}-mongodb
      key: mongodb-password

You can control how the kubelet runs your containers either through specialized init containers or through container lifecycle hooks.


Init Containers:
  • Init containers are specialized containers that run before the app containers and can contain utilities or setup scripts not present in the app image.
  • A Pod can have one or more init containers, and they all run to completion before the app containers are started.
  • Example use: adding a wait that checks that dependent microservices are functional before the application proceeds.

Container Lifecycle Hooks:
  • Containers can use the lifecycle hook framework to run code triggered by events during their management lifecycle. A container can have at most one PostStart and one PreStop hook.
  • The PostStart hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT, and no parameters are passed to the handler. Example use: moving files mounted from a ConfigMap or Secret to a different location, or updating a configuration file in the same pod with a Service IP.
  • The PreStop hook is called immediately before a container is terminated. It is blocking (synchronous), so it must complete before the call to delete the container is sent. Example use: gracefully shutting down the application.
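Since our example below only uses an init container, here is a separate, hedged sketch of what lifecycle hooks look like on a container spec (the commands and paths are placeholders):

containers:
- name: express-crud
  image: "{{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}"
  lifecycle:
    postStart:
      exec:
        # Example: copy a mounted config file into place after the container starts
        command: ["/bin/sh", "-c", "cp /mnt/config/app.conf /etc/app/app.conf"]
    preStop:
      exec:
        # Example: give the app a moment to drain connections before termination
        command: ["/bin/sh", "-c", "sleep 10"]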


In our example, add these initContainers specifications to deployment.yaml to hold the start of our application until the database is up and running:

initContainers:
- name: wait-for-db
  image: "{{ .Values.initContainerImage }}"
  command:
  - 'sh'
  - '-c'
  - >
    until nc -z -w 2 {{ .Release.Name }}-mongodb 27017 && echo mongodb ok;
      do sleep 2;
    done

Adding Readiness and Liveness Probes

It's often a good idea to add a readiness probe and a liveness probe to check the ongoing health of the application. If you don't, the application could fail in a way where it still appears to be running but doesn't respond to calls or queries.

These lines in the deployment.yaml file will add those probes to perform periodic checks:

livenessProbe:
 httpGet:
   path: '/health'
   port: http
 initialDelaySeconds: 60
 periodSeconds: 10
 failureThreshold: 10
readinessProbe:
 httpGet:
   path: '/health'
   port: http
 initialDelaySeconds: 60
 periodSeconds: 10
 failureThreshold: 10

Adding RBAC Support

These procedures will add role-based access control (RBAC) support to our chart when an application requires it.

Step 1: Create a Role by adding the following content in a role.yaml file:
A Role can only be used to grant access to resources within a single namespace.

{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
 labels:
   app: {{ template "express-crud.name" . }}
   chart: {{ template "express-crud.chart" . }}
   heritage: {{ .Release.Service }}
   release: {{ .Release.Name }}
 name: {{ template "express-crud.fullname" . }}
rules:
{{ toYaml .Values.rbac.role.rules }}
{{- end }}

Step 2: Create RoleBinding by adding the following content in a rolebinding.yaml file:
A ClusterRole can be used to grant the same permissions as a Role, but because they are cluster-scoped, they can also be used to grant access to:

  • cluster-scoped resources (like nodes)
  • non-resource endpoints (like “/healthz”)
  • namespaced resources (like pods) across all namespaces

{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
 labels:
   app: {{ template "express-crud.name" . }}
   chart: {{ template "express-crud.chart" . }}
   heritage: {{ .Release.Service }}
   release: {{ .Release.Name }}
 name: {{ template "express-crud.fullname" . }}
subjects:
- kind: ServiceAccount
  name: {{ template "express-crud.serviceAccountName" . }}
roleRef:
 kind: Role
 apiGroup: rbac.authorization.k8s.io
 name: {{ template "express-crud.fullname" . }}
{{- end }}

Step 3: Create a ServiceAccount by adding the following content in a serviceaccount.yaml file:

A service account provides an identity for processes that run in a Pod.

{{- if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
 labels:
   app: {{ template "express-crud.name" . }}
   chart: {{ template "express-crud.chart" . }}
   heritage: {{ .Release.Service }}
   release: {{ .Release.Name }}
 name: {{ template "express-crud.serviceAccountName" . }}
{{- end }}

Step 4: Use a helper template to set the ServiceAccount name.
We will do that by adding the following content to the _helpers.tpl file:

{{/*
Create the name of the service account to use
*/}}
{{- define "express-crud.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "express-crud.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
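The deployment can then reference this helper so the pod runs under the correct identity; a minimal, hedged sketch of the relevant lines in deployment.yaml:

spec:
  template:
    spec:
      serviceAccountName: {{ template "express-crud.serviceAccountName" . }}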

Adding a Service

Now it’s time to expose our application to the world through a service.

A service allows your application to receive traffic through an IP address. Services can be exposed in different ways by specifying a type:

  • ClusterIP: The service is only reachable by an internal IP from within the cluster.
  • NodePort: The service is accessible from outside the cluster through the node IP and the NodePort.
  • LoadBalancer: The service is accessible from outside the cluster through an external load balancer. You can also place an Ingress in front of the application.


We will do that by adding the following content to service.yaml:

apiVersion: v1
kind: Service
metadata:
 name: {{ template "express-crud.fullname" . }}
 labels:
   app: {{ template "express-crud.name" . }}
   chart: {{ template "express-crud.chart" . }}
   release: {{ .Release.Name }}
   heritage: {{ .Release.Service }}
spec:
 type: {{ .Values.service.type }}
 ports:
   - port: {{ .Values.service.externalPort }}
     targetPort: http
     protocol: TCP
     name: http
 selector:
   app: {{ template "express-crud.name" . }}
   release: {{ .Release.Name }}

Note that in the above, for our service type we reference a setting in our values.yaml:

service:
  type: LoadBalancer
  internalPort: 3000
  externalPort: 80
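Because the type comes from values.yaml, it can be overridden per environment at install time; a hedged example using the same Helm 2 syntax as the rest of this post:

$ helm install --name test1 --set service.type=NodePort ./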

Values.yaml Summary

Defining many of our settings in a values.yaml file is a good practice that helps keep your Helm charts maintainable.

This is how the values.yaml file for our example appears, showing the variety of settings we define for many of the features discussed above:

# Default values for express-mongo-crud.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## Role Based Access Control
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
rbac:
 create: true
 role:
   ## Rules to create. It follows the role specification
   rules:
   - apiGroups:
     - ''
     resources:
     - services
     - endpoints
     - pods
     verbs:
     - get
     - watch
     - list

## Service Account
## Ref: https://kubernetes.io/docs/admin/service-accounts-admin/
##
serviceAccount:
 create: true
 ## The name of the ServiceAccount to use.
 ## If not set and create is true, a name is generated using the fullname template
 name:

## Configuration values for the mongodb dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/mongodb/README.md
##
mongodb:
 enabled: true
 image:
   tag: 3.6.3
   pullPolicy: IfNotPresent
 persistence:
   size: 50Gi
 # resources:
 #  requests:
 #    memory: "12Gi"
 #    cpu: "200m"
 #  limits:
 #    memory: "12Gi"
 #    cpu: "2"
 ## Make sure the --wiredTigerCacheSizeGB is no more than half the memory limit!
 ## This is critical to protect against OOMKill by Kubernetes!
 mongodbExtraFlags:
 - "--wiredTigerCacheSizeGB=1"
 mongodbRootPassword:
 mongodbUsername: admin
 mongodbPassword:
 mongodbDatabase: test
#  livenessProbe:
#    initialDelaySeconds: 60
#    periodSeconds: 10
#  readinessProbe:
#    initialDelaySeconds: 30
#    periodSeconds: 30

ingress:
 enabled: false
 annotations: {}
   # kubernetes.io/ingress.class: nginx
   # kubernetes.io/tls-acme: "true"
 path: /
 hosts:
   - chart-example.local
 tls: []
 #  - secretName: chart-example-tls
 #    hosts:
 #      - chart-example.local

initContainerImage: "alpine:3.6"
imagePullSecrets:
replicaCount: 1

image:
 repository: jainishshah17/express-mongo-crud
 # tag: 1.0.1
 pullPolicy: IfNotPresent

service:
 type: LoadBalancer
 internalPort: 3000
 externalPort: 80

resources: {}
 # We usually recommend not to specify default resources and to leave this as a conscious
 # choice for the user. This also increases chances charts run on environments with little
 # resources, such as Minikube. If you do want to specify resources, uncomment the following
 # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
 # limits:
 #  cpu: 100m
 #  memory: 128Mi
 # requests:
 #  cpu: 100m
 #  memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

Testing and Installing the Helm Chart

It’s very important to test our Helm chart, which we’ll do using the helm lint command.

$ helm lint ./

## Output
==> Linting ./
Lint OK

1 chart(s) linted, no failures

Use the helm install command to deploy our application on Kubernetes using the Helm chart:

$ helm install --name test1 ./ 

## Output
NAME:   test1
LAST DEPLOYED: Sat Sep 15 09:36:23 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME           DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
test1-mongodb  1        1        1           0          0s

==> v1beta2/Deployment
test1-express-crud  1  1  1  0  0s

==> v1/Secret
NAME           TYPE    DATA  AGE
test1-mongodb  Opaque  2     0s

==> v1/PersistentVolumeClaim
NAME           STATUS   VOLUME    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
test1-mongodb  Pending  standard  0s

==> v1/ServiceAccount
NAME                SECRETS  AGE
test1-express-crud  1        0s

==> v1/Service
NAME                TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
test1-mongodb       ClusterIP     10.19.248.205         27017/TCP     0s
test1-express-crud  LoadBalancer  10.19.254.169      80:31994/TCP  0s

==> v1/Role
NAME                AGE
test1-express-crud  0s

==> v1/RoleBinding
NAME                AGE
test1-express-crud  0s

==> v1/Pod(related)
NAME                                READY  STATUS    RESTARTS  AGE
test1-mongodb-67b6697449-tppk5      0/1    Pending   0         0s
test1-express-crud-dfdbd55dc-rdk2c  0/1    Init:0/1  0         0s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w test1-express-crud'
  export SERVICE_IP=$(kubectl get svc --namespace default test1-express-crud -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo https://$SERVICE_IP:80

Running the above helm install command will produce an EXTERNAL-IP for the LoadBalancer service. You can use this IP address to access the running application.

This is how our application appears when run:

Result

Wrapping Up

As you can see from this example, Helm is an extremely versatile system that allows you a great deal of flexibility in how you structure and develop a chart. Doing so in the ways that match the conventions of the Helm community will help ease the process of submitting your Helm charts for public use, as well as making them much easier to maintain as you update your application.

The completed Helm charts for this example project can be found in the express-crud repo on GitHub, and you may review these functioning files to help you to more thoroughly understand how they work.

To explore more examples, you can review my sample repository of Helm charts for deploying products to Kubernetes.

