DevOps Archives | JFrog: Release Fast Or Die

Proudly Announcing JFrog’s Full Conformance to OCI v1.1
https://jfrog.com/blog/full-conformance-to-oci-v1-1/ | Tue, 24 Sep 2024

JFrog has long supported standards widely used by developers, including OCI container images. We started with our OCI-compliant Docker registry, then followed up with dedicated JFrog Artifactory OCI repositories. In our continued commitment to developer freedom of choice, we’re excited to take another leap forward.

JFrog is now fully conformant to OCI v1.1. Source: OCI Conformance Page

JFrog is now fully certified to the OCI v1.1 standard, and we are proud to be one of only two vendors to have made this commitment so far. Available from JFrog Artifactory v7.90.1 for OCI and Helm OCI repositories, this conformance means JFrog users now get everything they need for their OCI packages.

What is OCI v1.1?

The Open Container Initiative (OCI) is a Linux Foundation project providing open standards for container formats and runtimes, and is widely adopted by developers. It seeks to optimize industry-wide interoperability while maintaining performance.

Originally announced in July 2023, OCI v1.1 is the latest release of the OCI image, runtime, and distribution specifications. It offers developers more flexibility and integrity, plus the ability to link images with one another (using the subject field). Additionally, the new artifactType field makes it easy to label artifacts by type so they can be filtered and parsed.
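As a hedged sketch of those two fields (the media types, digest, and sizes below are illustrative placeholders, not values from a real registry), an OCI v1.1 manifest for an SBOM attached to another image might carry:

```python
# Illustrative sketch of an OCI v1.1 artifact manifest (placeholder values).
# "subject" links this artifact to the image it describes; "artifactType"
# declares what kind of artifact it is, so clients can filter on it.
sbom_manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "artifactType": "application/spdx+json",   # this artifact is an SBOM
    "subject": {                               # the image it refers to
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "digest": "sha256:" + "0" * 64,        # placeholder digest
        "size": 1234,
    },
    # required config/layers descriptors omitted for brevity
}
```

A registry that implements OCI v1.1 indexes the subject digest, which is what lets it later answer Referrers API queries for that image.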

Referrers API

One of the most interesting features in OCI v1.1 is the Referrers API, which offers a convenient way to retrieve and filter the relationships between images using subject and artifactType. With the Referrers API, and by leveraging JFrog Artifactory, you can transfer these relationships between different repositories with ease.

How Referrers API Works

The best way to illustrate the power of Referrers API is with examples.

Example 1:
Fetching All Subjects

Let’s say we have an image called Image A. Two other images, Image B and Image C, are related to Image A. To define these relationships, we can use the subject field to point both Image B and Image C to Image A.

Querying the Referrers API for Image A then returns both Image B and Image C as related artifacts.
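As a minimal sketch (digests are placeholders; against a real registry the query is a GET on the referrers endpoint for Image A's digest), the response is an OCI image index listing every artifact whose subject points at Image A:

```python
# Hand-written stand-in for a Referrers API response for "Image A":
# an OCI image index whose "manifests" entries describe Image B and
# Image C, the two artifacts whose "subject" points at Image A.
image_a_referrers = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.index.v1+json",
    "manifests": [
        {"digest": "sha256:" + "b" * 64},  # Image B
        {"digest": "sha256:" + "c" * 64},  # Image C
    ],
}

related = [m["digest"] for m in image_a_referrers["manifests"]]
assert len(related) == 2  # both Image B and Image C refer to Image A
```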

Example 2:
Fetching Specific Artifact Types

Now, let’s say you want to retrieve only specific artifact types. Let’s assume Image B is an SBOM artifact, and Image C is a signature artifact. Using the artifactType field, you can denote these references.

Using the Referrers API, you can retrieve the signature artifacts related to Image A, and the API will correctly identify Image C as a signature related to Image A.
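Client-side, that filtering can be sketched as below; the media types are illustrative stand-ins (an SBOM type for Image B and a hypothetical signature type for Image C), and the spec also allows server-side filtering via an artifactType query parameter:

```python
# Filter a referrers listing by artifactType. Digests are placeholders;
# the signature media type is a hypothetical example, not a real standard.
referrers = [
    {"digest": "sha256:" + "b" * 64, "artifactType": "application/spdx+json"},          # Image B: SBOM
    {"digest": "sha256:" + "c" * 64, "artifactType": "application/vnd.example.signature"},  # Image C: signature
]

def filter_by_type(manifests, artifact_type):
    """Keep only referrers whose artifactType matches."""
    return [m for m in manifests if m.get("artifactType") == artifact_type]

signatures = filter_by_type(referrers, "application/vnd.example.signature")
assert [s["digest"] for s in signatures] == ["sha256:" + "c" * 64]  # only Image C
```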

As these two examples show, the Referrers API in OCI v1.1 makes it easy to access valuable information about the relationships between associated OCI packages.

OCI Containers and Repositories in JFrog Artifactory

JFrog’s support of OCI means you get seamless access to everything OCI within Artifactory. You can use OCI natively, securely, and reliably, all within a single point of truth.

For more information on how to get started, including sample code snippets, visit our Referrers API page on the JFrog Help Center. To see how JFrog allows you to do more with your OCI repos, take a tour of our platform or speak to a JFrog team member.

Trusted Software Delivered!
https://jfrog.com/blog/trusted-software-delivered-swampup-2024/ | Fri, 20 Sep 2024

At swampUP 2024 in Austin just a few days ago, we explored the EveryOps Matters approach with the crowd of developers, driven by a consolidated view from their companies’ boardrooms and 2024 CIO surveys.

The message was clear: “EveryOps” isn’t just a strategy or tech trend — it’s a fundamental, ongoing mindset shift that must drive developers’ proactive actions in an ever-evolving software landscape.

It’s not optional; it’s essential. Today’s developers are not just contributors; they are the makers and guardians of this transformation, the ones who will define the future of fast and trusted software delivery.

Over the past several years, developers and dev teams have been handed unprecedented challenges: deliver faster with DevOps, carry the company’s security on your shoulders, engineer platforms for internal use, and now deliver machine learning and AI components as part of applications. DevOps, DevSecOps, MLOps… EveryOps is now the reality of every development team.

But as this world moves quickly forward, how can teams adapt? What is required to stay ahead of the curve? How can we responsibly drive our company initiatives, while taking advantage of the many technologies available?

The answer, as they say, is to “follow the money.” What are the priorities of the CIOs and CISOs of the world as they look to shape these new realities? If we understand their priorities, we can get a glimpse into where the investment is headed. These budgets and decisions will force developers’ hands in the coming years, as the choice for development shops becomes to lead, follow, or get out of the way.

Recent CIO surveys by Wells Fargo, J.P. Morgan, and KeyBanc reveal that if economic conditions worsened, 30% of companies would first look to eliminate headcount, with the next significant cuts coming in hardware and then enterprise software. What does this tell us?

First, notice that physical “commodities” are the first to go, then software costs. But second, it immediately begs the question: where will the money go as companies look to reinvest?

Interestingly, GenAI and the productivity gains it may provide are a top investment for companies, followed by software and security. So if developers are to “follow the money” and expand their careers in an EveryOps world, understanding these areas, embracing them, learning about them, and expanding their horizons is essential.

To make this reality practical, at swampUP 2024, the community and partners brought amazing innovations and new ideas to the market.

First, it is extremely clear that only a holistic platform that understands the entire development process can meet the challenges of today. In fact, Gartner predicts that by 2027, 80% of companies will adopt platforms for DevOps and DevSecOps versus point solutions — only 25% of companies have fully migrated to a platform today. To this end, JFrog announced the availability of JFrog ML, our new solution for building and deploying machine learning models as part of your trusted pipeline. With the recent acquisition of Qwak AI as a core component, JFrog ML helps developers and data scientists collaborate to build, train, deploy, and manage secure ML models using the same practices as DevOps.

Second, it is also very clear that understanding the complete lineage of software from code to runtime is mission-critical for an organization from a pipeline management and security perspective. We were excited to have the CEO of GitHub, Thomas Dohmke, join us virtually and explain how GitHub and JFrog are serving the community by providing deep integrations in platform functionality, including SSO/identity, projects, security dashboards, and more. Finally, the right hand and the left hand know what the other is doing.

Third, we were also proud to announce at swampUP the ongoing integration of the JFrog Platform with GitHub Copilot. As the leading AI tool assisting developers today, Copilot Chat can now query JFrog about package curation and other tasks informed by your company’s policies. This landmark partnership continues to drive massive developer value and DevSecOps team efficiencies through the consolidation of tooling, dashboards, and AI.

Fourth, to assist security teams, JFrog took EveryOps to the next level when it comes to security. We were excited to announce a complete “shift right” in security with the launch of JFrog Runtime Security. This exciting capability allows companies not just to secure and manage their pipeline, but to detect vulnerabilities and gain insight into software where it runs in production environments. For the first time in a single platform, companies can trace the lineage of software from code through production and back, with deep metadata and a full understanding of that lineage.

Finally, JFrog announced a new chapter in bringing the community together in a single source of truth: a collaboration with the world leader in AI, NVIDIA. JFrog and NVIDIA announced the ability to take NVIDIA NIM packages (microservices that provide GPU-optimized ML models) into company pipelines as first-class citizens of JFrog Artifactory. Now joint customers can utilize optimized models to get the most out of their NVIDIA investments in the platform that is already the system of record for their company: the JFrog Platform.

All of these announcements showcased one truth: the future of development is in secure, automated, AI-component-delivering systems, and it takes all of us across digital teams to make it a reality. The new EveryOps-powered world doesn’t require dozens of new point solutions — it requires a single, integrated platform that gives companies an arsenal of functionality.

These groundbreaking announcements would not have become a GA reality without the dedication of our FROGs — the JFrog employees. They not only serve the community but are also inspired and driven by its collective wisdom, fully committed to alleviating developer pain points in an EveryOps world.

We will reconvene in Napa Valley in 2025 at the next swampUP to celebrate everything you’re driving forward today.

I look forward to seeing what you create and innovate with to build the next generation of applications. This is an exciting time, and I’m honored you’re partnering with us, trusting us, and building with the community on this amazing ride. You will adapt, evolve, and expand your skills to meet the world’s new challenges; I’m sure of it.

May the Frog be with you!

Shlomi

Point Solutions vs Platform – Which is Best to Secure your Software Supply Chain?
https://jfrog.com/blog/point-vs-platform-for-software-supply-chain-security/ | Thu, 25 Jul 2024

According to Gartner, almost two-thirds of U.S. businesses were directly impacted by a software supply chain attack. So it’s not a question of whether to secure your software supply chain, but rather what is the most effective and efficient way to provide end-to-end security during all phases of the software development lifecycle (SDLC).


Download the Ebook

The Problem

The ways attackers can exploit the software supply chain are endless, and attempting to cover every possible scenario is practically impossible. To provide effective security for all seven stages of development, security professionals need to identify the potential sources of vulnerabilities, detect potential threats, prioritize them, and remediate only what is truly necessary to prevent exploitation.

While there are many effective security tools for providing protection at each stage of development, having too many tools can result in:

  • Lack of centralized management
  • No communications between tools
  • Siloed security and management operations
  • Multiple sources of truth
  • Limited visibility across the entire SDLC
  • Slow response time to breaches in security

The price enterprises pay for multiple point solutions isn’t just financial; it also includes the complexity and hassle of vendor evaluations, procurement, integration, maintenance, and troubleshooting across so many disparate tools.


The Solution

One of the best ways to battle tool sprawl is to take a platform approach. This helps unify the software supply chain, streamline security operations, increase developer efficiency, and reduce the risks associated with open source packages. It also enables scalability, while providing a single system of record and end-to-end traceability for threat analysis, licensing, compliance, and governance.

To make the right choices for securing your software supply chain, it’s important to prioritize your security requirements and understand the impact that tool selection or platform adoption can have on your software development operations and protecting your business.

That’s why we are pleased to offer our Secure the Software Supply Chain the Hard Way, or Choose the Platform Way eBook. Please download your free copy and start making smarter choices on the best way to protect your software business now and in the future.

Expanding Artifactory’s Hugging Face Support with Datasets
https://jfrog.com/blog/expanding-artifactory-hugging-face-support-with-datasets/ | Tue, 16 Jul 2024

When working with ML models, it’s fair to say that a model is only as good as the data it was trained on. Training and testing models on quality datasets of an appropriate size is essential for model performance.

Because of the intricate link between a model and the data it was trained on, it’s also important to be able to store datasets and versioned models together. This is why we’ve expanded Artifactory’s Hugging Face repositories to natively support both Hugging Face datasets and models.

Before we dive into Artifactory’s dataset support, let’s do a quick recap on Hugging Face.

What are Hugging Face Datasets?

Creating an appropriate dataset for model training can be difficult and costly, leading many AI developers to look for existing datasets they can use as-is or modify with little effort. Thankfully, the community has responded by creating and publishing thousands of datasets for use.

Hugging Face has emerged as a popular hub for AI developers and NLP practitioners to use for this very type of collaboration. It serves as a public registry offering pre-trained models, datasets, and tools.

Artifactory: your private data registry

As of version 7.90.1, Artifactory supports Hugging Face datasets in both remote (proxy) and local repositories, using the native Hugging Face Python library. Using Artifactory to cache datasets from Hugging Face ensures that your datasets, along with your models, are always consistent and reliable, and that they can be retrieved with the best performance.

Artifactory Hugging Face local repositories let you define fine-grained access control over your models and datasets, and provide a single place to resolve and manage both assets.

Dataset files stored in the same remote repository as models

Set Me Up for resolving datasets with Artifactory Hugging Face repos
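As a sketch of what that setup can look like (the instance URL and repository key below are hypothetical placeholders; the exact endpoint comes from the Set Me Up dialog in your own instance), the native client is pointed at Artifactory via the HF_ENDPOINT environment variable:

```python
import os

# Hypothetical Artifactory instance and Hugging Face repository key --
# substitute the endpoint from your own Set Me Up dialog.
ARTIFACTORY_HF = "https://myorg.jfrog.io/artifactory/api/huggingfaceml/hf-remote"

# The native Hugging Face client libraries honor HF_ENDPOINT, so dataset
# and model resolution now goes through (and is cached by) Artifactory.
os.environ["HF_ENDPOINT"] = ARTIFACTORY_HF

# With the endpoint set, downloads resolve through the proxy, e.g.:
#   from datasets import load_dataset
#   ds = load_dataset("imdb", split="train")
```

Authentication (an identity token for the repository) is also required in practice; the Set Me Up dialog provides the token setup for your account.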

Hugging Face stores datasets within its platform in one of two ways. In the first, the actual data files sit inside the Hugging Face registry. In the second, the data file’s format cannot sit directly within Hugging Face, so a Python script is provided that downloads the data from an external location. In those instances, JFrog does not cache the file and instead offers a pass-through.

What next?

We’ve got even more in the works! Up next for our Hugging Face support: better linking between the model and the dataset used to train it.

In the meantime, you can log into your JFrog account to start managing your datasets with JFrog. Not an Artifactory user, but want to see how it can help manage your machine learning models and datasets? Start a free 14-day trial.

Embracing Complexity in DevOps: Software Supply Chain State of the Union 2024
https://jfrog.com/blog/devops-software-supply-chain-state-of-the-union-2024/ | Fri, 05 Jul 2024

As we delve deeper into the era of software reliance, the 2024 JFrog Software Supply Chain report emerges as required reading for developers and DevOps professionals on the front line of today’s technological innovations.

Read the Report

DevOps and development themes from the 2024 report

The report combines Artifactory data, analysis from the JFrog Security Research team, and survey responses from 1,200 Security, Development, and Ops professionals. Its goal is to understand the state of the software supply chain and what’s required to secure it given its increasing complexity.

Highlights from the report:

  1. Increasing diversity in the tech stack: There is increasing diversity in the technology stacks used across industries. The research shows that 53% of organizations utilize between 4 to 9 programming languages, while 31% use more than 10. This variety not only adds complexity to the software supply chain but significantly increases the attack surface, making it critical for developers and DevOps teams to adopt the right tools and processes to effectively manage dependencies and apply security across a vast array of technologies. That said, the top technologies that organizations use haven’t changed much from year to year. Maven, PyPI, NPM, and Docker continue to be the most popular packaging technology ecosystems used.
  2. Explosion of Open Source Components: While open source continues to dominate software development, it also poses significant security challenges. The research tracks an upsurge in CVEs, particularly in npm and PyPI. For developers, understanding which vulnerabilities are critical and which are not can significantly reduce wasted effort, allowing them to focus on innovation rather than remediation. More specifically, the report also explores what to look out for and how to continue to use open source software in a safe manner – even when there’s more malicious activity than ever.
  3. Shifting security practices: A major takeaway from the report is the shift in how organizations approach security: from a disjointed afterthought to an integral part of the development process. With 89% of surveyed professionals stating their organizations have adopted a security framework, there’s a clear push towards more proactive security measures. For DevOps, this means integrating security early in the development lifecycle, a practice known as “shifting left,” which can greatly reduce vulnerabilities in production. Like it or not, the security challenge isn’t going away. In fact, 25% of developer time is currently spent on security remediation by the majority of organizations surveyed. But if you can approach development with security best practices in mind, you’ll be able to save time and be a driving force, rather than functioning reactively or “taking orders” from the security team.
  4. The role of AI and ML: The influx of artificial intelligence (AI) and machine learning (ML) in development processes is transforming how we build software. The report provides insights into how organizations are leveraging AI to enhance security protocols and streamline development. For developers particularly, understanding these tools can be a game-changer in terms of reducing time on task and enhancing code quality. For instance, most teams today are using AI for security scanning, but not code creation, likely due to security concerns. But there are considerations and steps that organizations can take to expand their use of Gen AI tools, which enables them to scale coding responsibly.
  5. Managing the software supply chain: At its center, the report sheds light on the expanding software supply chain and the associated risks. It’s no longer just about the code you write; it’s about the entire ecosystem from which your code derives and the pathways it takes to production and beyond. For DevOps teams, this means a significant responsibility to manage this supply chain effectively, ensuring compliance, security, and efficiency. The report provides information and resources for further learning in an effort to prepare organizations to succeed in a continuously evolving industry.

In sum

For DevOps and development teams, the 2024 JFrog Software Supply Chain report can be used as a roadmap to navigating the complexities of modern software development. It highlights critical challenges and offers insight into how enterprise organizations can leverage new technologies and practices to mitigate risks and also drive innovation. As we navigate an increasingly interconnected world, understanding and implementing the strategies outlined in this report will be key to securing and optimizing our software supply chains.

Removing Friction Between DevOps and Security is Easier than you Think
https://jfrog.com/blog/removing-friction-between-devops-and-security-is-easier-than-you-think/ | Wed, 08 May 2024
Note: This post is co-authored by JFrog and Sean Wright and has been published on Sean Wright’s blog.

Removing friction between DevOps and Security teams can only lead to good things. By pulling in the same direction, DevOps can make sure developers continue to work with minimal interruption, while automation and background processes make security more effective and consistent than before. Security teams, in turn, gain the visibility and understanding of the software development life cycle (SDLC) needed to improve developer experience and reduce risks and incidents for the organization.

In the first blog post we took a look at the friction between DevOps and Security in software development and the negative business impact such friction can entail. We ended with the question: Does there have to be friction between DevOps and Security at all? And that’s exactly what we intend to find out in this post.

Bringing DevOps and Security Together

The journey toward an optimal model of collaboration and software development efficiency (velocity) starts with the simplification and cross-usage of tools. We’re seeing a common trend where executives are looking to improve their software supply chain processes by streamlining the many tools they use. Now the time has come for combined groups of development and security engineers to also embrace this idea, move away from point solutions, and switch to an integrated, efficient tools architecture.

It’s pretty clear that security and development teams need to leverage multiple data points and perspectives to cater to their specific roles. Some of the tools that generate this data are unique to their functional needs, yet there are also common data sets, and operational frameworks that can align.

The key to successful collaboration doesn’t lie in a myopic view that considers each solution in isolation, but rather in considering the information that flows between the teams and the overlap of solutions and tools. The winning approach is a holistic DevSecOps architecture that combines the tools, data flows, and processes of both groups.

We’re seeing more and more companies – across all industries and geographies – adopting a unified software supply chain platform approach. A platform offers a reliable solution for managing and securing the SDLC, while keeping all relevant teams connected and productive.

Ask security teams what they care about; their first response will probably not be the developer’s experience or how many code fixes are required, just as developers are less concerned about how security threats are detected or how many vulnerabilities were remediated. The trick is to put all the information together and come up with insights that benefit both teams with a holistic view and understanding of what’s happening at each stage of the development process.

A good example is to focus on one aspect: detecting a potential vulnerability. As developers are coding, they’re actively warned about a potential vulnerability, with a suggestion for how to mitigate the risk. When the package is marked ready for production, automated scanning is launched and the number of potential vulnerabilities is immediately reduced, since the problem was handled early in the development process. From Security’s perspective, this means fewer alerts to deal with, less back and forth with the developer, and fewer interruptions in the development of new solutions.

JFrog’s recent Software Supply Chain State of the Union report found that developers spend, on average, 25% of their time remediating vulnerabilities. Not only is the volume of vulnerabilities increasing, but it’s becoming harder to triage them effectively (remember the recent issue with NVD not analyzing CVEs?). So imagine how beneficial it would be – across the organization – to reduce this by 10-15%. At the same time, the security posture of the SDLC would improve by reducing alert fatigue and allowing the teams to focus on their shared mission of securing the delivery of software.

PS: Remember to track metrics when launching this improvement.

One of the keys to making this happen is deploying a platform-based solution that provides a common framework for managing and securing the software supply chain, integrating the teams and enabling their core missions. The advantage of having a single platform is getting the right information at the right time to all stakeholders – in the language and context they understand. It makes it easy to create automated workflows that minimize conflicts, provide continuous monitoring and alerting, and deliver one source of truth for developers, DevOps and Security.

The Visibility Advantage

What’s the one thing CISOs and security teams always ask for? Visibility.

A software supply chain platform enables visibility across the entire SDLC, empowering organizations to standardize, monitor, secure and automate the process of delivering trusted software. By utilizing a platform, you gain a central point of truth with accelerated triage and prioritization for easily identifying the source (and even a specific developer) associated with an introduced vulnerability.

The challenge of relying on point solutions without a central software supply chain platform is that blind spots can let incidents fall through the cracks. Leveraging a platform architecture enables companies to enjoy both agility and scalability, with all aspects managed from a single pane of glass. This means viewing all risks from a single place, without having to correlate information from numerous tools.

A platform architecture encourages automated security measures at critical points in the development process, along with continuous monitoring in production that reduces the need for manual work. For example, when developers want to use open source packages, the packages are automatically scanned for vulnerabilities. If a threat is discovered or introduces unacceptable operational risk, the package can’t be used for the build. This is a great real-life picture of how to improve security without hassling developers or slowing down operations. Simply put, it’s a win-win for both DevOps and Security.

Furthermore, if a CVE is detected in a binary in production, a platform can provide you with the full background of that artifact – including the developer responsible for introducing it – and recommend a short-term mitigation (e.g., change the way the function is called) and a long-term fix (e.g., upgrade the package). Once the vulnerability is known, Security can analyze the threat and decide how likely it is to be exploited in a real-world scenario.

In some cases, even though a known vulnerability is present, it can’t be exploited in the current operating environment and may not require an immediate fix. This is becoming even more fundamental with the increase in volume of CVEs, in addition to the challenge in trying to triage them. Organizations need a way to focus on the ones that actually represent the most risk.

In other instances, a simple package upgrade can eliminate the threat, once again aligning Security’s desire to minimize risk with DevOps’ goal of getting a release out the door as soon as possible. It’s critical to have this functionality since it can reduce the number of false alerts significantly, and remove the pressure to remediate immediately.

Increased Collaboration & Communications

In addition to having the right platform in place, regular meetings are required for maintaining open communication and building strong personal relationships between DevOps and Security. Don’t just run status reviews; make them fun and educational. Brainstorm together, merging the knowledge of the two teams to solve a tough problem. Another easily implemented enabler is a dedicated chat channel where team members can exchange information and ask questions. This ongoing dialogue is essential for transparency, trust, and a shared understanding of each other’s tasks and goals.

Now that we’ve got the technology infrastructure and open communication channels, we still need to make sure everyone stays on track. It’s strongly recommended to have a well-defined roadmap, including assignments regarding who is RACI (Responsible, Accountable, Consulted and Informed) for each task. It also provides a clear statement of milestones and goals, to make sure all teams are aligned and working together. All this is easily enabled when using a platform.

It’s highly recommended to create measures of success that show how the platform and collaboration drive mutual goal achievement. You could have a metric for developers to show how much faster they get code into production. For Security, you could track how the number of vulnerabilities in production is reduced. And you could have shared metrics that recognize star contributors from each team.

Benefits of Software Supply Chain Platform

When it comes to releasing secure, quality software in the fastest time possible, Security needs to be able to find and fix vulnerabilities efficiently, with minimal conflict. It’s not about pointing fingers at a specific application, version, project, or even developer; it’s about removing friction for the benefit of the teams and, more importantly, improving overall business outcomes. Removing friction enables DevOps and Security teams to speed up the entire remediation process, coordinating their efforts to provide a safe fix and distribute it accordingly.

There are many operational and security benefits that come as a result of deploying a platform-based software supply chain solution. They range from overarching goals such as tool consolidation and build integrity to vulnerability management capabilities, including prevention, detection, triage and remediation.

This trend of moving to platforms is also driven by the need to reduce the number of tools used to secure the software supply chain, and to avoid the blind spots caused by trying to mix and match tools without a central supporting framework.

These benefits enable faster release cycles without sacrificing security, which by definition helps remove the friction between DevOps and Security. This also impacts business results, with faster releases, less downtime, compliance verification and more effective responses to vulnerabilities and malicious code.

The Log4j Example

A platform is certainly a good way to prepare for cases like the infamous Log4j vulnerability, where operations that lacked a centralized repository faced confusion and delays in precious response time.

Not having accurate and relevant information regarding the vulnerability, and where it was deployed within the code, caused some DevOps teams to concentrate their limited resources on security measures that weren’t related to the vulnerability. Many didn’t even know whether it originated in their own code or in a third party package. Fragmented visibility of the software supply chain, due to disparate point solutions, delayed responses and increased business damage.

Had those organizations deployed a centralized platform, their response could have been prompt and focused, quickly identifying where the vulnerability appeared in their code, who was responsible and what updates could be used to remediate the threat in a timely manner.

Better Together – One Team

In the world of software development and cybersecurity, the success of an organization depends heavily on getting the security and development teams to work together. Taking advantage of a unified platform allows DevOps and Security to establish shared processes that remove silos. At the end of the day, all teams are working toward the same goal – for the organization to succeed.

A platform enables developers to keep working uninterrupted while security gates and controls are applied across all stages of the SDLC. If a vulnerability is discovered, contextualized prioritization and easy-to-understand suggestions for remediation can minimize the effect on customers and avoid negative business impact.

Leveraging the benefits of a software supply chain platform increases communication and collaboration, automates tasks and workflows, makes it much easier to remove friction, and improves development efficiency alongside overall application security.

It wouldn’t be an exaggeration to say that software supply chain platforms are becoming the must-have architecture for enterprises seeking to be ready for whatever may come next – from evolving technologies (AI/ML) and practices (MLOps) to new compliance and regulation requirements (PCI v4, DORA, and NIS2). A platform is the ideal solution for gaining the overall confidence to prevent the next cyber attack without slowing down your business.

]]>
Live Panel Recap: Women in DevOps https://jfrog.com/blog/live-panel-recap-women-in-devops/ Mon, 22 Apr 2024 09:24:24 +0000 https://jfrog.com/?p=127715

In celebration of International Women’s Day, I had the pleasure of speaking with two incredible female leaders in the software industry on our live panel session, “Women in DevOps: Moments of Leadership and Tech Evolution.” During the conversation with Jyostna Seelam, Senior Manager at Capital One, and Tracy Ragan, CEO of DeployHub, we discussed the role of women in DevOps and how they navigate growth and leadership, the impact of emerging technologies such as AI and machine learning on DevOps practices, and the influence of cloud-native technologies on the evolution of DevOps.

The event was a smash success, with 260 registrants, 117 live attendees, and a lively dialogue among all. In fact, some of my favorite moments came from the Q&A portion of the event where we answered attendee-submitted questions. In this blog, I’m excited to share five top takeaways. To see the entire panel session as it unfolded, you can watch the webinar here.

Watch the webinar

Takeaway 1: Women bring unique skills and attributes to DevOps roles

The first question from the Q&A was around specific qualities women bring to DevOps roles and how those qualities can contribute to team success. We all agreed, grinning in our self-awareness, that characteristics such as multitasking, tidiness, and organization – qualities typically associated with women – can be uniquely beneficial in the DevOps environment.

Tracy observed that women tend to be “naturally tidier,” a skill that is an advantage in DevOps. “Being organized, being tidy, and being able to multitask are qualities of every person involved in DevOps that I have ever met,” she said.

Jyostna echoed Tracy’s sentiment, adding that women’s ability to multitask and their natural inclination to keep things organized are strengths that can be leveraged in DevOps practices. She also emphasized the importance of using the power of connection by joining networks and communities to stay on top of emerging technologies and developments in the field.

Bottom line:

Women bring unique and highly valuable skills to DevOps roles, such as expert multitasking and organization, which can contribute to both individual and team success.

Takeaway 2: AI and machine learning will greatly influence DevOps practices

The panelists also delved into the hot topic of how AI and machine learning will influence DevOps practices in the future. They recognized that although these technologies are still being learned and understood, they present great potential for automating mundane tasks and improving workflows.

Tracy suggested that the current stage of adoption we’re in could be called “applied AI,” where we’re just now learning how to use this new tool in our toolkit. For example, teams are beginning to figure out the best ways to use AI to do things like generate new workflows, improve existing ones, and how best to add the new tooling to DevOps workflows.

She also cautioned that widespread adoption of AI is going to come with a necessary cultural shift. “As developers, we’re so script-oriented and we’re going to have to shift away from that if we want to use AI to start generating code of any kind,” she said. “We have to start recognizing that AI is there to help us.”

When it comes to how AI is impacting the security side of things, Jyostna emphasized the value of AI in predictive analysis and its potential to help shift strategies to the left, thereby enabling proactive monitoring and early detection of issues. She encouraged embracing AI and other new technologies, and emphasized the importance of being hands-on in learning and applying them.

Bottom line:

AI and machine learning are becoming increasingly influential in our industry, with potential applications in predictive analysis, continuous security monitoring, automated code generation, and testing. We’re still in the early stages of learning best practices for leveraging these technologies, but if we want to continue driving forward an integration between DevOps and Security, we’re going to need to learn how these new technologies can help us to enable an ideal Dev+Sec+Ops workflow and then implement it.

Takeaway 3: Embrace change and continuous learning in the DevOps space

One of the key themes that wove throughout our conversation was the importance of embracing change and continuous learning. The panelists highlighted the rapid evolution of technologies and practices in the field and stressed the need for professionals to stay updated and adaptable.

Jyostna emphasized the importance of continuous learning and adapting to new technologies. “Challenge yourself to be hands-on when it comes to emerging technologies. Utilize the tools that are available to you… don’t hesitate or fear them.” As a broad example, she suggested that attendees not treat DevOps and DevSecOps as different or competing terminologies. “Just embed it in your culture,” she said.

Tracy also shared some interesting thoughts on how embracing cloud-native architecture can evolve DevOps. “The structure of the application itself isn’t changing. So if you’re really driving into digital transformation, you’re really starting to look at how to decouple a monolith.” Once you start adopting that decoupled architecture, she explains, you end up finding that you have multiple workflows and multiple containers that are being deployed independently. “Now you have feature-dependent deployments, and that changes everything.”

Tracy also highlighted the need to learn about Docker, Helm, and Kubernetes. “Getting Kubernetes certified is going to teach you a lot about today’s new digital world,” she said.

Bottom line:

The transition to a more fragmented and decoupled software environment presents new challenges in tracking changes and understanding the impact of updates. The adoption of cloud-native technologies, for instance, allows for improved scalability and flexibility in DevOps.

Takeaway 4: Mentorship is crucial for career advancement in DevOps

We all stressed the importance of mentorship for women in DevOps, both as a way of navigating challenges and advancing in our careers. We encouraged women to seek out mentors within their organizations and to use these relationships for guidance and support.

Jyostna highlighted the value of networking and joining communities, which can provide additional avenues for learning and development. “Networking opportunities,” she stated succinctly, “don’t miss them!”

“Find mentors within your organization… that’s somebody you can reach out to and you can feel free to talk to,” Tracy Ragan further advised. She explained that men tend to find mentors more frequently, but that this is something that needs to change.

I jumped into the conversation at this point, doubling down on the importance of mentorship. I’ve personally had mentors who have pushed me leaps and bounds forward in my career (shoutout to Randall, Ann, and Kirk – you know who you are – thank you!).

Bottom line:

There are numerous educational and professional development opportunities available for women looking to enter or advance in DevOps careers. Prioritize learning the basics of “the ops, the build, and the deploy” (i.e. learn Kubernetes, Docker, and Helm as a solid foundation). Last, but certainly not least, seek out trusted mentors.

Takeaway 5: Work-life balance is critical in DevOps roles

We also touched on the topic of work-life balance and its importance in DevOps roles. Tracy, Jyostna, and I fully acknowledged the challenges of maintaining balance but offered strategies for managing these challenges effectively.

Tracy suggested asking for help when needed. “I think that women in general are less likely to ask for help,” she said. She followed up by saying that people are often very willing to help and that asking for assistance can show initiative.

Jyostna agreed, adding that taking advantage of educational and professional development opportunities can also contribute to a better work-life balance, since many of today’s tools were originally created to help save developers’ time. We all heartily agreed on the value of leveraging available tools and resources to manage workloads and maintain balance.

Bottom line:

Don’t be afraid to ask for help and to find ways to outsource your work. Leverage the networks, tools, and other resources around you to offload some of the daily pressures. Especially with the growing trend of shift-left security, developers need to take advantage of time-saving tools like AI to maintain efficiency and productivity, all while nurturing a healthy work-life balance.

Women in DevOps webinar screenshot

]]>
Friction between DevOps and Security – Here’s Why it Can’t be Ignored https://jfrog.com/blog/friction-between-devops-and-security-cant-be-ignored/ Wed, 03 Apr 2024 10:01:33 +0000 https://jfrog.com/?p=127975 Friction Between DevOps and Security-863x300

Note: This post is co-authored by JFrog and Sean Wright and has also been published on Sean Wright’s blog.

DevOps engineers and Security professionals are passionate about their responsibilities: the former are mostly dedicated to ensuring fast releases, while the latter are responsible for the security of their company’s software applications. They have many common goals, but according to many CISOs we’ve spoken to, a misunderstanding of needs and priorities arises very quickly once a secure software supply chain management program is launched.

The disconnect starts when DevOps perceives that precautionary security measures are just slowing down the release cadence. Security, in the meantime, appears to be only focused on the potential consequences of neglecting security posture, such as data breaches, reputational damage and legal implications, and doesn’t understand the needs of the DevOps teams.

Ironically, both teams want the same thing: to create solutions that enable the organization to deliver services and capabilities in an efficient, safe, and secure manner.

Both are essentially correct, but there is a certain conflict of interests.

For any organization, speeding up development and protecting today’s software supply chain is a challenge. It encompasses pretty much everything that has to do with software production from the initial ideation and design phase, to coding, testing, deployment and ongoing maintenance.

In fact, the increasing complexity and interconnectedness of software systems, together with the explosion in the use of open source software, have made securing the software supply chain a critical aspect of ensuring overall cybersecurity. One of the key areas of secure software supply chain management is managing the risks associated with vulnerabilities.

To ensure that we’re leveraging the skills and powers of each team, it’s essential to align security at every step of the development process and understand the nature of potential vulnerabilities in terms of prevention, detection and remediation. One of the main issues when discovering a vulnerability is to fully understand where it’s located, the true severity of the threat, and the best way to handle it.

On top of that, and especially following the recent National Vulnerability Database (NVD) saga, it’s becoming increasingly difficult to get appropriate information about the vulnerability itself. Still, there’s certainly no need to get stuck in a Fear, Uncertainty and Doubt (FUD) mindset because of a known vulnerability, and start implementing unrelated security procedures instead of concentrating on mitigating the vulnerability itself.

It’s important to ensure that the vulnerability is applicable to your situation. Don’t fall into panic mode, as described in a previous post regarding how companies overreacted in responding to the Log4j vulnerability. Instead, address the issue at hand, leveraging the combined perspectives of the two teams in a balanced, respectful, collaborative effort.

What Interests DevOps and Security Teams

While both teams share the overarching goal of maximizing efficiency and minimizing costs, when it comes to specific objectives, that’s where the friction starts.

On the one hand, most DevOps teams are interested in:

  • Ensuring a smooth unhindered development process
  • Delivering quality software releases according to business requirements
  • Speed to get software to production
  • Less revisiting the code to fix issues

On the other hand, Security professionals are rightly concerned with:

  • Ensuring the security of all software packages
  • Minimizing risk by preventing, identifying and mitigating vulnerabilities
  • Speed to detect security issues
  • Less coding and vulnerability issues

Bringing Dev, Sec and Ops teams together can be a real challenge

While these values aren’t necessarily at odds with one another, you don’t have to be a systems analyst to understand where they can be working at cross purposes. For example, when a release needs to get out the door and Security is holding it back due to an obscure vulnerability in an old version of an open source package, serious conflict can arise as to whether the software is safe for distribution or not.

Likewise, when DevOps prevails and prematurely comes out with a release, without proper scanning of open source packages, it can result in potential security risks falling through the cracks, and possibly leading to financial loss and damage to the brand once the vulnerability is revealed.

These examples show where friction or misunderstanding can hinder the prevention of vulnerabilities and potentially slow down the software release process.

Takeaways

Both DevOps and Security teams should aim to enable developers – DevOps to make it easier on developers to develop new software and release it quickly, and Security to ensure developers have access to appropriate tooling and knowledge, with security measures in place, to help prevent them from making serious mistakes.

Recognizing that DevOps and Security have divergent interests is a good first step, but at the end of the day, DevOps needs to understand the importance of maintaining security posture, while Security must be committed to streamlining its vulnerability prevention, detection and remediation. So the issue really becomes:

Does there have to be friction between DevOps and Security at all?

That is really an excellent question and just happens to be the subject of our next blog. We’ll delve deeper into this issue and provide insights about the benefits of DevOps and Security teams streamlining vulnerability management. After all, both DevOps and Security work for the same organization and are ultimately part of the same extended team, which shares the joint goal of minimizing risk and increasing operational speed and efficiency.

Check out the second blog post in our series and discover the best way to remove friction and get DevOps and Security pulling in the same direction.

]]>
The State of Software Supply Chain Security in 2024 https://jfrog.com/blog/state-of-software-supply-chain-security-2024/ Wed, 27 Mar 2024 12:01:46 +0000 https://jfrog.com/?p=127952

In today’s fast-paced software development landscape, managing and securing the software supply chain is crucial for delivering reliable and trusted software releases. With that in mind, it’s important to assess whether your organization is set up to handle the continuous expansion of the open-source ecosystem and an ever-growing array of tools to incorporate into your supply chain.

To help prepare you, we’ve compiled a comprehensive report that combines JFrog’s extensive usage data from millions of users, meticulous CVE analysis conducted by the JFrog Security Research Team, and commissioned third-party polling data from 1,224 professionals in Security, Development, and Ops roles. In this blog, I’ll give you a quick overview of some of our findings. You can also check out the full report here.

Read the Report

Themes from the 2024 software supply chain security report

Four key themes emerged from our analysis:

  1. An exploding software supply chain. The growing amount of open-source components available is creating an increasingly vast software supply chain (SSC) to contend with.
  2. Where risk is hiding (and where it’s not). While risk lies beyond the open-source ecosystem, not all reported vulnerabilities are worth spending time remediating.
  3. Where to focus your security efforts. A security mindset has finally hit the mainstream, but disjointed security approaches are costing development teams about a quarter of working time each month.
  4. The emergence of AI/ML. Organizations need to be intentional about how they’re leveraging AI-based tools and move quickly to adopt security best practices for model use.

In a nutshell, the overwhelming amount of change and the rate of expansion in terms of the tools, technologies, and languages available today has the potential to put a massive strain on organizations.

Here’s a sneak peek of some of the data available in the report:

Fig 1. How many programming languages do you use in your software development organization? (Commissioned survey, 2023)

As you can see, about half of organizations (53%) utilize 4-9 programming languages, while a substantial 31% use more than 10 languages. Unsurprisingly, the larger the organization, the more programming languages are likely to be used.

Fig 2. Number of new packages per year, displayed by package type (Artifactory database, 2023)

Docker and npm were the most contributed-to package types. PyPI contributions also increased, likely driven by AI/ML use cases. According to JFrog Artifactory usage data, the most popular technologies used in production-ready software are Maven, npm, Docker, PyPI, Go, NuGet, Conan (C/C++), and Helm.

The variety of open-source packages and libraries available is booming, inadvertently creating a world of potential risk for organizations, as we explore further in the report.

CVEs by month and severity in the last 2 years (National Vulnerability Database analyzed by JFrog Security Research)

A look at CVEs in the National Vulnerability Database from January 2022 to November 2023 shows that Critical and Low CVEs remain relatively consistent, but Medium and High CVEs are increasing. While this may initially cause alarm, a review of over 200 high-profile CVEs created in 2023 by the JFrog Security Research team tells a very different story. See the report to find out why.

In sum

I can’t stress this point enough: the growing complexity of the software supply chain can expose your organization to greater risk than ever. But with the right tools, processes, and best practices, technical leaders will be able to utilize the most diverse software ecosystem we’ve ever seen to their competitive advantage.

To see the complete analysis — including our security findings, a look into the future of AI/ML, plus practical security tips for safeguarding your software supply chain — read the full report. Also be sure to stay tuned for the upcoming webinar where I’ll be discussing the findings in real time.

]]>
How a DevOps Company Does DevOps https://jfrog.com/blog/how-a-devops-company-does-devops/ Wed, 06 Mar 2024 15:06:35 +0000 https://jfrog.com/?p=126847 How a DevOps company does DevOps

At JFrog, we believe in practicing what we preach by “drinking our own champagne.” This means that we not only develop and deliver market-leading products but also utilize our own solutions in our development processes.

When it comes to managing development environments, we aim to implement the best-in-class approaches. By adopting these top-tier practices, we ensure that our development environments are optimized for efficiency, reliability, and innovation, enabling us to deliver exceptional products to our customers.

In this blog, I’ll share how a DevOps company uses its own products to enhance efficiency and effectiveness in all engineering processes. Keep reading to learn about:

  • Our goal of driving innovation in a fast, secure, and budget-friendly manner
  • Differentiating between freestyle and paved paths. If you’d like to read more on the paved paths, check out Microsoft’s blog
  • The JFrog paved path in R&D environments
  • How to spin the right environment at the right time
  • JFrog’s DevOps practices, including functional testing and monitoring

To hear me speak about all of this, plus see a short demo of the platform, you can also watch the webinar version of this content.

Webinar screenshot
Click to watch

JFrog’s DevOps Goals

Our number one goal is to drive innovation in a way that’s fast, secure, and within budget. It’s also critical to maintain high uptime. Because of our range of offerings, we also need to balance the needs of both on-premises and SaaS platform customers, which means that our development environments need to support both use cases.

Freestyle vs. Paved Path

The “freestyle” and “paved path” approaches offer different benefits for engineers at different stages of development. Here’s a quick description of each:

Freestyle

  • Enable fast yet secured prototyping; explore new technologies
  • Minimal guardrails
  • Engineer has R/W permissions & ability to deploy

Paved path

  • Enable secured and high-quality delivery to production
  • Quality/Security/FinOps guardrails
  • Deploy permissions only

The freestyle approach is one where engineers have full freedom to explore and experiment with new technologies without any boundaries (except for security, of course). If I’m an engineer, I want to do fast prototypes while exploring new technologies, and I don’t need heavy guardrails because it’s not yet in production. I should have full flexibility to experiment because what I’m working on hasn’t reached the customer yet.

In contrast, the “paved path” approach comes into play once an experimental project has proven successful and needs to be brought into an enterprise-grade environment. Once we have something viable, we need to stress-test it by working with it like we would in production. We need to apply more guardrails in order to see how it might actually perform in the hands of the customer.

Both paths are necessary at various points in the development process, so we need to enable both for our engineers.

The JFrog Paved Path in R&D Environments

DevOps practices at JFrog involve a gradual hardening process as projects transition from exploration to production.

Below, I’ve outlined a staged funnel starting from exploratory stages, where there’s minimal hardening, up to production-like environments where extensive hardening is required.

  1. Exploratory – No hardening; completely open for experimentation
  2. Freestyle – Innovate with more support of company tools and processes
  3. Evaluate – Evaluate innovation in a production-like environment
  4. Stress Test – Test to find out if it’ll work at the scale of production

As we move through these stages, we layer in more and more of the supporting pieces – version control, pipelines, and any additional security hardening we’d like. We enable self-service for our engineers because we need to be able to monitor and remediate the environments so they are up and running at all times.

From Innovation to Production (On-prem and Cloud)

JFrog uses its own products in the development process, testing new versions in an environment identical to that of a large customer. For example, if production is running Artifactory 7.69, then we’ll develop the next version on 7.69, which will become the 7.7, 7.8, and so on. We use the current version of our own products to develop the next generation, new features, and new innovation.

Functional testing flow

In 2022 and 2023, JFrog also made efforts to harden its processes and upscale its infrastructure to handle increased data transfers. This was achieved by moving JFrog’s R&D cluster to its production class. We had all the elements in place; the journey was to bring our processes to the next level and build them to scale with self-service.

A Self-Service System

JFrog has implemented a self-service system for engineers to spin up their own production-like environments. All you have to do is select your deployment type and choose the products you want to deploy, and you get an environment where you can run all your tests. If they pass, you can push the build to release and distribution. If they don’t, you’re in a debug phase until the issue is fixed.

This self-service system is highly convenient and efficient, as engineers can independently run tests and debug issues without having to file tickets or requests.
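The self-service flow described above boils down to two decisions (deployment type and product set) followed by a test-gated promotion. A minimal sketch of that logic – all function, product, and environment names here are hypothetical, not JFrog's actual API:

```python
# Illustrative sketch of a self-service, test-gated release flow.
# Names are assumptions; real tooling would provision infrastructure here.

def spin_up_environment(deployment_type: str, products: list) -> dict:
    """Create a production-like test environment from two choices."""
    if deployment_type not in ("on-prem", "cloud"):
        raise ValueError(f"unknown deployment type: {deployment_type}")
    return {"type": deployment_type, "products": products, "status": "ready"}

def run_release_flow(env: dict, tests: list) -> str:
    """Run all tests; promote to release on success, otherwise debug."""
    if all(test(env) for test in tests):
        return "distribute"  # push to release and distribution
    return "debug"           # iterate until the issue is fixed

env = spin_up_environment("cloud", ["artifactory", "xray"])
outcome = run_release_flow(env, [lambda e: e["status"] == "ready"])
print(outcome)  # -> distribute
```

The key design choice is that the engineer never files a ticket: provisioning and the pass/fail decision are both automated, so the only human loop is fixing failures in the debug phase.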

Recap

In a nutshell, here’s how a DevOps company does DevOps:

  • JFrog uses its own products, like Artifactory, to develop our next generation of products.
  • We use a dual approach of freestyle and paved way to drive innovation and maintain high-quality delivery.
  • We have moved the JFrog R&D cluster to the JFrog production class, enabling us to scale data transfer from 50 terabytes to over 100 petabytes.
  • We have a self-service system for engineers to spin up production-like environments without needing to file tickets.
  • We use a performance test to ensure that our products can handle the load and stress of both on-prem and cloud customers.

The good news is, you can start applying some of the best practices I’ve just shown you right away! If you’d like to learn more about the JFrog Platform, schedule a demo or sign up for a free trial.

]]>