
AI has done something truly magical for software development: it made “ship it” feel like a reasonable response to “we haven’t read it yet.”
According to a new report spotlighted by DevOps.com, the combination of AI-assisted coding and modern dependency-heavy development is driving open-source risk to what can only be described as “this is fine” levels—except the room is definitely on fire, and the fire is made of transitive dependencies.
The RSS item we’re building on here is AI-Fueled Development Pushes Open-Source Risk to Extremes: Report by James Maguire at DevOps.com. It summarizes findings from Black Duck’s 2026 Open Source Security and Risk Analysis (OSSRA), which analyzed 947 commercial codebases across 17 industries.
The headline numbers are the kind that make CISOs develop a sudden interest in pottery:
- Mean open-source vulnerabilities per codebase jumped 107%, to 581.
- Open source appeared in 98% of audited applications.
- Open-source components per application rose 30% year over year, and file counts grew 74%.
- License conflicts were found in 68% of codebases—an all-time high for the OSSRA—up from 56% the prior year (per Black Duck).
Those figures alone would be enough to justify a full-blown “dependency diet.” But the deeper story is not just “there are more vulnerabilities.” It’s why they’re showing up in such volume, why AI makes the curve steeper, and what organizations can do to get back to something resembling control—without turning off AI tools and going back to writing XML by candlelight.
What the 2026 OSSRA is really saying (in plain English)
The OSSRA is not a bug-count contest. The numbers reflect something more structural: software is getting bigger, more layered, and more borrowed, and AI is accelerating that borrowing.
Black Duck’s dataset spans audits of 947 codebases across 17 industries. SC Media reports the audits covered 2,843 individual projects conducted between November 2024 and October 2025 (the report is titled 2026, but the analysis window is earlier).
And the “581 vulnerabilities per codebase” figure is especially easy to misread. It does not necessarily mean 581 distinct CVEs. SC Media notes the average included 237 unique vulnerabilities, and it quotes Qualys VP Saumitra Das emphasizing that much of this is transitive dependency sprawl compounding complexity.
Translation: you might have one vulnerable library, but it can appear multiple times across builds, containers, modules, or duplicated dependency trees. It’s the same hole showing up in five walls because the blueprint was reused and then photocopied for speed.
AI didn’t invent open-source risk—AI just put it on a scooter
Open source has dominated modern development for years. The surprise isn’t that open source is ubiquitous. The surprise is how fast the risk surface is expanding now that AI assistants can generate working code (and dependency choices) faster than traditional governance can observe, measure, or constrain it.
DevOps.com’s write-up frames this as a pace mismatch: AI can collapse timelines from months to days, while security and compliance controls still often run on human review cycles.
And that mismatch shows up in several ways:
- Dependency inflation: AI suggestions often pull in libraries to solve problems quickly, which increases the attack surface and the maintenance burden.
- Hidden component intake: Code can enter a codebase outside package managers (copy/paste, vendored code, binaries, generated snippets), which reduces scanning visibility.
- License ambiguity: AI-generated code may incorporate patterns or fragments with licensing obligations that don’t travel with the snippet when it’s pasted into a proprietary repo.
AI tooling also changes developer behavior. When an assistant can propose “the fix” in seconds, the incentive shifts from understanding a dependency choice to simply accepting it. In mature orgs, that’s manageable with guardrails. In most orgs, it becomes a silent scaling factor for risk.
The dependency graph is the new attack surface (and it’s growing faster than your backlog)
Two of the most revealing OSSRA stats aren’t strictly “security vulnerability” stats; they’re software age and maintenance health.
DevOps.com summarizes the OSSRA as finding that:
- 93% of audited codebases included components that hadn’t seen active development in at least two years.
- 92% relied on components that were four or more years old.
- Only 7% of components were fully up to date.
Those numbers are from DevOps.com’s reporting of Black Duck’s findings.
Outdated dependencies are not automatically insecure—but they are a predictable place for vulnerabilities to hide, because older code is less likely to receive fixes quickly, and older versions are more likely to remain unpatched in enterprise environments.
Transitive dependencies: the “free” pizza that comes with hidden fees
Modern software almost never depends on just one open-source library. It depends on a library that depends on another library that depends on five more libraries that were last updated when people still argued about tabs vs spaces on Twitter.
Black Duck’s own OSSRA trend materials for prior editions highlight that a large portion of components in modern apps are transitive dependencies, which creates visibility problems if you lack an up-to-date inventory of third-party code.
This is the core: your security posture now depends on the hygiene of strangers you’ve never met, plus the hygiene of their dependencies, plus the hygiene of the build systems that package those dependencies, plus the hygiene of the registries they’re published to.
Supply chain attacks are no longer “edge cases”
The OSSRA findings land amid a broader industry reality: software supply chain attacks have moved from “rare but scary” to “we should assume this will happen.”
Black Duck also published a separate report, “Navigating Software Supply Chain Risk in a Rapid-Release World,” summarized by TechRadar. It surveyed 540 software security leaders and found that 65% of organizations experienced at least one software supply chain attack in the past year. The same summary notes that 95% of organizations use generative AI tools like ChatGPT for development, and that only 24% analyze code comprehensively for security, licensing, and IP concerns.
Even if your organization hasn’t been hit, the ecosystem is being targeted at scale. Sonatype’s Q4 2025 Open Source Malware Index describes an explosion in malicious packages, including automated campaigns that published huge volumes of malware packages—particularly in npm.
The key point is that the attacker advantage is increasingly automation. As defenders automate builds and releases, attackers automate poisoning the same upstream inputs.
License conflicts: the sleeper risk that can become a boardroom problem
Security leaders are used to arguing for vulnerability remediation budgets. Licensing compliance is different: the risk is harder to quantify until it becomes very quantifiable in court.
Black Duck’s 2026 OSSRA found that 68% of audited codebases contain license conflicts—the highest in OSSRA history—and Black Duck specifically ties this to AI-generated code potentially reproducing code governed by restrictive licenses such as GPL/AGPL without carrying forward obligations.
Some quick context for non-lawyers:
- Permissive licenses (MIT, Apache 2.0, BSD) typically allow broad reuse, usually requiring attribution and license inclusion.
- Copyleft licenses (GPL, AGPL) can impose “share alike” obligations that may conflict with proprietary distribution models, depending on how the software is linked or delivered.
If your codebase contains conflicting obligations, the risk isn’t theoretical. It can delay deals, block procurement, complicate M&A, and create costly remediation work late in the product cycle—when engineers would rather be doing literally anything else.
Why “just scan it” is not enough anymore
Software composition analysis (SCA) and SBOM generation are now table stakes. But the OSSRA storyline makes clear that traditional scanning approaches have blind spots:
- If your pipeline relies on manifest-based detection, it may miss code that enters outside normal dependency declarations (copy/paste snippets, vendored dependencies, binaries).
- If AI tools suggest dependencies and developers accept them quickly, you may have more frequent dependency churn than your review process can handle.
- If your vulnerability intelligence relies heavily on lagging metadata, you may triage the wrong things first.
In other words, scanning is necessary, but it’s not sufficient. The modern requirement is continuous, policy-enforced intake control—and that means shifting from detection-only to prevention-oriented workflows.
Regulation is tightening: SBOMs and secure development are becoming default expectations
Open-source risk and AI-accelerated development would be hard enough on their own. But the compliance environment is also shifting toward more formalized software assurance.
The EU Cyber Resilience Act (CRA): timeline matters
The EU Cyber Resilience Act (CRA) entered into force on December 10, 2024. The European Commission notes that manufacturers must place compliant products on the EU market by 2027.
More specifically, the EU’s digital-strategy CRA overview states:
- Main obligations apply from December 11, 2027.
- Reporting obligations apply from September 11, 2026.
That second date is the one many teams miss: September 11, 2026 is not far away in organizational time, especially if you ship software embedded in devices or sell into EU markets with long product lifecycles.
The CRA also explicitly references SBOM-style documentation. One compliance explainer cites CRA Annex I language requiring manufacturers to identify and document components, including by drawing up a software bill of materials in a commonly used and machine-readable format, covering at least top-level dependencies.
US federal expectations: SSDF and attestation
In the US, NIST has published guidance related to Executive Order 14028 and software supplier requirements. NIST’s attestation page explains that guidance was codified in OMB Memorandum M-22-18, instructing federal acquirers to ensure software producers implement and attest to conformity with secure software development practices, and that agencies may request additional artifacts like SBOMs based on risk.
Separately, the NIST Secure Software Development Framework (SSDF) is published as NIST SP 800-218 (Version 1.1).
So yes: the “paperwork” is arriving. And it’s arriving right when AI is helping teams ship faster than ever. That’s not irony; it’s scheduling.
Open-source malware is scaling like SaaS (unfortunately)
If vulnerabilities are the unintentional risk, open-source malware is the intentional one. And the malware economy has discovered the joy of automation.
Sonatype’s Q4 2025 Open Source Malware Index says it identified 394,877 new open source malware packages in Q4 2025, representing a 476% increase compared to the previous three quarters combined, and it attributes much of that spike to a highly automated campaign.
That matters because modern organizations don’t just “use open source.” They use registries continuously, at scale, through CI pipelines. A malicious package can land in a developer workstation, a build server, or a container build process long before anyone thinks to ask “wait, who published this?”
AI assistants can accidentally help attackers
There’s another twist: AI systems can recommend dependencies based on popularity and pattern matching, not trust signals. Help Net Security summarizes Sonatype research noting that AI systems may select packages and suggest upgrades based on public data that can lag reality, and it references testing showing an AI “dependency upgrade hallucination” rate of 27.76%.
Even when the suggestion isn’t malicious, it can be wrong, and “wrong in CI” has a way of becoming “wrong in production” if the pipeline is designed to optimize for speed.
A practical playbook: how to keep AI speed without buying AI-shaped risk
Turning off AI tooling is not realistic for most teams. Also, good luck enforcing that without being haunted by shadow IT and developers who suddenly become experts in “personal productivity tools.” The goal is to treat AI-assisted development as high-throughput change and build controls that scale with that throughput.
1) Treat dependency intake like a security boundary
Most orgs still treat dependencies as “just libraries.” They’re not. Dependencies are executable code you didn’t write, delivered through systems you don’t control, maintained by people you don’t employ.
Concrete steps:
- Use an internal repository manager (proxying npm/PyPI/Maven Central) and define what can enter the org.
- Block known-malicious packages and require quarantine/approval for suspicious ones.
- Pin versions and use lockfiles to reduce surprise upgrades.
This sounds basic, but it’s the foundation for everything else.
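To make “define what can enter the org” concrete, here’s a minimal intake-gate sketch in Python. The policy lists (denylist, approved pins) are hypothetical placeholders; in practice they’d come from your repository manager’s policy engine rather than hardcoded sets:

```python
# Sketch of a dependency intake gate. DENYLIST and APPROVED are
# hypothetical placeholders standing in for policy data that a real
# setup would pull from a repository manager (e.g. a proxying registry).

DENYLIST = {"evil-pkg"}  # known-malicious package names (hypothetical)
APPROVED = {("requests", "2.32.3"), ("flask", "3.0.3")}  # vetted pins (hypothetical)

def check_intake(requested):
    """Partition requested (name, version) pins into allowed / quarantined / blocked."""
    allowed, quarantined, blocked = [], [], []
    for name, version in requested:
        if name in DENYLIST:
            blocked.append((name, version))
        elif (name, version) in APPROVED:
            allowed.append((name, version))
        else:
            # Unknown packages are quarantined for human review,
            # not silently auto-installed.
            quarantined.append((name, version))
    return allowed, quarantined, blocked

allowed, quarantined, blocked = check_intake([
    ("requests", "2.32.3"),
    ("leftpad2", "0.0.1"),
    ("evil-pkg", "1.0.0"),
])
```

The “quarantine by default” branch is the important design choice: anything not explicitly vetted waits for a human, which is exactly the control that AI-driven dependency churn tends to bypass.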
2) Generate SBOMs continuously—and make them useful
SBOMs are only as valuable as the processes wrapped around them. The CRA’s direction toward a machine-readable SBOM for at least top-level dependencies is a legal baseline; operational security often benefits from including transitive dependencies too.
Good SBOM practice in 2026 means:
- Generate SBOMs per build, not per release quarter.
- Store SBOMs alongside artifacts (containers, packages) so you can answer “what’s running?” instantly.
- Integrate SBOM data with vulnerability intelligence and ticketing so remediation isn’t a PDF exercise.
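To make the “answer instantly” point concrete, here’s a small Python sketch that indexes a CycloneDX-style SBOM for fast lookups. The JSON below is a simplified, hand-written example of the CycloneDX component shape, not a real build artifact:

```python
import json

# Simplified, hand-written example of a CycloneDX-style SBOM document
# (components with name/version/licenses). Real SBOMs carry much more.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "log4j-core", "version": "2.14.1",
     "licenses": [{"license": {"id": "Apache-2.0"}}]},
    {"name": "openssl", "version": "3.0.13",
     "licenses": [{"license": {"id": "Apache-2.0"}}]}
  ]
}
"""

def component_index(sbom):
    """Map component name -> (version, license ids) for instant lookups."""
    index = {}
    for comp in sbom.get("components", []):
        ids = [l["license"]["id"] for l in comp.get("licenses", []) if "license" in l]
        index[comp["name"]] = (comp.get("version"), ids)
    return index

index = component_index(json.loads(sbom_json))
# "Is log4j-core in this build, and at what version?" is now a dict lookup.
```

Stored per build next to the artifact, an index like this turns incident response (“are we running the vulnerable version anywhere?”) from an archaeology project into a query.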
3) Adopt secure development baselines (SSDF) and measure them
NIST SSDF (SP 800-218) is not a tool; it’s a baseline of secure development practices. The hard part is operationalizing it across teams and pipelines.
Pragmatically, that often means:
- Threat modeling for critical systems
- Secure code review requirements
- Automated testing (SAST/DAST/SCA) with defined gates
- Vulnerability response SLAs and patch processes
If your AI tooling increases code volume, you need more automation in review, not less.
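One way the vulnerability-response SLAs above get automated is a CI gate that fails when findings age past their severity threshold. A minimal sketch, with hypothetical SLA values and a hypothetical finding shape:

```python
from datetime import date

# Hypothetical remediation SLAs (days allowed open, by severity).
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}

def overdue_findings(findings, today):
    """Return the IDs of findings whose age exceeds the SLA for their severity."""
    overdue = []
    for f in findings:
        limit = SLA_DAYS.get(f["severity"])
        if limit is not None and (today - f["opened"]).days > limit:
            overdue.append(f["id"])
    return overdue

findings = [
    {"id": "CVE-A", "severity": "critical", "opened": date(2026, 1, 1)},
    {"id": "CVE-B", "severity": "medium", "opened": date(2026, 2, 1)},
]
overdue = overdue_findings(findings, date(2026, 2, 1))
# A CI gate would fail the build (or open escalations) when overdue is non-empty.
```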
4) Use OpenSSF frameworks to mature supply chain security
The Open Source Security Foundation (OpenSSF) has been building practical frameworks for supply chain security. Two are especially relevant here:
- S2C2F (Secure Supply Chain Consumption Framework) for consuming dependencies safely—complete with a maturity model.
- SLSA for producer-side build provenance and tamper resistance; OpenSSF announced SLSA v1.0 in 2023.
Why mention these? Because AI-assisted development increases throughput, and frameworks give you a staged way to increase controls without trying to do “everything everywhere all at once.”
5) Make provenance and attestations normal
One of the most promising counterweights to supply chain chaos is cryptographic provenance: being able to say what was built, from which source, using which build system, and under what policy.
The SLSA project describes how it relates to in-toto attestations and provenance predicates, enabling verification of supply chain steps and policies.
You don’t need to become a cryptography professor. You do need to recognize that “trust me, we built it” doesn’t scale anymore—especially when builds happen hundreds of times per day.
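As an illustration of what a provenance policy check can look like, here’s a sketch against a simplified in-toto/SLSA v1 statement layout. Real verification also validates the cryptographic signature over the statement (omitted here), and the trusted builder ID below is a hypothetical policy value:

```python
# Sketch of a policy check on a SLSA-style provenance statement.
# Assumes a simplified in-toto/SLSA v1 layout: predicateType plus
# predicate.runDetails.builder.id. Signature verification is omitted.

TRUSTED_BUILDERS = {"https://github.com/actions/runner"}  # hypothetical policy

def builder_is_trusted(statement):
    """Accept only provenance whose builder identity is on the trusted list."""
    if statement.get("predicateType") != "https://slsa.dev/provenance/v1":
        return False
    builder_id = (statement.get("predicate", {})
                           .get("runDetails", {})
                           .get("builder", {})
                           .get("id"))
    return builder_id in TRUSTED_BUILDERS

stmt = {
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {"runDetails": {"builder": {"id": "https://github.com/actions/runner"}}},
}
```

The point of the sketch: once provenance is attached to every artifact, “was this built by a system we trust?” becomes a mechanical check instead of a conversation.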
6) Add AI-specific controls (because AI is now part of the supply chain)
Most orgs are still writing “AI policies” like they’re writing a social media policy: vague, aspirational, and doomed to be ignored. AI in development is more concrete than that. It affects code, dependencies, and security outcomes.
Actionable controls:
- Log and tag AI-assisted changes (commit metadata, PR labels) so you can measure impact and target reviews.
- Restrict AI tools from auto-installing dependencies in CI/CD without policy checks.
- Require license scanning for copied snippets and generated code (not just declared packages).
- Train developers on dependency trust signals (publisher identity, download anomalies, package age, typosquatting cues).
The endgame is not to slow down. It’s to make safe the default fast path.
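Some of those trust signals can even be automated as cheap pre-merge cues. Here’s a sketch of a typosquatting check based on name similarity; the “popular packages” list is a tiny hypothetical sample, and a real check would also weigh publisher identity, package age, and registry download data:

```python
import difflib

# Tiny hypothetical sample of well-known package names; a real check
# would use registry popularity data and additional trust signals.
POPULAR = ["requests", "urllib3", "numpy", "pandas", "cryptography"]

def typosquat_suspects(name):
    """Return popular packages this name closely resembles (excluding exact matches)."""
    matches = difflib.get_close_matches(name, POPULAR, n=3, cutoff=0.85)
    return [m for m in matches if m != name]

# "reqeusts" resembles "requests" closely enough to warrant a human look.
suspects = typosquat_suspects("reqeusts")
```

A heuristic like this won’t catch everything, but as a pre-merge warning it costs almost nothing and flags exactly the kind of one-character mistake attackers rely on.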
What this means for DevOps teams in 2026
The DevOps promise was always speed with reliability. The 2026 OSSRA data suggests we’re drifting into a new phase: speed with accumulated third-party uncertainty.
If you’re leading engineering, security, or platform teams, here are the strategic implications:
- Expect vulnerability backlogs to grow unless you reduce dependency count, improve automation, or both.
- Shifting security left isn’t enough if you don’t also shift dependency governance left.
- Compliance will increasingly ask for evidence (SBOMs, attestations, vulnerability handling processes), not intentions.
- AI governance becomes SDLC governance because AI changes what enters your codebase and how quickly.
The mildly funny part is that we finally got the development speed we wanted. The less funny part is that attackers got it too.
Closing thought: visibility beats velocity (eventually)
DevOps.com ends on a point that’s worth underlining: when code is written at ever-faster rates, transparency into what’s actually running in production may matter more than the speed at which it was written.
That doesn’t mean going slow. It means building systems where you can answer, instantly and confidently:
- What components are in this release?
- Where did they come from?
- What vulnerabilities affect them?
- What licenses govern them?
- Who approved them, and under what policy?
If you can do that, AI-assisted development becomes a competitive advantage instead of a security debt multiplier. If you can’t, you’re not “moving fast and breaking things” so much as “moving fast and inheriting things.”
Sources
- DevOps.com: AI-Fueled Development Pushes Open-Source Risk to Extremes: Report (James Maguire, Feb 27, 2026)
- PR Newswire: Black Duck Research Shows Open Source Vulnerabilities Have Doubled as AI Accelerates Code Creation (Feb 25, 2026)
- SC Media: Open-source vulnerabilities per codebase surge by 107% (Laura French, Feb 26, 2026)
- TechRadar: Software supply chain attacks pose huge dangers — here’s how to bolster your defenses
- Sonatype: Open Source Malware Index Q4 2025: Automation Overwhelms Ecosystems (Jan 15, 2026)
- Help Net Security: Open-source malware zeroes in on developer environments (Jan 29, 2026)
- European Commission: Cyber Resilience Act policy page
- European Commission: A safer digital future: new cyber rules become law (Dec 10, 2024)
- Sbomify: EU Cyber Resilience Act (CRA) SBOM Requirements
- OpenSSF: Secure Supply Chain Consumption Framework (S2C2F)
- OpenSSF: Announces SLSA Version 1.0 Release (Apr 19, 2023)
- SLSA.dev: in-toto and SLSA (May 2023)
- NIST CSRC: SP 800-218 Secure Software Development Framework (SSDF) Version 1.1
- NIST: Software Security in Supply Chains: Attesting to Conformity with Secure Software Development Practices
Bas Dorland, Technology Journalist & Founder of dorland.org