AI is no longer confined to advanced research labs or elite developer teams. With the rise of generative AI, machine learning, and citizen development, artificial intelligence is becoming embedded in the daily workflows of knowledge workers across every industry. Python has emerged as the foundational language behind this shift, offering flexibility, accessibility, and a robust open source ecosystem.
Yet this same openness introduces new risks. The sheer scale of open source dependency sprawl, combined with rapidly accelerating adoption, raises critical questions about security, compliance, and sustainability. As enterprises face mounting pressure from executive orders and international regulatory frameworks such as the EU Cyber Resilience Act (CRA), governance across the AI stack is no longer optional.
The new challenge is how organizations can maintain the speed of innovation that open source Python enables while guarding against software supply chain vulnerabilities, opaque AI model development, and inconsistent lifecycle management.
Complexity, Sprawl, and the Cost of Uncurated Innovation
Python’s simplicity and versatility have made it the go-to language for data science, machine learning, and AI, particularly among non-traditional developers. Anaconda, a leading provider of curated Python packages, now supports over 50 million users managing more than 7,500 tightly maintained libraries. But as Peter Wang, Co-founder and Chief AI Officer at Anaconda, noted:
“Many people who are not software engineers want to use programming tools to express their ideas… This was true in data science, and now it’s just as true in AI.”
This expansion beyond professional developers has created an ecosystem in which enterprise IT must support a mix of citizen developers, data scientists, and LLM-assisted builders, all working across different languages, models, and frameworks. Our research shows that the pace of AI workload growth is particularly steep at the edge, where production deployments are projected to double over the next 24 months. That growth puts enormous pressure on platform teams to govern a sprawling, fast-evolving toolchain.
Python’s community-driven structure exacerbates the problem. Unlike Java or C++, which are stewarded by large corporate entities, Python remains a volunteer-run effort. This has enabled widespread innovation, but it has also left organizations without a clear source of truth for software updates, versioning, or patching when vulnerabilities emerge.
Curation, Control, and Open Source Governance at Scale
Enterprises are now seeking ways to balance freedom with control, enabling innovation while limiting exposure. The response: curated open source software ecosystems that integrate with internal controls and security policies.
Anaconda plays a pivotal role in this space, offering a managed software supply chain for Python environments.
“We are a central clearing house where people across lines of business can get access to tools they know and love but IT has a place to have visibility,” Wang said.
This visibility is critical in a world of growing regulatory oversight. Our research finds that a majority of enterprises are already aligning software strategies with software bill of materials (SBOM) requirements, and adoption of governance tools across the AI pipeline is accelerating. With new regulations like the EU CRA requiring software vendors to document and secure their codebases by 2027, proactive tooling is becoming essential.
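At its simplest, an SBOM starts with an accurate inventory of what is actually installed. The sketch below uses only Python's standard library to enumerate the packages in an environment and emit a simplified CycloneDX-style component list. Production SBOM generators capture far more (spec versions, hashes, licenses, dependency relationships), so treat this as an illustration of the underlying idea rather than a compliant document.

```python
import json
from importlib.metadata import distributions

# Enumerate every distribution installed in the current environment.
components = [
    {"type": "library", "name": dist.metadata["Name"], "version": dist.version}
    for dist in distributions()
    if dist.metadata["Name"]  # skip the occasional broken metadata entry
]

# A simplified CycloneDX-style document; real SBOMs add spec versions,
# serial numbers, hashes, licenses, and dependency graphs.
sbom = {
    "bomFormat": "CycloneDX",
    "components": sorted(components, key=lambda c: c["name"].lower()),
}

print(json.dumps(sbom, indent=2))
```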
Anaconda’s partnerships with platforms like Snowflake and Databricks illustrate the growing demand for embedded governance. These integrations allow organizations to give end users access to open source tools while maintaining version control, dependency tracking, and compliance across departments.
When it comes to AI-specific risks, such as code generation by large language models (LLMs), the stakes get even higher. Wang noted:
“We are not just a target for attack, we are being used as a vector to attack our users.”
With generative AI increasingly writing its own code, enterprises must now inspect not only their libraries and packages, but also the training data behind the models themselves.
AI Transparency and the Future of Secure Innovation
As we enter the next phase of AI maturity, open source governance must expand beyond package management to include model governance, data lineage, and real-time compliance monitoring. The transparency gap around LLM training data, for instance, poses a direct challenge to trust and ethical deployment, especially in regulated industries.
Wang emphasized this future state:
“If you’re going to be using LLMs for serious things, there has to be a similar bar of transparency and accountability around the data that goes into them.”
At theCUBE Research, we believe the foundation for trusted AI innovation must rest on secure, curated open source infrastructure. This includes:
- Centralizing package curation and dependency visibility
- Aligning Python-based environments with security standards (e.g., SBOM, CRA, NIST)
- Providing citizen developers with secure defaults
- Creating governance frameworks for LLM-generated code and model usage (a minimal sketch of one such guardrail follows below)
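To make that last point concrete, here is a minimal, hypothetical sketch of one guardrail: statically scanning LLM-generated Python for imports outside an approved set before the code ever reaches review. The APPROVED_IMPORTS policy is purely illustrative, not a real standard; a production gate would combine checks like this with dependency pinning, sandboxed execution, and human review.

```python
import ast

APPROVED_IMPORTS = {"json", "math", "statistics"}  # hypothetical policy

def flag_unapproved_imports(source: str) -> list[str]:
    """Return top-level modules imported by `source` that are not approved."""
    flagged = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root not in APPROVED_IMPORTS:
                    flagged.add(root)
        elif isinstance(node, ast.ImportFrom) and node.module:
            root = node.module.split(".")[0]
            if root not in APPROVED_IMPORTS:
                flagged.add(root)
    return sorted(flagged)

# Example: code a model might emit, containing risky imports.
generated = "import os\nimport json\nfrom subprocess import run\n"
print(flag_unapproved_imports(generated))  # ['os', 'subprocess']
```

A static AST check like this is cheap to run on every generation and catches exfiltration-prone modules early, though it is only one layer of a defense-in-depth approach.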
The tension between speed and safety is not new, but it's becoming more acute in the era of democratized AI. Python will remain at the heart of innovation. The question is whether organizations can build the right infrastructure to support its growth securely.
Analyst Guidance
Organizations adopting AI at scale should act now to secure their open source foundations. We recommend:
- Prioritize curated environments for Python across data science, AI, and application development.
- Map your software supply chain and assess package risk exposure across teams.
- Implement policy enforcement around open source usage, version control, and third-party dependencies (see the sketch after this list).
- Establish visibility into LLM-generated code and traceability for models used in production.
- Explore partnerships with curated platforms like Anaconda to reduce overhead and accelerate compliance.
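As a starting point for the policy-enforcement recommendation above, the sketch below audits an installed Python environment against an internally curated allowlist. The ALLOWLIST contents are hypothetical placeholders; in practice, the policy would come from a curated channel or artifact repository rather than being hard-coded.

```python
# A minimal sketch of allowlist enforcement, assuming an internally
# maintained policy; the package names and versions are illustrative.
from importlib.metadata import distributions

ALLOWLIST = {  # hypothetical curated policy: package -> approved versions
    "numpy": {"1.26.4"},
    "pandas": {"2.2.2"},
    "requests": {"2.32.3"},
}

def audit_environment() -> list[str]:
    """Report installed packages that fall outside the curated allowlist."""
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if not name:
            continue  # skip the occasional broken metadata entry
        approved = ALLOWLIST.get(name)
        if approved is None:
            findings.append(f"{name}=={dist.version}: not on the allowlist")
        elif dist.version not in approved:
            findings.append(f"{name}=={dist.version}: unapproved version")
    return findings

for finding in audit_environment():
    print(finding)
```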
The open source community has brought us to the edge of what’s possible with AI. Now, it’s time to build the guardrails that will take us safely into the future.