67% of organizations admit that security controls slow down their release cycles. Nearly 60% of cloud breaches in 2025 will stem from misconfiguration or gaps introduced during rapid development. Those two data points tell a pretty simple story: we’re shipping faster than we can secure, and the blast radius is getting bigger.
In this episode of AppDevANGLE, I spoke with Gil Geron, CEO and co-founder of Orca Security, about how to secure cloud-native applications (across AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud, and more) without dragging developers back into “ticket-driven” slow lanes. We dug into microservices and serverless blind spots, the real meaning of “shift left,” why agentless security matters, and how AI might actually be good news for security teams.
Developers Want Speed, Security Teams Want Control
Cloud-native architectures (microservices, serverless functions, managed services) were supposed to make delivery faster and more flexible. They did. But they also fragmented telemetry, multiplied the number of “things” to configure, and widened the gap between Dev and Sec.
“What we’re seeing is that the need for velocity becomes essential for our business,” Gil said. “The need to be able to deploy fast, the need to be able to scale fast becomes fundamental.”
Security teams are being forced to adapt. Instead of acting as a gate at the end, they’re getting embedded into the development and operations lifecycle. That requires changing not only tools but language.
Gil used a simple example: when a developer thinks about access, they think IAM roles and policies; a security engineer might think firewalls and networks. But the outcome they both care about is “exposure.”
“When you try to protect data at rest, it doesn’t matter whether it’s SQL Database or RDS,” he explained. “What you care about is that you want to protect the database and make sure there’s no external exposure.”
In other words, the conversation needs to shift from infrastructure primitives to intent: what should be exposed, to whom, and under what conditions.
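To make that concrete, here is a minimal sketch (in Python, with hypothetical asset fields) of what “talking in intent” can look like: instead of maintaining separate rules for RDS, Azure SQL Database, or a self-managed Postgres host, one rule expresses the outcome both sides care about, namely that no data store holding sensitive data is externally exposed. The field names are illustrative, not any vendor’s schema.

```python
from dataclasses import dataclass

# Hypothetical, provider-agnostic view of a data store. In practice these
# fields would be populated from AWS RDS, Azure SQL Database, GCP Cloud SQL,
# or a self-managed database discovered in the environment.
@dataclass
class DataStore:
    name: str
    provider: str             # "aws", "azure", "gcp", ...
    publicly_reachable: bool  # is there any network path from the internet?
    encrypted_at_rest: bool
    holds_sensitive_data: bool

def violates_exposure_intent(store: DataStore) -> bool:
    """One intent-level rule instead of one rule per infrastructure primitive:
    a data store holding sensitive data must not be reachable from the internet
    and must be encrypted at rest."""
    if store.holds_sensitive_data and store.publicly_reachable:
        return True
    if store.holds_sensitive_data and not store.encrypted_at_rest:
        return True
    return False

# Example: the same rule evaluates an RDS instance and an Azure SQL database.
stores = [
    DataStore("orders-db", "aws", publicly_reachable=True,
              encrypted_at_rest=True, holds_sensitive_data=True),
    DataStore("reporting-db", "azure", publicly_reachable=False,
              encrypted_at_rest=True, holds_sensitive_data=True),
]
for s in stores:
    print(s.name, "violates intent:", violates_exposure_intent(s))
```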
Our research shows that 70% of enterprises have accelerated cloud-native adoption, yet fewer than half have full observability across their pipelines. Decentralized architectures plus fragmented telemetry equals blind spots, which is exactly where misconfigurations and policy drift creep in.
Why Shift Left Doesn’t Actually Slow You Down
A lot of teams still treat “shift left” as a euphemism for “dump more work on developers.” Gil pushed back on that narrative.
“I think it’s a common misconception that shifting to the left will slow you down,” he said. “It actually will speed you up.”
He compared it to classic software engineering: a bug in design is far more expensive than a bug in code. The same logic applies to security flaws. A critical issue discovered in production has higher risk, higher remediation cost, and often higher business impact than the same issue caught during build, test, or staging.
“If you have a security issue in production…it’s way more expensive than fixing it while you’re building the code,” he said. “Your ability to fix it before it’s deployed into production actually speeds you up.”
The real challenge isn’t “developers don’t care about security.” It’s relevance and clarity. Developers don’t want their code breached. What they want are security findings that are:
- Understandable (“what is this issue actually about?”)
- Prioritized (“why this one, now?”)
- Actionable (“what’s the fix and where in my code/infrastructure is it?”)
“That’s what a good solution in AppSec tries to do,” Gil said. “It tries to tell you, ‘Hey, this is important, you should fix it. If it goes to production, it will be critical, so let’s do it now.’”
Our own app-dev-focused research points to the same pattern: teams that embed security into CI/CD with automation and real-time AppSec feedback often maintain, or even improve, throughput, especially once they’ve tuned policies and workflows.
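As a rough illustration of what “shift left without slowing down” can mean in a pipeline, here is a hedged sketch of a CI gate. It assumes some scanner earlier in the pipeline has already produced findings with severity and context (the field names are hypothetical, not tied to any tool’s output), and it fails the build only on issues that would be critical once deployed, rather than on every finding.

```python
import sys

# Hypothetical findings, as a scanner might emit them earlier in the pipeline.
findings = [
    {"id": "VULN-101", "severity": "critical", "internet_exposed": True,  "env": "prod"},
    {"id": "VULN-102", "severity": "medium",   "internet_exposed": False, "env": "dev"},
    {"id": "MISCONF-7", "severity": "high",    "internet_exposed": True,  "env": "prod"},
]

def blocks_release(finding: dict) -> bool:
    """Fail the build only for issues that would be critical in production:
    high/critical severity AND destined for prod AND reachable from outside."""
    return (
        finding["severity"] in {"critical", "high"}
        and finding["env"] == "prod"
        and finding["internet_exposed"]
    )

blockers = [f for f in findings if blocks_release(f)]
for f in blockers:
    print(f"BLOCKING: {f['id']} ({f['severity']}, internet exposed, prod)")

# Everything else still gets reported, but doesn't stop the release.
sys.exit(1 if blockers else 0)
```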
Agentless Cloud Security Changes the Equation
We also spent time on one of the big design decisions in cloud security: agent-based vs agentless.
Traditional approaches lean heavily on agents: install something on each host or workload, then rely on that agent for telemetry and enforcement. That model is brittle at cloud scale, especially in environments with ephemeral workloads and platform-managed services.
Gil described the “chicken and egg” problem: “Being in a position where you have to deploy something that’s going to have performance impact or cause obstruction to your production in order to understand whether there might be a security issue…definitely doesn’t work in scaled environments.”
Worse, it rarely achieves full coverage. Orca’s data shows that in many production environments, less than 50% of assets are actually covered by agents. The result is predictable: partial visibility, blind spots, and misaligned priorities.
Agentless, API- and graph-based approaches turn that around: they connect directly to cloud APIs and configuration state, build a unified model of assets, identities, permissions, data stores, and network paths, and avoid the operational pain of deploying and maintaining agents on every host.
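To illustrate the general idea (not Orca’s implementation), here is a small sketch of agentless-style discovery against one slice of one provider: it reads S3 bucket configuration through the AWS API with boto3, with nothing installed on any workload. It assumes AWS credentials are already configured in the environment; a real platform would do this across many services and providers and feed the results into a unified graph.

```python
# Minimal agentless-style discovery for one resource type (S3), using the
# public AWS API via boto3. Nothing is installed on any host or workload.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

inventory = []
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        block = s3.get_public_access_block(Bucket=name)
        cfg = block["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError as err:
        # Having no public-access-block configuration at all is itself a signal.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    inventory.append({"bucket": name, "public_access_fully_blocked": fully_blocked})

for item in inventory:
    print(item)
```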
“What happens with lack of visibility,” Gil said, “is lack of accuracy. And in order to really collaborate with DevOps and engineers in a good way, you have to be very accurate. You have to be able to tell them: this is important, and this is why it is important for you.”
That “for you” is key. Agentless and graph-based analysis lets platforms surface a critical vulnerability with internet exposure, reachable PII, and a plausible lateral movement path instead of just dumping CVEs.
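A toy version of that graph reasoning, using networkx and invented node names, might look like the sketch below. The question is no longer “which CVEs exist?” but “is there a path from something internet-facing, through a vulnerable workload, to sensitive data?”

```python
# A toy cloud graph: nodes are assets, edges are "can reach / can assume".
# Node names and attributes are invented for illustration only.
import networkx as nx

g = nx.DiGraph()
g.add_node("load-balancer", internet_facing=True)
g.add_node("web-vm", has_critical_cve=True)
g.add_node("app-role")
g.add_node("customer-db", contains_pii=True)

g.add_edge("load-balancer", "web-vm")   # traffic path
g.add_edge("web-vm", "app-role")        # instance can assume this role
g.add_edge("app-role", "customer-db")   # role can read the database

entry_points = [n for n, d in g.nodes(data=True) if d.get("internet_facing")]
crown_jewels = [n for n, d in g.nodes(data=True) if d.get("contains_pii")]

# Surface only vulnerabilities that sit on a path from the internet to PII.
for entry in entry_points:
    for target in crown_jewels:
        for path in nx.all_simple_paths(g, entry, target):
            if any(g.nodes[n].get("has_critical_cve") for n in path):
                print("Attack path worth prioritizing:", " -> ".join(path))
```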
In our AppDev Summit research, we saw agentless security and cloud graph platforms enabling teams to inventory and analyze cloud “as state,” without installing agents. That lowered runtime overhead, reduced maintenance drag, and sped up adoption. Given that many developers already spend only about one-third of their time on net-new innovation, and two-thirds on maintenance, anything that cuts configuration churn and operational overhead matters.
Context Is the New Security Primitive
Telemetry is only useful if it carries context. In cloud-native environments, that context spans:
- The environment (prod vs dev)
- The data (PII, payment, internal, public)
- The exposure (internet-facing vs internal-only)
- The blast radius (who/what can move laterally from here?)
“Context is basically the king of decision making,” Gil said. “Trying to talk about a risk without the context of the environment, the impact, the importance of the environment…is essential for decision making.”
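One simple way to operationalize those four dimensions is a context-weighted priority score. The weights and field names below are illustrative only, not a recommended scoring model; the point is that the same vulnerability lands very differently depending on where it lives.

```python
# Illustrative only: a naive context-weighted priority score built from the
# four dimensions above. Real platforms use far richer risk models.
def priority_score(finding: dict) -> int:
    score = 0
    score += {"prod": 40, "staging": 20, "dev": 5}.get(finding["environment"], 0)
    score += {"pii": 30, "payment": 30, "internal": 10, "public": 0}[finding["data_class"]]
    score += 20 if finding["internet_facing"] else 0
    score += min(finding["lateral_reach"], 10)  # assets reachable from here
    return score

findings = [
    {"id": "A", "environment": "prod", "data_class": "pii",
     "internet_facing": True, "lateral_reach": 7},
    {"id": "B", "environment": "dev", "data_class": "internal",
     "internet_facing": False, "lateral_reach": 1},
]

for f in sorted(findings, key=priority_score, reverse=True):
    print(f["id"], priority_score(f))
```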
The old model (separate tools and data stores for vulnerabilities, malware, data security, network security) breaks down in the cloud. Everything is intertwined: identities, services, serverless functions, containers, buckets, queues, and APIs.
To illustrate, Gil pointed to a simple but common scenario: a storage bucket that’s publicly accessible. Should it be?
“Maybe it should be internet facing,” he said. “I need it for my website, I need the images. If you move it to be private according to the security policy, you will take down a website.”
The lesson is that security policies need business context. The right move isn’t “no buckets can be public” but “public buckets that contain sensitive data or are reachable through risky identities or paths must be flagged and fixed.”
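Expressed as a rule, that shift from blanket prohibition to context-aware policy might look something like the sketch below. The bucket records are hypothetical; in practice they would come from the kind of agentless inventory sketched earlier, enriched with data classification and identity analysis.

```python
# Hypothetical bucket records with business context attached.
buckets = [
    {"name": "website-images", "public": True, "data_class": "public",
     "risky_principals": 0},
    {"name": "customer-exports", "public": True, "data_class": "pii",
     "risky_principals": 2},
]

def should_flag(bucket: dict) -> bool:
    """Not 'no bucket may be public', but 'public buckets are a problem only
    when they hold sensitive data or are reachable via risky identities'."""
    if not bucket["public"]:
        return False
    return bucket["data_class"] in {"pii", "payment"} or bucket["risky_principals"] > 0

for b in buckets:
    status = "FLAG" if should_flag(b) else "allowed (the business needs it public)"
    print(b["name"], "->", status)
```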
With good visibility from the foundation up, security can become an enabler, not an inhibitor. Our practitioner guidance reflects this as well: prioritize high-risk data paths and business-critical services first; then iterate with automated discovery to avoid noisy, low-value telemetry.
AI-Native Applications and Secure-by-Default Delivery
It took us 15 minutes to say “AI” in this conversation, which might be a record in 2025, but we got there.
Gil’s view is pragmatic: AI in development is inevitable. Business pressure and the demand for velocity will override attempts to “ban” AI tools. That means security teams must adapt, not resist.
“I would argue that AI is actually very good for security teams,” he said.
As AI-native applications appear and practices like live coding and AI-assisted generation become standard, security can be embedded more deeply:
- AI agents can enforce security policy during code generation and review.
- Security controls can be codified and reused by AI-powered tools.
- Code reviews and threat modeling can be augmented or partially automated.
“We can have the AI agents leverage the security policy and actually automate stuff like security code reviews,” Gil said. “I think it’s going to bring us to a world where we are actually going to have better security than before.”
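As a rough sketch of that direction, and emphatically not a description of any specific product, a team could codify its security policy once and reuse it as the input to AI-assisted review. The example below only shows how policy becomes part of the review prompt; the model call itself is deliberately left out, and its findings would be routed into the same CI gate as any other scanner, with a human approving the final merge.

```python
# Hypothetical sketch: codify the security policy once, then reuse it to
# drive AI-assisted code review. Only prompt construction is shown here.
SECURITY_POLICY = [
    "No secrets or credentials hard-coded in source.",
    "All external input must be validated before use.",
    "New storage resources default to private and encrypted at rest.",
]

def build_review_prompt(diff: str) -> str:
    """Ask a review agent to check a change strictly against the codified
    policy, rather than against generic 'best practices'."""
    policy_text = "\n".join(f"- {rule}" for rule in SECURITY_POLICY)
    return (
        "Review the following change against this security policy only.\n"
        f"Policy:\n{policy_text}\n\n"
        f"Diff:\n{diff}\n\n"
        "Report violations with file, line, and a suggested fix."
    )

# Example: in CI, the diff would come from the pull request.
example_diff = "+ db_password = 'hunter2'  # TODO remove"
print(build_review_prompt(example_diff))
```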
Our research echoes that direction: governance platforms, platform engineering, and clearer collaboration models are becoming critical for sustaining velocity while addressing supply chain and AI-era risks. AI doesn’t make risk go away; it changes where and how we can detect and manage it.
For now, a human-in-the-loop model remains essential. But the balance is starting to shift toward AI-assisted guardrails, with humans focused on judgment, policy, and exception handling rather than manual checklist work.
Analyst Take
Modern application teams are trapped between two pressures: ship faster, with more cloud-native services and AI capabilities, and at the same time reduce the breaches and misconfigurations that increasingly originate in the dev pipeline.
The conversation with Gil highlights three practical patterns that help reconcile those pressures:
- Shift left for speed, not for ceremony. Security issues caught early are cheaper and faster to fix. The key is relevance; developers need prioritized, contextual findings that map to their code, infrastructure-as-code, and pipelines.
- Use agentless and graph-based approaches to close visibility gaps. At scale, agents alone will never reach 100% coverage. API- and graph-based platforms provide a more complete, less intrusive view of cloud state, especially for ephemeral workloads and managed services.
- Make context a first-class security input. Environment importance, data sensitivity, exposure paths, and business impact should all factor into prioritization. Without that context, teams will either drown in alerts or turn them off.
Looking ahead, AI will likely amplify all of this: more code, more changes, more dependencies, but also more opportunities to encode and enforce secure-by-default patterns in the tools developers already use.
If you’re building or running cloud-native applications today, the path forward is clear: treat security as part of your velocity strategy, not a tax on it. Solutions like Orca’s agentless, context-rich approach are one indicator of where the ecosystem is heading.

