The Layer Between Detection and Remediation
I've spent most of my career focused on identifying real-world harm – external attackers, insider abuse, fraud, misuse of corporate assets. Across multiple products and companies, the mission was always the same: detect malicious activity with high confidence and enable effective remediation.
And we delivered. Every product included clear remediation paths: suspend the account, revoke access, patch the vulnerability, rotate the key.
But over time, I started noticing a deeper pattern.
In one product, we were detecting actively hijacked accounts inside large multinational enterprises. These weren't theoretical risks. The accounts were being used to access sensitive data, manipulate internal systems, and facilitate fraud. Our detections were strong and the evidence was clear.
Yet remediation often required careful deliberation. Not because the signal wasn't trusted, but because the environments were complex. If we suspend this account, does it power a critical integration? Who or what depends on it? What downstream systems might be affected? If we update that Python library in production, what else relies on it? Are we seeing the entire dependency chain?
In large, interconnected systems, action carries weight. Security leaders weren't lacking tools or resolve. They were operating within intricate realities where even correct actions have to be executed thoughtfully.
That's when it became clear to me: the hardest part of security isn't identifying what's wrong. It's understanding the surrounding context deeply enough to fix it safely.
This is true far beyond hijacked accounts. Modern enterprises generate thousands of findings every day: misconfigurations, identity issues, certificate expirations, exposure risks, policy violations. The industry has built exceptional tools to surface them. And yet, most of the actual remediation work still happens manually.
Not because automation is impossible. Because automation without context is dangerous. Security leaders don't hold back because they lack technology. They hold back because they can't be confident that the right people will be involved, that policy won't be violated, that production won't break, that the blast radius is actually understood.
That confidence comes from context, and it's the piece that's been missing. With AI now accelerating both attackers and operational complexity, the cost of operating without it is getting harder to absorb.
What “Context” Actually Means
We consider security automation in the enterprise successful when it meets three criteria:
- Effective – it reliably achieves its objective.
- Efficient – it operates quickly while minimizing cost and resource drain.
- Adaptive – it makes sound decisions when things don't go as expected, performing at least as well as a trained human would in the same situation.
To meet that bar, automation needs a deep understanding of three dimensions of organizational context: ownership, policy, and systems.
Ownership: Who needs to be involved for this task?
This one gets misunderstood constantly. Ownership isn't just "who owns this asset?" It's "who owns this specific action?"
The same cloud resource might involve completely different stakeholders depending on what you're doing. A DevOps engineer handles certificate rotation. A developer addresses a vulnerability. A security lead approves a change. A compliance stakeholder needs visibility. And often it's not just one person. A single remediation step might need a business owner to approve and an engineer to execute, each with different context and different concerns. The "owner" field in your system of record? Often a service account or a generic software label. Technically correct, operationally useless.
Automation fails when it routes work to the wrong person, or to no one at all. And for critical assets, most organizations won't trust full automation anyway; they want a human in the loop. The key isn't removing humans from the process. It's getting to the right human and giving them the context they need to make a call or take action confidently.
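To make the idea concrete, here is a minimal sketch of action-specific ownership: a routing table keyed by (resource type, action) rather than a single "owner" field. All names, roles, and mappings here are hypothetical illustrations, not a real product API.

```python
# Hypothetical sketch: route a remediation action to the stakeholders who
# own that *action*, not just the asset. Roles and mappings are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stakeholder:
    name: str
    role: str  # e.g. "approver", "executor", "observer"

# Routing is keyed by (resource_type, action), not by asset owner.
ACTION_OWNERS = {
    ("certificate", "rotate"): [Stakeholder("devops-oncall", "executor")],
    ("service_account", "suspend"): [
        Stakeholder("biz-owner", "approver"),
        Stakeholder("iam-engineer", "executor"),
        Stakeholder("compliance", "observer"),
    ],
}

def route(resource_type: str, action: str) -> list[Stakeholder]:
    """Return stakeholders for this specific action, or escalate to triage."""
    owners = ACTION_OWNERS.get((resource_type, action))
    if not owners:
        # No route found: escalate to a human instead of acting blindly.
        return [Stakeholder("security-triage", "approver")]
    return owners

print([s.name for s in route("service_account", "suspend")])
```

Note that suspending the same service account involves an approver, an executor, and an observer, while rotating a certificate needs only one engineer; the asset's "owner" field alone captures neither.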
Policy: How things are done here
Every organization has policies. But policy goes well beyond what's written down. It includes regulatory obligations, compliance frameworks, internal standards, approved tools and workflows, and the organizational norms that live in people's heads.
Which certificate authorities are permitted? Who can access production? What approvals are required before execution?
Automation that ignores policy can technically resolve an issue while simultaneously violating internal standards. That's worse than not automating at all. Automation that understands policy feels native. It works within the guardrails the organization already trusts. For CISOs, this matters enormously: automation has to strengthen governance, not route around it.
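As a sketch of what "working within the guardrails" might look like, consider a deny-by-default policy gate that an automation step must pass before executing. The rules, approved CA list, and approval names below are illustrative assumptions.

```python
# Hypothetical sketch: a policy gate that blocks automation when an action
# falls outside organizational guardrails. All rules are illustrative.

APPROVED_CAS = {"internal-ca", "acme-ca"}              # permitted cert authorities
REQUIRES_APPROVAL = {"suspend_account", "patch_prod"}  # actions needing sign-off

def policy_check(action: str, params: dict, approvals: set[str]) -> tuple[bool, str]:
    """Return (allowed, reason). Deny when any rule is violated."""
    if action == "issue_certificate" and params.get("ca") not in APPROVED_CAS:
        return False, f"CA {params.get('ca')!r} is not on the approved list"
    if action in REQUIRES_APPROVAL and "security-lead" not in approvals:
        return False, "missing security-lead approval"
    return True, "within policy"

ok, why = policy_check("issue_certificate", {"ca": "shadow-ca"}, set())
print(ok, why)  # blocked: the CA is technically valid but not approved
```

The point of the sketch: the certificate from "shadow-ca" would technically resolve the finding, but the gate refuses it because the fix itself violates internal standards.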
Systems: What happens if we act, or don't?
Cloud environments are deeply interconnected, and every change has downstream consequences.
What systems depend on this resource? Is it in production? Could a short disruption cause real business impact? And just as importantly, what happens if we leave it alone? Does inaction create compounding risk?
Think about a certificate on a production system. Change it without rerouting traffic and you cause downtime. Fail to rotate it and the system eventually goes down on its own. Context means understanding the ripple before you execute, in both directions. It also means knowing whether you're dealing with a standalone asset or something embedded in a broader workflow, whether there are undocumented dependencies or pinned trust on the client side. These details often determine whether an operation succeeds or fails.
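Estimating that ripple is essentially a graph problem. Here is a minimal sketch, with an invented dependency graph, of computing the blast radius of a change by walking dependencies in reverse.

```python
# Hypothetical sketch: estimate the blast radius of acting on a resource by
# walking a reverse dependency graph. The graph and names are illustrative.
from collections import deque

# depends_on[x] lists what x relies on; we invert it to find dependents.
DEPENDS_ON = {
    "billing-api":   ["payments-cert", "py-requests"],
    "checkout-ui":   ["billing-api"],
    "reporting-job": ["billing-api"],
}

# Invert the edges: for each dependency, which systems sit downstream of it?
DEPENDENTS: dict[str, list[str]] = {}
for svc, deps in DEPENDS_ON.items():
    for dep in deps:
        DEPENDENTS.setdefault(dep, []).append(svc)

def blast_radius(resource: str) -> set[str]:
    """Everything transitively affected if `resource` changes or breaks."""
    seen, queue = set(), deque([resource])
    while queue:
        node = queue.popleft()
        for dependent in DEPENDENTS.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(blast_radius("payments-cert")))
# rotating the cert touches billing-api and everything built on top of it
```

The same traversal answers both questions from the text: acting on "payments-cert" disrupts three systems, and leaving it to expire eventually takes down the same three.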
From Context to Action
Together, these three pillars tell you whether you should act, when to act, how to act, and what the ripple effects of your decisions might be. Priority isn't a static severity score. It emerges from context: business criticality, compliance urgency, system impact, stakeholder availability, ripple effects. A good security operator balances all of this intuitively. Context makes that possible at scale.
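One way to picture priority emerging from context rather than from a static severity score: blend the raw finding score with contextual factors. The weights and factors below are illustrative assumptions, not a prescribed model.

```python
# Hypothetical sketch: priority as a function of context rather than a
# static severity score. Weights and factor names are illustrative.

WEIGHTS = {
    "severity": 0.3,              # the raw finding score
    "business_criticality": 0.3,  # does the business depend on this asset?
    "compliance_urgency": 0.2,    # regulatory deadline pressure
    "blast_radius": 0.2,          # how much breaks if we act carelessly
}

def priority(finding: dict) -> float:
    """Weighted blend of severity and contextual factors (each on 0..1)."""
    return sum(w * finding.get(k, 0.0) for k, w in WEIGHTS.items())

# Same raw severity, very different effective priority:
prod_db = {"severity": 0.7, "business_criticality": 1.0,
           "compliance_urgency": 0.8, "blast_radius": 0.9}
sandbox = {"severity": 0.7, "business_criticality": 0.1,
           "compliance_urgency": 0.0, "blast_radius": 0.1}
print(priority(prod_db) > priority(sandbox))  # True
```

Two findings with identical severity scores end up far apart once the context around them is weighed in, which is exactly the judgment a good operator applies by hand today.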
And this is exactly why most enterprises still limit automation to narrow, low-risk use cases. The full picture has been too hard to assemble. Without it, the risk of breaking something, violating something, or sending work to the wrong person is too high. Teams choose safety over scale, and the long tail of posture issues just keeps growing.
Automation has existed for years. Context-aware automation hasn't.
What We're Building
This is the problem we started Surf AI to solve.
We connect context across identity, cloud, data, HR, and IT systems to build a living picture of ownership, dependencies, and business impact. That context feeds into specialized AI agents – each built for a specific domain of work, operating within defined permissions, with humans in the loop by default. Policies, approvals, and guardrails govern every action. Full audit trails capture every step.
The result is that security teams stop just detecting issues and actually finish them: continuously, safely, at scale.
The turning point for me wasn't a dramatic breach. It was a quiet realization during a remediation call. We had surfaced a critical vulnerability in a production workload. Everyone agreed it needed to be fixed. But the conversation stalled, not on whether to act, but on how. Who owned the deployment pipeline? Would patching break a dependency? Was there a freeze window? We were asking humans to manually reconstruct context every single time before making a decision or taking the correct action. That friction wasn't accidental. It was structural. And it wasn't going away on its own.
The future of security operations won't be defined by better dashboards or more findings. It'll be defined by whether teams can actually execute, safely and intelligently, across complex environments, with real awareness of who owns what, what the rules are, and what breaks if you get it wrong.
AI without context just makes things move faster. AI with context makes things actually work. That's what we're building.
Roie Cohen Duwek is Co-founder and CTO of Surf AI, where he leads the engineering vision behind an agentic cybersecurity platform that bridges the gap between security visibility and safe, scalable action across enterprise systems.
