Security · Go · Open Source

Why Both Scanners Must Agree: The Insight Behind Fendix

April 15, 2026 · 6 min read · Abdel-Rahman Saied

Most security scanners drown you in noise. Here's the single architectural decision that cut Fendix's false positive rate by ~70% — and why the solution was obvious in hindsight.

I've used a lot of security scanners. The experience is almost always the same: you run the tool, get back 200 findings, spend two days triaging them, and discover that maybe 15 are real. The rest are noise. And the next time a scan runs, you ignore the alerts by default because you've been trained to.

This is the false positive problem. It's not just annoying — it actively makes systems less secure. When every alert looks like a false alarm, engineers stop responding to alerts. The scanner becomes wallpaper.

Why does this happen?

Security tools generally fall into two categories: DAST (Dynamic Application Security Testing) tools that probe a running application, and SAST (Static Application Security Testing) tools that analyze source code. Each has blind spots.

  • DAST sees behavior but not code — it can flag an endpoint as vulnerable based on its response, but can't verify whether the code actually processes the dangerous input
  • SAST sees code but not behavior — it can flag a dangerous-looking function call, but can't know whether it's actually reachable or exploitable at runtime
  • Both generate findings independently, with no cross-referencing

The result: you get every DAST finding plus every SAST finding, with no signal about which ones actually matter.

The insight: make them agree

The core idea behind Fendix is simple to state and surprisingly rare in practice: a finding only becomes a build-failing alert when both the DAST engine and the SAST engine independently flag the same vulnerability at the same endpoint.

If DAST says an endpoint has a missing auth header, and SAST says the corresponding handler has no authentication middleware — that's a correlated finding. High confidence. Fail the build. If only one engine flags it, it's downgraded to informational and doesn't block CI.
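In CI terms, the rule can be sketched as: fail the build only when some endpoint-plus-category pair was flagged by both engines independently. A minimal sketch of that gate (the `gate` function and its types are illustrative, not Fendix's actual API):

```go
package main

import "fmt"

type Finding struct {
	Endpoint string
	Engine   string // "DAST" or "SAST"
	Category string
}

// gate returns true (fail the build) only when some endpoint+category
// pair was flagged by both engines independently.
func gate(findings []Finding) bool {
	seen := map[string]map[string]bool{} // endpoint|category -> set of engines
	for _, f := range findings {
		key := f.Endpoint + "|" + f.Category
		if seen[key] == nil {
			seen[key] = map[string]bool{}
		}
		seen[key][f.Engine] = true
		if seen[key]["DAST"] && seen[key]["SAST"] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(gate([]Finding{
		{"/login", "DAST", "missing-auth"},
		{"/login", "SAST", "missing-auth"},
	})) // both engines agree -> fail the build
	fmt.Println(gate([]Finding{
		{"/health", "DAST", "missing-auth"},
	})) // single engine -> informational only
}
```

Single-engine findings still exist in the report; they just never trip the gate.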

This sounds obvious. The hard part is the implementation: how do you map a runtime HTTP response (what DAST sees) back to a specific code path (what SAST sees)? The two engines are looking at completely different representations of the same system.
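One piece of that mapping is endpoint normalization: a concrete runtime URL (what DAST observes) and a route template (what SAST extracts from the router) have to be reduced to the same canonical string before any comparison is possible. A minimal sketch, assuming purely numeric path segments are path parameters (a real implementation would consult the actual route table):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// numericSeg matches path segments that are purely numeric,
// a crude stand-in for "this segment is a path parameter".
var numericSeg = regexp.MustCompile(`^[0-9]+$`)

// normalize reduces a concrete URL path to a canonical template,
// so "/users/42/orders/7" and "/users/{id}/orders/{id}" compare equal.
func normalize(path string) string {
	parts := strings.Split(strings.TrimSuffix(path, "/"), "/")
	for i, p := range parts {
		if numericSeg.MatchString(p) {
			parts[i] = "{id}"
		}
	}
	return strings.Join(parts, "/")
}

func main() {
	fmt.Println(normalize("/users/42/orders/7")) // /users/{id}/orders/{id}
	fmt.Println(normalize("/users/{id}/orders/{id}"))
}
```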

The correlation engine in Go

I chose Go for Fendix for three reasons: the binary size stays small, the concurrency model is clean for running both engines in parallel, and the resulting artifact can be dropped into any CI pipeline without a runtime dependency.
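The concurrency point is easy to make concrete: both engines can run in the same process and correlation starts once both finish. A sketch using `sync.WaitGroup`, with the engines stubbed out (`runDAST` and `runSAST` are placeholders, not Fendix internals):

```go
package main

import (
	"fmt"
	"sync"
)

type Finding struct{ Endpoint, Category string }

// runDAST and runSAST stand in for the real engines, which would
// probe the running app and analyze the source respectively.
func runDAST() []Finding { return []Finding{{"/login", "missing-auth"}} }
func runSAST() []Finding { return []Finding{{"/login", "missing-auth"}} }

func main() {
	var wg sync.WaitGroup
	var dast, sast []Finding

	wg.Add(2)
	go func() { defer wg.Done(); dast = runDAST() }()
	go func() { defer wg.Done(); sast = runSAST() }()
	wg.Wait() // correlation only makes sense once both engines are done

	fmt.Println(len(dast), len(sast))
}
```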

The correlation engine works by normalizing findings from both engines into a shared format — endpoint path, HTTP method, vulnerability category — and then scoring pairs by similarity. A correlated pair needs to match on all three dimensions within configurable thresholds.

```go
// Simplified correlation logic
type (
	VulnCategory string
	Engine       string // "DAST" or "SAST"
	Severity     string
	Confidence   string
)

const High Confidence = "high"

type Finding struct {
	Endpoint string       // normalized path, e.g. "/users/{id}"
	Method   string       // HTTP method
	Category VulnCategory // e.g. "missing-auth"
	Engine   Engine       // DAST or SAST
	Severity Severity
}

// A CorrelatedFinding pairs one DAST and one SAST finding that
// agree on endpoint, method, and category.
type CorrelatedFinding struct {
	DAST, SAST Finding
	Confidence Confidence
}

func correlate(dast, sast []Finding) []CorrelatedFinding {
	var results []CorrelatedFinding
	for _, d := range dast {
		for _, s := range sast {
			if d.Endpoint == s.Endpoint &&
				d.Method == s.Method &&
				d.Category == s.Category {
				results = append(results, CorrelatedFinding{
					DAST: d, SAST: s,
					Confidence: High,
				})
			}
		}
	}
	return results
}
```

The results

Testing Fendix against a set of intentionally vulnerable APIs, the correlation requirement reduced actionable findings by ~70% compared to running either engine alone — without missing any real vulnerabilities in the test set. Everything that was correlated was real. Everything that wasn't correlated stayed in the report but didn't block the build.

The other design choices — single binary, zero telemetry, signed releases via Sigstore — are downstream of the same philosophy: a security tool you don't trust is a security tool you don't use. If engineers can't verify what the tool does, they won't use it. Fendix is open source under MIT. Read the source. There's nothing to find.

Fendix is available at fendix.dev. Install in 30 seconds: `brew tap Abdel-RahmanSaied/fendix && brew install fendix`

Written by

Abdel-Rahman Saied

Senior Software Engineer · Team Lead