API Security Scanner · 7 min read · Open source

Building Fendix

Why existing security scanners produce too much noise — and how one insight (both engines must agree) reduced false positives by ~70% in a single Go binary.

Live Site · Source · Role: Creator & Maintainer · Open Source · Go

01 · The Problem

Security tools that cry wolf get ignored.

DAST scanners (Dynamic Application Security Testing) send crafted requests to a running API and analyze responses. They catch real runtime behavior but generate false positives when a response looks suspicious but is actually expected.

SAST scanners (Static Application Security Testing) read source code and flag patterns that could be vulnerable. They catch code-level issues but have no runtime context — a flagged pattern might be safely guarded at the framework level.

Security teams using either tool alone face noise fatigue. Developers learn to ignore the scanner. Real vulnerabilities hide in the noise. The tool becomes theater.

The core hypothesis

If a vulnerability is real, both a runtime scanner (DAST) and a static scanner (SAST) should independently detect it. When they agree, confidence is high. When only one flags it — it's probably noise.
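The hypothesis is, at its simplest, a set intersection. A minimal Go sketch, assuming a simplified `Finding` record keyed by endpoint and vulnerability category (the real correlation described later is fuzzier than exact-key matching):

```go
package main

import "fmt"

// Finding is a hypothetical, simplified representation of a scanner result.
type Finding struct {
	Endpoint string // e.g. "/users/{id}"
	Category string // e.g. "sqli"
}

// correlate keeps only findings that both engines independently reported.
func correlate(dast, sast []Finding) []Finding {
	seen := make(map[Finding]bool, len(sast))
	for _, f := range sast {
		seen[f] = true
	}
	var agreed []Finding
	for _, f := range dast {
		if seen[f] {
			agreed = append(agreed, f)
		}
	}
	return agreed
}

func main() {
	dast := []Finding{{"/users/{id}", "sqli"}, {"/health", "info-leak"}}
	sast := []Finding{{"/users/{id}", "sqli"}}
	// Only the agreed SQLi finding survives; the single-engine
	// info-leak finding is treated as probable noise.
	fmt.Println(correlate(dast, sast))
}
```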

02 · Architecture

Two engines. One decision.

The architecture is built around the correlation layer — every other component serves it.

API Endpoint (live): running service under test

Source Code / OpenAPI: spec files, route definitions

DAST Engine: crafted HTTP requests · response analysis · runtime behavior

SAST Engine: AST parsing · pattern matching · static vulnerability detection

Correlation Engine (the core insight): only flags a finding if BOTH engines independently agree · maps DAST (HTTP) ↔ SAST (code) findings

SARIF Report: standard format · GitHub Actions · Azure DevOps · any CI/CD · zero false-positive noise

03 · Key Decisions

Technical choices that defined the tool.

01 · Go for the runtime — single binary, zero dependencies

Python would be the natural choice for a security tool, but distributing a Python app means managing a runtime, virtualenvs, and dependencies. Go compiles to a single static binary that runs on any Linux or macOS system without installation. For a security tool, frictionless adoption matters.

02 · Correlation-first design — the central abstraction

Most hybrid scanners just run DAST and SAST and concatenate the results. Fendix treats correlation as the primary output layer. DAST findings are normalized to code-level references; SAST findings are tagged with runtime relevance. The intersection is what ships to the report.

03 · SARIF output — integrate with any existing toolchain

Proprietary formats lock users into your ecosystem. SARIF (Static Analysis Results Interchange Format) is the standard supported by GitHub Advanced Security, Azure DevOps, and every major CI/CD platform. Fendix findings appear natively in GitHub PR annotations with no extra setup.

04 · Zero telemetry — open source security tools must be trustworthy

A security tool that phones home is a contradiction. No usage data, no crash reports, no version pings. Users running Fendix against internal APIs should have absolute confidence their endpoints and findings stay local. Signed releases via GitHub Actions provide integrity without central control.

04 · Engineering Challenges

The hard parts.

Mapping HTTP findings to code locations

DAST produces HTTP-level findings: 'endpoint /users/1 returned a stack trace.' SAST produces code-level findings: 'line 47 of user_controller.go has an unhandled error.' Correlating these requires a normalization layer that understands route patterns, file-to-endpoint mapping, and error propagation paths.
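One way to bridge the two vocabularies is to normalize a concrete DAST request path against the route templates SAST extracts from the code. A minimal sketch, assuming simple `{param}`-style templates (the actual normalization layer also has to handle file-to-endpoint mapping and error propagation, which this omits):

```go
package main

import (
	"fmt"
	"strings"
)

// matchRoute reports whether a concrete request path from a DAST finding
// (e.g. "/users/1") matches a route template extracted statically
// (e.g. "/users/{id}"), comparing segment by segment.
func matchRoute(template, path string) bool {
	t := strings.Split(strings.Trim(template, "/"), "/")
	p := strings.Split(strings.Trim(path, "/"), "/")
	if len(t) != len(p) {
		return false
	}
	for i := range t {
		isParam := strings.HasPrefix(t[i], "{") && strings.HasSuffix(t[i], "}")
		if !isParam && t[i] != p[i] {
			return false // literal segment mismatch
		}
	}
	return true
}

func main() {
	fmt.Println(matchRoute("/users/{id}", "/users/1"))      // true
	fmt.Println(matchRoute("/users/{id}", "/users/1/edit")) // false
}
```

Once a path resolves to a template, the template can be tied back to the handler file the SAST engine flagged, which is what makes the HTTP-level and code-level findings comparable at all.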

Tuning the correlation threshold

Too strict: you require exact match on vulnerability type, endpoint, and code line — you catch almost nothing. Too loose: you correlate by vulnerability category only — you still have noise. The final model uses a weighted score: exact endpoint match + matching vuln category + overlapping parameter names = high confidence.
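The weighted model can be sketched as a small scoring function; the weights and the 0.7 threshold here are illustrative placeholders, not Fendix's tuned values:

```go
package main

import "fmt"

// DASTFinding and SASTFinding are simplified, hypothetical shapes.
type DASTFinding struct {
	Endpoint string
	Category string
	Params   []string
}

type SASTFinding struct {
	Endpoint string // normalized via the route-mapping layer
	Category string
	Params   []string
}

// correlationScore combines the three signals from the text: exact endpoint
// match, matching vulnerability category, and overlapping parameter names.
func correlationScore(d DASTFinding, s SASTFinding) float64 {
	score := 0.0
	if d.Endpoint == s.Endpoint {
		score += 0.5 // strongest signal: same endpoint
	}
	if d.Category == s.Category {
		score += 0.3
	}
	if overlaps(d.Params, s.Params) {
		score += 0.2
	}
	return score
}

// overlaps reports whether the two parameter-name lists share any entry.
func overlaps(a, b []string) bool {
	set := make(map[string]bool, len(a))
	for _, p := range a {
		set[p] = true
	}
	for _, p := range b {
		if set[p] {
			return true
		}
	}
	return false
}

func main() {
	d := DASTFinding{"/users/{id}", "sqli", []string{"id"}}
	s := SASTFinding{"/users/{id}", "sqli", []string{"id"}}
	score := correlationScore(d, s)
	fmt.Printf("score=%.1f high-confidence=%v\n", score, score >= 0.7)
}
```

The point of the weighting is that partial agreement (same category on a different endpoint, say) scores below the reporting threshold instead of being a binary hit or miss.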

15+ vulnerability categories, two detection strategies each

Every category (SQLi, XSS, SSRF, path traversal, auth bypass, etc.) needs a DAST probe strategy and a SAST pattern. That's 30+ distinct detection implementations, each requiring testing across multiple frameworks and languages. The scanner is only useful if the coverage is real.
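Structurally, the pairing requirement amounts to a registry where every category must ship both strategies. A sketch of that invariant, with illustrative names and stub probe bodies rather than Fendix's internal API:

```go
package main

import "fmt"

// Category pairs the two detection strategies the text describes:
// a DAST probe (runtime) and a SAST pattern (static).
type Category struct {
	Name        string
	DASTProbe   func(endpoint string) bool // would send crafted requests
	SASTPattern string                     // would drive AST pattern matching
}

// registry lists a few of the categories; probe bodies are stubs here.
var registry = []Category{
	{"sqli", func(string) bool { return false }, "unsanitized query concatenation"},
	{"xss", func(string) bool { return false }, "unescaped template output"},
	{"ssrf", func(string) bool { return false }, "request URL built from user input"},
}

// complete verifies every category ships both strategies — the invariant
// that makes two-engine correlation possible in the first place.
func complete(cats []Category) bool {
	for _, c := range cats {
		if c.DASTProbe == nil || c.SASTPattern == "" {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(complete(registry)) // true
}
```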

05 · Outcomes

Less noise. More signal.

~70% · False positive reduction (vs. standalone DAST or SAST)

15+ · Vulnerability categories (OWASP Top 10 and beyond)

1 · Go binary (zero runtime dependencies)

0 · Telemetry (privacy-first, open source)
