OpenAI API Key Leakage on GitHub at Scale
A senior engineer's view of OpenAI API key leakage on GitHub at scale, why automated secret scanning misses so many of the leaked keys, and what actually stops the bleeding.