AI Security

RBAC & Scoping: Griffin AI vs Mythos

An AI that reads your security data needs the same access controls as a human analyst. Most pure-LLM vendors stop at the role name. Safeguard enforces the scope.

Nayan Dey
Staff Platform Engineer
7 min read

Role-based access control is the workhorse of enterprise security. Done well, it gives every user and every automated actor exactly the access they need and nothing more. Done poorly, it creates a sprawl of over-privileged identities that adversaries, mistakes, and outages exploit. For an AI platform that acts on behalf of users, RBAC is not an administrative chore. It is the line between a helpful assistant and a data exfiltration tool waiting for the wrong prompt.

Griffin AI, inside Safeguard, enforces RBAC and data scoping with the same rigor as the underlying platform. Mythos-class pure-LLM competitors often stop at the role label, leaving scope enforcement as an open problem. This post walks through what strong RBAC looks like for a security AI, why Mythos-class products fall short, and how Safeguard closes the gaps.

A complete RBAC model

An RBAC implementation that holds up under pressure covers four layers:

  • Authentication. The actor is identified by a trusted authority.
  • Role assignment. The actor is mapped to one or more roles through an auditable process, ideally driven by an IdP.
  • Permission granting. Each role carries a specific set of permissions on specific resource kinds.
  • Scope enforcement. Each permission is evaluated not only on the action but on the specific resources it touches.

The first three layers are well understood. The fourth is where most products fail. A user with "read findings" permission should be able to read only the findings in the products and projects they are scoped to. The permission is not "read any finding"; it is "read findings I have a legitimate claim to."
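The distinction between the third and fourth layers can be made concrete with a short sketch. This is illustrative only; the names (`ScopedPermission`, `Finding`) are assumptions for the example, not Safeguard's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    finding_id: str
    project: str

@dataclass(frozen=True)
class ScopedPermission:
    action: str         # e.g. "read_findings"
    projects: frozenset  # projects the holder has a legitimate claim to

    def allows(self, action: str, finding: Finding) -> bool:
        # Layer 4: evaluate the action AND the specific resource it touches.
        return action == self.action and finding.project in self.projects

perm = ScopedPermission("read_findings", frozenset({"payments"}))

assert perm.allows("read_findings", Finding("F-1", "payments"))
assert not perm.allows("read_findings", Finding("F-2", "loyalty"))
```

A product that stops at layer three checks only the first condition in `allows`; the second condition is what turns "read any finding" into "read findings I have a claim to."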

Where Mythos-class products miss

Pure-LLM assistants inherit their RBAC posture from whichever platform they sit on top of. Their gaps tend to cluster around five issues:

  • Coarse roles. The product ships with a small number of roles, usually admin, member, and viewer. Real organizations need dozens of distinct roles, and the product forces them to overload the few it has.
  • Global scope by default. A role grants access to everything in the tenant. Team, product, and project scoping are on a roadmap or available only through a premium tier.
  • AI bypasses scope. The assistant runs under a service account or operator identity with broad permissions. User-level scope is checked at the UI tier but not at the AI tool-call tier. A carefully constructed prompt can pull data the user cannot see in the interface.
  • No resource-level policies. Sensitive records cannot be marked as restricted independent of role. Everything in the tenant is one access-control cliff.
  • Admin is one switch. The administrative role is a single all-or-nothing permission. Delegated administration, break-glass access, and separation of duties are not supported.

Any one of these is a finding in a serious access review. The combination is what it looks like when a product was built for speed and is being retrofitted for enterprise.

Safeguard's model

Safeguard's RBAC model is built around five layers of scoping, with Griffin AI as a participant at every layer.

  • Tenant. The top-level boundary. A user belongs to zero or more tenants, and every action is bounded by tenant membership.
  • Organization and team. Inside a tenant, organizations and teams carve out logical boundaries. Roles and scope are typically assigned at this layer.
  • Product and project. The operational unit of Safeguard. Findings, SBOMs, policies, and conversations are anchored to a specific product or project, and access is evaluated against the user's claim to that product or project.
  • Resource label. Individual records, especially sensitive ones, can carry labels that apply additional access rules on top of the scope evaluation.
  • Action. The concrete verb: read, create, update, delete, approve, publish, export, and so on.

Roles are composed from these layers. A role like "vulnerability manager for the payments product" is expressible directly. A role like "break-glass incident responder with time-limited elevated scope" is expressible directly. Customers define roles in configuration, map them to IdP group claims, and review them through their own change process.
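The IdP-driven role assignment described above can be sketched as a simple claim-to-role resolution. The group names, role names, and mapping shape here are illustrative assumptions, not the product's schema:

```python
# Hypothetical binding of IdP group claims to composed roles. In practice
# this would live in versioned configuration, not inline code.
ROLE_BINDINGS = {
    "idp:appsec-payments": ["vulnerability-manager:payments"],
    "idp:incident-response": ["break-glass-responder:time-limited"],
}

def roles_for(group_claims):
    """Resolve a user's effective role set from their IdP group claims."""
    roles = set()
    for claim in group_claims:
        roles.update(ROLE_BINDINGS.get(claim, []))
    return roles

assert roles_for(["idp:appsec-payments"]) == {"vulnerability-manager:payments"}
assert roles_for(["idp:unmapped-group"]) == set()
```

Because the mapping is data, it can sit in the customer's change process and be diffed, reviewed, and audited like any other configuration.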

Griffin AI enforces scope on every tool call

The crucial property is that Griffin AI's tool calls run under the user's identity, not a shared service account. When Griffin AI invokes a tool, the call carries the user's session, their role set, and their effective scope. The tool evaluates the call against those claims and rejects anything that falls outside.

In practice, this means:

  • A user scoped to the payments product cannot get Griffin AI to summarize findings in the loyalty product, regardless of how the prompt is phrased.
  • A user with read-only permissions cannot get Griffin AI to mutate state, even when asked directly.
  • A user whose scope was revoked thirty seconds ago cannot continue a Griffin AI session against resources they no longer have a claim to.

This behavior is not a prompt-level filter. It is an authorization check at the tool boundary, inside the Safeguard API, enforced the same way as any other API call. Griffin AI's ability to act is entirely bounded by the user's ability to act.
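A minimal sketch of that tool-boundary check, under the assumption that each call carries the user's resolved claims. All names here (`UserClaims`, `ToolCall`, `authorize`) are illustrative, not Safeguard's real interfaces:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserClaims:
    user_id: str
    actions: frozenset   # verbs the user's roles grant
    products: frozenset  # products the user is scoped to

@dataclass(frozen=True)
class ToolCall:
    action: str
    product: str

def authorize(claims: UserClaims, call: ToolCall) -> None:
    """Server-side check on every tool invocation, not a prompt-level filter.

    Claims are re-resolved per call, so a revoked scope takes effect on the
    next invocation rather than at the end of the session.
    """
    if call.action not in claims.actions:
        raise PermissionError(f"{claims.user_id} may not {call.action}")
    if call.product not in claims.products:
        raise PermissionError(f"{claims.user_id} has no claim to {call.product}")

claims = UserClaims("alice", frozenset({"read"}), frozenset({"payments"}))
authorize(claims, ToolCall("read", "payments"))    # in scope: passes
try:
    authorize(claims, ToolCall("read", "loyalty"))  # out of scope: rejected
except PermissionError:
    pass
```

No phrasing of the prompt changes the outcome, because the model never sees the check; the API does.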

Attribute-based extensions

Some access decisions do not fit the role-and-scope model cleanly. Safeguard supports attribute-based rules that ride on top of RBAC for those cases:

  • Clearance. Records marked at a specific sensitivity level require a matching user attribute.
  • Locality. Records anchored to a jurisdiction require users whose attributes match.
  • Time windows. Break-glass roles are issued for specific durations, and access decisions honor the expiration.
  • Purpose. Some records are accessible only for specific declared purposes, with the declared purpose recorded in the audit log.

These extensions are policy-driven, evaluated at the same authorization boundary as role checks, and logged identically. Griffin AI inherits them all, because it goes through the same boundary.
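One of these rules, the break-glass time window, can be sketched in a few lines. The grant shape and function name are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

def break_glass_active(granted_at, duration, now=None):
    """A time-boxed grant is honored only inside its issued window."""
    now = now or datetime.now(timezone.utc)
    return granted_at <= now < granted_at + duration

granted = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
window = timedelta(hours=4)

# Inside the window: the elevated role applies.
assert break_glass_active(granted, window, now=granted + timedelta(hours=1))
# After expiry: the same identity, the same role, but access is denied.
assert not break_glass_active(granted, window, now=granted + timedelta(hours=5))
```

The point is that expiry is an attribute evaluated at the authorization boundary, not a calendar reminder for an admin to revoke the role by hand.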

Policy as code

Safeguard's RBAC policies are expressible as code, versioned in the customer's own change management system, and applied through a documented promotion flow. This matters because it makes RBAC reviewable. A click-through admin UI for roles can encode a major access decision with no trace. A pull request that adds a new role, requires two approvals, and is applied through a pipeline creates a record.

Policy-as-code also supports testing. Customers can write assertions against their policy set, the same way they would write unit tests, and run them before promoting a change. Regressions are caught early.
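Such assertions can be as plain as unit tests against the policy data. The policy structure and helper below are illustrative assumptions, not the product's policy language:

```python
# Hypothetical policy set, of the kind a customer would version in their
# own change management system.
POLICY = {
    "viewer": {"actions": {"read"}, "products": {"payments"}},
    "vuln-manager": {"actions": {"read", "update"}, "products": {"payments"}},
}

def allows(role, action, product):
    entry = POLICY.get(role, {})
    return (action in entry.get("actions", set())
            and product in entry.get("products", set()))

# Regression assertions, run in the pipeline before a change is promoted.
assert allows("viewer", "read", "payments")
assert not allows("viewer", "update", "payments")      # viewers never mutate
assert not allows("vuln-manager", "read", "loyalty")   # scope stays bounded
```

A pull request that accidentally widens "viewer" fails the second assertion before it ever reaches production.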

Delegation and separation of duties

Large organizations do not run RBAC from a central helpdesk. They delegate administration to the teams closest to the work, while keeping global sensitive actions reserved. Safeguard supports this with:

  • Delegated admin. A role that grants administrative rights inside a team or product without elevating to tenant admin.
  • Separation of duties. Specific pairs of actions cannot be performed by the same identity, enforced at the policy layer.
  • Four-eyes approvals. Sensitive actions, including some Griffin AI-initiated ones, can require a second identity to approve before they commit.

Pure-LLM vendors that ship with a single admin role force customers to flatten their delegation model to match. That works for a pilot. It does not survive a real organization's access review.
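The separation-of-duties rule above reduces to a simple invariant: no single identity may perform both halves of a conflicting action pair. A sketch, with pair names invented for the example:

```python
# Hypothetical conflicting pairs; the real set is policy-defined.
CONFLICTING_PAIRS = {frozenset({"request_exception", "approve_exception"})}

def violates_sod(identity_actions):
    """True if one identity has performed both halves of a conflicting pair."""
    performed = set(identity_actions)
    return any(pair <= performed for pair in CONFLICTING_PAIRS)

assert violates_sod({"request_exception", "approve_exception"})
assert not violates_sod({"request_exception"})
```

Enforced at the policy layer, the same invariant applies whether the second action is attempted by the human directly or through a Griffin AI tool call under their identity.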

The evaluation lens

A short set of questions separates products that did the RBAC work from products that plan to:

  • Is scope enforced at the API tier, not just the UI tier?
  • Does Griffin AI's tool call inherit the user's full claim set, including attribute-based extensions?
  • Can roles be defined, reviewed, and promoted through the customer's change management?
  • Is delegated administration available without full tenant admin elevation?
  • Does the audit log capture the claim set in effect at the time of every action, human or AI?

Safeguard answers yes to each. Mythos-class competitors routinely fail at least two, and the failure modes produce the exact findings that access reviews are designed to catch.

Closing

RBAC is the access layer an enterprise AI sits on. If the layer is weak, the AI is an amplifier for the weakness. Griffin AI inside Safeguard sits on an RBAC model that was designed to hold up under audit, under incident response, and under the weight of a real organization's delegation pattern. An AI that respects scope is the minimum. An AI that enforces it at the tool boundary is the standard. Safeguard is already there.
