Brokering at the proxy layer instead of handing secrets to the agent is the right mental model. With prompt injection being what it is, "the agent never possesses the credential" is a much stronger property than any amount of scoping. Curious how the non-cooperative container sandbox feels in practice once more agents are supported.
I have a related question: is anyone developing standards for how agents can proxy the requestor's identity to backend database or application layers? (Short-lived OAuth tokens perhaps, not long-lived credentials like this Show HN seems to focus on?)
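For context, the closest existing standard I'm aware of is OAuth 2.0 Token Exchange (RFC 8693), where a broker trades the end user's token for a short-lived token scoped to a backend. A rough sketch of what the request body might look like (the audience and token values are placeholders, not a real deployment):

```python
# Sketch of an OAuth 2.0 Token Exchange (RFC 8693) request body. The
# grant_type and token-type URNs are the ones the RFC defines; everything
# else (audience, subject token) is a placeholder.
exchange_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<end-user access token>",
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "audience": "https://db.internal.example",  # backend the new token targets
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
}
# POSTing this form-encoded to the authorization server's token endpoint
# returns a short-lived access_token that the backend can verify.
```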
T from Infisical here. I also forgot to mention that this is a research preview launch for Agent Vault and should be treated as such: experimental.
Since the project is in active development, the form factor, including the API, is unstable, but I think it gives a good first glance at how we're thinking about secrets management for AI agents. We made some interesting architectural decisions along the way, and I think this is generally on the right track with how the industry is thinking about solving credential exfiltration: through credential brokering.
We'd appreciate any feedback; feel free to raise issues and contribute as well - this is very much welcome :)
Curious how you think about this meeting the agent-identity side. The proxy knows who's calling, but the callee (what agent lives at api.example.com, what auth it expects, what its card looks like) doesn't really have a home. I've been poking at that half at agents.ml, and it feels like the two pieces want to fit together.
Hey! At the moment Agent Vault doesn't address the identity piece.
The identity piece would be the next logical step, likely after we figure out the optimal ergonomics for deploying and integrating AV into different infrastructure and agent use cases first.
We actually work a lot with identity at Infisical (anything from workload identity to X.509 certificates) and had considered tackling the identity problem for agents as well, but it felt like it required an ecosystem-wide change with many more considerations, including protocols like A2A. Since credential exfiltration is the most immediate problem and we have a lot of experience with secrets management, it seemed like the right place to start.
From what I can tell, agent-vault doesn't solve identity, only how credentials are stored. For true agent identity, you should look into: https://github.com/highflame-ai/zeroid (full disclosure: I'm the author)
ZeroID looks like a good idea to me. Lots there I'll be digging into over time, and related to the use of token exchange for authorising back-end M2M transactions on behalf of a user at the front-end.
As far as I can tell the parent post is talking about discovery for agent-to-agent communications, which is not something I have much interest in myself: it feels very "OpenClaw" to replace stable, deterministic APIs with LLMs.
Yeah, I'm leaning deterministic too for most needs, but I do think there's a future for agent-to-agent communication in more specialized cases. An agent with access to proprietary datasets or niche software can produce interesting output. Say someone wants a drawing in AutoCAD: communicating with a trained agent that has MCP access to those kinds of tools seems like it could usefully extend a more generalist agent's capabilities.
To be honest, I haven't used OneCLI personally, so I can't speak to it in detail, but Agent Vault does take a similar approach with the MITM architecture, setting HTTPS_PROXY in the agent's environment to route traffic through the proxy. We feel this is the right approach in terms of interface-agnostic ergonomics, given that agents may interact with upstream services through a number of means: API, CLI, SDK, MCP, etc.
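As a minimal sketch of that routing (the proxy address and the launcher code are my own illustration, not Agent Vault's actual defaults):

```python
import os

# Hypothetical launcher: inject standard proxy variables into the agent's
# environment so any HTTP client that honors HTTPS_PROXY (curl, Python
# requests, Node fetch, most SDKs) routes its traffic through the broker.
# The address and port below are assumptions for illustration.
agent_env = dict(os.environ)
agent_env["HTTPS_PROXY"] = "http://127.0.0.1:8080"  # local brokering proxy
agent_env["HTTP_PROXY"] = "http://127.0.0.1:8080"

# The agent process would then be started with this environment, e.g.:
# subprocess.run(["my-agent", "--task", "..."], env=agent_env)  # placeholder
```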
Since we're in the early days of Agent Vault (AV), I wouldn't be surprised if there were many similarities. That said, AV likely takes a different approach with how its core primitives behave (e.g. defining specific services along with how their auth schemes work) and is specifically designed in an infra-forward way that treats agents as first-class citizens.
When designing AV, we think a lot about the workflows you might encounter. For instance, if you're building a custom sandboxed agent, maybe you have a trusted orchestrator that needs to update credentials in AV and authenticate with it using workload identity in order to mint a short-lived token to be passed into the sandbox for the agent - this is possible. I suspect that how we think about the logical design, starting from an infra standpoint, will over time create two different experiences for a proxy.
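A toy version of that minting step, to make the flow concrete - the token format, field names, and TTL are all illustrative, not AV's actual API:

```python
import secrets
import time

# Toy sketch: after the orchestrator has authenticated to the vault (e.g.
# via workload identity), it mints a short-lived token scoped to a single
# agent run and passes only that token into the sandbox. The agent never
# sees the underlying service credentials.
def mint_run_token(run_id: str, ttl_seconds: int = 300) -> dict:
    return {
        "token": secrets.token_urlsafe(32),       # opaque per-run secret
        "run_id": run_id,                         # binds the token to one run
        "expires_at": time.time() + ttl_seconds,  # short-lived by default
    }

sandbox_token = mint_run_token("run-2024-001")
```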
If I understand correctly regarding credential stripping, then yes. The idea is that you set the credentials in Agent Vault and define which services should be allowed through it, including the authentication method (e.g. Bearer token) to be used together with which credential.
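In rough Python, the injection step might look like this - the service names and schema here are my own sketch, not AV's actual configuration format:

```python
# Sketch of the brokering idea: the proxy holds an allowlist of services,
# each mapped to an auth scheme and a credential reference. Any auth the
# agent supplied is stripped; the real credential is attached proxy-side.
SERVICES = {
    "api.github.com": {"scheme": "bearer", "credential": "GITHUB_TOKEN"},
    "api.stripe.com": {"scheme": "bearer", "credential": "STRIPE_KEY"},
}

def inject_auth(host: str, headers: dict, vault: dict) -> dict:
    """Strip agent-supplied auth and attach the real credential for host."""
    svc = SERVICES.get(host)
    if svc is None:
        raise PermissionError(f"service {host} not allowed through the proxy")
    out = {k: v for k, v in headers.items() if k.lower() != "authorization"}
    if svc["scheme"] == "bearer":
        out["Authorization"] = f"Bearer {vault[svc['credential']]}"
    return out
```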
We don't have plans to integrate with Bitwarden at this time, but it could be worth looking into at some point. We'd definitely like to give Agent Vault first-class support for Infisical as a credential store (that way you'd get all the benefits that already come with it: secrets rotation, dynamic secrets, point-in-time recovery, secret versioning, etc.).
Can I use Infisical cloud vaults with Agent Vault? I like the secret-management UI there, and that I can manage secrets from many environments in a single place.
We'll be releasing a closer integration between Agent Vault and Infisical in the coming 1-2 weeks!
The way we see it is that you'd still need to centrally store and manage secrets in a vault; that part isn't going anywhere and should still deliver secrets to the rest of your workloads.
The part that's new is Agent Vault which is really a delivery mechanism to help agents use secrets in a way that they don't get leaked. So, it would be natural to integrate the two.
This doesn't change the fact that you'd still be able to exfiltrate data. Sure, the agent doesn't get credentials, but if it gets the proxy auth key, it can still make requests through the proxy, no?
Yeah, so Agent Vault (AV) solves the credential exfiltration problem, which is related to but different from data exfiltration.
You're right that if an attacker can access the proxy vault, then by definition they'd similarly be able to proxy requests through it and get data back, but at least AV prevents them from gaining direct access to the credentials to begin with (the key to access the proxy vault itself can also be made ephemeral and scoped to a particular agent run). I'd also note that you'd want to lock down the networking around AV so it isn't just exposed to the public internet.
I use containers to isolate agents to just the data I intend for them to read and modify. If I have a data exfiltration event, it'll be limited to what I put into the container plus whatever code running inside the container can reach.
I have limited data in reach of the agent and limited network access for it, and was missing exactly this vault. I'm relieved not to need to invent (vibe-code) it.
This is definitely on the roadmap!
The general idea is that we're converging as an industry on credential brokering as one type of layered defense mechanism for agents: https://infisical.com/blog/agent-vault-the-open-source-crede...