goodra7174 1 hours ago [-]
The HTTPS_PROXY approach is clever — interface-agnostic credential brokering without modifying the agent itself.
We ran into the same problem from the infrastructure side. When you're running agent workloads on Kubernetes, the blast radius of a leaked credential scales with whatever the pod's ServiceAccount can reach. We ended up combining Cilium FQDN egress policies (agents can only call approved endpoints) with per-workload tool allowlists enforced at the CRD level. The network-level lockdown means even if the agent is prompt-injected, it physically cannot exfiltrate to an unauthorized domain.
Curious: have you tested AV with agents that make tool calls through MCP servers? The proxy would need to handle the MCP server's outbound requests too, not just the agent's direct calls.
dangtony98 2 days ago [-]
Tony from Infisical here. Also forgot to mention that this is a research preview launch for Agent Vault and should be treated as such: experimental.
Since the project is in active development, the form factor, including the API, is unstable, but I think it gives a good first glance into how we're thinking about secrets management for AI agents; we made some interesting architectural decisions along the way to get here, and I think this is generally on the right track with how the industry is thinking about solving credential exfiltration: through credential brokering.
We'd appreciate any feedback; feel free also to raise issues, and contribute - this is very much welcome :)
nghnam 3 hours ago [-]
I like this idea.
I’ve always felt a bit uneasy about agents getting direct access to keys. Once they have them, it’s hard to know where those keys might go.
This feels cleaner to me. The agent does not need to see the real secrets. You still have to trust another layer, but that layer feels easier to control and reason about.
wfinigan 21 hours ago [-]
Love this. Our team has been frustrated that nothing like this exists. You run into this problem as soon as you start thinking seriously about capable cloud agents, but there was no generic solution. Existing options are either tied to a specific cloud vendor or protocol, like git or MCP. We had a design on the whiteboard for something like this when this release dropped in our laps--with lots of thoughtful choices we hadn't gotten to yet. Thank you Tony and team.
Would the proxy as designed double as a domain-based allowlist, separate from the credential brokering?
dangtony98 14 hours ago [-]
Yup it turns out many teams building their own custom agents end up stitching together their own solutions for this problem.
What we thought was basically: if everyone is making some version of this egress proxy, maybe we're actually missing a new infrastructure component that doesn't yet exist in a mainstream way for this use case. The form factor was very important in the design since it needed to fit in with the tools and workflows that agents are already using today.
Definitely, regarding the domain-based allowlist: this is part of how it currently works. As an egress proxy it basically functions as a firewall, and we intend to extend it with more capabilities that'll make it more useful.
sandeepkd 1 days ago [-]
This is a good start; it does cover gaps in certain areas. There are a few more areas I can think of:
1. The endpoint matters. For example, if the credential is an OAuth2 token and the service has a token-refresh endpoint, the response would carry a new token in the payload, reaching the agent directly
2. Not all endpoints are made the same, even on the service side; some may not even require a credential, and the proxy may end up leaking the credential to such endpoints
3. The proxy is essentially doing a MITM at this point; its scope now includes certificate validation as well, and doing that correctly is a hard problem
4. All credentials are stored on one machine, which requires a much stronger access and authorization framework around who can reach that machine. One might think they closed a security gap, only to soon realize they opened a couple more in the attempt
dangtony98 14 hours ago [-]
Thanks for this feedback! Will keep in mind all of these points as we iterate on Agent Vault.
We're pretty swarmed on requests at the moment but I've noted these down as improvements to AV; it's a work in progress, we'll be molding it into the right shape over the next few months.
A few thoughts for each of the above:
1. AV doesn't consider OAuth2 tokens atm but this is definitely a next step.
2. Agree which is why there is a "passthrough" mode; for each endpoint, you need to explicitly specify what credential is used for it.
3. That's correct. This is a MITM architecture with credential brokering capabilities added on top.
4. Agree. The idea here is that AV can function both as a proxy and vault but in a true production setting, it should pull credentials from a secure secrets store like Infisical. This way credentials cached in memory in AV can even be made ephemeral.
Great observations all around and we have plans for them :)
gregw2 1 days ago [-]
I have a related question: is anyone developing standards for how agents can proxy the requester identity to backend database or application layers? (Short-lived OAuth tokens perhaps, not long-lived credentials like the Show HN seems to focus on?)
mooreds 21 hours ago [-]
Well, there's the token exchange RFC (https://datatracker.ietf.org/doc/html/rfc8693), which defines on-behalf-of/delegation and impersonation semantics.
In this case, the user is user@example.com, but the actor is admin@example.com. (In the agentic case, the actor would be the AI agent.)
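Decoded, the claims of such a delegated token look roughly like this (a Python sketch following RFC 8693's delegation example; the issuer and expiry values are made up):

```python
# Claims of an access token issued via OAuth 2.0 Token Exchange (RFC 8693)
# delegation. The "act" (actor) claim identifies the party acting on the
# subject's behalf; in an agentic setup it would identify the AI agent.
claims = {
    "iss": "https://as.example.com",      # hypothetical authorization server
    "sub": "user@example.com",            # the user the request is made for
    "act": {"sub": "admin@example.com"},  # the party actually making it
    "exp": 1735689600,
}

def acting_party(claims: dict) -> str:
    """Return the actor behind a delegated token, else the subject itself."""
    return claims.get("act", {}).get("sub") or claims["sub"]

print(acting_party(claims))  # admin@example.com
```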
Is this kinda what you are looking for?
dbuxton 20 hours ago [-]
From the comments looks like lots of people looking at this problem from different angles.
We (harriethq.com) also have a somewhat similar insight, which is that setting up connectors is a drag for non-technical users, and a lot of systems don't support per-user connectivity so need an API shim.
The thing I like about this (Agent Vault) approach is that it's more extensible than what we're offering, which is a fully managed service. But we've found that some features (e.g. ephemeral sandboxes to execute arbitrary uvx/npx-based MCPs) are just a big pain to self-deploy, so it's easier for us to provide a service that just works out of the box.
Kudos to the team, this looks great and I'm looking forward to playing with it
dangtony98 14 hours ago [-]
Hey! Yeah I think there's overlapping functionality for sure, and you're spot on on people looking at it from different angles.
The "connectors angle" is something we thought about as well, and we built a whole product line around it called Agent Sentinel (https://infisical.com/docs/documentation/platform/agent-sent...). We weren't convinced, however, that enterprises were ready for this, and instead took it back to our infra roots (Infisical is a security infra platform) and started simple with the problem: credential exfiltration. This thinking does lead to two different kinds of products, though, with one naturally becoming an infrastructure component; that makes much more sense for us at Infisical to work on.
been thinking about this exact problem for a while. my own setup uses OS keyring with a <secret:name> token substitution pattern — the agent requests a credential by name, the substitution happens at execution time, the LLM never sees the raw value in context or logs. works reasonably well.
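that pattern is roughly the following (a minimal sketch; the dict stands in for the OS keyring, which a real setup would query via something like `keyring.get_password`):

```python
import re

# Stand-in for the OS keyring; the raw values live here, never in the
# LLM's context window or logs.
_KEYRING = {"github": "ghp_example123"}

SECRET_REF = re.compile(r"<secret:([\w-]+)>")

def resolve(command: str) -> str:
    """Substitute <secret:name> references at execution time."""
    return SECRET_REF.sub(lambda m: _KEYRING[m.group(1)], command)

# The agent only ever emits the placeholder form:
cmd = 'curl -H "Authorization: Bearer <secret:github>" https://api.example.com'
print(resolve(cmd))
```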
but the problem with that model is it's static protection. if the agent process itself becomes hostile or gets prompt-injected, keyring doesn't really help — it can still request the secret and get it, it just doesn't see it in the context window.
the shift i've been landing on and building into Orbital (my own project) is that it's less about blocking credential access and more about supervising it. you want to know exactly when and why the agent is requesting something, and have the ability to approve or deny in the moment. pre-set policies are hard because you genuinely can't anticipate what tools an agent will call before it runs — claude code might use curl, bash, or a completely random command depending on the problem. the approval needs to happen at runtime, not be preset.
the proxy model here is interesting because it creates a natural supervision boundary. curious whether you're planning runtime approval flows or if the design stays policy-based.
jimmypk 1 days ago [-]
@10keane The proxy approach also solves the audit trail problem implicit in what you're describing. With OS keyring substitution, the agent receives the credential and you can't observe what happens next — the log shows intent (substitution happened) but not effect (what API calls were actually made). Routing through a proxy gives you an immutable record of every call made with each credential, which is the more useful thing for incident response: not "did the agent have access?" but "what did it actually do with it?"
The capability-scoping gap you're pointing at (static vs. dynamic trust) is the next layer up — effectively per-session IAM roles minted at task time, scoped to the specific endpoints the task actually needs. That's harder but it's the right direction.
rmorlok 1 days ago [-]
I really like the approach you've taken of providing an egress proxy. That lets you do a lot of things that layer on guardrails and auditing. I've been taking a similar approach on an open-source embedded iPaaS project I've been working on (https://github.com/rmorlok/authproxy), where it primarily offers an authenticating egress proxy to whatever business logic needs it (agent, sync engine, etc.).
It’s an idea that obfuscates keys a bit, but how are you going to prevent the agent from gaining access to the vault and keys itself? I’ve seen it reverse engineer many things to expose the underlying credentials. I can only think running this on a firewall that the agent can’t access to prevent escalation.
dangtony98 14 hours ago [-]
The sandboxed agent and AV should ideally not run on the same host, because if they did, you're right that a sufficiently sophisticated agent like Mythos could try to reverse engineer things and find kernel exploits to gain access to AV credentials.
For this reason, you'd want to keep the two separate; we have some ideas in the works for that atm but largely still experimental.
mike-cardwell 22 hours ago [-]
Does it work for websockets too, where the authentication is done in a websocket frame? Like for Home Assistant? Also, if the LLM manages to do something to reflect the authentication token back in a response, do you detect it and strip it out as an extra layer of protection?
dangtony98 14 hours ago [-]
Not yet for both but this would definitely be on the roadmap; especially the credential stripping portion.
For AV to be really useful, it'd have to support more protocols but we think this first implementation makes a move in the right direction.
Unsponsoredio 1 days ago [-]
I like this direction.
Agents having direct access to credentials always felt a bit scary.
This seems cleaner, even if it just moves the trust somewhere else.
dangtony98 14 hours ago [-]
Yup! I think the terms we're going to be seeing more and more of are "credential exfiltration" and, conversely, "credential brokering" as a solution to it.
manojbajaj95 1 days ago [-]
Been tinkering with something of my own at https://github.com/manojbajaj95/authsome. The core goal was credential management from an ease-of-use point of view, not security.
hanyiwang 3 days ago [-]
This doesn't change the fact that you'd still be able to exfiltrate data. Sure, they don't get credentials, but if they get the proxy auth key then they'd also be able to make requests through it, no?
dangtony98 3 days ago [-]
Yeah so Agent Vault (AV) solves the credential exfiltration problem which is related to but different from data exfiltration.
You're right that if an attacker can access the proxy vault then by definition they'd similarly be able to proxy requests through it to get data back, but at least AV prevents them from gaining direct access to begin with (the key to access the proxy vault itself can also be made ephemeral, scoped to a particular agent run). I'd also note that you'd want to lock down the networking around AV so it isn't just exposed to the public internet.
The general idea is that we're converging as an industry on credential brokering as one type of layered defense mechanism for agents: https://infisical.com/blog/agent-vault-the-open-source-crede...
This doesn’t solve hostile agents. This solves hostile or compromised inference providers. You really don’t want your secrets in the logs of a random AI provider through OpenRouter or even in Anthropic logs.
dangtony98 14 hours ago [-]
What attack vector are you thinking? Could you elaborate more.
Would love to explore this train of thought and what we can do about it.
codebje 1 days ago [-]
This isn't the only thing you'd want to do.
I use containers to isolate agents to just the data I intend for them to read and modify. If I have a data exfiltration event, it'll be limited to what I put into the container plus whatever code running inside the container can reach.
I have limited data in reach of the agent, limited network access for it, and was missing exactly this Vault. I'm relieved not to need to invent (vibe code) it.
dandaka 2 days ago [-]
Can I use Infisical cloud vaults with Agent Vault? I like the UI of secret management there. I like that I can manage secrets from many environments in a single place.
dangtony98 2 days ago [-]
We'll be releasing a closer integration between Agent Vault and Infisical in the coming 1-2 weeks!
The way we see it is that you'd still need to centrally store/manage secrets from a vault; this part isn't going anywhere and should still deliver secrets to the rest of your workloads.
The part that's new is Agent Vault which is really a delivery mechanism to help agents use secrets in a way that they don't get leaked. So, it would be natural to integrate the two.
This is definitely on the roadmap!
Jayakumark 1 days ago [-]
How is it different from OneCLI? And does it do credential stripping? Will it support the access SDK from Bitwarden and integrate with Infisical?
dangtony98 1 days ago [-]
To be honest, I haven't used OneCLI personally before, so I can't speak to it in detail, but Agent Vault does take a similar approach with the MITM architecture and setting HTTPS_PROXY in the agent's environment to route traffic through the proxy; we feel this is the right approach in terms of interface-agnostic ergonomics, given that agents may interact with upstream services through a number of means: API, CLI, SDK, MCP, etc.
Since we are in the beginnings of Agent Vault (AV), I wouldn't be surprised if there were many similarities. That said, AV likely takes a different approach with how its core primitives behave (e.g. define specific services along with how their auth schemes work) and is specifically designed in an infra-forward way that also considers agents as first class citizens.
When designing AV, we think a lot about the workflows that you might encounter, for instance, if you're designing a custom sandboxed agent; maybe you have a trusted orchestrator that needs to update credentials in AV and authenticate with it using workload identity in order to mint a short-lived token to be passed into a sandbox for an agent - this is possible. I suspect that how we think about the logical design starting from an infra standpoint will over time create two different experiences for a proxy.
If I understand correctly regarding credential stripping then yes. The idea is that you set the credentials in Agent Vault and define which services should be allowed through it, including the authentication method (e.g. Bearer token) to be used together with which credential.
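As a sketch of what such per-service rules boil down to (the rule shape and names here are hypothetical illustrations, not Agent Vault's actual configuration):

```python
# Hypothetical per-service rules: which hosts may pass through the proxy,
# and which named credential (if any) gets attached, in which auth scheme.
RULES = {
    "api.github.com": {"auth": "bearer", "credential": "github-pat"},
    "status.example.com": {"auth": "passthrough"},  # allowed, nothing attached
}

def inject_auth(host: str, headers: dict, vault: dict) -> dict:
    rule = RULES.get(host)
    if rule is None:
        raise PermissionError(f"egress to {host} is not allowed")
    if rule["auth"] == "bearer":
        headers["Authorization"] = f"Bearer {vault[rule['credential']]}"
    return headers  # passthrough: forwarded untouched

headers = inject_auth("api.github.com", {}, {"github-pat": "tok_123"})
print(headers["Authorization"])  # Bearer tok_123
```

Unknown hosts are simply refused, which is the firewall behavior described elsewhere in the thread.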
We don't have plans yet to integrate with Bitwarden at this time but this could be something worth looking into at some point. We definitely would like to give Agent Vault first-class support for Infisical as a storage for credentials (this way you'd get all the benefits of secrets rotation, dynamic secrets, point in time recovery, secret versioning, etc. that already come with it).
jeremyjh 1 days ago [-]
NVIDIA's OpenShell also has its own version of this, though it's also in very early stages.
hrimfaxi 1 days ago [-]
It looks a lot like OneCLI too. Curious how it differs.
dangtony98 1 days ago [-]
We're still in the early innings of credential brokering so there'll be a lot of overlap but I expect the way the tool evolves will start to diverge a lot since we are thinking very infra-workflow first.
See my other comment regarding an example of this.
andreypk 1 days ago [-]
It looks promising. If I have a request to the LLM with secrets, does it handle that as well?
Thank you! Me too - very excited to see where this goes :)
zackify 1 days ago [-]
Completely unaffiliated but I just installed executor.sh today and it looks almost exactly the same
dangtony98 1 days ago [-]
I haven't used executor.sh but this seems to operate at a different layer from Agent Vault.
From what I'm seeing, executor.sh is an integration and execution layer for agents. Where Agent Vault shines is that it fits right into the tools and workflows that your agents are already using in an interface-agnostic way: API, CLI, SDK, MCP.
Put differently, the MITM architecture of Agent Vault (it operates more at the network layer) allows the sandboxed agent to do whatever it would've done normally, just all routed through AV - the agent is basically proxy-unaware.
tatoalo 23 hours ago [-]
Could this work (or planned) on gVisor-based sandboxes?
dangtony98 14 hours ago [-]
This would be deployed separately but in close proximity to your sandboxes. You'd want to add network restrictions around sandboxes to only allow outbound requests to AV.
You'd add HTTPS_PROXY to your sandbox environment and pre-configure it to trust the AV CA.
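Concretely, that wiring is mostly environment configuration; a sketch (the proxy address and CA path are hypothetical, and the variables shown cover a few common HTTP clients):

```python
import os

# Route the sandboxed agent's HTTPS traffic through the proxy and trust
# the proxy's CA so the MITM'd TLS still verifies.
AV_CA = "/etc/agent-vault/ca.pem"                       # hypothetical path
env = {
    **os.environ,
    "HTTPS_PROXY": "http://agent-vault.internal:8443",  # hypothetical address
    "SSL_CERT_FILE": AV_CA,       # honored by many OpenSSL-based clients
    "REQUESTS_CA_BUNDLE": AV_CA,  # Python requests
    "NODE_EXTRA_CA_CERTS": AV_CA, # Node-based CLIs
}
# Launch the agent process with only this environment, e.g. via
# subprocess.run([...], env=env)
```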
tuananh 1 days ago [-]
how do you deal with "access to the proxy"? Someone could use it maliciously without ever having access to the token/secret itself.
dangtony98 1 days ago [-]
Agent Vault should remain in close proximity to the sandboxed agent and not be exposed to the public internet; your standard network security controls apply.
The proxy itself currently implements a token-based auth scheme. Depending on your setup, you can have an orchestrator mint an ephemeral token to be passed to a sandboxed agent to authenticate with the proxy.
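That minting step could be sketched as follows (a self-contained HMAC scheme for illustration; Agent Vault's actual token format isn't specified here):

```python
import base64, hashlib, hmac, json, secrets, time

# Key held by the orchestrator and proxy, never by the agent.
SIGNING_KEY = secrets.token_bytes(32)

def mint_proxy_token(run_id: str, ttl_s: int = 900) -> str:
    """Mint a short-lived token scoped to one agent run."""
    payload = json.dumps({"run": run_id, "exp": int(time.time()) + ttl_s})
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}.{sig}".encode()).decode()

def verify_proxy_token(token: str) -> dict:
    """The proxy checks signature and expiry before honoring a request."""
    payload, sig = base64.urlsafe_b64decode(token).decode().rsplit(".", 1)
    want = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, want):
        raise PermissionError("bad signature")
    claims = json.loads(payload)
    if claims["exp"] <= time.time():
        raise PermissionError("token expired")
    return claims

print(verify_proxy_token(mint_proxy_token("run-42"))["run"])  # run-42
```

A leaked token in this scheme is only useful for one run and only until it expires.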
tuananh 1 days ago [-]
this feels like vpn all over again. the location shouldn't grant any inherent trust.
whattheheckheck 1 days ago [-]
How do you solve for the agent signing up for a service and needing to save it and guaranteeing the credit won't go to the chat?
dangtony98 1 days ago [-]
Can you please elaborate on the agent signing up for a service piece? I'm curious to understand the use case more (type of agent, what credit, etc.).
The current model assumes that you have a trusted entity who's able to save credentials to Agent Vault; that entity is likely not the agent itself, because that would mean the agent would have access to credentials. The agent is then simply configured to proxy requests through AV, which attaches credentials at the proxy layer. Here are two examples:
Example 1:
- You have a backend that saves an API Key to AV for a specific vault and defines the service rules for how that credential can be used.
- That same backend mints a session-scoped token to AV and invokes the creation of a pre-configured sandbox, passing that token into it.
- The agent in the sandbox does what it needs to do, requests fully proxied through AV.
Example 2:
- A human operator manually goes into AV and adds an API Key.
- The human operator spins up an agent (could be an OpenClaw, Claude Code, etc.) in a pre-configured environment to route requests through AV. This can be done using non-cooperative sandbox mode with the AV CLI or through more manual configuration.
- The agent does what it needs to do, requests fully proxied through AV.
We're still working on smoothing it out, but perhaps this gives you a better idea of how this might work.
AV does have a permission system that supports agents being able to save credentials to it and then subsequently use the proxy (maybe this is what you're targeting), but this isn't the use case that I've personally explored as much; definitely worth looking into, though.
whattheheckheck 1 days ago [-]
Yeah idk if this is solvable, but let's say I have a master agent that I tell: go sign me up for these services so you can build secure agents that use the keys securely.
It seems like it simply has to stop and tell me to handle the secrets and put them in the vault, because of the data/instructions-in-the-same-channel problem.
cristianolivera 1 days ago [-]
yea
amd92 1 days ago [-]
[flagged]
dangtony98 1 days ago [-]
I'm so glad you mentioned the non-cooperative sandbox! Did you get a chance to try it out?
This is something that we're going to be improving significantly in the next week including the ergonomics of it since the current state of this feature does not yet make it practical enough to be used by developers in a mainstream kind of way; the ergonomics are so important for a devtool.
But yes credential brokering is what the industry seems to be converging on as a solution for how we might prevent credential exfiltration; the egress proxy is increasingly becoming a common pattern in the agent stack based on some of the conversations we've had with AI-forward companies.
bayff 2 days ago [-]
Curious how you think about this meeting the agent-identity side. The proxy knows who's calling, but the callee (what agent lives at api.example.com, what auth it expects, what its card looks like) doesn't really have a home. Been poking at that half at agents.ml and it feels like the two pieces want to fit together
dangtony98 1 days ago [-]
Hey! At the moment Agent Vault doesn't address the identity piece.
The identity piece would be the next logical step at some point likely after we figure out the optimal ergonomics for deploying and integrating AV into different infrastructure / agent use cases first.
We actually work a lot with identity at Infisical (anything from workload identity to X.509 certificates) and had considered tackling the identity problem for agents as well but it felt like it required an ecosystem-wide change with many more considerations to it including protocols like A2A. The most immediate problem being credential exfiltration seemed like the right place to start since we have a lot of experience with secrets management.
sharathr 1 days ago [-]
From what I can tell, agent-vault does not solve identity, only how it's stored. For true agent identity, you should look into: https://github.com/highflame-ai/zeroid (author: full disclosure)
codebje 1 days ago [-]
ZeroID looks like a good idea to me. Lots there I'll be digging into over time, and related to the use of token exchange for authorising back-end M2M transactions on behalf of a user at the front-end.
As far as I can tell the parent post is talking about discovery for agent-to-agent communications, which is not something I have much interest in myself: it feels very "OpenClaw" to replace stable, deterministic APIs with LLMs.
bayff 1 days ago [-]
Yeah I'm leaning deterministic too for most needs, but I do think there's a future for agent to agent communication in more specialized cases. I think an agent having access to proprietary datasets / niche software can produce an interesting output. Say someone wants a drawing in autocad, communicating with a trained agent that has mcp access to these kind of tools seems like it could be beneficial to extend a more generalist agent's capabilities.
Would love to explore this train of thought and what we can do about it.
I use containers to isolate agents to just the data I intend for them to read and modify. If I have a data exfiltration event, it'll be limited to what I put into the container plus whatever code run inside the container can reach.
I have limited data in reach of the agent, limited network access for it, and was missing exactly this Vault. I'm relieved not to need to invent (vibe code) it.
The way we see it is that you'd still need to centrally store/manage secrets from a vault; this part isn't going anywhere and should still deliver secrets to the rest of your workloads.
The part that's new is Agent Vault which is really a delivery mechanism to help agents use secrets in a way that they don't get leaked. So, it would be natural to integrate the two.
This is definitely on the roadmap!
Since we're still in the early days of Agent Vault (AV), I wouldn't be surprised if there were many similarities. That said, AV likely takes a different approach with how its core primitives behave (e.g. defining specific services along with how their auth schemes work) and is specifically designed in an infra-forward way that treats agents as first-class citizens.
When designing AV, we think a lot about the workflows you might encounter. For instance, if you're designing a custom sandboxed agent, you might have a trusted orchestrator that needs to update credentials in AV, authenticate with it using workload identity, and mint a short-lived token to be passed into the sandbox for the agent; all of this is possible. I suspect that how we think about the logical design from an infra standpoint will, over time, create two different experiences for a proxy.
If I understand correctly regarding credential stripping, then yes. The idea is that you set the credentials in Agent Vault and define which services should be allowed through it, including the authentication method (e.g. Bearer token) to use with each credential.
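Conceptually, the service rules act like an allowlist keyed by host, with an auth scheme per entry. A minimal sketch of credential attachment at the proxy layer (the rule format, field names, and placeholder secrets are all hypothetical, not AV's actual schema):

```python
# Hypothetical service-rule table: which hosts the broker may proxy for,
# and how to attach the credential. Secrets here are placeholders.
SERVICE_RULES = {
    "api.openai.com": {"scheme": "bearer", "secret": "sk-PLACEHOLDER"},
    "api.github.com": {"scheme": "token",  "secret": "ghp_PLACEHOLDER"},
}

def attach_credentials(host: str, headers: dict) -> dict:
    """Inject the credential for an allowed host; refuse everything else."""
    rule = SERVICE_RULES.get(host)
    if rule is None:
        raise PermissionError(f"{host!r} is not an allowed service")
    out = dict(headers)  # never mutate the caller's headers
    if rule["scheme"] == "bearer":
        out["Authorization"] = f"Bearer {rule['secret']}"
    elif rule["scheme"] == "token":
        out["Authorization"] = f"token {rule['secret']}"
    return out
```

The agent's outbound requests carry no credentials at all; the proxy fills them in on the way out, so even a prompt-injected agent has nothing to leak.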
We don't have plans yet to integrate with Bitwarden at this time but this could be something worth looking into at some point. We definitely would like to give Agent Vault first-class support for Infisical as a storage for credentials (this way you'd get all the benefits of secrets rotation, dynamic secrets, point in time recovery, secret versioning, etc. that already come with it).
See my other comment regarding an example of this.
From what I'm seeing, executor.sh is an integration and execution layer for agents. Where Agent Vault shines is that it fits right into the tools and workflows that your agents are already using in an interface-agnostic way: API, CLI, SDK, MCP.
Put differently, the MITM architecture of Agent Vault (it operates closer to the network layer) lets the sandboxed agent do whatever it would've done normally, just with everything routed through AV; the agent is basically proxy-unaware.
You'd add HTTPS_PROXY to your sandbox environment and pre-configure it to trust the AV CA.
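Concretely, the sandbox just needs the conventional proxy environment variables plus the broker's CA in each client's trust store. A sketch of that environment (the address and CA path are illustrative, not AV defaults):

```python
import os

# Illustrative values; substitute your broker's address and CA bundle path.
AV_CA = "/etc/agent-vault/ca.pem"

sandbox_env = {
    **os.environ,
    "HTTPS_PROXY": "http://agent-vault.internal:8080",
    "SSL_CERT_FILE": AV_CA,        # OpenSSL-based clients (curl, etc.)
    "REQUESTS_CA_BUNDLE": AV_CA,   # Python requests
    "NODE_EXTRA_CA_CERTS": AV_CA,  # Node.js tools
}

# The agent process itself stays proxy-unaware: any HTTPS client that
# honors these standard env vars is transparently routed through the
# broker, e.g. subprocess.run(agent_cmd, env=sandbox_env).
```

Clients that ignore `HTTPS_PROXY` would bypass the broker, which is one reason to pair this with network-level egress controls as other commenters describe.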
The proxy itself currently implements a token-based auth scheme. Depending on your setup, you can have an orchestrator mint an ephemeral token to be passed to a sandboxed agent to authenticate with the proxy.
The current model assumes that you have a trusted entity that's able to save credentials to Agent Vault; that entity is likely not the agent itself, because that would mean the agent has access to the credentials. The agent is then simply configured to proxy requests through AV, which attaches credentials at the proxy layer. Here are two examples:
Example 1:
- You have a backend that saves an API Key to AV for a specific vault and defines the service rules for how that credential can be used.
- That same backend mints a session-scoped token for AV and invokes the creation of a pre-configured sandbox, passing that token into it.
- The agent in the sandbox does what it needs to do, requests fully proxied through AV.
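The session-scoped token in the second step can be as simple as a random secret bound to one vault with an expiry, so leaking it from the sandbox has a bounded blast radius. A sketch with hypothetical fields (not AV's actual token format):

```python
import secrets
import time

def mint_session_token(vault_id: str, ttl_seconds: int = 900) -> dict:
    """Ephemeral proxy-auth token the orchestrator passes into the sandbox."""
    return {
        "token": secrets.token_urlsafe(32),   # unguessable session secret
        "vault": vault_id,                    # bound to one vault only
        "expires_at": time.time() + ttl_seconds,
    }

def token_allows(record: dict, vault_id: str) -> bool:
    """The broker honors the token only for its own vault, pre-expiry."""
    return record["vault"] == vault_id and time.time() < record["expires_at"]
```

Even if the sandboxed agent exfiltrates this token, it only grants proxied access to one vault's services for the remainder of the run, never the underlying credentials.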
Example 2:
- A human operator manually goes into AV and adds an API Key.
- The human operator spins up an agent (OpenClaw, Claude Code, etc.) in an environment pre-configured to route requests through AV. This can be done using the non-cooperative sandbox mode in the AV CLI or through more manual configuration.
- The agent does what it needs to do, requests fully proxied through AV.
We're still working on smoothing it out, but hopefully this gives you a better idea of how it might work.
AV does have a permission system that supports agents saving credentials to it and then using the proxy themselves (maybe this is what you're targeting), but this isn't a use case I've personally explored as much; definitely worth looking into though.
It seems like it simply has to stop and tell me to handle the secrets myself and put them in the vault, because of the data-and-instructions-in-the-same-channel problem.
This is something we're going to improve significantly in the next week, including the ergonomics, since the current state of the feature isn't yet practical enough for mainstream developer use; ergonomics are so important for a devtool.
But yes credential brokering is what the industry seems to be converging on as a solution for how we might prevent credential exfiltration; the egress proxy is increasingly becoming a common pattern in the agent stack based on some of the conversations we've had with AI-forward companies.
The identity piece would be the next logical step at some point likely after we figure out the optimal ergonomics for deploying and integrating AV into different infrastructure / agent use cases first.
We actually work a lot with identity at Infisical (anything from workload identity to X.509 certificates) and had considered tackling the identity problem for agents as well but it felt like it required an ecosystem-wide change with many more considerations to it including protocols like A2A. The most immediate problem being credential exfiltration seemed like the right place to start since we have a lot of experience with secrets management.
As far as I can tell the parent post is talking about discovery for agent-to-agent communications, which is not something I have much interest in myself: it feels very "OpenClaw" to replace stable, deterministic APIs with LLMs.