Using proxies to hide secrets from Claude Code

(joinformal.com)

40 points | by drewgregory 5 days ago

4 comments

  • samlinnfer 1 hour ago
    Here's the setup I use on Linux:

    The idea is to completely sandbox the program and allow access only to specific bind mounted folders, while still keeping the frills of GUI programs, audio, and network access. runc [1] allows us to do exactly this.

    My config sets up a container with folders bind mounted from the host. The only difficult part is setting up a transparent network proxy so that all the programs that need internet just work.

    The container has its own process namespace, network namespace, etc., and has no access to the host except through the bind mounted folders. Network access is provided via a domain socket inside a bind mounted folder. GUI programs work by passing through a Wayland socket in a folder and setting the right environment variables.
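
    As a rough illustration of the domain-socket networking piece (not the actual scripts from [2]; the socket path and upstream proxy address are made up, and the transparent redirection inside the container is what net-conf.sh handles), a host-side forwarder could look something like this:

        # Host-side forwarder: listens on a Unix socket inside the bind mounted
        # net/ folder and pipes each connection to a proxy on the host.
        # SOCKET_PATH and UPSTREAM are illustrative only.
        import asyncio

        SOCKET_PATH = "net/proxy.sock"      # bind mounted into the container
        UPSTREAM = ("127.0.0.1", 3128)      # e.g. an HTTP proxy on the host

        async def pipe(reader, writer):
            try:
                while data := await reader.read(65536):
                    writer.write(data)
                    await writer.drain()
            finally:
                writer.close()

        async def handle(client_r, client_w):
            # open a TCP connection to the host-side proxy and shuttle bytes both ways
            upstream_r, upstream_w = await asyncio.open_connection(*UPSTREAM)
            await asyncio.gather(pipe(client_r, upstream_w), pipe(upstream_r, client_w))

        async def main():
            server = await asyncio.start_unix_server(handle, path=SOCKET_PATH)
            async with server:
                await server.serve_forever()

        asyncio.run(main())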

    The setup looks like this:

        * config.json - runc config
        * run.sh - runs runc and the proxy server
        * rootfs/ - runc rootfs (created by exporting a docker container) `mkdir rootfs && docker export $(docker create archlinux:multilib-devel) | tar -C rootfs -xvf -`
        * net/ - folder that is bind mounted into the container for networking
    
    Inside the container (inside rootfs/root):

        * net-conf.sh - transparent proxy setup
        * nfs.conf - transparent proxy nft config
        * start.sh - run as a user account
    
    Clone-able repo with the files: [2]

    [1] https://github.com/opencontainers/runc

    [2] https://github.com/dogestreet/dev-container

  • jackfranklyn 5 days ago
    The proxy pattern here is clever - essentially treating the LLM context window as an untrusted execution environment and doing credential injection at a layer it can't touch.

    One thing I've noticed building with Claude Code is that it's pretty aggressive about reading .env files and config when it has access. The proxy approach sidesteps that entirely since there's nothing sensitive to find in the first place.
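
    A minimal sketch of that injection layer (not from the article; the upstream URL, env var name, and placeholder token are all made up): a small local forwarding proxy that swaps a placeholder for the real credential, so the agent's environment only ever contains the placeholder.

        # GET-only forwarding proxy: the agent sends requests here with a
        # placeholder token; the proxy swaps in the real credential before
        # forwarding upstream. UPSTREAM, the env var, and the placeholder
        # are illustrative only.
        import os
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        UPSTREAM = "https://api.example.com"        # real API the agent calls
        REAL_TOKEN = os.environ["REAL_API_TOKEN"]   # only this process sees it
        PLACEHOLDER = "Bearer DUMMY"                # what the agent is told to send

        class InjectingProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                req = urllib.request.Request(UPSTREAM + self.path)
                for name, value in self.headers.items():
                    if name.lower() == "host":
                        continue
                    # replace the placeholder credential with the real one
                    if name.lower() == "authorization" and value == PLACEHOLDER:
                        value = f"Bearer {REAL_TOKEN}"
                    req.add_header(name, value)
                with urllib.request.urlopen(req) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

        HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()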

    Wonder if the Anthropic team has considered building something like this into the sandbox itself - a secrets store that the model can "use" but never "read".

    • iterateoften 1 hour ago
      It could even hash individual keys and scan context locally before sending to check if it accidentally contains them.
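
      Roughly like this (a toy sketch, not anything Claude Code actually does; it only catches secrets that leak verbatim as whole tokens):

          # Toy pre-send check: hash the known secrets, then hash each token in
          # the outgoing context and flag a match. The example secret is made up.
          import hashlib
          import re

          def sha256(s: str) -> str:
              return hashlib.sha256(s.encode()).hexdigest()

          secret_hashes = {sha256(s) for s in ["hunter2-example-api-key"]}

          def context_leaks_secret(context: str) -> bool:
              tokens = re.split(r"[\s\"'=:,]+", context)
              return any(sha256(t) in secret_hashes for t in tokens)

          print(context_leaks_secret("curl -H 'X-Api-Key: hunter2-example-api-key'"))  # True
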
    • mockbuild 55 minutes ago
      [dead]
  • TheRoque 1 hour ago
    At the moment I'm just using "sops" [1]. I have my env var files encrypted with age encryption. Then I run whatever I want to run with "sops exec-env ...", which basically forwards the secrets to your program.
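
    The pattern looks roughly like this (the filename and variable name are made up): the app reads its configuration from the environment, and sops decrypts into that one process's environment only.

        # app.py - run as: sops exec-env secrets.enc.env 'python app.py'
        # The decrypted values exist only in this process's environment;
        # nothing is written to disk in plaintext. Names are illustrative.
        import os

        db_url = os.environ["DATABASE_URL"]
        print("connecting with", db_url[:10] + "...")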

    I like it because it's pretty easy to use; however, it's not fool-proof: if the editor you use for editing the env vars crashes or is killed suddenly, it will leave a "temp" file with the decrypted vars on your computer. Also, if that same editor has AI features in it, it may read the decrypted vars anyway.

    - [1]: https://github.com/getsops/sops

    • jclarkcom 59 minutes ago
      I do something similar, but this only protects secrets at rest. If your app has an exploit, an attacker could just export all your secrets to a file.

      I prototyped a solution where I use an external debugger to monitor my app: when the app needs a secret it triggers a breakpoint, the debugger catches it, inspects the call stack of the function requesting the secret, and copies the secret into the process memory (intended to be erased immediately after use). Not 100% secure, but a big improvement, and a bit more flexible and auditable compared to a proxy.
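
      A much-simplified, in-process analogue of that idea (the function names and allowlist are made up; the real prototype uses an external debugger and breakpoints rather than stack inspection inside the same process):

          # Toy version: a secret "broker" inspects the call stack of whoever
          # asks for a secret and only hands it out to allowlisted call sites.
          import inspect
          import os

          ALLOWED_CALLERS = {"open_database"}  # made-up allowlist of function names

          def get_secret(name: str) -> str:
              caller = inspect.stack()[1].function
              if caller not in ALLOWED_CALLERS:
                  raise PermissionError(f"{caller!r} may not read {name!r}")
              return os.environ[name]  # the real version injects into memory instead

          def open_database():
              password = get_secret("DB_PASSWORD")   # allowed caller
              # ... connect, then overwrite/drop the secret as soon as possible
              return bool(password)

          def exfiltrate():
              return get_secret("DB_PASSWORD")       # raises PermissionError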

  • dang 2 hours ago
    Recent and related: https://news.ycombinator.com/item?id=46623126 (via Ask HN: How do you safely give LLMs SSH/DB access? - https://news.ycombinator.com/item?id=46620990).