Using TameFlare with LangChain: Zero-Code Agent Governance
LangChain agents call external APIs with zero built-in security. Add policy enforcement, credential isolation, and audit logging without changing a single line of agent code.
Why LangChain agents need governance
LangChain is the most popular framework for building AI agents. Its tool-calling system lets agents interact with GitHub, Slack, databases, payment APIs, and more. But LangChain has no built-in security layer.
When your LangChain agent calls a tool, the HTTP request goes directly to the upstream API. There is no policy check, no approval workflow, no audit trail, and no way to block a destructive action before it happens.
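For example, a typical hand-rolled LangChain tool wraps the HTTP call directly. The tool below is hypothetical, but the pattern is standard: nothing between the @tool decorator and the upstream API can inspect or veto the request.

from langchain_core.tools import tool
import os
import requests

@tool
def delete_branch(repo: str, branch: str) -> str:
    """Delete a branch in the given GitHub repository (repo is "owner/name")."""
    # The request goes straight to api.github.com with the real token attached
    resp = requests.delete(
        f"https://api.github.com/repos/{repo}/git/refs/heads/{branch}",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    return f"GitHub responded with status {resp.status_code}"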
This is fine for prototyping. It is not fine for production.
How TameFlare works with LangChain
TameFlare is a transparent HTTP/HTTPS proxy. It sits between your LangChain process and the internet. Every outbound HTTP request is intercepted, parsed into a structured action, and evaluated against your policies.
The key insight: you do not change your LangChain code at all. TameFlare works at the network level, not the application level.
# Before: LangChain agent runs with full access
python langchain_agent.py
# After: LangChain agent runs through TameFlare proxy
npx tf run --gateway "langchain-prod" python langchain_agent.py
When you run your agent with tf run, TameFlare sets the HTTP_PROXY and HTTPS_PROXY environment variables for the child process. Python's standard HTTP clients, including requests and httpx (which LangChain and its integrations use under the hood), honor these variables by default, so every outbound request is routed through the proxy.
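You can see the mechanism from inside the agent process: requests resolves its proxies from the environment, so a quick check (not TameFlare-specific; the address shown is illustrative) confirms where traffic will be sent.

import requests

# With HTTP_PROXY / HTTPS_PROXY set by `tf run`, requests picks up the proxy automatically
print(requests.utils.get_environ_proxies("https://api.github.com"))
# e.g. {'http': 'http://127.0.0.1:8888', 'https': 'http://127.0.0.1:8888'}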
Step-by-step setup
1. Install TameFlare CLI
npm install -g @tameflare/cli
2. Initialize and create a gateway
npx tf init
Then open the dashboard at http://localhost:3000 and create a gateway using the wizard (the examples below assume it is named "langchain-prod"). Or configure connectors and permissions from the CLI:
npx tf connector add github --token-env GITHUB_TOKEN
npx tf connector add openai --token-env OPENAI_API_KEY
npx tf permissions set --gateway "langchain-prod" --connector github --action "github.issue.*" --decision allow
npx tf permissions set --gateway "langchain-prod" --connector github --action "github.branch.delete" --decision deny
npx tf permissions set --gateway "langchain-prod" --connector openai --action "*" --decision allow
3. Run your LangChain agent
npx tf run --gateway "langchain-prod" python langchain_agent.py
That is it. Your agent now runs through the proxy. Every API call is logged, every action is evaluated against your policies, and credentials are injected by the gateway instead of being visible to the agent process.
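For reference, the agent script needs nothing TameFlare-specific. A minimal sketch of what langchain_agent.py might look like (the model name, repository, and tool are placeholders):

# langchain_agent.py: identical whether run directly or under `npx tf run`
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def create_issue(repo: str, title: str) -> str:
    """Create a GitHub issue in the given repository."""
    ...  # implementation elided; it would call the GitHub API over HTTP

llm = ChatOpenAI(model="gpt-4o").bind_tools([create_issue])
print(llm.invoke("Open an issue in acme/webapp about the flaky login test"))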
What gets governed
TameFlare's connectors parse raw HTTP requests into structured actions (a sketch of the parsed form follows the tables below). Here is what the GitHub connector recognizes from LangChain tool calls:
| LangChain tool call | TameFlare action | What it means |
| --- | --- | --- |
| Create issue | github.issue.create | Agent creates a GitHub issue |
| Merge PR | github.pr.merge | Agent merges a pull request |
| Delete branch | github.branch.delete | Agent deletes a branch |
| Push to repo | github.contents.update | Agent pushes code changes |
| Create release | github.release.create | Agent creates a release |
And the OpenAI connector recognizes these calls:

| LangChain call | TameFlare action | What it means |
| --- | --- | --- |
| Chat completion | openai.chat.create | Agent calls GPT-4 / Claude |
| Embedding | openai.embedding.create | Agent generates embeddings |
| Image generation | openai.image.create | Agent generates images |
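Conceptually, the connector turns a raw request into something a policy engine can reason about. The field names below are illustrative only, not TameFlare's exact schema:

# Raw request from the agent:
#   DELETE https://api.github.com/repos/acme/webapp/git/refs/heads/feature-x
# Parsed by the GitHub connector into roughly:
action = {
    "connector": "github",
    "action": "github.branch.delete",
    "parameters": {"repo": "acme/webapp", "branch": "feature-x"},
}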
Example: block destructive actions
Here is how to build a policy that blocks branch deletion and requires approval for merges to main:
- Open the TameFlare dashboard
- Go to Gateways and select your gateway
- Click Policy builder
- Create two rules:
  - github.branch.delete → Deny with reason "Branch deletion is not allowed"
  - github.pr.merge where parameters.base equals main → Require approval
Now when your LangChain agent tries to delete a branch, it gets a 403 response. When it tries to merge to main, the proxy holds the connection until a human approves via the dashboard or CLI.
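From the agent's side, a denied action is just an HTTP error that the tool can surface back to the model. A hypothetical sketch, reworking the delete_branch tool from earlier (whether the deny reason appears in the response body is an assumption):

from langchain_core.tools import tool
import requests

@tool
def delete_branch(repo: str, branch: str) -> str:
    """Delete a branch in the given GitHub repository."""
    # No Authorization header here: under TameFlare the gateway injects the credential
    resp = requests.delete(
        f"https://api.github.com/repos/{repo}/git/refs/heads/{branch}",
        timeout=30,
    )
    if resp.status_code == 403:
        # TameFlare denied the action; surface the response so the model can adapt
        return f"Blocked by policy: {resp.text}"
    return f"GitHub responded with status {resp.status_code}"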
Credential isolation
A critical security benefit: your LangChain agent never sees real API keys.
Without TameFlare:
# Agent has direct access to your GitHub token
import os
os.environ["GITHUB_TOKEN"] = "ghp_real_token_here"
With TameFlare:
# Agent has no API keys — the proxy injects them at request time
# os.environ["GITHUB_TOKEN"] is not set
# The proxy reads credentials from its encrypted vault
Even if your LangChain agent is compromised (prompt injection, malicious tool, supply chain attack), the attacker cannot extract API keys because the agent process never has them.
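To make that concrete, imagine a prompt-injected or malicious tool that scans the process environment for secrets. A hypothetical sketch:

import os

# A compromised tool looks for credentials to exfiltrate
leaked = {k: v for k, v in os.environ.items() if "TOKEN" in k or "KEY" in k}
print(leaked)
# Under `npx tf run` there is nothing worth stealing: the real GitHub and OpenAI
# credentials live in the proxy's encrypted vault, not in the agent's environment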
Monitoring and audit
Every API call your LangChain agent makes is logged in the TameFlare traffic log:
npx tf logs
# 2026-02-09 14:32:01 | langchain-prod | github.issue.create | ALLOW | 142ms
# 2026-02-09 14:32:03 | langchain-prod | github.pr.merge | HOLD | waiting...
# 2026-02-09 14:32:15 | langchain-prod | github.pr.merge | ALLOW | 89ms (approved by admin@company.com)
# 2026-02-09 14:32:18 | langchain-prod | github.branch.delete | DENY | 1ms
Or open the dashboard Traffic page for a real-time view with filters, search, and export.
Works with any LangChain setup
TameFlare works with:
- langchain
- langchain-community
- langchain-openai

No changes to your agent code. No special LangChain integration. The proxy is transparent.
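Custom tools are covered too, as long as they use an HTTP client that honors proxy environment variables (requests and httpx both do by default). For example, a hypothetical Slack tool built on httpx would also be routed through the gateway, assuming a Slack connector is configured:

import httpx
from langchain_core.tools import tool

@tool
def post_to_slack(channel: str, text: str) -> str:
    """Post a message to a Slack channel."""
    # httpx reads HTTP_PROXY / HTTPS_PROXY by default, so this call also goes
    # through TameFlare; the gateway injects the Slack credential
    resp = httpx.post(
        "https://slack.com/api/chat.postMessage",
        json={"channel": channel, "text": text},
    )
    return f"Slack responded with status {resp.status_code}"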
Getting started
- Create a free account (3 gateways, 1,000 actions/month)
- Install the CLI:
  npm install -g @tameflare/cli
- Create a gateway and add connectors in the dashboard
- Run your agent:
  npx tf run --gateway "my-gw" python agent.py
- Monitor traffic in the dashboard