trust through transparency

Security isn't a feature

It's the architecture. Local first. Open source. No telemetry. No training on your code. Every function does what it says and nothing else.

🔒

Local first

  • All data stored on your machine by default
  • Memory, tasks, journal, settings — never leave your device
  • Cloud sync is opt-in, per-feature, revocable anytime
  • Works fully offline with local models

📖

Open source

  • Every line of code is public — extension, IDE, companion, CLI
  • Ava can read and explain her own source code
  • Fork it, audit it, verify every claim we make
  • Apache 2.0 licence

🚫

No hidden behaviour

  • Zero telemetry, zero analytics, zero tracking
  • Your code is never used to train AI models
  • No phone-home calls from any tool
  • Every function does exactly what it says. Nothing hidden.

the vault

How your secrets are handled

Your API keys and credentials never leak into conversation history, never appear in logs, and never travel to our servers.

OS keychain, not config files

Every API key you add lives in your operating system's secure keychain (SecretStorage in VS Code, Keychain on macOS, Credential Manager on Windows). Never in a plaintext file, never committed to git.
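The keychain flow can be sketched in TypeScript. VS Code's SecretStorage API exposes `store`, `get`, and `delete`; the `InMemorySecrets` stand-in and the `ava.apiKey.*` naming below are hypothetical, added only so the sketch runs outside the editor:

```typescript
// Sketch of keychain-backed key storage, mirroring the shape of VS Code's
// SecretStorage API. InMemorySecrets is a hypothetical stand-in; in the
// real extension these calls are backed by the OS keychain.
interface SecretStorageLike {
  store(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
  delete(key: string): Promise<void>;
}

class InMemorySecrets implements SecretStorageLike {
  private vault = new Map<string, string>();
  async store(key: string, value: string) { this.vault.set(key, value); }
  async get(key: string) { return this.vault.get(key); }
  async delete(key: string) { this.vault.delete(key); }
}

// Keys are namespaced per provider and never written to a settings file.
async function saveApiKey(secrets: SecretStorageLike, provider: string, key: string) {
  await secrets.store(`ava.apiKey.${provider}`, key);
}

async function loadApiKey(secrets: SecretStorageLike, provider: string) {
  return secrets.get(`ava.apiKey.${provider}`);
}
```

In the shipped extension, `context.secrets` (backed by the OS keychain) would take the place of the stand-in.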

Capability-based grants

Ava never sees your whole vault. She asks for a specific secret, you approve it for the current chat, she receives an opaque handle. Value substitution happens at tool-execution time — the real secret is never in the model's context.

Session-scoped working set

Granted secrets live in an in-memory working set scoped to the current chat. Wiped on new chat, wiped on extension restart. Nothing persists unless you explicitly save it.
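The grant-substitute-wipe cycle described above can be sketched as an in-memory map from opaque handles to values (class and method names here are hypothetical, not the shipped API):

```typescript
import { randomUUID } from "node:crypto";

// Sketch of capability-based secret grants: the model only ever sees an
// opaque handle; the real value is substituted at tool-execution time, and
// the working set is wiped when the chat ends or the extension restarts.
class SecretWorkingSet {
  private grants = new Map<string, string>(); // handle -> real secret

  /** User approved a secret for this chat; return an opaque handle. */
  grant(secret: string): string {
    const handle = `secret://${randomUUID()}`;
    this.grants.set(handle, secret);
    return handle;
  }

  /** Runs at tool-execution time, never inside the model's context. */
  resolve(text: string): string {
    let out = text;
    for (const [handle, secret] of this.grants) {
      out = out.split(handle).join(secret);
    }
    return out;
  }

  /** New chat or restart: drop every grant. */
  wipe(): void {
    this.grants.clear();
  }
}
```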

Streaming redactor

Defence-in-depth: every stream of text Ava produces gets scanned for high-confidence patterns (Anthropic keys, OpenAI keys, GitHub tokens, AWS keys, JWTs, Stripe keys, PEM blocks). Matches get replaced with [REDACTED:kind] before the text reaches the UI, memory, or disk.
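A redaction pass of this shape might look like the following; the patterns shown are illustrative stand-ins, not the shipped regex list:

```typescript
// Sketch of the streaming redactor: a few high-confidence patterns (the
// real list is longer) and a replace pass that runs on every chunk before
// it reaches the UI, memory, or disk.
const PATTERNS: Array<[kind: string, re: RegExp]> = [
  ["anthropic", /sk-ant-[A-Za-z0-9_-]{20,}/g],
  ["openai",    /sk-[A-Za-z0-9]{20,}/g],
  ["github",    /gh[pousr]_[A-Za-z0-9]{36,}/g],
  ["aws",       /AKIA[0-9A-Z]{16}/g],
  ["pem",       /-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----/g],
];

function redact(chunk: string): string {
  let out = chunk;
  for (const [kind, re] of PATTERNS) {
    out = out.replace(re, `[REDACTED:${kind}]`);
  }
  return out;
}
```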

How your data is handled

encrypted

API keys

Stored in your OS keychain (SecretStorage in VS Code, the system keychain in the IDE). Never transmitted to our servers. Never logged.

local

Conversations

Stored locally in ~/.ava/. Only synced to cloud if you connect an account AND enable cloud sync. Deletable anytime.

local

Memory

Stored locally by default. Cloud sync is opt-in. Memories are per-user and per-project scoped. You control what's saved.

local

Tasks & Journal

Local first. Cloud sync optional. Your productivity data stays on your machine unless you choose otherwise.

opt-in

Shared Learning

Off by default. When enabled, only anonymised technical patterns are shared — never personal data, code, or preferences.

minimal

Usage data

Token counts are tracked for billing on platform accounts. No conversation content is stored server-side. BYOK usage is not tracked at all.

runtime guards

The walls inside Ava

Every tool call passes through safety checks before it executes. Ava can ask to do anything — she can't actually do anything unsafe.

Path traversal validation

File reads and writes are checked against the project root and Ava's home directory. ../../ escapes, sibling-directory attacks, and absolute paths outside the sandbox are rejected at the tool boundary.
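The boundary check can be sketched with Node's path module (function name and error text are hypothetical): resolving the requested path collapses `../` segments, so escapes are caught even when the input looks harmlessly relative.

```typescript
import * as path from "node:path";

// Sketch of the tool-boundary path check: resolve the requested path and
// verify it stays inside one of the allowed roots (project root, Ava home).
function assertInsideSandbox(roots: string[], requested: string): string {
  for (const root of roots) {
    const resolved = path.resolve(root, requested);
    const rel = path.relative(path.resolve(root), resolved);
    // Inside the root iff the relative path doesn't climb out of it.
    if (rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel))) {
      return resolved;
    }
  }
  throw new Error(`path outside sandbox: ${requested}`);
}
```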

Tool risk levels

Every tool declares a risk level — safe, write, or dangerous. Permission modes (Strict / Balanced / Autonomous) gate what runs automatically and what requires your confirmation.
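The gating rule fits in a few lines; a sketch assuming the three modes and risk levels named above (the function itself is illustrative):

```typescript
// Sketch of the permission gate: which tool calls pause for confirmation,
// given the user's mode and the tool's declared risk level.
type Risk = "safe" | "write" | "dangerous";
type Mode = "strict" | "balanced" | "autonomous";

function needsConfirmation(mode: Mode, risk: Risk): boolean {
  if (risk === "safe") return false;        // safe tools always auto-run
  if (mode === "autonomous") return false;  // everything auto-allowed
  if (mode === "balanced") return risk === "dangerous"; // writes auto-run
  return true;                              // strict: confirm writes + dangerous
}
```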

Credential pattern blocking

memory_save and support_request tools scan incoming text for 12+ credential patterns (API keys, JWTs, AWS / GitHub / Stripe / Slack tokens, PEM blocks, DB URLs). Matches block the tool call entirely — credentials never land in memory or a support ticket.
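Where the streaming redactor rewrites output, these tools reject the whole call. A compressed sketch, with two patterns standing in for the 12+ and a hypothetical helper name:

```typescript
// Sketch of credential pattern blocking: a match aborts the tool call
// entirely, so nothing is written to memory or a support ticket.
const BLOCKED: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                                   // AWS access key
  /eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+/,  // JWT
];

function guardMemorySave(text: string): void {
  for (const re of BLOCKED) {
    if (re.test(text)) {
      throw new Error("memory_save blocked: credential pattern detected");
    }
  }
  // ...persist to memory only when no pattern matched
}
```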

Prompt injection resistance

Tool outputs that look like instructions are treated as data, not commands. Ava never executes text returned by a tool as if you had typed it yourself.

You control what she can do

Three permission modes. You decide how much autonomy Ava gets.

Strict

Every write operation and dangerous tool requires your confirmation. Maximum control.

Balanced

Write operations auto-allowed. Dangerous tools still require confirmation. The default.

Autonomous

Everything auto-allowed. Plans and user questions still pause for your input.

platform integrity

Signup abuse protection

Every signup passes through three gates before an account exists. Free tokens stay available for real users, not for farming.

Disposable email blocklist

Known throwaway providers (mailinator, tempmail, guerrillamail, the ese.kr family — 30+ exact domains plus 6 wildcard patterns) are blocked by a database trigger. Bypassing the client doesn't help — the rejection is server-side, transactional, and atomic.
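The shipped check runs inside the Postgres trigger; this TypeScript sketch shows only the matching rule, with example domains rather than the real blocklist:

```typescript
// Sketch of the blocklist rule: exact domain matches plus wildcard
// suffixes that catch any subdomain. Entries here are examples only.
const EXACT = new Set(["mailinator.com", "guerrillamail.com", "tempmail.com"]);
const WILDCARDS = [".mailinator.com", ".ese.kr"]; // any subdomain matches

function isDisposable(email: string): boolean {
  const domain = email.split("@").pop()?.toLowerCase() ?? "";
  if (EXACT.has(domain)) return true;
  return WILDCARDS.some((suffix) => domain.endsWith(suffix));
}
```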

Signup forensics log

Every attempt — allowed or blocked — is recorded with timestamp, email, domain, and reason. Lets us see abuse patterns as they emerge and add new blocklist entries without a deploy.

Handle-new-user gate

A Postgres trigger rolls back the auth insert when the domain matches. No orphan accounts, no partial states. If the signup is rejected, nothing was created.

🛡️
Built-in security team

Security Mode: type !! and audit everything

Five specialist personas run a coordinated security audit on your codebase. Recon maps the attack surface. Scanner checks OWASP Top 10. CVE Researcher looks up known vulnerabilities in your dependencies. Verifier confirms exploitability. Reporter generates severity-sorted findings.

Available on every plan. Including free.

Responsible disclosure

Found a security vulnerability? We take every report seriously. We acknowledge within 24 hours and work with you on a fix before any public disclosure. Our coordinated disclosure window is 90 days.

Prefer a form? Open a support ticket with category “security”.

Don't trust us. Read the code.

Every claim on this page is verifiable in the source. That's the point.