Last updated: March 4, 2026
Security is foundational to how we build and operate Orion. This page provides a transparent overview of how we protect your data across every platform Orion supports -- desktop, mobile, messaging apps, and phone calls made on your behalf.
Orion is built by the Zinley team (zinley.com). We are actively growing and continuously improving our security posture. If you operate in a highly regulated or sensitive environment, we encourage you to evaluate Orion (and any AI tool) carefully. This page is intended to provide the transparency needed for that evaluation.
To report a vulnerability, contact security@meetorion.app or submit it through our GitHub Security page. For security-related inquiries, use the same address.
The following lists all third-party services that process your data, organized by level of data access. Your data is never used for model training and is retained only for the minimum period needed for service operation.
AWS - Sees and stores your data: Our infrastructure is primarily hosted on AWS. Most of our servers are in the US, with some latency-critical servers located in AWS regions in Asia (Tokyo) and Europe (London).
Cloudflare - Sees your data: We use Cloudflare as a reverse proxy in front of parts of our API and website to improve performance and security.
OpenAI - Sees your data: We rely on OpenAI's models to power AI responses and task completion. We have a zero data retention agreement with OpenAI.
Anthropic - Sees your data: We rely on Anthropic's models to power AI responses and task completion. We have a zero data retention agreement with Anthropic.
Google Cloud Vertex API - Sees your data: We rely on some Gemini models offered over Google Cloud's Vertex API. We have a zero data retention agreement with Vertex.
Datadog - Sees no user content: We use Datadog for logging and monitoring. Logs do not contain any of your conversations, files, or personal content.
Stripe - Sees no user content: We use Stripe to handle billing. Stripe will store your personal data (name, credit card, address).
WorkOS - Sees no user content: We use WorkOS to handle auth. WorkOS may store some personal data (name, email address).
None of our infrastructure is located in China. We do not directly use any Chinese company as a subprocessor, and to our knowledge, none of our subprocessors do either.
All team members are granted least-privilege access. Multi-factor authentication is enforced for AWS. Access is controlled through both network-level restrictions and secret management. Access reviews are conducted quarterly, and just-in-time privileged access requires approval workflows. Conversation content is not accessible to any Zinley team member.
We conduct penetration testing at least annually through reputable third-party firms. Visit trust.meetorion.app to request reports.
Orion is an interconnected personal representative that takes real actions on your behalf, which makes it a high-value target for adversarial attacks. If an attacker can manipulate the AI into following malicious instructions -- through a file, a message, or a webpage -- the consequences can be significant. The following describes our defense mechanisms.
Prompt injection occurs when malicious instructions are embedded in seemingly normal content, attempting to manipulate the AI into performing unauthorized actions. We defend against this at multiple layers:
This is a detection-and-wrapping system, not a hard block. The AI is instructed to respect the security boundaries, but as with all LLM-based defenses, effectiveness depends on the model following instructions correctly. The permission system (described below) serves as an additional safeguard -- even if an injection bypasses detection, actions still require your explicit approval.
Phone calls carry real-world risk and receive dedicated protection:
After the representative makes file changes, a quality gate triggers verification. The representative is prompted to review its own work before delivering results. For larger changes (10+ edits), sub-representatives with reviewer personas are spawned for a more thorough review. This operates within the same LLM -- it is a structured self-review process, not an independent verification system. Hard limits on retries and time prevent infinite loops.
In addition to protecting the representative from external threats, we ensure that the representative itself cannot act without appropriate oversight and user consent.
The permission model is our most critical security layer. Even if other defenses are bypassed -- such as prompt injection evading detection or a malicious skill passing the scanner -- the representative cannot act without your explicit approval.
Orion's primary containment mechanism for file access:
/ or C:\) is explicitly blocked.. patterns attempting to escape allowed directoriesThis is not an OS-level sandbox. Electron's renderer sandbox is disabled to support the IPC bridge that makes Orion work. Containment relies on the guardrail system checking every file and shell operation, combined with the permission model above.
Orion supports custom skills -- reusable routines that extend its capabilities. Every skill undergoes security scanning:
Use skills from trusted sources. We recommend using skills provided and maintained by the Orion team, as they are vetted, regularly updated, and automatically improved. Additional official skills are added over time. If you choose to use third-party skills, treat them as executable code: do not apply skill files from untrusted sources, do not import skills that request disabling security features, and review the skill's behavior before enabling it.
The client application can load guardrail rules into the representative's context covering security (SQL injection, XSS, exposed tokens), code quality, data handling, and tool misuse. When present, the representative reads and follows these rules before taking risky actions. These complement the permission system but are advisory -- they guide the representative's behavior rather than enforce it at the system level.
Orion runs on desktop (macOS, Windows, Linux), mobile (iOS, Android), CLI, and through messaging apps (Telegram, Discord, WhatsApp, Slack, iMessage).
unsafe-eval is excludedSameSite=Strict and Secure flagsIf you are behind a corporate proxy or firewall, allow the following domains:
snowx.ai and *.snowx.ai: API requests, AI processing, and WebSocket connectionsmeetorion.app: Authentication, dashboard, and account managementapi.zinley.dev: Skills and charactersstorage.googleapis.com/meetorion: App updates and releasesWhen you access Orion through messaging apps, your messages are processed through our secure API. Message content is retained for up to 30 days for service operation and then deleted. Each platform connection uses its own authentication tokens.
Call audio is processed in real-time and not recorded or stored beyond the active session. Call metadata (time, duration, outcome) is retained for up to 90 days for your records.
Orion makes AI requests to our server when you chat, when it makes calls, when it runs tasks, and sometimes in the background to build context or leverage its memory. A typical request includes your conversation history, relevant files, memory context, and task state. This goes to our AWS infrastructure, then to the appropriate model provider (OpenAI/Anthropic/Google).
All data flows over TLS. Even if you have configured your own API key, requests are routed through our AWS infrastructure, as request construction occurs server-side. Direct routing to enterprise AI deployments or self-hosted servers is not currently supported.
Zinley does not build AI models. We use third-party models from OpenAI, Anthropic, and Google, and we maintain zero data retention agreements with all of them. Your data is never used for model training.
All users -- Free, Plus, Pro, Max, and Enterprise -- receive the same level of data protection. There are no tiers or configuration options that affect this policy.
Enterprise customers can request Zero Data Retention (ZDR) configuration. Contact sales@meetorion.app.
Orion can semantically index your files and build persistent memory of your preferences and routines. Indexing is enabled by default and can be disabled in settings.
When enabled, Orion scans permitted files on your device and computes secure hashes. Files and directories in your ignore settings are excluded. On our end, we chunk and embed the data, storing embeddings securely with obfuscated device identifiers and file paths for filtering. Indexing is used solely for search and coordination -- never for model training.
Blocking specific files: Configure your device privacy settings to exclude specific files. Orion will make a best effort to exclude those files from all requests.
We collect operational metrics (latency, reliability, usage patterns) to maintain service performance. This does not include your conversation content, file paths, or personal data. We also collect error logs and crash reports for debugging purposes -- strictly operational data, no personal content.
You can disable telemetry and error reporting in your account settings. Disabling these does not affect Orion's core functionality.
For organizations with stricter requirements:
No software system can guarantee complete security. AI security in particular is a rapidly evolving field where new attack techniques emerge regularly and defenses must adapt. We recommend the following best practices:
Our security architecture is designed with defense in depth -- multiple independent layers so that if one is bypassed, others provide protection. We are committed to transparency about both our capabilities and their limitations.
To delete your account, navigate to Settings, select "Advanced," and choose "Delete Account." All associated data -- including indexed files, memory, conversation history, and connected platform data -- is permanently removed. Data is not moved to a backup or archive.
To report a vulnerability, follow the guide on our GitHub Security page or email security@meetorion.app. We will acknowledge your report within 5 business days.
We request that you allow reasonable time for us to address vulnerabilities before public disclosure. We do not pursue legal action against security researchers acting in good faith. A bug bounty program is under evaluation.
Company: Zinley (zinley.com)
Security Team: security@meetorion.app
General Support: support@meetorion.app
Enterprise Sales: sales@meetorion.app
Trust Center: trust.meetorion.app