Overview
The VARL API provides programmatic access to our biological intelligence platform. It enables developers, researchers, and institutions to integrate digital twin simulations, molecular pathway analysis, biomarker detection, and predictive modeling into their own workflows — all through a single, unified RESTful interface.
Every endpoint follows a consistent design philosophy: submit biological context, receive structured intelligence. Whether you are building a clinical decision support tool, automating drug screening pipelines, or constructing real-time patient monitoring dashboards, the VARL API abstracts the complexity of computational biology into clean, composable operations.
Base URL
All API requests are made to the following base URL. All endpoints require HTTPS. HTTP requests will be rejected.
https://api.varl.bio/v1

Architecture
The API is organized around five primary resource groups that mirror VARL's scientific workflow. Each group encapsulates a distinct phase of the biological intelligence pipeline, from data ingestion to actionable prediction.
Digital Twins
Virtual representations of biological systems — from individual cells and protein networks to complete organ models. Create twins from genomic profiles, configure environmental parameters, attach real-time patient data, and query system state at any resolution. Twins persist across sessions and evolve as new data is integrated.
Simulations
Run computational experiments on digital twins. Introduce drug candidates, model pathway disruptions, simulate genetic mutations, and observe cascading effects across biological subsystems. Simulations execute in parallel and return time-series data with configurable granularity. Batch operations support running thousands of scenarios simultaneously.
Biomarkers
Detect, track, and analyze molecular biomarkers across patient cohorts or simulation outputs. The biomarker engine identifies statistically significant markers from multi-omics data, correlates them with disease states, and provides confidence-scored recommendations for diagnostic and therapeutic targets.
Predictions
AI-powered forecasting endpoints built on VARL's proprietary biological language models. Submit patient data, molecular profiles, or simulation snapshots and receive predictions about disease trajectories, treatment efficacy, adverse event probability, biomarker evolution, and system-level outcomes. Every prediction includes confidence intervals and explainability metadata.
Datasets
Access curated biological datasets spanning genomics, proteomics, metabolomics, and clinical trial records. Upload proprietary data for secure analysis, or query VARL's reference library of over 2.4 million annotated molecular interactions. Datasets support streaming for large-scale operations and are versioned for reproducibility.
Request Format
The API accepts JSON-encoded request bodies and returns JSON-encoded responses. All timestamps are in ISO 8601 format. Pagination follows cursor-based patterns for consistent performance across large result sets. Every response includes a request_id field for debugging and audit purposes.
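Because every response carries an ISO 8601 timestamp and a request_id, it is worth normalizing both as soon as a response arrives. The sketch below is one way to do that in Python; the field names match the examples in this section, but the helper itself is illustrative, not part of the SDK.

```python
from datetime import datetime

def parse_response_meta(body: dict):
    """Extract the request_id and created_at timestamp from a VARL
    response body, for logging and audit trails."""
    request_id = body["request_id"]
    # ISO 8601 with a trailing "Z"; normalize it for
    # datetime.fromisoformat, which only accepts a bare "Z" on 3.11+.
    raw = body["created_at"].replace("Z", "+00:00")
    created_at = datetime.fromisoformat(raw)
    return request_id, created_at
```

Logging the request_id alongside every response makes it easy to hand a failing call to support or match it against your audit trail.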
// Create a digital twin from a genomic profile
{
  "organism": "homo_sapiens",
  "system": "cardiovascular",
  "resolution": "cellular",
  "source_data": {
    "genomic_profile": "ds_gp_82kf9n",
    "clinical_history": "ds_ch_4m2j7p"
  },
  "config": {
    "time_horizon": "365d",
    "update_frequency": "real_time",
    "fidelity": "high"
  }
}
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"status": "initializing",
"organism": "homo_sapiens",
"system": "cardiovascular",
"resolution": "cellular",
"node_count": 847293,
"edge_count": 2341876,
"created_at": "2026-02-14T09:32:11Z",
"ready_at": null,
"request_id": "req_v4rl_7k2m9n"
}

SDKs & Libraries
Official client libraries handle authentication, request signing, automatic retries, and response parsing. They are the recommended way to interact with the VARL API in production environments.
varl-sdk (Python) — v2.4.1
pip install varl-sdk

@varl/sdk (JavaScript/TypeScript) — v2.4.0
npm install @varl/sdk

varl (R) — v1.8.3
install.packages("varl")

Versioning
The API uses date-based versioning. The current version is 2026-02-01. When breaking changes are introduced, a new version date is published and the previous version remains available for 12 months. You can pin your integration to a specific version by including the VARL-Version header in your requests.
Status
Current API uptime is 99.97%. System status, incident reports, and scheduled maintenance windows are published at status.varl.bio. Subscribe to receive real-time notifications via email or webhook.
Authentication
The VARL API uses API keys to authenticate requests. Every request must include a valid key in the Authorization header. Keys are scoped to organizations and carry specific permissions that determine which resources and operations are accessible.
API keys are sensitive credentials. Do not expose them in client-side code, public repositories, or log files. If a key is compromised, revoke it immediately from your dashboard and generate a new one. All key rotation events are logged and auditable.
Obtaining API Keys
API keys are generated from the VARL Dashboard under Settings → API Keys. Each organization can create up to 50 active keys. When creating a key, you must assign it a name, select its permission scope, and optionally restrict it to specific IP ranges or environments.
Keys are displayed only once at the time of creation. Store them securely in environment variables or a secrets manager. VARL does not store plaintext keys — only a cryptographic hash is retained on our servers.
Making Authenticated Requests
Include your API key in the Authorization header using the Bearer scheme. This is the only supported authentication method. Query parameter authentication is not supported for security reasons.
Authorization: Bearer varl_sk_live_4f8k2m9n7j3p1x...
curl -X GET https://api.varl.bio/v1/twins \
  -H "Authorization: Bearer varl_sk_live_4f8k2m9n7j3p1x" \
  -H "Content-Type: application/json" \
  -H "VARL-Version: 2026-02-01"
from varl import Client

# The SDK reads VARL_API_KEY from the environment by default
client = Client()

# Or pass it explicitly
client = Client(api_key="varl_sk_live_4f8k2m9n7j3p1x")
Key Types
VARL issues two types of API keys. Each serves a distinct purpose and carries different security implications. Using the wrong key type in production is a common source of integration issues.
Live Keys
Used in production environments. Live keys have access to real biological data, execute actual simulations on VARL's compute infrastructure, and consume your organization's quota. All operations performed with live keys are logged, billed, and subject to rate limits. Results from live key operations are persisted and available for downstream analysis.
Test Keys
Used in development and staging environments. Test keys operate against a sandboxed copy of the API that returns synthetic data. Simulations complete instantly with deterministic outputs. No quota is consumed and no data is persisted. Test keys are ideal for building integrations, running CI/CD pipelines, and validating request formats without incurring costs.
Permission Scopes
Each API key is assigned one or more permission scopes that control which endpoints it can access. Scopes follow a resource-based model with granular read/write separation. Apply the principle of least privilege — assign only the scopes required for each key's intended use case.
| Scope | Access | Description |
|---|---|---|
| twins:read | Read | List and retrieve digital twins, query twin state and metadata |
| twins:write | Write | Create, update, configure, and delete digital twins |
| simulations:run | Execute | Start simulations, submit batch jobs, cancel running operations |
| simulations:read | Read | Retrieve simulation results, status, and time-series outputs |
| biomarkers:read | Read | Query biomarker databases, retrieve detection results |
| predictions:run | Execute | Submit prediction requests, access forecasting models |
| datasets:read | Read | Access curated datasets and reference libraries |
| datasets:write | Write | Upload proprietary data, create custom dataset versions |
IP Allowlisting
For enhanced security, API keys can be restricted to specific IP addresses or CIDR ranges. When IP allowlisting is enabled, requests originating from unlisted addresses will receive a 403 Forbidden response regardless of key validity. Configure allowlists from the dashboard or via the management API.
{
  "ip_allowlist": [
    "203.0.113.0/24",
    "198.51.100.42"
  ],
  "enforce_allowlist": true
}
Key Rotation
VARL recommends rotating API keys every 90 days as a security best practice. The rotation process is designed for zero-downtime transitions: create a new key, update your application to use it, verify successful requests, then revoke the old key. During the transition period both keys remain active.
Automated rotation is available through the management API. You can also configure expiration dates at key creation time — expired keys are automatically disabled and cannot be reactivated. Rotation events, including the originating IP and user agent, are recorded in your organization's audit log.
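The create → deploy → verify → revoke sequence above can be captured in a small helper. The sketch below takes the individual steps as callables, since the create/revoke calls would wrap your dashboard or the management API and the deploy step depends entirely on your infrastructure; all of those hooks are assumptions, not SDK functions.

```python
def rotate_key(create_key, deploy_key, verify, revoke_key, old_key):
    """Zero-downtime key rotation: create a new key, deploy it to the
    application, verify traffic succeeds, then revoke the old key.
    Because both keys remain active during the transition, a failed
    verification can fall back to the old key safely."""
    new_key = create_key()
    deploy_key(new_key)
    if not verify(new_key):
        deploy_key(old_key)     # roll back; the old key is still valid
        revoke_key(new_key)
        raise RuntimeError("new key failed verification; old key retained")
    revoke_key(old_key)
    return new_key
```

The important property is ordering: the old key is revoked only after the new key has served a verified request.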
Authentication Errors
When authentication fails, the API returns one of the following error responses. All error responses include a machine-readable error.code field for programmatic handling.
authentication_required
No API key was provided. Include the Authorization header with a valid Bearer token.
invalid_api_key
The provided API key does not match any active key. Verify the key is correct and has not been revoked.
insufficient_scope
The API key is valid but lacks the required permission scope for this endpoint. Update the key's scopes from the dashboard.
ip_not_allowed
The request originates from an IP address not in the key's allowlist. Add the IP to the allowlist or disable IP restriction.
key_expired
The API key has passed its expiration date. Generate a new key from the dashboard. Expired keys cannot be reactivated.
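Since every error response carries a machine-readable error.code, clients can branch on it rather than parsing messages. A minimal dispatch sketch, using the five codes documented above (the response shape with a nested error object is taken from this section; the remediation strings are illustrative):

```python
def handle_auth_error(body: dict) -> str:
    """Map a VARL authentication error code to a remediation hint."""
    hints = {
        "authentication_required": "add an Authorization: Bearer header",
        "invalid_api_key": "check the key against the dashboard",
        "insufficient_scope": "grant the missing scope to this key",
        "ip_not_allowed": "add this IP to the key's allowlist",
        "key_expired": "generate a replacement key",
    }
    code = body["error"]["code"]
    return hints.get(code, f"unhandled auth error: {code}")
```

None of these errors are transient, so retrying the request without changing the key or its configuration will not help.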
Security Recommendations
Follow these practices to maintain the security of your VARL API integration:
- Store API keys in environment variables or a dedicated secrets manager — never hardcode them in source files.
- Use separate keys for development, staging, and production environments with appropriately scoped permissions.
- Enable IP allowlisting for production keys to restrict access to known infrastructure.
- Rotate keys every 90 days. Set expiration dates on keys that are intended for temporary use.
- Monitor the audit log for unexpected key usage patterns — geographic anomalies, unusual request volumes, or access outside business hours.
- Revoke compromised keys immediately. VARL invalidates revoked keys within 30 seconds globally.
Digital Twins
Digital twins are the foundational abstraction in the VARL platform. A digital twin is a high-fidelity computational replica of a biological system — a cell, a tissue, an organ, or an entire organism — constructed from real-world data and maintained as a living, queryable object. Twins ingest genomic profiles, proteomic signatures, clinical histories, and environmental parameters to produce a model that behaves as its biological counterpart would under identical conditions.
Unlike static snapshots, VARL digital twins are dynamic entities. They evolve over time as new data is fed into them, recalibrate their internal state in response to interventions, and maintain a full audit trail of every mutation. This makes them suitable for longitudinal patient monitoring, iterative drug design, and real-time clinical decision support.
Every twin is defined by three layers: a structural layer that maps the topology of biological components and their connections, a functional layer that encodes the kinetic and thermodynamic rules governing interactions, and a data layer that binds the model to patient-specific or population-level measurements. Together, these layers produce a system that can be interrogated, perturbed, and observed with the same rigor as a physical experiment.
Create a Digital Twin
Creating a twin initializes a new biological model from source data. The creation process involves three phases: data validation, graph construction, and calibration. Depending on the resolution and system complexity, initialization can take between 2 seconds and 15 minutes. The twin object is returned immediately with a status field that transitions from initializing to ready when calibration completes.
{
  "organism": "homo_sapiens",
  "system": "immune",
  "resolution": "molecular",
  "name": "Patient-0042 Immune Model",
  "source_data": {
    "genomic_profile": "ds_gp_82kf9n",
    "proteomic_data": "ds_pd_3m7k1x",
    "clinical_history": "ds_ch_4m2j7p",
    "microbiome_snapshot": "ds_mb_9f2n4k"
  },
  "config": {
    "time_horizon": "730d",
    "update_frequency": "real_time",
    "fidelity": "high",
    "stochastic_noise": true,
    "auto_calibrate": true
  }
}
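Because the twin is returned immediately with status "initializing", integrations typically poll the retrieve endpoint until calibration completes. A sketch of that loop, with the fetch call injected as a callable (it would wrap GET /v1/twins/{id} via the SDK or plain HTTP; the injection is just to keep the loop testable):

```python
import time

def wait_until_ready(fetch_twin, twin_id, timeout_s=900, poll_s=5):
    """Poll a twin until its status transitions out of "initializing".
    Initialization can take from a couple of seconds to ~15 minutes
    depending on resolution, hence the generous default timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        twin = fetch_twin(twin_id)
        if twin["status"] == "ready":
            return twin
        if twin["status"] == "degraded":
            raise RuntimeError(f"twin {twin_id} degraded during initialization")
        time.sleep(poll_s)
    raise TimeoutError(f"twin {twin_id} not ready after {timeout_s}s")
```

For production workloads, the twin.ready webhook documented later in this section avoids polling entirely.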
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| organism | string | Yes | Target organism. Supported: homo_sapiens, mus_musculus, rattus_norvegicus, danio_rerio, caenorhabditis_elegans |
| system | string | Yes | Biological system to model. Options include immune, cardiovascular, nervous, endocrine, respiratory, digestive, hepatic, renal, musculoskeletal, or whole_body |
| resolution | string | Yes | Model granularity. molecular (atomic-level interactions), cellular (cell-level dynamics), tissue (tissue-level aggregation), organ (organ-level abstraction). Higher resolution increases compute cost and initialization time. |
| name | string | No | Human-readable label for this twin. Maximum 256 characters. Defaults to an auto-generated identifier. |
| source_data | object | No | Dataset references to seed the twin. Accepts IDs from the Datasets API. If omitted, a generic population-average model is created. |
| config.time_horizon | string | No | Maximum simulation window. Format: Nd for days. Default 365d. Maximum 3650d. |
| config.fidelity | string | No | Computation precision. low (fast, approximate), medium (balanced), high (maximum accuracy, slower). Default medium. |
| config.stochastic_noise | boolean | No | Enable biological noise modeling. When true, simulations incorporate stochastic variation that mimics real-world biological variability. Default false. |
Response
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"name": "Patient-0042 Immune Model",
"status": "initializing",
"organism": "homo_sapiens",
"system": "immune",
"resolution": "molecular",
"node_count": 1247839,
"edge_count": 4892156,
"layers": {
"structural": { "status": "complete", "nodes": 412893 },
"functional": { "status": "calibrating", "rules": 89247 },
"data": { "status": "binding", "sources": 4 }
},
"config": {
"time_horizon": "730d",
"update_frequency": "real_time",
"fidelity": "high",
"stochastic_noise": true,
"auto_calibrate": true
},
"metadata": {
"compute_estimate_ms": 47200,
"memory_footprint_mb": 2340,
"version": "2026-02-01"
},
"created_at": "2026-02-14T09:32:11Z",
"ready_at": null,
"request_id": "req_v4rl_7k2m9n"
}

Retrieve a Digital Twin
Fetch the current state of a twin, including its calibration status, node/edge counts, layer health, and the most recent snapshot timestamp. This endpoint is idempotent and safe for polling during initialization.
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"name": "Patient-0042 Immune Model",
"status": "ready",
"organism": "homo_sapiens",
"system": "immune",
"resolution": "molecular",
"node_count": 1247839,
"edge_count": 4892156,
"layers": {
"structural": { "status": "complete", "nodes": 412893 },
"functional": { "status": "complete", "rules": 89247 },
"data": { "status": "complete", "sources": 4 }
},
"health": {
"drift_score": 0.003,
"last_calibration": "2026-02-14T09:34:42Z",
"data_freshness": "2026-02-14T09:30:00Z"
},
"simulations_run": 0,
"snapshots": 1,
"created_at": "2026-02-14T09:32:11Z",
"ready_at": "2026-02-14T09:34:42Z",
"updated_at": "2026-02-14T09:34:42Z"
}

List Digital Twins
Returns a paginated list of all twins in your organization. Results are ordered by creation date (newest first). Use cursor-based pagination for consistent results across large collections. Supports filtering by organism, system, status, and creation date range.
Query Parameters
| Parameter | Type | Description |
|---|---|---|
| organism | string | Filter by organism type |
| system | string | Filter by biological system |
| status | string | Filter by status: initializing, ready, degraded, archived |
| limit | integer | Number of results per page. Default 20, maximum 100. |
| cursor | string | Pagination cursor from a previous response's next_cursor field |
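Cursor pagination composes naturally into a generator that walks every page. In the sketch below, the page-fetching call is injected as a callable, and the response shape (a data list plus a next_cursor field) is inferred from the cursor parameter above rather than a documented schema:

```python
def iter_twins(fetch_page, **filters):
    """Yield every twin across all pages of a cursor-paginated listing.
    `fetch_page` wraps GET /v1/twins and accepts cursor= plus any of
    the filter parameters (organism=, system=, status=, limit=)."""
    cursor = None
    while True:
        page = fetch_page(cursor=cursor, **filters)
        yield from page["data"]
        cursor = page.get("next_cursor")
        if not cursor:      # last page: no cursor to follow
            break
```

Because the generator re-sends the filters with every page request, filtered listings paginate the same way as unfiltered ones.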
Update a Digital Twin
Modify a twin's configuration, attach new data sources, or rename it. Structural parameters (organism, system, resolution) are immutable after creation — to change them, create a new twin. Updating data sources triggers an automatic recalibration cycle.
{
  "name": "Patient-0042 Immune Model v2",
  "source_data": {
    "metabolomic_panel": "ds_mt_7n3k9f"
  },
  "config": {
    "fidelity": "high",
    "stochastic_noise": true
  }
}
Delete a Digital Twin
Permanently deletes a twin and all associated data, including snapshots, simulation history, and cached predictions. This action is irreversible. Active simulations running against this twin will be terminated. For non-destructive removal, use the archive endpoint instead.
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"deleted": true
}

Query Twin State
Inspect the internal state of a twin at any point in its timeline. State queries allow you to examine specific nodes (genes, proteins, metabolites), edges (interactions, pathways), or subgraphs (functional modules) without running a full simulation. This is useful for debugging, data validation, and building monitoring dashboards.
{
  "query_type": "node_state",
  "targets": ["TP53", "BRCA1", "EGFR"],
  "timestamp": "2026-02-14T09:34:42Z",
  "include_neighbors": true,
  "depth": 2
}
{
"twin_id": "twn_8f3k2n9m",
"query_type": "node_state",
"timestamp": "2026-02-14T09:34:42Z",
"results": [
{
"node": "TP53",
"type": "tumor_suppressor",
"expression_level": 0.72,
"activity_state": "active",
"phosphorylation": { "S15": true, "S20": false },
"neighbors": ["MDM2", "ATM", "CDKN1A", "BAX"]
},
{
"node": "BRCA1",
"type": "dna_repair",
"expression_level": 0.89,
"activity_state": "active",
"complex_membership": ["BRCA1-BARD1", "BASC"],
"neighbors": ["BARD1", "RAD51", "PALB2", "ATM"]
},
{
"node": "EGFR",
"type": "receptor_tyrosine_kinase",
"expression_level": 0.34,
"activity_state": "basal",
"ligand_bound": false,
"neighbors": ["GRB2", "SOS1", "ERBB2", "SHC1"]
}
],
"subgraph": {
"total_nodes": 47,
"total_edges": 128,
"depth_explored": 2
}
}

Snapshots
Snapshots capture the complete state of a digital twin at a specific moment. They serve as checkpoints that can be restored, compared, or used as starting points for simulations. VARL automatically creates snapshots after initialization and after each simulation. You can also create manual snapshots at any time.
Snapshots are immutable once created. They include the full node/edge state, all configuration parameters, and references to the source data that was active at the time of capture. Comparing two snapshots reveals exactly what changed between them — useful for tracking disease progression or measuring intervention impact.
{
  "label": "pre-treatment-baseline",
  "description": "Baseline state before chemotherapy simulation"
}
Compare Snapshots
Diff two snapshots to identify changes in node expression, edge weights, pathway activity, and system-level metrics. The comparison engine uses a hierarchical diffing algorithm that reports changes at the level of individual molecules, functional modules, and whole-system behavior.
{
  "snapshot_a": "snap_2k4m8n",
  "snapshot_b": "snap_7j3p1x",
  "granularity": "pathway",
  "significance_threshold": 0.05
}
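On the client side, a comparison at pathway granularity is typically post-processed by significance and effect size. The sketch below assumes each diff entry carries a p_value and a delta field; that shape is a hypothetical reading of the comparison output, not a documented schema.

```python
def significant_changes(diff, threshold=0.05):
    """Keep only changes below the significance threshold, ordered by
    effect size (largest absolute delta first)."""
    return sorted(
        (change for change in diff if change["p_value"] < threshold),
        key=lambda change: abs(change["delta"]),
        reverse=True,
    )
```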
Twin Lifecycle
A digital twin passes through several states during its lifecycle. Understanding these states is important for building robust integrations that handle asynchronous operations correctly.
initializing
The twin is being constructed. Source data is validated, the biological graph is assembled, and calibration is in progress. Queries and simulations are not available in this state.

ready
Calibration is complete and the twin is fully operational. All endpoints are available. The twin will remain in this state as long as it receives regular data updates and passes health checks.

recalibrating
New data has been attached and the twin is updating its internal state. Read queries remain available but may return stale data. New simulations are queued until recalibration completes.

degraded
The twin's drift score exceeds acceptable thresholds, indicating that its model has diverged significantly from observed biological reality. Simulations may return unreliable results. Supply fresh data or trigger a manual recalibration.

archived
The twin has been soft-deleted. It is excluded from list results and cannot run simulations, but its data and snapshots are retained for 90 days. Archived twins can be restored to active status.
Webhooks
Register webhooks to receive real-time notifications about twin lifecycle events. Supported events include twin.ready, twin.degraded, twin.recalibrating, snapshot.created, and twin.deleted. Webhook payloads include the full twin object at the time of the event.
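A receiving endpoint usually routes each event to a handler keyed by event type. The sketch below assumes the payload carries a top-level event field naming one of the types above and a data field holding the twin object; that envelope is a common webhook shape and an assumption here, not a documented schema.

```python
def dispatch_event(payload: dict, handlers: dict) -> bool:
    """Route a VARL webhook payload to a handler by event type.
    Returns False for event types this integration does not handle,
    so unknown events are ignored rather than treated as errors."""
    handler = handlers.get(payload.get("event"))
    if handler is None:
        return False
    handler(payload["data"])
    return True
```

Returning quickly and doing real work asynchronously keeps the webhook endpoint within typical delivery timeouts.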
Limits & Quotas
The following limits apply to digital twin operations. Contact your account manager to request increases for enterprise workloads.
| Resource | Free Tier | Pro | Enterprise |
|---|---|---|---|
| Active twins | 5 | 100 | Unlimited |
| Max resolution | tissue | cellular | molecular |
| Snapshots per twin | 10 | 500 | Unlimited |
| Data sources per twin | 2 | 20 | 100 |
| Max time horizon | 90d | 730d | 3650d |