Open Intelligence

Free On-Device AI Inference & Training

Mission

Short overview, with technical detail in the sections below.

Why we exist

The AI business model is completely wrong.

People should not have to pay a subscription to use AI.

Businesses should not have to share private data to train AI models.

People and businesses should not have to share any private information to use AI.

Like accessing a file, accessing AI should be free, private, and run on your own device.

What we do

Open Intelligence creates models that run in the browser on your own hardware for free.

If you choose to help train a model, you can contribute an update to its weights.

Users never share their prompts, documents, or other private information.

Why it matters

AI is now an essential part of work and daily life.

Continuous access to inference is as essential as continuous access to the internet.

Good AI keeps sensitive data local while still improving over time through shared updates.

A simple picture

  1. You run a model on your own hardware.
  2. It answers you using context that stays under your control.
  3. If you choose to contribute, you send an update to the weights, not your private data.
  4. Many sites improve a shared model without pooling raw records in a closed AI platform.

A different path for AI

This industry should not be controlled by closed AI platforms. Open rules for merging updates, clear provenance, and consent matter as much as raw scale. We care about systems where people know what was shared and why.

“AI should be free, private, and run on your own device.”

Under the hood

For engineers and partners. Train where the data already sits, exchange weight updates instead of datasets, and merge with rules everyone can read.

Data stays at the edge

Sensitive text and logs can stay in the hospital, firm, or factory where they belong. Training and fine-tuning run there (or in an approved enclave). Only weight tensors and metadata leave, as defined by the protocol.

Merge, don't mirror

Many small updates combine into one shared model with weighted averaging over chunks. The merge rules are simple on purpose so attribution stays clear and you can still split work by region or specialty.
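The merge described above can be sketched in a few lines. This is a minimal illustration of weighted averaging over named chunks, assuming a simple dict-of-lists layout for weight tensors; `merge_chunks` and the chunk names are invented for this sketch, not the protocol's actual API.

```python
# Minimal sketch of chunked weighted averaging. The function name and
# data layout are illustrative assumptions, not the real OIP interface.
from typing import Dict, List

def merge_chunks(updates: List[Dict[str, List[float]]],
                 weights: List[float]) -> Dict[str, List[float]]:
    """Combine per-site weight chunks into one shared chunk.

    Each update maps a chunk name to a flat list of parameters;
    `weights` gives each site's merge weight (e.g. its batch count).
    """
    total = sum(weights)
    merged: Dict[str, List[float]] = {}
    for name in updates[0]:
        merged[name] = [
            sum(w * u[name][i] for u, w in zip(updates, weights)) / total
            for i in range(len(updates[0][name]))
        ]
    return merged

# Two sites update the same chunk; site A trained on twice the data,
# so it carries twice the merge weight.
site_a = {"layer0": [1.0, 2.0]}
site_b = {"layer0": [3.0, 4.0]}
shared = merge_chunks([site_a, site_b], weights=[2.0, 1.0])
```

Because the merge is a plain weighted average over named chunks, attribution stays readable: each site's influence on the shared chunk is exactly its merge weight over the total.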

Knowledge as a stake

Chunks can carry provenance so policies can tie access, credit, or payment to real contribution. A full marketplace is still early; the protocol focuses on the shared file formats and merge semantics first.
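As a concrete picture of a provenance-carrying chunk, here is a minimal metadata record. The field names (`chunk_id`, `contributor`, `license`, `parent_chunks`) are assumptions for illustration, not the protocol's real schema.

```python
# Illustrative chunk provenance record; the schema is an assumption,
# not the protocol's actual file format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChunkProvenance:
    chunk_id: str
    contributor: str                 # site or org that produced this chunk
    license: str                     # e.g. "open", "credentialed", "metered"
    parent_chunks: List[str] = field(default_factory=list)  # merge lineage

# A merged chunk records which site-level chunks it came from, so
# credit and access policy can follow the lineage instead of the data.
merged_chunk = ChunkProvenance(
    chunk_id="lab-merge-0042",
    contributor="regional-coordinator",
    license="credentialed",
    parent_chunks=["site-a/layer0", "site-b/layer0"],
)
```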

Why local is a big deal

When inference runs on hardware you control, prompts and files can stay on that machine. You are not relying on a distant service to delete logs on time or to keep tenants perfectly isolated. The default path is simpler. Sensitive content often never has to cross the network.

That matters for regulated work, for sites that must run offline or air-gapped, and any time a vendor breach would otherwise expose what people typed. It also uncouples “use the model” from “pay per API call,” so software can be designed around the person at the keyboard, not only around a vendor's usage meter.

Data can stay on your device. Demos run without a server round-trip. Share model updates, not private inputs. Help improve the model without exporting your files.

Where this matters

The industries differ, but the constraint is the same: some data cannot legally or safely go to one central training pool. OIP is for teams that still want a shared model, improved by exchanging parameters under written rules rather than by copying raw databases.

Operations and supply chains

Factories and suppliers already run models on local telemetry and images. The open question is what may leave the site boundary.

  • A tier-one supplier fine-tunes on internal defect photos and publishes weight chunks; the OEM merges into a shared “line vision” prototype without ingesting the supplier's library.
  • Regional factories keep adapters separate when regulation or contracts require it. Wider merges happen only when policy and trust allow.
  • Smaller manufacturers can share weight updates that encode process knowledge without emailing customer lists or formula text.

Care delivery and compliance

Health systems need models that reflect local populations without building another national PHI warehouse.

  • Each site trains on its own cohort; charts and notes stay behind the institution's controls.
  • Specialty and service-line prototypes (cardiology, oncology, perioperative care) can stay logically separate with governance encoded in metadata.
  • Auditors and partners reason about chunk lineage and contribution share instead of asking for a copy of the underlying records.

Legal, public sector, and science

Privileged notes, export-controlled lab data, and field observations are often unsuitable for a generic cloud fine-tune. They can still train a local model, and teams can merge weights later when contracts allow.

  • Legal and policy: Retrieval and drafting assistants tuned per jurisdiction or practice group, without a shared document pool.
  • Research and climate: Sensors, notebooks, and simulations stay where policy requires. Merged weights can still capture patterns that held across sites.
  • Media and language: Tone, safety rules, and locale stay local. Separate sites can still merge adapter weights into one shared language model when they agree to. KYRE runs public checkpoints in the browser as one way to try that stack.

Knowledge as capital

A trained model is an asset, not only a UI feature. When each update is a signed, attributable chunk, you can see who changed what, who may run it, and how to pay contributors back, without relabeling every training job as a bulk data collection.

Access and policy

Licenses can name which prototypes or parties may use a chunk, including open use, credential gates, metering, or revenue splits. The idea is simple. Permission to merge or run a model is not the same thing as holding the original dataset.
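That distinction can be sketched as a small access check. Everything here (the license dict shape, the `may_use` helper, the action names) is hypothetical, just to show that permission to merge or run is a lookup against license terms, not possession of the underlying dataset.

```python
# Hypothetical license gate for a chunk. The license vocabulary and
# helper name are invented for illustration.
def may_use(license_terms: dict, party: str, action: str) -> bool:
    """Return True if `party` may perform `action` ("run" or "merge")."""
    if license_terms.get("open", False):
        return True
    allowed = license_terms.get("parties", {})
    return action in allowed.get(party, [])

chunk_license = {
    "open": False,
    "parties": {
        "oem-vision-team": ["run", "merge"],  # full access
        "public-demo": ["run"],               # inference only
    },
}
may_use(chunk_license, "oem-vision-team", "merge")  # True
may_use(chunk_license, "public-demo", "merge")      # False
```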

Stakes that match contribution

If you know what share of a prototype's chunks came from your batches, that number is a concrete input for governance and payment, within your own contracts and law. The protocol records contribution and merge rules. It does not replace lawyers.
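Computing that share from provenance records is straightforward. The record shape and the `contribution_shares` helper below are assumptions for illustration, not the protocol's accounting code.

```python
# Sketch: derive each contributor's share of a prototype's chunks
# from provenance records. The record shape is an assumption.
from collections import Counter

def contribution_shares(chunks):
    """Fraction of a prototype's chunks supplied by each contributor."""
    counts = Counter(c["contributor"] for c in chunks)
    total = sum(counts.values())
    return {who: n / total for who, n in counts.items()}

prototype = [
    {"chunk_id": "c1", "contributor": "hospital-a"},
    {"chunk_id": "c2", "contributor": "hospital-a"},
    {"chunk_id": "c3", "contributor": "hospital-b"},
    {"chunk_id": "c4", "contributor": "clinic-c"},
]
shares = contribution_shares(prototype)
# hospital-a supplied half the chunks, so its share is 0.5
```

A share computed this way is a plain, auditable number that contracts can reference, which is the point: the protocol records contribution; the contracts decide what it is worth.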

Quality and impact over time

Evaluations and production metrics can show which updates helped. Closing gaps in domain coverage and delivering real lift are what earn trust, not just who spent the most on compute.

Incentive alignment

Incentive alignment means the system rewards what society actually wants.

Hospitals, factories, and firms improve a shared model because they can get credit, payment, or access in return, not because they were pushed to ship raw records to a black box.

If merges and contributions are secret, people assume their data or their work will be taken without trace. They stop participating. If merges are visible (who contributed which chunk, under which license), then contributing weight updates is rational again.

You still train on private data locally. The protocol is about making the shared part (merged weights and rules) honest enough that helping the common model and protecting your own interests stop being opposites.

“It's the blueprint for how collective superintelligence emerges through economic self-organization.”

Hierarchical intelligence

You do not have to merge everything into a single model. Routing by region, specialty, or training loss can keep specialist models separate until a wider merge actually makes sense.

Multi-Stage Reduction

  • Stage 1 (parallel): 30 site batches → 3 regional prototypes (10:1 fan-in)
  • Stage 2 (sequential): 3 regional → 1 federation master
  • Efficiency: Reduces serial bottleneck, enables parallel merging
  • Flexibility: Keep regional models separate or merge up hierarchy
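The reduction above can be sketched with a plain average standing in for the real merge rule. `multi_stage_reduce` and the toy one-parameter models are illustrative only; a real merge would weight by contribution as described earlier.

```python
# Minimal sketch of the two-stage reduction: 30 site batches fan in
# 10:1 to 3 regional prototypes, which then merge into one master.
# A plain average stands in for the real weighted merge.
def average(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def multi_stage_reduce(site_batches, fan_in=10):
    # Stage 1 (parallel): group sites and merge each group independently.
    regions = [
        average(site_batches[i:i + fan_in])
        for i in range(0, len(site_batches), fan_in)
    ]
    # Stage 2 (sequential): merge the regional prototypes into one master.
    return regions, average(regions)

sites = [[float(i)] for i in range(30)]      # 30 toy one-parameter "models"
regions, master = multi_stage_reduce(sites)  # 3 regional, 1 federation master
```

Stage 1 runs per-region and in parallel, so the serial bottleneck is only the small Stage 2 merge; keeping the `regions` output around is what lets a deployment stop at regional models when policy requires it.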

Future Extensions

  • Similarity Routing: Assign batches to prototypes by loss profile or domain
  • Mixture of Experts: Multiple specialized prototypes, inference routing
  • Cross-Model Merge: Combine models trained on different architectures
  • Temporal Layers: Layer historical knowledge with recent specializations

Who builds this

Protocol designers, ML engineers, policy staff, and the KYRE team that ships the first browser checkpoints. No one company should own the only copy of “the” model. That is the design goal.

Technical primitives

  • Merge core: Chunked weighted averaging, resumable jobs, provenance logs
  • Checkpoints: oip-300mb-lab, oip-900mb-stable, oip-1gb-preview (live on KYRE)
  • Metadata: Contribution fractions, licenses, revocation hooks
  • Coordinators: Fan-in/fan-out graphs for regional → global roll-ups

Live entry points

Researchers

Test merge code under bad or dishonest peers and uneven chunk counts across sites, and measure fairness in plain numbers.

Enterprises

Run pilots where legal already said “no” to data lakes. Measure lift from federated fine-tunes vs. isolated baselines.

Builders & guilds

Keep customer relationships and tradecraft off the network while still merging weight updates that capture what general models skip.

Hello, World!

Updates on the roadmap, demos, and community discussion.