Free, Private, On-Device Intelligence
OIP is a protocol for models that run on your device: improvements move between sites as weight updates, not as your data. Technical detail comes in the sections below.
Useful software should not depend on logging everything you type. We want models to get better in ways that do not force people to hand over private data.
Many AI products send your text and files to a remote service. OIP is meant for models that run on your device. What you send elsewhere, if anything, is up to you.
When a model improves, that improvement can be shared as a weight update. It does not have to include your raw prompts, documents, or other private material.
AI is part of work and daily life. Too often the choice looks like “share everything or fall behind.” We think you should be able to keep sensitive data local and still benefit from models that improve over time.
Models do not have to be owned only by a few closed platforms. Open rules for merging updates, clear provenance, and consent matter as much as raw scale. We care about systems where people know what was shared and why.
“Protocols should widen who can contribute, not lock users in.”
For engineers and partners. Train where the data already sits, exchange weight updates instead of datasets, and merge with rules everyone can read.
Sensitive text and logs can stay in the hospital, firm, or plant where they belong. Training and fine-tuning run there (or in an approved enclave). Only weight tensors and metadata leave, as defined by the protocol.
Many small updates combine into one shared model with weighted averaging over chunks. The merge rules are simple on purpose so attribution stays clear and you can still split work by region or specialty.
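As a sketch, that chunk-wise weighted averaging is only a few lines. The function name, the dict shapes, and the per-site weights below are illustrative assumptions, not the protocol's actual on-wire formats:

```python
def merge_chunks(updates, weights):
    """Average per-chunk tensors from several sites, weighted per site.

    updates: list of {chunk_id: [float, ...]} dicts, one per site.
    weights: list of non-negative floats, one per site (e.g. sample counts).
    A site may omit chunks it did not train; weights renormalize per chunk,
    which is what lets work split by region or specialty.
    """
    merged = {}
    all_chunk_ids = {cid for update in updates for cid in update}
    for chunk_id in all_chunk_ids:
        # Only sites that actually trained this chunk contribute to it.
        contribs = [(u[chunk_id], w) for u, w in zip(updates, weights)
                    if chunk_id in u]
        total = sum(w for _, w in contribs)
        size = len(contribs[0][0])
        merged[chunk_id] = [
            sum(tensor[i] * w for tensor, w in contribs) / total
            for i in range(size)
        ]
    return merged
```

Because a site simply leaves out chunks it never touched, attribution stays legible: every merged value traces back to the sites that contributed it and their weights.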
Chunks can carry provenance so policies can tie access, credit, or payment to real contribution. A full marketplace is still early; the protocol focuses on the shared file formats and merge semantics first.
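One way to picture per-chunk provenance is a small signed record plus a policy hook. The field names and the license tags here are assumptions for illustration, not the protocol's schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ChunkProvenance:
    """Illustrative provenance record for one weight chunk (hypothetical fields)."""
    chunk_id: str      # which slice of the model this update covers
    contributor: str   # site or org that trained it
    parent_hash: str   # checkpoint the update was trained against
    license_tag: str   # e.g. "open", "credentialed", "metered" (assumed tags)
    signature: str     # detached signature over the tensor bytes


def may_merge(record: ChunkProvenance, accepted_licenses: set) -> bool:
    """Policy hook: a pool merges only chunks whose license it accepts."""
    return record.license_tag in accepted_licenses
```

The same record is what a later access, credit, or payment policy would key off, without the raw training data ever appearing in it.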
Public checkpoints include oip-300mb-lab, oip-900mb-stable, and oip-1gb-preview. Run them in the browser on kyre.ai through the 300 MB lab, 900 MB lab, and OIP Studio. Federated training and merge tooling are still to come.
The industries differ, but the constraint is the same. Some data cannot legally or safely go to one central training pool. OIP is for teams that still want a shared model to improve by exchanging parameters and written rules, not by copying raw databases.
Factories and suppliers already run models on local telemetry and images. The open question is what may leave the site boundary.
Health systems need models that reflect local populations without building another national PHI warehouse.
Privileged notes, export-controlled lab data, and field observations are often unsuitable for a generic cloud fine-tune. They can still train a local model, and teams can merge weights later when contracts allow.
A trained model is an asset, not only a UI feature. When each update is a signed, attributable chunk, you can see who changed what, who may run it, and how to pay contributors back, without relabeling every training job as a bulk data collection.
Licenses can name which prototypes or parties may use a chunk, including open use, credential gates, metering, or revenue splits. The idea is simple. Permission to merge or run a model is not the same thing as holding the original dataset.
If you know what share of a prototype’s chunks came from your batches, that number is a concrete input for governance and payment, within your own contracts and law. The protocol records contribution and merge rules. It does not replace lawyers.
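For illustration, a chunk-count share is one line of arithmetic. Real deployments would likely weight by merge coefficients rather than raw counts; the flat count here is an assumption of the sketch:

```python
from collections import Counter


def contribution_shares(chunk_owners):
    """chunk_owners: {chunk_id: contributor}. Returns contributor -> share.

    Counts chunks per contributor and normalizes. Weighting by merge
    coefficients instead of raw counts is deliberately left out.
    """
    counts = Counter(chunk_owners.values())
    total = sum(counts.values())
    return {who: n / total for who, n in counts.items()}
```

The output is exactly the kind of concrete number a contract can reference; the protocol records it, and the lawyers take it from there.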
Evaluations and production metrics can show which updates helped. Missing domain coverage and real lift matter for trust, not only who spent the most on compute.
When merges and shares are logged in the open, a better shared model can still help your install: you keep local adapters, pull an improved shared base model, and never export the private texts or tables that trained your part.
You do not have to merge everything into a single model. Routing by region, specialty, or training loss can keep specialist models separate until a wider merge actually makes sense.
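A pre-merge router can be as plain as a lookup. The request shape and specialist table below are illustrative assumptions; routing by training loss would compare held-out losses instead of matching keys:

```python
def route(request, specialists, shared_base):
    """Pick a specialist model by (region, specialty); fall back to the shared base.

    request: dict with optional "region" and "specialty" keys (assumed shape).
    specialists: {(region, specialty): model} table kept separate until a
    wider merge actually makes sense.
    """
    key = (request.get("region"), request.get("specialty"))
    return specialists.get(key, shared_base)
```

Keeping the router this dumb is the point: specialists stay independent artifacts, and promoting one into the shared base is an explicit merge decision, not a side effect.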
Protocol designers, ML engineers, policy staff, and the KYRE team that ships the first browser checkpoints. No one company should own the only copy of “the” model. That is the design goal.
Checkpoints live on KYRE:
- oip-300mb-lab: smallest checkpoint, runs in-browser
- oip-900mb-stable: same WebGPU path
- oip-1gb-preview: preview checkpoint and experiments
Test merge code under bad or dishonest peers and uneven chunk counts across sites, and measure fairness in plain numbers.
Run pilots where legal already said “no” to data lakes. Measure lift from federated fine-tunes vs. isolated baselines.
Keep customer relationships and trade craft off the network while still merging weight updates that capture what general models skip.
You can run OIP-aligned models in the browser today on KYRE. Command-line federated training and public merge registries are still in progress. This section separates what works now from what is planned.
Use a recent desktop Chrome. Mobile WebGPU is uneven; use a laptop-class machine for the largest checkpoints.
Updates on the roadmap, demos, and community discussion.