Short overview, with technical detail in the sections below.
The AI business model is completely wrong.
People should not have to pay a subscription to use AI.
Businesses should not have to share private data to train AI models.
People and businesses should not have to share any private information to use AI.
Like accessing a file, accessing AI should be free, private, and run on your own device.
Open Intelligence creates models that run in the browser on your own hardware for free.
If you want to help train a model, you can do so by contributing weight updates.
Users never share their prompts, documents, or other private information.
AI is now an essential part of work and daily life.
Continuous access to inference is as essential as continuous access to the internet.
Good AI keeps sensitive data local and benefits from updates that improve it over time.
This industry should not be controlled by closed AI platforms. Open rules for merging updates, clear provenance, and consent matter as much as raw scale. We care about systems where people know what was shared and why.
“AI should be free, private, and run on your own device.”
For engineers and partners. Train where the data already sits, exchange weight updates instead of datasets, and merge with rules everyone can read.
Sensitive text and logs can stay in the hospital, firm, or factory where they belong. Training and fine-tuning run there (or in an approved enclave). Only weight tensors and metadata leave, as defined by the protocol.
Many small updates combine into one shared model with weighted averaging over chunks. The merge rules are simple on purpose so attribution stays clear and you can still split work by region or specialty.
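The chunk-wise weighted merge described above can be sketched in a few lines. This is a minimal illustration, assuming each site submits a weight chunk plus a local sample count; the function name and tuple shape are assumptions for the example, not the OIP wire format.

```python
import numpy as np

def merge_chunk(updates):
    """Weighted average of one weight chunk across sites.

    `updates` is a list of (weights, sample_count) pairs, one per site.
    Sites that trained on more examples get proportionally more say.
    """
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Example: two sites submit updates for the same chunk.
site_a = (np.array([1.0, 1.0]), 300)   # 300 local samples
site_b = (np.array([0.0, 2.0]), 100)   # 100 local samples
merged = merge_chunk([site_a, site_b])
# site_a carries 3/4 of the merged value, site_b carries 1/4
```

Because the rule is a plain weighted average per chunk, attribution stays a matter of arithmetic: each site's influence on the merged value is exactly its sample share.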
Chunks can carry provenance so policies can tie access, credit, or payment to real contribution. A full marketplace is still early; the protocol focuses on the shared file formats and merge semantics first.
The industries differ, but the constraint is the same: some data cannot legally or safely go to one central training pool. OIP is for teams that still want a shared model, improved by exchanging parameters and written rules rather than by copying raw databases.
Factories and suppliers already run models on local telemetry and images. The open question is what may leave the site boundary.
Health systems need models that reflect local populations without building another national PHI warehouse.
Privileged notes, export-controlled lab data, and field observations are often unsuitable for a generic cloud fine-tune. They can still train a local model, and teams can merge weights later when contracts allow.
A trained model is an asset, not only a UI feature. When each update is a signed, attributable chunk, you can see who changed what, who may run it, and how to pay contributors back, without relabeling every training job as a bulk data collection.
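One way to picture a signed, attributable chunk is as a small record that travels with the weights. The field names below are illustrative, and a hash stands in for a real signature scheme; this is a sketch of the idea, not the OIP format.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class WeightChunk:
    """Illustrative record for one signed, attributable weight update."""
    model_id: str        # which shared model this update targets
    chunk_index: int     # position in the model's chunk layout
    contributor: str     # who produced the update
    payload_hash: str    # hash of the serialized weight tensor
    signature: str = ""  # filled in by sign()

def sign(chunk: WeightChunk, secret: str) -> WeightChunk:
    # Stand-in for a real signature: hash the record body plus a secret.
    body = json.dumps({**asdict(chunk), "signature": ""}, sort_keys=True)
    chunk.signature = hashlib.sha256((body + secret).encode()).hexdigest()
    return chunk

payload = hashlib.sha256(b"serialized weight tensor").hexdigest()
chunk = sign(WeightChunk("oip-300mb-lab", 0, "lab_a", payload), "demo-key")
```

With a record like this per update, "who changed what" is a query over chunk metadata rather than a forensic exercise over training logs.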
Licenses can name which prototypes or parties may use a chunk, including open use, credential gates, metering, or revenue splits. The idea is simple. Permission to merge or run a model is not the same thing as holding the original dataset.
If you know what share of a prototype's chunks came from your batches, that number is a concrete input for governance and payment, within your own contracts and law. The protocol records contribution and merge rules. It does not replace lawyers.
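Computing that share is deliberately simple arithmetic over chunk attribution. A minimal sketch, assuming one contributor name is recorded per merged chunk (the names are made up for the example):

```python
from collections import Counter

def contribution_shares(chunk_contributors):
    """Share of merged chunks attributable to each contributor.

    `chunk_contributors` lists one contributor name per merged chunk.
    Returns {contributor: fraction of total chunks}.
    """
    counts = Counter(chunk_contributors)
    total = sum(counts.values())
    return {who: n / total for who, n in counts.items()}

shares = contribution_shares(["lab_a", "lab_a", "lab_b", "lab_c"])
# → {"lab_a": 0.5, "lab_b": 0.25, "lab_c": 0.25}
```

Those fractions are the "concrete input" the text refers to: a number contracts can point at, with the legal interpretation left to the parties.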
Evaluations and production metrics can show which updates helped. Domain coverage and real lift matter for trust, not only who spent the most on compute.
You do not have to merge everything into a single model. Routing by region, specialty, or training loss can keep specialist models separate until a wider merge actually makes sense.
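Routing before merging can be as simple as a lookup: prefer a specialist model when the request matches one, otherwise fall back to the shared model. A sketch under assumed names (the specialist identifiers here are invented labels, not published checkpoints):

```python
def route(request, specialists, shared_model):
    """Pick a model for a request: prefer a region or specialty match,
    fall back to the shared merged model."""
    for key in (request.get("region"), request.get("specialty")):
        if key in specialists:
            return specialists[key]
    return shared_model

specialists = {
    "eu-west": "specialist-eu",       # regional fine-tune
    "radiology": "specialist-rad",    # specialty fine-tune
}
model = route(
    {"region": "ap-south", "specialty": "radiology"},
    specialists,
    "oip-900mb-stable",
)
# → "specialist-rad"; an unmatched request falls back to "oip-900mb-stable"
```

Keeping the routing table explicit makes "when does a wider merge make sense" an observable question: merge a specialist in only when the shared model stops losing to it on that route.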
Protocol designers, ML engineers, policy staff, and the KYRE team that ships the first browser checkpoints. No one company should own the only copy of “the” model. That is the design goal.
oip-300mb-lab, oip-900mb-stable, oip-1gb-preview (live on KYRE).
- oip-300mb-lab: smallest model, runs in-browser
- oip-900mb-stable: same KYRE WebGPU runtime
- oip-1gb-preview
Test merge code under bad or dishonest peers and uneven chunk counts across sites, and measure fairness in plain numbers.
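One standard baseline for testing merges against dishonest peers is coordinate-wise median aggregation instead of plain averaging: a minority of arbitrary updates cannot drag any coordinate far. This is a generic robust-aggregation sketch for such tests, not the OIP merge rule.

```python
import numpy as np

def robust_merge(updates):
    """Coordinate-wise median over per-site updates for one chunk.

    Unlike a mean, the median ignores a minority of extreme
    (even adversarial) values in each coordinate.
    """
    return np.median(np.stack(updates), axis=0)

honest = [np.array([1.0, 1.0]), np.array([1.2, 0.8])]
poisoned = [np.array([100.0, -100.0])]   # one dishonest peer
merged = robust_merge(honest + poisoned)
# per-coordinate medians: [1.2, 0.8] — the outlier is discarded
```

Comparing merges like this one against the plain weighted average, across uneven chunk counts per site, gives the "fairness in plain numbers" the roadmap asks for.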
Run pilots where legal already said “no” to data lakes. Measure lift from federated fine-tunes vs. isolated baselines.
Keep customer relationships and trade craft off the network while still merging weight updates that capture what general models skip.
Updates on the roadmap, demos, and community discussion.