Free, Private, On-Device Intelligence for Everyone
Plain language first. The sections below go deeper for builders.
We believe intelligence should not require surveillance. We are building toward a world where learning can be shared, data does not have to be, and everyone who participates can benefit.
Most AI products quietly send your words and files to a distant server. OIP points the other way: the model can live on your device. It can adapt through how you use it. You stay in charge of what leaves your machine.
When a model gets better, the improvement can travel as an update to the weights—the “shape” of what was learned—without packaging up your private material.
AI is becoming part of daily work and daily life. Today that often means a tradeoff: hand over your data, trust an opaque system, or miss out. We think there is another path—one where you keep your data, keep control, and still get software that improves over time.
We do not think intelligence should live only behind closed systems. It can grow in the open—with people, not extracted from them. Stronger models come from trust, merge rules, and provenance—not from begging everyone to upload their secrets.
“Creation scales; domination hoards. Choose protocols that multiply contributors, not cages for them.”
For engineers and partners: how OIP aims to compound intelligence without centralizing raw data—train where data lives, exchange parameters, merge with transparent rules.
Hospital charts, legal briefs, factory logs, and craft knowledge never need a shared warehouse. You optimize on premises (or in a vetted enclave) and export only what the protocol defines—typically weight tensors and metadata.
Chunk-based weighted averaging is the workhorse: many small contributors, one evolving prototype. The math is boring on purpose—stable merges, clear attribution, room for hierarchical and specialist topologies.
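To make the workhorse concrete, here is a minimal TypeScript sketch of chunk-based weighted averaging. The `WeightChunk` shape, its field names, and the weight-by-sample-count rule are illustrative assumptions, not the protocol's wire format.

```typescript
// Minimal sketch of chunk-based weighted averaging.
// WeightChunk is a hypothetical shape: flat parameters plus merge metadata.
interface WeightChunk {
  params: Float32Array;  // flattened weight tensor for one chunk slot
  samples: number;       // how much data backed this update; used as merge weight
  contributor: string;   // provenance: who produced the chunk
}

// Average chunks covering the same parameter slot, weighted by sample count.
function mergeChunks(chunks: WeightChunk[]): Float32Array {
  const total = chunks.reduce((s, c) => s + c.samples, 0);
  if (chunks.length === 0 || total <= 0) throw new Error("nothing to merge");
  const dim = chunks[0].params.length;
  const merged = new Float32Array(dim);
  for (const c of chunks) {
    if (c.params.length !== dim) throw new Error("shape mismatch");
    const w = c.samples / total;
    for (let i = 0; i < dim; i++) merged[i] += w * c.params[i];
  }
  return merged;
}
```

Weighting by sample count is one sensible default; what matters is that the rule is explicit, stable, and auditable.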
When chunks carry provenance, contribution can map to access, reputation, or economics. The marketplace layer is nascent; the protocol primitives are the prerequisite.
When inference moves off the server-only default, usage changes: you are not billed per API call the same way, sensitive work can stay air-gapped, and product design can favor the person in front of the screen—not only the platform’s ledger. OIP is the protocol layer for that shift: private inputs, optional collective learning, open rules.
Today: public checkpoints use IDs such as oip-300mb-lab, oip-900mb-stable, and oip-1gb-preview—run them in the browser on kyre.ai via the 300 MB lab, 900 MB lab, and OIP Studio. Federated training and merge are the next layers.
The use cases differ; the constraint does not. Valuable data is trapped in places that cannot ship it to a central trainer. OIP is for teams that still want a model that improves together—by moving parameters and clear rules, not raw records.
Factories, logistics networks, and suppliers already run models on local telemetry and imagery. The fight is whether that learning ever leaves the fence line.
Health systems need models that reflect local populations without building another national PHI warehouse.
Privileged workpapers, export-controlled lab data, and field observations are the wrong inputs for a generic cloud fine-tune. They are the right inputs for a local one—with optional merge when terms align.
A trained model is not only an app feature; it is an asset. When updates arrive as signed, attributable chunks, you can reason about who improved what, who may use it, and how value flows back to contributors—without treating every training run as a data grab.
Chunks can be licensed to named prototypes or parties: open participation, credential-gated use, usage metering, or equity-style splits. The goal is to separate permission to merge or run from possession of the underlying dataset.
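As a sketch of how permission can travel with the update rather than with the dataset, a chunk might carry a manifest like the one below. The field names and license modes are assumptions for illustration; the actual OIP schema may differ.

```typescript
// Hypothetical chunk manifest: license terms travel with the update;
// the training data never does. Field names are illustrative.
type LicenseMode = "open" | "credential-gated" | "metered" | "equity-split";

interface ChunkManifest {
  chunkId: string;
  contributor: string;     // who trained it
  signature: string;       // attribution: signed with the contributor's key
  license: {
    mode: LicenseMode;
    grantedTo?: string[];  // named prototypes or parties, if restricted
  };
}

// A merge can check permission without ever seeing the corpus.
function mayMerge(m: ChunkManifest, prototypeId: string): boolean {
  return m.license.mode === "open" ||
    (m.license.grantedTo?.includes(prototypeId) ?? false);
}
```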
If your batches represent a known fraction of a prototype’s chunks, that fraction is a legible input to governance and economics—subject to the contracts and laws you operate under. The protocol’s job is to make contribution visible and merge rules explicit, not to replace your counsel.
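Making that fraction legible can be as simple as tallying merge weights from a log. The log shape here is an assumption, not a defined protocol record.

```typescript
// Hypothetical merge-log entry: one contributor's chunk and its merge weight.
interface MergeLogEntry { contributor: string; weight: number; }

// Normalized contribution share per contributor across the whole log.
function contributionShares(log: MergeLogEntry[]): Map<string, number> {
  const total = log.reduce((s, e) => s + e.weight, 0);
  const shares = new Map<string, number>();
  for (const e of log) {
    shares.set(e.contributor, (shares.get(e.contributor) ?? 0) + e.weight / total);
  }
  return shares;
}
```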
Downstream evals and operational metrics can show which updates helped. Scarce domain coverage and measurable lift become first-class signals for who earned trust—not only who had the biggest cluster budget.
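Lift can be as plain as relative improvement over a frozen, isolated baseline on the same eval set. The formula below is the common convention, not an OIP-specific metric.

```typescript
// Relative lift of a federated/merged model over an isolated baseline
// on the same evaluation metric. Assumes higher scores are better and
// baselineScore is nonzero.
function lift(mergedScore: number, baselineScore: number): number {
  return (mergedScore - baselineScore) / baselineScore;
}
```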
When merges are logged and share is visible, improving the shared prototype can still improve your deployment: you keep local adapters, pull a better collective head, and you never had to ship the corpus that trained your side of the story.
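One way to read "keep local adapters, pull a better collective head": compose the shared checkpoint with a private additive delta at load time. Additive composition is an assumed pattern here, one common choice among several.

```typescript
// Compose a pulled collective checkpoint with a locally trained additive
// delta (adapter). The corpus behind the delta never leaves the machine.
function applyAdapter(
  collective: Float32Array,
  localDelta: Float32Array,
  scale = 1.0,
): Float32Array {
  if (collective.length !== localDelta.length) throw new Error("shape mismatch");
  const out = new Float32Array(collective.length);
  for (let i = 0; i < out.length; i++) out[i] = collective[i] + scale * localDelta[i];
  return out;
}
```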
Not every merge should collapse to one blob. Clustering-inspired routing—by geography, specialty, or loss profile—keeps experts sharp while still allowing roll-ups when trust is high.
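A minimal sketch of that routing, under assumed shapes: each expert cluster advertises a loss-profile centroid and a trust score, incoming updates merge into the nearest cluster, and only trusted clusters roll up into the shared prototype. Euclidean distance over loss profiles is an illustrative choice.

```typescript
// Hypothetical expert cluster: a loss-profile centroid plus a trust score.
interface Cluster { id: string; profile: Float32Array; trust: number; }

function distance(a: Float32Array, b: Float32Array): number {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += (a[i] - b[i]) ** 2;
  return Math.sqrt(s);
}

// Route an incoming update to the nearest cluster by loss profile.
// Assumes at least one cluster exists.
function route(updateProfile: Float32Array, clusters: Cluster[]): Cluster {
  return clusters.reduce((best, c) =>
    distance(updateProfile, c.profile) < distance(updateProfile, best.profile) ? c : best);
}

// Only clusters above a trust threshold roll up into the shared prototype.
function rollupCandidates(clusters: Cluster[], minTrust: number): Cluster[] {
  return clusters.filter((c) => c.trust >= minTrust);
}
```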
Protocol people, ML engineers, policy teams, and the KYRE crew shipping the first browser-native checkpoints. No single landlord for “the model”—that is the point.
oip-300mb-lab, oip-900mb-stable, and oip-1gb-preview are live on KYRE: oip-300mb-lab is the fastest path in-browser, oip-900mb-stable runs the same WebGPU path, and oip-1gb-preview is for experiments. Stress-test merge operators, Byzantine settings, and fairness metrics when chunk counts differ by orders of magnitude across sites.
Run pilots where legal already said “no” to data lakes. Measure lift from federated fine-tunes vs. isolated baselines.
Preserve craft and locale: merge what general models miss while keeping customer relationships and recipes off the wire.
You can run OIP-aligned models in the browser now on KYRE. Federated training CLIs and public merge registries are still rolling out—this section splits “live” from “next.”
Use a recent desktop Chrome. Mobile WebGPU is still uneven; stick to laptop-class hardware for the big checkpoints.
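A quick capability probe before loading a big checkpoint, using the standard WebGPU entry point (`navigator.gpu`); the inline type cast is only there to avoid requiring WebGPU type declarations in the sketch.

```typescript
// Probe for WebGPU before attempting a large checkpoint.
// navigator.gpu is the standard WebGPU entry point; it is undefined
// in browsers without WebGPU support.
async function canRunLargeCheckpoint(): Promise<boolean> {
  const gpu = (navigator as { gpu?: { requestAdapter(): Promise<unknown | null> } }).gpu;
  if (!gpu) return false;           // no WebGPU at all
  const adapter = await gpu.requestAdapter();
  return adapter !== null;          // adapter can be null on weak or mobile GPUs
}
```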
Roadmap, demos, and community—follow the story where it ships.