Superintelligence Together

Free, Private, On-Device Intelligence for Everyone

What we believe

Plain language first. The sections below go deeper for builders.

Our mission

We believe intelligence should not require surveillance. We are building toward a world where learning can be shared, data does not have to be, and everyone who participates can benefit.

What this is

Most AI products quietly send your words and files to a distant server. OIP points the other way: the model can live on your device. It can adapt to how you use it. You stay in charge of what leaves your machine.

When a model gets better, the improvement can travel as an update to the weights—the “shape” of what was learned—without packaging up your private material.

Why it matters

AI is becoming part of daily work and daily life. Today that often means a tradeoff: hand over your data, trust an opaque system, or miss out. We think there is another path—one where you keep your data, keep control, and still get software that improves over time.

A simple picture

  1. You run a model locally—for example in your browser with WebGPU.
  2. It responds and adapts in context you control.
  3. If you want to contribute, you share a model update, not a dump of your inputs.
  4. The network can grow smarter without exposing anyone’s raw data.

Your data stays with you. Shared intelligence, private inputs. No servers required to try the demos—you can help improve the model without ever giving up your data.

A different path for AI

We do not think intelligence should live only behind closed systems. It can grow in the open—with people, not extracted from them. Stronger models come from trust, merge rules, and provenance—not from begging everyone to upload their secrets.

“Creation scales; domination hoards. Choose protocols that multiply contributors, not cages for them.”

Under the hood

For engineers and partners: how OIP aims to compound intelligence without centralizing raw data—train where data lives, exchange parameters, merge with transparent rules.

Data stays at the edge

Hospital charts, legal briefs, factory logs, and craft knowledge never need a shared warehouse. You optimize on premises (or in a vetted enclave) and export only what the protocol defines—typically weight tensors and metadata.
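
The shape of "only what the protocol defines" can be sketched in a few lines. This is a hypothetical illustration—`exportUpdate` and its field names are not a shipping OIP API—but it shows the invariant: the payload carries parameter tensors and counts, never the records themselves.

```javascript
// Hypothetical export sketch: only weight tensors and metadata leave the machine.
function exportUpdate(localModel) {
  return {
    chunks: localModel.weights,                       // parameter tensors only
    metadata: {
      samples: localModel.trainingRecords.length,     // a count, not the records
      license: localModel.license,
    },
  };
}

const local = {
  weights: [new Float32Array([0.1, 0.2])],
  trainingRecords: [{ chart: "private" }, { chart: "private" }],
  license: "consortium-license",
};
const update = exportUpdate(local);
// update.metadata.samples === 2; no trainingRecords field appears in the payload
```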

Merge, don’t mirror

Chunk-based weighted averaging is the workhorse: many small contributors, one evolving prototype. The math is boring on purpose—stable merges, clear attribution, room for hierarchical and specialist topologies.
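
The boring math, sketched under assumptions: each contributor ships a weight chunk plus a sample count, and the merge is the sample-weighted mean. `mergeChunks` is an illustrative name, not a published OIP function; the real operator would add provenance logging and validation.

```javascript
// Chunk-based weighted averaging: merged chunk is the sample-weighted mean
// of all contributed chunks. Stable and order-independent by construction.
function mergeChunks(contributions) {
  const totalSamples = contributions.reduce((sum, c) => sum + c.samples, 0);
  const merged = new Float32Array(contributions[0].weights.length);
  for (const { weights, samples } of contributions) {
    const share = samples / totalSamples; // contribution fraction; recordable for attribution
    for (let i = 0; i < merged.length; i++) merged[i] += share * weights[i];
  }
  return merged;
}

// Two contributors, one two-parameter chunk:
const merged = mergeChunks([
  { weights: new Float32Array([1.0, 2.0]), samples: 300 },
  { weights: new Float32Array([3.0, 4.0]), samples: 100 },
]);
// merged → Float32Array [1.5, 2.5]  (0.75 and 0.25 shares)
```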

Knowledge as a stake

When chunks carry provenance, contribution can map to access, reputation, or economics. The marketplace layer is nascent; the protocol primitives are the prerequisite.

Why “local” is a big deal

When inference moves off the server-only default, usage changes: you are not billed per API call the same way, sensitive work can stay air-gapped, and product design can favor the person in front of the screen—not only the platform’s ledger. OIP is the protocol layer for that shift: private inputs, optional collective learning, open rules.

// Illustrative API shape (roadmap)—not a shipping CLI yet
const myModel = await trainTransformer({
  data: "s3://our-factory-logs/redacted/",
  architecture: "oip-900mb-stable",
  tokenizer: "oip-vocab"
});

// Publish chunks + terms; registry records your share of the prototype
const chunks = myModel.exportWeightChunks();
await oip.contribute(chunks, {
  prototype: "industrial-vision-coalition",
  region: "us-midwest",
  access: "consortium-license"
});

// Pull the merged prototype; your local adapter stays yours
const collective = await oip.getMergedPrototype("industrial-vision-coalition");

Today: public checkpoints use IDs such as oip-300mb-lab, oip-900mb-stable, and oip-1gb-preview—run them in the browser on kyre.ai via the 300 MB lab, 900 MB lab, and OIP Studio. Federated training and merge are the next layers.

Where this matters

The use cases differ; the constraint does not. Valuable data is trapped in places that cannot ship it to a central trainer. OIP is for teams that still want a model that improves together—by moving parameters and clear rules, not raw records.

Operations and supply chains

Factories, logistics networks, and suppliers already run models on local telemetry and imagery. The fight is whether that learning ever leaves the fence line.

  • A tier-one supplier fine-tunes on internal defect photos and publishes weight chunks; the OEM merges into a shared “line vision” prototype without ingesting the supplier’s library.
  • Regional plants keep adapters separate when regulation or contract terms require it; roll-ups happen only when policy and trust allow.
  • Craft and specialty manufacturing get the same pattern: share what the model learned about materials and process—not plain-text dumps of customer lists or proprietary formulas.

Care delivery and compliance

Health systems need models that reflect local populations without building another national PHI warehouse.

  • Each site trains on its own cohort; charts and notes stay behind the institution’s controls.
  • Specialty and service-line prototypes (cardiology, oncology, perioperative care) can stay logically separate with governance encoded in metadata.
  • Auditors and partners reason about chunk lineage and contribution share instead of asking for a copy of the underlying records.

Legal, public sector, and science

Privileged workpapers, export-controlled lab data, and field observations are the wrong inputs for a generic cloud fine-tune. They are the right inputs for a local one—with optional merge when terms align.

  • Legal and policy: Retrieval and drafting assistants tuned per jurisdiction or practice group, without a shared document pool.
  • Research and climate: Sensors, notebooks, and simulations stay where liability and sovereignty demand; merged weights carry forward what generalized across sites.
  • Media and language: Editorial tone, safety, and locale stay local; readers still benefit when organizations choose to merge adapters into a shared language prototype. KYRE’s in-browser checkpoints are one way that stack meets people on the device.

Knowledge as capital

A trained model is not only an app feature; it is an asset. When updates arrive as signed, attributable chunks, you can reason about who improved what, who may use it, and how value flows back to contributors—without treating every training run as a data grab.

Access and policy

Chunks can be licensed to named prototypes or parties: open participation, credential-gated use, usage metering, or equity-style splits. The goal is to separate permission to merge or run from possession of the underlying dataset.
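
A minimal sketch of that separation, with hypothetical names (`chunkPolicy`, `mayMerge` are illustrative, not protocol primitives): the policy object travels with the chunk, and the gate answers "may this party merge into this prototype?"—never "may they download the dataset?", because the dataset was never in the payload.

```javascript
// Hypothetical per-chunk policy: permission to merge/run, decoupled from data possession.
const chunkPolicy = {
  license: "credential-gated",
  allowedPrototypes: ["industrial-vision-coalition"],
  allowedParties: ["oem-x"],
};

function mayMerge(policy, party, prototype) {
  return (
    policy.allowedParties.includes(party) &&
    policy.allowedPrototypes.includes(prototype)
  );
}

// mayMerge(chunkPolicy, "oem-x", "industrial-vision-coalition") → true
// mayMerge(chunkPolicy, "outsider", "industrial-vision-coalition") → false
```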

Stakes that match contribution

If your batches represent a known fraction of a prototype’s chunks, that fraction is a legible input to governance and economics—subject to the contracts and laws you operate under. The protocol’s job is to make contribution visible and merge rules explicit, not to replace your counsel.
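
The arithmetic is deliberately legible. A sketch, assuming a merge log that records contributor and chunk count per entry (illustrative names, not a registry schema):

```javascript
// Contribution share = your chunks / all chunks in the prototype's merge log.
function contributionShare(mergeLog, contributor) {
  const total = mergeLog.reduce((n, entry) => n + entry.chunks, 0);
  const mine = mergeLog
    .filter((entry) => entry.contributor === contributor)
    .reduce((n, entry) => n + entry.chunks, 0);
  return mine / total;
}

const log = [
  { contributor: "site-a", chunks: 30 },
  { contributor: "site-b", chunks: 10 },
];
// contributionShare(log, "site-a") → 0.75
```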

Quality and impact over time

Downstream evals and operational metrics can show which updates helped. Scarce domain coverage and measurable lift become first-class signals for who earned trust—not only who had the biggest cluster budget.

Incentive alignment

When merges are logged and share is visible, improving the shared prototype can still improve your deployment: you keep local adapters, pull a better collective head, and you never had to ship the corpus that trained your side of the story.

Sell the lift encoded in the weights, not a zip file of someone else’s documents. Aggregation builds durable capability; hoarding raw data builds brittle monopolies.

Hierarchical intelligence

Not every merge should collapse to one blob. Clustering-inspired routing—by geography, specialty, or loss profile—keeps experts sharp while still allowing roll-ups when trust is high.

Multi-Stage Reduction

  • Stage 1 (parallel): 30 site batches → 3 regional prototypes (10:1 fan-in)
  • Stage 2 (sequential): 3 regional → 1 federation master
  • Efficiency: Reduces serial bottleneck, enables parallel merging
  • Flexibility: Keep regional models separate or merge up hierarchy
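
The stages above can be sketched as follows. Names are hypothetical and the merge operator is a plain mean (a stand-in for the real chunk-weighted merge); in a deployment, stage 1 would run concurrently across regions.

```javascript
// Unweighted mean over equal-length weight vectors (stand-in for the real operator)
function mergeBatches(batches) {
  const out = new Float32Array(batches[0].length);
  for (const b of batches) {
    for (let i = 0; i < b.length; i++) out[i] += b[i] / batches.length;
  }
  return out;
}

function multiStageReduce(siteBatches, regions = 3) {
  const fanIn = siteBatches.length / regions; // 10 sites per region for 30 sites
  // Stage 1: each region merges its own sites (independent, parallelizable)
  const regionals = Array.from({ length: regions }, (_, r) =>
    mergeBatches(siteBatches.slice(r * fanIn, (r + 1) * fanIn))
  );
  // Stage 2: roll the regional prototypes up into one federation master
  return mergeBatches(regionals);
}

// 30 one-parameter "site batches" with values 1..30:
const sites = Array.from({ length: 30 }, (_, i) => new Float32Array([i + 1]));
const master = multiStageReduce(sites);
```

Keeping `regionals` around instead of discarding them is exactly the "keep regional models separate or merge up" flexibility: the roll-up is a second call, not an obligation.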

Future Extensions

  • Similarity Routing: Assign batches to prototypes by loss profile or domain
  • Mixture of Experts: Multiple specialized prototypes, inference routing
  • Cross-Model Merge: Combine models trained on different architectures
  • Temporal Layers: Layer historical knowledge with recent specializations
// Initialize K prototypes for specialized domains
const prototypes = [
  { id: 0, specialty: "medical", seed: "batch0000" },
  { id: 1, specialty: "legal", seed: "batch0010" },
  { id: 2, specialty: "technical", seed: "batch0020" }
];

// Assign remaining batches to nearest prototype by similarity
for (const batch of remainingBatches) {
  const prototypeId = findNearestByLoss(batch, prototypes);
  await mergeToPrototype(batch, prototypeId);
}

// Result: 3 specialized models instead of 1 averaged model
// Deploy as mixture-of-experts with routing layer

Who builds this

Protocol people, ML engineers, policy teams, and the KYRE crew shipping the first browser-native checkpoints. No single landlord for “the model”—that is the point.

Technical primitives

  • Merge core: Chunked weighted averaging, resumable jobs, provenance logs
  • Checkpoints: oip-300mb-lab, oip-900mb-stable, oip-1gb-preview (live on KYRE)
  • Metadata: Contribution fractions, licenses, revocation hooks
  • Coordinators: Fan-in/fan-out graphs for regional → global roll-ups

Live entry points

  • 300 MB lab: oip-300mb-lab—fastest path in-browser
  • 900 MB lab: oip-900mb-stable, same WebGPU path
  • OIP Studio: Playground UI; good home for oip-1gb-preview and experiments
  • KYRE: Home for shipping demos

Researchers

Stress-test merge operators, Byzantine settings, and fairness metrics when chunk counts differ by orders of magnitude across sites.

Enterprises

Run pilots where legal already said “no” to data lakes. Measure lift from federated fine-tunes vs. isolated baselines.

Builders & guilds

Preserve craft and locale: merge what general models miss while keeping customer relationships and recipes off the wire.

Start today

You can run OIP-aligned models in the browser now on KYRE. Federated training CLIs and public merge registries are still rolling out—this section splits “live” from “next.”

Live on KYRE (WebGPU)

// No install — open in Chromium / Chrome, allow WebGPU
// 1) Load tokenizer + weights from KYRE CDN
// 2) Run prefill + decode in-page
// 3) Try OIP Studio for sliders / presets
window.location = "https://kyre.ai/oip-300mb-lab";
// or: oip-900mb-stable @ /oip-900mb-lab
// or: oip-1gb-preview via /qwen-studio (OIP Studio)

Use a recent desktop Chrome. Mobile WebGPU is still uneven—stick to laptop-class hardware for the big checkpoints.

Visit us

Roadmap, demos, and community—follow the story where it ships.

  • theoip.org: Project home & narrative
  • Federated CLI: Train / publish / merge tooling (in progress)
  • Registries: Public prototype directories (in progress)