Open Intelligence: Free On-Device AI Inference & Training
Short overview, with technical detail in the sections below.
The centralized AI business model is completely wrong.
People should not have to pay a subscription or log in to use AI.
People and businesses should not have to share private data with AI companies.
Like opening a file, using AI should be free, private, and local to your own device.
Open Intelligence creates models that run in the browser on your own hardware for free.
If you want to customize a model, you can do so by updating the model's weights.
Users never share their prompts, documents, or other private information.
AI is now an essential part of work and daily life.
Continuous access to AI is as fundamental as access to the internet.
Good AI keeps sensitive data local and improves from updates that users contribute over time.
This industry should not be controlled by closed AI platforms. Open rules for merging updates, clear provenance, and consent matter as much as raw scale. We care about systems where people know what was shared and why.
“We make AI free and private, running on our own devices.”
Open Intelligence allows everyone to train and run models on their own hardware.
This lets your AI Agents become always-on digital workers. No subscriptions. No data sharing.
You can download and run AI models in your own browser, on your own computers.
Your prompts and files stay on your machine. Inference never has to round-trip through the network, or through someone else's API or server.
For individuals, your AI assistant does not have to upload your private data to closed AI.
For companies, your workforce can produce internal content, checklists, and documents where the data already lives, under the policies you already use to protect your data on your own fleet of devices.
Your always-on digital workforce can draft, summarize, classify, and generate documents on a schedule you control.
The sections that follow describe how sensitive inputs stay at the edge, how many small weight updates merge into shared prototypes, and how provenance can attach to contribution — so “always-on” stays aligned with consent and auditability, not opaque AI platforms that ask you to trust their management.
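The merge step above can be sketched in a few lines. This is a minimal, hypothetical illustration of combining many small weight updates into a shared prototype by example-count-weighted averaging (FedAvg-style); the function name `merge_updates` and the plain-list weight format are assumptions, not the actual OIP merge rules.

```python
# Hypothetical sketch: merge many small weight updates into a shared
# prototype by example-count-weighted averaging (FedAvg-style).
# `merge_updates` and the flat-list weight format are illustrative,
# not the OIP API.

def merge_updates(base, updates):
    """base: prototype weights as a list of floats.
    updates: list of (delta, num_examples) pairs, where delta is a
    list of floats the same length as base."""
    total = sum(n for _, n in updates)
    merged = list(base)
    for delta, n in updates:
        w = n / total  # sites that trained on more examples weigh more
        for i, d in enumerate(delta):
            merged[i] += w * d
    return merged

base = [0.0, 1.0]
updates = [([0.2, -0.2], 100), ([0.4, 0.0], 300)]
print(merge_updates(base, updates))
```

The second site trained on three times as many examples, so its delta counts three times as much in the merged prototype. Raw examples never appear anywhere in this exchange, only deltas and counts.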
The industries differ, but the constraint is the same: some data cannot legally or safely go to one central training pool. The Open Intelligence Protocol (OIP) is the solution for teams that still want a shared model to improve by exchanging parameters and written rules, not by copying raw databases.
Factories and suppliers already run models on local telemetry and images. The open question is what may leave the site boundary.
Health systems need models that reflect local populations without building another national PHI warehouse.
Privileged notes, export-controlled lab data, and field observations are often unsuitable for a generic cloud fine-tune. They can still train a local model, and teams can merge weights later when contracts allow.
A trained model is an asset, not only a UI feature. When each update is a signed, attributable chunk, you can see who changed what, who may run it, and how to pay contributors back, without relabeling every training job as a bulk data collection.
Licenses can name which prototypes or parties may use a chunk, including open use, credential gates, metering, or revenue splits. The idea is simple. Permission to merge or run a model is not the same thing as holding the original dataset.
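One way a signed, attributable chunk with license terms could look is sketched below. This is an assumption-laden illustration: a real system would use public-key signatures (e.g. Ed25519), and the payload schema is invented for the example; stdlib HMAC stands in so the sketch stays self-contained.

```python
# Hypothetical sketch of a signed, attributable update chunk with
# license terms attached. HMAC stands in for a real public-key
# signature scheme; the payload fields are illustrative, not a spec.
import hashlib
import hmac
import json

def make_chunk(author, weights_blob, license_terms, key):
    payload = {
        "author": author,
        "weights_sha256": hashlib.sha256(weights_blob).hexdigest(),
        "license": license_terms,  # who may merge or run this chunk
    }
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_chunk(chunk, key):
    body = json.dumps(chunk["payload"], sort_keys=True).encode()
    expect = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, chunk["sig"])

key = b"site-secret"
chunk = make_chunk(
    "clinic-7",
    b"\x00\x01fake-weights",
    {"use": "credential-gated", "revenue_split": 0.02},
    key,
)
assert verify_chunk(chunk, key)
```

The point of the structure, not the crypto choice, is what matters here: the chunk carries who made it, a digest of what it contains, and the terms under which it may be merged or run, and all three are covered by the signature.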
If you know what share of a prototype's chunks came from your batches, that number is a concrete input for governance and payment, within your own contracts and law. The protocol records contribution and merge rules. It does not replace lawyers.
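Computing that share is simple bookkeeping once chunks are attributable. A minimal sketch, assuming chunk records carry an `author` field (the field name and record shape are illustrative, not the OIP schema):

```python
# Hypothetical sketch: each contributor's share of a prototype's
# merged chunks, as a concrete input for governance or payment.
# The 'author' field and record shape are illustrative.
from collections import Counter

def contribution_shares(merged_chunks):
    """merged_chunks: list of dicts, each with an 'author' field."""
    counts = Counter(c["author"] for c in merged_chunks)
    total = sum(counts.values())
    return {author: n / total for author, n in counts.items()}

chunks = [{"author": "site-a"}] * 3 + [{"author": "site-b"}]
print(contribution_shares(chunks))  # {'site-a': 0.75, 'site-b': 0.25}
```

A chunk count is the crudest possible measure; a real scheme might weight by example counts or by measured lift, but even the crude number gives contracts something concrete to reference.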
Evaluations and production metrics can show which updates helped. Missing domain coverage and real lift matter for trust, not only who spent the most on compute.
You do not have to merge everything into a single model. Routing by region, specialty, or training loss can keep specialist models separate until a wider merge actually makes sense.
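Routing of this kind can be as simple as a most-specific-match lookup over a model registry. A sketch under assumed names (the registry keys, model names other than `oip-900mb-lab`, and `pick_model` are all hypothetical):

```python
# Hypothetical sketch: route a request to a specialist prototype by
# (region, specialty), falling back to a region-wide model and then
# a general default. Registry entries other than oip-900mb-lab are
# invented for illustration.
def pick_model(registry, region=None, specialty=None):
    # Try the most specific match first, then region-only, then default.
    for key in ((region, specialty), (region, None), (None, None)):
        if key in registry:
            return registry[key]
    raise KeyError("no default model registered")

registry = {
    (None, None): "oip-900mb-lab",
    ("eu", None): "eu-general",
    ("eu", "radiology"): "eu-radiology",
}
print(pick_model(registry, "eu", "radiology"))  # eu-radiology
print(pick_model(registry, "us", "radiology"))  # oip-900mb-lab
```

Keeping specialists as separate registry entries means a wider merge is a deliberate, reviewable event, not a default side effect of training.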
The team: protocol designers, ML engineers, policy staff, and the KYRE team that shipped the first browser models.
No one company should own the only copy of “the” model. That is the design goal.
Models: oip-300mb-lab (KYRE WebGPU runtime), oip-900mb-lab (larger model, same runtime), oip-2gb, and oip-4gb.
Test merge code under bad or dishonest peers and uneven chunk counts across sites, and measure fairness in plain numbers.
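One standard way to make a merge tolerate a minority of bad or dishonest peers is a coordinate-wise median instead of a plain average; a single poisoned update can move a mean arbitrarily far but barely moves the median. A stdlib-only sketch (the shape of the updates is assumed, and this is one robust-aggregation option, not the OIP test plan):

```python
# Hypothetical sketch of the "bad or dishonest peers" case: a
# coordinate-wise median merge tolerates a minority of outlier
# updates that a plain average would not. One robust-aggregation
# choice among several, not the OIP merge rule.
import statistics

def mean_merge(updates):
    return [statistics.fmean(col) for col in zip(*updates)]

def median_merge(updates):
    """updates: list of equal-length weight-delta lists."""
    return [statistics.median(col) for col in zip(*updates)]

honest = [[0.1, 0.2], [0.12, 0.18], [0.09, 0.21]]
poisoned = honest + [[100.0, -100.0]]  # one dishonest peer
print(mean_merge(poisoned))    # mean dragged far off by one peer
print(median_merge(poisoned))  # median stays near the honest values
```

Measuring "fairness in plain numbers" could then mean comparing each honest site's distance from the merged result under both rules, across uneven chunk counts.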
Run pilots where legal already said “no” to data lakes. Measure lift from federated fine-tunes vs. isolated baselines.
Keep customer relationships and trade craft off the network while still merging weight updates that capture what general models skip.
Updates on the roadmap, demos, and community discussion.