FLock.io has announced the launch of its new Bittensor subnet (UID 96), FLock OFF, accelerated by Yuma. Season 1 mining kicked off on May 2nd at 9:06 AM EST.
The subnet is a permissionless federated learning network optimised for training Small Language Models (SLMs) on edge devices.
The next era of edge AI demands dense, high-quality datasets built for training on lightweight, local devices. FLock OFF is our answer to a foundational challenge in federated learning: how do you build a dataset that’s small in size but massive in knowledge?
See the GitHub here and YouTube here.
What is FLock OFF?
FLock OFF is the FLock Open Federated Framework. It stands for:
- FLock: Federated intelligence clusters.
- Open: Permissionless participation.
- Federated: Nodes collectively coordinate and aggregate computation while keeping data private – secure, decentralised, privacy-preserving coordination.
- Framework: A robust pipeline for decentralised AI training and validation.
We want this to be the best dataset available for supervised fine-tuning (SFT) and Direct Preference Optimisation (DPO). It will be open, evolving, and crowd-sourced by an aligned, high-signal community.
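For concreteness, this is roughly what individual records look like in those two formats – a minimal sketch in Python. The field names (`prompt`, `response`, `chosen`, `rejected`) follow common SFT/DPO conventions and are assumptions, not the subnet's published schema.

```python
# A supervised fine-tuning (SFT) record: a prompt paired with a reference answer.
sft_record = {
    "prompt": "Explain federated learning in one sentence.",
    "response": (
        "Federated learning trains a shared model across many devices "
        "without moving their raw data to a central server."
    ),
}

# A Direct Preference Optimisation (DPO) record: the same prompt with a
# preferred ("chosen") and a dispreferred ("rejected") response.
dpo_record = {
    "prompt": "Explain federated learning in one sentence.",
    "chosen": (
        "Federated learning trains a shared model across devices while the "
        "training data never leaves each device."
    ),
    "rejected": "Federated learning is when you use a lot of GPUs at once.",
}
```

SFT records teach the model what a good answer looks like; DPO records additionally teach it which of two answers to prefer.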
With FLock OFF, we aim to build an ultra-high-quality dataset that maximises knowledge within a fixed size limit, ideal for edge-based training where bandwidth and compute are limited.
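"Maximising knowledge within a fixed size limit" can be read as a budgeted selection problem: from a large candidate pool, greedily keep the most novel examples until the byte budget is spent. Below is a minimal sketch using farthest-point selection over sentence embeddings; the `sentence-transformers` model and the budget handling are illustrative assumptions, not FLock OFF's actual selection rule.

```python
import json

import numpy as np
from sentence_transformers import SentenceTransformer


def select_within_budget(candidates: list[dict], budget_bytes: int) -> list[dict]:
    """Greedy farthest-point selection: keep the records least similar to
    those already kept, until the serialised dataset hits the byte budget."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    texts = [c["prompt"] + " " + c["response"] for c in candidates]
    emb = model.encode(texts, normalize_embeddings=True)  # unit-norm vectors

    selected = [0]
    used = len(json.dumps(candidates[0]).encode())
    max_sim = emb @ emb[0]  # each candidate's similarity to the selected set
    while True:
        max_sim[selected] = np.inf     # never re-pick a kept record
        nxt = int(np.argmin(max_sim))  # the most novel remaining record
        size = len(json.dumps(candidates[nxt]).encode())
        if np.isinf(max_sim[nxt]) or used + size > budget_bytes:
            break                      # pool exhausted or budget spent
        selected.append(nxt)
        used += size
        max_sim = np.maximum(max_sim, emb @ emb[nxt])
    return [candidates[i] for i in selected]
```

Greedy diversity selection like this is only one possible proxy for "knowledge per byte"; deduplication, quality filtering, and coverage scoring can slot into the same loop.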
What’s the vision?
FLock is building a subnet on Bittensor to compress large domain datasets into compact, information-rich ones that dramatically improve the efficiency and quality of federated learning on the edge.
Think of FLock OFF as a permissionless lab for pushing forward edge AI. Anyone can participate:
- Miners create high-quality training data
- Validators evaluate and fine-tune Small Language Models (SLMs), as sketched below
- All activity is coordinated in a privacy-preserving, decentralised fashion
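One plausible shape for that validator role, as a minimal sketch: fine-tune a copy of a reference SLM on a miner's submission and reward the drop in held-out loss. The `fine_tune` and `evaluate_loss` helpers below are deliberately abstract stand-ins, and the reward rule is an illustrative assumption, not the subnet's actual incentive mechanism.

```python
from typing import Dict, List

Dataset = List[Dict[str, str]]  # SFT-style records, as sketched earlier

# Stand-in "model": the set of prompts it has effectively learned.  The two
# helpers below abstract away a real SLM fine-tuning run and a real held-out
# loss evaluation; only the scoring shape is the point here.

def fine_tune(known: set, dataset: Dataset) -> set:
    """A short SFT run, abstracted as 'absorb the unique new examples'."""
    return known | {record["prompt"] for record in dataset}

def evaluate_loss(known: set, heldout: Dataset) -> float:
    """Held-out loss, abstracted as the fraction of unseen eval prompts."""
    misses = sum(1 for record in heldout if record["prompt"] not in known)
    return misses / max(1, len(heldout))

def score_submission(reference: set, miner_dataset: Dataset,
                     heldout: Dataset) -> float:
    """Reward a miner by how much their data lowers held-out loss."""
    baseline = evaluate_loss(reference, heldout)
    tuned = fine_tune(reference, miner_dataset)
    return baseline - evaluate_loss(tuned, heldout)  # bigger drop, more reward
```

Under a rule like this, miners are rewarded for coverage rather than volume: a compact dataset that closes more of the model's held-out gaps outscores a large, redundant one.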
The aim? To establish a robust foundation for edge inference and federated training, enabling us to deploy powerful SLMs within our broader decentralised FL network.
Why does the DeAI world need it?
Chips like Apple's M-series and A18 Pro, Qualcomm's Snapdragon 8 Gen 3, and MediaTek's Dimensity 9400 are now powerful enough to support Parameter-Efficient Fine-Tuning (PEFT) on edge devices.
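In practice, PEFT on this class of hardware usually means adapter methods such as LoRA, which train a few million adapter weights while the base SLM stays frozen. A minimal sketch with Hugging Face `peft`; the base model and hyperparameters are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base SLM; any small causal LM works the same way.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

# LoRA trains low-rank adapters on the attention projections and leaves
# the base weights frozen, so the trainable footprint fits edge budgets.
config = LoraConfig(
    r=8,                # adapter rank
    lora_alpha=16,      # adapter scaling
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the model
```

In a federated setting, only these small adapters would need to be exchanged between devices, which is what makes on-device PEFT bandwidth-friendly.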
But there’s a bottleneck: these devices need high-quality, compact, domain-specific data, collected without compromising user privacy.
FLock OFF solves this. We’re building what these edge devices need: a high-signal dataset tailored for SLM training and deployment.
Become a part of FLock OFF!
We asked the FLock community in December whether we should build a subnet on Bittensor. The answer was a resounding yes. And now, here we are.
Our long-term goal is to build an open-source SLM that can surpass GPT-4.1-nano and other efficient closed models. We welcome you to join the FLock community!