
Templar Completes Covenant72B Pre-Training on the Bittensor Network
Templar (SN3) has completed the Covenant72B pre-training run on the Bittensor network. A 72-billion-parameter model was trained entirely on decentralized infrastructure, without a data center, without a central server, and without a single point of control.
Instead of relying on owned compute or hierarchical coordination, the run unfolded across an open network. Contributors provided heterogeneous hardware, operated under real-world network constraints, and coordinated through protocol rather than authority.
The compute belonged to no single organization and answered to no traditional command structure. The run emerged from a system that has steadily increased its technical ambition over time, and it completed as planned.
Training at Scale on the Bittensor Network, Without Central Ownership

Large-scale AI training usually depends on consolidation. Hardware is centralized, networks are tightly controlled, and operational complexity is reduced through ownership and proximity.
By contrast, Covenant72B was trained on the Bittensor network under different assumptions. Contributors operated across unreliable networks and diverse hardware environments. As a result, latency, synchronization costs, and partial failure were not edge cases. They were normal operating conditions.
The run did not eliminate these challenges. Instead, it showed that the network could manage them long enough, and well enough, to complete frontier-scale pretraining. What matters here is not efficiency under ideal conditions, but the ability to coordinate work under constraint.
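To ground that claim, here is a minimal sketch, in Python, of the kind of fault-tolerant aggregation step such conditions imply: a round collects whatever gradient contributions arrive within a deadline and drops the rest, so a slow or failed peer degrades a round rather than halting the run. The fetch_gradient callable, the timeout, and the quorum rule are illustrative assumptions, not Templar's actual protocol.

```python
import concurrent.futures
from typing import Callable, Optional

import numpy as np

def aggregate_round(
    peers: list[str],
    fetch_gradient: Callable[[str], np.ndarray],
    timeout_s: float = 30.0,
    min_quorum: int = 2,
) -> Optional[np.ndarray]:
    """Average whatever gradients arrive in time; None if quorum fails.

    Hypothetical sketch only: the deadline and quorum values are
    assumptions, not Templar's real coordination logic.
    """
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=max(len(peers), 1))
    futures = [pool.submit(fetch_gradient, p) for p in peers]
    grads: list[np.ndarray] = []
    try:
        for fut in concurrent.futures.as_completed(futures, timeout=timeout_s):
            try:
                grads.append(fut.result())
            except Exception:
                # A failed peer costs one contribution, not the round.
                continue
    except concurrent.futures.TimeoutError:
        # Stragglers past the deadline are simply left out of this round.
        pass
    finally:
        # Don't wait on slow peers; cancel anything still queued.
        pool.shutdown(wait=False, cancel_futures=True)
    if len(grads) < min_quorum:
        return None  # not enough contributions; skip or retry the round
    return np.mean(grads, axis=0)
```

The design choice the sketch illustrates is the one the paragraph above describes: partial failure is absorbed round by round rather than treated as an exceptional halt.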
Why This Matters for the Bittensor Network

Covenant72B is not significant because it immediately redefines model quality. Its importance lies in what it reveals about the state of the Bittensor network itself, specifically its ability to coordinate large-scale pretraining under real-world constraints.
Discussions around Bittensor often remain abstract. Incentives, emissions, subnets, and theory dominate the conversation. This training run shows something more concrete: a system capable of coordinating real work at non-trivial scale, over extended periods of time, and under sustained technical pressure.
That capability does not appear overnight. It emerges through accumulated tooling, shared norms, and contributors who continue to participate when coordination becomes difficult. Over time, those elements compound into infrastructure that becomes increasingly legible and usable.
While Covenant72B itself concerns pretraining, it is often discussed in the context of a broader, longer-term vision for what such infrastructure could eventually support:
Templar is reaching centralized lab size.
This plays into Bittensor’s greater vision for a decentralized AI lab that trains its own foundation models, fine tunes them, RL reinforces them, inferences them and wraps them as agents.
An entirely open, incentivized, decentralized AI stack.
Bottom up and importantly out, for everyone.
– @const_reborn
Execution, Transition, and the Next Phase

Templar took responsibility for executing a training run that most decentralized systems are not yet prepared to attempt. That responsibility included coordination, failure management, and sustained operation without the guarantees that centralized control typically provides.
What makes this notable is not simply that the run completed, but that it did so within the constraints of an open network. The execution belongs to Templar as a subnet operator, while the underlying capability rests on infrastructure and incentives that extend beyond any single team.
With pretraining complete, Covenant72B now moves toward post-training. Fine-tuning, alignment, and evaluation will determine how useful the model becomes in practice. During this transition, emissions have been paused by design. The operational mode required for a coordinated training run differs from the one required for refinement and benchmarking.
Rather than accelerating blindly, the system is adjusting to a new phase.
The next stage, Crusades, shifts the focus from coordination to optimization. Participants submit training code, validators benchmark it in controlled environments, and emissions flow to the most efficient implementations. This process feeds improvements back into the shared capability of the Bittensor network, strengthening the foundation for future training runs.
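As a rough illustration of that loop, the sketch below benchmarks submissions by a single throughput metric and splits a fixed emission pool proportionally among the top performers. The tokens_per_second metric, the top-k cutoff, and the proportional payout are assumptions made for illustration; they are not the subnet's published scoring rule.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    miner: str
    tokens_per_second: float  # measured by a validator in a controlled run

def allocate_emissions(
    submissions: list[Submission],
    emission_pool: float,
    top_k: int = 3,
) -> dict[str, float]:
    """Split a fixed emission pool among the top-k fastest submissions,
    proportionally to benchmarked throughput. Illustrative only."""
    ranked = sorted(submissions, key=lambda s: s.tokens_per_second, reverse=True)
    winners = ranked[:top_k]
    total = sum(s.tokens_per_second for s in winners)
    if total == 0:
        return {}
    return {s.miner: emission_pool * s.tokens_per_second / total for s in winners}

# Example: three miners benchmarked by a validator
subs = [
    Submission("miner_a", 1200.0),
    Submission("miner_b", 950.0),
    Submission("miner_c", 1400.0),
]
print(allocate_emissions(subs, emission_pool=100.0))
```

In practice a validator would likely sandbox each submission and combine several metrics, but the proportional split captures the shape of the mechanism: emissions track measured efficiency.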
Covenant72B does not claim finality. It does not suggest that decentralized training has solved the hard problems of AI development. What it shows instead is a system demonstrating the ability to coordinate increasingly complex pretraining workloads in the open, with outcomes that are measurable rather than abstract.