Your Simple Guide to Templar (SN3)


Published January 28, 2026

AI models keep growing in size and capability, and training them keeps getting more expensive. Modern large language models require massive GPU clusters that cost billions of dollars to build and operate. As a result, only a small number of well-funded corporations can train them at scale. Templar tackles this problem by enabling decentralized model training, coordinating distributed compute resources across the internet instead of relying on a single centralized data center.

What is Templar?

Templar is a decentralized AI training subnet on the Bittensor network. It allows participants worldwide to collaboratively train large language models by contributing compute rather than capital-intensive infrastructure. Within Templar (Subnet 3), training runs assign specific tasks to independent participants, known as miners, who compute updates and submit them for evaluation.

During a typical training run, miners use their GPUs to compute gradients on assigned data, then submit these updates to the network in compressed form. Validators evaluate each contribution and distribute rewards in TAO and the subnet's alpha token based on measurable training impact rather than raw compute alone.

The official GitHub repository hosts open-source miner and validator implementations, configuration files, and documentation. The protocol handles gradient compression, peer synchronization, and incentive distribution, which together make decentralized training economically and technically viable.

How does Templar work?

Templar runs in blockchain-synchronized training windows of about 84 seconds. In each window, the protocol assigns miners a deterministically generated, pseudorandom dataset based on their unique identifier and the current window number. This approach keeps assignments reproducible across the network while still enabling effective data shuffling.
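To make the reproducibility idea concrete, here is a minimal sketch of deterministic per-miner data assignment. The page counts, the hashing scheme, and the function names are illustrative assumptions, not Templar's actual implementation; the point is only that any node can recompute the same assignment from public inputs.

```python
import hashlib

TOTAL_PAGES = 100_000   # pages in the shared dataset (assumed)
PAGES_PER_WINDOW = 4    # pages per miner per window (assumed)

def assign_pages(miner_uid: int, window: int) -> list[int]:
    """Derive a reproducible page assignment from (uid, window)."""
    seed = hashlib.sha256(f"{miner_uid}:{window}".encode()).digest()
    pages = []
    for i in range(PAGES_PER_WINDOW):
        # Expand the seed into pseudorandom page indices deterministically.
        chunk = hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
        pages.append(int.from_bytes(chunk[:8], "big") % TOTAL_PAGES)
    return pages

# Assignments are reproducible across nodes but differ per miner.
assert assign_pages(42, 1000) == assign_pages(42, 1000)
assert assign_pages(42, 1000) != assign_pages(43, 1000)
```

Because the assignment depends only on the miner's identifier and the window number, validators can independently verify which data a miner was supposed to train on.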

Miners perform forward and backward passes on their assigned data and compress the resulting gradients before submission. The compression pipeline uses techniques such as Discrete Cosine Transform (DCT) and top-k selection. These methods significantly reduce update size and make large-scale coordination possible over standard internet connections.
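The compression step can be sketched in one dimension as a DCT followed by top-k selection. This is an illustrative toy, assuming a flat fp32 gradient and SciPy's `dct`/`idct`; the real pipeline operates on model tensors with its own chunking and quantization details.

```python
import numpy as np
from scipy.fft import dct, idct

def compress(grad: np.ndarray, k: int):
    """DCT-transform a gradient and keep only the k largest coefficients."""
    coeffs = dct(grad, norm="ortho")
    idx = np.argsort(np.abs(coeffs))[-k:]   # top-k by magnitude
    return idx, coeffs[idx]                 # sparse (index, value) pairs

def decompress(idx: np.ndarray, vals: np.ndarray, n: int) -> np.ndarray:
    """Rebuild a dense gradient approximation from the sparse pairs."""
    coeffs = np.zeros(n)
    coeffs[idx] = vals
    return idct(coeffs, norm="ortho")

grad = np.random.default_rng(0).standard_normal(10_000)
idx, vals = compress(grad, k=200)           # keep 2% of coefficients
approx = decompress(idx, vals, grad.size)
```

Only the 200 index/value pairs cross the network instead of 10,000 dense values, which is what makes coordination over ordinary internet links feasible.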

Validators aggregate the compressed gradients and measure training loss improvement to assess each miner’s contribution. They track performance over time using the OpenSkill rating system, which produces dynamic miner rankings. Validators then use these rankings to set on-chain weights that directly determine reward distribution.
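As a simplified illustration of the last step, the sketch below turns per-miner loss improvements into normalized weights for a single window. Templar's real pipeline tracks OpenSkill ratings over many windows; the clip-and-normalize rule here is an assumption for clarity.

```python
import numpy as np

def to_weights(loss_improvements: dict[int, float]) -> dict[int, float]:
    """Clip negative contributions to zero, then normalize to sum to 1."""
    uids = list(loss_improvements)
    scores = np.clip([loss_improvements[u] for u in uids], 0.0, None)
    total = scores.sum()
    if total == 0:
        return {u: 0.0 for u in uids}
    return {u: float(s / total) for u, s in zip(uids, scores)}

# Miner 3 made the loss worse, so it receives zero weight.
weights = to_weights({1: 0.012, 2: 0.004, 3: -0.002})
```

The key property is that weights reflect measured training impact, so a miner that submits noise or harmful updates earns nothing regardless of how much compute it burned.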

A core technical component of Templar is the SparseLoCo approach. It combines high sparsity, typically 1 to 3 percent, with low-bit quantization. Depending on model configuration, this approach can reduce synchronization costs by orders of magnitude compared to full-gradient exchange.
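A back-of-the-envelope calculation shows where the order-of-magnitude savings come from. The specific numbers below (2 percent sparsity, 4-bit values, 32-bit indices) are illustrative assumptions, not Templar's exact wire format.

```python
def compression_ratio(n_params: int, sparsity: float,
                      value_bits: int, index_bits: int = 32) -> float:
    """Dense fp32 gradient size divided by sparse, quantized update size."""
    dense_bits = n_params * 32                  # full fp32 gradient
    kept = int(n_params * sparsity)             # coefficients actually sent
    sparse_bits = kept * (value_bits + index_bits)
    return dense_bits / sparse_bits

# A 1B-parameter model at 2% sparsity with 4-bit values:
ratio = compression_ratio(1_000_000_000, sparsity=0.02, value_bits=4)
# roughly 44x smaller than exchanging the full fp32 gradient
```

Even with index overhead included, the sparse quantized update is tens of times smaller than the dense gradient, and index-encoding tricks can push the ratio further.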

Who is behind it?

Covenant AI develops Templar. The project is commonly associated with Sam Dare and contributors active in the Bittensor ecosystem. Covenant AI builds a broader decentralized AI stack composed of interoperating subnets. Templar focuses on large-scale pre-training, Basilica (Subnet 39) targets decentralized compute infrastructure, and Grail focuses on post-training and reinforcement learning workflows.

Researchers and engineers with backgrounds in academic and applied machine learning contribute to the project. The research community has also taken notice. Galaxy Digital published a research report highlighting Templar as a decentralized training system, and researchers presented related work, Gauntlet and SparseLoCo, at the NeurIPS Optimization Workshop.

Why is it valuable?

Templar lowers the barrier to training large language models by distributing compute across a permissionless network of participants. Instead of rewarding raw compute, the Gauntlet incentive mechanism rewards measurable training impact, such as real loss improvement. This structure encourages honest participation and reduces incentives to game the system.

The project has demonstrated training at scale with Covenant72B, a 72-billion-parameter language model trained through permissionless participation. A publicly released checkpoint at 420 billion processed tokens reports 53.84 percent on ARC-C, 77.74 percent on ARC-E, 80.58 percent on PIQA, and 77.08 percent on HellaSwag. These results provide a concrete reference point for comparing decentralized and centralized training approaches.

Within the Bittensor ecosystem, Templar shows how economic incentives and open participation can coordinate large-scale machine learning workloads without a single organization owning all infrastructure.

The future of Templar

After completing the Covenant72B training run, the team began developing Templar: Crusades. This competitive framework allows participants to submit training implementations for evaluation on target hardware. The system ranks submissions by performance and distributes emissions based on results.
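A minimal sketch of that ranking-and-payout loop might look like the following. The throughput metric and the proportional payout rule are assumptions for illustration; the actual Crusades scoring criteria may differ.

```python
def split_emissions(throughputs: dict[str, float],
                    total_emission: float) -> dict[str, float]:
    """Rank submissions by measured performance and split emissions
    proportionally to each submission's share of total throughput."""
    total = sum(throughputs.values())
    ranked = sorted(throughputs.items(), key=lambda kv: -kv[1])
    return {name: total_emission * t / total for name, t in ranked}

# Hypothetical benchmark results (tokens/sec on the target hardware):
payouts = split_emissions(
    {"impl_a": 1200.0, "impl_b": 900.0, "impl_c": 300.0},
    total_emission=100.0,
)
```

Proportional payouts keep weaker-but-honest submissions earning something, while the best implementation captures the largest share.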

Future work includes Heterogeneous SparseLoCo, which aims to support consumer-grade GPUs alongside data-center hardware. The team is also building training and fine-tuning APIs to make decentralized model development more accessible.

The long-term goal of Templar is to offer a credible, open alternative to centralized AI training infrastructure, enabling large-scale model development without control by a small group of corporations.

Sources:
https://www.tplr.ai
https://github.com/one-covenant/templar
https://docs.tplr.ai
https://huggingface.co/tplr/Covenant72B
