Targon (SN4) SDK Looks Like a Real Cloud Alternative | Bittensor

Published March 13, 2026

Targon SDK, the developer toolkit from Manifold Labs, is rolling out a significant feature expansion in 2026. The update brings new CLI tools, full container lifecycle management and the ability to deploy any Python source code on confidential GPUs and CPUs. These additions build on the beta version that launched in 2025 and mark a clear step toward making Bittensor Subnet 4 (SN4) a practical compute platform for AI developers.

What Is Targon SDK?

Targon SDK is a Python-based framework that lets developers build, deploy and manage applications on the Targon (SN4) compute network. The core concept revolves around a Targon App object. Developers create an app, register functions using simple Python decorators and then deploy those functions to run on remote hardware. The entire process requires only a few lines of code, according to the official Targon documentation.

The SDK supports ephemeral sessions for testing and short-lived jobs, as well as persistent deployments for long-running services. Once deployed, functions can be triggered by webhooks or called by other Targon applications. This flexibility makes Targon SDK suitable for everything from quick prototyping to production-grade AI inference.
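To make the app-and-decorator model concrete, here is a minimal stand-in sketch. The real `targon` package is not assumed installed; apart from `app.run()` and `app.deploy()`, which the documentation mentions, the class and decorator names below are illustrative assumptions, not the SDK's actual API.

```python
# Toy stand-in for the Targon App pattern: create an app, register
# functions with a decorator, then execute them. Names are illustrative.
from typing import Callable, Dict


class App:
    """Registers functions via a decorator and runs them on demand."""

    def __init__(self, name: str):
        self.name = name
        self.functions: Dict[str, Callable] = {}

    def function(self, fn: Callable) -> Callable:
        # Decorator: record the function so it can be dispatched later.
        self.functions[fn.__name__] = fn
        return fn

    def run(self, fn_name: str, *args):
        # Ephemeral session: execute once and return the result.
        return self.functions[fn_name](*args)


app = App("demo")


@app.function
def embed(text: str) -> int:
    # Placeholder workload; a real function might run model inference.
    return len(text.split())


print(app.run("embed", "hello targon world"))  # → 3
```

In the real SDK the decorated function would execute on remote hardware rather than locally, but the registration-then-dispatch shape is the same idea the article describes.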

New Features in the 2026 Update

The 2026 expansion introduces three major capabilities that significantly broaden what developers can do with Targon SDK on SN4.

Targon CLI for Easier Deployment

The new Targon CLI streamlines the deployment process. Developers can now use command-line tools to run, deploy and manage their applications directly from the terminal. In practice, the CLI wraps common operations like app.run() and app.deploy() into simple commands. This reduces the amount of boilerplate code developers need to write and speeds up the iteration cycle.

Container Lifecycle Management

One of the most notable additions is full container lifecycle management. Developers can now create, deploy, delete and control the lifetime of containers through Targon SDK. This gives users direct control over their compute environments. Combined with auto-scaling that adjusts the number of running replicas based on demand, this feature creates a serverless experience where infrastructure management is largely automated.

In addition, the SDK allows configuration of minimum and maximum replicas for each function. Minimum replicas keep warm instances running for instant response times. Maximum replicas set the ceiling for auto-scaling during peak usage. This design ensures that developers only pay for the compute they actually use.

Python Source Code on Confidential GPUs

The third major feature enables developers to deploy any Python source code on on-demand confidential GPUs and CPUs. Targon’s (SN4) infrastructure provides hardware-level security through the Targon Virtual Machine (TVM), which uses Intel TDX and AMD SEV technologies to create isolated execution environments.

As a result, sensitive AI workloads can run with cryptographic guarantees that data remains protected. For developers working with proprietary models or confidential data, this is a practical differentiator compared to standard cloud GPU providers.

Available Compute Tiers

Targon SDK offers eight compute tiers across CPU and GPU categories. The CPU options range from CPU Small for lightweight tasks like webhooks and background jobs, up to CPU XL for heavy processing workloads. On the GPU side, all tiers use NVIDIA H200 hardware. Options span from H200 Small for prompt serving and small language models to H200 XL for training multi-billion parameter models.

Confidential H200 rentals on the Targon (SN4) network are currently available on Targon.com with pricing starting at $1.90 per hour at the time of writing. Users can choose between on-demand and long-term rental options. All GPU access runs through Targon Virtual Machine with full hardware-backed encryption.

Efficiency-Based Pricing

Targon SDK uses what the team calls efficiency-based pricing. Serverless containers are automatically scaled with usage, so costs are directly tied to actual resource consumption. Developers are not charged for idle infrastructure. This model is particularly attractive for AI workloads that tend to be bursty in nature, with periods of high demand followed by stretches of low or zero activity. Consequently, teams running intermittent inference jobs or periodic training pipelines on Targon (SN4) can see meaningful cost savings compared to fixed-rate cloud alternatives.
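The cost difference for a bursty workload is easy to work through. The sketch below uses the $1.90/hour H200 rate quoted above; the activity intervals are invented for illustration and do not reflect any real billing data.

```python
# Toy comparison: usage-based billing vs. an always-on fixed-rate instance
# over one 24-hour day, at the quoted $1.90/hour H200 rate.
HOURLY_RATE = 1.90

# Invented (start_hour, end_hour) windows when compute is actually active.
active_intervals = [(0.0, 0.5), (9.0, 11.0), (20.0, 20.25)]

active_hours = sum(end - start for start, end in active_intervals)
usage_cost = active_hours * HOURLY_RATE  # pay only while running
fixed_cost = 24 * HOURLY_RATE            # always-on instance

print(f"active hours: {active_hours:.2f}")
print(f"usage-based:  ${usage_cost:.2f}")
print(f"fixed-rate:   ${fixed_cost:.2f}")
```

With only 2.75 active hours out of 24, the usage-based bill is a small fraction of the fixed-rate one, which is the scenario the efficiency-based model targets.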

About Manifold Labs and Bittensor SN4

Manifold Labs is the team behind Targon (SN4) and operates Subnet 4 on the Bittensor network. The company is led by Rob Myers, a founding Bittensor contributor and former Senior Software Engineer at Opentensor, alongside co-founder James Woodman, formerly COO of the Bittensor Foundation. In July 2025, Manifold Labs raised a $10.5 million Series A round led by OSS Capital.

Targon (SN4) was one of the first subnets launched on Bittensor back in October 2023. Since then, it has evolved from an AI inference verification system into a full-scale confidential compute marketplace. The Targon SDK expansion is the latest step in that evolution, positioning SN4 as infrastructure that competes directly with centralized cloud providers.

What This Means for the Bittensor Ecosystem

The Targon SDK update represents a broader trend within the Bittensor network. Subnets are increasingly moving beyond experimental technology and toward production-ready tools that developers can use in real-world applications. By offering a familiar Python-based SDK with CLI tools, auto-scaling containers and confidential compute, Manifold Labs is lowering the barrier to entry for developers who may not have interacted with decentralized infrastructure before.

Whether the demand for decentralized confidential compute matches the ambition remains to be seen. However, what is clear is that Targon SDK now offers a feature set that puts Targon (SN4) in direct conversation with established serverless platforms, while adding the cryptographic security guarantees that are unique to the Bittensor ecosystem.

Start deploying here.

FAQ:

What is Targon SDK?

Targon SDK is a Python-based developer framework created by Manifold Labs for building, deploying and managing applications on the Targon compute network. It runs on Bittensor Subnet 4 (SN4) and allows developers to execute functions on remote GPUs and CPUs using simple Python decorators and a few lines of code.

How does Targon SDK pricing work?

Targon SDK uses efficiency-based pricing, where serverless containers scale automatically based on actual usage. Developers only pay for the compute resources they consume. There is no charge for idle infrastructure. For direct H200 GPU rentals, pricing starts at $1.90 per hour with both on-demand and long-term options available on Targon.com. All rentals run through the Targon (SN4) confidential compute infrastructure.

What hardware does Targon SDK support?

The SDK offers eight compute tiers split across CPU and GPU categories. CPU tiers range from CPU Small to CPU XL, while all GPU tiers run on NVIDIA H200 hardware, ranging from H200 Small to H200 XL. All tiers are accessible through the targon.Compute constants in the SDK.

What makes Targon SDK different from other cloud compute platforms?

Targon SDK differentiates itself through confidential computing on Bittensor Subnet 4 (SN4). The Targon Virtual Machine (TVM) uses Intel TDX and AMD SEV technologies to create hardware-isolated execution environments with cryptographic security guarantees. This means sensitive AI workloads, proprietary models and confidential data remain protected during execution, which is not standard on most centralized cloud platforms.

Can I use Targon SDK for both testing and production?

Yes. Targon SDK supports ephemeral sessions for testing and short-lived jobs through the app.run() method, as well as persistent deployments for long-running production services through app.deploy(). The new Targon CLI further simplifies both workflows directly from the terminal.
