
Latest X Posts
Auto Top-Up is now in billing. Head to: https://chutes.ai/app/api/billing-balance Pick a floor and a refill amount, save a card, and your balance refills automatically. People asked for it and we delivered. http://Chutes.ai https://t.co/LQT6ZS94DW

Three weeks of pricing comparisons. Someone should ask the obvious question.

MiniMax M2.5 TEE at $0.15/M input. Sonnet 4.6 at $3.00/M. That's a 95% gap. With hardware privacy on the cheap side. How?

OpenAI rents or builds data centers. Staffs them. Amortizes the capex across every token. Anthropic does the same. Google does the same. Their prices cover infrastructure, headcount, real estate, and margin.

Chutes runs on Bittensor Subnet 64. GPU operators worldwide compete to serve inference. They set their own prices. Outperform → earn more TAO. Underperform → lose traffic. No data center lease to recoup. No facilities team. No single company setting prices in a boardroom. The GPU market finds the price.

Competitive markets produce lower prices than monopolies. That's not a hack. That's how markets work.

100B+ tokens per day. Months of sustained throughput. 700k+ users. $1.4M in 90 days.

Does the pricing make more sense now? Or do you still think cheap means broken? http://chutes.ai | $TAO
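The 95% figure follows directly from the two quoted input prices; a quick sketch of the arithmetic:

```python
# Input prices per million tokens, as quoted in the post.
minimax_m2_5_tee = 0.15  # $/M input on Chutes
sonnet_4_6 = 3.00        # $/M input

# Relative gap: how much cheaper the Chutes-hosted model is.
gap = 1 - minimax_m2_5_tee / sonnet_4_6
print(f"{gap:.0%}")  # 95%
```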

A 27B dense model outscored Claude 4.5 Opus on vision. Qwen3.6-27B. Live on Chutes.

Qwen's newest open source flagship. Image, video, and text native. 262K context (extensible to 1M). Apache 2.0.

Head-to-head vs Claude 4.5 Opus on vision:
- V*: 94.7 vs 67.0
- CountBench: 97.8 vs 90.6
- VideoMME (w/ sub): 87.7 vs 77.7
- ERQA: 62.5 vs 46.8
- CharXiv RQ: 78.4 vs 68.5

Terminal-Bench 2.0 matches Claude 4.5 Opus at 59.3. New "Preserve Thinking" mode keeps reasoning traces across turns for agent workflows.

Running inside a TEE on Chutes. The GPU operators serving the model can't see your prompts or outputs. $0.195 in / $1.56 out per million tokens.

Try it: http://chutes.ai/app/chute/7aa5e899-c0ba-5482-af48-d3f31d635c9f

How does Subnet 64 verify that miners are running the GPUs they claim to be running? Most decentralized compute networks rely on trust. Subnet 64 verifies every GPU before it serves a request.

Every GPU on the network gets validated by GraVal, our open-source validation library. The GPU runs device-info-seeded matrix operations. Validators check that the results match the expected output, and that 95% of the claimed VRAM is available for those operations. Spoofed GPUs and GPUs with partitioned VRAM fail the test.

The encryption layer adds a second check. Chutes encrypts every request with keys only the exact GPU advertised can decrypt. Reroute to a different GPU on the same server and decryption fails. The runtime drops the job.

Miners who underperform lose traffic, and miners who try to cheat get caught by validators running cross-verification on the outputs. All of this runs at 100B+ tokens per day. http://github.com/chutesai/graval
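The seeded-challenge idea behind this can be sketched in a few lines. GraVal's real checks are more involved (GPU-resident computation, VRAM capacity probing); the function names below are illustrative, not GraVal's API. The point is that the challenge is derived deterministically from the *claimed* device identity, so a validator can regenerate it and compare outputs:

```python
import hashlib
import random

def seeded_matrix(device_info: str, n: int = 8) -> list[list[float]]:
    # Derive a deterministic seed from the claimed device identity so the
    # validator can regenerate exactly the same challenge workload.
    seed = int.from_bytes(hashlib.sha256(device_info.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [[rng.random() for _ in range(n)] for _ in range(n)]

def matmul(a: list[list[float]], b: list[list[float]]) -> list[list[float]]:
    # Plain matrix product; a real miner would run this on the GPU itself.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def miner_response(actual_device_info: str) -> list[list[float]]:
    # The miner computes the challenge using whatever hardware it really has.
    m = seeded_matrix(actual_device_info)
    return matmul(m, m)

def validator_check(claimed_device_info: str,
                    result: list[list[float]]) -> bool:
    # The validator regenerates the challenge from the *claimed* device info
    # and compares the miner's output against the expected product.
    m = seeded_matrix(claimed_device_info)
    expected = matmul(m, m)
    return all(abs(result[i][j] - expected[i][j]) < 1e-9
               for i in range(len(m)) for j in range(len(m)))
```

A miner whose real hardware matches its claim reproduces the expected output; a miner spoofing a different device derives a different seed, computes a different product, and fails the comparison.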

Kimi K2.6 by @Kimi_Moonshot is now live on Chutes. 54.0 on HLE-Full with tools. Ahead of GPT-5.4 (52.1), Claude Opus 4.6 (53.0), and Gemini 3.1 Pro (51.4).

It also leads on:
- SWE-Bench Pro: 58.6 (ahead of all three again)
- DeepSearchQA f1: 92.5 (next closest is Claude at 91.3)
- BrowseComp with Agent Swarm: 86.3 (up from K2.5's 78.4)

1T parameters, 32B activated. 256K context. Native multimodal. Modified MIT license. Agent Swarm now scales to 300 sub-agents across 4,000 coordinated steps in a single run.

Running inside a TEE on Chutes. The GPU operators serving the model can't see your prompts or outputs. $0.95 in / $4.00 out per million tokens.

Try it now: http://chutes.ai/app/chute/aac09863-35b4-5d9b-9b67-6e6a9d54273a
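Calling a hosted model like the ones above can be sketched with the standard library, assuming Chutes exposes an OpenAI-compatible chat completions endpoint. The base URL and model slug here are assumptions, not confirmed values; check the chute's page for the exact ones:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against the chute page.
API_URL = "https://llm.chutes.ai/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    # Build a standard chat completions request for a single user turn.
    payload = {
        "model": "moonshotai/Kimi-K2.6",  # assumed model slug
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def ask(prompt: str) -> str:
    # Send the request and pull the assistant's reply out of the response.
    req = build_request(prompt, os.environ["CHUTES_API_KEY"])
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the wire format is the common chat completions schema, swapping in another TEE-hosted model from the feed above should only require changing the model slug.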

Related Influencers
vitalik.eth
vitalik.eth offers profound insights into blockchain technology, the evolution of Ethereum, and the broader Web3 ecosystem. His content delves into decentralized systems, the philosophical underpinnings of digital currencies, and the future trajectory of web decentralization, shaping conversations across various platforms.
Watcher.Guru
Watcher.Guru provides real-time, unbiased coverage of the global cryptocurrency and finance markets. Through high-frequency updates on Twitter, they deliver breaking news on blockchain regulation, market movements, and institutional adoption to a global audience.
CoinDesk
CoinDesk is a prominent global media company delivering news, insights, and data on cryptocurrency and blockchain technology. They cover market trends, future innovations, and host industry-leading events like Consensus, providing comprehensive analysis across various digital platforms, including their podcasts and specialized market reports.
Mario Nawfal
Mario Nawfal is a leading Web3 entrepreneur and host on X, providing 24/7 live streams and market analysis on business, technology, and global crypto markets. As a venture capitalist, he delivers deep insights into startup investing, digital asset trends, and breaking blockchain news.
Gary Vaynerchuk
Gary Vaynerchuk is a leading entrepreneur and NFT pioneer, renowned for founding VeeFriends and shaping the digital asset landscape. His content offers educational insights, market analysis, and business strategies, bridging traditional ventures with Web3 culture to highlight long-term value, community building, and digital ownership across various global platforms.
Cointelegraph
Cointelegraph is a leading global media outlet providing comprehensive coverage of the cryptocurrency and blockchain industry. Since 2013, they've delivered breaking news, in-depth research, and expert podcasts across multiple digital platforms, offering critical analysis and interviews for the Web3 community.