
Tenstorrent vs NVIDIA: Can Open Hardware Challenge the AI Monopoly?

November 11, 2025
7 min read
Tags: tenstorrent, nvidia, ai-accelerator, risc-v, competition

The NVIDIA Problem Everyone Knows About

Let's state the obvious: NVIDIA has an effective monopoly on AI accelerators.

H100s are allocated like rare gems. Prices have doubled in two years. The CUDA ecosystem locks in developers. And there's no realistic alternative for most workloads.

This isn't healthy for the industry. Monopolies breed complacency, inflate prices, and stifle innovation. Competition made CPUs better (AMD vs Intel). Competition made cloud cheaper (AWS vs Azure vs GCP). Competition should make AI hardware better too.

Enter Tenstorrent—a company that's not trying to out-NVIDIA NVIDIA, but to change the rules of the game entirely.

Tenstorrent: The Challenger's Approach

Company Background

  • Founded: 2016 (Toronto, Canada)
  • CEO: Jim Keller (former AMD Zen architect, Apple A-series, Tesla Autopilot)
  • Funding: $1B+ raised, including a roughly $700M Series D (2024)
  • Strategy: Open-source RISC-V architecture for AI acceleration

Jim Keller is perhaps the most respected chip architect alive. He designed AMD's Zen architecture, which brought AMD back from the dead. He worked on Apple's A-series chips. He led Tesla's Autopilot hardware. When he joined Tenstorrent as CTO in 2021, then took over as CEO in 2023, it signaled serious intent.

The Technical Approach

Tenstorrent's bet is fundamentally different from NVIDIA's:

NVIDIA's approach:

  • Proprietary CUDA programming model
  • General-purpose GPUs adapted for AI
  • Massive transistor budgets (80B+ on H100)
  • Premium pricing, premium performance

Tenstorrent's approach:

  • Open-source RISC-V architecture
  • Purpose-built AI accelerators
  • Smaller, more efficient chips
  • Competitive pricing, targeted performance

The key insight: AI workloads are predictable. You know in advance what operations will run (matrix multiplications, attention, etc.). You don't need general-purpose flexibility—you need optimized execution of known patterns.
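That predictability can be made concrete. A transformer layer is a fixed pipeline of a handful of op types (matmul, scale, softmax, matmul), so an accelerator can schedule the whole thing statically. A toy sketch in pure Python with illustrative 2x2 shapes; nothing here is Tenstorrent- or NVIDIA-specific:

```python
import math

def matmul(A, B):
    # Naive dense matmul: the op AI accelerators optimize hardest.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    m = max(row)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    # Scaled dot-product attention: matmul -> scale -> softmax -> matmul.
    d = len(Q[0])
    scores = matmul(Q, [list(col) for col in zip(*K)])  # Q @ K^T
    scaled = [[x / math.sqrt(d) for x in row] for row in scores]
    weights = [softmax(row) for row in scaled]
    return matmul(weights, V)

# Every layer runs this same fixed sequence. Nothing data-dependent
# decides WHICH ops run, only their operand values.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[2.0, 0.0], [0.0, 2.0]]
out = attention(Q, K, V)
```

Because the op sequence is known at compile time, a purpose-built chip can lay out memory and dataflow for it in advance, which is exactly the flexibility general-purpose GPUs pay transistors for.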

Current Products

Tenstorrent's lineup spans three architecture generations:

Wormhole (n150, n300):

  • Entry-level AI accelerator
  • 74 TFLOPS (FP8)
  • 12GB GDDR6 memory
  • Price: $999-$1,399

Grayskull:

  • First-generation architecture
  • Focus on inference workloads
  • Developer-focused pricing

Blackhole (upcoming):

  • Next-generation architecture
  • Expected 2025
  • Targeting datacenter deployment

Head-to-Head: Tenstorrent vs NVIDIA

Raw Performance

| Metric | Tenstorrent n300 | NVIDIA RTX 4090 | NVIDIA H100 |
|---|---|---|---|
| FP8 TFLOPS | 148 (2x n150) | 660 | 1,979 |
| Memory | 24GB GDDR6 | 24GB GDDR6X | 80GB HBM3 |
| Memory BW | 400 GB/s | 1,008 GB/s | 3,350 GB/s |
| TDP | 300W | 450W | 700W |
| Price | ~$2,400 | ~$1,600 | ~$25,000 |

Analysis: On raw TFLOPS, NVIDIA wins decisively: the H100 delivers roughly 13x the compute of Tenstorrent's n300. But raw TFLOPS isn't the whole story; efficiency, price-performance, and software matter too.

Price-Performance

| Metric | Tenstorrent n300 | RTX 4090 | H100 |
|---|---|---|---|
| Price | $2,400 | $1,600 | $25,000 |
| FP8 TFLOPS | 148 | 660 | 1,979 |
| $/TFLOP | $16.22 | $2.42 | $12.63 |
| TFLOPS/Watt | 0.49 | 1.47 | 2.83 |

Analysis: The RTX 4090 offers by far the best price-performance, while the H100 wins on efficiency (TFLOPS/Watt). Tenstorrent is currently the most expensive per TFLOP, but that comparison pits a young second-generation part against NVIDIA's most mature silicon, so treat it as apples to oranges.
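The derived rows in the table above follow directly from the raw specs. A quick sketch that recomputes them, using the prices and TFLOPS figures quoted in this post (street prices fluctuate):

```python
# Specs as quoted earlier in this post; prices are approximate.
specs = {
    "Tenstorrent n300": {"price": 2400, "fp8_tflops": 148, "tdp_w": 300},
    "RTX 4090":         {"price": 1600, "fp8_tflops": 660, "tdp_w": 450},
    "H100":             {"price": 25000, "fp8_tflops": 1979, "tdp_w": 700},
}

for name, s in specs.items():
    dollars_per_tflop = s["price"] / s["fp8_tflops"]
    tflops_per_watt = s["fp8_tflops"] / s["tdp_w"]
    print(f"{name}: ${dollars_per_tflop:.2f}/TFLOP, "
          f"{tflops_per_watt:.2f} TFLOPS/W")
```

Running this reproduces the table: the 4090 lands at $2.42/TFLOP, the H100 at 2.83 TFLOPS/W, and the n300 trails on both derived metrics.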

The Software Reality

Here's where Tenstorrent faces the steepest climb:

NVIDIA CUDA ecosystem:

  • Nearly two decades of development (CUDA shipped in 2007)
  • Every AI framework supports CUDA
  • Millions of developers know CUDA
  • Extensive libraries (cuDNN, TensorRT, etc.)

Tenstorrent software stack:

  • TT-Metalium (low-level runtime)
  • TT-NN (neural network primitives)
  • TT-Buda (high-level framework)
  • PyTorch/ONNX model support (growing)

Reality check: Running a model on Tenstorrent today requires more work than on NVIDIA. Not all operations are optimized. Not all models work out-of-the-box. You're an early adopter, with all that implies.

Why Tenstorrent Might Matter Anyway

1. The Open-Source Advantage

Tenstorrent is building on RISC-V—an open-source instruction set architecture. This matters because:

  • No licensing fees: Anyone can build RISC-V chips
  • Customization: Modify the architecture for specific workloads
  • Transparency: Full visibility into how the hardware works
  • Community: Growing ecosystem of RISC-V developers

CUDA is proprietary: its compiler and driver internals are closed. You can't inspect them, you can't modify them, and you're locked into NVIDIA's roadmap.

2. The Jim Keller Factor

Jim Keller doesn't join companies to lose. His track record:

  • AMD Zen: Revived AMD's CPU business
  • Apple A-series: Made Apple silicon dominant in mobile
  • Tesla Autopilot HW: Designed custom AI chips for Tesla

If anyone can build a competitive AI architecture from scratch, it's him.

3. China and Export Controls

NVIDIA's most powerful GPUs (H100, H200) are export-restricted. China—the world's second-largest AI market—can't legally buy them.

Tenstorrent's current accelerators sit well below the performance thresholds that trigger those controls, and the RISC-V ISA it builds on is openly licensed. There's a massive potential market for non-NVIDIA AI accelerators that can legally ship to China.

4. The Long Game

Tenstorrent isn't trying to beat H100 today. They're building for 2027-2030:

  • First-generation products (Grayskull, Wormhole): Prove the architecture works
  • Second-generation (Blackhole): Competitive datacenter performance
  • Third-generation: Challenge NVIDIA's high end

Think of it like AMD Zen: first-generation was "good enough," second-generation was competitive, third-generation was industry-leading.

Who Should Consider Tenstorrent Today?

Good Fit:

  • Open-source advocates: If you believe in open hardware, Tenstorrent aligns with that philosophy
  • Edge AI developers: Wormhole's price/power makes sense for edge deployment
  • AI chip researchers: The open architecture enables research impossible on NVIDIA
  • Export-restricted markets: Where NVIDIA isn't an option
  • Long-term strategic buyers: Betting on the ecosystem maturing

Poor Fit:

  • Production ML teams: You need proven reliability, not early-adopter risk
  • Anyone on deadline: Debugging unfamiliar hardware takes time
  • Teams without AI expertise: You'll need to solve problems yourself
  • Performance-critical applications: NVIDIA is still faster

The Broader Alternative Accelerator Landscape

Tenstorrent isn't the only challenger. The AI accelerator market includes:

AMD Instinct (MI300X):

  • Closest NVIDIA competitor in raw performance
  • 192GB HBM3 memory (more than H100)
  • ROCm software stack (improving but still behind CUDA)
  • Price competitive with H100

Intel Gaudi 3:

  • Intel's AI accelerator line (via Habana Labs acquisition)
  • Competitive training performance
  • Strong in enterprise/cloud markets
  • Software ecosystem less mature

Google TPUs:

  • Available only via Google Cloud
  • Excellent for TensorFlow/JAX workloads
  • Can't buy hardware directly

Groq LPU:

  • Inference-focused architecture
  • Extremely fast token generation
  • Not suitable for training
  • Cloud service model

Cerebras:

  • Wafer-scale chips (entire wafer = one chip)
  • Massive memory bandwidth
  • Expensive ($1M+ per system)
  • Research and enterprise focus

The Honest Assessment

Today: NVIDIA is the safe choice. CUDA works. Frameworks work. Support exists. Performance is proven. For production workloads, there's no serious debate.

Tomorrow: The landscape could shift. AMD's ROCm is improving. Tenstorrent's next-generation chips could be competitive. Export controls could expand. A CUDA-killer framework could emerge.

The bet: Buying Tenstorrent today is betting on the future. It's betting that open-source wins, that competition emerges, that NVIDIA's monopoly doesn't last forever.

That bet might pay off. Jim Keller's track record suggests it could. But it's a bet, not a certainty.

Practical Recommendations

For individuals experimenting with alternative architectures:

Buy a Wormhole n150 ($999). It's cheap enough to experiment with, powerful enough to run real models, and you'll learn about non-NVIDIA AI hardware.

For startups building products:

Stick with NVIDIA for now. Your business depends on shipping working software. Debug business problems, not hardware problems.

For enterprises with strategic concerns:

Evaluate Tenstorrent alongside AMD and Intel. Build multi-vendor capability. Don't let NVIDIA lock-in become a strategic risk.
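One concrete way to build that multi-vendor capability is to keep model code behind a thin backend interface, so swapping vendors becomes a configuration change rather than a rewrite. A minimal sketch of the pattern; every name here is hypothetical, not a real vendor API:

```python
# Hypothetical backend registry: model code calls the interface,
# never a vendor SDK directly.
BACKENDS = {}

def register(name):
    def decorator(cls):
        BACKENDS[name] = cls
        return cls
    return decorator

class Backend:
    def matmul(self, a, b):
        raise NotImplementedError

@register("reference")
class ReferenceBackend(Backend):
    # Pure-Python fallback. Real backends would wrap CUDA, ROCm, or
    # TT-Metalium calls behind this same interface.
    def matmul(self, a, b):
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                for row in a]

def get_backend(name="reference"):
    return BACKENDS[name]()

backend = get_backend()
result = backend.matmul([[1, 2]], [[3], [4]])  # -> [[11]]
```

The point isn't this particular registry; it's that code written against an interface like this can be benchmarked on NVIDIA, AMD, and Tenstorrent hardware without touching the model layer.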

For researchers:

Tenstorrent's open architecture enables research impossible elsewhere. If you're studying AI accelerator design, this is valuable.

Conclusion

Tenstorrent represents the most interesting challenge to NVIDIA's dominance. Not because they're winning today—they're not—but because they're asking the right questions:

  • Should AI hardware be open-source?
  • Should a single company control the AI stack?
  • Can purpose-built accelerators beat general-purpose GPUs?

The answers will play out over the next 5-10 years. Tenstorrent might be the AMD of AI accelerators—the underdog that eventually competes. Or they might be another failed challenger to NVIDIA's moat.

What's certain: NVIDIA's monopoly isn't healthy. Competition would benefit everyone. Whether Tenstorrent specifically wins matters less than whether someone breaks the monopoly.

For now, keep NVIDIA for production and keep an eye on Tenstorrent for the future. The game is far from over.

---


Published November 27, 2025

