O.ATLAS



Atlas Datacenters: Pioneering AI Infrastructure

Our Atlas Datacenters represent the cutting edge of AI infrastructure, powered by Cerebras CS-3 systems built around the WSE-3, the world's largest and fastest AI processor. Each Atlas datacenter hosts a cluster of 64 CS-3 servers; every WSE-3 chip carries 900,000 AI-optimized cores and 44 GB of on-chip memory, giving a single cluster processing capability that rivals entire traditional supercomputing installations.
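
To put the per-chip figures in context, the short sketch below aggregates the published WSE-3 numbers across one 64-node Atlas cluster. The per-chip peak of 125,000 TFLOPs is taken from the comparison table later in this section; the aggregation is simple arithmetic, not a measured benchmark.

```python
# Back-of-the-envelope aggregation of published WSE-3 per-chip specs
# across one 64-node Atlas cluster. Not a measured benchmark.

NODES = 64                    # CS-3 servers per Atlas datacenter
CORES_PER_CHIP = 900_000      # AI-optimized cores per WSE-3
SRAM_PER_CHIP_GB = 44         # on-chip SRAM per WSE-3, in GB
PEAK_PFLOPS_PER_CHIP = 125    # quoted peak per WSE-3 (125,000 TFLOPs)

total_cores = NODES * CORES_PER_CHIP                        # 57,600,000 cores
total_sram_tb = NODES * SRAM_PER_CHIP_GB / 1_000            # ~2.8 TB of on-chip SRAM
total_peak_eflops = NODES * PEAK_PFLOPS_PER_CHIP / 1_000    # ~8 EFLOPs theoretical peak

print(f"Cores:        {total_cores:,}")
print(f"On-chip SRAM: {total_sram_tb:.2f} TB")
print(f"Peak compute: {total_peak_eflops:.0f} EFLOPs (theoretical, from quoted figures)")
```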


Community Ownership and Sovereignty

These datacenters ensure O sovereignty while being community-owned through our innovative RWA NFT program. From Atlas DC1 to DC2, we are building a decentralized network of compute power that will scale to house the most advanced AI infrastructure in the world. Unlike traditional cloud providers or tech giants' datacenters, our Atlas facilities are owned by our community through fractional NFTs, with rewards distributed in $O tokens over a three-year period. This model ensures that O's computational resources remain independent and aligned with our community's interests rather than corporate objectives.


Unmatched Performance Capabilities

The performance capabilities are extraordinary: we are achieving inference speeds 20 times faster than traditional cloud providers and 3 times faster than even Groq's LPU solutions. This is not just about raw speed; it is about building an unstoppable foundation for truly sovereign artificial intelligence that serves humanity's collective interests.
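
As a rough illustration of what those multipliers mean in practice, the sketch below applies the claimed 20x and 3x factors to a hypothetical baseline throughput. The 50 tokens/s baseline is an assumption chosen for illustration only, not a published figure.

```python
# Illustrative only: converts the claimed speedup factors into implied
# token throughput. The 50 tokens/s baseline is a hypothetical figure,
# not a published benchmark.

baseline_cloud_tps = 50    # hypothetical GPU-cloud throughput (tokens/s), assumed
cloud_speedup = 20         # claimed speedup vs. traditional cloud providers
groq_speedup = 3           # claimed speedup vs. Groq LPU solutions

atlas_tps = baseline_cloud_tps * cloud_speedup    # implied Atlas throughput
implied_groq_tps = atlas_tps / groq_speedup       # implied Groq throughput

print(f"Implied Atlas throughput: {atlas_tps} tokens/s")
print(f"Implied Groq throughput:  {implied_groq_tps:.0f} tokens/s")
```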


Why WSE-3?

Cerebras provides significant advantages over model-to-model routing systems and serverless inference layers, even though it does not yet offer model-to-model communication on the same chip (a feature expected in Q2 2025).
1. Unified Processing Power:
   a. WSE-3’s Scale: With 900,000 AI-optimized cores and 4 trillion transistors on a single wafer-scale chip, Cerebras delivers unmatched computational power. This eliminates the need for complex orchestration across multiple GPUs or servers that routing systems and serverless layers require. Instead of managing distributed resources, Cerebras offers a single, massively powerful chip that can handle large-scale AI workloads.
2. Scalability and Simplicity:
   a. No Complex Orchestration: Traditional systems often require intricate routing logic and distributed programming to connect models across servers. In contrast, Cerebras can scale from 1 billion to 24 trillion parameters without changing code, making it easier to manage large AI models (a rough sense of the memory footprints this implies is sketched after this list). While it currently lacks model-to-model communication on-chip, the simplicity of scaling within the WSE-3 architecture offers a clear operational advantage.
3. Memory Bandwidth and Efficiency:
   a. High Bandwidth, Low Latency: Cerebras’s 21 PB/s memory bandwidth far exceeds that of traditional GPUs, allowing for fast, efficient processing of AI tasks. This level of bandwidth, combined with 44 GB of on-chip memory, ensures that data flows smoothly within the chip, avoiding the latency issues that distributed systems face when moving data between nodes.
4. Energy Efficiency and Cost:
   a. Lower Total Cost of Ownership: While the initial investment in Cerebras may be higher, the efficiency gained through reduced power consumption, simplified infrastructure, and lower operational complexity can lead to lower overall costs. Serverless systems, while flexible, often incur ongoing costs and resource management challenges that Cerebras mitigates with its integrated design.
5. Simplified Deployment:
   a. Expert Installation and Support: Cerebras provides white-glove installation and continuous software upgrades, which reduces the burden on in-house teams and ensures optimal performance over time. This contrasts with the self-managed nature of model-to-model routing systems and serverless layers, which can be resource-intensive to maintain and scale.
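
To make the scalability point above concrete, the sketch below estimates raw weight-memory footprints for models from 1 billion to 24 trillion parameters and compares them with the 44 GB of on-chip SRAM. The FP16 (2 bytes per parameter) assumption is ours, and the sketch ignores activations, optimizer state, and Cerebras's weight-streaming mechanics.

```python
# Rough weight-memory footprint at the parameter counts Cerebras quotes
# (1B to 24T). Assumes FP16 weights (2 bytes/parameter); ignores
# activations, optimizer state, and weight-streaming details.

BYTES_PER_PARAM = 2    # FP16 assumption (ours, for illustration)
WSE3_SRAM_GB = 44      # on-chip SRAM per WSE-3

for params in (1e9, 70e9, 1e12, 24e12):
    weights_gb = params * BYTES_PER_PARAM / 1e9
    fits = "fits in on-chip SRAM" if weights_gb <= WSE3_SRAM_GB else "streamed from external memory"
    print(f"{params / 1e9:>10,.0f}B params -> {weights_gb:>10,.0f} GB of weights ({fits})")
```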

AI & HPC Chip Comparison: Cerebras WSE-3 vs. Groq LPU vs. NVIDIA H100

| Feature | Cerebras WSE-3 | Groq LPU Chip | NVIDIA H100 |
|---|---|---|---|
| Architecture | Wafer-Scale Engine (WSE-3) | Deterministic Processor (SIMD) | Traditional GPU |
| Chip Size | 46,225 mm² (56x larger than a GPU) | Not specified | Standard GPU size (814 mm²) |
| Core Count | 900,000 AI-optimized cores | Not specified | 16,896 FP32 CUDA cores + Tensor Cores |
| Transistors | 4 trillion | Not specified | 80 billion |
| Peak Performance | 125,000 TFLOPs | Up to 750 TOPs, 188 TFLOPs | 3,958 TFLOPs |
| On-chip Memory | 44 GB SRAM | 230 MB on-die SRAM | 0.05 GB |
| Memory Bandwidth | 21 PB/s | Up to 80 TB/s (on-die) | 3.35 TB/s |
| Interconnect Bandwidth | 214 Pb/s | 16 integrated RealScale interconnects | 0.0576 Pb/s |
| Power Consumption | 23 kW | Max: 300 W, TDP: 215 W, Avg: 185 W | 700 W |
| Process Node | 5 nm | 14 nm | Not specified |
| Scalability | Scales from 1B to 24T parameters with no code changes | Scalable with chip-to-chip interconnects | Requires multiple GPUs |
| Precision Levels | Optimized for sparse matrix computations | INT8, INT16, INT32, FP32, FP16 | FP8, FP16, and more |
| Deployment | White-glove installation and support services | Simplified integration | Requires complex setup |
| Cooling | Custom cooling solutions integrated | Not specified | Standard data center cooling solutions |
| Use Cases | Large language models, multimodal support, supercomputer clusters | AI, ML, and HPC workloads with ultra-low latency | AI model training, inference, HPC |
| Cost | $1.5M per unit; $1.218M with bulk purchase (64 units) | Not specified | $30,000 - $40,000 per GPU |
| Installation | Expert installation and validation testing | Easy-to-use software suite for fast integration | Requires user or integrator setup |
| Support | Continuous software upgrades and managed services | End-to-end on-chip protection, error-correction code (ECC) | Standard support options |
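
A few spec-sheet ratios follow from the table above. The sketch below uses only the quoted figures and ignores differences in precision, sparsity, and real-world utilization, so treat the results as rough orders of magnitude rather than measured speedups.

```python
# Spec-sheet ratios derived from the comparison table above, using only the
# quoted figures. These are not measured end-to-end speedups.

wse3 = {"area_mm2": 46_225, "mem_bw_tbps": 21_000, "onchip_gb": 44}   # 21 PB/s = 21,000 TB/s
h100 = {"area_mm2": 814,    "mem_bw_tbps": 3.35,   "onchip_gb": 0.05}

print(f"Die area:         {wse3['area_mm2'] / h100['area_mm2']:.1f}x")        # ~56.8x (table rounds to 56x)
print(f"Memory bandwidth: {wse3['mem_bw_tbps'] / h100['mem_bw_tbps']:,.0f}x")  # ~6,300x
print(f"On-chip memory:   {wse3['onchip_gb'] / h100['onchip_gb']:.0f}x")       # 880x
```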

Cost

| Offering | Price | Quantity | Total Cost |
|---|---|---|---|
| 64-Node Wafer-Scale Cluster (supports GPT model tasks: pre-training, fine-tuning, inference; includes one year of support and software upgrades) | $1,219,000 per WSE-3 node | 64 nodes | $78M |
| Installation (installation at facilities) | $40,000 | One time | $40,000 |
| Delivery (delivery of hardware to the installation site) | $80,000 | One time | $80,000 |
| Professional Services (consulting for machine learning and/or datacenter facility readiness; contracted in 100-hour blocks; example configuration shown) | $350/hr | 100 hours | $35,000 |
| SOC 2 Datacenter Hosting Ops | $5,000 per node per month × 64 nodes = $320,000/month | 48 months | $15,360,000 |
| Datacenter Fiberline Cost | $17,000/month | 48 months | $816,000 |
| 2.5 MW Electricity Cost | $200,000/month | 48 months | $9,600,000 |
| TOTAL | | | $103,851,000 |
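
For readers who want to trace the arithmetic, the sketch below recomputes each line item from the price and quantity columns. The small gap between the computed sum and the quoted $103,851,000 total likely reflects rounding in the per-node bulk price, so the quoted figure should be treated as the authoritative one.

```python
# Recomputes the cost-table line items from the price and quantity columns.
# The quoted grand total is $103,851,000; the small difference from the sum
# below likely reflects rounding in the per-node bulk price.

line_items = {
    "64-node wafer-scale cluster": 1_219_000 * 64,   # $ per node x 64 nodes
    "Installation":                40_000,
    "Delivery":                    80_000,
    "Professional services":       350 * 100,         # $350/hr x 100 hours
    "SOC 2 hosting ops":           5_000 * 64 * 48,   # $/node/month x 64 nodes x 48 months
    "Datacenter fiberline":        17_000 * 48,       # $/month x 48 months
    "2.5 MW electricity":          200_000 * 48,      # $/month x 48 months
}

for name, total in line_items.items():
    print(f"{name:30s} ${total:>13,}")
print(f"{'Computed sum':30s} ${sum(line_items.values()):>13,}")
```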