Infrastructure
Built for
frontier AI.
From day one.
GPU Platform
NVIDIA · High-density AI racks
Cooling
Direct liquid-to-chip · Water-chilled plant
Designed PUE
~1.15 · see note below
Power
27.6 kV dedicated feeder · Utility-grade grid · N+1 redundancy
Uptime
99.99% SLA · Tier III Design Standard
Security
Biometric access · Mantrap entry · 24/7 NOC
Connectivity
Dark fibre · 100–400 Gbps scalable
Location
Ontario, Canada · Stable grid · Mature utility infrastructure
Compliance
PIPEDA · SOC 2 · ISO 27001
Deployment
Dedicated data hall · Custom-scoped · Wholesale
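Note on the PUE figure: this is a sketch using the standard industry definition of Power Usage Effectiveness (total facility energy divided by IT equipment energy); the ~1.15 value above is a design target, not a measured result.

\[
\mathrm{PUE} \;=\; \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}} \;\approx\; 1.15
\qquad\Longrightarrow\qquad
\frac{E_{\text{IT}}}{E_{\text{total}}} \;=\; \frac{1}{1.15} \;\approx\; 0.87
\]

In other words, roughly 0.15 units of cooling and power-distribution overhead for every unit of energy delivered to the IT load, which accounts for about 87% of total facility draw. Direct liquid cooling is one of the main levers for a design PUE in this range, since it removes much of the air-handling overhead of conventional data halls.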
Dark Fibre Connectivity
5ms to Toronto.
12ms to Chicago.
Dark fibre now.
Primary Carrier
Available now · Toronto 5ms · Chicago 12ms
Carrier 2
100–400 Gbps scalable
Carrier 3
100–400 Gbps · Diverse entry paths
2× diverse routes to Toronto · 1× to Chicago · Capacity expansion planned 2026–30
LIQUID COOLED.
ZERO THROTTLE.
EVERY RUN.
Direct-to-chip liquid cooling designed in from day one — not retrofitted.
Your GPUs run at full clock speed for the full duration of every training run.
~1.15
Designed PUE
0
Thermal Throttling
Phase 1
Liquid from Day 1