The Gulf doesn't just supply AI compute. It prices it.
AI Factories Engineered for Density and Performance
50–140 kW per rack, with peak loads up to 150 kW. Designed to scale up to 300 kW per rack, making it future-ready.
0.5–5 MW per module within a compact footprint, with built-in scalability.
16 to 36 weeks from design agreement to RFS (ready for service).
Tier III-ready, vendor-agnostic, and globally compatible, with an option for a non-critical setup.
Direct-to-Chip Liquid Cooling (DLC)
- Cold plate loops mounted directly to GPU/CPU dies
- Supports thermal loads >100 kW per rack
- Low delta-T and thermal resistance, minimizing energy loss
- Coolant circulated via cooling distribution units (CDUs)
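To put these loads in perspective, the sketch below estimates the coolant flow a CDU must circulate to carry 100 kW off a single rack. The 10 °C loop delta-T and the water-like coolant properties are illustrative assumptions, not design figures.

```python
# Illustrative only: coolant flow needed to remove an assumed 100 kW rack load
# through a direct-to-chip loop, using Q = m_dot * c_p * delta_T.
RACK_LOAD_KW = 100.0      # thermal load on the DLC loop (assumed)
DELTA_T_K = 10.0          # supply/return temperature rise (assumed)
CP_KJ_PER_KG_K = 4.18     # specific heat of a water-based coolant
DENSITY_KG_PER_L = 1.0    # approximate coolant density

mass_flow_kg_s = RACK_LOAD_KW / (CP_KJ_PER_KG_K * DELTA_T_K)
volume_flow_lpm = mass_flow_kg_s / DENSITY_KG_PER_L * 60

print(f"Mass flow: {mass_flow_kg_s:.2f} kg/s")      # ~2.4 kg/s
print(f"Volume flow: {volume_flow_lpm:.0f} L/min")   # ~144 L/min per rack
```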
Rear Door Heat Exchangers (RDHx)
- Rack-mounted liquid-to-air exchangers
- Remove the 5–20% residual heat that remains after DLC
- Passive/active failover to ensure thermal protection during component failure
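As a rough illustration of what that residual share means at rack level, the sketch below applies the 5–20% range quoted above to an assumed 100 kW rack load.

```python
# Illustrative only: air-side heat the rear door heat exchanger must absorb
# after the DLC loop, using the 5-20% residual range quoted above.
rack_load_kw = 100.0  # assumed rack load
for residual_fraction in (0.05, 0.20):
    rdhx_load_kw = rack_load_kw * residual_fraction
    print(f"{residual_fraction:.0%} residual -> RDHx handles {rdhx_load_kw:.0f} kW")
```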
Chiller/Drycooler Strategy
- Hybrid N+1 architecture (chillers + drycoolers)
- Grundfos VFD pumps ensure stable flow and hydraulic balance
- Adiabatic option: Enables PUE reduction in moderate climates
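The PUE effect of the adiabatic option can be sketched with the standard ratio of total facility power to IT power. Both cooling-overhead figures below are assumptions chosen to show the calculation, not measured site values.

```python
# Illustrative PUE comparison for a 5 MW IT load (PUE = total power / IT power).
# The cooling-overhead numbers are assumptions, not measured values.
it_load_mw = 5.0
scenarios = {
    "mechanical chillers only (assumed 0.9 MW overhead)": 0.9,
    "adiabatic-assisted cooling (assumed 0.6 MW overhead)": 0.6,
}
for name, overhead_mw in scenarios.items():
    pue = (it_load_mw + overhead_mw) / it_load_mw
    print(f"{name}: PUE = {pue:.2f}")
```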
This is our base model, engineered to deliver up to 5 MW of compute power. It supports up to 36 air-cooled 19-inch racks, all equipped with high-efficiency Rear Door Heat Exchangers for optimal thermal management. All of this starts from a 416 m² footprint (32 m × 13 m).
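The headline figures hang together: if the full 5 MW were spread evenly across 36 racks (an evenness assumption made only for illustration), the per-rack and floor-area densities land where the specifications above say they should.

```python
# Consistency check of the base-module figures: 5 MW, up to 36 racks,
# 416 m^2 (32 m x 13 m). The even per-rack split is assumed for illustration.
module_power_kw = 5000
rack_count = 36
footprint_m2 = 32 * 13  # 416 m^2

print(f"Average power per rack: {module_power_kw / rack_count:.0f} kW")  # ~139 kW, within the 50-140 kW band
print(f"Power density: {module_power_kw / footprint_m2:.1f} kW/m^2")     # ~12 kW/m^2
```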
Built to meet Tier III standards, this model offers exceptional reliability and uptime and can be upgraded to Tier IV for even greater fault tolerance and resilience.
Designed with scalability in mind, it enables seamless, uninterrupted expansion as your needs grow, forming the foundation for a 20 MW+ AI campus.
- Technical/service walkway
- Scalable power pod A feed
- Scalable power pod B feed
- Argonite-based fire detection/extinguishing system
- Standard: Built to a Tier III/IV design
- Modular Architecture: Scalable from 1 MW to 20 MW
- Rack Capacity: 26–36 racks per module
- Power System: Dual scalable power pods (A & B feeds), UPS-backed
- Fire Protection: Argonite-based detection and suppression
- Purpose: Rapid, turnkey AI infrastructure with integrated compute
- Deployment Time: 24 weeks
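For planning purposes, the module counts implied by different campus targets can be sketched from the 5 MW base module described above; the uniform module size and simple ceiling division are our own simplifications.

```python
# Illustrative build-out math: base modules needed for a given campus target,
# assuming the 5 MW base module described above.
import math

module_mw = 5
for campus_target_mw in (1, 5, 10, 20):
    modules = max(1, math.ceil(campus_target_mw / module_mw))
    print(f"{campus_target_mw} MW campus -> {modules} x {module_mw} MW module(s)")
```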