
AI Hardware from a Single Source
Qualified Systems for the COMI AI Platform
We support you in selecting the right hardware for your AI applications. Whether you choose one of our qualified reference systems or a system built to your specifications by our partners, you receive hardware optimally tuned for the COMI AI Platform. GPUs from NVIDIA, AMD, or Intel are configured according to your requirements.
We test and pre-configure every system and ship it with the AI Platform installed. You receive a ready-to-use system - from compact workstations to 19-inch rack servers to HPC clusters.
AI Workstations
Compact systems for local AI applications

AI Server S
Ultra-compact workstation for desktop or production floor deployment.
Configuration 1:
- Processor: 12 Cores
- Memory: 64 GB (2 Channels)
- Storage: 2× 2 TB + 2× 4 TB
- GPU: 1× 96 GB
- Network: 2× 10 Gbit/s
Configuration 2:
- Processor: 16 Cores
- Memory: 96 GB (2 Channels)
- Storage: 2× 2 TB + 2× 8 TB
- GPU: 1× 96 GB + 1× 24 GB
- Network: 2× 25 Gbit/s

AI Server Pro
Powerful workstation for demanding local AI tasks and training.
Configuration 1:
- Processor: 32 Cores
- Memory: 128 GB (4 Channels)
- Storage: 2× 2 TB + 2× 3.84 TB
- GPU: 1× 96 GB + 1× 24 GB
- Network: 2× 25 Gbit/s
Configuration 2:
- Processor: 64 Cores
- Memory: 256 GB (4 Channels)
- Storage: 2× 2 TB + 2× 7.68 TB
- GPU: 2× 96 GB + 2× 24 GB
- Network: 2× 100 Gbit/s
19" Rack Servers
Data center-ready systems for centralized AI workloads

AI Server Rack Mini
2U Short-Depth · 24 Cores · up to 256 GB RAM · up to 2 GPUs (2× 96 GB) · up to 30 TB · up to 2× 200 Gbit/s
Compact short-depth chassis for industrial racks and tight spaces.

AI Server Rack
2U · 32 Cores · up to 2 TB RAM · up to 4 GPUs (4× 96 GB) · up to 60 TB · up to 4× 200 Gbit/s
Powerful rack server for data centers.

AI Server Rack Ultra
4U · up to 2× 128 Cores · up to 4 TB RAM · up to 8 GPUs (8× 141 GB) · up to 180 TB · up to 4× 400 Gbit/s
Maximum performance for compute-intensive workloads.
AI Server Cluster
Scalable HPC infrastructure for maximum performance
Modular cluster solution with specialized components - flexibly scalable to your requirements.
GPU Nodes
High-performance compute nodes for parallel AI workloads and training large models.
CPU Nodes
Classic compute nodes for data processing, orchestration, and CPU-intensive tasks.
Storage
High-performance storage solutions for large datasets and fast data access.
Networking
High-speed interconnect for minimal latency between cluster nodes.
Power & Cooling
Redundant power distribution and efficient cooling for uninterrupted operation.
Frequently Asked Questions
What you need to know about our AI Servers.
Which GPUs do you support?
We support GPUs from NVIDIA, AMD, and Intel - depending on your requirements and compatibility with your AI models.
Are the systems ready to use on delivery?
Yes, all systems are tested, pre-configured, and shipped with operating system, drivers, and the AI Platform - ready to use immediately.
Which form factors do you offer?
From compact workstations to 19-inch rack servers to HPC clusters, we offer the right hardware for every use case.
Can systems be customized to our requirements?
Yes, through our partners, systems can be assembled according to your specific requirements.
How do the servers integrate into our existing infrastructure?
Our servers support standard network protocols and integrate seamlessly into your existing data center. Remote management via IPMI/BMC is available on all rack systems; a minimal query example is sketched below.
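As an illustration of what IPMI/BMC remote management looks like in practice, here is a minimal sketch that queries a rack server's chassis power status over the network using the standard ipmitool CLI from Python. The BMC address, user, and password are hypothetical placeholders and depend entirely on how your BMC is configured.

```python
# Minimal sketch: query a rack server's BMC over the network with ipmitool.
# Host, user, and password below are hypothetical placeholders - adjust to your BMC setup.
import subprocess


def bmc_power_status(host: str, user: str, password: str) -> str:
    """Return the chassis power status reported by the BMC via IPMI over LAN."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    # Hypothetical BMC address and credentials for illustration only.
    print(bmc_power_status("10.0.0.42", "admin", "changeme"))
```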
Can the systems be upgraded later?
Yes, all systems are modular and leave room for upgrades. Workstations can be extended with additional GPUs or more memory, and in clusters almost all components can be scaled independently.
Is dedicated hardware worth it compared to the cloud?
With continuous usage, dedicated hardware often pays for itself within the first year. You retain full control over your data and avoid ongoing cloud costs that escalate quickly with GPU instances.
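To make the break-even reasoning concrete, here is a minimal sketch that compares cumulative cloud GPU rental cost with a one-time hardware purchase. All figures (purchase price, hourly rate, utilization) are hypothetical placeholders, not quotes; substitute your own numbers.

```python
# Rough break-even sketch: cumulative cloud GPU rental vs. one-time hardware purchase.
# All figures below are hypothetical placeholders, not actual prices or quotes.


def breakeven_months(hardware_cost: float, cloud_rate_per_hour: float,
                     hours_per_month: float) -> float:
    """Months until cumulative cloud rental cost exceeds the hardware purchase price."""
    monthly_cloud_cost = cloud_rate_per_hour * hours_per_month
    return hardware_cost / monthly_cloud_cost


if __name__ == "__main__":
    # Hypothetical example: a 30,000 EUR workstation vs. a 5 EUR/h cloud GPU instance
    # running around the clock (~720 h/month).
    months = breakeven_months(hardware_cost=30_000, cloud_rate_per_hour=5.0,
                              hours_per_month=720)
    print(f"Break-even after roughly {months:.1f} months of continuous usage")
```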
Let's enable AI together!
Get advice on our AI Server solutions.



