AI Server

Pre-configured AI infrastructure for every use case

AI Hardware from a Single Source

Qualified Systems for the COMI AI Platform

We support you in selecting the right hardware for your AI applications. Whether you choose one of our qualified reference systems or a system built to your specifications by our partners, you receive hardware optimally tuned for the COMI AI Platform. GPUs from NVIDIA, AMD, or Intel are configured according to your requirements.

We test and pre-configure every system and ship it with the AI Platform installed. You receive a ready-to-use system - from compact workstations to 19-inch rack servers to HPC clusters.

AI Workstations

Compact systems for local AI applications

AI Server S

Ultra-compact workstation for desktop or production floor deployment.

S: 12-Core Processor · 64 GB RAM (2 Channels) · 2× 2 TB + 2× 4 TB Storage · 1 GPU (96 GB) · 2× 10 Gbit/s

S Max: 16-Core Processor · 96 GB RAM (2 Channels) · 2× 2 TB + 2× 8 TB Storage · 2 GPUs (1× 96 GB + 1× 24 GB) · 2× 25 Gbit/s
Edge Inference · Single Workstation · Large Language Models

AI Server Pro

Powerful workstation for demanding local AI tasks and training.

Pro: 32-Core Processor · 128 GB RAM (4 Channels) · 2× 2 TB + 2× 3.84 TB Storage · 2 GPUs (1× 96 GB + 1× 24 GB) · 2× 25 Gbit/s

Pro Plus: 64-Core Processor · 256 GB RAM (4 Channels) · 2× 2 TB + 2× 7.68 TB Storage · 4 GPUs (2× 96 GB + 2× 24 GB) · 2× 100 Gbit/s
Local Training · Multi-Model Inference · Research

19" Rack Servers

Data center-ready systems for centralized AI workloads

AI Server Rack Mini

2U Short-Depth · 24 Cores · up to 256 GB RAM · up to 2 GPUs (2× 96 GB) · up to 30 TB · up to 2× 200 Gbit/s

Compact short-depth chassis for industrial racks and tight spaces.

Industrial Rack · Edge Data Center · Compact Entry

AI Server Rack

2U · 32 Cores · up to 2 TB RAM · up to 4 GPUs (4× 96 GB) · up to 60 TB · up to 4× 200 Gbit/s

Powerful rack server for data centers.

Central Inference · Multi-User Operation · Production Environment

AI Server Rack Ultra

4U · up to 2× 128 Cores · up to 4 TB RAM · up to 8 GPUs (8× 141 GB) · up to 180 TB · up to 4× 400 Gbit/s

Maximum performance for compute-intensive workloads.

Large-Scale Training · High Throughput · Enterprise

AI Server Cluster

Scalable HPC infrastructure for maximum performance

Modular cluster solution with specialized components - flexibly scalable to your requirements.

GPU Nodes

High-performance compute nodes for parallel AI workloads and training large models.

LLM Training · Parallel Inference · Deep Learning

CPU Nodes

Classic compute nodes for data processing, orchestration, and CPU-intensive tasks.

Data Preprocessing · Orchestration · Classic ML

Storage

High-performance storage solutions for large datasets and fast data access.

Large Datasets · Model Checkpoints · Shared Storage

Networking

High-speed interconnect for minimal latency between cluster nodes.

InfiniBand · Low Latency · High Throughput

Power & Cooling

Redundant power distribution and efficient cooling for uninterrupted operation.

Redundancy · UPS Integration · Liquid Cooling

Local Inference

Compact workstations for on-site AI inference - fast response times, full data control.


Central Processing

Rack servers for central processing of multiple data streams and requests.


Model Training

HPC clusters for training custom models with large datasets.


Redundancy & Scaling

Cluster solutions for high availability and horizontal scaling.

Frequently Asked Questions

What you need to know about our AI Servers.

Which GPU manufacturers are supported?

We support GPUs from NVIDIA, AMD, and Intel - depending on your requirements and compatibility with your AI models.

Are systems delivered pre-configured?

Yes, all systems are tested, pre-configured, and shipped with operating system, drivers, and the AI Platform - ready to use immediately.

What form factors are available?

From compact workstations to 19-inch rack servers to HPC clusters, we offer the right hardware for every use case.

Can custom configurations be created?

Yes, through our partners, systems can be assembled according to your specific requirements.

How do the systems integrate with existing infrastructure?

Our servers support standard network protocols and integrate seamlessly into your existing data center. Remote management via IPMI/BMC is available on all rack systems.
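
As an illustration of what remote management can look like in practice, the sketch below queries a server's power state over IPMI from a management host. It assumes the ipmitool utility is installed; the BMC address and credentials are hypothetical placeholders for your own environment, not shipped defaults.

# Minimal sketch: read the chassis power state of a rack server via its BMC.
# Requires ipmitool on the management host; host, user, and password below
# are hypothetical placeholders.
import subprocess

def bmc_power_status(host: str, user: str, password: str) -> str:
    """Return the chassis power state reported by the server's BMC."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "Chassis Power is on"

if __name__ == "__main__":
    print(bmc_power_status("10.0.0.42", "admin", "change-me"))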

How can I expand my system later?

All systems are modular and leave room for upgrades: workstations can be fitted with additional GPUs or more memory, and in clusters nearly every component can be scaled independently.

Is owning hardware worthwhile compared to cloud solutions?

With continuous usage, dedicated hardware often pays for itself within the first year. You retain full control over your data and avoid ongoing cloud costs that escalate quickly with GPU instances.
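
As a purely illustrative break-even estimate (the figures below are hypothetical placeholders, not our pricing or any provider's rates), such a calculation can look like this:

# Hypothetical break-even estimate: dedicated hardware vs. on-demand cloud GPUs.
# All numbers are illustrative placeholders, not real prices.
hardware_cost = 50_000            # one-time purchase price
cloud_rate_per_gpu_hour = 5.0     # assumed on-demand rate per GPU hour
gpus = 2                          # number of GPUs the workload keeps busy
utilization = 0.6                 # fraction of each month the GPUs are in use

monthly_cloud_cost = cloud_rate_per_gpu_hour * gpus * 24 * 30 * utilization
break_even_months = hardware_cost / monthly_cloud_cost
print(f"Break-even after roughly {break_even_months:.1f} months")

With these assumptions the purchase amortizes in under a year; with lower utilization the break-even point moves out accordingly.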

Let's enable AI together!

Get advice on our AI Server solutions.

When you click "Schedule via HubSpot", data will be transmitted to HubSpot.

By submitting your data, you agree to its processing. For more information, see our Privacy Policy.