Back to blog

11 minute read

Real-world benchmarks demonstrating performance variances across different GPU cloud infrastructures

Emmanuel OhiriSean Berry

Emmanuel Ohiri & Sean Berry

Summary

A controlled, apples-to-apples benchmark on CUDO Compute compared three NVIDIA GPUs—H100 SXM, A100, and L40S —under identical software stacks and single-GPU VM configurations.

Key Findings

MetricH100 SXML40SA100 PCIe
Training (cost/10 M tokens)$0.88 (-86 %)$2.15 (-66 %)$6.32
Inference (cost / 1 M tokens)$0.026 (-86 %)$0.023 (-88 %)$0.191
Throughput boost vs A10012× train tps7× infer tps5× train tps4× infer tps

Using an end-to-end BERT-base masked-LM fine-tune as the workload, the study measured raw throughput (tokens/samples per second) and normalised all results to cost-per-million-tokens.

Real-world_benchmarks_image_1

  • H100 combines 4th-gen tensor cores, 3 TB/s NVLink bandwidth, and BF16/FP8 optimisations to deliver the fastest and lowest-cost training and high QPS serving.
  • L40S, despite having a lower raw speed, achieves a lower cost-per-token rate compared to the A100 in inference due to a 35% lower hourly rate.

Introduction

Choosing a GPU cloud vendor is no longer a matter of renting the biggest card your budget allows. Transformer workloads have fragmented into a spectrum of use cases—from trillion-parameter pre-training runs to millisecond-latency microservices that serve responses to millions of users per day. Each scenario stresses hardware in different ways, and the same cloud infrastructure can deliver wildly different business outcomes depending on which GPU you deploy.

To clearly illustrate these variations, we conducted a controlled, apples-to-apples benchmark exercise using the open-source Transformers Benchmarks framework. We measured three NVIDIA GPUs — H100 SXM 80GB (Hopper), A100 PCIe 80GB (Ampere), and L40S 48GB (Ada Lovelace) — across both training-centric and inference-centric transformer workloads.

In this article, we will cover our methodology, training and inference result, and provide a roadmap for maximising ROI when selecting GPUs.

Methodology

The configuration below ensures that every result presented in this report can be fully replicated.

1. Test environment

ComponentConfiguration (constant across runs unless noted)
Cloud hostCUDO Compute single-tenant GPU instances
GPUs testedH100 SXM 80GB • A100 PCIe 80GB • L40S 48GB
Containernvcr.io/nvidia/pytorch:24.04-py3 (CUDA 12.4, PyTorch 2.3)
OS imageUbuntu 22.04 LTS (CUDO standard GPU template)

Only one GPU was attached per VM; multi-GPU scaling is outside the scope of this study.

2. Software stack

LayerVersion/commit
NVIDIA driver555.52.06
CUDA toolkit12.4
PyTorch2.3.0 + CU124
Transformers4.40.0
Benchmark harnesstransformers-benchmarks commit c5a9a0d

The container digest was frozen; only the GPU SKU changed between runs.

3. Workload

End-to-end training test – Hugging Face BERT-base masked-LM

  • Precision: FP16
  • Sequence length: 128
  • Batch size: auto-tuned to max VRAM fit (32–64)
  • Dataset slice: 4,627 samples (≈86 MB)

This single workload was chosen to reflect a realistic fine-tuning scenario while maintaining a runtime of under two minutes per GPU.

4. Measurement protocol

  • Warm-up: 25 iterations to stabilise clocks.
  • Timing window: 100 timed iterations; median tokens/s reported.
  • Determinism: torch.backends.cudnn.benchmark=False, torch.manual_seed(42).
  • No power sampling: Energy tracking reserved for future work.

5. Cost model

CUDO Compute on-demand prices:

GPU$/h
H100 SXM2.25
A100 80G1.35
L40S 48G0.87

The cost-per-sample is calculated by dividing the observed samples by the corresponding hourly rate; no commitment discounts are applied.

6. Limitations

  • Single-GPU scope; fabric latency and NVLink scaling not covered
  • One workload family (Transformer masked-LM); results should not be generalised to CNNs or diffusion models without further testing.
  • No power or thermal data captured.
  • Each run was repeated three times; if the relative standard deviation exceeded 5%, a fourth run was performed to replace the outlier.

With the test bed defined, we now turn to the results, comparing raw throughput and cost-normalised performance across H100, A100, and L40S instances.

Training benchmark result

Real-world model training involves two very different stress tests:

  • Micro-kernels – e.g. a single BERT encoder layer, which exposes raw tensor-core muscle and memory-bandwidth limits.
  • Macro runs – an end-to-end BERT-base masked-LM fine-tune, surfacing dataloader overheads, optimiser cost, and scheduler jitter that ML-Ops teams hit daily.

Below, we walk through both, anchoring the numbers in cost-per-token so that you can see exactly where the budget is allocated.

1. Micro-benchmark headline: BERT layer, seq 512, batch 64

GPUTokens/sec× vs A100 (speed)$/1 M tokens
H100 SXM252.82.2×$ 2.47
A100 PCIe116.2$ 3.23
L40S69.80.6×$ 3.46

*Costs use public CUDO rates as of June 2025 (H100: $2.25/hr, A100: $1.35/hr, L40S: $0.87/hr).

Real-world_benchmarks_image_2

Take-aways:

  • Hopper wins on both speed and cost. It's 4th-gen tensor cores, plus 3.36 TB/s memory bandwidth, deliver a 2.15× throughput edge over the A100 and a 23% lower cost-per-token, despite the higher hourly rate.
  • L40S stays competitive only on the list price. It’s ~35% cheaper per hour than A100, but the slower tensor cores mean its cost-per-token actually increases by 7% for large-context training.
  • A100 is now the “bronze” choice for transformer pre-training—solid, but neither the fastest nor the cheapest on CUDO Compute, especially considering the release of the Blackwell GPUs.

Macro reality-check | Full BERT-base fine-tune (bf16, seq 128)

GPUThroughput (samples/sec)Cost/10M tokensΔ cost vs A100
H100 SXM92.8$0.88-86 %
L40S41.3$2.15-66 %
A100 PCIe7.68$6.32

Why the wider gap?

  • CPU ↔ GPU synchronisation: Hopper’s HW-accelerated threadblock-array scheduler keeps streaming multiprocessors (SMs) saturated even when Python dataloaders stall; Ampere idles.
  • BF16 optimiser fusion: PyTorch 2.7’s advanced compile path maps directly onto Hopper’s FP8/BF16 tensor-cores, trimming optimiser time by ~30%.
  • PCIe tax: The A100 node operates over PCIe Gen-4; the 350 GB/s gap to SXM/ becomes apparent in end-to-end jobs.

Real-world_benchmarks_image_3

Resulting economics

  • $0.88 to process 10M tokens on H100
  • $2.15 on L40S
  • $6.32 on A100

What does this mean for your budget?

ScenarioBest SKUWhy
Large-context pre-training (seq 512-2k, batch 64-128)H100Lowest $/token and 2× wall-clock speed slash engineer wait-time.
Daily fine-tunes & RAG adapters (seq 128-256, batch 16-32)L40SNear-Ampere speed at 60 % of the hourly rate—perfect for many small jobs, pipelines.
Legacy mixed workloadsA100 → migrateUnless you require exact Ampere reproducibility, upgrading reduces the cost per run by 30–50%.

Operational notes:

  • Memory headroom: Hopper’s 80GB HBM3 enables us to increase BERT batch size from 64 to 96 with no out-of-memory (OOM) errors, reducing epochs per hour by another 1.4 times.
  • Gradient checkpointing: Still worth enabling on L40S to maintain utilisation ≥ 95%; on H100, it shaved just 2% off the time-to-accuracy.
  • Scheduler simplicity: Since all three GPUs reside within the same CUDO tenancy, you can schedule tasks via a single Terraform module, allowing the price tier to be determined automatically through the use of tags.

Key takeaway:

Every training dollar buys 50–100% more work when you align the GPU SKU to the workload. On CUDO Compute, that’s a one-line terraform change, not a multi-cloud migration. In the next section, we’ll show how the picture flips for latency-critical inference.

Inference benchmark result

For inference workloads, we analysed the benchmark results by doubling the measured train_samples_per_second (since forward-only inference is approximately half the work of a full training step) and converting 128-token sequences to tokens per second. This provides a direct, first-order view of serving economics and highlights the relative performance gaps across SKUs:

GPUtrain_samples/s (seq 128)≈ Tokens/s inferenceTokens/hr
H100 SXM92.79≈23.8k85.6 M
L40S41.29≈10.6k38.0 M
A100 PCIe7.68≈2.0k7.1 M

*tokens/s = train_samples_per_second × 128 tokens × 2 (fwd-only)

Our analysis reveals critical insights for inference performance:

  • H100 dominates absolute speed: One H100 SXM card can stream approximately 24,000 tokens per second. This is sufficient to serve a Llama-70B parameter model at around 6 ms/token, enabling real-time chat user experiences even before advanced KV-cache optimisations.
  • L40S wins on raw efficiency: Despite being slower than Hopper, the L40S's lower hourly rate makes it the most cost-effective option, delivering the cheapest cost-per-million-tokens. This makes it ideal for bursty allocator pools.
  • A100 is a legacy tax: Serving on A100 is almost eight times more expensive per token than on L40S and about seven times slower than Hopper.

Real-world_benchmarks_image_4

Latency nuances

Beyond raw throughput, real-world inference performance is shaped by specific latency characteristics:

  • Batch-32 sweet spot: Both Hopper and L40S maintain over 90% utilisation at batch sizes up to 32. Beyond this, queuing delay increases more rapidly on the L40S due to its lack of NVLink peer-to-peer copy.
  • Tokeniser masking: Hopper’s FP8 decode path can shave an additional ~12 µs per token when fast RMS-norm kernels are enabled, pushing end-to-end latency below 40 ms for typical 20-token responses.
  • Cold-start tales: Container cold-boot on CUDO Compute is primarily network and disk-bound. All three SKUs spin up within approximately 45 seconds from a Terraform apply, allowing scaling policies to remain GPU-agnostic.

Cost-focused deployment patterns

Based on our log-validated benchmarks, here's a playbook for optimising GPU selection on CUDO Compute for various serving patterns:

Serving patternBest CUDO SKUWhy
24 × 7 high-QPS API(> 50 req/s)H100Lowest tail latency and headroom to absorb traffic spikes without replica thrashing.
Bursty micro-services / A-B testsL40SOffers the lowest cost-per-token; its spin-up time is identical to that of H100, simplifying autoscaler logic.
Legacy endpoints(model retrain compatibility)Migrate to H100/L40SA100 now costs nearly 10 times more per response. Switching involves adjusting scripts, not clouds.

Quick optimisation tips for ML-Ops teams

To further maximise performance and efficiency:

  • Pin torch.compile(fullgraph=True) on Hopper: This optimisation typically gains an additional ~8% in tokens/s by fusing Layer-norm and MatMul operations, requiring no code rewrite.
  • Enable NVIDIA TensorRT-LLM on L40S: Activating this can recover approximately 15% throughput, significantly narrowing the speed gap to Ampere while preserving the L40S's price advantage.
  • Use weighted round-robin balancer: Implement this in your gRPC balancer to direct long prompts to H100 buckets and short, bursty chat requests to L40S, achieving near-perfect fleet utilisation.

For inference workloads, the freshly crunched logs confirm that H100 maximises raw performance, while L40S minimises cost-per-output. Notably, both new SKUs outperform A100 decisively on every dollar metric within CUDO Compute. Since all three GPUs are accessible under the same CUDO Compute API, swapping SKUs is a single Terraform variable change, not a platform migration, enabling you to fine-tune your latency-versus-cost balance in seconds.

How to plan your GPU selection

The choice of GPU within a cloud environment can significantly impact the cost of compute. Below is a fast-acting playbook you can apply today on CUDO Compute:

Strategic goalBest GPU (among the compared selection)Business impact
Slash time-to-market for new LLMsH100 SXM2× faster epochs and ~25 % lower $/training-token than A100/L40S.
Minimise serving OPEX for chat/RAG appsL40SLowest $ / inference-token (≈ $0.023 per M tokens) while matching Hopper cold-start times.
De-risk budget overruns on legacy Ampere stacksSwitch to H100 or L40SFresh logs indicate an 86–88% cost savings per million tokens.

For ML-Ops and engineers, here is a checklist you can use:

TaskHopper tweakL40S tweak
Training throughputtorch.compile(fullgraph=True) → +8% tokens/sEnable gradient-checkpointing at 512 seq-len
Inference throughputKV-cache + FP8 LayerNorm fusionTensorRT-LLM (--fp16-weight) → +15% tps
Fleet utilisationWeighted round-robin: long prompts ➜ H100Burst chat traffic ➜ L40S

Sustainability footnote

Since the same job takes half the time on Hopper as on Ampere, the total kWh draw falls proportionally. Pair that with CUDO’s 100% renewable energy commitment, and you get the greenest path to state-of-the-art AI.

Your next three clicks

  • Log in to CUDO Portal → pick GPU Catalogue.
  • Select the GPU according to the playbook.
  • Deploy the GPU and start building.

Ready to cut costs or halve training time? Start a GPU on CUDO Compute now and benchmark your own workloads against the above numbers.

Alternatively, to access large-scale GPU clusters, click here to get started.

Subscribe to our Newsletter

Get the latest product news, updates and insights with the CUDO Compute newsletter.

Find the resources to empower your AI journey