
What is the cost of training large language models?


Emmanuel Ohiri & Richard Poole

Large language models (LLMs) like OpenAI's GPT series and Google's BERT have become foundational technologies that drive many applications, from automated customer service to advanced research tools.

Training these models requires substantial financial investment, primarily because of their vast parameter counts and the computational power needed to optimize them. Training runs depend on high-end GPUs or specialized AI accelerators, which are costly to buy or rent.

For example, the compute cost for training GPT-3 alone is estimated to range from about $500,000 to as high as $4.6 million, depending on the specific hardware and operational efficiencies achieved during the training process.

This article explores the multifaceted expenses involved in bringing these generative AI models to life, focusing mainly on infrastructure needs, data management, and the increasingly pivotal role of cloud computing. Read on to get a comprehensive view of the financial and logistical considerations that shape the development of large language models today.

What are large language models?

LLMs are neural networks designed to understand and generate human language. They are trained on vast datasets containing text from books, websites, and other digital content.

They learn the statistical properties of language, allowing them to generate coherent and contextually relevant text based on the input they receive. For example, models like GPT are trained on a variety of internet text and can generate text that mimics human writing styles across many contexts and topics.


LLMs are typically built on the transformer architecture. Transformers use the attention mechanism to process every part of a text in relation to every other part, letting the model weigh the importance of different parts of the input differently depending on the context provided by the rest of the text. This context awareness is crucial for understanding and generating coherent, contextually appropriate responses.
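For intuition, here is a minimal sketch of scaled dot-product attention, the core operation transformers use to relate every token to every other one. The PyTorch code and tensor sizes below are illustrative toy values, not any particular model's implementation:

```python
import torch

def scaled_dot_product_attention(q, k, v):
    # Similarity between every pair of positions, scaled for numerical stability
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    weights = scores.softmax(dim=-1)  # each row sums to 1: how much to attend where
    return weights @ v                # context-aware mix of the value vectors

# one sequence of 5 tokens with 16-dimensional embeddings (arbitrary toy sizes)
q = k = v = torch.randn(1, 5, 16)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 5, 16])
```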

BERT is an example, as it can understand the context of words in a sentence by reading the text bidirectionally (both left-to-right and right-to-left), a significant advancement over older models that processed text in one direction. This capability makes BERT especially effective for tasks that require a deep understanding of language context, such as answering questions or classifying text.

LLMs have wide-ranging applications, touching industries ranging from healthcare, where they can predict patient outcomes based on historical data, to entertainment, where they generate realistic dialogues for virtual characters.

Now, let’s discuss the cost of training LLMs with cloud services.

Cost of training LLMs with cloud services

With AI development shifting increasingly to cloud platforms for several reasons, including GPU shortages, cloud services have become one of the easiest and most reliable ways to train LLMs. Their scalability also suits the fluctuating demands of AI training cycles.

According to NVIDIA CEO Jensen Huang's keynote at NVIDIA GTC 2024, training the GPT-MoE-1.8T model on 25,000 Ampere-generation GPUs (most likely A100s) took three to five months. Doing the same on Hopper-generation hardware would take about 8,000 H100 GPUs running for 90 days.

Due to the significant financial investment required, most users won't train LLMs from scratch. Instead, they'll leverage pre-trained models offered by other companies or organizations (such as OpenAI's GPT models or Meta's Llama 2).

There are two ways to use these pre-trained models:

  • Hosting your own model
  • Paying per token

Let’s take a look at each method.

Hosting models in the cloud

Companies like CUDO Compute offer comprehensive suites that support the entire machine learning lifecycle—from data storage and compute to deployment and management. However, the convenience of cloud-based training comes at a cost.


When training models with billions of parameters, like GPT-3 or Falcon 180B, the cost goes beyond the GPUs themselves, such as the A100s. In a cloud service environment, you would also need to account for:

  • Virtual CPUs (vCPUs), which manage the execution of model training tasks.
  • Memory (RAM), which stores intermediate data for computations.
  • Storage, which holds the model's parameters and training data.

Each of these components would add to the cost, and optimizing the use of resources to manage expenses effectively is crucial. Cloud providers typically charge based on the compute time, the amount of memory allocated, and the amount of data stored or transferred, making training large AI models particularly costly.

Cost of training large language models on CUDO Compute

Let’s break down how this might work when training a large LLM on CUDO Compute:

At the time of writing, the cost of the A100 on CUDO Compute starts from $1.67 per hour or $1,219.94 per month. Other resources, such as vCPUs, memory, and storage, are charged separately, with prices varying by location.

We will base our analysis on pricing from the Los Angeles 1 location, which reflects the median price for an A100 GPU on CUDO Compute. Here is the cost of each resource needed:

| Resource | Location | Unit | Unit price per hour | Unit price per month |
| --- | --- | --- | --- | --- |
| vCPUs | Los Angeles 1 | 1 vCPU | $0.0022 | $1.61 |
| Memory | Los Angeles 1 | 1 GB | $0.0035 | $2.56 |
| Storage | Los Angeles 1 | 1 GB | $0.00012 | $0.09 |

Multiple GPUs are advised for optimal results. Based on the default AWS instance for training the same model, here is the recommended configuration for training Falcon 180B on CUDO Compute:

| Resource | Quantity required |
| --- | --- |
| vCPUs | 1,152 |
| Memory | 320 GB |
| Storage | 8,000 GB |
| GPU (NVIDIA A100) | 8 |

Running this configuration on CUDO Compute totals just over USD 13,000 per month. Here is the breakdown:

| Resource | Quantity required | Unit cost per month | Total cost per month (qty × unit cost) |
| --- | --- | --- | --- |
| vCPUs | 1,152 | $1.61 | $1,854.72 |
| Memory | 320 GB | $2.56 | $819.20 |
| Storage | 8,000 GB | $0.09 | $720.00 |
| GPU (NVIDIA A100) | 8 | $1,219.94 | $9,759.52 |
| **Total** | | | **$13,153.44** |
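As a quick sanity check, the breakdown above can be reproduced in a few lines of Python (quantities and unit prices copied from the table):

```python
# Monthly cost arithmetic for the Falcon 180B configuration above.
resources = {
    # name: (quantity, unit cost per month in USD)
    "vCPUs": (1152, 1.61),
    "Memory (GB)": (320, 2.56),
    "Storage (GB)": (8000, 0.09),
    "GPU (NVIDIA A100)": (8, 1219.94),
}

total = 0.0
for name, (qty, unit) in resources.items():
    cost = qty * unit
    total += cost
    print(f"{name:<20} {qty:>6} x ${unit:<8} = ${cost:,.2f}")
print(f"{'Total':<20} ${total:,.2f}")  # $13,153.44
```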

Since training an LLM will likely take months, this cost adds up over time, particularly when training involves multiple iterations over extensive datasets. CUDO Compute's pricing is highly competitive, so compute costs are typically higher on other platforms; for example, an instance with a similar configuration on AWS (ml.p4de.24xlarge) costs over USD 23,000 per month.

Given the costs, some users might prefer to pay per token. Here is how that works.

Pay per token (PPT) for LLM access

The high cost of training and maintaining LLMs has led to the rise of the pay per token (PPT) model for accessing these powerful language models. Here's how it works:

Companies like OpenAI and Google AI pre-train massive LLMs on vast datasets and make the resulting models publicly available through APIs. This allows developers and businesses to use these models, such as GPT-3 or similar, without the prohibitive costs and technical challenges of training such models themselves.

Users don't incur the upfront costs of training and infrastructure. Instead, they pay a fee based on the number of tokens (roughly equivalent to words or sub-words) processed by the LLM when completing tasks like text generation, translation, or code writing.


The PPT model offers a significantly more cost-effective approach than in-house training for tasks that don't require extensive LLM usage. Users only pay for the resources they actually use.
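Estimating PPT spend is straightforward arithmetic. The sketch below uses hypothetical per-token prices; every provider publishes its own rates, so substitute the real numbers for your chosen model:

```python
# Back-of-the-envelope PPT cost estimate. Prices are assumed placeholders,
# not any specific provider's published rates.
price_per_1k_input = 0.0005   # USD per 1,000 input tokens (assumed)
price_per_1k_output = 0.0015  # USD per 1,000 output tokens (assumed)

def monthly_cost(requests, in_tokens, out_tokens):
    """Estimated monthly spend for a given traffic profile."""
    total_in = requests * in_tokens / 1000 * price_per_1k_input
    total_out = requests * out_tokens / 1000 * price_per_1k_output
    return total_in + total_out

# e.g. 1M requests/month, 500 input and 200 output tokens per request
print(f"${monthly_cost(1_000_000, 500, 200):,.2f}")  # $550.00
```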

Benefits of pay per token:

  • Reduced costs: This model eliminates the upfront investment in hardware, software, and training data.
  • Scalability: Users can easily scale their LLM usage up or down based on their needs, paying only for the tokens they consume.
  • Accessibility: PPT allows a wider range of users and smaller companies to access LLMs without the prohibitive costs of in-house training.

Why is it so expensive to train LLMs?

"

Training large language models (LLMs) requires immense computational power. These models have billions of parameters, and training them involves complex algorithms running on powerful hardware (like GPUs) for days or even months. Cloud services offering this infrastructure come at a significant cost, with factors like compute time, storage space, and data transfer contributing to the overall expense.

"

Considerations for pay per token:

  • Pricing models: Different providers offer varying pricing structures based on the specific LLM model and the volume of tokens used. Some might offer discounts for higher usage tiers.
  • Limited control: Users have less control over the training data and specific configurations used for the pre-trained model than in-house training.
  • Latency: Depending on the length of the response and how many tokens per second the model can generate on the backend hardware, users might experience some latency when interacting with the LLM through the API.


The pay-per-token model offers a compelling alternative for most users seeking to use LLMs without the significant financial burden of in-house training. However, understanding the pricing structures, limitations on control, and potential latency issues is important before choosing this method.

Steps to controlling the cost of training LLMs

While the cost of training LLMs remains significant, there are strategies to optimize resource utilization and minimize expenses:

1. Implement model optimization techniques:

  • Model architecture selection: Carefully select a model architecture that balances complexity with desired performance. Smaller models often require fewer resources to train. Pruning techniques can further reduce model size without significant accuracy loss.
  • Training data optimization: Ensure your training data is high quality and relevant to the task at hand. Filtering out irrelevant data can lead to faster training times and lower compute costs.
  • Knowledge distillation: In the knowledge distillation process, a smaller "student" model is trained to replicate the performance of a larger "teacher" model. This allows the student model to benefit from the teacher's knowledge without the extensive computational resources required to train the larger model from scratch. Being more compact, the student model is also more efficient to deploy, especially in resource-constrained environments (a toy sketch of the distillation loss follows this list).


  • Mixed-precision training: Mixed-precision training combines half-precision (FP16) and single-precision (FP32) floating-point formats within a single training run. The goal is to speed up training and reduce memory usage while maintaining the model's accuracy and stability. Special techniques, such as loss scaling, manage the impact of reduced numerical precision on training dynamics. This requires compatible hardware, such as NVIDIA's H100 GPUs (a minimal sketch also follows this list).
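To make the distillation idea concrete, here is a toy PyTorch sketch of a standard distillation loss: the student is trained to match the teacher's softened output distribution. The models, temperature, and data here are illustrative placeholders, not a real LLM setup:

```python
import torch
from torch import nn
import torch.nn.functional as F

# Toy stand-ins; in practice the teacher is a large pre-trained model.
teacher = nn.Linear(128, 10)
student = nn.Linear(128, 10)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
T = 2.0  # temperature softens the teacher's distribution

x = torch.randn(32, 128)          # a batch of dummy inputs
with torch.no_grad():
    teacher_logits = teacher(x)   # the teacher is frozen during distillation

student_logits = student(x)
# KL divergence between softened distributions; T^2 rescales the gradients
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * T * T
loss.backward()
optimizer.step()
```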
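And here is a minimal sketch of mixed-precision training using PyTorch's automatic mixed precision (AMP) utilities. The model and data are toy stand-ins, and a CUDA-capable GPU is assumed:

```python
import torch
from torch import nn

# Toy model and random data stand in for a real LLM; requires a CUDA GPU.
device = "cuda"
model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # loss scaling guards against FP16 underflow

for step in range(100):
    x = torch.randn(32, 512, device=device)
    y = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # eligible ops run in FP16, the rest in FP32
        loss = criterion(model(x), y)
    scaler.scale(loss).backward()     # backpropagate through the scaled loss
    scaler.step(optimizer)            # unscale gradients, then apply the update
    scaler.update()                   # adapt the scale factor for the next step
```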

2. Consider hardware optimizations:

  • Efficient hardware utilization: Monitor resource utilization during training. Techniques like gradient accumulation can help achieve higher GPU utilization, leading to faster training times and reduced costs (see the sketch after this list).
  • Choose the right hardware: Select hardware that offers the best performance-to-cost ratio for your specific training needs. Consider newer GPUs like the H100 that boast significant performance improvements over previous generations.
  • Cloud service optimization: Explore different cloud service providers and pricing models. Reserved or committed instances are usually cheaper for long, predictable training schedules, while on-demand pricing can save money for shorter or less predictable workloads.
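Here is the gradient accumulation sketch referenced above: it simulates a larger batch by summing gradients across several micro-batches before each optimizer step, which can keep a GPU busy when memory limits the batch size. The model and data are toy stand-ins:

```python
import torch
from torch import nn

# Toy stand-ins; the pattern is identical for a real model and dataloader.
model = nn.Linear(256, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
accum_steps = 4  # effective batch = micro-batch size x accum_steps

optimizer.zero_grad()
for i in range(100):
    x, y = torch.randn(8, 256), torch.randint(0, 10, (8,))
    loss = criterion(model(x), y) / accum_steps  # average over micro-batches
    loss.backward()                              # gradients accumulate in .grad
    if (i + 1) % accum_steps == 0:
        optimizer.step()       # one update per accum_steps micro-batches
        optimizer.zero_grad()
```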

Can I train my own LLM?

"

Technically, you can train your own Large Language Model (LLM), but it can be very expensive. Training requires significant computational resources (powerful GPUs) and large amounts of data. Cloud services offer this infrastructure, but costs can reach millions of dollars depending on the model size and training time.

"

3. Optimize training configurations:

  • Hyperparameter tuning: Experiment with different learning rates, batch sizes, and other training hyperparameters to find the optimal configuration that balances training speed and accuracy.
  • Early stopping: Implement techniques to monitor training progress and stop training once the desired performance level is achieved. This prevents unnecessary resource consumption (a minimal sketch follows this list).
  • Checkpointing: Periodically save the model state during training so you can resume from a recent checkpoint after hardware failures or interruptions, saving time and resources. (Gradient, or activation, checkpointing is a separate technique that trades extra computation for lower memory use.)
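Here is the early-stopping sketch referenced above. The model and data are toy stand-ins; the pattern of tracking the best validation loss and stopping after a fixed patience carries over directly to real training loops:

```python
import torch
from torch import nn

# Toy stand-ins for a real training loop and validation set.
model = nn.Linear(64, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
x_val, y_val = torch.randn(128, 64), torch.randn(128, 1)

best_loss, patience, bad_epochs, max_epochs = float("inf"), 3, 0, 50

for epoch in range(max_epochs):
    # one (dummy) training step per "epoch"
    x, y = torch.randn(32, 64), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

    with torch.no_grad():
        val_loss = nn.functional.mse_loss(model(x_val), y_val).item()
    if val_loss < best_loss - 1e-4:                 # meaningful improvement?
        best_loss, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best.pt")  # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                  # stalled for `patience` epochs
            break                                   # stop and avoid wasted compute
```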

4. Consider using a mixture of experts model:

  • Specialized sub-networks: Mixture of experts (MoE) architectures divide the training workload among multiple specialized sub-networks, or "experts." Each expert focuses on a specific subset of the data, potentially leading to faster training and improved efficiency compared to a single dense model of equivalent capacity (a toy routing sketch follows this list).
  • Reduced computational load: By distributing the training across multiple experts, MoE can utilize hardware resources more effectively, reducing the overall computational demands and lowering costs.
  • Complexity: MoE is quickly becoming a prevalent way to scale model capacity while keeping per-token compute manageable, but implementing it requires careful configuration and expertise.
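Here is a toy routing sketch of the MoE idea: a gating network picks one expert per token, so only a fraction of the parameters run for any given input. Real MoE layers add load balancing and capacity limits that are omitted here:

```python
import torch
from torch import nn

class TinyMoE(nn.Module):
    """Illustrative top-1 mixture-of-experts layer (not production routing)."""

    def __init__(self, dim=64, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)  # router scores each expert per token
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))

    def forward(self, x):                      # x: (tokens, dim)
        probs = self.gate(x).softmax(dim=-1)   # routing probabilities
        top_p, top_i = probs.max(dim=-1)       # pick one expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e                  # tokens routed to expert e
            if mask.any():                     # only that expert runs for them
                out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
        return out

tokens = torch.randn(8, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([8, 64])
```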

5. Collaborate and utilize open-source tools:

  • Take advantage of open-source tools: Utilize open-source frameworks like TensorFlow or PyTorch that provide efficient LLM training functionalities.
  • Collaborate with research institutions: Partner with research institutions that might have access to subsidized compute resources for LLM training.

Data acquisition can also add to the cost of training LLMs. Let's look at the data requirements and their associated costs.

Data requirements and costs

Data is the lifeblood of LLMs. Data quality, volume, and diversity directly influence the model's effectiveness and accuracy. Collecting, cleaning, and managing this data incurs substantial costs. The data needs to be vast and varied enough to train a model that is not biased and can generalize across different contexts. Dataset creation also involves extensive human labor, such as labeling for supervised learning scenarios, which adds to the cost.

However, this data doesn't come for free, and managing it efficiently adds significantly to the overall cost. Here's a breakdown of the key financial aspects of data management for LLMs:

  • Data acquisition: There are two primary ways to acquire data for LLM training: purchasing existing datasets or licensing access to them. Renowned research institutions and private companies often curate and sell text and code datasets specifically designed for training AI models. These datasets can be very expensive, depending on their size, domain-specificity, and quality.
  • Data storage: Storing massive datasets requires significant storage capacity. Traditional on-premise storage solutions can be expensive to maintain and scale. Cloud storage services offer a more flexible and potentially cost-effective alternative, but the ongoing storage fees can accumulate over time, especially for datasets in the terabyte or petabyte range.


  • Data preprocessing: Raw data is rarely usable in its original form for LLM training. It often requires extensive cleaning, labeling, and formatting. This preprocessing can involve:
    • Cleaning: Removing irrelevant information like code comments, HTML tags, or duplicate entries can be a computationally expensive task, especially for large datasets (see the deduplication sketch after this list).
    • Labeling: Depending on the training objectives, data might need to be labeled with specific categories or information. This can be a labor-intensive process requiring human effort, or it can be automated with specialized tools, incurring software licensing costs.
    • Formatting: Ensuring data is in a consistent format suitable for LLM training can involve additional processing and potentially custom software development.
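Here is the deduplication sketch referenced in the cleaning step: a cheap exact-match first pass using content hashes. Production pipelines typically add fuzzy or near-duplicate detection (for example, MinHash) on top of this:

```python
import hashlib

def deduplicate(docs):
    """Drop exact duplicates by normalized content hash (a cheap first pass)."""
    seen, unique = set(), []
    for doc in docs:
        key = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["Hello world.", "  hello world.", "Different text."]
print(deduplicate(docs))  # ['Hello world.', 'Different text.']
```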

Moreover, handling such data responsibly to adhere to privacy laws and ethical standards introduces additional layers of complexity and expense. Data anonymization, secure storage, and compliance with regulations like GDPR in Europe or CCPA in California can increase the overhead costs for any AI project.

Optimizing these data management processes is crucial for cost control. Techniques like data selection (using only relevant subsets) and transfer learning (leveraging pre-trained models) can help reduce the reliance on massive, expensive datasets.

By implementing these strategies, researchers and developers can significantly reduce the cost of training LLMs. Carefully optimizing models, leveraging efficient hardware and cloud services, and adopting cost-saving training configurations are all crucial for managing the financial burden of LLM development.
