
In-Depth Guide to Cloud Large Language Models (LLMs) in 2024


Figure 1. The increasing popularity of the keyword “cloud LLM” on Google since the release of ChatGPT in November 2022.

Large Language Models (LLMs) have become a hot topic for businesses, especially since the release of ChatGPT in November 2022. The generative AI capabilities of these models have further elevated their appeal, creating a dilemma for companies between cost efficiency and reliability. In this article, we provide an extensive view of cloud LLMs, some case studies, and how they differ from local LLMs served in-house.

What is a Cloud Large Language Model (LLM)?

Large enterprises, despite extensive cloud initiatives and SaaS integration, usually have just 15%-20% of their applications in the cloud.1 This indicates that a significant portion of their IT infrastructure and applications rely on on-premises or legacy systems.

Large Language Models (LLMs) are trained on large text corpora and can be applied to natural language processing (NLP) and natural language generation (NLG) tasks across many business use cases. If you want to adopt an LLM for your business operations, you can choose either a cloud LLM or an on-premise, local LLM.

A cloud LLM is a Large Language Model hosted in a cloud environment. These models, such as GPT-4, are AI systems that have advanced language understanding capabilities and can generate human-like text. Cloud LLMs are accessible via the internet, making them easy to use in various applications, such as chatbots, content generation, and language translation.

Cloud service providers offer managed services for LLMs, such as:

  • Azure
  • AWS
  • GCP

These providers often use a pay-as-you-go pricing model based on usage, which can be more cost-effective for many applications. However, costs can escalate with increased usage.
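As a minimal sketch of how such a managed service is consumed, the snippet below calls a cloud-hosted model through the OpenAI Python SDK; the prompt is illustrative, and it assumes an OPENAI_API_KEY environment variable is set. Other providers expose similar chat APIs following the same pattern.

```python
# Minimal sketch: querying a cloud-hosted LLM over a pay-as-you-go API.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # a hosted model; billing is per input/output token
    messages=[{"role": "user", "content": "Summarize our Q3 support tickets."}],
)
print(response.choices[0].message.content)
print(response.usage)  # token counts that drive the usage-based bill
```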

Cloud LLMs are most suitable for:

  • Teams with low tech expertise: Cloud LLMs can be suitable for teams with limited technical expertise because they are often accessible through user-friendly interfaces and APIs, requiring less technical know-how to implement and utilize effectively.
  • Teams with limited tech budget: Creating or training an LLM is a costly endeavor. The expense for GPUs alone can reach millions, with OpenAI’s GPT-3 model needing at least $5 million worth of these GPUs for each training session.2 Cloud LLMs eliminate the need for significant upfront hardware and software investments. Users can pay for cloud LLM services on a subscription or usage basis, which may be more budget-friendly.

Pros of Cloud LLMs

  • No maintenance efforts: Users of cloud LLMs are relieved from the burden of maintaining and updating the underlying infrastructure, as cloud service providers handle these responsibilities and include the costs in the subscription price.
  • Connectivity: Cloud LLMs can be accessed from anywhere with an internet connection, enabling remote collaboration and use across geographically dispersed teams.
  • Lower financial costs: Users can benefit from cost-effective pay-as-you-go pricing models, reducing the initial capital expenditure associated with hardware and software procurement and paying only when they use the model (see the cost sketch after this list).
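To make the pay-as-you-go model concrete, here is a back-of-the-envelope cost sketch; the per-1,000-token prices are illustrative placeholders, not any provider's actual rates.

```python
# Rough monthly cost estimate for usage-based LLM pricing.
# Prices per 1,000 tokens below are illustrative placeholders.
def monthly_cost(requests_per_day, in_tokens, out_tokens,
                 price_in_per_1k=0.01, price_out_per_1k=0.03, days=30):
    per_request = ((in_tokens / 1000) * price_in_per_1k
                   + (out_tokens / 1000) * price_out_per_1k)
    return requests_per_day * days * per_request

# e.g. 2,000 requests/day, 500 input + 250 output tokens each
print(f"${monthly_cost(2000, 500, 250):,.2f} per month")  # -> $750.00 per month
```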

Cons of Cloud LLMs

  • Security risks: Sending sensitive data to a third-party LLM service raises cloud security concerns, such as data breaches or unauthorized access. This can be a burden for companies with strict privacy requirements, which may also be exposed to sophisticated social engineering attacks.

What are Local LLMs?

Local LLMs refer to Large Language Models that are installed and run on an organization’s own servers or infrastructure. These models offer more control and potentially enhanced security but require more technical expertise and maintenance efforts compared to their cloud computing counterparts.
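For contrast with the cloud example above, here is a minimal sketch of running a model on your own infrastructure with the open-source Hugging Face transformers library; the model choice and prompt are illustrative, and a local GPU is assumed for reasonable speed.

```python
# Minimal sketch: serving an open-weights model on your own hardware.
# Assumes transformers (and accelerate) are installed and the weights are downloaded.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open-weights model
    device_map="auto",  # place the weights on available local GPUs
)
result = generator("Draft a clause about data retention:", max_new_tokens=100)
print(result[0]["generated_text"])
```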

Suitable for:

  • Teams with strong technical expertise: Ideal for organizations with a dedicated AI department, such as major tech companies (e.g., Google, IBM) or research labs that have the resources and skills to maintain complex LLM infrastructures.
  • Industries with specialized terminology: Beneficial for sectors like law or medicine, where customized models trained on specific jargon are essential.
  • Those invested in cloud infrastructure: Companies that have already made significant investments in cloud technologies (e.g., Salesforce) can set up in-house LLMs more effectively.
  • Those who can run rigorous testing: Necessary for businesses needing extensive model testing for accuracy and reliability.

Pros of local LLMs

High-security operations: Local deployment allows organizations to maintain full control over their data and how it is processed, ensuring compliance with data privacy regulations and internal security policies.

Speed: While network latency can be a bottleneck for cloud LLMs, local LLMs can provide faster, more streamlined workflows.

For instance, Diffblue, an Oxford-originated company, compared OpenAI’s cloud LLMs with its product, Diffblue Cover, which uses local reinforcement learning.3 The tests involved automatically generating unit tests for Java code. The LLM-generated tests required manual review to meet specific criteria and were significantly slower, taking 20-40 seconds per test on cloud GPUs, compared to 1.5 seconds per test for Diffblue Cover’s local approach.

If you plan to build an LLM in-house, here is a guide to gathering LLM data.

Cons of local LLMs

Initial costs: Significant investment in GPUs and servers is needed; a mid-size tech company might spend a few hundred thousand dollars to establish a local LLM infrastructure.

Scalability & hardware needs: Scaling local resources to meet fluctuating demands, such as periodic fine-tuning runs, is difficult (see the sketch below).

For more on LLM fine tuning, check out our article.

Environmental concerns: Training one large language model can emit about 315 tons of carbon dioxide.4
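To illustrate the kind of fine-tuning workload mentioned above, here is a minimal parameter-efficient fine-tuning (LoRA) setup using the open-source Hugging Face peft library; the base model and hyperparameters are illustrative, and a full run would also need a dataset and training loop.

```python
# Minimal LoRA setup sketch with Hugging Face peft; values are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # small base model for the sketch
config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layer in GPT-2
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction of weights will train
```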

Comparison of on-premise vs cloud LLMs

Source: The Cube Research5

The image positions cloud LLMs as broad-scale, flexible solutions, primarily developed by large tech companies for general applications. On-premises LLMs, on the other hand, are tailored for specific enterprise use where control and security are paramount. This indicates a market delineation where cloud LLMs are geared towards volume and innovation, while on-premises LLMs are chosen for specialized, secure applications with clear economic intent.

Here is a comparison of local and cloud LLMs based on different factors:

Factor           In-house LLMs      Cloud LLMs
Tech expertise   Strongly needed    Less needed
Initial costs    High               Low
Overall costs    High               Medium to high*
Scalability      Low                High
Data control     High               Low
Customization    High               Low
Downtime risk    High               Low

*Overall costs can escalate depending on business needs.

If you are willing to invest in cloud GPUs, check out our vendor benchmarking.

Local LLMs on cloud hardware

Source: The Cube Research6

Figure 2. Percentage of companies implementing private and/or public infrastructure for their GenAI applications.

Another option is to build LLMs in-house and run these models on cloud hardware. Indeed, recent research shows that most companies use a mix of both models (see the figure above). This way, organizations can maintain control over their models and data while leveraging the computational power and scalability of cloud infrastructure.
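As a hedged sketch of this hybrid pattern, the snippet below serves an open-weights model you manage yourself on rented cloud GPUs using the open-source vLLM inference engine; the model name is illustrative and assumes you have access to its weights.

```python
# Sketch: self-managed open-weights model served on rented cloud GPUs via vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")  # weights you control, hardware you rent
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain our refund policy in plain language."], params)
print(outputs[0].outputs[0].text)
```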

How to choose between local vs cloud LLM?

Source: AIM Research7

Figure 3. In-house vs. API LLMs

When choosing between local and cloud LLMs, consider the following questions:

1- Do you have in-house expertise?

Running LLMs locally requires significant technical expertise in machine learning and managing complex IT infrastructure. This can be a challenge for organizations without a strong technical team. On the other hand, cloud-based LLMs offload much of the technical burden to the cloud provider, including maintenance and updates, making them a more convenient option for businesses lacking specialized IT employees.

2- What are your budget constraints?

Local LLM deployment involves significant upfront costs, mainly due to the need for powerful computing hardware, especially GPUs. This can be a major hurdle for smaller companies or startups. Cloud LLMs, conversely, typically have lower initial costs with pricing models based on usage, such as subscriptions or pay-as-you-go plans.

3- What are your data size & computational needs?

For businesses with consistent, high-volume computational needs and the infrastructure to support them, local LLMs can be a more reliable choice. However, cloud LLMs offer scalability that is beneficial for businesses with fluctuating demands. The cloud model allows for easy scaling of resources to handle increased workloads, which is particularly useful for companies whose computational needs may spike periodically (e.g., a cosmetics company during the Black Friday season).

4- What are your risk management capabilities?

While local LLMs offer more direct control over data security and may be preferred by organizations handling sensitive information (such as financial or healthcare data), they also require robust internal security protocols. Cloud LLMs, while potentially posing higher risks due to data transmission over the internet, are managed by providers who typically invest heavily in security measures.
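The four questions above can be codified into a toy scoring helper, shown below; the boolean inputs and the threshold are arbitrary illustrations, not a validated decision model.

```python
# Toy decision helper codifying the four questions above; weights are illustrative.
def recommend(in_house_expertise: bool, large_upfront_budget: bool,
              steady_high_volume: bool, strict_data_control: bool) -> str:
    local_score = sum([in_house_expertise, large_upfront_budget,
                       steady_high_volume, strict_data_control])
    return "local LLM" if local_score >= 3 else "cloud LLM"

print(recommend(in_house_expertise=True, large_upfront_budget=True,
                steady_high_volume=False, strict_data_control=True))  # -> local LLM
```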

3 cloud LLM case studies

Manz & deepset Cloud

Manz, an Austrian legal publisher, employed deepset Cloud to optimize legal research with semantic search.8 Its extensive legal database necessitated a more efficient way to find relevant documents, so Manz implemented a semantic recommendation system built on deepset Cloud’s expertise in NLP and German language models, significantly improving its research workflows.

Cognizant & Google Cloud

Cognizant and Google Cloud are collaborating to leverage generative AI, including Large Language Models (LLMs), in the cloud to tackle healthcare challenges.9 They aim to streamline administrative processes in healthcare, like appeals and patient engagement, using Google Cloud’s Vertex AI platform and Cognizant’s industry expertise. This partnership highlights the potential of cloud-based LLMs to optimize healthcare operations and drive business efficiency.

Allied Banking Corporation & Finastra

Allied Banking Corporation, based in Hong Kong, has transitioned its core banking operations to the cloud and upgraded to Finastra’s next-generation Essence solution.10 They’ve also implemented Finastra’s Retail Analytics for enhanced reporting. This move reflects a strategic shift toward modern, cost-effective technology, enabling future growth and efficiency gains.

If you need help deciding between on-premise or cloud LLMs for your business, feel free to contact us:

Find the Right Vendors


This article was originally written by former AIMultiple industry analyst Begüm Yılmaz and reviewed by Cem Dilmegani.
