Serverless functions enable developers to run code without having to manage a server. This allows them to focus on writing and deploying applications while infrastructure scaling and maintenance are handled automatically in the background.
In this benchmark, we evaluated 7 popular cloud service providers following our methodology to test their serverless function performance. We measured their fastest and slowest response times, the total execution time for 1000 requests, throughput, and the average time per successful request under load.
Serverless functions benchmark results
The first chart visualizes the performance of each provider not as a single number, but as a range of response times observed during our 1000-request benchmark. This performance spectrum is represented by a “Lower Band” and an “Upper Band”.
- Lower Band: This represents the fastest response times recorded for each provider. It indicates the best-case performance, showing how quickly a function can execute under optimal conditions (e.g., a “warm” start with cached resources). In this view, a lower value (further to the left) is better.
- Upper Band: This represents the slowest response times observed for each provider. It highlights the worst-case performance, which can be influenced by factors like “cold starts,” network latency, or temporary resource contention. This value is critical for understanding potential latency spikes that could affect user experience.
- Requests/sec: The number of requests per second, i.e., the average throughput. This measures the platform’s processing capacity; higher is better because more requests can be handled per second.
- Total Time: The time taken to complete all 1000 requests. Lower is better, indicating the platform works through the whole workload quickly.
- Average Time per Successful Request: The average time per request for successfully processed requests, excluding any errors or failed requests. Lower is better, indicating faster processing of each request.
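To make these definitions concrete, here is a minimal sketch (not the benchmark's actual code) of how the reported metrics can be derived from a list of per-request response times; the function name and argument names are illustrative:

```python
def summarize(latencies, errors, wall_clock_seconds):
    """Derive the benchmark metrics from raw measurements.

    latencies: response times of successful requests, in seconds
    errors: number of failed requests
    wall_clock_seconds: total time for the whole run
    """
    successful = len(latencies)
    return {
        "lower_band_ms": min(latencies) * 1000,           # fastest response observed
        "upper_band_ms": max(latencies) * 1000,           # slowest response observed
        "total_time_s": wall_clock_seconds,               # total time for all requests
        "requests_per_sec": (successful + errors) / wall_clock_seconds,
        "avg_time_per_success_ms": sum(latencies) / successful * 1000,
    }
```

Note that the average is computed only over successful requests, matching the "Average Time per Successful Request" definition above.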
Serverless function providers
There is a variety of serverless function providers, each with distinct features, ecosystem integrations, and strengths tailored to specific use cases:
Microsoft Azure Functions
Microsoft Azure Functions is a serverless computing service that enables developers to build and deploy event-driven applications without managing infrastructure.1 It provides integration with other Azure services, such as Azure Blob Storage for file handling, Cosmos DB for database operations, and Event Grid for event routing.
Azure Functions features automatic scaling to manage varying request volumes and integrates with Azure Monitor and Azure Security Center for performance tracking and security management.
AWS Lambda
AWS Lambda is a serverless computing service offered by Amazon Web Services (AWS) that integrates with other AWS services, such as Amazon S3 for storage, DynamoDB for database operations, and API Gateway for HTTP endpoints, enabling the development of event-driven architectures.2
AWS Step Functions can coordinate multiple Lambda functions, supporting the creation of complex workflows for tasks like data processing or application orchestration.
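For reference, a Python function on AWS Lambda follows the standard handler signature, receiving the triggering event as a dictionary and a runtime context object. The sketch below assumes an API Gateway-style HTTP event; the query parameter `name` is a hypothetical example:

```python
import json

def lambda_handler(event, context):
    # AWS invokes this entry point with the triggering event (a dict) and a
    # runtime context object; the return value becomes the HTTP response.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same event-driven shape applies when the trigger is an S3 upload or a DynamoDB change; only the structure of `event` differs.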
Google Cloud Functions
Google Cloud Functions is a serverless execution environment that allows developers to run code triggered by events from sources such as HTTP requests, Cloud Storage updates, or Pub/Sub messages. The platform scales automatically to handle fluctuating workloads, provisioning resources as needed without manual intervention.3
Google Cloud Functions also integrates with Google Cloud’s data and analytics services, such as BigQuery for large-scale data analysis and Cloud Dataflow for stream processing, supporting applications focused on data handling and real-time insights. Its event-driven design ensures the efficient execution of tasks tied to specific triggers within the Google Cloud ecosystem.
Vercel Functions
Vercel is a cloud platform aimed at front-end developers, providing deployment and scaling tools for modern web applications. It is known for developing Next.js and offers integration with this widely used React framework.
Vercel Functions enables developers to execute backend code without managing servers and supports several languages, including JavaScript (Node.js), TypeScript, Python, Go, and Ruby. Features like automatic deployments, preview URLs, and a global edge network improve performance and developer productivity.4
Cloudflare Workers
With Cloudflare Workers, developers can run their code in data centers worldwide, achieving low latency.5 The platform supports technologies such as JavaScript and WebAssembly, allowing developers to deploy their applications quickly. Cloudflare Workers is also optimized for AI and blockchain applications.
Cloudflare Workers focuses on edge computing: high performance at low latency. Developers can weigh these strengths against their own needs and project requirements when comparing it with the other platforms.
Huawei Cloud FunctionGraph
Huawei Cloud FunctionGraph is a service that enables developers to execute code in response to events without managing server infrastructure.6 The service integrates with event sources within the Huawei Cloud ecosystem, including Object Storage Service (OBS) for file-related triggers and API Gateway for HTTP-based invocations, allowing the creation of event-driven applications.
Huawei Cloud FunctionGraph provides automatic scaling to adapt to workload changes and operates on a pay-per-use billing model, charging only for the resources consumed during execution. It also includes monitoring and logging capabilities through Huawei Cloud’s observability tools, assisting developers in tracking performance and diagnosing application issues.
Heroku
Heroku is a Platform as a Service (PaaS) that allows rapid application deployment and management. It uses virtual containers called “dynos” to facilitate application management and scaling.7 Additionally, it offers temporary “one-off dynos” for executing specific operations in a serverless fashion.
Supported language count
What are serverless functions?
Serverless functions, also known as Function as a Service (FaaS), are a cloud computing model that allows developers to execute code without the need to manage underlying servers or infrastructure. In this approach, developers write small, event-driven pieces of code (functions) that are triggered by specific events, such as an HTTP request, a database update, or a message in a queue.
The cloud provider automatically handles the server provisioning, scaling, and management, freeing developers to focus solely on writing and deploying their code.
In serverless architectures, resources are dynamically scaled according to real-time demand. During periods of inactivity, the infrastructure automatically scales down to zero, eliminating resource consumption and associated costs.
On the other hand, when demand surges, the system rapidly scales up to handle increased workloads. This dynamic scalability ensures optimized cost-effectiveness, as billing is based on the actual computing resources used.
How do serverless functions work?
1. Event Trigger:
Serverless functions are event-driven, triggered by HTTP requests, file uploads, database changes, or other events. The event defines when the function should be executed.
2. Execution:
When an event occurs, the cloud provider provisions a lightweight environment to run the function, often called a “container” or “execution environment.” The code executes within this environment, which is temporary and exists only for the duration of the function’s invocation.
3. Scaling:
Serverless platforms are designed to scale automatically based on demand. If multiple events happen simultaneously, the platform will spin up more instances of the function to handle them, often called horizontal scaling.8 The cloud provider handles this automatically, so you don’t need to manage the infrastructure yourself.
4. Shutdown:
Once the function has finished executing, the environment (container) is shut down. The serverless function doesn’t run or consume resources after completing its task.
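The four steps above can be sketched as a toy simulation. This is purely illustrative: real platforms manage environments and scaling transparently, and the `handle`/`platform` names are invented for this example:

```python
from concurrent.futures import ThreadPoolExecutor

def handle(event):
    # 2. Execution: runs inside a short-lived environment created per invocation.
    env = {"warm": False}            # fresh ("cold") environment for this event
    try:
        return {"event": event, "result": event["n"] * 2}
    finally:
        env.clear()                  # 4. Shutdown: the environment is discarded

def platform(events):
    # 1. Event trigger + 3. Scaling: each incoming event gets its own worker,
    # mimicking horizontal scale-out when events arrive simultaneously.
    with ThreadPoolExecutor(max_workers=len(events)) as pool:
        return list(pool.map(handle, events))
```

The key property the sketch captures is that nothing runs, and nothing is provisioned, until an event arrives, and nothing persists after the function returns.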
Benefits of serverless functions
No server management
With serverless functions, developers don’t need to worry about provisioning, managing, or maintaining the underlying infrastructure. The cloud provider handles server management, such as patching, scaling, and monitoring, allowing developers to focus on writing and deploying the business logic.
This abstracts away the complexity of managing servers, operating systems, or hardware, resulting in fewer operational headaches for development teams.
For example, with AWS Lambda, developers can deploy their functions without managing virtual machines, load balancers, or networking components. The platform automatically provisions the resources needed to execute the function in response to an event, ensuring execution without manual intervention.
Cost efficiency
Serverless functions are typically billed based on the actual usage of resources, not pre-allocated computing power or idle time. This pay-as-you-go model allows businesses to only pay for the time their code runs, often measured at a very granular level. This contrasts with traditional cloud computing models, where you may pay for reserved computing power even when it’s unused.
For instance, you don’t pay for unused capacity if your function is idle or receiving low traffic. On the other hand, when demand spikes, the platform dynamically adjusts resources to meet the load without extra cost beyond the actual usage. This makes serverless computing a highly cost-effective option, particularly for workloads with variable traffic patterns.
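The pay-as-you-go arithmetic can be sketched as follows. The billing dimensions (per-request fee plus per GB-second of compute) mirror how major FaaS providers charge, but the default rates below are illustrative placeholders, not any provider's actual price list:

```python
def serverless_cost(invocations, avg_duration_s, memory_gb,
                    price_per_million=0.20, price_per_gb_second=0.0000166667):
    # Pay-per-use billing: a fee per request plus a fee per GB-second of
    # compute actually consumed. The rates are illustrative placeholders.
    request_cost = invocations / 1_000_000 * price_per_million
    compute_cost = invocations * avg_duration_s * memory_gb * price_per_gb_second
    return request_cost + compute_cost
```

Note that with zero invocations the cost is zero, which is exactly the "no charge for idle capacity" property described above.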
Automatic scaling
One of the most powerful features of serverless functions is their ability to scale automatically in response to demand. When many events trigger functions simultaneously, the platform automatically provisions additional resources (such as new instances of the function) to handle the increased load. Once the demand subsides, the system scales down resources, ensuring that only the necessary infrastructure is being used.
For example, during high-traffic events like product launches or flash sales, a serverless platform such as AWS Lambda or Azure Functions will spin up additional resources to handle the increased request volume. After the event ends, the platform will scale back down to save resources and reduce costs.
Rapid deployment
Serverless functions can be deployed much faster than traditional applications, especially when integrating with other services. This is because you only need to write small, discrete units of code (functions) that are triggered by specific events. Deployment often involves simply uploading the function code to the platform, and the system takes care of everything from provisioning resources to managing runtime environments.
The rapid deployment feature is crucial for speeding up development cycles. Developers can experiment and iterate more quickly, as they don’t need to spend time setting up infrastructure or managing complex deployment pipelines.
This can significantly reduce the time it takes to release new features or fix bugs, fostering a more agile development process. For example, you can quickly deploy a function that reacts to a file upload in a storage service or an API request without the overhead of managing the infrastructure yourself.
Methodology of serverless functions benchmark
In this benchmark, we developed a function that checks whether a site visitor’s browser is up to date based on the current operating system and user agent. The goal was to assess each platform’s performance in handling this type of request, which involves checking multiple user agents for browser updates.
Testing Procedure:
- Code Implementation: A Python function was created to inspect a visitor’s User Agent string. The function checks the current state of the operating system and compares it with the browser’s version to determine whether the browser is up to date. The code uses a simple comparison between the current version of the browser and the version supported by the operating system.
- Parallel Requests: The function was executed 1000 times in parallel, simulating real-world traffic, using 10 parallel threads to generate load. This setup tests the platforms’ ability to handle multiple simultaneous requests efficiently.
- Performance Metrics Collected: Several key performance metrics were recorded during the test to evaluate each platform’s efficiency and scalability.
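A sketch of the load-generation step is shown below. A local stub stands in for the deployed endpoint (the real benchmark issued HTTP requests to each provider's deployed function), and the user-agent string and thresholds are illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_endpoint(user_agent):
    # Stub standing in for the deployed serverless function; the real
    # benchmark sent an HTTP request carrying the User Agent to each provider.
    time.sleep(0.001)                       # simulate a little network/compute latency
    return "Chrome/120" in user_agent       # stand-in for the up-to-date check

def run_benchmark(requests=1000, threads=10):
    # 1000 requests issued from 10 parallel threads, as in the methodology.
    ua = "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = list(pool.map(call_endpoint, [ua] * requests))
    total = time.perf_counter() - start
    return {
        "total_time_s": total,
        "requests_per_sec": requests / total,
        "successes": sum(results),
    }
```

Swapping the stub for a real HTTP call (and recording per-request latencies) yields the lower/upper bands, total time, throughput, and average time per successful request reported in the results above.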
Further reading
Discover recent developments on serverless platforms by checking out:
Best 10+ Serverless GPU Providers 2025: AWS, Azure & More