Ekrem Sarı

AI Researcher
23 Articles

Ekrem is an AI Researcher at AIMultiple, focusing on intelligent automation, GPUs, AI Agents, and LLMOps for RAG frameworks.

Professional Experience

During his tenure as an Assessor at Yandex, he evaluated search results using proprietary frameworks and automated protocols. He implemented QA testing through data annotation, relevance scoring, and user intent mapping across 10,000+ queries monthly, while conducting technical assessments, including performance monitoring and spam detection using ML feedback loops.

Research Interests

At AIMultiple, his research is centered on the MLOps lifecycle and the performance and benchmarking of end-to-end AI systems. He contributes to a wide range of projects, including Retrieval-Augmented Generation (RAG) optimization, extensive Large Language Model (LLM) benchmarking, and the design of agentic AI frameworks. Ekrem specializes in developing data-driven methodologies to measure and improve AI technology performance across critical operational metrics like accuracy, efficiency, API cost, and scalability.

His analysis covers the entire technology stack, from foundational components like embedding models and vector databases to the high-performance GPU and cloud infrastructure required for deploying AI agents.

Education

Ekrem holds a bachelor's degree from Hacettepe Üniversitesi and a master's degree from Başkent Üniversitesi.

Latest Articles from Ekrem

AI · Jan 30

Benchmark of 16 Best Open Source Embedding Models for RAG

Most embedding benchmarks measure semantic similarity. We measured correctness. We tested 16 open-source models, ranging from 23M to 8B parameters, on 490,000 Amazon product reviews, scoring each by whether it retrieved the right product review through exact ASIN matching, not just topically similar documents.
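The exact-match scoring described above can be sketched roughly as follows. This is a minimal illustration, not the benchmark's actual code; the `retrieve` function and the `"asin"` document field are assumed names.

```python
def retrieval_accuracy(queries, retrieve, k=10):
    """Score a retriever by exact ID match, not topical similarity.

    queries:  list of (query_text, gold_asin) pairs.
    retrieve: function mapping query text to a ranked list of
              documents, each a dict with an "asin" field.
    Returns the fraction of queries whose gold ASIN appears in
    the top-k retrieved documents.
    """
    hits = 0
    for text, gold_asin in queries:
        top_k = retrieve(text)[:k]
        if any(doc["asin"] == gold_asin for doc in top_k):
            hits += 1
    return hits / len(queries)
```

The point of the metric is that a topically similar but wrong review scores zero, which is exactly the failure mode semantic-similarity benchmarks miss.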

Data · Jan 30

Remote Browsers: Web Infra for AI Agents Compared

AI agents rely on remote browsers to automate web tasks without being blocked by anti-scraping measures. The performance of this browser infrastructure is critical to an agent’s success. We benchmarked 8 providers on success rate, speed, and features.

AI · Jan 29

LLM Quantization: BF16 vs FP8 vs INT4

Quantization reduces LLM inference cost by running models at lower numerical precision. We benchmarked 4 precision formats of Qwen3-32B on a single H100 GPU, running over 2,000 inference passes and 12,000+ MMLU-Pro questions to measure the real-world trade-offs among speed, memory, and accuracy.
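The memory side of that trade-off follows directly from bits per weight. A back-of-the-envelope sketch (weights only; it ignores KV cache, activations, and quantization scale/zero-point overhead):

```python
def weight_memory_gb(n_params, bits_per_weight):
    """Approximate memory footprint of model weights alone."""
    return n_params * bits_per_weight / 8 / 1e9

# Rough weight footprint for a 32B-parameter model at each precision:
for name, bits in [("BF16", 16), ("FP8", 8), ("INT4", 4)]:
    print(f"{name}: ~{weight_memory_gb(32e9, bits):.0f} GB")
# BF16: ~64 GB, FP8: ~32 GB, INT4: ~16 GB
```

This is why INT4 lets a 32B model fit comfortably on a single 80 GB H100 with room left for the KV cache, while BF16 leaves far less headroom.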

AI · Jan 29

RAG Frameworks: LangChain vs LangGraph vs LlamaIndex

We benchmarked 5 RAG frameworks (LangChain, LangGraph, LlamaIndex, Haystack, and DSPy) by building the same agentic RAG workflow with standardized components: identical models (GPT-4.1-mini), embeddings (BGE-small), retriever (Qdrant), and tools (Tavily web search). This isolates each framework's true overhead and token efficiency.

AI · Jan 28

Multimodal Embedding Models: Apple vs Meta vs OpenAI

Multimodal embedding models excel at identifying objects but struggle with relationships: current models fail to distinguish “phone on a map” from “map on a phone.” We benchmarked 7 leading models across MS-COCO and Winoground to measure this specific limitation. To ensure a fair comparison, we evaluated every model under identical conditions using NVIDIA A40 hardware and bfloat16 precision.
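A Winoground-style pair score makes the “phone on a map” failure concrete. The sketch below is illustrative (the indexing convention is an assumption, not the benchmark's code): given similarities between two images and two captions that differ only in word order, the model scores only if each image is closer to its own caption.

```python
def pair_score(sim):
    """Winoground-style score for one image/caption pair.

    sim[i][j] = similarity of image i to caption j, where image 0
    matches caption 0 and image 1 matches caption 1. Returns 1 only
    if the model ranks the correct caption first for both images.
    """
    return int(sim[0][0] > sim[0][1] and sim[1][1] > sim[1][0])
```

A model that only detects “phone” and “map” as objects produces near-identical similarities for both captions and scores 0, which is precisely the relational weakness the benchmark measures.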

AI · Jan 28

Supervised Fine-Tuning vs Reinforcement Learning

Can large language models internalize decision rules that are never stated explicitly? To examine this, we designed an experiment in which a 14B parameter model was trained on a hidden “VIP override” rule within a credit decisioning task, without any prompt-level description of the rule itself.

AI · Jan 28

RAG Evaluation Tools: Weights & Biases vs Ragas vs DeepEval vs TruLens

Failures in Retrieval-Augmented Generation systems occur not only because of hallucinations but, more critically, because of retrieval poisoning. In such cases, the retriever returns documents that share substantial lexical overlap with the query but do not contain the necessary information.

AI · Jan 27

Multi-GPU Benchmark: B200 vs H200 vs H100 vs MI300X

For over two decades, optimizing compute performance has been a cornerstone of my work. We benchmarked NVIDIA’s B200, H200, H100, and AMD’s MI300X to assess how well they scale for Large Language Model (LLM) inference. Using the vLLM framework with the meta-llama/Llama-3.1-8B-Instruct model, we ran tests on 1, 2, 4, and 8 GPUs.

Enterprise Software · Jan 21

Top Serverless Functions: Vercel vs Azure vs AWS

Serverless functions let developers run code without managing servers, so they can focus on writing and deploying applications while infrastructure scaling and maintenance are handled automatically in the background. In this benchmark, we evaluated 7 popular cloud service providers, following our methodology to test their serverless function performance.

AI · Jan 20

GPU Concurrency Benchmark: H100 vs H200 vs B200 vs MI300X

I have spent the last 20 years focusing on system-level computational performance optimization. We benchmarked the latest data-center GPUs, NVIDIA's H100, H200, and B200 and AMD's MI300X, for concurrency scaling analysis. Using the vLLM framework with the gpt-oss-20b model, we tested how these GPUs handle concurrent requests, from 1 to 512.