Cem Dilmegani

Principal Analyst
684 Articles
Cem has been the principal analyst at AIMultiple for almost a decade.

Cem's work at AIMultiple has been cited by leading global publications including Business Insider, Forbes, Morning Brew, and The Washington Post; global firms like HPE; NGOs like the World Economic Forum; and supranational organizations like the European Commission. [1], [2], [3], [4], [5]

Professional experience & achievements

Throughout his career, Cem has served as a tech consultant, tech buyer, and tech entrepreneur. For more than a decade, he advised enterprises on their technology decisions at McKinsey & Company and Altman Solon, and he published a McKinsey report on digitalization.

He led technology strategy and procurement at a telco, reporting to the CEO. He also led commercial growth at the deep tech company Hypatos, which grew from zero to seven-figure annual recurring revenue and a nine-figure valuation within two years. Cem's work at Hypatos was covered by leading technology publications including TechCrunch and Business Insider. [6], [7]

Research interests

Cem's work focuses on how enterprises can leverage new technologies in AI, agentic AI, cybersecurity (including network and application security), and data, including web data.

Cem's hands-on enterprise software experience informs his work. Other AIMultiple industry analysts and the tech team support him in designing, running, and evaluating benchmarks.

Education

He graduated with a degree in computer engineering from Bogazici University in 2007. During his studies, he worked on machine learning at a time when it was commonly called "data mining" and most neural networks had only a few hidden layers.

He earned an MBA from Columbia Business School in 2012.

Cem is fluent in English and Turkish, advanced in German, and a beginner in French.

Sources

  1. Why Microsoft, IBM, and Google Are Ramping up Efforts on AI Ethics, Business Insider.
  2. Microsoft invests $1 billion in OpenAI to pursue artificial intelligence that’s smarter than we are, Washington Post.
  3. Empowering AI Leadership: AI C-Suite Toolkit, World Economic Forum.
  4. Science, Research and Innovation Performance of the EU, European Commission.
  5. EU’s €200 billion AI investment pushes cash into data centers, but chip market remains a challenge, IT Brew.
  6. Hypatos gets $11.8M for a deep learning approach to document processing, TechCrunch.
  7. We got an exclusive look at the pitch deck AI startup Hypatos used to raise $11 million, Business Insider.

Latest Articles from Cem

AI · Jan 27

RAG Frameworks in 2026: LangChain, LangGraph vs LlamaIndex

We benchmarked 5 RAG frameworks: LangChain, LangGraph, LlamaIndex, Haystack, and DSPy. We built the same agentic RAG workflow in each with standardized components: identical models (GPT-4.1-mini), embeddings (BGE-small), retriever (Qdrant), and tools (Tavily web search). This isolates each framework's true overhead and token efficiency; a minimal sketch of the standardized setup appears below.
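
For orientation, here is what pinning such standardized components can look like in one of the five frameworks (LangChain). The corpus, prompt, and collection name are placeholders assumed for illustration, and the Tavily tool and agent loop are omitted; this is a sketch, not the benchmark's actual harness.

    # Hypothetical sketch: pinning the standardized components
    # (GPT-4.1-mini, BGE-small embeddings, Qdrant retriever) in LangChain.
    # Corpus and prompt are placeholders, not the benchmark workload.
    from langchain_openai import ChatOpenAI
    from langchain_huggingface import HuggingFaceEmbeddings
    from langchain_qdrant import QdrantVectorStore
    from langchain_core.documents import Document

    llm = ChatOpenAI(model="gpt-4.1-mini")
    embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")

    docs = [Document(page_content="Example passage about RAG frameworks.")]
    store = QdrantVectorStore.from_documents(
        docs,
        embedding=embeddings,
        location=":memory:",          # in-memory Qdrant for the sketch
        collection_name="rag_bench",  # hypothetical collection name
    )
    retriever = store.as_retriever(search_kwargs={"k": 4})

    question = "What components does this workflow standardize?"
    context = "\n".join(d.page_content for d in retriever.invoke(question))
    print(llm.invoke(f"Context:\n{context}\n\nQuestion: {question}").content)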

AI · Jan 27

Best 10 Serverless GPU Clouds & 14 Cost-Effective GPUs

Serverless GPUs can provide easy-to-scale compute for AI workloads, but their costs can be substantial for large-scale projects. Serverless GPU providers offer different performance levels and pricing for AI workloads.

AI · Jan 27

LLM Inference Engines: vLLM vs LMDeploy vs SGLang ['26]

We benchmarked 3 leading LLM inference engines on NVIDIA H100: vLLM, LMDeploy, and SGLang. Each engine processed an identical workload of 1,000 ShareGPT prompts on Llama 3.1 8B-Instruct, isolating the true performance impact of each engine's architectural choices and optimization strategies. A minimal inference sketch for one engine follows.
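
As a rough illustration (not the article's harness), the sketch below runs an offline-inference batch in one of the three engines, vLLM, with the benchmark's model. The prompt list stands in for the ShareGPT workload, and the sampling settings are assumptions.

    # Hypothetical sketch: offline batch inference in vLLM with the
    # benchmark's model. Prompts stand in for the ShareGPT workload.
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
    params = SamplingParams(temperature=0.7, max_tokens=256)  # assumed settings

    prompts = ["Explain the difference between throughput and latency."]
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)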

AI · Jan 27

Compare Large Vision Models: GPT-4o vs YOLOv8n [2026]

Large vision models (LVMs) can automate and improve visual tasks such as defect detection, medical diagnosis, and environmental monitoring. We benchmarked three object detection approaches (YOLOv8n, DETR, and GPT-4o Vision) across 1,000 images each, measuring mAP@0.5, inference speed, FLOPs, and parameter count; a minimal evaluation sketch appears below.
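
For context on how such metrics can be produced, the sketch below evaluates the pretrained YOLOv8n checkpoint with the Ultralytics API. The dataset config is a small stock placeholder, not the article's 1,000-image test set.

    # Hypothetical sketch: computing mAP@0.5 for YOLOv8n with Ultralytics.
    # coco128.yaml is a small stock dataset, not the article's test set.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                 # pretrained YOLOv8n checkpoint
    metrics = model.val(data="coco128.yaml")   # runs detection validation
    print("mAP@0.5:", metrics.box.map50)
    print("parameters:", sum(p.numel() for p in model.model.parameters()))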

AI · Jan 27

Large Multimodal Models (LMMs) vs LLMs in 2026

We evaluated the performance of large multimodal models (LMMs) on financial reasoning tasks using a carefully selected dataset. By analyzing a subset of high-quality financial samples, we assessed the models' capabilities in processing and reasoning over multimodal data in the financial domain. The methodology section details the dataset and evaluation framework.

Agentic AI · Jan 27

Centralizing AI Tool Access with the MCP Gateway in 2026

In this article, I'll walk through the evolution of AI tool integration, explain what the Model Context Protocol (MCP) is, and show why MCP alone isn't production-ready. Then we'll explore real-world gateway implementations that sit between AI agents and external tools.

AI · Jan 27

10+ Large Language Model Examples & Benchmark in 2026

We used open-source benchmarks to compare top proprietary and open-source large language models; you can choose your use case to find the right model. We developed a model scoring system based on three key metrics: user preference, coding, and reliability.

Data · Jan 26

Top 13 Training Data Platforms in 2026

Data quality is essential to the quality of machine learning models: supervised AI/ML models require high-quality data to make accurate predictions. Training data platforms streamline data preparation from collection to annotation, ensuring high-quality inputs for AI systems.

Data · Jan 26

Top 3 Prolific Alternatives in 2026

Prolific is a popular AI data collection service that offers a crowdsourcing platform for teams sourcing AI data. Our research identified drawbacks of working with Prolific from the perspective of both its customers and its workers.

Agentic AI · Jan 26

Benchmarking Agentic AI Frameworks in Analytics Workflows

Frameworks for building agentic workflows differ substantially in how they handle decisions and errors, yet their performance on imperfect real-world data remains largely untested.