Nazlı Şipi

AI Researcher
8 Articles
Nazlı is a data analyst at AIMultiple. She has prior experience in data analysis across various industries, transforming complex datasets into actionable insights.

She is also part of the benchmark team, focusing on large language models (LLMs), AI agents, and agentic frameworks.

Nazlı holds a Master’s degree in Business Analytics from the University of Denver.

Latest Articles from Nazlı

AI · Feb 2

LLM Observability Tools: Weights & Biases, LangSmith ['26]

LLM-based applications are becoming more capable and increasingly complex, making their behavior harder to interpret. Each model output results from prompts, tool interactions, retrieval steps, and probabilistic reasoning that cannot be directly inspected. LLM observability addresses this challenge by providing continuous visibility into how models operate in real-world conditions.

AI · Jan 28

AI Hallucination Detection Tools: W&B Weave & Comet ['26]

We benchmarked three hallucination detection tools across 100 test cases: Weights & Biases (W&B) Weave HallucinationFree Scorer, Arize Phoenix HallucinationEvaluator, and Comet Opik Hallucination Metric. Each tool was evaluated on accuracy, precision, recall, and latency to provide a fair comparison of their real-world performance.
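As a rough illustration of how such detectors can be scored (a minimal sketch under assumed labels, not the benchmark's actual code), the metrics above reduce to simple counts over labeled test cases:

from dataclasses import dataclass

@dataclass
class Case:
    predicted_hallucination: bool  # detector's verdict for one test case
    actual_hallucination: bool     # ground-truth label
    latency_s: float               # wall-clock time to produce the verdict

def summarize(cases: list[Case]) -> dict:
    # Count the confusion-matrix cells, then derive the reported metrics.
    tp = sum(c.predicted_hallucination and c.actual_hallucination for c in cases)
    fp = sum(c.predicted_hallucination and not c.actual_hallucination for c in cases)
    fn = sum(not c.predicted_hallucination and c.actual_hallucination for c in cases)
    tn = sum(not c.predicted_hallucination and not c.actual_hallucination for c in cases)
    return {
        "accuracy": (tp + tn) / len(cases),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "avg_latency_s": sum(c.latency_s for c in cases) / len(cases),
    }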

Agentic AI · Jan 26

Benchmarking Agentic AI Frameworks in Analytics Workflows

Frameworks for building agentic workflows differ substantially in how they handle decisions and errors, yet their performance on imperfect real-world data remains largely untested.

AI · Jan 23

Top 9 AI Providers Compared in 2026

The AI infrastructure ecosystem is growing rapidly, with providers offering diverse approaches to building, hosting, and accelerating models. While they all aim to power AI applications, each focuses on a different layer of the stack.

Agentic AI · Jan 23

Top 5 Open-Source Agentic AI Frameworks in 2026

We reviewed several popular open-source agentic AI frameworks, examining their performance, multi-agent orchestration capabilities, agent and function definitions, memory management, and human-in-the-loop features. To evaluate their performance, we implemented four data analysis tasks on each framework: logistic regression, clustering, random forest classification, and descriptive statistical analysis.
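For context on what those four tasks involve outside any agentic framework, here is a minimal plain scikit-learn baseline (an illustrative sketch using the bundled iris dataset, not the benchmark's own data or code):

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Load a small labeled dataset as a pandas DataFrame / Series.
X, y = load_iris(return_X_y=True, as_frame=True)

print(X.describe())                                                      # descriptive statistics
print(cross_val_score(LogisticRegression(max_iter=1000), X, y).mean())  # logistic regression
print(cross_val_score(RandomForestClassifier(), X, y).mean())           # random forest classification
print(KMeans(n_clusters=3, n_init=10).fit(X).inertia_)                  # clustering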

AI · Jan 22

LLM Latency Benchmark by Use Cases in 2026

The effectiveness of large language models (LLMs) is determined not only by their accuracy and capabilities but also by the speed at which they engage with users. We benchmarked the performance of leading language models across various use cases, measuring their response times to user input.
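A minimal sketch of how per-request latency can be timed (call_model is a hypothetical stand-in for whichever provider SDK is under test, not an API from the article):

import statistics
import time

def call_model(prompt: str) -> str:
    ...  # hypothetical: send `prompt` to the LLM under test and return its reply

def measure_latency(prompt: str, runs: int = 5) -> dict:
    # Time each round trip with a monotonic clock and aggregate across runs.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        call_model(prompt)
        timings.append(time.perf_counter() - start)
    return {"mean_s": statistics.mean(timings), "max_s": max(timings)}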

AI · Jan 14

Compare Multimodal AI Models on Visual Reasoning [2026]

We benchmarked 9 leading multimodal AI models on visual reasoning using 200 visual-based questions. The evaluation consisted of two tracks: 100 Chart Understanding questions testing data visualization interpretation, and 100 Visual Logic questions assessing pattern recognition and spatial reasoning. Each question was run 5 times to ensure consistent and reliable results.

Agentic AI · Dec 25

Vision Language Models Compared to Image Recognition

Can advanced Vision Language Models (VLMs) replace traditional image recognition models? To find out, we benchmarked 16 leading models across three paradigms: traditional CNNs (ResNet, EfficientNet), VLMs (such as GPT-4.1, Gemini 2.5), and Cloud APIs (AWS, Google, Azure).