Sıla Ermut
Sıla is an industry analyst at AIMultiple focused on email marketing and sales videos.
Research interests
Sıla's research areas include email marketing, eCommerce marketing campaigns, and marketing automation. She also contributes to AIMultiple's email deliverability benchmark, designing and running deliverability tests in collaboration with the AIMultiple technology team.
Professional experience
Sıla previously worked as a recruiter and in project management and consulting firms.
Education
She holds:
- Bachelor of Arts degree in International Relations from Bilkent University.
- Master of Science degree in Social Psychology from Başkent University.
Her Master's thesis focused on ethical and psychological concerns about AI, examining the relationship between AI exposure, attitudes towards AI, and existential anxieties across different levels of AI usage.
Latest Articles from Sıla
LLM Quantization: BF16 vs FP8 vs INT4 in 2026
LLM quantization involves converting large language models from high-precision numerical representations to lower-precision formats to reduce model size, memory usage, and computational costs while maintaining acceptable inference performance. We benchmarked 4 precision formats of Qwen3-32B on a single H100 GPU.
AGI/Singularity: 8,590 Predictions Analyzed in 2026
Artificial general intelligence (AGI/singularity) occurs when an AI system matches or exceeds human-level cognitive abilities across a broad range of tasks, rather than excelling in a single domain. While many researchers and experts anticipate the near-term arrival of AGI, opinions differ on its speed and development pathway.
AI Presentation Maker: Gamma vs Google Slides in 2026
We evaluated the top 5 AI presentation makers across 9 dimensions with 4 different prompts to assess their context and prompt understanding, visual AI integration, and voice and brand style adaptation capabilities. See the methodology and evaluation criteria to understand how we determined these results.
Generative AI Ethics: How to Manage Them in 2026
Generative AI raises important concerns about how knowledge is shared and trusted. Britannica, for instance, filed a lawsuit against Perplexity, alleging that the company illegally and knowingly copied Britannica’s human-verified content and misused its trademarks without permission. Explore what generative AI ethics concerns are and best practices for managing them.
Compare Relational Foundation Models in 2026
We benchmarked SAP-RPT-1-OSS against gradient boosting (LightGBM, CatBoost) on 17 tabular datasets spanning the full semantic-numeral spectrum, small/high-semantic tables, mixed business datasets, and large low-semantic numerical datasets.
Top 125 Generative AI Applications in 2026
Based on our analysis of 30+ case studies and 10 benchmarks, where we tested and compared over 40 products, we identified 120 generative AI use cases across the following categories. For applications of AI to requests with a single correct answer (e.g., prediction or classification), check out AI applications.
Compare Remote Control Software: NinjaOne & Acronis
We tested the top 3 remote control software tools (also known as remote access software) to evaluate the general UI and remote control experience, remote control quality, protocols, and unique capabilities. Note that an agent needs to be installed for each tool we tested in this benchmark.
Text-to-Speech Software: Hume & ElevenLabs in 2026
As AI capabilities evolve, text-to-speech (TTS) software is becoming more adept at producing natural, human-like speech. We evaluated and compared the performance of five different TTS and sentiment analysis tools (Resemble, ElevenLabs, Hume, Azure, and Cartesia) across seven core emotion categories to determine which could most accurately, consistently, and comprehensively recognize emotional tones.
Text-to-Image Generators: Nano Banana Pro & GPT Image 1.5
We compared the top 6 text-to-image models across 15 prompts to evaluate visual generation capabilities in terms of temporal consistency, physical realism, text and symbol recognition, human activity understanding, and complex multi-object scene coherence. Review our benchmark methodology to understand how these results are calculated and see output examples.
AI Image Detector Benchmark in 2026
As AI-generated images grow more realistic and accessible, the ability to detect them has become a critical concern for upholding generative AI ethics, combating misinformation, and ensuring image authenticity. We compared the top 7 AI image detectors across 5 dimensions and found that most perform no better than a coin toss.
AIMultiple Newsletter
1 free email per week with the latest B2B tech news & expert insights to accelerate your enterprise.