Sıla Ermut
Sıla is an industry analyst at AIMultiple focused on email marketing and sales videos.
Research interests
Sıla's research areas include email marketing, eCommerce marketing campaigns, and marketing automation. She is also part of AIMultiple's email deliverability benchmark, designing and running deliverability benchmarks in collaboration with the AIMultiple technology team.
Professional experience
Sıla previously worked as a recruiter and held roles at project management and consulting firms.
Education
She holds:
- Bachelor of Arts in International Relations from Bilkent University.
- Master of Science in Social Psychology from Başkent University.
Her Master's thesis focused on ethical and psychological concerns about AI, examining the relationship between AI exposure, attitudes toward AI, and existential anxieties across different levels of AI usage.
Latest Articles from Sıla
LLM Parameters: GPT-5 High, Medium, Low and Minimal
New LLMs, such as OpenAI’s GPT-5 family, come in different versions (e.g., GPT-5, GPT-5-mini, and GPT-5-nano) and with various parameter settings, including high, medium, low, and minimal. Below, we explore the differences between these model versions by gathering their benchmark performance and the costs to run the benchmarks.
Top 4 AI Guardrails: Weights and Biases & NVIDIA NeMo
As AI becomes more integrated into business operations, the impact of security failures increases. Most AI-related breaches result from inadequate oversight, access controls, and governance rather than technical flaws. According to IBM, the average cost of a data breach in the US reached $10.22 million, mainly due to regulatory fines and detection costs.
Top 9 AI Providers Compared in 2026
The AI infrastructure ecosystem is growing rapidly, with providers offering diverse approaches to building, hosting, and accelerating models. While they all aim to power AI applications, each focuses on a different layer of the stack.
Top 20 Predictions from Experts on AI Job Loss in 2026
AI could eliminate half of entry-level white-collar jobs within the next five years. These job losses could affect the global workforce faster than previous waves of technological change. By 2027, millions of jobs may be displaced or significantly altered. While some roles will evolve, the workforce must prepare for a sharp increase in disrupted employment.
Top 20 Sustainability AI Applications & Examples in 2026
According to PwC, GenAI could improve operational efficiency, which might indirectly reduce carbon footprints in business processes. Companies can implement strategies to reduce energy consumption during the development, customization, and inference stages of AI models. By leveraging GenAI applications, companies can offset emissions in other areas of their operations.
Top 6 Social Media Post Generator Benchmark in 2026
Generative AI is playing a significant role in the creation and management of social media content. As more tools offer features like caption writing, image selection, and post scheduling, it’s helpful to understand how they compare.
E-Commerce AI Video Maker Benchmark: Veo 3 vs Sora 2
Product visualization plays a crucial role in e-commerce success, yet creating high-quality product videos remains a significant challenge. Recent advancements in AI video generation technology offer promising solutions.
AI Hallucination Detection Tools: W&B Weave & Comet ['26]
We benchmarked three hallucination detection tools: Weights & Biases (W&B) Weave HallucinationFree Scorer, Arize Phoenix HallucinationEvaluator, and Comet Opik Hallucination Metric, across 100 test cases. Each tool was evaluated on accuracy, precision, recall, and latency to provide a fair comparison of their real-world performance.
Compare Remote Desktop Tools: NinjaOne & Acronis ['26]
We tested the top 3 remote desktop tools, evaluating the general UI and remote control experience, remote control quality, protocols, and unique capabilities. See our methodology to learn how we measured these tools.
LLM Scaling Laws: Analysis from AI Researchers in 2026
Large language models are usually trained as neural language models that predict the next token in natural language. The term LLM scaling laws refers to empirical regularities that link model performance to the amount of compute, training data, and model parameters used when training models.