
Sedat Dogan
Technical focus
Sedat is a daily user of the products that he technically reviews. As part of his role at AIMultiple, he oversees the collection of numerous public data points on more than 80,000 B2B technology vendors using proxy, web scraping and web unblocker solutions from multiple providers. Sedat is also the technical lead for AIMultiple's annual proxy and web unblocker benchmarks, which measure the performance of the top web data infrastructure providers.
In addition, he designs and conducts AI benchmarks to evaluate the efficiency, scalability, and accuracy of leading artificial intelligence solutions.
Professional experience
Currently, Sedat is the CTO at AIMultiple. He is also a board advisor at
- a VC investing in early-stage technology firms
- Ödeal, a regional digital payment platform serving 125,000 merchants with its POS solutions.
Previously, he worked as
- CTO at Bionluk, an online marketplace for freelancers
- CTO at Expertera, a global expertise platform
- CEO of a cybersecurity services provider
- CTO and co-founder at a social network
He has 20 years of experience as a white-hat hacker and development guru focused on programming languages and server architectures.
He directed and secured the technological infrastructure and cybersecurity operations of the last seven national elections in his country.
He has also been recognized in the cybersecurity halls of fame of global technology leaders, including Twitter.
Education
Sedat holds dual bachelor's degrees in engineering from Yıldız Teknik Üniversitesi.
Latest Articles from Sedat
Top 10+ DSPM Vendors to Enhance Data Security
As a technology and information security leader, I selected the top 10 DSPM solutions for discovering, classifying, and protecting sensitive data across IaaS, SaaS, and DBaaS environments.
Top Serverless Functions: Vercel vs Azure vs AWS
Serverless functions enable developers to run code without having to manage a server. This allows them to focus on writing and deploying applications while infrastructure scaling and maintenance are handled automatically in the background. In this benchmark, we evaluated 7 popular cloud service providers following our methodology to test their serverless function performance.
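As a rough illustration of this programming model, here is a minimal AWS Lambda-style handler in Python; the function name and request fields are illustrative, and Vercel and Azure expose similar but not identical entry points.

```python
import json

def lambda_handler(event, context):
    # Read an optional "name" field from the JSON request body, if any.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # Return an HTTP-style response; the platform handles scaling,
    # routing, and server maintenance behind the scenes.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```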
Best LinkedIn Scrapers: Benchmarks + Python Guide
We benchmarked the best LinkedIn scraper tools using 9,000 requests across posts, profiles, and job listings. This guide will help you whether you’re choosing a scraper for your team or developing your own LinkedIn scraper in Python.
12 Best Residential Proxy Service Providers
As AIMultiple’s CTO, I lead data collection from thousands of websites and run a residential proxy benchmark, including a load test with 100,000 parallel requests.
S3 Compatible Cloud Object Storage Benchmark
I have been using Amazon S3 for over a decade. I have benchmarked leading S3-compatible object storage providers across 9 key criteria (e.g., ease of migration, data transfer fees, storage costs, retrieval times, and mount-ability). AWS leads the public cloud computing market, with S3 being the most widely used cloud storage service.
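Because these providers implement the S3 API, the same SDK calls work once the endpoint is overridden. A minimal boto3 sketch (the endpoint URL, bucket name, and credentials are placeholders) looks like this:

```python
import boto3

# Point the standard AWS SDK at an S3-compatible provider by overriding
# the endpoint; swap in your provider's URL and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-provider.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload and list objects exactly as you would with Amazon S3.
s3.upload_file("report.pdf", "my-bucket", "reports/report.pdf")
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```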
10 Best Datacenter Proxy Providers (Benchmark & Pricing Guide)
A datacenter proxy is an IP address hosted on a data center server that routes traffic between your device and target websites. We benchmarked the top datacenter proxy providers across 15,000 requests, comparing their servers for speed, reliability, and affordability.
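To make the mechanics concrete, the short Python sketch below routes a request through a proxy with the requests library; the proxy address and credentials are placeholders for whatever your provider issues.

```python
import requests

# Placeholder proxy address and credentials; providers typically give you
# a host:port plus a username/password or IP allowlisting.
proxies = {
    "http": "http://user:pass@proxy.example.com:8080",
    "https": "http://user:pass@proxy.example.com:8080",
}

# The target site sees the datacenter IP instead of your own address.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json())
```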
Benchmark-Based Comparison of WAF Solutions
With two decades of industry experience, and with market data showing that nearly half of breaches begin with web applications, we manually tested three leading WAFs (Cloudflare, Imperva, Barracuda) against real attack traffic and documented the results.
Screenshot to Code: Lovable vs v0 vs Bolt
During my 20 years as a software developer, I led many front-end teams in building pages from designs that started as screenshots. AI tools can now convert such designs directly into code.
Top 30 Cloud GPU Providers & Their GPUs
We benchmarked the 10 most common GPUs in typical scenarios (e.g., fine-tuning an LLM like Llama 3.2). Ranking: sponsors have links and are highlighted at the top; after that, hyperscalers are listed by US market share; then, providers are sorted by the number of models they offer.
Multi-GPU Benchmark: B200 vs H200 vs H100 vs MI300X
For over two decades, optimizing compute performance has been a cornerstone of my work. We benchmarked NVIDIA’s B200, H200, H100 and AMD’s MI300X to assess how well they scale for Large Language Model (LLM) inference. Using the vLLM framework with the meta-llama/Llama-3.1-8B-Instruct model, we ran tests on 1, 2, 4 and 8 GPUs.
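For readers who want to reproduce a run, a minimal vLLM invocation along these lines distributes the model across several GPUs; the sampling parameters are illustrative rather than the exact benchmark configuration.

```python
from vllm import LLM, SamplingParams

# Shard the model across 4 GPUs with tensor parallelism; set
# tensor_parallel_size to 1, 2, 4 or 8 to mirror the benchmark runs.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=4)

sampling = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain tensor parallelism in one sentence."], sampling)

for output in outputs:
    print(output.outputs[0].text)
```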