
MongoDB Monitoring: SolarWinds vs New Relic vs Datadog

Sedat Dogan
updated on Dec 5, 2025

Monitoring tools promise easy integration, but which ones actually deliver when you’re not a DevOps expert?

We installed SolarWinds, Datadog, and New Relic on clean systems running MongoDB 7.0 to find out. Our infrastructure team went through each tool’s complete setup process, documenting every step and roadblock.

MongoDB Performance Monitoring Tools Benchmark Results

| Platform | Setup Time | Query Profiling | Metric Accuracy | Resource Usage | Best For |
|---|---|---|---|---|---|
| SolarWinds | 5 min | ✅ Comprehensive | ✅ 100% accurate | Medium (500MB) | Production optimization |
| New Relic | 15 min | ❌ None | ❌ Undercounted 23-800% | Low (90MB) | Basic health checks |
| Datadog | 20+ min | ❌ None | ⚠️ Unclear | Medium (330MB) | Multi-tech monitoring |

The winner: SolarWinds completed setup in 5 minutes with automatic detection and provided query-level profiling that the others lacked. New Relic took 15 minutes with manual verification steps and reported inaccurate metrics. Datadog required 20+ minutes of YAML editing and offered only basic visibility.

You can also see how these platforms monitor MySQL.

Test Environment and Methodology

We ran all three tools on identical setups to ensure fair comparison. Each test used:

  • Database: MongoDB 7.0 Community Edition
  • Server: AWS m6i.xlarge instance
  • Starting point: Fresh installation with the main monitoring agent already installed

All three vendors require you to install their base agent before adding specific integrations, such as MongoDB. We completed that step beforehand, so our test focused purely on the MongoDB integration experience.

What we measured:

  • Setup complexity: Number of manual steps, automatic versus manual configuration, instruction clarity, and whether the interface guided us or left us hunting for next steps.
  • Agent resource consumption: CPU, memory, disk I/O, and network usage during idle and under load (inserting 7 million records).
  • Monitoring capabilities: Dashboard quality, metric accuracy, query-level analysis, and troubleshooting features.

We approached each tool as a regular user would, without reading documentation beforehand, and with no prior training. If something wasn’t apparent in the interface, we noted it.

1. Installation & Onboarding Experience

1. SolarWinds

SolarWinds finished the MongoDB integration in under 5 minutes. It opens with a simple modal: “What do you want to monitor?” When you select database performance, the platform displays the supported databases up front.

After selecting MongoDB, SolarWinds checks for existing agents.

The platform immediately detected our previously installed agent.

One feature stood out: the interface shows agent details (operating system, cloud instance ID, version) right there in the selection screen. No searching through dropdowns.

Now SolarWinds asks for MongoDB credentials. We entered the connection details: localhost, authentication method (password-based), username, and password. The display name auto-filled with our server information, though it used the full internal hostname rather than the agent name we’d specified earlier.

One oddity: the “Query Capture” dropdown appeared without explanation. We selected “Log” and moved forward, unsure what the other options did.

The next screen presented three database commands to run. Each command had a copy button. We ran them in MongoDB and clicked “Observe Database.”

Here’s where SolarWinds impressed us. Instead of asking us to figure out permissions, it provided copy-paste commands:

  1. Create a monitoring user with specific credentials
  2. Grant the necessary privileges (clusterMonitor and readAnyDatabase roles)
  3. Set the profiling level
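In mongosh, those steps look roughly like the sketch below. The user name, password, and profiling threshold are illustrative placeholders, not SolarWinds’ exact generated commands, and steps 1 and 2 collapse into a single createUser call:

```javascript
// Run inside mongosh against the admin database.
// Names and secrets below are placeholders; SolarWinds generates its own
// versions of these commands during setup.
use admin

db.createUser({
  user: "monitoring_user",            // hypothetical user name
  pwd: "a-strong-password",           // use a generated secret in practice
  roles: [
    { role: "clusterMonitor", db: "admin" },   // server and cluster stats
    { role: "readAnyDatabase", db: "admin" }   // database and collection stats
  ]
})

// Level 1 profiling records operations slower than `slowms` milliseconds,
// which is what feeds a query profiler.
db.setProfilingLevel(1, { slowms: 100 })
```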

A summary screen appeared showing our configuration. The plugin status showed “Plugin is being deployed.”

Seconds later, the status changed to “Plugin deployment is successful” with a link to view the dashboard. Setup complete.


2. New Relic

New Relic took roughly 15 minutes to set up, but the time wasn’t the real problem. The friction came from answering questions that the platform should have already known.

New Relic starts at the Integrations & Agents page.

We searched for “mongo” and found multiple MongoDB-related integrations.

After selecting MongoDB, New Relic asked us to choose an instrumentation method.

We picked “On a host” since our agent was already installed. The next screen asked for the operating system. We selected Linux. This felt unnecessary since the agent was already running on the server, but we continued.

The next screen asked for MongoDB host details. The term “SCRAM” appeared without explanation. Most people know this as username/password authentication, but the technical term adds confusion.

After clicking continue, New Relic asked us which server to install on. This question should have come first, not after we’d already entered configuration details. The agent was already installed on “aimultiple-benchmark,” so we selected it and continued.

The next screen asked us to verify MongoDB version compatibility. New Relic wanted us to run mongod --version and confirm the output matched its requirements. We had to copy the command, switch to our terminal, run it, check the version number, and come back to click continue.

The agent’s already installed on the server. It could check this automatically.

After clicking continue, we reached the user creation step. New Relic provided a MongoDB script to create the monitoring user. The commands were clear, with proper role assignments (clusterMonitor and readAnyDatabase). We also had to run a connection test command to verify that the user worked correctly.
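A connection test along these lines (credentials are placeholders standing in for the monitoring user created above, not New Relic’s exact command) confirms the new user can authenticate and read server status:

```javascript
// Run inside mongosh. Authenticate as the monitoring user created earlier;
// the credentials here are placeholders.
db.getSiblingDB("admin").auth("monitoring_user", "a-strong-password")

// clusterMonitor permits serverStatus; ok: 1 means the user works.
db.getSiblingDB("admin").runCommand({ serverStatus: 1 }).ok
```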

This approach was better than asking for root access, but it assumed we’d figure out where to run these commands.

The next screen asked us to install the integration package. New Relic’s instructions showed a yum command, even though our server ran Ubuntu.

We ran the correct apt command for Ubuntu, then moved to the next screen. New Relic provided a YAML configuration file and told us exactly where to put it: /etc/newrelic-infra/integrations.d/. At least the file path was clear.
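We won’t reproduce New Relic’s generated file verbatim, but on-host integrations follow a common shape; the sketch below is a hedged approximation (key and env names can vary by nri-mongodb version, and all values are placeholders):

```yaml
# /etc/newrelic-infra/integrations.d/mongodb-config.yml
# Illustrative sketch only; use the file New Relic's guided install generates.
integrations:
  - name: nri-mongodb
    env:
      HOST: localhost
      PORT: "27017"
      USERNAME: monitoring_user      # the monitoring user created earlier
      PASSWORD: a-strong-password
      AUTH_SOURCE: admin
```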

We created the file, pasted the configuration, and clicked Continue. The final screen showed a “Test connection” button. We clicked it and waited.

The test passed. Setup complete.

3. Datadog

Datadog took over 20 minutes to complete. The integration worked eventually, but getting there required significant manual effort.

After logging in, we went to Integrations and searched for “mongo.” We clicked on MongoDB, and a modal appeared.

The overview showed what MongoDB monitoring includes, but clicking “Install Integration” just opened another screen with dense instructions.

This is where Datadog overwhelmed us. The screen showed a complete reference guide covering every possible MongoDB scenario: standalone instances, replica sets, sharded clusters, authentication methods, SSL configuration, and more.

For someone just trying to monitor a single MongoDB instance, the wall of text felt excessive.

We scrolled through looking for the basic steps:

  1. Create a monitoring user in MongoDB
  2. Edit the configuration YAML file
  3. Restart the Datadog agent

Datadog provided the MongoDB commands to create the user, which was helpful. But when it came to the YAML file, the documentation said to edit conf.yaml without clearly stating where this file should go.

We knew from experience it belongs in /etc/datadog-agent/conf.d/mongo.d/, but the instructions buried this detail deep in the documentation.
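For a single instance, a minimal conf.yaml looks something like this. Credentials are placeholders, and the `database_autodiscovery` keys reflect recent agent versions, so treat this as a sketch rather than Datadog’s canonical example:

```yaml
# /etc/datadog-agent/conf.d/mongo.d/conf.yaml
# Placeholder values; adjust to the monitoring user you created.
instances:
  - hosts:
      - localhost:27017
    username: monitoring_user
    password: a-strong-password
    database_autodiscovery:
      enabled: true
      include:
        - ".*"   # pattern-match which databases to monitor; the default may not match yours
```

After placing the file, the agent must be restarted (`sudo systemctl restart datadog-agent`) before the check picks it up.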

We created the MongoDB user, wrote the YAML configuration, placed it in the correct directory, and restarted the agent.

Then we went back to the Datadog interface and clicked “Install Integration.”

The button disappeared. No confirmation message, no success notification, no redirect to a dashboard. Nothing.

We waited a moment, then navigated to the Dashboards section manually and found MongoDB metrics beginning to populate.

2. Agent Resource Consumption

We monitored how much each agent consumed while running. The test ran for approximately 10 minutes with all three agents collecting data simultaneously from the same MongoDB instance under load.

We stressed the system by inserting 2 million records into MongoDB using a script that generated random data. This simulated real-world database activity while we measured agent resource usage.

CPU Consumption

All three agents used minimal CPU resources during the test.

  • New Relic showed the lowest average CPU consumption but had occasional spikes reaching 4%. These spikes were brief and didn’t impact system performance.
  • SolarWinds maintained the most consistent CPU usage, staying around 3% without significant variation.
  • Datadog fell in the middle, averaging just over 2% with stable performance throughout the test.

Memory Usage

Memory usage showed more significant differences between the agents.

New Relic used roughly one-fifth the memory of SolarWinds. On our 16GB test server, this translated to:

  • New Relic: ~90MB
  • Datadog: ~330MB
  • SolarWinds: ~500MB

For most production servers, these amounts won’t matter. But if you’re running agents on resource-constrained systems or monitoring hundreds of databases, the difference adds up.

Memory usage remained stable across all three agents throughout the test. No memory leaks or unexpected growth occurred.

Disk I/O

Disk activity varied considerably between agents.

SolarWinds performed significantly more disk reads than the other two agents, about 40x as many as New Relic and 1.5x as many as Datadog. This suggests that SolarWinds accesses locally stored data more frequently, possibly for its query-profiling features.

Datadog wrote the least to disk, indicating it buffers less data locally before sending it to the cloud.

New Relic showed the most balanced I/O pattern with moderate reads and writes.

Network Usage

Network traffic showed how much data each agent sent to its backend.

All three agents sent similar amounts of data over the network. Datadog transmitted slightly less, possibly due to more aggressive compression or different sampling rates.

The bidirectional traffic (sent and received being nearly equal) makes sense: agents send metrics and receive configuration updates or commands from the platform.

Resource Impact Summary

None of these agents will strain your system. Even under database load with all three running simultaneously, total resource consumption stayed well under 10% for CPU and memory combined.

New Relic wins on memory efficiency. SolarWinds uses more resources but delivers more detailed query-level analysis. Datadog sits in the middle.

For most use cases, these resource differences won’t influence your decision. Choose based on features and usability, not resource consumption.

3. Dashboard & Monitoring Capabilities

After completing the setup, we needed to see what each platform actually shows. We ran the same workload across all three: inserting 2 million records in batches of 5,000, followed by another 5 million records.

The script used Node.js with Faker to generate random user data: names, emails, addresses, and phone numbers. This gave us a realistic dataset to monitor.
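The insert script followed this pattern. The sketch below swaps Faker and the MongoDB driver for stubs so the batching logic is runnable on its own, and the record counts are scaled down for illustration:

```javascript
// Scaled-down sketch of the batch-insert workload. The real script used the
// MongoDB Node.js driver and Faker; here a stub collection and Math.random
// stand in so the batching pattern itself runs anywhere.
const TOTAL_RECORDS = 20_000;  // real test: 2,000,000 then 5,000,000 more
const BATCH_SIZE = 5_000;

// Stand-in for a faker-generated user document.
function makeUser(i) {
  return {
    name: `user_${i}`,
    email: `user_${i}@example.com`,
    phone: String(Math.floor(Math.random() * 1e10)).padStart(10, "0"),
  };
}

// Stand-in for a MongoDB collection; the real script called
// collection.insertMany(batch) against the database.
const inserted = [];
const collection = { insertMany: (docs) => inserted.push(...docs) };

let batches = 0;
for (let start = 0; start < TOTAL_RECORDS; start += BATCH_SIZE) {
  const batch = [];
  for (let i = start; i < Math.min(start + BATCH_SIZE, TOTAL_RECORDS); i++) {
    batch.push(makeUser(i));
  }
  collection.insertMany(batch);  // one monitored "insert operation" per batch
  batches++;
}

console.log(`${batches} batches, ${inserted.length} documents`);
```

Each insertMany call counts as a single operation from the database’s point of view, which matters when reading the dashboards later.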

While the inserts ran, we monitored agent resource consumption in the background.

The workload put real stress on MongoDB, which let us see how each platform captured and displayed the activity.

SolarWinds Dashboard

We clicked “Databases” in the left menu and immediately saw our MongoDB instance. One click, and a complete dashboard appeared.

The top of the screen showed MongoDB health, average response time, throughput (queries per second), and error count. The “Top 10 Service Breakdown” bubble chart displayed the most frequently used query patterns with their counts and percentages.

The numbers told a story. Throughput showed 3 queries per second on average. The breakdown showed 1,400 insert operations. Why 1,400 instead of 7 million?

We inserted 7 million records in batches of 5,000. That’s 1,400 batch operations. SolarWinds tracked every single batch without missing one.
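The arithmetic checks out:

```javascript
// 7 million documents in batches of 5,000 means 1,400 insert operations,
// which is exactly the count the service breakdown reported.
const documents = 7_000_000;
const batchSize = 5_000;
const operations = documents / batchSize;
console.log(operations); // 1400
```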

The Profiler tab showed query patterns with average execution times.

Our insert queries took 4-5 seconds each, which seems high until you remember each query wrote 5,000 rows.

The Health tab showed everything running smoothly.

We stopped the MongoDB service to see how quickly SolarWinds would notice. Within 30-40 seconds, the health status changed to “Bad.”

The Queries tab provided advanced filtering. You could list queries that:

  • Returned errors
  • Ran without proper indexes
  • Responded slowly
  • Generated warnings

Each query pattern showed when it first appeared, when it last ran, how many samples were captured, and execution statistics. For troubleshooting, this level of detail matters.

The Alerts tab let us create MongoDB-specific alerts. We’d created a memory alert for the host earlier, but now we could set up database-specific notifications.

The Resources tab showed host-level metrics alongside MongoDB stats, CPU, memory, disk, and network. This context helps distinguish between database issues and underlying infrastructure problems.

The Advisors tab had no recommendations yet, but it provided them for MySQL in our previous test. We expect it to offer optimization suggestions as it collects more MongoDB data.

New Relic Dashboard

We went to the Dashboards section. No MongoDB dashboard appeared automatically.

We searched for “mongo” in the dashboard catalog and found two MongoDB options.

We selected the regular MongoDB dashboard and clicked “Setup MongoDB.”

It redirected us to the MongoDB integration setup again. The platform already knew we’d installed MongoDB, so why send us back to installation? We clicked “Done” and proceeded to the dashboard.

The dashboard opened completely empty. “No value reported for service check mongodb.can_connect.”

We checked our configuration using newrelic-infra agent configtest.

The integration_name showed “nri-prometheus.” We’d accidentally selected the Prometheus version of the MongoDB integration during dashboard setup. A regular user wouldn’t catch this.

We went back and installed the “MongoDB (Prometheus)” dashboard.

This time, data appeared.

But here’s the problem: how would a normal user figure this out? The installation process was confusing, and now the dashboard selection added another layer of complexity.

The dashboard layout felt odd. The top showed total server and database counts, information that changes perhaps once a year, yet it occupied prime screen real estate.

Below that, “Connection Saturation” appeared prominently. This metric only matters when something’s wrong. Why put it at the top?

The “Query Operations” section reported 11,670 inserts. The number was wrong. We inserted 7 million records in 1,400 batch operations. The graph didn’t match reality.

The Databases tab showed database size, object counts, and index sizes. These numbers were correct: 7 million objects. New Relic gets this data by querying MongoDB directly (“How many documents do you have?”). But the real-time query counting failed.

The Collections tab included useful graphs for collection-level metrics: size (with both table and graph views), total size with percentage change, read and write operation counts and latencies, transaction counts and latency, index access operations, and command execution counts, latency, frequency, and duration.

Notably absent: host metrics. We couldn’t see CPU, memory, disk, or network usage for the server running MongoDB. SolarWinds and Datadog both included this context.

More importantly, no query-level analysis existed anywhere. No query patterns, no profiling, no slow query identification, no missing index detection. For database troubleshooting, these features matter.

Datadog Dashboard

We clicked “Dashboards” in the left menu. A “MongoDB – Overview” dashboard appeared automatically.

We opened it, but it was empty.

The problem took time to diagnose. During installation, Datadog’s autodiscovery configuration required specifying which databases to monitor using a pattern match. The default pattern didn’t match our database name. Datadog never mentioned this during setup.

We changed all the patterns to .* (match everything) and restarted the agent.

But why was the dashboard completely empty? Even without database-specific metrics, uptime, connection counts, and server stats should have appeared. They didn’t.

We ran datadog-agent check mongo to debug. The config file had an indentation error. YAML’s strict formatting requirement caught us. After fixing it and rerunning our load test with 5 million inserts, data finally appeared.
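Indentation errors like ours are easy to make in YAML. The fragments below show one common failure mode; the keys come from our Datadog config, the values are placeholders:

```yaml
# Correct: `username` is indented to align with `hosts`, so it is a sibling
# key on the same instance.
instances:
  - hosts:
      - localhost:27017
    username: monitoring_user

# Broken: `username` is indented one level too far. The parser expects another
# sequence item under `hosts` and fails, so the whole check refuses to load.
# instances:
#   - hosts:
#       - localhost:27017
#       username: monitoring_user
```

Running `datadog-agent check mongo` after any edit, as we did, surfaces these parse errors immediately.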

The dashboard immediately showed problems. The Logs section displayed “Not Accessible” even though we’d configured log collection in our YAML file. Datadog’s setup process said everything was fine, but logs weren’t working.

The dashboard layout made little sense for our use case. The top section focused on sharding statistics. We weren’t running a sharded cluster. The middle showed replica set metrics. We didn’t have replica sets. The bottom returned to sharding again. Roughly 60% of the dashboard displayed empty sections for features we weren’t using.

The useful information occupied maybe 40% of the screen: uptime, memory usage, network I/O, queries per second, and read/write latency. No query analysis, no profiling, no slow query detection, no index recommendations.

We couldn’t even determine how many operations ran from this dashboard.

Final Verdict

We set out to answer a simple question: which monitoring platform makes MongoDB integration easiest for non-technical teams?

After installing all three, running identical workloads, and evaluating dashboards, the answer became clear.

SolarWinds: Built for Database Monitoring

SolarWinds won this comparison decisively. The platform immediately detected our agent, guided us through credential setup via copy-paste commands, and deployed the integration automatically. Setup took 5 minutes.

The dashboard appeared instantly with relevant information. Query profiling showed exactly which operations consumed the most resources. The platform caught all 1,400 batch operations without missing a single one. When we stopped MongoDB, SolarWinds detected the failure within 40 seconds.

The Queries tab let us filter by errors, missing indexes, slow responses, and warnings, features that directly support database optimization. The Advisors feature promised recommendations (though we didn’t generate enough data to trigger any during our test).

SolarWinds focused on what database administrators actually need: query analysis, performance profiling, and actionable insights.

New Relic: Lost in Configuration

New Relic took 15 minutes to set up, but time wasn’t the main issue. The platform asked questions in the wrong order, required manual verification of things the agent could check automatically, and forced us to manually install packages.

The dashboard confusion made things worse. We installed MongoDB monitoring, but the default dashboard selection led to an empty screen. Only after digging into configuration files did we realize we’d selected the wrong integration type. A regular user wouldn’t figure this out.

When data finally appeared, the metrics were wrong. New Relic reported 11,670 inserts when we’d performed 1,400 batch operations totaling 7 million records. The reported figure matched neither the batch count nor the document count.

More critically, New Relic provided no query-level analysis. No profiling, no slow query detection, no missing index identification. For database troubleshooting, these omissions matter.

Datadog: Manual Work Required

Datadog required 20+ minutes of setup and the most manual configuration. We edited YAML files, figured out where to place them, and restarted services from the command line.

The dashboard appeared automatically but displayed nothing. The autodiscovery configuration used a pattern that didn’t match our database. After fixing the pattern and correcting YAML indentation errors, data finally populated.

The dashboard itself proved poorly designed for single-instance MongoDB. Sixty percent of the screen showed empty sections for sharding and replica sets—features we weren’t using. The remaining 40% offered basic metrics: uptime, memory, network I/O, queries per second, and latency.

No query analysis. No profiling. No optimization recommendations. We couldn’t even determine operation counts accurately from the dashboard.

The Core Difference

SolarWinds treats database monitoring as a specialized discipline requiring deep query visibility.

New Relic and Datadog treat databases as another monitored component. They provide surface-level metrics but lack depth for database optimization.

Recommendations

SolarWinds: If you need query analysis, performance profiling, accurate metrics, and troubleshooting tools.

New Relic: If you’re already using it for application monitoring and need only basic database health checks.

Datadog: If you’re comfortable with manual configuration and monitoring many technologies through one platform.

Sedat Dogan
CTO
Sedat is a technology and information security leader with experience in software development, web data collection and cybersecurity. Sedat:
- Has 20 years of experience as a white-hat hacker and development guru, with extensive expertise in programming languages and server architectures.
- Is an advisor to C-level executives and board members of corporations with high-traffic and mission-critical technology operations like payment infrastructure.
- Has extensive business acumen alongside his technical expertise.
