Slow MongoDB queries are usually not mysterious. Most of the time, one of four things is happening: MongoDB is scanning too much data, sorting too much data, returning too much data, or doing work too late in an aggregation pipeline. The trick is to stop guessing and inspect the actual query plan with explain("executionStats"). It shows you whether a query used an index, how many keys and documents it examined, and whether the work being done is reasonable for the number of results returned.

The mindset: don’t tune vibes, tune evidence

When a query feels slow, the worst move is randomly adding indexes and hoping one sticks. The better move is to treat the query like a crime scene. You want to know:
  • Did MongoDB use an index or fall back to a collection scan?
  • How many index keys did it look at?
  • How many documents did it read?
  • How many documents did it actually return?
  • Did it sort in index order, or sort in memory afterwards?
That’s why explain("executionStats") is your best friend. It gives you the closest thing MongoDB has to an X-ray — completed execution information for the winning plan, including the execution statistics you need to diagnose slow queries.

Step 1: isolate the exact slow query

Start with the smallest, cleanest version of the query that still shows the slowdown. Strip out extra projections, application code, and unrelated stages until you’re looking at the real offender. For a normal query:
db.orders.find(
  { tenantId: "t1", status: "open" }
).sort({ createdAt: -1 })
For an aggregation:
db.orders.aggregate([
  { $match: { tenantId: "t1", status: "open" } },
  { $sort: { createdAt: -1 } },
  { $limit: 50 }
])
This matters because MongoDB plans and caches execution based on the query shape: the combination of predicate, sort, projection, and related details. The profiler and explain tooling both report in those terms, so you want to inspect the real shape, not a fuzzy approximation of it.

Step 2: run explain("executionStats")

Now ask MongoDB how it actually ran the query.
db.orders.find(
  { tenantId: "t1", status: "open" }
).sort({ createdAt: -1 }).explain("executionStats")
Or for aggregation:
db.orders.explain("executionStats").aggregate([
  { $match: { tenantId: "t1", status: "open" } },
  { $sort: { createdAt: -1 } },
  { $limit: 50 }
])
The most useful outputs here are the winning plan, execution statistics, and the stages MongoDB used to satisfy the query.
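The output is verbose, but only a handful of fields matter for this workflow. Here is a trimmed, illustrative sketch: the field names match real explain output, but the values and the index name are invented for the example.

```javascript
// Abridged, illustrative explain("executionStats") output.
// Field names are real explain fields; values and the index name are made up.
const explainResult = {
  queryPlanner: {
    winningPlan: {
      stage: "FETCH",                      // fetch full documents...
      inputStage: {
        stage: "IXSCAN",                   // ...located via an index scan
        indexName: "tenantId_1_status_1_createdAt_-1"
      }
    }
  },
  executionStats: {
    nReturned: 20,            // documents returned to the client
    totalKeysExamined: 20,    // index entries scanned
    totalDocsExamined: 20,    // full documents read
    executionTimeMillis: 1
  }
};

// A healthy examined-to-returned ratio is close to 1.
const { nReturned, totalDocsExamined } = explainResult.executionStats;
console.log(totalDocsExamined / Math.max(nReturned, 1)); // 1
```

The stage tree tells you how the query ran; executionStats tells you how much work it did. The next step is reading those numbers.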

Step 3: look for the two big red flags

Red flag 1: COLLSCAN

If you see COLLSCAN, MongoDB scanned the entire collection instead of using an index. That usually means one of three things:
  • There is no useful index.
  • There is an index, but it doesn’t match the query shape well enough.
  • The optimizer decided scanning was cheaper than the available index path.
In plain English: MongoDB had to read way more data than it wanted to.

Red flag 2: examined is way bigger than returned

The most revealing numbers are usually:
  • nReturned — How many documents came back.
  • totalKeysExamined — How many index entries were scanned.
  • totalDocsExamined — How many actual documents were read.
If MongoDB examined 50,000 documents to return 20, your query is doing too much work. That gap is the core signal when analyzing query performance. A healthy query often looks like this:
  • Low keys examined
  • Low docs examined
  • Results returned close to docs examined
An unhealthy query often looks like this:
  • Huge keys examined
  • Huge docs examined
  • Tiny result set
That usually means weak selectivity, a poor index, or a query shape that doesn’t line up with the index you have.
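Those two checks are mechanical enough to script. A minimal sketch, using hypothetical helpers of my own (not a MongoDB API), that takes an explain result and reports both red flags; the 10x threshold is an arbitrary example, not an official cutoff:

```javascript
// Hypothetical helper -- walks the winning plan tree collecting stage names.
function findStages(plan, found = []) {
  if (!plan) return found;
  found.push(plan.stage);
  for (const child of [plan.inputStage, ...(plan.inputStages || [])]) {
    if (child) findStages(child, found);
  }
  return found;
}

// Hypothetical helper -- flags the two big red flags from explain output.
function redFlags(explainResult) {
  const stats = explainResult.executionStats;
  const stages = findStages(explainResult.queryPlanner.winningPlan);
  return {
    collscan: stages.includes("COLLSCAN"),
    // "way bigger than returned": 10x here is an example threshold
    overExamined: stats.totalDocsExamined > 10 * Math.max(stats.nReturned, 1)
  };
}

// 50,000 docs examined to return 20 trips both checks.
const flags = redFlags({
  queryPlanner: { winningPlan: { stage: "COLLSCAN" } },
  executionStats: { nReturned: 20, totalDocsExamined: 50000 }
});
console.log(flags); // { collscan: true, overExamined: true }
```
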

Step 4: check whether the sort is the real problem

A lot of slow queries aren’t slow because of the filter. They’re slow because MongoDB has to sort a big pile of matching documents after it finds them. This is where compound index order matters. MongoDB recommends the ESR guideline for many compound indexes: put Equality fields first, then Sort, then Range. The order of fields in a compound index materially affects how effective the index is. So if your query is:
db.orders.find(
  { tenantId: "t1", status: "open" }
).sort({ createdAt: -1 })
A much better index is often:
db.orders.createIndex({ tenantId: 1, status: 1, createdAt: -1 })
That gives MongoDB a chance to both filter and return the results in the required order without doing an extra in-memory sort.
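Whether the sort was index-backed is visible in the plan itself: an in-memory sort appears as a SORT stage in the winning plan, while an index-backed sort has no SORT stage at all. A small sketch of that check, with a tree-walking helper of my own (not a MongoDB API) and simplified plan shapes:

```javascript
// Hypothetical helper -- reports whether the winning plan contains a SORT
// stage, i.e. MongoDB sorted in memory instead of reading in index order.
function usesInMemorySort(plan) {
  if (!plan) return false;
  if (plan.stage === "SORT") return true;
  const children = [plan.inputStage, ...(plan.inputStages || [])];
  return children.some(child => usesInMemorySort(child));
}

// Index-backed: IXSCAN feeds FETCH directly, no SORT stage.
const indexBacked = { stage: "FETCH", inputStage: { stage: "IXSCAN" } };
// In-memory: a SORT stage sits on top of the scan.
const inMemory = { stage: "SORT", inputStage: { stage: "COLLSCAN" } };

console.log(usesInMemorySort(indexBacked)); // false
console.log(usesInMemorySort(inMemory));    // true
```
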

Step 5: check projection — you may be returning too much

Sometimes the query is “fast enough” on the database side, but still feels slow because you’re pulling back giant documents when the UI only needs a few fields. The best case is a covered query: if the fields used in the filter and the fields returned are all in the index, MongoDB can answer the query from the index alone. In explain output, that often shows up as totalDocsExamined: 0. One catch: _id is returned by default, so the projection must explicitly exclude _id (or the index must include it) for the query to stay covered.
// Index that covers filter + sort + returned fields
db.users.createIndex({ status: 1, createdAt: -1, email: 1 })

// Query that can be covered
db.users.find(
  { status: "active" },
  { _id: 0, email: 1, createdAt: 1 }
).sort({ createdAt: -1 })
That’s the dream: filter, sort, and return everything from the index, with no extra document reads.
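One quick way to confirm you actually got there is to re-run explain and check the document count directly. A tiny sketch (the check is my own shorthand, and the stats values are invented):

```javascript
// Hypothetical check -- a covered query reads no documents, so
// executionStats reports totalDocsExamined: 0.
function looksCovered(explainResult) {
  return explainResult.executionStats.totalDocsExamined === 0;
}

// Illustrative stats for the covered query above (values invented).
console.log(looksCovered({ executionStats: { totalDocsExamined: 0, nReturned: 50 } })); // true
console.log(looksCovered({ executionStats: { totalDocsExamined: 50, nReturned: 50 } })); // false
```
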

Step 6: if it’s aggregation, move $match earlier

Aggregation pipelines get slow when they drag too many documents through expensive stages. Place $match as early in the pipeline as possible, because earlier $match stages reduce the number of documents flowing through the rest of the pipeline. If $match is at the beginning, it can also take advantage of indexes the same way a normal find() query does. So this:
db.orders.aggregate([
  { $project: { customer: 1, total: 1, createdAt: 1, status: 1, tenantId: 1 } },
  { $sort: { createdAt: -1 } },
  { $match: { tenantId: "t1", status: "open" } }
])
Is usually worse than this:
db.orders.aggregate([
  { $match: { tenantId: "t1", status: "open" } },
  { $sort: { createdAt: -1 } },
  { $project: { customer: 1, total: 1, createdAt: 1 } }
])
Same logic, way less work.

Step 7: use the profiler when you don’t know which query is slow

Sometimes the problem isn’t one known query. Sometimes the app just feels sluggish and you need to catch the culprits. MongoDB provides the Database Profiler for exactly this. It can identify slow queries and help determine how to improve query performance. The default slow operation threshold is 100 milliseconds. That makes profiler data useful when you want to answer questions like:
  • Which query shapes are showing up repeatedly?
  • Which operations are slow in production but not in development?
  • Are we dealing with one bad query or a whole family of them?
Just be aware that the profiler can affect server performance, consume disk space, and expose query metadata — so enable it thoughtfully rather than leaving it on casually forever.
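Enabling it in mongosh looks roughly like this. The values are examples, not recommendations; level 1 captures only operations slower than slowms, while level 2 captures everything (and costs accordingly).

```javascript
// mongosh -- run against the target database. Settings are illustrative.
// Level 1: profile only operations slower than slowms (here, 100 ms).
db.setProfilingLevel(1, { slowms: 100 })

// Inspect the most recent slow operations captured in system.profile.
db.system.profile.find({ millis: { $gt: 100 } })
  .sort({ ts: -1 })
  .limit(10)
```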

Step 8: check whether you have the wrong indexes — or too many

Not every slow query is fixed by adding an index. Sometimes you already have the right idea, but the wrong order. Sometimes you have five overlapping indexes and MongoDB still can’t satisfy the query efficiently. Sometimes the index exists, but nobody uses it. MongoDB provides $indexStats to return statistics about how each index is being used. That helps answer:
  • Is this index actually being used?
  • Do we have dead-weight indexes hurting write performance?
  • Are we missing the one index that matches the real query shape?
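In mongosh, that check can look like this; the collection name is carried over from the earlier examples, and the projection is just one way to surface low-use indexes.

```javascript
// mongosh -- per-index usage counters (reset when the server restarts).
db.orders.aggregate([
  { $indexStats: {} },
  // Surface removal candidates: indexes with few or no accesses.
  { $project: { name: 1, ops: "$accesses.ops", since: "$accesses.since" } },
  { $sort: { ops: 1 } }
])
```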

The most common slow-query patterns

  1. The “no index” problem — You run a query, see COLLSCAN, and MongoDB is reading the whole collection. That’s usually your easiest win. Add an index that matches the filter or filter-plus-sort pattern.
  2. The “wrong compound index order” problem — You have an index, but it’s not helping enough. This is where ESR matters most. Equality first, then sort, then range.
  3. The “sort after filter” problem — Your filter is okay, but MongoDB still has to sort a large intermediate result. A better compound index can often eliminate that extra work.
  4. The “aggregation does work too late” problem — Your pipeline technically works, but $match comes too late, or a heavy stage is processing far more documents than necessary.
  5. The “covered query missed by one field” problem — You’re close to a covered query, but one projected field is missing from the index, so MongoDB still has to fetch documents. To achieve a covered query, every field you filter on and return must be in the index, and _id must be excluded from the projection unless it is indexed.

A dead-simple workflow that actually works

When a query is slow, do this in order:
  1. Reproduce the slow query cleanly.
  2. Run explain("executionStats").
  3. Check for COLLSCAN.
  4. Compare totalKeysExamined, totalDocsExamined, and nReturned.
  5. Check whether the sort is index-backed.
  6. Check whether projection is larger than it needs to be.
  7. For aggregations, move $match as early as possible.
  8. Build or reshape the index using ESR.
  9. Re-run explain() and compare the numbers again.
  10. Use profiler data if you need to find recurring offenders in production.

Quick cheat sheet

  • IXSCAN = good sign, MongoDB used an index.
  • COLLSCAN = bad sign, MongoDB scanned the collection.
  • ⚠️ totalDocsExamined >> nReturned = query is doing too much work.
  • ⚠️ Sort not supported by index = expect pain as result sets grow.
  • ⚠️ Late $match in aggregation = usually wasted effort.
  • 🏆 totalDocsExamined: 0 = often a covered-query style win.

Summary

Debugging slow MongoDB queries is not about memorizing obscure internals. It’s about reading the evidence. Run explain("executionStats"), look for collection scans, compare examined versus returned, make sure sorts are index-backed, move $match earlier in pipelines, and shape compound indexes with ESR. When you don’t know where the problem is, use the profiler carefully to surface the slow query shapes that matter most. Inspect the plan, measure the work, then fix the query or index shape based on evidence rather than instinct. With tools like Spanna Pro to visualize query plans and index usage, you can turn this into a quick, repeatable workflow instead of a guessing game.