If your MongoDB queries feel sluggish, the fix is almost never “get a bigger server.” It’s almost always “make the database do less work.” Fewer documents scanned, fewer fields returned, less data sorted in memory, and indexes that actually match the way your queries are written. That’s 90% of performance tuning right there. The best part? MongoDB gives you a built-in X-ray machine to see exactly what’s going on. It’s called explain(), and it’s about to become your best friend.

Start with explain() — not guesses

Before you change a single index or rewrite a query, run it with .explain("executionStats"). This is the single most important debugging tool in your MongoDB toolkit. It shows you the winning query plan, whether an index was used, how many keys were scanned, how many documents were examined, and how many results came back.
db.orders.find(
  { tenantId: "t1", status: "open", total: { $gte: 100 } },
  { customer: 1, total: 1, createdAt: 1 }
).sort({ createdAt: -1 }).explain("executionStats")

What to look for in the results

Think of these four fields as your vital signs:
  • queryPlanner.winningPlan — The strategy MongoDB chose
  • executionStats.nReturned — How many documents came back
  • executionStats.totalKeysExamined — How many index entries were scanned
  • executionStats.totalDocsExamined — How many actual documents were read
Here’s the cheat sheet:
  • See COLLSCAN? MongoDB is scanning the entire collection. That’s almost always bad.
  • totalDocsExamined >> nReturned? The query is doing way more work than the result size justifies.
  • totalDocsExamined = 0? Congratulations — you’ve got a covered query, where MongoDB answered entirely from the index. That’s the Holy Grail.
  • totalDocsExamined ≈ nReturned? Your index is doing its job efficiently.
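The cheat sheet above can be turned into a tiny triage helper. This is plain JavaScript you can paste into a script or shell session — the field names match MongoDB's executionStats output, but the 10x threshold is an illustrative rule of thumb, not an official cutoff:

```javascript
// Rough triage of explain("executionStats") numbers.
// Field names match MongoDB's explain output; the 10x threshold
// is an illustrative rule of thumb, not an official cutoff.
function triagePlan(stats) {
  const { nReturned, totalKeysExamined, totalDocsExamined } = stats;
  if (totalDocsExamined === 0 && nReturned > 0) {
    return "covered query: answered entirely from the index";
  }
  if (totalKeysExamined === 0 && totalDocsExamined > 0) {
    return "collection scan: no index used";
  }
  if (totalDocsExamined > 10 * Math.max(nReturned, 1)) {
    return "inefficient: examining far more documents than it returns";
  }
  return "reasonable: docs examined roughly matches docs returned";
}

console.log(triagePlan({ nReturned: 50, totalKeysExamined: 50, totalDocsExamined: 0 }));
console.log(triagePlan({ nReturned: 10, totalKeysExamined: 0, totalDocsExamined: 5000 }));
```

Feed it the executionStats object from any explain() run and you get a one-line verdict before you dig into the full plan.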

The usual suspects: why queries go slow

1. Missing or mismatched indexes

This is the #1 cause of slow queries. Either there’s no usable index, or the existing index doesn’t match the query shape. Don’t index fields in isolation — index the combination of predicate, sort, projection, and collation that your query actually uses.

2. Poor compound index order

For compound indexes, field order isn’t just important — it’s everything. Follow the ESR Rule (Equality, Sort, Range). Put equality fields first, sort fields next, range fields last. Getting this wrong is the difference between a lightning-fast indexed query and one that still forces extra scanning or in-memory sorting.

3. In-memory sorts

If MongoDB can’t satisfy the sort from an index, it has to sort the matching documents in memory. That’s fine for 100 documents, but with 100,000 it becomes painful fast. Pro-tip: Design your indexes to support both the filter and the sort. The ESR Rule does this naturally.

4. Returning too much data

Even a perfectly selective query can be sluggish if it’s returning entire documents when the UI only needs three fields. This is like ordering the entire menu when you just wanted a coffee. Use projection to slim down the payload.

5. Expensive regex or late-stage filtering

Broad regex patterns and aggregation pipelines that put $match at the end instead of the beginning both force MongoDB to process way more data than necessary. Always filter early, and prefer prefix regex (^abc) over unanchored patterns.

The “Golden Rule” revisited: design indexes with ESR

If you read the indexes guide, you already know about ESR. Here’s a quick refresher applied to performance tuning:
  1. Equality (E): Fields you match exactly — put these first.
  2. Sort (S): Fields you sort by — put these next.
  3. Range (R): Fields with $gt, $gte, $lt, $lte — put these last.

Example

// Query
db.orders.find(
  { tenantId: "t1", status: "open", total: { $gte: 100 } }
).sort({ createdAt: -1 })

// A good candidate index
db.orders.createIndex({ tenantId: 1, status: 1, createdAt: -1, total: 1 })
That index puts equality matches first (tenantId, status), then the sort (createdAt), then the range field (total). MongoDB can narrow the results and return them in order without breaking a sweat.

Aim for covered queries

A covered query is one MongoDB answers entirely from the index — it never touches the documents on disk. This is the fastest possible read. In explain output, you’ll see totalDocsExamined: 0.
// Index that covers filter + sort + returned fields
db.users.createIndex({ status: 1, createdAt: -1, email: 1 })

// Query that can be covered
db.users.find(
  { status: "active" },
  { _id: 0, email: 1, createdAt: 1 }
).sort({ createdAt: -1 })
Because the index contains everything the query needs — filter, sort, and returned fields — MongoDB can serve the whole query from the index without ever fetching a document.

Reduce work wherever you can

Return fewer fields

Use projection to return only what the screen actually needs. Less data read, less data sent over the network, and sometimes it enables covered queries too:
db.users.find(
  { status: "active" },
  { name: 1, email: 1, plan: 1 }
)

Limit result sets

If the user only needs the first page or the top 50 rows, don’t fetch thousands of documents. Pro-tip: MongoDB can optimize $sort followed immediately by $limit — it only keeps the top N items in memory while sorting, which is massively faster than sorting everything.
db.events.aggregate([
  { $match: { tenantId: "t1", level: "error" } },
  { $sort: { timestamp: -1 } },
  { $limit: 100 }
])

Filter as early as possible

In aggregation pipelines, put $match first. Always. Early filtering reduces the documents flowing through every subsequent stage, and a leading $match can use indexes just like a normal find() query.

Sorting: fast vs. slow

Sorting is lightning-fast when the index already provides the order. It’s painfully slow when MongoDB has to collect all matching documents and sort them in memory. For any query that powers a list or grid view, make sure the index supports both the filter and the sort:
// Query
db.orders.find(
  { tenantId: "t1", status: "open" }
).sort({ createdAt: -1 })

// Index that supports both filter and sort
db.orders.createIndex({ tenantId: 1, status: 1, createdAt: -1 })
With this index, MongoDB reads matching documents in the correct order — no separate sort step needed.

Regex: handle with care

Regex is powerful but expensive. The performance depends entirely on the pattern:
// Fast — prefix regex can use an index
{ email: { $regex: "^nick@" } }

// Slow — unanchored regex scans everything
{ email: { $regex: "nick", $options: "i" } }
For exact matches, prefixes, enums, or IDs, always prefer normal query operators over regex. Save regex for when you genuinely need pattern matching.

Tuning aggregation pipelines

Aggregation pipelines follow the same “reduce work early” principle:
  • Put $match first — This is the single biggest pipeline optimization.
  • Use indexes for early filtering and sort support.
  • Apply $limit early when the UI only needs the top results.
  • Keep $project tight — Don’t pass fields you don’t need through the rest of the pipeline.
  • Be cautious with $sort, $group, and repeated $lookup — $sort and $group are blocking stages that can eat RAM on large inputs, and each $lookup adds join-like work for every document that reaches it.
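Putting those tips together, a tuned pipeline tends to follow the same shape every time (collection and field names here are illustrative):

```js
db.events.aggregate([
  { $match: { tenantId: "t1", level: "error" } },     // filter first — can use an index
  { $sort: { timestamp: -1 } },                        // index-backed sort, enables top-N with $limit
  { $limit: 50 },                                      // keep only what the UI needs
  { $project: { _id: 0, message: 1, timestamp: 1 } }   // slim the payload last
])
```

Projecting after the $limit keeps the $match + $sort prefix eligible for index use while still trimming the documents that actually leave the pipeline.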

Monitoring: beyond one-off explain()

For ongoing investigation, MongoDB provides the database profiler and slow query logging. These catch recurring slow query shapes in production that you’d never find with one-off explain() calls. Just be thoughtful about the slow operation threshold — profiling adds overhead, so you don’t want it cranked up to “log everything” on a busy server.
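For example, in mongosh (the 100 ms threshold is illustrative — pick a value that matches your own latency budget):

```js
// Profile only operations slower than 100 ms (level 1 = slow ops only)
db.setProfilingLevel(1, { slowms: 100 })

// Inspect the most recent slow operations the profiler recorded
db.system.profile.find().sort({ ts: -1 }).limit(5)

// Turn profiling off again when you're done
db.setProfilingLevel(0)
```

Each profile document records the query shape, execution time, and plan summary, so recurring offenders stand out quickly.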

The tuning workflow: a step-by-step recipe

  1. Run the query with .explain("executionStats").
  2. Check whether it uses IXSCAN or falls back to COLLSCAN.
  3. Compare nReturned vs totalKeysExamined vs totalDocsExamined.
  4. Check whether the sort is index-backed.
  5. Create or adjust a compound index using ESR.
  6. Reduce projection and result size if needed.
  7. Re-run explain() and compare the numbers.
  8. Use profiler or slow query logs to find recurring patterns in production.

Common mistakes (and how to dodge them)

Adding too many indexes

Indexes speed up reads, but every index adds write overhead. If you have 20 indexes on one collection, every insert has to update all 20 of them. Index the query patterns you actually use, not everything “just in case.”
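To find candidates for removal, the $indexStats aggregation stage reports how often each index has been used since the server last restarted:

```js
// Per-index usage counters (reset on server restart)
db.orders.aggregate([{ $indexStats: {} }])

// List all index definitions on the collection
db.orders.getIndexes()
```

An index with a near-zero access count over a representative window is a strong drop candidate — just remember the counters reset on restart, so check over a long enough period.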

Using single-field indexes for multi-field queries

A query that filters on three fields and sorts on a fourth needs a compound index, not a pile of separate single-field indexes. The ESR Rule exists precisely for this.

Ignoring projection

Everyone optimizes the filter and forgets about the payload. Returning full documents when the UI only needs three fields is one of the easiest performance wins to overlook.

Putting $match too late in a pipeline

The later filtering happens, the more documents flow through every stage. This is the aggregation pipeline equivalent of putting the bouncer at the exit instead of the entrance.

Relying too heavily on $lookup

Excessive $lookup usage can grind pipelines to a halt. If you’re doing multiple lookups on every query, consider whether embedding or reshaping the data model would eliminate the join-like work altogether.
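As a sketch of the trade-off (collections and fields here are illustrative), compare a per-read join against embedding the handful of fields the read path actually needs:

```js
// Join-like lookup executed on every read
db.orders.aggregate([
  { $match: { tenantId: "t1" } },
  { $lookup: {
      from: "customers",
      localField: "customerId",
      foreignField: "_id",
      as: "customer"
  } }
])

// Alternative data model: embed the fields the UI displays, e.g.
// { _id: ..., tenantId: "t1",
//   customer: { name: "Ada", email: "ada@example.com" }, ... }
// Then a plain find() serves the list view with no $lookup at all.
```

Embedding trades some write-time duplication for much cheaper reads — usually the right call when the looked-up fields are small and change rarely.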

Quick checklist

  • ✅ Use explain("executionStats") before guessing
  • ✅ Prefer IXSCAN over COLLSCAN
  • ✅ Keep totalDocsExamined close to nReturned
  • ✅ Build compound indexes using ESR
  • ✅ Support sorts with indexes
  • ✅ Use projection to return fewer fields
  • ✅ Limit result sets
  • ✅ Put $match early in pipelines
  • ✅ Use regex carefully
  • ✅ Monitor recurring slow query shapes with profiling/logs

Summary

Performance tuning in MongoDB is an evidence-driven loop: measure with explain, identify the waste, fix the index or query shape, and measure again. With tools like Spanna Pro to help you visualize query plans and index usage, you can turn this into a quick, repeatable workflow instead of a guessing game. Your database (and your users) will thank you.