explain(), and it’s about to become your best friend.
Start with explain() — not guesses
Before you change a single index or rewrite a query, run it with .explain("executionStats"). This is the single most important debugging tool in your MongoDB toolkit. It shows you the winning query plan, whether an index was used, how many keys were scanned, how many documents were examined, and how many results came back.
What to look for in the results
Think of these four fields as your vital signs:
- queryPlanner.winningPlan: the strategy MongoDB chose
- executionStats.nReturned: how many documents came back
- executionStats.totalKeysExamined: how many index entries were scanned
- executionStats.totalDocsExamined: how many actual documents were read
- See COLLSCAN? MongoDB is scanning the entire collection. That’s almost always bad.
- totalDocsExamined >> nReturned? The query is doing way more work than the result size justifies.
- totalDocsExamined = 0? Congratulations — you’ve got a covered query, where MongoDB answered entirely from the index. That’s the Holy Grail.
- totalDocsExamined ≈ nReturned? Your index is doing its job efficiently.
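In mongosh, checking those vital signs might look like this (the orders collection and its query shape are placeholders; adapt them to your schema):

```javascript
// Run the query with full execution statistics (hypothetical query shape).
const stats = db.orders
  .find({ status: "shipped", total: { $gt: 100 } })
  .explain("executionStats");

// The four vital signs:
printjson(stats.queryPlanner.winningPlan);     // look for IXSCAN vs COLLSCAN
print(stats.executionStats.nReturned);         // documents returned
print(stats.executionStats.totalKeysExamined); // index entries scanned
print(stats.executionStats.totalDocsExamined); // documents actually read
```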
The usual suspects: why queries go slow
1. Missing or mismatched indexes
This is the #1 cause of slow queries. Either there's no usable index, or the existing index doesn't match the query shape. Don't index fields in isolation; index the combination of predicate, sort, projection, and collation that your query actually uses.
2. Poor compound index order
For compound indexes, field order isn't just important; it's everything. Follow the ESR Rule (Equality, Sort, Range): put equality fields first, sort fields next, range fields last. Getting this wrong is the difference between a lightning-fast indexed query and one that still forces extra scanning or in-memory sorting.
3. In-memory sorts
If MongoDB can't satisfy the sort from an index, it has to sort the matching documents in memory. That's fine for 100 documents, but with 100,000 it becomes painful fast. Pro-tip: design your indexes to support both the filter and the sort. The ESR Rule does this naturally.
4. Returning too much data
Even a perfectly selective query can be sluggish if it returns entire documents when the UI only needs three fields. This is like ordering the entire menu when you just wanted a coffee. Use projection to slim down the payload.
5. Expensive regex or late-stage filtering
Broad regex patterns and aggregation pipelines that put $match at the end instead of the beginning both force MongoDB to process way more data than necessary. Always filter early, and prefer prefix regex (^abc) over unanchored patterns.
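To make suspect #5 concrete, here is the anchored-versus-unanchored difference, assuming a hypothetical users collection with an index on name:

```javascript
// Anchored prefix: MongoDB can seek to the matching range of an index on name.
db.users.find({ name: /^abc/ });

// Unanchored: even with the same index, every key must be tested
// against the pattern, which is effectively a full index scan.
db.users.find({ name: /abc/ });
```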
The “Golden Rule” revisited: design indexes with ESR
If you read the indexes guide, you already know about ESR. Here's a quick refresher applied to performance tuning:
- Equality (E): fields you match exactly. Put these first.
- Sort (S): fields you sort by. Put these next.
- Range (R): fields with $gt, $gte, $lt, $lte. Put these last.
Example
The index lists the equality fields first (tenantId, status), then the sort field (createdAt), then the range field (total). MongoDB can narrow the results and return them in order without breaking a sweat.
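A sketch of what that could look like in mongosh (the orders collection and the literal values are assumptions for illustration):

```javascript
// Query: equality on tenantId and status, sort on createdAt, range on total.
db.orders.find({
  tenantId: "acme",
  status: "paid",
  total: { $gte: 100 }
}).sort({ createdAt: -1 });

// Matching ESR index: Equality (tenantId, status), Sort (createdAt), Range (total).
db.orders.createIndex({ tenantId: 1, status: 1, createdAt: -1, total: 1 });
```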
Aim for covered queries
A covered query is one MongoDB answers entirely from the index; it never touches the documents on disk. This is the fastest possible read. In explain output, you'll see totalDocsExamined: 0.
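As an illustrative sketch (index and field names assumed), a covered query needs every filtered and projected field in the index, with _id excluded:

```javascript
// Index containing every field the query touches.
db.orders.createIndex({ status: 1, total: 1 });

// Filter and projection use only indexed fields; _id is excluded,
// so MongoDB can answer from the index alone.
db.orders.find(
  { status: "shipped" },
  { _id: 0, status: 1, total: 1 }
);
// explain("executionStats") should then report totalDocsExamined: 0.
```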
Reduce work wherever you can
Return fewer fields
Use projection to return only what the screen actually needs. Less data read, less data sent over the network, and sometimes it enables covered queries too.
Limit result sets
If the user only needs the first page or the top 50 rows, don't fetch thousands of documents. Pro-tip: MongoDB can optimize $sort followed immediately by $limit; it only keeps the top N items in memory while sorting, which is massively faster than sorting everything.
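For example (the collection and page size are illustrative):

```javascript
// Top 50 newest shipped orders: sort + limit lets MongoDB keep only
// 50 documents in memory instead of sorting the full result set.
db.orders.find({ status: "shipped" })
  .sort({ createdAt: -1 })
  .limit(50);
```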
Filter as early as possible
In aggregation pipelines, put $match first. Always. Early filtering reduces the documents flowing through every subsequent stage, and a leading $match can use indexes just like a normal find() query.
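A before/after sketch (collection, fields, and values are hypothetical):

```javascript
// Slow: every order flows through $group before any filtering happens.
db.orders.aggregate([
  { $group: { _id: "$customerId", total: { $sum: "$total" } } },
  { $match: { _id: "cust-42" } }
]);

// Fast: a leading $match shrinks the stream immediately and can use
// an index on customerId.
db.orders.aggregate([
  { $match: { customerId: "cust-42" } },
  { $group: { _id: "$customerId", total: { $sum: "$total" } } }
]);
```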
Sorting: fast vs. slow
Sorting is lightning-fast when the index already provides the order. It's painfully slow when MongoDB has to collect all matching documents and sort them in memory. For any query that powers a list or grid view, make sure the index supports both the filter and the sort.
Regex: handle with care
Regex is powerful but expensive. The performance depends entirely on the pattern.
Tuning aggregation pipelines
Aggregation pipelines follow the same "reduce work early" principle:
- Put $match first: this is the single biggest pipeline optimization.
- Use indexes for early filtering and sort support.
- Apply $limit early when the UI only needs the top results.
- Keep $project tight: don't pass fields you don't need through the rest of the pipeline.
- Be cautious with $sort, $group, and repeated $lookup: these are blocking stages that can eat RAM on large inputs.
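Putting those rules together, a tuned pipeline might be shaped like this (collection, fields, and values are placeholders):

```javascript
db.orders.aggregate([
  { $match: { tenantId: "acme", status: "paid" } },  // filter first, index-backed
  { $sort: { createdAt: -1 } },                      // index-supported sort
  { $limit: 20 },                                    // keep only the top results
  { $project: { _id: 0, orderId: 1, total: 1, createdAt: 1 } } // tight payload
]);
```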
Monitoring: beyond one-off explain()
For ongoing investigation, MongoDB provides the database profiler and slow query logging. These catch recurring slow query shapes in production that you'd never find with one-off explain() calls. Just be thoughtful about the slow operation threshold: profiling adds overhead, so you don't want it cranked up to "log everything" on a busy server.
The tuning workflow: a step-by-step recipe
- Run the query with .explain("executionStats").
- Check whether it uses IXSCAN or falls back to COLLSCAN.
- Compare nReturned vs totalKeysExamined vs totalDocsExamined.
- Check whether the sort is index-backed.
- Create or adjust a compound index using ESR.
- Reduce projection and result size if needed.
- Re-run explain() and compare the numbers.
- Use profiler or slow query logs to find recurring patterns in production.
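For the final step, enabling the profiler might look like this (the 100 ms threshold is just an example; tune it for your workload):

```javascript
// Level 1: record only operations slower than slowms (here, 100 ms).
db.setProfilingLevel(1, { slowms: 100 });

// Inspect the most recent slow operations captured by the profiler.
db.system.profile.find().sort({ ts: -1 }).limit(5);
```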
Common mistakes (and how to dodge them)
Adding too many indexes
Indexes speed up reads, but every index adds write overhead. If you have 20 indexes on one collection, every insert has to update all 20. Index the query patterns you actually use, not everything "just in case."
Using single-field indexes for multi-field queries
A query that filters on three fields and sorts on a fourth needs a compound index, not a pile of separate single-field indexes. The ESR Rule exists precisely for this.
Ignoring projection
Everyone optimizes the filter and forgets about the payload. Returning full documents when the UI only needs three fields is one of the easiest performance wins to overlook.
Putting $match too late in a pipeline
The later filtering happens, the more documents flow through every stage. This is the aggregation pipeline equivalent of putting the bouncer at the exit instead of the entrance.
Relying too heavily on $lookup
Excessive $lookup usage can grind pipelines to a halt. If you're doing multiple lookups on every query, consider whether embedding or reshaping the data model would eliminate the join-like work altogether.
Quick checklist
- ✅ Use explain("executionStats") before guessing
- ✅ Prefer IXSCAN over COLLSCAN
- ✅ Keep totalDocsExamined close to nReturned
- ✅ Build compound indexes using ESR
- ✅ Support sorts with indexes
- ✅ Use projection to return fewer fields
- ✅ Limit result sets
- ✅ Put $match early in pipelines
- ✅ Use regex carefully
- ✅ Monitor recurring slow query shapes with profiling/logs

