You won’t always have a perfect index for every query, but you may have single-field indexes for each filter. In such cases, PostgreSQL can use a Bitmap Scan to combine these indexes. MongoDB can also merge multiple index bounds in a single scan, or use index intersection to combine separate scans. Yet the MongoDB query planner rarely selects index intersection. Let’s look at the reasons behind this.
TL;DR: if you think you need index intersection, you probably need better compound indexes.
Test Setup
I create a collection with one hundred thousand documents and two fields, with an index on each:
let bulk = [];
for (let i = 0; i < 100000; i++) {
bulk.push({
a: Math.floor(Math.random()*100),
b: Math.floor(Math.random()*100)
});
}
db.demo.insertMany(bulk);
// separate indexes
db.demo.createIndex({ a: 1 });
db.demo.createIndex({ b: 1 });
In PostgreSQL, we’ll mirror the dataset as follows:
CREATE TABLE demo AS
SELECT id,
(random()*100)::int AS a,
(random()*100)::int AS b
FROM generate_series(1,100000) id;
CREATE INDEX demo_a_idx ON demo(a);
CREATE INDEX demo_b_idx ON demo(b);
In my MongoDB collection of 100,000 documents, only nine documents have both the “a” and “b” fields set to 42:
mongo> db.demo.countDocuments()
100000
mongo> db.demo.find({ a: 42, b: 42 }).showRecordId()
[
{ _id: ObjectId('6928697ae5fd2cdba9d53f54'), a: 42, b: 42, '$recordId': Long('36499') },
{ _id: ObjectId('6928697ae5fd2cdba9d54081'), a: 42, b: 42, '$recordId': Long('36800') },
{ _id: ObjectId('6928697ae5fd2cdba9d54a7c'), a: 42, b: 42, '$recordId': Long('39355') },
{ _id: ObjectId('6928697ae5fd2cdba9d55a3e'), a: 42, b: 42, '$recordId': Long('43389') },
{ _id: ObjectId('6928697ae5fd2cdba9d5a214'), a: 42, b: 42, '$recordId': Long('61779') },
{ _id: ObjectId('6928697ae5fd2cdba9d5e52a'), a: 42, b: 42, '$recordId': Long('78953') },
{ _id: ObjectId('6928697ae5fd2cdba9d5eeea'), a: 42, b: 42, '$recordId': Long('81449') },
{ _id: ObjectId('6928697ae5fd2cdba9d61f48'), a: 42, b: 42, '$recordId': Long('93831') },
{ _id: ObjectId('6928697ae5fd2cdba9d61f97'), a: 42, b: 42, '$recordId': Long('93910') }
]
In my PostgreSQL database, there are 100,000 rows and, among them, nine rows have the value 42 in both the “a” and “b” columns:
postgres=# select count(*) from demo;
count
--------
100000
(1 row)
postgres=# SELECT *, ctid FROM demo WHERE a = 42 AND b = 42;
a | b | id | ctid
----+----+-------+-----------
42 | 42 | 4734 | (25,109)
42 | 42 | 15678 | (84,138)
42 | 42 | 29464 | (159,49)
42 | 42 | 29748 | (160,148)
42 | 42 | 31139 | (168,59)
42 | 42 | 37785 | (204,45)
42 | 42 | 55112 | (297,167)
42 | 42 | 85823 | (463,168)
42 | 42 | 88707 | (479,92)
I displayed the ctid for PostgreSQL and the RecordId for MongoDB to see how the matching rows are distributed over the heap table (for PostgreSQL) or the WiredTiger B-Tree (for MongoDB).
MongoDB possible execution plans
I have executed db.demo.find({ a: 42, b: 42 }), and multiple plans have been evaluated:
mongo> db.demo.getPlanCache().list();
[
{
version: '1',
queryHash: 'BBC007A6',
planCacheShapeHash: 'BBC007A6',
planCacheKey: '51C56FDD',
isActive: true,
works: Long('968'),
worksType: 'works',
timeOfCreation: ISODate('2025-11-27T15:09:11.069Z'),
createdFromQuery: { query: { a: 42, b: 42 }, sort: {}, projection: {} },
cachedPlan: {
stage: 'FETCH',
filter: { b: { '$eq': 42 } },
inputStage: {
stage: 'IXSCAN',
keyPattern: { a: 1 },
indexName: 'a_1',
isMultiKey: false,
multiKeyPaths: { a: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { a: [ '[42, 42]' ] }
}
},
creationExecStats: [
{
nReturned: 12,
executionTimeMillisEstimate: 0,
totalKeysExamined: 967,
totalDocsExamined: 967,
executionStages: {
stage: 'FETCH',
filter: { b: { '$eq': 42 } },
nReturned: 12,
executionTimeMillisEstimate: 0,
works: 968,
advanced: 12,
needTime: 955,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
docsExamined: 967,
alreadyHasObj: 0,
inputStage: {
stage: 'IXSCAN',
nReturned: 967,
executionTimeMillisEstimate: 0,
works: 968,
advanced: 967,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { a: 1 },
indexName: 'a_1',
isMultiKey: false,
multiKeyPaths: { a: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { a: [Array] },
keysExamined: 967,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
}
}
},
{
nReturned: 10,
executionTimeMillisEstimate: 0,
totalKeysExamined: 968,
totalDocsExamined: 968,
executionStages: {
stage: 'FETCH',
filter: { a: { '$eq': 42 } },
nReturned: 10,
executionTimeMillisEstimate: 0,
works: 968,
advanced: 10,
needTime: 958,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 0,
docsExamined: 968,
alreadyHasObj: 0,
inputStage: {
stage: 'IXSCAN',
nReturned: 968,
executionTimeMillisEstimate: 0,
works: 968,
advanced: 968,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 0,
keyPattern: { b: 1 },
indexName: 'b_1',
isMultiKey: false,
multiKeyPaths: { b: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { b: [Array] },
keysExamined: 968,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
}
}
},
{
nReturned: 7,
executionTimeMillisEstimate: 0,
totalKeysExamined: 968,
totalDocsExamined: 7,
executionStages: {
stage: 'FETCH',
filter: { '$and': [ [Object], [Object] ] },
nReturned: 7,
executionTimeMillisEstimate: 0,
works: 968,
advanced: 7,
needTime: 961,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 0,
docsExamined: 7,
alreadyHasObj: 0,
inputStage: {
stage: 'AND_SORTED',
nReturned: 7,
executionTimeMillisEstimate: 0,
works: 968,
advanced: 7,
needTime: 961,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 0,
failedAnd_0: 232,
failedAnd_1: 230,
inputStages: [ [Object], [Object] ]
}
}
}
],
candidatePlanScores: [ 2.012596694214876, 1.0105305785123966, 1.0073314049586777 ],
indexFilterSet: false,
estimatedSizeBytes: Long('4826'),
solutionHash: Long('6151768200665613849'),
host: '40ae92e83a12:27017'
}
]
The cached plan uses only one index, on “a”, but there are two additional possible plans in the cache: one using the index on “b” and another using a combination of both indexes with AND_SORTED. The scores (candidatePlanScores) are:
- 2.012 for the index on “a”
- 1.010 for the index on “b”
- 1.007 for the AND_SORTED intersection of the indexes on “a” and “b”
This may be surprising: given how the data was generated, we should expect similar costs for the two single-index plans. Looking at the trial period on the candidate plans, the index on “a” finished its scan (isEOF: 1), while the other two, although performing similarly and close to completion, did not reach the end before the trial stopped (isEOF: 0). MongoDB adds an EOF bonus of 1 when a trial plan finishes before the others, and that explains the higher score. So it’s not that the index on “a” is really better than the other plans: all plans are comparable, but the first one to start and finish got the bonus.
On top of that, index intersection incurs a small tie-breaking penalty. The scores break down as:
- Index on “a”: 1 (base) + 0.012 (productivity) + 1.0 (EOF bonus) = 2.012
- Index on “b”: 1 (base) + 0.010 (productivity) + 0 (no EOF) = 1.010
- AND_SORTED: 1 (base) + 0.007 (productivity) + 0 (no EOF) = 1.007
Even without the penalties, AND_SORTED would not have been chosen. The problem is that the score measures productivity in units of work (advanced/works) but does not account for how heavy each unit is: a single-index plan must fetch the document and apply the residual filter within one work unit, whereas AND_SORTED advances the intersection without any additional fetch or filter.
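Based on the scoring logic in MongoDB's plan ranker (the tie-breaker epsilon is an implementation detail that may change between versions), a short sketch reproduces the candidatePlanScores from the plan cache above:

```javascript
// Sketch of MongoDB's plan-ranking formula:
// score = 1 (base) + advanced/works (productivity) + EOF bonus + tie-breakers.
// Each tie-breaker is epsilon = min(1/(10*works), 1e-4); one is granted for
// "no blocking sort" and one for "no index intersection", so an
// intersection plan loses one epsilon compared to the single-index plans.
function score({ advanced, works, isEOF, isIxisect }) {
  const epsilon = Math.min(1 / (10 * works), 1e-4);
  const productivity = advanced / works;
  const eofBonus = isEOF ? 1 : 0;
  const noSortBonus = epsilon;                  // none of these plans sort
  const noIxisectBonus = isIxisect ? 0 : epsilon;
  return 1 + productivity + eofBonus + noSortBonus + noIxisectBonus;
}

// Numbers taken from creationExecStats in the plan cache listing above:
const planA = score({ advanced: 12, works: 968, isEOF: true, isIxisect: false });
const planB = score({ advanced: 10, works: 968, isEOF: false, isIxisect: false });
const andSorted = score({ advanced: 7, works: 968, isEOF: false, isIxisect: true });

console.log(planA, planB, andSorted);
// Reproduces candidatePlanScores: 2.0125966..., 1.0105305..., 1.0073314...
```

The EOF bonus of 1 dwarfs the productivity term, which is why finishing first during the trial decides the winner here.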
To show the AND_SORTED plan, I’ll force it on my lab database for the following examples in this article:
// get the current parameters (defaults):
mongo> const allParams = db.adminCommand({ getParameter: "*" });
mongo> Object.keys(allParams)
  .filter(k => k.toLowerCase().includes("intersection"))
  .forEach(k => print(k + " : " + allParams[k]));
internalQueryForceIntersectionPlans : false
internalQueryPlannerEnableHashIntersection : false
internalQueryPlannerEnableIndexIntersection : true
// set all three to true:
db.adminCommand({
setParameter: 1,
internalQueryPlannerEnableIndexIntersection: true,
internalQueryPlannerEnableHashIntersection: true,
internalQueryForceIntersectionPlans: true
});
I set internalQueryForceIntersectionPlans to force index intersection (the query planner still runs, but intersection plans get a 3-point boost to their score). AND_SORTED intersection is possible by default, but I also enabled hash intersection (internalQueryPlannerEnableHashIntersection) for a later test where AND_SORTED cannot be used.
Index Intersection in MongoDB
Now that I forced index intersection, I can observe it with execution statistics:
db.demo.find({ a: 42, b: 42 }).explain("executionStats").executionStats
The execution plan shows that both indexes were scanned with a single range (seeks: 1) and combined with an AND_SORTED stage before fetching the documents for the result:
{
executionSuccess: true,
nReturned: 9,
executionTimeMillis: 4,
totalKeysExamined: 2009,
totalDocsExamined: 9,
executionStages: {
isCached: false,
stage: 'FETCH',
filter: {
'$and': [ { a: { '$eq': 42 } }, { b: { '$eq': 42 } } ]
},
nReturned: 9,
executionTimeMillisEstimate: 0,
works: 2010,
advanced: 9,
needTime: 2000,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
docsExamined: 9,
alreadyHasObj: 0,
inputStage: {
stage: 'AND_SORTED',
nReturned: 9,
executionTimeMillisEstimate: 0,
works: 2010,
advanced: 9,
needTime: 2000,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
failedAnd_0: 501,
failedAnd_1: 495,
inputStages: [
{
stage: 'IXSCAN',
nReturned: 964,
executionTimeMillisEstimate: 0,
works: 965,
advanced: 964,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { a: 1 },
indexName: 'a_1',
isMultiKey: false,
multiKeyPaths: { a: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { a: [ '[42, 42]' ] },
keysExamined: 964,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
},
{
stage: 'IXSCAN',
nReturned: 1045,
executionTimeMillisEstimate: 0,
works: 1045,
advanced: 1045,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 0,
keyPattern: { b: 1 },
indexName: 'b_1',
isMultiKey: false,
multiKeyPaths: { b: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { b: [ '[42, 42]' ] },
keysExamined: 1045,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
}
]
}
}
}
Here, AND_SORTED means it’s the intersection (AND) of two streams that are both sorted on the same key: the natural order of RecordId, which is the key used to fetch documents from the collection. Not only does it require no additional sort to merge the two sources, but it may also optimize the fetches from the collection. These details matter for understanding variations with compound indexes, where an additional key can change the ordering.
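As a mental model (not MongoDB's actual implementation), the AND_SORTED stage behaves like a merge intersection of two RecordId streams, analogous to a merge join:

```javascript
// Merge-intersection of two RecordId lists that are already in ascending
// order: a single pass, no hash table, no extra sort.
function mergeIntersect(aIds, bIds) {
  const out = [];
  let i = 0, j = 0;
  while (i < aIds.length && j < bIds.length) {
    if (aIds[i] === bIds[j]) { out.push(aIds[i]); i++; j++; }
    else if (aIds[i] < bIds[j]) i++;   // advance whichever stream is behind
    else j++;
  }
  return out;   // RecordIds to fetch from the collection, still sorted
}

// RecordIds borrowed from the find() output earlier in the article:
console.log(mergeIntersect([36499, 36800, 39355, 43389], [36800, 39355, 61779]));
```

Because the output stays in RecordId order, the subsequent FETCH visits the collection in its natural order.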
Bitmap-OR in PostgreSQL
Here is the equivalent on PostgreSQL:
postgres=# EXPLAIN (ANALYZE, BUFFERS, VERBOSE, COSTS off)
SELECT *
FROM demo
WHERE a = 42 AND b = 42;
QUERY PLAN
--------------------------------------------------------------------------------------------
Bitmap Heap Scan on public.demo (actual time=0.136..0.147 rows=9.00 loops=1)
Output: a, b
Recheck Cond: ((demo.a = 42) AND (demo.b = 42))
Heap Blocks: exact=9
Buffers: shared hit=15
-> BitmapAnd (actual time=0.127..0.127 rows=0.00 loops=1)
Buffers: shared hit=6
-> Bitmap Index Scan on demo_a_idx (actual time=0.047..0.047 rows=964.00 loops=1)
Index Cond: (demo.a = 42)
Index Searches: 1
Buffers: shared hit=3
-> Bitmap Index Scan on demo_b_idx (actual time=0.053..0.053 rows=960.00 loops=1)
Index Cond: (demo.b = 42)
Index Searches: 1
Buffers: shared hit=3
Planning Time: 0.074 ms
Execution Time: 0.162 ms
Both engines combine outputs from multiple index scans. PostgreSQL implements it as a bitmap in memory (BitmapAnd), a generic operation used not only for combining multiple indexes but also for inverted indexes or even a single index scan to avoid too many random heap table reads. MongoDB’s index intersection is more specific to the filters and available indexes and, in this case, merges sorted RecordIds without going through intermediate representations in memory.
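To picture the difference, here is a toy model (not PostgreSQL's actual code) of the BitmapAnd idea: each index scan sets a bit per matching heap page, the bitmaps are ANDed, and the surviving pages are then visited in physical order by the Bitmap Heap Scan:

```javascript
// Build a page-level bitmap from ctids ([page, offset] pairs).
function pageBitmap(ctids, nPages) {
  const bits = new Uint8Array(Math.ceil(nPages / 8));
  for (const [page] of ctids) bits[page >> 3] |= 1 << (page & 7);
  return bits;
}

// Intersect two bitmaps with a bitwise AND, byte by byte.
function bitmapAnd(x, y) {
  return x.map((byte, i) => byte & y[i]);
}

// List set pages: ascending page order means sequential-friendly heap access.
function pagesToVisit(bits) {
  const pages = [];
  for (let p = 0; p < bits.length * 8; p++)
    if (bits[p >> 3] & (1 << (p & 7))) pages.push(p);
  return pages;
}

// ctid pages borrowed from the SELECT output earlier in the article:
const bmA = pageBitmap([[25, 109], [84, 138], [159, 49]], 512);
const bmB = pageBitmap([[84, 1], [159, 2], [300, 3]], 512);
console.log(pagesToVisit(bitmapAnd(bmA, bmB)));
```

A real bitmap also tracks tuple offsets (or degrades to lossy page-only bits under work_mem pressure, hence the Recheck Cond), but the page-level intersection is the core of the operation.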
Hash intersection when input scans are not sorted
AND_SORTED was fast because both indexes had an equality filter on their full key, so each scan returns entries in RecordId order, easy to intersect and filter before fetching the documents. It was similar to a merge join on the two indexes. With more complex indexes that have additional fields in the key, the output would no longer be ordered by RecordId. I’ll test this by adding a field to each index:
db.demo.createIndex({ a: 1 , _id:1 });
db.demo.createIndex({ b: 1 , _id:1 });
db.demo.dropIndex({ a: 1 });
db.demo.dropIndex({ b: 1 });
I run the same query:
db.demo.find({ a: 42, b: 42 }).explain("executionStats").executionStats
The execution plan shows that both indexes were scanned on a single range, as before, but combined with an AND_HASH stage before fetching the documents for the result:
{
executionSuccess: true,
nReturned: 9,
executionTimeMillis: 6,
totalKeysExamined: 2010,
totalDocsExamined: 9,
executionStages: {
isCached: false,
stage: 'FETCH',
filter: {
'$and': [ { a: { '$eq': 42 } }, { b: { '$eq': 42 } } ]
},
nReturned: 9,
executionTimeMillisEstimate: 0,
works: 2013,
advanced: 9,
needTime: 2003,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
docsExamined: 9,
alreadyHasObj: 0,
inputStage: {
stage: 'AND_HASH',
nReturned: 9,
executionTimeMillisEstimate: 0,
works: 2013,
advanced: 9,
needTime: 2003,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
memUsage: 59622,
memLimit: 33554432,
mapAfterChild_0: 1046,
inputStages: [
{
stage: 'IXSCAN',
nReturned: 1046,
executionTimeMillisEstimate: 0,
works: 1047,
advanced: 1046,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { b: 1, _id: 1 },
indexName: 'b_1__id_1',
isMultiKey: false,
multiKeyPaths: { b: [], _id: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { b: [ '[42, 42]' ], _id: [ '[MinKey, MaxKey]' ] },
keysExamined: 1046,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
},
{
stage: 'IXSCAN',
nReturned: 964,
executionTimeMillisEstimate: 0,
works: 965,
advanced: 964,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { a: 1, _id: 1 },
indexName: 'a_1__id_1',
isMultiKey: false,
multiKeyPaths: { a: [], _id: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { a: [ '[42, 42]' ], _id: [ '[MinKey, MaxKey]' ] },
keysExamined: 964,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
}
]
}
}
}
AND_HASH is similar to a hash join. It cannot walk both streams in parallel like AND_SORTED does; instead, it builds an in-memory hash table of RecordIds from the first child scan, then probes it with the second. This consumes memory proportional to the number of matches from the first index (memUsage: 59622).
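A minimal sketch of this build-and-probe idea (a mental model, not MongoDB's implementation):

```javascript
// Hash intersection of two RecordId streams that are NOT in the same order.
function hashIntersect(firstChild, secondChild) {
  // Build phase: hash table of RecordIds from the first child scan;
  // memory grows with its match count (cf. memUsage in the plan above).
  const seen = new Set(firstChild);
  // Probe phase: keep ids from the second child that are in the table.
  return secondChild.filter(id => seen.has(id));
}

console.log(hashIntersect([9, 1, 5], [5, 12, 9]));
```

Unlike the merge version, the build side must be fully consumed before any result can be returned, and the output order follows the probe side, not RecordId order.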
Compound Indexes as the right solution
We can create a compound index because we have equality filters on two different fields of the same document. An index that begins with these fields provides direct access to the filtered entries:
db.demo.dropIndexes();
db.demo.createIndex({ a: 1, b: 1 });
db.demo.find({ a: 42, b: 42 }).explain("executionStats").executionStats;
{
executionSuccess: true,
nReturned: 9,
executionTimeMillis: 0,
totalKeysExamined: 9,
totalDocsExamined: 9,
executionStages: {
isCached: false,
stage: 'FETCH',
nReturned: 9,
executionTimeMillisEstimate: 0,
works: 11,
advanced: 9,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
docsExamined: 9,
alreadyHasObj: 0,
inputStage: {
stage: 'IXSCAN',
nReturned: 9,
executionTimeMillisEstimate: 0,
works: 10,
advanced: 9,
needTime: 0,
needYield: 0,
saveState: 0,
restoreState: 0,
isEOF: 1,
keyPattern: { a: 1, b: 1 },
indexName: 'a_1_b_1',
isMultiKey: false,
multiKeyPaths: { a: [], b: [] },
isUnique: false,
isSparse: false,
isPartial: false,
indexVersion: 2,
direction: 'forward',
indexBounds: { a: [ '[42, 42]' ], b: [ '[42, 42]' ] },
keysExamined: 9,
seeks: 1,
dupsTested: 0,
dupsDropped: 0
}
}
}
In this case, a single compound index was chosen instead of intersecting two individual indexes. The index bounds show the intersection directly: { a: [ '[42, 42]' ], b: [ '[42, 42]' ] }. This approach is optimal because it limits the scan to only those index entries that are required (keysExamined: 9) for producing the results (nReturned: 9).
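To see why keysExamined equals nReturned, here is a toy model of a compound-index scan: entries ordered by (a, b, recordId), one seek to the start of the range, and a scan that stops as soon as the key changes:

```javascript
// Toy compound-index scan over entries sorted by (a, b, recordId).
function compoundScan(entries, a, b) {
  // Seek: binary search for the first entry >= (a, b).
  let lo = 0, hi = entries.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    const e = entries[mid];
    if (e.a < a || (e.a === a && e.b < b)) lo = mid + 1; else hi = mid;
  }
  // Scan: every key examined is a key returned, so no wasted work.
  const out = [];
  for (let i = lo; i < entries.length; i++) {
    const e = entries[i];
    if (e.a !== a || e.b !== b) break;    // bounds exceeded: stop
    out.push(e.recordId);
  }
  return out;
}

const entries = [                         // already sorted by (a, b, recordId)
  { a: 41, b: 99, recordId: 1 },
  { a: 42, b: 41, recordId: 2 },
  { a: 42, b: 42, recordId: 3 },
  { a: 42, b: 42, recordId: 7 },
  { a: 42, b: 43, recordId: 8 },
  { a: 43, b: 0, recordId: 9 },
];
console.log(compoundScan(entries, 42, 42));
```

With two single-field indexes, each scan had to visit roughly a thousand entries to intersect down to nine; here the B-Tree ordering does the intersection for free.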
In PostgreSQL, it is also recommended to use a compound index rather than relying on bitmap intersection:
DROP INDEX demo_a_idx, demo_b_idx;
CREATE INDEX demo_ab_idx ON demo(a, b);
postgres=# EXPLAIN (ANALYZE, BUFFERS, VERBOSE, COSTS off)
SELECT *
FROM demo
WHERE a = 42 AND b = 42;
QUERY PLAN
-------------------------------------------------------------------------------------
Bitmap Heap Scan on public.demo (actual time=0.026..0.038 rows=9.00 loops=1)
Output: a, b, x, id
Recheck Cond: ((demo.a = 42) AND (demo.b = 42))
Heap Blocks: exact=9
Buffers: shared hit=11
-> Bitmap Index Scan on demo_ab_idx (actual time=0.016..0.016 rows=9.00 loops=1)
Index Cond: ((demo.a = 42) AND (demo.b = 42))
Index Searches: 1
Buffers: shared hit=2
Planning Time: 0.072 ms
Execution Time: 0.053 ms
(11 rows)
PostgreSQL still used a bitmap scan here, because it can optimize access to the heap table, but there is no longer any need for a BitmapAnd operation.
Conclusion
Although MongoDB’s index intersection is powerful in theory, it’s rarely visible in practice for two reasons:
- The query planner’s scoring method often underestimates index intersection, due to the EOF bonus, the biased productivity measure, and the intersection tie-breaking penalty. These are known issues but are given low priority, mainly because of the next reason.
- Compound indexes almost always provide the best performance for queries and are favored by the query planner as a result. While you can force intersection plans for testing, production workloads should rely on well-designed compound indexes.
For similar cases, PostgreSQL relies on Bitmap Scan to run index intersection as a bitmap AND. Bitmap scans are widely used even with a single index, to optimize access to the heap table from one index, multiple indexes, or indexes with multiple entries per row.