MongoDB: querying and retrieving 90 million documents

  mongodb, question

Assuming a single document type:

{
 "_id" : ObjectId("5a19403b421aa92332bc2b32"),
 "id" : "95957f4a9eab11e787f1509a4c4be0cd",
 "incre" : 1,
 "city" : "city name"
}

With 90 million documents, how can I quickly retrieve the id of every document whose city is Beijing?
incre is an auto-incrementing id.
My current approach uses 100 threads, each fetching 20 documents at a time by incre range, polling through the 90 million documents starting from 1:

Find ("$ and": [{"city": "Beijing"}, {"Incre": {"$ GTE": 50, "$ lt": 70}}]})

Suppose each query takes 1 second; with 100 threads that is 2,000 documents per second.
At that rate, 90 million documents would take 45,000 seconds, about 750 minutes (roughly 12.5 hours).
Is there a faster way?
Any advice would be much appreciated.
Thanks.

- Any experts around?

From experience, I suspect a direct find({"city": "Beijing"}) may well be faster. You might as well benchmark the two approaches yourself.
Multithreading here adds a great deal of complexity while bringing limited benefit, and it can even be counterproductive if you are not well versed in concurrent programming.
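A minimal sketch of that single-query approach in the mongo shell, assuming a collection named db.docs and an index on city (both assumptions, not from the original answer); the cursor streams every matching id in one pass, with no incre bookkeeping:

// Assumed single-field index so the city filter avoids a full collection scan
db.docs.createIndex({ city: 1 })

// One query, iterated as a cursor; project only the id field to cut transfer size
var cursor = db.docs.find(
  { city: "Beijing" },
  { _id: 0, id: 1 }
).batchSize(1000)

cursor.forEach(function (doc) {
  print(doc.id)  // or collect/write the ids wherever they are needed
})

Whether this beats the sliced multithreaded approach depends on the index and working-set size, so benchmarking both, as suggested above, is the safest call.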