No idea about MongoDB...
A couple of thoughts from my experience working with huge SQL Server tables:
1. On your inserts: you can still accept the 10k rows in one call, but write your insert code so the rows go into the big table one at a time. How? Insert the batch into a temp/staging table so it's not locking the main table, then use a cursor to move the rows over one by one... it can all live in the stored proc. Depending on how many users are inserting, you can also give each authorized user their own staging table so they don't lock other users. And when a big chunk was submitted, what I did was serialize it to a string and use a timer call on another thread to do the actual insert, so the submit call didn't have to wait for it to process. (A rough sketch of the staging-table part follows after point 2.)
2. On selects: you can use subqueries. The stored proc should take the last row displayed as an output parameter plus a page size, set the row count, and return the data to the view with a select. Yes, some will say "OH NO, not a subquery, this will surely slow it down"... it can if you do it wrong, but once you understand what is going on, you'll see why it works.
The way to do it is to write the subquery so it retrieves a small set of keys into a virtual (derived) table, then join that with the really big table. This way you hit the key index, which is about as quick as you're going to get, and the join back to the main table is almost as fast, returning only the fields for the rows you are about to display in the grid. This technique works well; I have used it on 40-million-plus-row tables and bigger. The point is to reduce the amount of data being touched, avoid any table scans, and lean on an index or two.
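Back to point 1 for a second: here's a rough sketch of the staging-table idea. All the table, column, and proc names below are invented (one staging table per authorized user, matching whatever your real schema looks like), and the serialize-plus-timer part lives in your application code, not in SQL:

-- hypothetical staging table for one authorized user; names are made up
CREATE TABLE dbo.PhotoStaging_User42
(
    StagingID  INT IDENTITY(1,1) PRIMARY KEY,
    PhotoID    INT NOT NULL,
    SomeValue  NVARCHAR(100) NULL
);
GO

-- drains the staging table into the big table one row at a time
CREATE PROCEDURE dbo.DrainPhotoStaging_User42
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @StagingID INT, @PhotoID INT, @SomeValue NVARCHAR(100);

    -- STATIC cursor works off a copy, so deleting the processed rows below is safe
    DECLARE staging_cur CURSOR LOCAL STATIC FOR
        SELECT StagingID, PhotoID, SomeValue
        FROM dbo.PhotoStaging_User42;

    OPEN staging_cur;
    FETCH NEXT FROM staging_cur INTO @StagingID, @PhotoID, @SomeValue;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- one small insert at a time keeps the locks on the main table short
        INSERT INTO dbo.Photos (PhotoID, SomeValue)
        VALUES (@PhotoID, @SomeValue);

        DELETE FROM dbo.PhotoStaging_User42 WHERE StagingID = @StagingID;

        FETCH NEXT FROM staging_cur INTO @StagingID, @PhotoID, @SomeValue;
    END

    CLOSE staging_cur;
    DEALLOCATE staging_cur;
END

The submit call just dumps its 10k rows into the staging table (fast, no contention on the main table) and returns; the drain proc runs afterwards on its own schedule.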
A simple example of the subquery approach from point 2:
select p1.* from [Photos].[dbo].[Photos] p1,
(SELECT [PhotoID] FROM [Photos].[dbo].[Photos] where SomeValue = 'YourFilter') VirtTable1
where p1.[PhotoID] = VirtTable1.[PhotoID];
This page shows a more complex example: http://www.akadia.com/services/sqlsrv_subqueries.html
It takes a bit of getting used to, but if you make the virtual-table selection quick (by index) and then join it with the big table, you will be surprised at the performance gain. People argued with me, but the proof is in the stopwatch results. It can be done; I did it for more than one company with measurable success.
Also note that you can use several virtual tables: just wrap each subquery in parentheses, give it a virtual-table name, and you can play around endlessly to get the performance and results you need. Here's one with two nested virtual tables used to grab a small subset:
set rowcount 100

Select * from SomeHugeTable,
    (Select TransactionID, CustomerID from
        (Select CustomerID, TransactionID, CityID from Orders
         where CustomerName = 'HappyClient' and OrderDate = '6/12/2013') VirtualCustomerTrans1,
        (select CityID from Cities where CityName = 'Hartford') VirtualTable1
     where VirtualCustomerTrans1.CityID = VirtualTable1.CityID
    ) QuickrSmallSubset
where SomeHugeTable.TransactionID = QuickrSmallSubset.TransactionID
and SomeHugeTable.ID > @LastRowIDDisplayed
Not a great example, but the point is to make the subquery something quick that returns a small set.
To get grid paging, pass the row start (the last row ID displayed) as an output parameter of the stored procedure, and pass in the max display rows and set it before the select.
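As a rough sketch (the proc, table, and parameter names here are invented to match the Photos example above; adapt them to your schema):

CREATE PROCEDURE dbo.GetPhotoPage
    @LastRowID  INT OUTPUT,   -- last PhotoID the grid displayed; comes back as the new last row
    @MaxRows    INT           -- grid page size, e.g. 100
AS
BEGIN
    SET NOCOUNT ON;

    -- grab one page worth of rows, starting just past the last row displayed
    SELECT TOP (@MaxRows) p.*
    INTO #Page
    FROM dbo.Photos p
    WHERE p.PhotoID > @LastRowID
    ORDER BY p.PhotoID;

    -- hand the new "last row" back so the caller can ask for the next page
    SELECT @LastRowID = MAX(PhotoID) FROM #Page;

    -- return just this page to the grid
    SELECT * FROM #Page ORDER BY PhotoID;
END

Calling it looks like this: the grid's first request passes 0, and every "next page" click passes back whatever came out of the output parameter:

DECLARE @Last INT;
SET @Last = 0;
EXEC dbo.GetPhotoPage @LastRowID = @Last OUTPUT, @MaxRows = 100;
-- @Last now holds the key of the last row returned; send it back for the next page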
Hope that gets you going.
You'll notice that the innermost query is the Hartford filter, which should be one or two rows in a city table. The next filter is by customer and date, so probably just a few rows... or maybe your 10k rows. Then that joins with the main big table, but it returns no more than the grid size; here I used 100 rows, and you can pass that value in. So you only wait for a few index seeks to get a small set of GRIDSIZE rows back, instead of trying to push huge chunks over the wire every time you click next page, and you're never working with more data than you can view.
Everything you already have working stays working... tada! You're a hero, in about a day... or however long it takes you to rewrite your stored procs.
http://www.codeproject.com/Articles/19229/SQL-SERVER-2005-T-SQL-Paging-Query-Technique-Compa
http://msdn.microsoft.com/en-us/library/ms187638%28v=SQL.105%29.aspx