ASP.NET - MongoDB vs MS SQL Server - Asked By avula on 13-Jun-13 12:34 PM

In one of my projects, our client is suggesting we use "MongoDB" instead of "SQL Server 2008".

Our project involves multi-user access with huge volumes of data. Writing, retrieving, and processing that data in SQL Server 2008 is giving us lots of performance issues.

We have implemented all the performance tuning techniques we know of, but performance has not improved as expected.

Requirement in short: we insert 10,000 records per request, so a single table can accumulate a huge amount of data in a day or a month.

I am getting problems while retrieving data and showing it in a grid (paging 10 at a time is OK, but showing everything at once causes performance issues).

Is MongoDB suited to our requirement?

If MongoDB is a good fit, how can I migrate the following current objects to MongoDB?
1) Stored procedures
2) Functions
3) Views
4) Triggers
5) Table-valued parameters
6) Long-running transactions
Robbe Morris replied to avula on 13-Jun-13 12:41 PM
Personally, I would never use an open source database platform for a paying client.  However, others like it.

http://www.nullskull.com/a/1476/store-aspnet-site-visitor-stats-in-mongodb.aspx
Bill B replied to Robbe Morris on 13-Jun-13 07:15 PM
No idea about MongoDB...

A couple of thoughts from my experience working with huge SQL Server tables:

1. On your inserts, you can still pass the 10k rows in per request, but write your insert code so that it inserts one row at a time. How? You could use a temp table and a cursor to do one at a time, all inside the stored proc. Insert into a temp table first so you're not locking the main table (depending on how many users are inserting). What I did was use one temp table per authorized user, so users don't lock each other out. And when submitting a big chunk, I serialized it to a string and used a timer call on another thread to actually do the insert, so the submit call didn't have to wait for it to process. A sketch of the staging-table part is below.
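A minimal sketch of that staging idea, assuming a hypothetical main table BigTable(Payload) and a per-user temp table; the table and column names here are placeholders, not from the thread:

-- Hypothetical sketch: stage the batch, then trickle rows into the
-- main table one at a time so it is never locked for long.
CREATE TABLE #Staging_User42     -- one temp table per user avoids cross-user locking
(
    Payload varchar(200) NOT NULL
);

-- ... bulk-load the 10,000 submitted rows into #Staging_User42 here ...

DECLARE @Payload varchar(200);
DECLARE StagingCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Payload FROM #Staging_User42;

OPEN StagingCursor;
FETCH NEXT FROM StagingCursor INTO @Payload;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- each single-row insert holds its lock on the main table only briefly
    INSERT INTO BigTable (Payload) VALUES (@Payload);
    FETCH NEXT FROM StagingCursor INTO @Payload;
END
CLOSE StagingCursor;
DEALLOCATE StagingCursor;
DROP TABLE #Staging_User42;

The cursor trades raw throughput for short locks; whether that trade is worth it depends on how many concurrent writers you have.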

2. On selects, you can use subqueries. The stored proc should take the last row displayed as an out parameter, set the result size, and return the data to the view with a select.  Yes, some will say OH NO, not a subquery, this will surely slow it down... it can if you do it wrong, but if you understand what is going on, you'll see why it works.

The way to do it is to write the query so that your subquery retrieves a small set of keys into a virtual table, then joins with the really big table. This way you use the key index, which is as quick as you're going to get, and the join back to the main table returns the fields for the grid rows almost as quickly.  This technique works well; I have used it on tables of 40 million+ rows, and bigger. The point is to reduce the amount of data being touched, avoid any table scans, and use an index or two.

a simple example:
select p1.* from [Photos].[dbo].[Photos] p1,
(SELECT [PhotoID] FROM [Photos].[dbo].[Photos] where SomeValue = 'YourFilter') VirtTable1
where p1.[PhotoID] = VirtTable1.[PhotoID];

This page shows a more complex example: http://www.akadia.com/services/sqlsrv_subqueries.html

It takes a bit of getting used to, but if you make your virtual table selection quick via an index and join it with the big table, you will be surprised at the performance gain.  People argued with me, but the proof is in the results with your stopwatch.  IT CAN BE DONE. I DID IT FOR MORE THAN ONE COMPANY with measurable success.

Also note that you can use several virtual tables: just wrap each in parentheses, give it a virtual table name, and you can play around endlessly to get the performance and results required. Here's one with two nested virtual tables used to grab a small subset:

SET ROWCOUNT 100;

Select * from SomeHugeTable,

  (Select VirtualCustomerTrans1.TransactionID, VirtualCustomerTrans1.CustomerID from
   (Select CustomerID, TransactionID, CityID from Orders
     where CustomerName = 'HappyClient' and OrderDate = '6/12/2013') VirtualCustomerTrans1,
   (select CityID from Cities where CityName = 'Hartford') VirtualTable1
   where VirtualCustomerTrans1.CityID = VirtualTable1.CityID
  ) QuickrSmallSubset

where SomeHugeTable.TransactionID = QuickrSmallSubset.TransactionID
        and SomeHugeTable.ID > @LastRowIDDisplayed

Not a great example, but the point is to make the subquery something quick that returns a small set.

To get grid paging, pass the row start back as an out parameter from the stored procedure, and pass in the max display rows and set it before the select. A sketch of such a proc follows.
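A minimal sketch of that kind of keyset-paging proc, assuming a hypothetical SomeHugeTable with a clustered key on ID; the proc name, table, and columns are illustrative, not from the thread:

CREATE PROCEDURE dbo.GetNextPage
    @LastRowIDDisplayed int OUTPUT,   -- key of the last row the grid showed
    @MaxDisplayRows     int           -- grid page size
AS
BEGIN
    SET NOCOUNT ON;

    -- grab just the keys for the next page; the key index makes this a quick seek
    SELECT TOP (@MaxDisplayRows) ID
    INTO #PageKeys
    FROM SomeHugeTable
    WHERE ID > @LastRowIDDisplayed
    ORDER BY ID;

    -- join the small key set back to the big table to return the grid rows
    SELECT h.*
    FROM SomeHugeTable h
    JOIN #PageKeys k ON h.ID = k.ID
    ORDER BY h.ID;

    -- hand back the new position for the next page request
    SELECT @LastRowIDDisplayed = MAX(ID) FROM #PageKeys;

    DROP TABLE #PageKeys;
END

Each "next page" click then calls the proc with the previous out value, so the query never touches more than one page of rows.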

hope that gets you going

You'll notice that the first filter is the Hartford one, which should hit only one or two rows in a city table; the next filter is by customer and date, so probably just a few rows (or maybe your 10k); then that joins with the main big table, but you return no more than the grid size (here I used 100 rows; pass that in).

So you only wait for a few index seeks to get a small set of GRIDSIZE rows back, instead of pushing huge chunks over the wire every time the user clicks next page, and you're never working with more data than you can view.

everything you have working stays working...

tada! you're a hero 

in about a day... or however long it takes you to rewrite your stored procs


http://www.codeproject.com/Articles/19229/SQL-SERVER-2005-T-SQL-Paging-Query-Technique-Compa

http://msdn.microsoft.com/en-us/library/ms187638%28v=SQL.105%29.aspx
Robbe Morris replied to Bill B on 13-Jun-13 10:24 PM
Nice.  I think future visitors will find your insight quite useful.
Sathish S replied to Bill B on 14-Jun-13 07:30 AM
Thanks Bill. You must write an article on this.