While venturing into the world of MongoDB I had a lot of “aha” experiences. I can’t remember them all well enough to write about them, but a few stand out.
One common way to start a new .NET/MS SQL project is to start at the database layer. Whether you’re sitting in Enterprise Manager creating tables and relations, or coding up your classes in a POCO model, you tend to think more about the relations in your data and the ways your application will need to “fetch” it, and worry about performance later. And, God knows why, in my mindset the less data you needed to get per dataset, the faster it had to be, right?
But for me, working with MongoDB meant thinking about performance first: ways to insert/update, and THEN how the application would access the data. A lot of the performance gains can be achieved by using $inc instead of updating complete documents. That can be quite hard to implement if your documents are more than just a few strings and integers.
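To make that concrete, here is a minimal sketch using the legacy 1.x C# driver; the collection, field name and amount are made up for illustration:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

class IncExample
{
    static void Main()
    {
        var collection = new MongoClient("mongodb://localhost")
            .GetServer().GetDatabase("game").GetCollection("players");

        var playerId = ObjectId.GenerateNewId(); // some player's _id

        // Instead of fetching the whole document, changing Gold in memory
        // and saving everything back, let the server do the math atomically:
        collection.Update(
            Query.EQ("_id", playerId),
            Update.Inc("Gold", 25));
    }
}
```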
So to challenge myself I set the goal of creating a browser-based RPG game, where no page would take more than 20 milliseconds to generate. As I progressed and the game (and database) got more complex, I found myself doing more and more “database” stuff directly in the web pages rather than in a “proper” DAL layer, to get that “extra” performance. While load testing I also started getting concurrency issues (two web pages updating the same document).
So after a few weeks I deleted everything and started over. This time, wiser from experience, I started with a different mindset that included evaluating each document and field with regard to concurrency. (Do a Google search on atomic updates and concurrency in MongoDB if you’re interested in this.) To avoid having tons of small “update” statements scattered all over my code, I considered two ways to handle updates. One is the traditional “Entity Framework” approach, where all my classes would have some kind of “tracking” ability to detect updates, as sketched below. I still believe this is the best way, but it is a LOT of work to implement. I also started wondering if you couldn’t just automate all this, and while googling that, I came across UpdateDocumentBuilder.
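Roughly what I mean by that tracking idea; a minimal, hypothetical sketch (class and property names are made up):

```csharp
using System.Collections.Generic;

// Each setter records which field changed, so a DAL could later build
// one minimal update statement from the Dirty set instead of saving
// the whole document back.
class TrackedPlayer
{
    public readonly HashSet<string> Dirty = new HashSet<string>();

    private int _gold;
    public int Gold
    {
        get { return _gold; }
        set { _gold = value; Dirty.Add("Gold"); }
    }
}
```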
This class is really cool, but it has two big issues. The first comes from MongoDB itself: you cannot pop/push multiple items of an array in one update statement. Sure, you could extend the class to split one update into many, but I will leave that to someone else. The other issue is that it uses $set on all updates instead of embracing the whole “we don’t care in what order updates get done” thinking. So I created a new version of the class that uses $inc on all number values. To support atomic updates I also add a lastupdatedon field and use it in my query when needed.
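Roughly what an update produced this way looks like; a sketch with made-up field names, assuming the lastupdatedon value was read together with the document:

```csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Driver.Builders;

class AtomicUpdateExample
{
    static void BuyItem(MongoCollection<BsonDocument> collection,
                        ObjectId playerId, DateTime lastUpdatedOn)
    {
        // Only match the document if nobody changed it since we read it.
        var query = Query.And(
            Query.EQ("_id", playerId),
            Query.EQ("lastupdatedon", lastUpdatedOn));

        var update = Update
            .Inc("Gold", -100)                       // numbers go through $inc
            .Set("lastupdatedon", DateTime.UtcNow);  // non-numbers still need $set

        var result = collection.Update(query, update);
        if (result.DocumentsAffected == 0)
        {
            // Someone else updated the document first: re-read and retry.
        }
    }
}
```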
OData. There are so many skilled people out there who would be much better at building this than me, but until that happens I had to work with my own implementation. I found a ton of issues in my last post, so I’ve uploaded a new version that has fewer errors and better support for the Microsoft OData client (adding a service reference to your OData feed). Beware: it doesn’t support properties starting with an underscore (_). So if you want to expose _t, use a different name, add [BsonIgnore] to avoid double updates, and then use my FilterInformation attribute to support queries on it (see JIRA 742).
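That looks roughly like this; the FilterInformation constructor shape is assumed here (the real one is in the test project):

```csharp
using System;
using MongoDB.Bson.Serialization.Attributes;

// Stand-in for the post's FilterInformation attribute so the sketch
// compiles; its real definition lives in the test project.
[AttributeUsage(AttributeTargets.Property)]
public class FilterInformationAttribute : Attribute
{
    public FilterInformationAttribute(string fieldName) { }
}

public class MonsterDto
{
    public string Name { get; set; }

    // Expose the _t discriminator under a different name: [BsonIgnore]
    // keeps the property out of serialization (no double updates), and
    // FilterInformation lets OData queries on "Type" hit the real _t field.
    [BsonIgnore]
    [FilterInformation("_t")]
    public string Type { get; set; }
}
```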
Basically, just use the new BSONFilter2 from your Web API controller. You can test different queries by running the TestBSONFilter project.
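Something along these lines; the exact BSONFilter2 constructor and conversion method are assumptions here, so check the test project for the real API:

```csharp
using System.Collections.Generic;
using System.Web.Http;
using MongoDB.Bson;
using MongoDB.Driver;

public class PlayersController : ApiController
{
    private static readonly MongoCollection<BsonDocument> Players =
        new MongoClient("mongodb://localhost")
            .GetServer().GetDatabase("game").GetCollection("players");

    public IEnumerable<BsonDocument> Get()
    {
        // BSONFilter2 parses the OData query string ($filter, $top, ...)
        // into a MongoDB query. (Constructor/method names assumed; see
        // the TestBSONFilter project for the real usage.)
        var filter = new BSONFilter2(Request.RequestUri.Query);
        return Players.Find(filter.ToMongoQuery());
    }
}
```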
You can see the updated filter and my updated UpdateDocumentBuilder in this test project.