5 Pro MongoDB Data Modeling Tips
We recently challenged ourselves by choosing MongoDB for a client project. After many prototypes, tests, wrong and right approaches, lively discussions, and theorizing about the role of NoSQL databases, we decided to commit and face the challenge responsibly.
From this experience we distilled five pro tips that kept us on track:
- Improve speed by denormalizing your models. In a NoSQL database, normalization creates extra work for both the developer and the machine. Let MongoDB shine by going with the flow and denormalizing as much as possible. It definitely takes some practice to strike this balance.
- Make your documents rich. Don’t be afraid to create fairly complex JSON documents: MongoDB can index fields deep inside a document, which comes in handy when you’re modeling something really complicated. Just keep in mind that each document is limited to 16 MB (the BSON document size limit).
- Denormalize even more. We found a few cases where read-heavy data became a performance concern. In those cases we crossed the Rubicon of denormalization and replicated the data across multiple documents. This added work to the infrequent writes, but kept the application running smoothly overall.
- Pre-populate your fields. This is a well-known tip from the relational DBMS world that translates well to NoSQL. If you find yourself frequently checking for null values before using a field, make it a habit to initialize the field when you create the record instead. The result is cleaner code with fewer bugs.
- Improve scan performance. A database can easily become fragmented after hundreds of inserts, updates, and deletes fired against it. You’re generally OK if you have the proper indexes, but if you don’t have one, or are trying to create one, scanning fragmented data files is painful. To fix this, use the repair command, which rewrites the data files and is roughly equivalent to a mongodump followed by a mongorestore.
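As a hedged sketch of the last tip, on older MongoDB versions the repair step can be invoked like this (note that `repairDatabase` has been deprecated and removed in recent releases, where `compact` or an actual dump/restore is the recommended route); `mydb` is a placeholder database name:

```shell
# Rewrites the data files, reclaiming fragmented space,
# much like a mongodump followed by a mongorestore.
mongo mydb --eval 'db.runCommand({ repairDatabase: 1 })'
```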
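To make the first two tips concrete, here is a minimal sketch using plain Python dicts to stand in for MongoDB documents; all collection and field names here are hypothetical, not from the original project:

```python
# Normalized approach: the post references the author by id, so every read
# of a post needs a second lookup just to display the author's name.
author = {"_id": 1, "name": "Ada", "email": "ada@example.com"}
post_normalized = {"_id": 10, "title": "Hello", "author_id": 1}

# Denormalized, rich approach: everything the read path needs is embedded
# in the post, so a single query returns it all. MongoDB can index nested
# fields (e.g. "comments.votes") in a document shaped like this.
post_denormalized = {
    "_id": 10,
    "title": "Hello",
    "author": {"id": 1, "name": "Ada"},   # duplicated from the authors collection
    "comments": [                          # nested structure in one document
        {"user": "Bob", "text": "Nice post", "votes": 3},
    ],
}

def render_post(post):
    """Render a post using only the data embedded in it -- no second query."""
    return f'{post["title"]} by {post["author"]["name"]}'

print(render_post(post_denormalized))  # → Hello by Ada
```

The trade-off is exactly the one described above: the embedded copy of the author's name must be updated on the (infrequent) writes that change it.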
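The pre-population tip might look like the following sketch, again with hypothetical field names: initialize every field with a sane default at creation time, so reads never have to guard against missing values.

```python
def new_user(name):
    # Every field the application will touch exists from day one, so
    # downstream code never has to check for null / missing keys.
    return {
        "name": name,
        "login_count": 0,     # counters start at zero, not null
        "tags": [],           # lists start empty, not missing
        "last_login": None,   # explicitly present, even when still unknown
    }

user = new_user("Ada")
user["login_count"] += 1  # safe: no existence check needed first
```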
If you have a pro tip we didn’t mention, show off your wizardry in the comments.