MongoDB's mongod server seems to use a huge amount of RAM if it is available on the machine and the OS's VM allows it (in idle cases). This is expected because MongoDB uses memory-mapped files, and the OS will not reclaim that RAM unless some other process on the machine requests resources. With the memory limit on WebFaction's shared hosting (80MB for my plan), my application's mongod process eventually gets killed once it grows past 80MB. Has anyone come across this issue? Also, is anyone using MongoDB for their application? asked 27 Nov '10, 22:15 narxgun
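(For reference: one way to see how much memory mongod reports using is the serverStatus command. A minimal sketch with pymongo, assuming mongod listens on localhost:27017 and pymongo is installed:)

    # Check mongod's reported memory use via serverStatus.
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    status = client.admin.command("serverStatus")

    # 'resident' is RSS in MB; 'virtual' and 'mapped' reflect the
    # memory-mapped data files (mapped may be absent on newer engines).
    mem = status["mem"]
    print("resident: %s MB, virtual: %s MB, mapped: %s MB"
          % (mem.get("resident"), mem.get("virtual"), mem.get("mapped")))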
If you can tell how often your process reaches 80MB, you can run a cron job that restarts your MongoDB instance to keep its memory usage under the limit.
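(A sketch of what such a cron-driven watchdog could look like: restart mongod only when its resident memory exceeds the plan limit. The pidfile path and restart script are placeholders; adjust them to how mongod is actually started on the account.)

    import os
    import subprocess

    PIDFILE = os.path.expanduser("~/mongodb/mongod.pid")            # hypothetical pidfile
    LIMIT_KB = 80 * 1024                                            # 80MB plan limit
    RESTART_CMD = [os.path.expanduser("~/bin/restart_mongod.sh")]   # hypothetical restart script

    def resident_kb(pid):
        # VmRSS in /proc/<pid>/status is the process's resident memory in kB.
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0

    with open(PIDFILE) as f:
        pid = int(f.read().strip())

    if resident_kb(pid) > LIMIT_KB:
        subprocess.call(RESTART_CMD)

You could run it from cron every few minutes, e.g. */5 * * * * python ~/bin/mongod_watchdog.py (script name and path are placeholders).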
The restarts will cause application downtime. And if there's an unexpected traffic spike, we'd effectively have to shut mongod down for good.
My database's data size is more than 80MB, and since MongoDB uses memory-mapped files, continuous access requests can fill up all the available memory without anything being evicted. We also don't want the mongod process to use that much memory. Is it possible to enforce a rule on specific process sizes inside the OS?
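(A quick way to see how the on-disk data size compares to the 80MB limit is the dbStats command; a sketch with pymongo, where "mydb" is a placeholder database name:)

    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    stats = client["mydb"].command("dbStats")

    # dataSize/storageSize are reported in bytes; with memory-mapped files,
    # whatever has been touched can end up resident in RAM.
    print("dataSize: %.1f MB, storageSize: %.1f MB"
          % (stats["dataSize"] / 1048576.0, stats["storageSize"] / 1048576.0))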
You cannot set memory limits at the OS level; it has to be done at the application level.
But we can't set a memory usage limit on the application either, since the application here is MongoDB.
In this case, it sounds like you're going to have a hard time using MongoDB on a shared host.