I was using a VM managed by Vagrant, and it turns out the base box I was using defaulted to about 512 MB of RAM, while Cloudera Manager was configured with a maximum heap size of 2 GB. Ouch. The service would eventually exhaust the available memory, and the OS killed it.
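If you suspect the same thing is happening to you, the kernel log is the place to confirm it: when the OOM killer fires, it leaves a trace. A quick (distro-dependent) way to check:

```shell
# Look for OOM-killer activity in the kernel log. The exact message
# varies by kernel version, but "Killed process" is the usual marker;
# the victim shows up as a java process (Cloudera Manager Server).
dmesg | grep -i 'killed process'

# On systems with journald, the equivalent would be:
# journalctl -k | grep -i 'out of memory'
```

If a line like `Out of memory: Killed process 1234 (java)` shows up, you have your culprit.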
I finally found the important information hidden on the requirements page: http://www.cloudera.com/content/cloudera/en/documentation/core/v5-3-x/topics/cm_ig_cm_requirements.html
There it says (highlighting added):
- RAM - 4 GB is recommended for most cases and is required when using Oracle databases. 2 GB may be sufficient for non-Oracle deployments with fewer than 100 hosts. However, to run the Cloudera Manager Server on a machine with 2 GB of RAM, you must tune down its maximum heap size (by modifying -Xmx in /etc/default/cloudera-scm-server). Otherwise the kernel may kill the Server for consuming too much RAM.
Well, thank you. May I suggest putting something like that on the Troubleshooting page as well?
Once I saw this, it didn't take long to figure out what was going on (after roughly 2 or 3 days of debugging, that is...)
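The other way to fix it, of course, is to give the VM enough memory in the first place. In a Vagrantfile that looks something like this (the box name is just illustrative; 4 GB matches Cloudera's recommendation above):

```ruby
# Vagrantfile: raise the VM's memory above the base box's ~512 MB default
# so the Cloudera Manager heap plus OS overhead actually fits.
Vagrant.configure("2") do |config|
  config.vm.box = "centos-6.5"        # whatever base box you use
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096                  # 4 GB, per Cloudera's recommendation
  end
end
```

Then `vagrant reload` to apply the new memory setting to an existing VM.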