In the last post, we looked at the differences between self-hosting Sentry and the SaaS option. In this post, we’ll take a deeper dive into self-hosting Sentry and look at some pitfalls you may encounter.
In order to self-host your Sentry installation, there are a couple of resources you’ll need to look over:
- https://docs.sentry.io/server/installation/ – the documentation for installing a Sentry server
- https://github.com/getsentry/onpremise – the repository you will use to actually self-host your Sentry installation in most cases
As the README in the onpremise repo says, all you need to do is run the ./install.sh script after modifying the docker-compose.yml file. That said, there are some important caveats.
Sentry does not provide SSL out of the box
If you want to connect to your server over SSL (and you should), the easiest way is to run an Nginx proxy configured with LetsEncrypt. Run your Sentry server on a port not accessible to external traffic (say, port 9000), then point Nginx at it with a config similar to the Nginx configuration in the Sentry docs. As suggested in the repo, you can optionally add the Nginx proxy to your docker-compose.yml file so that it starts whenever the Sentry services do.
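As a rough sketch, such a service in docker-compose.yml might look like the following. The image tag, mount paths, and service names here are illustrative, not taken from the repo, so adapt them to your setup:

```yaml
# Hypothetical addition to docker-compose.yml -- paths and names are
# placeholders; the actual Sentry web service name depends on your compose file.
nginx:
  image: nginx:stable
  ports:
    - "443:443"
  volumes:
    - ./nginx.conf:/etc/nginx/nginx.conf:ro
    - ./certs:/etc/nginx/certs:ro   # LetsEncrypt certificates
  depends_on:
    - web                           # the Sentry web service listening on port 9000
```

The nginx.conf itself would then proxy_pass to the Sentry web service on port 9000, along the lines of the example configuration in the Sentry docs.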
Sentry requires a fair amount of memory
This is noted in the README for the on-premise repo, but it bears elaboration. Sentry runs an internal database to track issues. To keep the schema in the right format, Sentry runs database migrations:
- The first time it runs
- Whenever it is upgraded
While the actual Sentry services are fairly lightweight, the database migrations themselves are very heavy. Moreover, if the migrations fail due to memory constraints, they can leave the system in an indeterminate state. When you start Sentry for the first time, or whenever you run sentry upgrade, you must have a VM with at least 4GB of memory (and preferably 8GB, to make sure nothing fails).
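A simple pre-flight check can save you from a half-finished migration. This is just a sketch for Linux hosts; the 4096MB threshold mirrors the guidance above and should be adjusted for your setup:

```shell
#!/bin/sh
# Sketch of a pre-flight memory check before running Sentry migrations.
# The 4096MB threshold mirrors the 4GB guidance above; adjust as needed.
required_mb=4096

# Parse available memory (in MB) out of /proc/meminfo (Linux only).
mem_available_mb() {
  awk '/MemAvailable/ {print int($2 / 1024)}' /proc/meminfo
}

if [ "$(mem_available_mb)" -lt "$required_mb" ]; then
  echo "WARNING: only $(mem_available_mb)MB available; migrations may fail" >&2
else
  echo "Enough free memory to run 'sentry upgrade'"
fi
```

Running this before the initial install or an upgrade gives you a chance to resize the VM first rather than recovering from a failed migration afterwards.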
The GitHub issue tracker has some examples of what can happen with too little free memory.
If you are hosting Sentry yourself as a startup, you may not need a VM with many resources day to day (depending on how much traffic you receive). You will, however, need a fairly large VM when you first start Sentry or while performing upgrades. My suggestion, therefore, is to bring up a large VM for the initial install and scale back down once installation is complete.
Have a plan to deal with disk space
Sentry tracks a fair amount of data, which will slowly fill your disk if you let it sit. At a minimum, you should set up a cron job to run the cleanup command (https://docs.sentry.io/server/cli/cleanup/) regularly; its --days option (default 30) controls how many days of trailing data to keep.
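For example, a crontab entry along these lines would trim old data nightly. The schedule and retention window are illustrative, and with the Docker-based install you would wrap the command in the appropriate `docker-compose run` invocation rather than calling sentry directly:

```
# Illustrative crontab entry: every night at 3am, delete event data
# older than 30 days using the cleanup command from the Sentry docs.
0 3 * * * sentry cleanup --days 30
```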
Sentry ships with an email server
This is useful for alerting specific parties to events. If you are relying on emails to alert your developers or other interested parties about critical issues, remind them to check their spam folders and to allow Sentry emails specifically. If you are hosting Sentry in a VM (or even on-premise), email providers like Gmail may route the emails Sentry sends to the spam folder based on their source. Marking the emails as “Not Spam” in Gmail fixes the issue; other email providers have similar mechanisms.
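One mitigation is to send through an authenticated SMTP provider rather than straight from the VM. Sentry’s config.yml exposes mail settings for this; the host and credentials below are placeholders for whatever provider you use:

```yaml
# Excerpt of Sentry's config.yml -- host and credentials are placeholders.
mail.backend: 'smtp'
mail.host: 'smtp.example.com'
mail.port: 587
mail.username: 'sentry-mailer'
mail.password: 'changeme'
mail.use-tls: true
mail.from: 'sentry@example.com'
```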
Monitoring the monitor
Sentry starts up with a single project called internal. This is Sentry’s internal error tracker, and it collects any stacktraces generated by the server itself. It’s worth configuring Sentry’s alerting to email whoever is in charge of keeping the system running whenever internal errors start appearing. If you use a separate service to health-check your production services, you may also want to point it at your Sentry instance.
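As a sketch, an external cron job could poll the instance and complain when it stops responding. The URL is a placeholder, and the `/_health/` path is an assumption that should be verified against your Sentry version:

```shell
#!/bin/sh
# Sketch of an external health check. SENTRY_URL is a placeholder, and the
# /_health/ endpoint should be verified against your Sentry version.
SENTRY_URL="${SENTRY_URL:-https://sentry.example.com}"

status=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' "${SENTRY_URL}/_health/")
if [ "$status" = "200" ]; then
  echo "Sentry is healthy"
else
  echo "Sentry health check returned HTTP ${status}" >&2
fi
```

Hooking a script like this into whatever already pages your team means an outage of the error tracker itself does not go unnoticed.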