Side Project

Health Checks for Free

The Challenge

Spend the least possible amount of recurring money to monitor your application.

Frugality, one of Amazon's core principles, had become something of a habit from my first job there.

The Goal

Get notified, while I am online, whenever my server goes down.

Backstory: I am running an open source Go application on a single $5 DigitalOcean server. No load balancers, no fancy hardware, no high-end configuration, no Docker, no Vault; just the application running behind Caddy for self-signed certificates.

Search for the tools

The basic idea is to have another server monitor this process, but I did not want to set up and maintain a second DO server in a different location. Since pushing an alert from a dead server is not an option, polling is the alternative.

One of the options I considered was Pingdom at ~$7/month (its free tier is well hidden; I only found out about it later while talking to a friend).

PagerDuty is one of the well-known paid options at $9/month. I found a few other solutions similar to Pingdom, but none of them had international calling or the right integrations to notify me.

The one I finally settled on was HealthChecks. The idea is to set an expectation for a cron job that runs every X minutes, and to report whether that expectation was met, i.e. whether the cron finished running successfully. It has limits on the number of checks per user and on the history stored. I was already using Pushbullet extensively to pass websites from mobile to desktop.

HealthChecks had an integration with Pushbullet, which helped me configure the notifications. A notification meant something had gone wrong with the server.

The script

# SITE_URL is the monitored site; PING_URL is the HealthChecks ping
# endpoint (both are placeholders for the real URLs).
while true ; do
  t=$(curl -s -D - -o /dev/null "$SITE_URL" | head -1 | egrep -o 200)
  if [ "$?" -eq 0 ]; then
    # got an HTTP 200, so report a successful check
    curl -s --retry 3 "$PING_URL" > /dev/null
  fi
  echo -ne " $t "
  sleep 58
done

The False Positives:

The health check used to fail occasionally: the curl command would report that it could not resolve the domain's IP address or could not connect to the server (from the gateway). I chose not to spend time on this problem since it was intermittent.


I started learning Go a couple of months ago. Writing code is the best way to learn a new language and appreciate its beauty.

This is my second Go project, hosted at GitNotify.

The project aims at periodically notifying users about the new code changes that went in. While learning the language I felt I was missing out on new merges happening in smaller projects. I found amazing libraries in Go and wanted a way to get weekly diffs to understand what had changed.

GitNotify is useful to:

  • Track awesome lists
  • Observe small to large repositories
  • Get daily diffs for open source libraries
  • Host inside your own organization for private GitHub instances

Go is a language of simplicity and no magic (unlike Ruby/Rails); the philosophy is that the developer is responsible for all the code, and no complex frameworks or magic should be involved. I have come across some beautiful Go projects along the way.

Deciding on the periodic notifications

The interface for providing periodic notifications needed some thought. I wanted to give users complete customization, since different people wanted different times of day, and timezones were a factor to consider.

After a lot of brainstorming on a user interface that would let users customize the hour, minute, and day of the week, and thinking about the complex JavaScript interactions and backend validations involved, I decided to use a design pattern from the Unix environment: the crontab.

The Crontab

Crontab is the best and simplest way to set up recurring tasks. The UI became simple once I offered customization of only the hour and the day of the week for a recurring schedule.

Remember the sites that ask you to select your timezone or location? That information can be detected with JavaScript, so users don't have to re-enter what we can derive from their system time.

In my case, a timezone offset was not sufficient: the cron package I was using requires a timezone name rather than an offset. A named timezone also accounts for DST, which a fixed offset like -0600 cannot.

Coming Soon

  • Support for GitLab (added 2016/12/21)
  • Slack notifications (added 2016/12/12)

Making a Better, More Powerful API-Based PasteBin

Update: The code is open source at github/sairam/daata-portal

The product is aimed at developers, so that they can store arbitrary information, such as partial extracts from logs or whole log files, in something like S3 but hosted internally, where it need not scale. The aim is to provide a tool through which the whole company can share data and information.

Running commands on all machines is standard practice, but capturing the output and cleaning it up usually means writing one-off scripts to massage the data, especially when you are debugging during a downtime of your service.

I have talked to about ten of my developer/devops friends; some liked and appreciated the use case, while others were motivated to write and maintain a tool for their own specific need.

My idea of the API looks like a regular pastebin with apps. The tool should be hosted with authentication or within a company ecosystem.

Code Name: daata

daata is the simplest name I came up with, signifying data, and it had a .xyz domain available too.

Some of the features that are currently present:

  • Upload static files
    • Host text files
    • Host flame graphs from your code
  • Upload a zip file to have its contents extracted
    • Ideal during a build phase, e.g. to host your documentation
    • Host website HTML mocks from designers
    • Host static websites/pages/single-page apps
  • Bitly-like redirection of URLs, useful when sharing links with teams
  • Track simple metrics as time/key/value and render them into a graph for insights


Future plans:

  • Proxying/Mocking HTTP requests
    • Catch and respond to http requests in local/staging systems like SMS/Email
    • Mock requests in local environments from production data
    • Proxy requests through the service to capture/debug information
    • Replay requests from caught requests to one or more services
  • Features like ngrok to proxy local connections to a central setup
  • Display logfile stats like Kibana to display information
  • Pluggable modules that can be linked
  • Sharing files within the network/intranet (p2p sharing)

I started the analysis in early September 2016, and coding began on Sep 15, 2016. Progress has been a bit slow since I am new to the language, and the other languages I have worked with continue to influence how I do things.

P.S. Do not spend time finding good names for your projects; you can change them later anyway. I have spent more brain cycles trying to find a name than coding it up or marketing it.