S3 logging

Logs from the reverse-proxy are collected by a sidecar process running Filebeat, which pushes the log stream to a Kafka topic for later consumption by Logstash.

Filebeat

Filebeat is a tool written in Go that "tails" log files, applies minimal transformations (adding fields and context) and, in our case, pushes the records to Kafka.

The configuration is generated by Nomad when a Træfik proxy is spawned on a node. See the Git repository that contains the job definitions.
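As a rough illustration, the generated configuration could look something like the sketch below. The log path, broker list and topic name are placeholders, not the values Nomad actually templates in:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/traefik/access.log   # hypothetical log path
    fields:
      service: traefik                # extra context added to each record
    fields_under_root: true

output.kafka:
  hosts: ["kafka01.example.org:9092"] # placeholder broker list
  topic: "traefik-access-logs"        # placeholder topic name
```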

Logstash

Logstash is the tool that reads the aggregated log stream from Kafka, does most of the transformation and writes the result to Elasticsearch. The daemon runs as a Docker container in the MONIT Marathon cluster. The sources and image can be found in GitLab.
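A minimal pipeline matching this flow might look like the following. The broker address, topic, grok pattern and index name are assumptions for illustration; the real pipeline in GitLab is the reference (Træfik access logs use Common Log Format by default, hence the `COMMONAPACHELOG` pattern):

```
input {
  kafka {
    bootstrap_servers => "kafka01.example.org:9092"
    topics            => ["traefik-access-logs"]
    codec             => "json"
  }
}

filter {
  # Parse the access-log line into structured fields.
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
  # Use the request timestamp as the event timestamp.
  date {
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
  }
}

output {
  elasticsearch {
    hosts => ["https://es.example.org:9200"]
    index => "traefik-%{+YYYY.MM.dd}"
  }
}
```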

Elasticsearch

We finally have our dedicated Elasticsearch instance managed by the Elasticsearch Service \o/ A scheduled job deletes data older than a month (using Curator).
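For reference, a Curator action file implementing such a retention policy could look like this sketch; the index prefix and the exact retention period are assumptions:

```yaml
actions:
  1:
    action: delete_indices
    description: Drop log indices older than one month
    options:
      ignore_empty_list: true
    filters:
      # Select our indices by name prefix (placeholder value).
      - filtertype: pattern
        kind: prefix
        value: traefik-
      # Keep only those whose date suffix is older than 30 days.
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```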

There's not much to configure on our side beyond a few useful links and the endpoint config repository.
