Logs have long been treated as an afterthought, but they make a vital contribution to solving problems, provided they are well implemented, well organized, and searchable. Without logs, troubleshooting is a very tough job, and often nearly impossible.
This is especially true when you run applications inside containers. Logging for Docker containers must be planned in the early stages of design so that the logs stay well managed and the system remains easy to operate.
Logs are the key to finding solutions and troubleshooting problems. The first thing a system administrator does when a problem is reported is look at and analyze the logs. They tell the story of every layer of the system: application, storage, and networking.
Docker takes logging to the next level. Logging with Docker differs from traditional application logging because Docker adds complexity to the software stack; nonetheless, the functionality and objective remain the same.
Docker supports many logging drivers (such as Fluentd, JSON File, Journald, and Syslog); the default logging driver is JSON File. To use a different logging driver, change the value of 'log-driver' in the
/etc/docker/daemon.json file, and make sure to restart the Docker daemon for the change to take effect. All containers started after this change will use the new logging driver.
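For example, to switch the default driver to syslog, /etc/docker/daemon.json might look like the following sketch (the syslog address below is a placeholder for your own endpoint):

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://127.0.0.1:514"
  }
}
```

After saving the file, restart the daemon (for example, `sudo systemctl restart docker` on a systemd host) so the new default applies to newly started containers.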
The command to fetch a container's logs is

docker logs [OPTIONS] CONTAINER

docker logs prints the logs captured up to the moment the command is executed.
You can use many options, such as:
--details (show extra details provided to logs)
--timestamps (show timestamps)
Navigate to the Docker logs documentation to learn more about this command.
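As a quick sketch, assuming a running container named web (a hypothetical name; these commands require a running Docker daemon), typical invocations look like this:

```shell
# Show the last 100 lines with timestamps, then keep following new output:
docker logs --timestamps --tail 100 --follow web

# Show only entries from the last 10 minutes:
docker logs --since 10m web
```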
Docker logs fall into two main categories: infrastructure management logs and application logs.
The infrastructure management logs are further broken down into two subtypes.
a) Docker Engine
These logs are automatically captured by the operating system's service manager (for example, journald on systemd hosts). They can then be sent to a centralized logging server.
b) Infrastructure Services
These are containerized infrastructure services deployed for monitoring, auditing, reporting, and similar purposes. They generate important logs that need to be handled separately if they are not captured by the Docker Engine's default logging driver.
Application logs, the second category, are a combination of custom application logs and the STDOUT/STDERR output of the application's main process. Nothing special is needed to capture the STDOUT/STDERR logs, as the Docker Engine's default logging driver captures them automatically.
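With the default JSON File driver, each captured STDOUT/STDERR line is stored as one JSON record. The sketch below parses a sample line in that format (the line's content is made up for illustration):

```shell
# A sample record in the json-file driver's on-disk format:
line='{"log":"hello from app\n","stream":"stdout","time":"2023-01-01T00:00:00.000000000Z"}'

# Extract the stream name and the raw message with python3 (jq would also work):
python3 -c 'import json,sys; rec=json.loads(sys.argv[1]); print(rec["stream"], rec["log"], end="")' "$line"
# prints: stdout hello from app
```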
There are various methods for logging Docker containers. Some of them are:
In the first approach, the application running inside the Docker container uses a logging framework to handle the logging process. For instance, a Java application running inside a container may use
Log4j to send logs to a remote server, removing the load of log management from the OS and the Docker environment. However, this creates overhead on the application process running inside the container.
This approach lets developers use a logging framework within the application itself, without importing logging functionality from outside.
Containers are ephemeral by nature, which means all their data is lost when a container is removed, including any logs stored inside it. For permanent storage of logs, containers must either forward log events to a centralized logging service such as Loggly or store log events in a data volume (a mechanism for persisting data generated by and used by Docker containers).
Using a data volume, you can map a directory inside your container to a directory on the host machine so log data is stored permanently. You can also share a single volume across multiple containers as their central logging place.
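A minimal sketch of both variants, assuming the application writes its log files under /var/log/app inside the container (the paths and the image name my-app-image are hypothetical; a running Docker daemon is required):

```shell
# Map a host directory onto the container's log directory,
# so log files survive the container:
docker run -d --name app1 -v /srv/logs/app1:/var/log/app my-app-image

# Alternatively, create a named volume and share it with
# several containers as their central logging place:
docker volume create app-logs
docker run -d --name app2 -v app-logs:/var/log/app my-app-image
docker run -d --name app3 -v app-logs:/var/log/app my-app-image
```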
You can also use Docker logging drivers to send log events to a syslog instance running on the host machine. These drivers read log events directly from the container's STDOUT and STDERR output, removing the need to read from and write to log files.
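The logging driver can also be set per container at run time. A sketch, assuming a syslog daemon listening on UDP port 514 on the host and a hypothetical image name:

```shell
# Route this container's STDOUT/STDERR to the host's syslog daemon:
docker run -d \
  --log-driver syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  my-app-image
```

Note that with most non-default drivers (syslog included), `docker logs` no longer shows output for that container, since the events are shipped away instead of being stored locally.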
This approach lets you manage logging from within the Docker environment. Some containers are dedicated to logging only; their sole purpose is to gather logs from other containers, aggregate them, and dispatch them to a third-party service. With this approach, your dependency on the host machine is eliminated without any degradation of your logging functionality.
For example, the Logspout container automatically captures the STDOUT output of the other containers running on the same host and forwards it to a remote syslog service. You can define the destination URL when running the container.
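A sketch of running Logspout, based on its published image; logs.example.com is a placeholder for your own syslog endpoint, and the Docker socket mount is what lets Logspout discover the other containers:

```shell
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+udp://logs.example.com:514
```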
In this approach, every application container is paired with its own logging container. The first container (running the application) saves its logs to a data volume that the logging container can access. The second container (the logging container) uses file monitoring to organize the logs and move them to a log management tool such as Loggly.
On the downside, this approach is more complex and harder to set up than the other methods.
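The pairing described above can be sketched with a shared named volume; the log path and both image names (my-app-image, my-shipper-image) are hypothetical:

```shell
# Shared volume: the application writes log files into it,
# and the sidecar reads them back out:
docker volume create web-logs

# Application container writes its logs under /var/log/app:
docker run -d --name web -v web-logs:/var/log/app my-app-image

# Sidecar logging container mounts the same volume read-only,
# watches the files, and ships them to a log management service:
docker run -d --name web-logger -v web-logs:/var/log/app:ro my-shipper-image
```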
Docker offers a complete set of options for implementing a logging strategy. Understanding Docker's logging methods will help you choose the solution that best meets your requirements.
It is recommended not to leave log data only on the host machine unless there is a strong reason to. Having well-managed log data at your disposal will help both the operations and development teams.