Logging: Why Is Logging Important?
A guide to why you should log, and how to choose a logging module.
Logging is the process of writing information to a log file, which can be used later for debugging when an issue arises.
Logging is crucial because if your code breaks and you are unable to debug the issue, the problem remains unresolved. To get to the root cause analysis (RCA) of a problem, it is important that you log.
Why is logging important?
Consider a scenario: one fine morning your code suddenly stops working. Instead of simply restarting it, your primary concern should be to understand the root cause of the problem. How will you understand it?
You need a robust logging mechanism you can refer to in order to understand the problem; the second challenge is then to store these logs for future reference. Both are covered if you follow best practices while implementing logging.
The drawback of not having proper logging is that you will never be able to tell when your system broke. Keep in mind that unless you provide an end-to-end solution for a problem, you will definitely face that same problem again.
It is worth understanding what needs to be logged and debugging the problem end to end, so that you don't waste time solving the same problem twice.
Different ways of logging in Node.js
In Node.js there are several modules you can use for logging, such as log4js, Winston, Morgan, Bunyan, etc. You can use any of these, since most of them offer similar features; you just need to understand the module well and implement it properly.
I have personally used log4js and Winston, and both have covered every use case I've had.
Before implementing any of these modules, try to understand the different log levels, such as trace, debug, info, warn, error, and fatal.
These log levels may vary slightly between modules. You can also create custom levels for your particular use case, but I would not recommend that: it is non-standard and will only create confusion.
Always follow best practices and make sure you provide all the configuration your use case needs, such as maximum retention days and maximum file size.
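To make the idea of log levels concrete, here is a minimal plain-JavaScript sketch of level filtering. This is for illustration only: modules like Winston and log4js implement this (and much more) for you, and the level names below are just the common conventional ones.

```javascript
// Minimal sketch of log-level filtering, for illustration only.
// Lower number = higher severity.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3, trace: 4 };

function createLogger(threshold) {
  const logger = {};
  for (const level of Object.keys(LEVELS)) {
    logger[level] = (message) => {
      // Emit only messages at or above the configured severity.
      if (LEVELS[level] <= LEVELS[threshold]) {
        console.log(`${new Date().toISOString()} [${level.toUpperCase()}] ${message}`);
      }
    };
  }
  return logger;
}

const logger = createLogger('info');
logger.error('payment service unreachable'); // emitted
logger.debug('retrying in 500ms');           // suppressed at the "info" level
```

A real logging module adds transports (console, file, remote), formatting, and rotation on top of exactly this kind of severity check.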
Things to remember while choosing a logging tool
- It should have a log-rotation facility.
This means that if you want to keep, say, the last 10 days of logs, the module you choose should provide this facility. Although this functionality could be achieved by running a cron job, it is better if the module provides it by default.
- It should have a max file size facility.
A maximum file size lets you store logs in chunks across different files, which makes debugging easier. For example, if you know that your application was down at a particular time, you can debug the corresponding log file directly.
- It should be able to create logs based on status code. Logs should be segregated by status code: for example, 2xx responses should go into a success log, while 4xx and 5xx responses should go into an error log.
Tips that should be followed while implementing a logger
- Keep the same log file names in all of your microservices. Follow a standard configuration throughout your entire project, or even better, throughout your organization.
- Keep the location of the logs consistent: I recommend using the same location in every microservice. This makes debugging easier.
- The maximum file size should be the same in all the microservices.
- The maximum number of days to keep logs should also be the same in all the microservices.
- Configuration should be identical, or at least equivalent, in all cases.
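One way to keep all of the above consistent is a single shared configuration module that every microservice requires. This is a sketch under assumed values; the option names happen to match Winston's file-transport options (`maxsize`, `maxFiles`), but the numbers are illustrative, not recommendations.

```javascript
// logging-config.js -- one shared logging configuration, required by every
// microservice so that file names, rotation size, and retention stay
// identical project-wide.
const loggingConfig = {
  filename: 'app.log',       // same log file name in every service
  maxsize: 10 * 1024 * 1024, // rotate once a file reaches 10 MB
  maxFiles: 10,              // keep at most 10 rotated files
  retentionDays: 10,         // delete logs older than 10 days
  level: 'info',             // default log level
};

module.exports = loggingConfig;
```

Changing retention or file size then becomes a one-line edit in one place, instead of a hunt through every service.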
Different challenges faced while implementing proper logging
- Log rotation is not set up properly, i.e. you lack the knowledge of how proper log rotation should be done.
- You are not logging the right things, just logging for the sake of logging. Even a terabyte of such data is just garbage, since you cannot extract anything useful from it.
- The log file is too large. A 20 GB file is very difficult to handle; keep your files small so that they stay manageable.
- Not logging the timestamp. This is probably the biggest mistake: unless you log the time, it is next to impossible to find the log entries corresponding to the time of an issue.
- Not logging crucial information that can be used for debugging.
- User data is not tracked, so you cannot tell where a request came from.
- Log levels (Error, Debug, etc.) are not used properly.
- Not having an understanding of what to log is the biggest issue.
- Make it a practice to log at all the key points.
- Do not log sensitive information, such as the user's credit card details.
- Try to log everything you feel could be useful later. You will figure out the specifics as issues come up, but start with the bare minimum.
- Choose monitoring and notification tools that provide enhanced features on top of your logs, such as the ELK stack or Grafana.
- Never log useless, obvious things; like pointless comments, a log line that conveys no message has no value.
- Store a backup of at least the last 3 days of logs; you can increase that as per your requirements.
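The rule about never logging sensitive data, such as credit card details, can be enforced in code rather than left to discipline. Below is a hypothetical sketch that scrubs sensitive fields before an entry ever reaches a log line; the field names are assumptions, so adapt the list to your own payloads.

```javascript
// Sketch: scrub sensitive fields from a log entry before writing it.
// The field names below are hypothetical examples.
const SENSITIVE_FIELDS = ['cardNumber', 'cvv', 'password'];

function redact(entry) {
  const copy = { ...entry };
  for (const field of SENSITIVE_FIELDS) {
    // Replace rather than delete, so the log still shows the field existed.
    if (field in copy) copy[field] = '[REDACTED]';
  }
  return copy;
}

console.log(JSON.stringify(redact({ userId: 42, cardNumber: '4111111111111111' })));
// → {"userId":42,"cardNumber":"[REDACTED]"}
```

Running every entry through a redaction step like this keeps the useful context (who, when, what) while guaranteeing the sensitive values never land on disk.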