Log Management Module: Use of Loguru

For log management, Python programs can use the built-in logging module or the third-party Loguru module. For a simple introduction to both, the article "The use of logging and loguru" is still a good read.

For detailed usage of the logging module, please refer to the article "Detailed usage of the logging module".

This article only covers the usage of the loguru module: basic usage, as well as usage in multi-module and multi-threaded scenarios.

1. Installation of loguru

1.1, pip installation

pip install loguru

2. Simple use of loguru

2.1. Very easy to use

Ready to use out of the box without boilerplate.

The usage of loguru is very simple: there is one and only one object in loguru, the logger. For ease of use, this logger comes pre-configured. It initially outputs to stderr by default (though this is completely reconfigurable), and the printed log messages are colored by default. As shown below, using loguru really is very simple:

from loguru import logger

logger.debug("This is a log message")

The above statement prints the log message to stderr (the console) by default. The output looks like this:
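A typical line of the default output looks roughly like this (the exact timestamp will differ, and the console output is colored):

2023-09-01 10:30:00.123 | DEBUG    | __main__:<module>:2 - This is a log message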

As you can see, loguru ships with a default output format that includes the time, level, module name, line number, and log message. There is no need to create a logger manually, and the output is colored, which makes it friendlier to read. In short, nothing needs to be configured in advance; you can just start logging.

2.2, add() function

No Handler, no Formatter, no Filter: one function to rule them all.

When using the logging module, we have to configure a Handler, a Formatter and a Filter manually, each through a different call. In loguru, a single add() function does it all: through add() we can set the handler (sink), the message format, the filter, and the level. Example:

import sys

from loguru import logger

logger.add(sys.stderr, format="{time} {level} {message}", filter="my_module", level="INFO")
logger.debug("This is a new log message")

In the above code, add() registers a handler that writes to the console and specifies its format, filter, and level. The log information is then output accordingly:

The prototype of the add() function is defined as follows:

def add(
    self,
    sink,
    *,
    level=_defaults.LOGURU_LEVEL,
    format=_defaults.LOGURU_FORMAT,
    filter=_defaults.LOGURU_FILTER,
    colorize=_defaults.LOGURU_COLORIZE,
    serialize=_defaults.LOGURU_SERIALIZE,
    backtrace=_defaults.LOGURU_BACKTRACE,
    diagnose=_defaults.LOGURU_DIAGNOSE,
    enqueue=_defaults.LOGURU_ENQUEUE,
    catch=_defaults.LOGURU_CATCH,
    **kwargs
):
    pass

There are many parameters for configuring different properties. The most important one is sink: through the sink we can pass in a variety of different types, summarized as follows:

  • sink can be a file object, for example sys.stderr or open('file.log', 'w').
  • sink can be a str string or a pathlib.Path object, which is treated as a file path; loguru will automatically create a log file at that path and write logs to it.
  • sink can be a callable (a method or function), so you can define your own output behavior (see the sketch after this list).
  • sink can be a Handler from the logging module, such as FileHandler or StreamHandler, which lets you reuse a custom Handler configuration.
  • sink can also be a custom class; the required interface is described in the official documentation.
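As a minimal sketch of two of these sink types (the sink function and format strings below are just for illustration):

import logging
import sys

from loguru import logger

# a callable sink: it receives the fully formatted log message as a string
def my_sink(message):
    print("CUSTOM SINK:", message, end="")

logger.add(my_sink, format="{time} {level} {message}")

# a standard logging Handler can also be passed directly as a sink
logger.add(logging.StreamHandler(sys.stderr), format="{message}")

logger.info("routed through both sinks")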

2.3, Creating a log file

Easier file logging with rotation/retention/compression.

2.3.1, create a log file

We can pass in a file name string or a file path, and loguru will automatically create the log file, as shown below:

from loguru import logger

logger.add("runtime.log")  # create a log file named runtime.log
logger.debug("This is a log message in the file")

The above program creates a file named runtime.log in the current working directory (typically the directory the program is run from) and records the log in that file:

At the same time, log information will be output on the console:

2.3.2, Stop outputting logs to the console

The logger outputs to stderr by default, so if you don't want log information on the console, you only need to remove that default handler first:

from loguru import logger

logger.remove(handler_id=None)  # remove the default stderr handler
logger.add("runtime.log")       # create a log file named runtime.log
logger.debug("This is a log message in the file")

This way, log information will not be output to the console.

In practice, there are two common patterns:

  1. Remove the console output entirely, use print for the few important messages we want on screen, and send all log messages to the log file.

  2. Keep a console handler with a higher level (so only important records appear on screen) and add a file handler with a lower level, so that all log records (lower and higher) go to the log file; see the sketch after this list.
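A minimal sketch of the second pattern (the file name run.log and the levels are just examples):

import sys

from loguru import logger

logger.remove()                          # drop the default stderr handler
logger.add(sys.stderr, level="WARNING")  # the console only shows WARNING and above
logger.add("run.log", level="DEBUG")     # the file receives everything

logger.debug("only written to the file")
logger.warning("shown on the console and written to the file")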

2.3.3, specify the name of the created log file

When add() creates a log file, you can include the creation time in the file name by adding a {time} placeholder, as shown below:

from loguru import logger

logger.add("runtime_{time}.log")  # create a log file whose name includes the time
logger.debug("This is a log message in the file")

This creates a log file whose name contains the date and time.
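Assuming the program runs on 1 September 2023, the generated file name would look something like runtime_2023-09-01_10-30-00_123456.log. The time format inside the placeholder can also be customized, for example to keep only the date:

from loguru import logger

# {time:...} accepts loguru's time format tokens; this keeps only the date in the name
logger.add("runtime_{time:YYYY-MM-DD}.log")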

2.3.4, rotation of log files

The rotation parameter specifies the condition under which the log file rolls over to a new file, as shown below:

1),

logger.add ( "file_1.log" , rotation = "500 MB" ) # Automatically Rotate TOO Big File Copy the code

With this configuration, each log file holds at most 500 MB; once a file grows beyond that, a new log file is created. By adding a {time} placeholder to the file name, each generated file is automatically named with its creation time.

2),

logger.add ( "file_2.log" , rotation = "12:00" ) # New File IS AT Noon Day the Created the each copy the code

With this configuration, a new log file is created at 12:00 noon every day.

3),

logger.add ( "file_3.log" , rotation = "Week. 1" ) # File Once IS TOO The Old, apos Rotated duplicated code

With this configuration, a new log file is created every week.

2.3.5, retention specifies the log retention time

The retention parameter specifies how long log files are kept:

logger.add ( "file_X.log" , Retention = "10 Days" ) # Cleanup Time After some duplicated code

With this configuration, log files are kept for at most 10 days; older files are cleaned up automatically so they don't waste disk space.

2.3.6, compression specifies the file compression format

The compression parameter specifies the compression format for log files:

logger.add ( "file_Y.log" , compression = "ZIP" ) # Space Loved the Save some duplicated code

With this configuration, log files are compressed in zip format, which saves storage space.
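A minimal sketch combining rotation, retention and compression (the file name and limits are just examples):

from loguru import logger

# rotate at midnight every day, keep rotated files for 10 days, compress them as zip
logger.add(
    "app_{time}.log",
    rotation="00:00",
    retention="10 days",
    compression="zip",
)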

2.4, exception capture

Exceptions catching within threads or main.

What really makes loguru feel powerful to me is its exception-capture feature. When a program crashes or errors out while running, the log is an important way to trace what the program did, but often the log alone doesn't tell us why it went wrong or where. Being able to record the state and context at the moment an exception occurs makes troubleshooting dramatically easier.

In the loguru module, there are two ways to capture exceptions:

2.4.1, catch decorator method

Exception capture can be implemented with the catch decorator:

from loguru import logger

logger.add("runtime.log")

@logger.catch
def my_function(x, y, z):
    # An error? It's caught anyway!
    return 1 / (x + y + z)

my_function(0, 0, 0)

In the above code, my_function is decorated with the catch decorator, so when an exception occurs inside the function, the exception is logged, as shown below:

In the log information, not only the place where the exception occurred is indicated, but the value of the parameter is also recorded.
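This detailed traceback with variable values is controlled by the backtrace and diagnose options of add(), both enabled by default (see the prototype above). The official documentation recommends disabling diagnose in production, since it may leak sensitive data; a minimal sketch:

from loguru import logger

# less verbose, safer exception logs for production
logger.add("runtime.log", backtrace=False, diagnose=False)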

2.4.2, exception method

Exceptions can also be captured and recorded with the exception method:

from loguru import logger

logger.add("runtime.log")

def my_function1(x, y, z):
    try:
        return 1 / (x + y + z)
    except ZeroDivisionError:
        logger.exception("What?!")

my_function1(0, 0, 0)

The recorded log information is as follows:

3. The use of loguru in multi-module situations

Asynchronous, Thread-safe, Multiprocess-safe.

Because there is one and only one logger object in loguru, it can be used across multiple module files without conflicts:

  • exceptions_catching2_03.py:

from loguru import logger

def func(a, b):
    logger.info("Process func")
    return a / b

def nested(c):
    try:
        func(5, c)
    except ZeroDivisionError:
        logger.exception("What?!")

  • test.py:

# coding: utf-8
from loguru import logger
import exceptions_catching2_03 as ec3

if __name__ == '__main__':
    logger.add("run.log")
    logger.info("Start!")
    ec3.nested(0)
    logger.info("End!")

The results of the operation are as follows:

4. The use of loguru in multi-threaded situations

Asynchronous, Thread-safe, Multiprocess-safe

All sinks added to the logger are thread-safe by default, so loguru can also be used safely in multi-threaded scenarios:

# coding: utf-8
from atexit import register
from random import randrange
from threading import Thread, Lock, current_thread
from time import sleep

from loguru import logger


class CleanOutputSet(set):
    def __str__(self):
        return ', '.join(x for x in self)


lock = Lock()
loops = [randrange(2, 5) for x in range(randrange(3, 7))]
remaining = CleanOutputSet()


def loop(nsec):
    myname = current_thread().name
    logger.info("Started {}", myname)
    # acquiring and releasing the lock is handled by the with context manager
    with lock:
        remaining.add(myname)
    sleep(nsec)
    logger.info("Completed {} ({} secs)", myname, nsec)
    with lock:
        remaining.remove(myname)
        logger.info("Remaining: {}", (remaining or 'NONE'))


def _main():
    # only executed when the module is run directly from the command line
    for pause in loops:
        Thread(target=loop, args=(pause,)).start()


@register
def _atexit():
    # atexit.register registers this function with the interpreter,
    # so it is called just before the script exits
    logger.info("All threads DONE!")
    logger.info("\n==========================================================================\n")


if __name__ == '__main__':
    logger.add("run.log")
    _main()

The log file is as follows:

In this run, the code created 3 threads (the number is random, between 3 and 6), and each thread logged its information correctly.
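One more note: sinks are thread-safe by default, but they are not multiprocess-safe by default. When several processes write to the same log file, the loguru documentation recommends passing enqueue=True to add(), which routes messages through an internal queue. A minimal sketch (the file name is just an example):

from loguru import logger

# enqueue=True sends log messages through a queue before they reach the sink,
# which makes the sink safe to use from multiple processes
logger.add("run.log", enqueue=True)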