
Logging Cookbook

Author


Vinay Sajip

This page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference information, please see Other resources.

Using logging in multiple modules

Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. Furthermore, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module:

 
 
 
 
    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module:

 
 
 
 
    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this:

 
 
 
 
    2005-03-23 23:47:11,663 - spam_application - INFO - creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO - creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO - created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO - calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO - doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO - done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO - finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO - calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO - received a call to 'some_function'
    2005-03-23 23:47:11,673 - spam_application - INFO - done with auxiliary_module.some_function()
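The cross-module identity guarantee described above can be checked directly; this small sketch (the name someLogger is arbitrary) also shows that a dotted child name links to its parent automatically:

```python
import logging

# Two calls with the same name return the same Logger object,
# no matter which module makes the call in this process.
a = logging.getLogger('someLogger')
b = logging.getLogger('someLogger')
same = a is b

# A dotted name creates a child whose records propagate to the parent.
child = logging.getLogger('someLogger.child')
parent_name = child.parent.name
```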

Logging from multiple threads

Logging from multiple threads requires no special effort. The following example shows logging from the main (initial) thread and another thread:

 
 
 
 
    import logging
    import threading
    import time

    def worker(arg):
        while not arg['stop']:
            logging.debug('Hi from myfunc')
            time.sleep(0.5)

    def main():
        logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
        info = {'stop': False}
        thread = threading.Thread(target=worker, args=(info,))
        thread.start()
        while True:
            try:
                logging.debug('Hello from main')
                time.sleep(0.75)
            except KeyboardInterrupt:
                info['stop'] = True
                break
        thread.join()

    if __name__ == '__main__':
        main()

When run, the script should print something like the following:

 
 
 
 
         0 Thread-1 Hi from myfunc
         3 MainThread Hello from main
       505 Thread-1 Hi from myfunc
       755 MainThread Hello from main
      1007 Thread-1 Hi from myfunc
      1507 MainThread Hello from main
      1508 Thread-1 Hi from myfunc
      2010 Thread-1 Hi from myfunc
      2258 MainThread Hello from main
      2512 Thread-1 Hi from myfunc
      3009 MainThread Hello from main
      3013 Thread-1 Hi from myfunc
      3515 Thread-1 Hi from myfunc
      3761 MainThread Hello from main
      4017 Thread-1 Hi from myfunc
      4513 MainThread Hello from main
      4518 Thread-1 Hi from myfunc

This shows the logging output interspersed, as one might expect. This approach works for more threads than shown here, of course.
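If the default Thread-1 style names are not descriptive enough, a thread can be given an explicit name, which %(threadName)s then reports. A minimal sketch (the handler, logger name, and thread name here are illustrative; an in-memory handler is used so the record can be inspected):

```python
import logging
import threading

records = []

class ListHandler(logging.Handler):
    # Collects records in a list instead of writing them anywhere.
    def emit(self, record):
        records.append(record)

logger = logging.getLogger('thread_name_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(ListHandler())

def worker():
    logger.debug('Hi from worker')

# The explicit name shows up in record.threadName (and %(threadName)s).
t = threading.Thread(target=worker, name='uploader')
t.start()
t.join()
```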

Multiple handlers and formatters

Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers; the logging calls in the application code can remain unchanged. Here is a slight modification to the previous module-based configuration example:

 
 
 
 
    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh.

The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: unlike print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and stay dormant until you need them again. At that time, the only change that needs to happen is to modify the severity level of the logger and/or handler.
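The "stay dormant" behaviour comes from the handler's level check. A minimal sketch of switching debug output off and on again without touching the logging calls themselves (names are illustrative; an in-memory handler stands in for the console):

```python
import logging

captured = []

class ListHandler(logging.Handler):
    # Records messages in a list so the effect of the level is visible.
    def emit(self, record):
        captured.append(record.getMessage())

logger = logging.getLogger('level_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False

h = ListHandler()
h.setLevel(logging.ERROR)      # debug statements stay dormant
logger.addHandler(h)

logger.debug('invisible')      # dropped by the handler's level
h.setLevel(logging.DEBUG)      # "switch debugging back on"
logger.debug('visible')
```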

Logging to multiple destinations

Let's say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let's also assume that the file should contain timestamps, but the console messages should not. Here's how you can achieve this:

 
 
 
 
    import logging

    # set up logging to file - see previous section for more details
    logging.basicConfig(level=logging.DEBUG,
                        format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                        datefmt='%m-%d %H:%M',
                        filename='/tmp/myapp.log',
                        filemode='w')
    # define a Handler which writes INFO messages or higher to the sys.stderr
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    # tell the handler to use this format
    console.setFormatter(formatter)
    # add the handler to the root logger
    logging.getLogger('').addHandler(console)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see

 
 
 
 
    root        : INFO     Jackdaws love my big sphinx of quartz.
    myapp.area1 : INFO     How quickly daft jumping zebras vex.
    myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
    myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like

 
 
 
 
    10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
    10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
    10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
    10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
    10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations.

This example uses console and file handlers, but you can use any number and combination of handlers you choose.

Note that the above choice of log filename /tmp/myapp.log implies use of a standard location for temporary files on POSIX systems. On Windows, you may need to choose a different directory name for the log - just ensure that the directory exists and that you have the permissions to create and update files in it.
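One portable way to avoid hard-coding /tmp is to ask the platform for its temporary directory. A short sketch (myapp.log is just an example name) that works on both POSIX and Windows:

```python
import logging
import os
import tempfile

# gettempdir() returns /tmp on most POSIX systems and the user's
# temp directory on Windows, so the same code runs on both.
logfile = os.path.join(tempfile.gettempdir(), 'myapp.log')
logging.basicConfig(filename=logfile, level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
```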

Custom handling of levels

Sometimes, you might want to do something slightly different from the standard handling of levels in handlers, where all levels above a threshold get processed by a handler. To do this, you need to use filters. Let’s look at a scenario where you want to arrange things as follows:

  • Send messages of severity INFO and WARNING to sys.stdout

  • Send messages of severity ERROR and above to sys.stderr

  • Send messages of severity DEBUG and above to file app.log

Suppose you configure logging with the following JSON:

 
 
 
 
    {
        "version": 1,
        "disable_existing_loggers": false,
        "formatters": {
            "simple": {
                "format": "%(levelname)-8s - %(message)s"
            }
        },
        "handlers": {
            "stdout": {
                "class": "logging.StreamHandler",
                "level": "INFO",
                "formatter": "simple",
                "stream": "ext://sys.stdout"
            },
            "stderr": {
                "class": "logging.StreamHandler",
                "level": "ERROR",
                "formatter": "simple",
                "stream": "ext://sys.stderr"
            },
            "file": {
                "class": "logging.FileHandler",
                "formatter": "simple",
                "filename": "app.log",
                "mode": "w"
            }
        },
        "root": {
            "level": "DEBUG",
            "handlers": [
                "stderr",
                "stdout",
                "file"
            ]
        }
    }

This configuration does almost what we want, except that sys.stdout would show messages of severity ERROR and above as well as INFO and WARNING messages. To prevent this, we can set up a filter which excludes those messages and add it to the relevant handler. This can be configured by adding a filters section parallel to formatters and handlers:

 
 
 
 
    "filters": {
        "warnings_and_below": {
            "()" : "__main__.filter_maker",
            "level": "WARNING"
        }
    }

and changing the section on the stdout handler to add it:

 
 
 
 
    "stdout": {
        "class": "logging.StreamHandler",
        "level": "INFO",
        "formatter": "simple",
        "stream": "ext://sys.stdout",
        "filters": ["warnings_and_below"]
    }

A filter is just a function, so we can define the filter_maker (a factory function) as follows:

 
 
 
 
    def filter_maker(level):
        level = getattr(logging, level)

        def filter(record):
            return record.levelno <= level

        return filter

This converts the string argument passed in to a numeric level, and returns a function which only returns True if the level of the passed in record is at or below the specified level. Note that in this example I have defined the filter_maker in a test script main.py that I run from the command line, so its module will be __main__ - hence the __main__.filter_maker in the filter configuration. You will need to change that if you define it in a different module.
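filter_maker can also be exercised in isolation, since it returns a plain function that takes a LogRecord; constructing a bare record by hand here is purely for demonstration:

```python
import logging

def filter_maker(level):
    level = getattr(logging, level)
    def filter(record):
        return record.levelno <= level
    return filter

f = filter_maker('WARNING')

def make_record(levelno):
    # A minimal LogRecord, built by hand purely for this check.
    return logging.LogRecord('demo', levelno, __file__, 1, 'msg', None, None)

keeps_warning = f(make_record(logging.WARNING))  # at the threshold
drops_error = f(make_record(logging.ERROR))      # above it
```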

With the filter added, we can run main.py, which in full is:

 
 
 
 
    import json
    import logging
    import logging.config

    CONFIG = '''
    {
        "version": 1,
        "disable_existing_loggers": false,
        "formatters": {
            "simple": {
                "format": "%(levelname)-8s - %(message)s"
            }
        },
        "filters": {
            "warnings_and_below": {
                "()" : "__main__.filter_maker",
                "level": "WARNING"
            }
        },
        "handlers": {
            "stdout": {
                "class": "logging.StreamHandler",
                "level": "INFO",
                "formatter": "simple",
                "stream": "ext://sys.stdout",
                "filters": ["warnings_and_below"]
            },
            "stderr": {
                "class": "logging.StreamHandler",
                "level": "ERROR",
                "formatter": "simple",
                "stream": "ext://sys.stderr"
            },
            "file": {
                "class": "logging.FileHandler",
                "formatter": "simple",
                "filename": "app.log",
                "mode": "w"
            }
        },
        "root": {
            "level": "DEBUG",
            "handlers": [
                "stderr",
                "stdout",
                "file"
            ]
        }
    }
    '''

    def filter_maker(level):
        level = getattr(logging, level)

        def filter(record):
            return record.levelno <= level

        return filter

    logging.config.dictConfig(json.loads(CONFIG))
    logging.debug('A DEBUG message')
    logging.info('An INFO message')
    logging.warning('A WARNING message')
    logging.error('An ERROR message')
    logging.critical('A CRITICAL message')

And after running it like this:

 
 
 
 
    python main.py 2>stderr.log >stdout.log

We can see the results are as expected:

 
 
 
 
    $ more *.log
    ::::::::::::::
    app.log
    ::::::::::::::
    DEBUG    - A DEBUG message
    INFO     - An INFO message
    WARNING  - A WARNING message
    ERROR    - An ERROR message
    CRITICAL - A CRITICAL message
    ::::::::::::::
    stderr.log
    ::::::::::::::
    ERROR    - An ERROR message
    CRITICAL - A CRITICAL message
    ::::::::::::::
    stdout.log
    ::::::::::::::
    INFO     - An INFO message
    WARNING  - A WARNING message

Configuration server example

Here is an example of a module using the logging configuration server:

 
 
 
 
    import logging
    import logging.config
    import time
    import os

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warn message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:

 
 
 
 
    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')
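The wire format the script relies on is just a 4-byte big-endian length prefix followed by the raw configuration bytes, which logging.config.listen reads on the other side. The framing can be sanity-checked without any sockets:

```python
import struct

data_to_send = b'{"version": 1}'   # stand-in for the config file contents
frame = struct.pack('>L', len(data_to_send)) + data_to_send

# The receiver unpacks the first four bytes to learn the payload size,
# then reads exactly that many bytes of configuration data.
(length,) = struct.unpack('>L', frame[:4])
payload = frame[4:4 + length]
```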

Dealing with handlers that block

Sometimes you have to get your logging handlers to do their work without blocking the thread you're logging from. This is common in web applications, though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer's control (for example, a poorly performing mail or network infrastructure). But almost any network-based handler can block: even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code. If you are a library developer who has performance-critical threads in your code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code.

The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it's passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing.

The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted):

 
 
 
 
    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

When run, this will produce:

 
 
 
 
    MainThread: Look out!

Note

Although the earlier discussion wasn’t specifically talking about async code, but rather about slow logging handlers, it should be noted that when logging from async code, network and even file handlers could lead to problems (blocking the event loop) because some logging is done from asyncio internals. It might be best, if any async code is used in an application, to use the above approach for logging, so that any blocking code runs only in the QueueListener thread.

Changed in version 3.5: Prior to Python 3.5, the QueueListener always passed every message received from the queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing the keyword argument respect_handler_level=True to the listener's constructor. When this is done, the listener compares the level of each message with the handler's level, and only passes a message to a handler if it's appropriate to do so.
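A minimal illustration of respect_handler_level, using an in-memory handler (the handler class and logger name here are illustrative) so the filtering effect is easy to observe:

```python
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

seen = []

class ListHandler(logging.Handler):
    # Collects messages so the listener's level filtering is visible.
    def emit(self, record):
        seen.append(record.getMessage())

handler = ListHandler()
handler.setLevel(logging.ERROR)   # the listener should honour this level

que = queue.Queue(-1)
listener = QueueListener(que, handler, respect_handler_level=True)

logger = logging.getLogger('rhl_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(QueueHandler(que))

listener.start()
logger.info('dropped by the handler level')
logger.error('passed through')
listener.stop()                   # drains the queue before returning
```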

Sending and receiving logging events across a network

Let's say you want to send logging events across a network, and handle them at the receiving end. A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end:

 
 
 
 
    import logging, logging.handlers

    rootLogger = logging.getLogger('')
    rootLogger.setLevel(logging.DEBUG)
    socketHandler = logging.handlers.SocketHandler('localhost',
                        logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    # don't bother with a formatter, since a socket handler sends the event as
    # an unformatted pickle
    rootLogger.addHandler(socketHandler)

    # Now, we can log to the root logger, or any other logger. First the root...
    logging.info('Jackdaws love my big sphinx of quartz.')

    # Now, define a couple of other loggers which might represent areas in your
    # application:

    logger1 = logging.getLogger('myapp.area1')
    logger2 = logging.getLogger('myapp.area2')

    logger1.debug('Quick zephyrs blow, vexing daft Jim.')
    logger1.info('How quickly daft jumping zebras vex.')
    logger2.warning('Jail zesty vixen who grabbed pay from quack.')
    logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example:

 
 
 
 
    import pickle
    import logging
    import logging.handlers
    import socketserver
    import struct


    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Handler for a streaming logging request.

        This basically logs the record using whatever logging policy is
        configured locally.
        """

        def handle(self):
            """
            Handle multiple requests - each expected to be a 4-byte length,
            followed by the LogRecord in pickle format. Logs the record
            according to whatever policy is configured locally.
            """
            while True:
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk = chunk + self.connection.recv(slen - len(chunk))
                obj = self.unPickle(chunk)
                record = logging.makeLogRecord(obj)
                self.handleLogRecord(record)

        def unPickle(self, data):
            return pickle.loads(data)

        def handleLogRecord(self, record):
            # if a name is specified, we use the named logger rather than the one
            # implied by the record.
            if self.server.logname is not None:
                name = self.server.logname
            else:
                name = record.name
            logger = logging.getLogger(name)
            # N.B. EVERY record gets logged. This is because Logger.handle
            # is normally called AFTER logger-level filtering. If you want
            # to do filtering, do it at the client end to save wasting
            # cycles and network bandwidth!
            logger.handle(record)


    class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
        """
        Simple TCP socket-based logging receiver suitable for testing.
        """

        allow_reuse_address = True

        def __init__(self, host='localhost',
                     port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                     handler=LogRecordStreamHandler):
            socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
            self.abort = 0
            self.timeout = 1
            self.logname = None

        def serve_until_stopped(self):
            import select
            abort = 0
            while not abort:
                rd, wr, ex = select.select([self.socket.fileno()],
                                           [], [],
                                           self.timeout)
                if rd:
                    self.handle_request()
                abort = self.abort


    def main():
        logging.basicConfig(
            format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
        tcpserver = LogRecordSocketReceiver()
        print('About to start TCP server...')
        tcpserver.serve_until_stopped()


    if __name__ == '__main__':
        main()