Decorators

Python supports a powerful tool called the decorator. Decorators let you add rich features to groups of functions and classes, without modifying them at all; untangle distinct, frustratingly intertwined concerns in your code, in ways not otherwise possible; and build powerful, extensible software frameworks. Many of the most popular and important Python libraries in the world leverage decorators. This chapter teaches you how to do the same.

A decorator is something you apply to a function or method. You’ve probably seen decorators before. There’s a decorator called property often used in classes:

>>> class Person:
...     def __init__(self, first_name, last_name):
...         self.first_name = first_name
...         self.last_name = last_name
...
...     @property
...     def full_name(self):
...         return self.first_name + " " + self.last_name
...
>>> person = Person("John", "Smith")
>>> print(person.full_name)
John Smith

(Note what’s printed: person.full_name, not person.full_name().) Another example: in the Flask web framework, here is how you define a simple home page:

@app.route("/")
def hello():
    return "<html><body>Hello World!</body></html>"

The app.route("/") is a decorator, applied here to the function called hello. So an HTTP GET request to the root URL ("/") will be handled by the hello function.

A decorator works by adding behavior around a function - meaning, lines of code which are executed before that function begins, after it returns, or both. It does not alter any lines of code inside the function. Typically, when you go to the trouble to define a decorator, you plan to use it on at least two different functions, usually more. Otherwise you’d just put the extra code inside the lone function, and not bother writing a decorator.

Using decorators is simple and easy; even someone new to programming can quickly learn that. Our objective is different: to give you the ability to write your own decorators, in many different useful forms. This is not a beginner topic; it barely qualifies as intermediate. It requires a deep understanding of several sophisticated Python features, and how they play together. Most Python developers never learn how to create them. In this chapter, you will.[9]

The Basic Decorator

Once a decorator is written, using it is easy. You just write @ and the decorator name, on the line before you define a function:

@some_decorator
def some_function(arg):
    # blah blah

This applies the decorator called some_decorator to some_function.[10] Now, it turns out this syntax with the @ symbol is a shorthand. In essence, Python translates the above into this:

def some_function(arg):
    # blah blah
some_function = some_decorator(some_function)

This is valid Python code too, and what people did before the @ syntax came along. The key here is the last line:

some_function = some_decorator(some_function)

First, understand that a decorator is just a function. That’s it. It happens to be a function taking one argument, which is the function object being decorated. It then returns a different function. In the snippet above, you define a function, initially called some_function. That function object is passed to some_decorator, which returns a different function object, which is finally stored in some_function.

To keep us sane, let’s define some terminology:

  • The decorator is what comes after the @. It’s a function.
  • The bare function is what’s def'ed on the next line. It is, obviously, also a function.
  • The end result is the decorated function. It’s the final function you actually call in your code.[11]

Your mastery of decorators will be most graceful if you remember one thing: a decorator is just a normal, boring function. It happens to be a function taking exactly one argument, which is itself a function. And when called, the decorator returns a different function.

Let’s make this concrete. Here’s a simple decorator which logs a message to stdout, every time the decorated function is called.

def printlog(func):
    def wrapper(arg):
        print("CALLING: " + func.__name__)
        return func(arg)
    return wrapper

@printlog
def foo(x):
    print(x + 2)

Notice this decorator creates a new function, called wrapper, and returns that. This is then assigned to the variable foo, replacing the undecorated, bare function:

# Remember, this...
@printlog
def foo(x):
    print(x + 2)

# ...is the exact same as this:
def foo(x):
    print(x + 2)
foo = printlog(foo)

Here’s the result:

>>> foo(3)
CALLING: foo
5

At a high level, the body of printlog does two things: define a function called wrapper, then return it. Many decorators will follow that structure. Notice printlog does not modify the behavior of the original function foo itself; all wrapper does is print a message to standard out, before calling the original (bare) function.

Once you’ve applied a decorator, the bare function isn’t directly accessible anymore; you can’t call it in your code. Its name now applies to the decorated version. But that decorated function internally retains a reference to the bare function, calling it inside wrapper.
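You can check this replacement directly: after decoration, the name foo is bound to the wrapper function. A quick sketch, reusing the printlog decorator from above:

```python
def printlog(func):
    def wrapper(arg):
        print("CALLING: " + func.__name__)
        return func(arg)
    return wrapper

@printlog
def foo(x):
    print(x + 2)

# The name foo is now bound to the wrapper, not the bare function:
print(foo.__name__)  # wrapper
```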

This version of printlog has a big shortcoming, though. Look what happens when I apply it to a different function:

>>> @printlog
... def baz(x, y):
...     return x ** y
...
>>> baz(3, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: wrapper() takes 1 positional argument but 2 were given

Can you spot what went wrong?

printlog is built to wrap a function taking exactly one argument. But baz has two, so when the decorated function is called, the whole thing blows up. There’s no reason printlog needs to have this restriction; all it’s doing is printing the function name. You fix it by declaring wrapper with variable arguments:

# A MUCH BETTER printlog.
def printlog(func):
    def wrapper(*args, **kwargs):
        print("CALLING: " + func.__name__)
        return func(*args, **kwargs)
    return wrapper

This decorator is compatible with any Python function:

>>> @printlog
... def foo(x):
...     print(x + 2)
...
>>> @printlog
... def baz(x, y):
...     return x ** y
...
>>> foo(7)
CALLING: foo
9
>>> baz(3, 2)
CALLING: baz
9

A decorator written this way, using variable arguments, will potentially work with functions and methods written years later - code the original developer never imagined. This structure has proven very powerful and versatile.

# The prototypical form of Python decorators.
def prototype_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

We don’t always do this, though. Sometimes you are writing a decorator that only applies to a function or method with a very specific kind of signature, and it would be an error to use it anywhere else. So feel free to break this rule when you have a reason.
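One practical refinement, not required for the examples in this chapter: because the decorated function is really wrapper, its __name__ and docstring become wrapper's. The standard library's functools.wraps copies the bare function's metadata onto the wrapper. A sketch, applied to printlog:

```python
import functools

def printlog(func):
    @functools.wraps(func)  # copy __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        print("CALLING: " + func.__name__)
        return func(*args, **kwargs)
    return wrapper

@printlog
def foo(x):
    "Add two."
    return x + 2

print(foo.__name__)  # foo, not wrapper
print(foo.__doc__)   # Add two.
```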

Decorators apply to methods just as well as to functions. And you often don’t need to change anything: when the wrapper has a signature of wrapper(*args, **kwargs), like printlog does, it works just fine with any object’s method. But sometimes you will see code like this:

# Not really necessary.
def printlog_for_method(func):
    def wrapper(self, *args, **kwargs):
        print("CALLING: " + func.__name__)
        return func(self, *args, **kwargs)
    return wrapper

This is a bit interesting. This wrapper has one required argument, named self. And it works fine when applied to a method. But for the decorator I’ve written here, that self is completely unnecessary, and in fact has a downside.

Simply defining wrapper(*args, **kwargs) causes self to be considered one of the args; such a decorator works just as well with both functions and methods. But if a wrapper is defined to require self, that means it must always be called with at least one argument. Suddenly you have a decorator that can only be applied to functions taking at least one argument. (That it’s named self doesn’t matter; it’s just a temporary name for that first argument, inside the scope of wrapper.) You can apply this decorator to any method, and to some functions. But if you apply it to a function that takes no arguments, you’ll get a run-time error.

Now, here’s a different decorator:

# This is more sensible.
def enhanced_printlog_for_method(func):
    def wrapper(self, *args, **kwargs):
        print("CALLING: {} on object ID {}".format(
            func.__name__, id(self)))
        return func(self, *args, **kwargs)
    return wrapper

It could be applied like this:

class Invoice:
    def __init__(self, id_number, total):
        self.id_number = id_number
        self.total = total
        self.owed = total

    @enhanced_printlog_for_method
    def record_payment(self, amount):
        self.owed -= amount

inv = Invoice(42, 117.55)
print(f"ID of inv: {id(inv)}")
inv.record_payment(55.35)

Here’s the output when you execute:

ID of inv: 4320786472
CALLING: record_payment on object ID 4320786472

This is a different story, because this wrapper's body explicitly uses the current object - a concept that only makes sense for methods. That makes the self argument perfectly appropriate. It prevents you from using this decorator on some functions, but it would actually be an error to apply it to a non-method anyway.

When writing a decorator for methods, I recommend you get in the habit of making your wrapper only take *args and **kwargs, except when you have a clear reason to do differently. After you’ve written decorators for a while, you’ll be surprised at how often you end up using old decorators on new functions, in ways you never imagined at first. A signature of wrapper(*args, **kwargs) preserves that flexibility. If the decorator turns out to need an explicit self argument, it’s easy enough to put that in.

Data In Decorators

Some of the most valuable decorator patterns rely on using variables inside the decorator function itself. This is not the same as using variables inside the wrapper function. Let me explain.

Imagine you need to keep a running average of what some function returns. And further, you need to do this for a family of functions or methods. We can write a decorator called running_average to handle this - as you read, note carefully how data is defined and used:

def running_average(func):
    data = {"total": 0, "count": 0}
    def wrapper(*args, **kwargs):
        val = func(*args, **kwargs)
        data["total"] += val
        data["count"] += 1
        print("Average of {} so far: {:.01f}".format(
            func.__name__, data["total"] / data["count"]))
        return val
    return wrapper

Each time the function is called, the average of all calls so far is printed out.[12] Decorator functions are called once for each function they are applied to. Then, each time that function is called in the code, the wrapper function is what’s actually executed. So imagine applying it to a function like this:

@running_average
def foo(x):
    return x + 2

This creates an internal dictionary, named data, used to keep track of foo's metrics. Running foo several times produces:

>>> foo(1)
Average of foo so far: 3.0
3
>>> foo(10)
Average of foo so far: 7.5
12
>>> foo(1)
Average of foo so far: 6.0
3
>>> foo(1)
Average of foo so far: 5.2
3

The placement of data is important. Pop quiz:

  • What happens if you move the line defining data up one line, outside the running_average function?
  • What happens if you move that line down, into the wrapper function?

Looking at the code above, decide on your answers to these questions before reading further.

Here’s what it looks like if you create data outside the decorator:

# This version has a bug.
data = {"total": 0, "count": 0}

def outside_data_running_average(func):
    def wrapper(*args, **kwargs):
        val = func(*args, **kwargs)
        data["total"] += val
        data["count"] += 1
        print("Average of {} so far: {:.01f}".format(
            func.__name__, data["total"] / data["count"]))
        return val
    return wrapper

If we do this, every decorated function shares the exact same data dictionary! This actually doesn’t matter if you only ever decorate just one function. But you never bother to write a decorator unless it’s going to be applied to at least two:

@outside_data_running_average
def foo(x):
    return x + 2

@outside_data_running_average
def bar(x):
    return 3 * x

And that produces a problem:

>>> # First call to foo...
... foo(1)
Average of foo so far: 3.0
3
>>> # First call to bar...
... bar(10)
Average of bar so far: 16.5
30
>>> # Second foo should still average 3.0!
... foo(1)
Average of foo so far: 12.0
3

Because outside_data_running_average uses the same data dictionary for all the functions it decorates, the statistics are conflated.

Now, the other situation: what if you define data inside wrapper?

# This version has a DIFFERENT bug.
def running_average_data_in_wrapper(func):
    def wrapper(*args, **kwargs):
        data = {"total": 0, "count": 0}
        val = func(*args, **kwargs)
        data["total"] += val
        data["count"] += 1
        print("Average of {} so far: {:.01f}".format(
            func.__name__, data["total"] / data["count"]))
        return val
    return wrapper

@running_average_data_in_wrapper
def foo(x):
    return x + 2

Look at the average as we call this decorated function multiple times:

>>> foo(1)
Average of foo so far: 3.0
3
>>> foo(5)
Average of foo so far: 7.0
7
>>> foo(20)
Average of foo so far: 22.0
22

Do you see why the running average is wrong? The data dictionary is reset every time the decorated function is called. This is why it’s important to consider the scope when implementing your decorator. Here’s the correct version again (repeated so you don’t have to skip back):

def running_average(func):
    data = {"total": 0, "count": 0}
    def wrapper(*args, **kwargs):
        val = func(*args, **kwargs)
        data["total"] += val
        data["count"] += 1
        print("Average of {} so far: {:.01f}".format(
            func.__name__, data["total"] / data["count"]))
        return val
    return wrapper

So when exactly is running_average executed? The decorator function itself is executed exactly once for every function it decorates. If you decorate N functions, running_average is executed N times, so we get N different data dictionaries, each tied to one of the resulting decorated functions. This has nothing to do with how many times a decorated function is executed. The decorated function is, basically, one of the created wrapper functions. That wrapper can be executed many times, using the same data dictionary that was in scope when that wrapper was defined.
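You can observe this timing with a throwaway decorator (the name traced is my own) that prints once at decoration time, and again on every call:

```python
def traced(func):
    print(f"decorating {func.__name__}")       # runs once, when @traced is applied
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")      # runs on every call
        return func(*args, **kwargs)
    return wrapper

@traced        # prints: decorating foo
def foo(x):
    return x + 2

foo(1)         # prints: calling foo
foo(2)         # prints: calling foo
```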

This is why running_average produces the correct behavior:

@running_average
def foo(x):
    return x + 2

@running_average
def bar(x):
    return 3 * x

>>> # First call to foo...
... foo(1)
Average of foo so far: 3.0
3
>>> # First call to bar...
... bar(10)
Average of bar so far: 30.0
30
>>> # Second foo gives correct average this time!
... foo(1)
Average of foo so far: 3.0
3

Now, what if you want to peek into data? The way we’ve written running_average, you can’t. data persists because of the reference inside of wrapper, but there is no way you can access it directly in normal Python code. But when you do need to do this, there is a very easy solution: simply assign data as an attribute to wrapper. For example:

# collectstats is much like running_average, but lets
# you access the data dictionary directly, instead
# of printing it out.
def collectstats(func):
    data = {"total": 0, "count": 0}
    def wrapper(*args, **kwargs):
        val = func(*args, **kwargs)
        data["total"] += val
        data["count"] += 1
        return val
    wrapper.data = data
    return wrapper

See that line wrapper.data = data? Yes, you can do that. A function in Python is just an object, and in Python, you can add new attributes to objects by just assigning them. This conveniently annotates the decorated function:

@collectstats
def foo(x):
    return x + 2

>>> foo.data
{'total': 0, 'count': 0}
>>> foo(1)
3
>>> foo.data
{'total': 3, 'count': 1}
>>> foo(2)
4
>>> foo.data
{'total': 7, 'count': 2}

It’s clear now why collectstats doesn’t contain any print statement: you don’t need one! We can check the accumulated numbers at any time, because this decorator annotates the function itself, with that data attribute.

Let’s switch to another problem you might run into, and how to deal with it. Here’s a decorator that counts how many times a function has been called:

# Watch out, this has a bug...
count = 0

def countcalls(func):
    def wrapper(*args, **kwargs):
        global count
        count += 1
        print(f"# of calls: {count}")
        return func(*args, **kwargs)
    return wrapper

@countcalls
def foo(x): return x + 2

@countcalls
def bar(x): return 3 * x

This version of countcalls has a bug. Do you see it?

That’s right: it stores count as a global variable, meaning every function that is decorated will use that same variable:

>>> foo(1)
# of calls: 1
3
>>> foo(2)
# of calls: 2
4
>>> bar(3)
# of calls: 3
9
>>> bar(4)
# of calls: 4
12
>>> foo(5)
# of calls: 5
7

The solution is trickier than it seems. Here’s one attempt:

# Move count inside countcalls, and remove the
# "global count" line. But it still has a bug...
def countcalls(func):
    count = 0
    def wrapper(*args, **kwargs):
        count += 1
        print(f"# of calls: {count}")
        return func(*args, **kwargs)
    return wrapper

But that just creates a different problem:

>>> foo(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in wrapper
UnboundLocalError: local variable 'count' referenced before assignment

We can’t use global, because it’s not global. But we can use the nonlocal keyword:

# Final working version!
def countcalls(func):
    count = 0
    def wrapper(*args, **kwargs):
        nonlocal count
        count += 1
        print(f"# of calls: {count}")
        return func(*args, **kwargs)
    return wrapper

This finally works correctly:

>>> foo(1)
# of calls: 1
3
>>> foo(2)
# of calls: 2
4
>>> bar(3)
# of calls: 1
9
>>> bar(4)
# of calls: 2
12
>>> foo(5)
# of calls: 3
7
Applying nonlocal gives the count variable a special scope that is part-way between local and global. Essentially, Python searches for the nearest enclosing function scope that defines a variable named count, and both reads and rebinds that variable.
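Note that nonlocal is not specific to decorators; it works in any nested function that rebinds a name from an enclosing function’s scope. A minimal sketch, outside any decorator:

```python
def make_counter():
    count = 0
    def increment():
        nonlocal count   # rebind count in make_counter's scope
        count += 1
        return count
    return increment

counter_a = make_counter()
counter_b = make_counter()
# Each call to make_counter creates a fresh, independent count:
print(counter_a(), counter_a(), counter_b())  # 1 2 1
```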

You may be wondering why we didn’t need to use nonlocal with the first version of running_average above - here it is again, for reference:

def running_average(func):
    data = {"total": 0, "count": 0}
    def wrapper(*args, **kwargs):
        val = func(*args, **kwargs)
        data["total"] += val
        data["count"] += 1
        print("Average of {} so far: {:.01f}".format(
            func.__name__, data["total"] / data["count"]))
        return val
    return wrapper

When we have a line like count += 1, that’s actually modifying the value of the count variable itself - because it really means count = count + 1. And whenever you modify (instead of just read) a variable that was created in a larger scope, Python requires you to declare that’s what you actually want, with global or nonlocal.

Here’s the sneaky thing: when we write data["count"] += 1, that is not actually modifying data! Or rather, it’s not modifying the variable named data, which points to a dictionary object. Instead, the statement data["count"] += 1 invokes a method on the data object. This does change the state of the dictionary, but it doesn’t make data point to a different dictionary. But count += 1 makes count point to a different integer, so we need to use nonlocal there.
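A small experiment, apart from any decorator, makes the distinction visible: mutating an object reached through a closed-over name needs no declaration, while rebinding the name itself requires nonlocal:

```python
def mutating():
    data = {"count": 0}
    def bump():
        data["count"] += 1   # mutates the dict; data still names the same object
        return data["count"]
    return bump

def rebinding():
    count = 0
    def bump():
        nonlocal count       # without this line: UnboundLocalError
        count += 1           # rebinds count to a new integer object
        return count
    return bump

print(mutating()())   # 1
print(rebinding()())  # 1
```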

Decorators That Take Arguments

Early in the chapter I showed you an example decorator from the Flask framework:

@app.route("/")
def hello():
    return "<html><body>Hello World!</body></html>"

This is different from any decorator we’ve implemented so far, because it actually takes an argument. How do we write decorators that can do this? For example, imagine a family of decorators adding a number to the return value of a function:

def add2(func):
    def wrapper(n):
        return func(n) + 2
    return wrapper

def add4(func):
    def wrapper(n):
        return func(n) + 4
    return wrapper

@add2
def foo(x):
    return x ** 2

@add4
def bar(n):
    return n * 2

There is literally only one character of difference between add2 and add4; the code is repetitive, and poorly maintainable. Wouldn’t it be better if we could do something like this:

@add(2)
def foo(x):
    return x ** 2

@add(4)
def bar(n):
    return n * 2

We can. The key is to understand that add is not itself a decorator; it is a function that returns a decorator - in other words, a function that returns another function, since the returned decorator is itself a function.

To make this work, we write a function called add, which creates and returns the decorator:

def add(increment):
    def decorator(func):
        def wrapper(n):
            return func(n) + increment
        return wrapper
    return decorator

It’s easiest to understand from the inside out:

  • The wrapper function is just like in the other decorators. Ultimately, when you call foo (the original function name), it’s actually calling wrapper.
  • Moving up, we have the aptly named decorator. Hint: we could say add2 = add(2), then apply add2 as a decorator.
  • At the top level is add. This is not a decorator. It’s a function that returns a decorator.

Notice the closure here. The increment variable is encapsulated in the scope of the add function. We can’t access its value outside the decorator, in the calling context. But we don’t need to, because wrapper itself has access to it.

Suppose the Python interpreter is parsing your program, and encounters the following code:

@add(2)
def f(n):
    # ....

Python takes everything between the @-symbol and the end-of-line character as a single Python expression - that would be add(2) in this case. That expression is evaluated when the def statement is executed, at run time. Evaluating the decorator expression means executing add(2), which will return a function object. That function object is the decorator. It’s named decorator inside the body of the add function, but it doesn’t really have a name at the top level; it’s just applied to f.

What can help you see more clearly is to think of functions as things that are stored in variables. In other words, if I write def foo(x): in my code, I could say to myself "I’m creating a function called foo". But there is another way to think about it. I can say "I’m creating a function object, and storing it in a variable called foo". Believe it or not, this is much closer to how Python actually works. So things like this are possible:

>>> def foo():
...     print("This is foo")
...
>>> baz = foo
>>> baz()
This is foo
>>> # foo and baz have the same id()... so they
... # refer to the same function object.
>>> id(foo)
4301663768
>>> id(baz)
4301663768

Now, back to add. Once you realize add(2) returns a function object, it’s easy to imagine storing that in a variable named add2. As a matter of fact, the following are all exactly equivalent:

# This...
add2 = add(2)

@add2
def foo(x):
    return x ** 2

# ... has the same effect as this:
@add(2)
def foo(x):
    return x ** 2

Remember that @ is a shorthand:

# This...
@some_decorator
def some_function(arg):
    # blah blah

# ... is translated by Python into this:
def some_function(arg):
    # blah blah
some_function = some_decorator(some_function)

So for add, the following are all equivalent:

add2 = add(2)  # Store the decorator in the add2 variable

# This function definition...
@add2
def foo(x):
    return x ** 2

# ... is translated by Python into this:
def foo(x):
    return x ** 2
foo = add2(foo)

# But also, this...
@add(2)
def foo(x):
    return x ** 2

# ... is translated by Python into this:
def foo(x):
    return x ** 2
foo = add(2)(foo)

Look over these four variations, and trace through what’s going on in your mind, until you understand how they are all equivalent. The expression add(2)(foo) in particular is interesting. Python parses this left-to-right. So it first executes add(2), which returns a function object. In this expression, that function has no name; it’s temporary and anonymous. Python takes that anonymous function object, and immediately calls it, with the argument foo. (That argument is, of course, the bare function - the function which we are decorating, in other words.) The anonymous function then returns a different function object, which we finally store in the variable called foo.

Notice that in the line foo = add(2)(foo), the name foo means something different each time it’s used. Just like when you write something like n = n + 3; the name n refers to something different on either side of the equals sign. In the exact same way, in the line foo = add(2)(foo), the variable foo holds two different function objects on the left and right sides.
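If it helps, you can perform the two calls in foo = add(2)(foo) one step at a time, naming each intermediate result (step1 and step2 are my own labels, not part of the pattern):

```python
def add(increment):
    def decorator(func):
        def wrapper(n):
            return func(n) + increment
        return wrapper
    return decorator

def foo(x):
    return x ** 2

step1 = add(2)      # a decorator: a function taking a function
step2 = step1(foo)  # the decorated function
print(step2(3))     # 11, i.e. 3 ** 2 + 2

# Or all at once, exactly as Python expands @add(2):
foo = add(2)(foo)
print(foo(3))       # 11
```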

Class-based Decorators

I lied to you.

I repeatedly told you a decorator is just a function. Well, decorators are usually implemented as functions, that’s true. However, it’s also possible to implement a decorator using classes. In fact, any decorator that you can implement as a function can be done with a class instead.

Why would you do this? Basically, for certain kinds of more complex decorators, classes are better suited, more readable, or otherwise easier to work with. For example, if you have a collection of related decorators, you can leverage inheritance or other object-oriented features. Simpler decorators are better implemented as functions, though it depends on your preferences for OO versus functional abstractions. It’s best to learn both ways, then decide which you prefer in your own code on a case-by-case basis.

The secret to decorating with classes is the magic method __call__. Any object can implement __call__ to make it callable - meaning, the object can be called like a function. Here’s a simple example:

class Prefixer:
    def __init__(self, prefix):
        self.prefix = prefix

    def __call__(self, message):
        return self.prefix + message

You can then, in effect, "instantiate" functions:

>>> simonsays = Prefixer("Simon says: ")
>>> simonsays("Get up and dance!")
'Simon says: Get up and dance!'

Just looking at simonsays("Get up and dance!") in isolation, you’d never guess it is anything other than a normal function. In fact, it’s an instance of Prefixer.

You can use __call__ to implement decorators, in a very different way. Before proceeding, quiz yourself: thinking back to the @printlog decorator, and using this information about __call__, how might you implement printlog as a class instead of a function?

The basic approach is to pass func to the constructor of a decorator class, and adapt wrapper to be the __call__ method:

class PrintLog:
    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        print(f"CALLING: {self.func.__name__}")
        return self.func(*args, **kwargs)

# Compare to the function version you saw earlier:
def printlog(func):
    def wrapper(*args, **kwargs):
        print("CALLING: " + func.__name__)
        return func(*args, **kwargs)
    return wrapper

>>> @PrintLog
... def foo(x):
...     print(x + 2)
...
>>> @PrintLog
... def baz(x, y):
...     return x ** y
...
>>> foo(7)
CALLING: foo
9
>>> baz(3, 2)
CALLING: baz
9

From the point of view of the user, @PrintLog and @printlog work exactly the same.

Class-based decorators have a few advantages over function-based. For one thing, the decorator is a class, which means you can leverage inheritance. So if you have a family of related decorators, you can reuse code between them. Here’s an example:

import sys

class ResultAnnouncer:
    stream = sys.stdout
    prefix = "RESULT"

    def __init__(self, func):
        self.func = func

    def __call__(self, *args, **kwargs):
        value = self.func(*args, **kwargs)
        self.stream.write(f"{self.prefix}: {value}\n")
        return value

class StdErrResultAnnouncer(ResultAnnouncer):
    stream = sys.stderr
    prefix = "ERROR"

Another benefit is when you prefer to accumulate state in object attributes, instead of a closure. For example, the countcalls function decorator above could be implemented as a class:

class CountCalls:
    def __init__(self, func):
        self.func = func
        self.count = 0

    def __call__(self, *args, **kwargs):
        self.count += 1
        print(f"# of calls: {self.count}")
        return self.func(*args, **kwargs)

@CountCalls
def foo(x):
    return x + 2

Notice this also lets us access foo.count, if we want to check the count outside of the decorated function. The function version didn’t let us do this.
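Here’s a self-contained session demonstrating that access. (In this sketch, count starts at zero and is incremented before printing, so foo.count always equals the number of calls made so far.)

```python
class CountCalls:
    def __init__(self, func):
        self.func = func
        self.count = 0

    def __call__(self, *args, **kwargs):
        self.count += 1
        print(f"# of calls: {self.count}")
        return self.func(*args, **kwargs)

@CountCalls
def foo(x):
    return x + 2

foo(1)
foo(2)
print(foo.count)  # 2 -- readable from outside the decorated function
```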

When creating decorators which take arguments, the structure is a little different. In this case, the constructor accepts not the func object to be decorated, but the parameters on the decorator line. The __call__ method must take the func object, define a wrapper function, and return it - similar to simple function-based decorators:

# Class-based version of the "add" decorator above.
class Add:
    def __init__(self, increment):
        self.increment = increment

    def __call__(self, func):
        def wrapper(n):
            return func(n) + self.increment
        return wrapper

You then use it in a similar manner to any other argument-taking decorator:

>>> @Add(2)
... def foo(x):
...     return x ** 2
...
>>> @Add(4)
... def bar(n):
...     return n * 2
...
>>> foo(3)
11
>>> bar(77)
158

Any function-based decorator can be implemented as a class-based decorator; you simply adapt the decorator function itself to __init__, and wrapper to __call__. It’s possible to design class-based decorators which cannot be translated into a function-based form, though.

For complex decorators, some people feel that class-based are easier to read than function-based. In particular, many people seem to find multiply nested def's hard to reason about. Others (including your author) feel the opposite. This is a matter of preference, and I recommend you practice with both styles before coming to your own conclusions.

Decorators For Classes

I lied to you again. I said decorators are applied to functions and methods. Well, they can also be applied to classes.

(Understand this has nothing to do with the last section’s topic, on implementing decorators as classes. A decorator can be implemented as a function, or as a class; and that decorator can be applied to a function, or to a class. They are independent ideas; here, we are talking about how to decorate classes instead of functions.)

To introduce an example, let me explain Python’s built-in repr() function. When called with one argument, this returns a string, meant to represent the passed object. It’s similar to str(); the difference is that while str() returns a human-readable string, repr() is meant to return a string version of the Python code needed to recreate it. So imagine a simple Penny class:

class Penny:
    value = 1
penny = Penny()

Ideally, repr(penny) returns the string "Penny()". But that’s not what happens by default:

>>> class Penny:
...     value = 1
>>> penny = Penny()
>>> repr(penny)
'<__main__.Penny object at 0x10229ff60>'

You fix this by implementing a __repr__ method on your classes, which repr() will use:

>>> class Penny:
...     value = 1
...     def __repr__(self):
...         return "Penny()"
>>> penny = Penny()
>>> repr(penny)
'Penny()'

You can create a decorator that will automatically add a __repr__ method to any class. You might be able to guess how it works. Instead of a wrapper function, the decorator returns a class:

>>> def autorepr(klass):
...     def klass_repr(self):
...         return f"{klass.__name__}()"
...     klass.__repr__ = klass_repr
...     return klass
...
>>> @autorepr
... class Penny:
...     value = 1
...
>>> penny = Penny()
>>> repr(penny)
'Penny()'

It’s suitable for classes with no-argument constructors, like Penny. Note how the decorator modifies klass directly. The original class is returned; that original class just now has a __repr__ method. Can you see how this is different from what we did with decorators of functions? With those, the decorator returned a new, different function object.

Another strategy for decorating classes is closer in spirit to what we did with function decorators: create a new subclass within the decorator, and return that in its place:

def autorepr_subclass(klass):
    class NewClass(klass):
        def __repr__(self):
            return f"{klass.__name__}()"
    return NewClass

This has the disadvantage of creating a new type:

>>> @autorepr_subclass
... class Nickel:
...     value = 5
...
>>> nickel = Nickel()
>>> type(nickel)
<class '__main__.autorepr_subclass.<locals>.NewClass'>

The resulting object’s type isn’t obviously related to the decorated class. That makes debugging harder, creates unclear log messages, and has other unexpected effects. For this reason, I recommend you prefer the first approach.
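For comparison, the first approach - modifying klass in place - leaves the type untouched. A quick self-contained check, using the same autorepr decorator:

```python
def autorepr(klass):
    # Attach a __repr__ to the class itself, then return that same class.
    def klass_repr(self):
        return f"{klass.__name__}()"
    klass.__repr__ = klass_repr
    return klass

@autorepr
class Penny:
    value = 1

penny = Penny()
print(type(penny).__name__)  # Penny
print(repr(penny))           # Penny()
```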

Class decorators tend to be less useful in practice than those for functions and methods. When they are used, it’s often to automatically generate and add methods. But they are more flexible than that. You can even express the singleton pattern using class decorators:

def singleton(klass):
    instances = {}
    def get_instance():
        if klass not in instances:
            instances[klass] = klass()
        return instances[klass]
    return get_instance

# There is only one Elvis.
@singleton
class Elvis:
    pass

Note the IDs are the same:

>>> elvis1 = Elvis()
>>> elvis2 = Elvis()
>>>
>>> id(elvis1)
4333747560
>>> id(elvis2)
4333747560
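An is comparison makes the same point more directly than comparing IDs. A self-contained sketch, using the singleton decorator above:

```python
def singleton(klass):
    instances = {}
    def get_instance():
        # Create the one instance on first call; reuse it ever after.
        if klass not in instances:
            instances[klass] = klass()
        return instances[klass]
    return get_instance

@singleton
class Elvis:
    pass

print(Elvis() is Elvis())  # True
```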

Preserving the Wrapped Function

The techniques in this chapter for creating decorators are time-tested, and valuable in many situations. But the resulting decorators have a few problems:

  • Function objects automatically have certain attributes, like __name__, __doc__, __module__, etc. The wrapper clobbers all these, breaking any code relying on them.
  • Decorators interfere with introspection - masking the wrapped function’s signature, and blocking inspect.getsource().
  • Decorators cannot be applied in certain more exotic situations - like class methods, or descriptors - without going through some heroic contortions.

The first problem is easily solved using the standard library’s functools module. It includes a function called wraps, which you use like this:

import functools

def printlog(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("CALLING: " + func.__name__)
        return func(*args, **kwargs)
    return wrapper

That’s right - functools.wraps is a decorator that you use inside your own decorator. When applied to the wrapper function, it essentially copies certain attributes from the wrapped function to the wrapper. It is equivalent to this:

def printlog(func):
    def wrapper(*args, **kwargs):
        print("CALLING: " + func.__name__)
        return func(*args, **kwargs)
    wrapper.__name__ = func.__name__
    wrapper.__doc__ = func.__doc__
    wrapper.__module__ = func.__module__
    wrapper.__annotations__ = func.__annotations__
    return wrapper

Here it is in action:

>>> @printlog
... def foo(x):
...     "Double-increment and print number."
...     print(x + 2)
...
>>> # functools.wraps transfers the wrapped function's attributes:
>>> foo.__name__
'foo'
>>> print(foo.__doc__)
Double-increment and print number.

Contrast this with the default behavior:

>>> # What you get without functools.wraps:
>>> foo.__name__
'wrapper'
>>> print(foo.__doc__)
None

In addition to saving you lots of tedious typing, functools.wraps encapsulates the details of what to copy over, so you don’t need to worry if new attributes are introduced in future versions of Python. For example, the __annotations__ attribute was added in Python 3; those who used functools.wraps in their Python 2 code had one less thing to worry about when porting to Python 3.
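To verify that behavior, here is a short sketch showing wraps carrying __annotations__ over along with the name (the decorator is a generic pass-through, included only for the demonstration):

```python
import functools

def passthrough(func):
    # A do-nothing decorator, used here just to show what wraps copies.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@passthrough
def add_nums(x: int, y: int) -> int:
    return x + y

print(add_nums.__name__)         # add_nums
print(add_nums.__annotations__)  # includes 'x', 'y', and 'return'
```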

functools.wraps is actually a convenient shortcut for the more general update_wrapper. Since wraps only works with function-based decorators, your class-based decorators must use update_wrapper instead:

import functools

class PrintLog:
    def __init__(self, func):
        self.func = func
        functools.update_wrapper(self, func)
    def __call__(self, *args, **kwargs):
        print(f"CALLING: {self.func.__name__}")
        return self.func(*args, **kwargs)
While useful for copying over __name__, __doc__, and the other attributes, wraps and update_wrapper do not help with the other problems mentioned above. The closest to a full solution is Graham Dumpleton’s wrapt library.[13] Decorators created using the wrapt module work in situations that cause normal decorators to break, and behave correctly when used with more exotic Python language features.

So what should you do in practice?

Common advice says to proactively use functools.wraps in all your decorators. I have a different, probably controversial opinion, born from observing that most Pythonistas in the wild - myself included - do not regularly use it, even though we know the implications.

While it’s true that using functools.wraps on all your decorators will prevent certain problems, doing so is not completely free. There is a cognitive cost, in that you have to remember to use it - at least, unless you make it an ingrained, fully automatic habit. It’s boilerplate which takes extra time to write, and which references the func parameter - so there’s something else to modify if you change its name. And with wrapt, you have another library dependency to manage.

All these amount to a small distraction each time you write a decorator. And when you do have a problem that functools.wraps or the wrapt module would solve, you are likely to encounter it during development, rather than have it show up unexpectedly in production. (Look at the list above again, and this will be evident.) When that happens, you can just add it and move on.

The biggest exception is probably when you are using some kind of automated API documentation tool,[14] which will use each function’s __doc__ attribute to generate reference docs. Since decorators systematically clobber that attribute, it makes sense to document a policy of using functools.wraps for all decorators in your coding style guidelines, and enforce it in code reviews.

Aside from situations like this, though, the problems with decorators will be largely theoretical for most (but not all) developers. If you are in that category, I recommend optimistically writing decorators without bothering to use wraps, update_wrapper, or the wrapt module. If and when you realize you are having a problem that these would solve for a specific decorator, introduce them then.[15]



[9] Writing decorators builds on the "Advanced Functions" chapter. If you are not already familiar with that material, read it first.

[10] For Java people: this looks just like Java annotations. The resemblance is purely syntactic, however; Python decorators are completely different in both mechanism and purpose.

[11] Some authors use the phrase "decorated function" to mean "the function that is decorated" - what I’m calling the "bare function". If you read a lot of blog posts, you’ll find the phrase used both ways (sometimes in the same article), but we’ll consistently use the definitions above.

[12] In a real application, you’d write the average to some kind of log sink, but we’ll use print() here because it’s convenient for learning.

[15] A perfect example of this happens towards the end of the "Building a RESTful API Server in Python" video (https://powerfulpython.com/store/restful-api-server/), when I create the validate_summary decorator. Applying the decorator to a couple of Flask views immediately triggers a routing error, which I then fix using wraps.


Next Chapter: Exceptions and Errors

Previous Chapter: Advanced Functions