Decorators
Python supports a powerful tool called the decorator. Decorators let you add rich features to groups of functions and classes, without modifying them at all; untangle distinct, frustratingly intertwined concerns in your code, in ways not otherwise possible; and build powerful, extensible software frameworks. Many of the most popular and important Python libraries in the world leverage decorators. This chapter teaches you how to do the same.
A decorator is something you apply to a function or method. You’ve
probably seen decorators before. There’s a decorator called
property often used in classes:
- >>> class Person:
- ...     def __init__(self, first_name, last_name):
- ...         self.first_name = first_name
- ...         self.last_name = last_name
- ...
- ...     @property
- ...     def full_name(self):
- ...         return self.first_name + " " + self.last_name
- ...
- >>> person = Person("John", "Smith")
- >>> print(person.full_name)
- John Smith
(Notice the syntax: we write person.full_name, not person.full_name().)
Another example: in the Flask web framework, here is
how you define a simple home page:
- @app.route("/")
- def hello():
-     return "<html><body>Hello World!</body></html>"
The app.route("/") is a decorator, applied here to the function
called hello. So an HTTP GET request to the root URL ("/") will be
handled by the hello function.
A decorator works by adding behavior around a function - meaning, lines of code which are executed before that function begins, after it returns, or both. It does not alter any lines of code inside the function. Typically, when you go to the trouble to define a decorator, you plan to use it on at least two different functions, usually more. Otherwise you’d just put the extra code inside the lone function, and not bother writing a decorator.
Using decorators is simple and easy; even someone new to programming can quickly learn that. Our objective is different: to give you the ability to write your own decorators, in many different useful forms. This is not a beginner topic; it barely qualifies as intermediate. It requires a deep understanding of several sophisticated Python features, and how they play together. Most Python developers never learn how to create them. In this chapter, you will.[9]
The Basic Decorator
Once a decorator is written, using it is easy. You just write @ and
the decorator name, on the line before you define a function:
- @some_decorator
- def some_function(arg):
-     # blah blah
This applies the decorator called some_decorator to
some_function.[10] Now, it turns out this syntax with the @
symbol is a shorthand. In essence, Python translates the above into this:
- def some_function(arg):
-     # blah blah
- some_function = some_decorator(some_function)
This is valid Python code too, and what people did before the @
syntax came along. The key here is the last line:
- some_function = some_decorator(some_function)
First, understand that a decorator is just a function. That’s it. It
happens to be a function taking one argument, which is the function
object being decorated. It then returns a different function. In the
code snippet above, you define a function initially called
some_function. That function object is passed to some_decorator,
which returns a different function object, which is finally stored
in some_function.
To keep us sane, let’s define some terminology:
- The decorator is what comes after the @. It’s a function.
- The bare function is what’s def'ed on the next line. It is, obviously, also a function.
- The end result is the decorated function. It’s the final function you actually call in your code.[11]
Your mastery of decorators will be most graceful if you remember one thing: a decorator is just a normal, boring function. It happens to be a function taking exactly one argument, which is itself a function. And when called, the decorator returns a different function.
Let’s make this concrete. Here’s a simple decorator which logs a message to stdout, every time the decorated function is called.
- def printlog(func):
-     def wrapper(arg):
-         print("CALLING: " + func.__name__)
-         return func(arg)
-     return wrapper
-
- @printlog
- def foo(x):
-     print(x + 2)
Notice this decorator creates a new function, called wrapper,
and returns that. This is then assigned to the variable
foo, replacing the undecorated, bare function:
- # Remember, this...
- @printlog
- def foo(x):
-     print(x + 2)
-
- # ...is the exact same as this:
- def foo(x):
-     print(x + 2)
- foo = printlog(foo)
Here’s the result:
- >>> foo(3)
- CALLING: foo
- 5
At a high level, the body of printlog does two things: define a
function called wrapper, then return it. Many decorators will
follow that structure. Notice printlog does not modify the behavior
of the original function foo itself; all wrapper does is print a
message to standard out, before calling the original (bare)
function.
Once you’ve applied a decorator, the bare function isn’t directly
accessible anymore; you can’t call it in your code. Its name now
applies to the decorated version. But that decorated function
internally retains a reference to the bare function, calling it inside
wrapper.
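You can verify this in the interpreter - after decoration, the name foo refers to the wrapper function object that printlog returned (a quick sketch, reusing the foo defined just above):
- >>> foo.__name__
- 'wrapper'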
This version of printlog has a big shortcoming, though. Look what
happens when I apply it to a different function:
- >>> @printlog
- ... def baz(x, y):
- ...     return x ** y
- ...
- >>> baz(3, 2)
- Traceback (most recent call last):
-   File "<stdin>", line 1, in <module>
- TypeError: wrapper() takes 1 positional argument but 2 were given
Can you spot what went wrong?
printlog is built to wrap a function taking exactly one
argument. But baz has two, so when the decorated function is called,
the whole thing blows up. There’s no reason printlog needs to have
this restriction; all it’s doing is printing the function name. You
fix it by declaring wrapper with variable arguments:
- # A MUCH BETTER printlog.
- def printlog(func):
-     def wrapper(*args, **kwargs):
-         print("CALLING: " + func.__name__)
-         return func(*args, **kwargs)
-     return wrapper
This decorator is compatible with any Python function:
- >>> @printlog
- ... def foo(x):
- ...     print(x + 2)
- ...
- >>> @printlog
- ... def baz(x, y):
- ...     return x ** y
- ...
- >>> foo(7)
- CALLING: foo
- 9
- >>> baz(3, 2)
- CALLING: baz
- 9
A decorator written this way, using variable arguments, will potentially work with functions and methods written years later - code the original developer never imagined. This structure has proven very powerful and versatile.
- # The prototypical form of Python decorators.
- def prototype_decorator(func):
-     def wrapper(*args, **kwargs):
-         return func(*args, **kwargs)
-     return wrapper
We don’t always do this, though. Sometimes you are writing a decorator that only applies to a function or method with a very specific kind of signature, and it would be an error to use it anywhere else. So feel free to break this rule when you have a reason.
Decorators apply to methods just as well as to functions. And you
often don’t need to change anything: when the wrapper has a
signature of wrapper(*args, **kwargs), like printlog does, it
works just fine with any object’s method. But sometimes you will see
code like this:
- # Not really necessary.
- def printlog_for_method(func):
-     def wrapper(self, *args, **kwargs):
-         print("CALLING: " + func.__name__)
-         return func(self, *args, **kwargs)
-     return wrapper
This is a bit interesting. This wrapper has one required argument,
named self. And it works fine when applied to a method. But for the
decorator I’ve written here, that self is completely unnecessary,
and in fact has a downside.
Simply defining wrapper(*args, **kwargs) causes self to be
considered one of the args; such a decorator works just as well with
both functions and methods. But if a wrapper is defined to require
self, that means it must always be called with at least one
argument. Suddenly you have a decorator that cannot be applied to
functions without at least one argument. (That it’s named self
doesn’t matter; it’s just a temporary name for that first argument,
inside the scope of wrapper.) You can apply this decorator to any
method, and to some functions. But if you apply it to a function that
takes no arguments, you’ll get a run-time error.
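Here is a sketch of that failure mode, applying printlog_for_method to a hypothetical zero-argument function (say_hi is just an illustrative name):
- >>> @printlog_for_method
- ... def say_hi():
- ...     print("Hi!")
- ...
- >>> say_hi()
- Traceback (most recent call last):
-   File "<stdin>", line 1, in <module>
- TypeError: wrapper() missing 1 required positional argument: 'self'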
Now, here’s a different decorator:
- # This is more sensible.
- def enhanced_printlog_for_method(func):
-     def wrapper(self, *args, **kwargs):
-         print("CALLING: {} on object ID {}".format(
-             func.__name__, id(self)))
-         return func(self, *args, **kwargs)
-     return wrapper
It could be applied like this:
- class Invoice:
-     def __init__(self, id_number, total):
-         self.id_number = id_number
-         self.total = total
-         self.owed = total
-     @enhanced_printlog_for_method
-     def record_payment(self, amount):
-         self.owed -= amount
-
- inv = Invoice(42, 117.55)
- print(f"ID of inv: {id(inv)}")
- inv.record_payment(55.35)
Here’s the output when you execute:
- ID of inv: 4320786472
- CALLING: record_payment on object ID 4320786472
This is a different story, because this wrapper's body
explicitly uses the current object - a concept that only makes sense
for methods. That makes the self argument perfectly appropriate. It
prevents you from using this decorator on some functions, but it would
actually be an error to apply it to a non-method anyway.
When writing a decorator for methods, I recommend you get in the habit
of making your wrapper only take *args and **kwargs, except
when you have a clear reason to do differently. After you’ve written
decorators for a while, you’ll be surprised at how often you end up
using old decorators on new functions, in ways you never imagined at
first. A signature of wrapper(*args, **kwargs) preserves that
flexibility. If the decorator turns out to need an explicit self
argument, it’s easy enough to put that in.
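For instance, the generic printlog from earlier decorates a method without any changes - a small sketch, using a made-up Account class:
- >>> class Account:
- ...     def __init__(self, balance):
- ...         self.balance = balance
- ...     @printlog
- ...     def deposit(self, amount):
- ...         self.balance += amount
- ...
- >>> acct = Account(100)
- >>> acct.deposit(50)
- CALLING: deposit
- >>> acct.balance
- 150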
Data In Decorators
Some of the most valuable decorator patterns rely on using variables inside the decorator function itself. This is not the same as using variables inside the wrapper function. Let me explain.
Imagine you need to keep a running average of what some function
returns. And further, you need to do this for a family of functions or
methods. We can write a decorator called running_average to handle
this - as you read, note carefully how data is defined and used:
- def running_average(func):
-     data = {"total": 0, "count": 0}
-     def wrapper(*args, **kwargs):
-         val = func(*args, **kwargs)
-         data["total"] += val
-         data["count"] += 1
-         print("Average of {} so far: {:.01f}".format(
-             func.__name__, data["total"] / data["count"]))
-         return val
-     return wrapper
Each time the function is called, the average of all calls so far is
printed out.[12] Decorator functions are
called once for each function they are applied to. Then, each time
that function is called in the code, the wrapper function is what’s
actually executed. So imagine applying it to a function like this:
- @running_average
- def foo(x):
-     return x + 2
This creates an internal dictionary, named data, used to keep track
of foo's metrics. Running foo several times produces:
- >>> foo(1)
- Average of foo so far: 3.0
- 3
- >>> foo(10)
- Average of foo so far: 7.5
- 12
- >>> foo(1)
- Average of foo so far: 6.0
- 3
- >>> foo(1)
- Average of foo so far: 5.2
- 3
The placement of data is important. Pop quiz:
- What happens if you move the line defining data up one line, outside the running_average function?
- What happens if you move that line down, into the wrapper function?
Looking at the code above, decide on your answers to these questions before reading further.
Here’s what it looks like if you create data outside the decorator:
- # This version has a bug.
- data = {"total": 0, "count": 0}
- def outside_data_running_average(func):
-     def wrapper(*args, **kwargs):
-         val = func(*args, **kwargs)
-         data["total"] += val
-         data["count"] += 1
-         print("Average of {} so far: {:.01f}".format(
-             func.__name__, data["total"] / data["count"]))
-         return val
-     return wrapper
If we do this, every decorated function shares the exact same data
dictionary! This actually doesn’t matter if you only ever decorate
just one function. But you never bother to write a decorator unless
it’s going to be applied to at least two:
- @outside_data_running_average
- def foo(x):
-     return x + 2
-
- @outside_data_running_average
- def bar(x):
-     return 3 * x
And that produces a problem:
- >>> # First call to foo...
- ... foo(1)
- Average of foo so far: 3.0
- 3
- >>> # First call to bar...
- ... bar(10)
- Average of bar so far: 16.5
- 30
- >>> # Second foo should still average 3.0!
- ... foo(1)
- Average of foo so far: 12.0
- 3
Because outside_data_running_average uses the same data
dictionary for all the functions it decorates, the statistics are
conflated.
Now, the other situation: what if you define data inside wrapper?
- # This version has a DIFFERENT bug.
- def running_average_data_in_wrapper(func):
-     def wrapper(*args, **kwargs):
-         data = {"total": 0, "count": 0}
-         val = func(*args, **kwargs)
-         data["total"] += val
-         data["count"] += 1
-         print("Average of {} so far: {:.01f}".format(
-             func.__name__, data["total"] / data["count"]))
-         return val
-     return wrapper
-
- @running_average_data_in_wrapper
- def foo(x):
-     return x + 2
Look at the average as we call this decorated function multiple times:
- >>> foo(1)
- Average of foo so far: 3.0
- 3
- >>> foo(5)
- Average of foo so far: 7.0
- 7
- >>> foo(20)
- Average of foo so far: 22.0
- 22
Do you see why the running average is wrong? The data dictionary is
reset every time the decorated function is called. This is why it’s
important to consider the scope when implementing your
decorator. Here’s the correct version again (repeated so you don’t
have to skip back):
- def running_average(func):
-     data = {"total": 0, "count": 0}
-     def wrapper(*args, **kwargs):
-         val = func(*args, **kwargs)
-         data["total"] += val
-         data["count"] += 1
-         print("Average of {} so far: {:.01f}".format(
-             func.__name__, data["total"] / data["count"]))
-         return val
-     return wrapper
So when exactly is running_average executed? The decorator function
itself is executed exactly once for every function it
decorates. If you decorate N functions, running_average is executed
N times, so we get N different data dictionaries, each tied to one
of the resulting decorated functions. This has nothing to do with how
many times a decorated function is executed. The decorated function
is, basically, one of the created wrapper functions. That wrapper
can be executed many times, using the same data dictionary that was
in scope when that wrapper was defined.
This is why running_average produces the correct behavior:
- @running_average
- def foo(x):
-     return x + 2
-
- @running_average
- def bar(x):
-     return 3 * x
- >>> # First call to foo...
- ... foo(1)
- Average of foo so far: 3.0
- 3
- >>> # First call to bar...
- ... bar(10)
- Average of bar so far: 30.0
- 30
- >>> # Second foo gives correct average this time!
- ... foo(1)
- Average of foo so far: 3.0
- 3
Now, what if you want to peek into data? The way we’ve written
running_average, you can’t. data persists because of the
reference inside of wrapper. There is no way you can access it
directly in normal Python code.
But when you do need to do this, there is a very easy solution:
simply assign data as an attribute to wrapper. For example:
- # collectstats is much like running_average, but lets
- # you access the data dictionary directly, instead
- # of printing it out.
- def collectstats(func):
-     data = {"total": 0, "count": 0}
-     def wrapper(*args, **kwargs):
-         val = func(*args, **kwargs)
-         data["total"] += val
-         data["count"] += 1
-         return val
-     wrapper.data = data
-     return wrapper
See that line wrapper.data = data? Yes, you can do that. A function
in Python is just an object, and you can add new attributes to objects
simply by assigning them. This conveniently annotates the decorated
function:
- @collectstats
- def foo(x):
-     return x + 2
- >>> foo.data
- {'total': 0, 'count': 0}
- >>> foo(1)
- 3
- >>> foo.data
- {'total': 3, 'count': 1}
- >>> foo(2)
- 4
- >>> foo.data
- {'total': 7, 'count': 2}
It’s clear now why collectstats doesn’t contain any print statement:
you don’t need one! We can check the accumulated numbers at any time,
because this decorator annotates the function itself, with that data
attribute.
Let’s switch to another problem you might run into, and how to deal with it. Here’s a decorator that counts how many times a function has been called:
- # Watch out, this has a bug...
- count = 0
- def countcalls(func):
-     def wrapper(*args, **kwargs):
-         global count
-         count += 1
-         print(f"# of calls: {count}")
-         return func(*args, **kwargs)
-     return wrapper
-
- @countcalls
- def foo(x): return x + 2
-
- @countcalls
- def bar(x): return 3 * x
This version of countcalls has a bug. Do you see it?
That’s right: it stores count as a global variable, meaning every
function that is decorated will use that same variable:
- >>> foo(1)
- # of calls: 1
- 3
- >>> foo(2)
- # of calls: 2
- 4
- >>> bar(3)
- # of calls: 3
- 9
- >>> bar(4)
- # of calls: 4
- 12
- >>> foo(5)
- # of calls: 5
- 7
The solution is trickier than it seems. Here’s one attempt:
- # Move count inside countcalls, and remove the
- # "global count" line. But it still has a bug...
- def countcalls(func):
-     count = 0
-     def wrapper(*args, **kwargs):
-         count += 1
-         print(f"# of calls: {count}")
-         return func(*args, **kwargs)
-     return wrapper
But that just creates a different problem:
- >>> foo(1)
- Traceback (most recent call last):
-   File "<stdin>", line 1, in <module>
-   File "<stdin>", line 6, in wrapper
- UnboundLocalError: local variable 'count' referenced before assignment
We can’t use global, because it’s not global. But we
can use the nonlocal keyword:
- # Final working version!
- def countcalls(func):
-     count = 0
-     def wrapper(*args, **kwargs):
-         nonlocal count
-         count += 1
-         print(f"# of calls: {count}")
-         return func(*args, **kwargs)
-     return wrapper
This finally works correctly:
- >>> foo(1)
- # of calls: 1
- 3
- >>> foo(2)
- # of calls: 2
- 4
- >>> bar(3)
- # of calls: 1
- 9
- >>> bar(4)
- # of calls: 2
- 12
- >>> foo(5)
- # of calls: 3
- 7
Applying nonlocal gives the count variable a
special scope that is part-way between local and global. Essentially,
Python searches for the nearest enclosing function scope that defines a
variable named count, and rebinds that variable - much as global does
with module-level names.
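If nonlocal is new to you, here is a minimal sketch of it outside the decorator context (make_counter and tick are just illustrative names):
- >>> def make_counter():
- ...     count = 0
- ...     def increment():
- ...         # Rebind count in make_counter's enclosing scope.
- ...         nonlocal count
- ...         count += 1
- ...         return count
- ...     return increment
- ...
- >>> tick = make_counter()
- >>> tick()
- 1
- >>> tick()
- 2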
You may be wondering why we didn’t need to use nonlocal with the
first version of running_average above - here it is again, for
reference:
- def running_average(func):
-     data = {"total": 0, "count": 0}
-     def wrapper(*args, **kwargs):
-         val = func(*args, **kwargs)
-         data["total"] += val
-         data["count"] += 1
-         print("Average of {} so far: {:.01f}".format(
-             func.__name__, data["total"] / data["count"]))
-         return val
-     return wrapper
When we have a line like count += 1, that’s actually modifying the
value of the count variable itself - because it really means count =
count + 1. And whenever you modify (instead of just read) a variable
that was created in a larger scope, Python requires you to declare
that’s what you actually want, with global or nonlocal.
Here’s the sneaky thing: when we write data["count"] += 1, that is
not actually modifying data! Or rather, it’s not modifying the
variable named data, which points to a dictionary object.
Instead, the statement data["count"] += 1 invokes a method on the
data object. This does change the state of the dictionary, but it
doesn’t make data point to a different dictionary. But count += 1
makes count point to a different integer, so we need to use
nonlocal there.
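Here is a minimal sketch of that distinction, outside of any decorator (outer, mutate, and bump are illustrative names): mutating a dictionary from an enclosing scope needs no declaration, while rebinding a plain variable would.
- >>> def outer():
- ...     data = {"count": 0}
- ...     def mutate():
- ...         # Mutates the dict; data itself is never rebound,
- ...         # so no nonlocal declaration is needed.
- ...         data["count"] += 1
- ...         return data["count"]
- ...     return mutate
- ...
- >>> bump = outer()
- >>> bump()
- 1
- >>> bump()
- 2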
Decorators That Take Arguments
Early in the chapter I showed you an example decorator from the Flask framework:
- @app.route("/")
- def hello():
-     return "<html><body>Hello World!</body></html>"
This is different from any decorator we’ve implemented so far, because it actually takes an argument. How do we write decorators that can do this? For example, imagine a family of decorators adding a number to the return value of a function:
- def add2(func):
-     def wrapper(n):
-         return func(n) + 2
-     return wrapper
-
- def add4(func):
-     def wrapper(n):
-         return func(n) + 4
-     return wrapper
-
- @add2
- def foo(x):
-     return x ** 2
-
- @add4
- def bar(n):
-     return n * 2
There is literally only one character difference between add2 and
add4; this is repetitive and hard to maintain. Wouldn’t it be
better if we could do something like this:
- @add(2)
- def foo(x):
-     return x ** 2
-
- @add(4)
- def bar(n):
-     return n * 2
We can. The key is to understand that add is not actually a
decorator; it is a function that returns a decorator. In other
words, add is a function that returns another function (since the
returned decorator is, itself, a function).
To make this work, we write a function called add, which creates and
returns the decorator:
- def add(increment):
-     def decorator(func):
-         def wrapper(n):
-             return func(n) + increment
-         return wrapper
-     return decorator
It’s easiest to understand from the inside out:
- The wrapper function is just like in the other decorators. Ultimately, when you call foo (the original function name), it’s actually calling wrapper.
- Moving up, we have the aptly named decorator. Hint: we could say add2 = add(2), then apply add2 as a decorator.
- At the top level is add. This is not a decorator. It’s a function that returns a decorator.
Notice the closure here. The increment variable is encapsulated
in the scope of the add function. We can’t access its value outside
the decorator, in the calling context. But we don’t need to, because
wrapper itself has access to it.
Suppose the Python interpreter is parsing your program, and encounters the following code:
- @add(2)
- def f(n):
-     # ....
Python takes everything between the @ symbol and the end of the line
as a single Python expression - add(2), in this case. That expression
is evaluated when the def statement is executed (typically, when the
module is first imported), not when f is later called. Evaluating the
decorator expression means executing add(2), which returns a function
object. That function object is the decorator. It’s named decorator
inside the body of the add function, but it doesn’t really have a name
at the top level; it’s just applied to f.
What can help you see this more clearly is to think of functions as
things that are stored in variables. In other words, if I write def
foo(x): in my code, I could say to myself, "I’m creating a function
called foo". But there is another way to think about it: "I’m creating
a function object, and storing it in a variable called foo". Believe
it or not, this is much closer to how Python actually works. So things
like this are possible:
- >>> def foo():
- ...     print("This is foo")
- ...
- >>> baz = foo
- >>> baz()
- This is foo
- >>> # foo and baz have the same id()... so they
- ... # refer to the same function object.
- >>> id(foo)
- 4301663768
- >>> id(baz)
- 4301663768
Now, back to add. Once you realize that add(2) returns a function
object, it’s easy to imagine storing it in a variable named add2. As a
matter of fact, the following are all exactly equivalent:
- # This...
- add2 = add(2)
- @add2
- def foo(x):
-     return x ** 2
-
- # ... has the same effect as this:
- @add(2)
- def foo(x):
-     return x ** 2
Remember that @ is a shorthand:
- # This...
- @some_decorator
- def some_function(arg):
-     # blah blah
-
- # ... is translated by Python into this:
- def some_function(arg):
-     # blah blah
- some_function = some_decorator(some_function)
So for add, the following are all equivalent:
- add2 = add(2)  # Store the decorator in the add2 variable
-
- # This function definition...
- @add2
- def foo(x):
-     return x ** 2
-
- # ... is translated by Python into this:
- def foo(x):
-     return x ** 2
- foo = add2(foo)
-
- # But also, this...
- @add(2)
- def foo(x):
-     return x ** 2
-
- # ... is translated by Python into this:
- def foo(x):
-     return x ** 2
- foo = add(2)(foo)
Look over these four variations, and trace through what’s going on in
your mind, until you understand how they are all equivalent. The
expression add(2)(foo) in particular is interesting. Python parses
this left-to-right. So it first executes add(2), which returns a
function object. In this expression, that function has no name; it’s
temporary and anonymous. Python takes that anonymous function
object, and immediately calls it, with the argument foo. (That
argument is, of course, the bare function - the function which we are
decorating, in other words.) The anonymous function then returns a
different function object, which we finally store in the variable
called foo.
Notice that in the line foo = add(2)(foo), the name foo means
something different each time it’s used. Just like when you write
something like n = n + 3; the name n refers to something different
on either side of the equals sign. In the exact same way, in the line
foo = add(2)(foo), the variable foo holds two different function
objects on the left and right sides.
Class-based Decorators
I lied to you.
I repeatedly told you a decorator is just a function. Well, decorators are usually implemented as functions, that’s true. However, it’s also possible to implement a decorator using classes. In fact, any decorator that you can implement as a function can be done with a class instead.
Why would you do this? Basically, for certain kinds of more complex decorators, classes are better suited, more readable, or otherwise easier to work with. For example, if you have a collection of related decorators, you can leverage inheritance or other object-oriented features. Simpler decorators are better implemented as functions, though it depends on your preferences for OO versus functional abstractions. It’s best to learn both ways, then decide which you prefer in your own code on a case-by-case basis.
The secret to decorating with classes is the magic method
__call__. Any object can implement
__call__ to make it callable - meaning, the object can be called
like a function. Here’s a simple example:
- class Prefixer:
-     def __init__(self, prefix):
-         self.prefix = prefix
-     def __call__(self, message):
-         return self.prefix + message
You can then, in effect, "instantiate" functions:
- >>> simonsays = Prefixer("Simon says: ")
- >>> simonsays("Get up and dance!")
- 'Simon says: Get up and dance!'
Just looking at simonsays("Get up and
dance!") in isolation, you’d never guess it is anything other than a
normal function. In fact, it’s an instance of Prefixer.
You can use __call__ to implement decorators, in a very different
way. Before proceeding, quiz yourself: thinking back to the
@printlog decorator, and using this information about __call__,
how might you implement printlog as a class instead of a function?
The basic approach is to pass func to the constructor of a
decorator class, and adapt wrapper to be the __call__ method:
- class PrintLog:
-     def __init__(self, func):
-         self.func = func
-     def __call__(self, *args, **kwargs):
-         print(f"CALLING: {self.func.__name__}")
-         return self.func(*args, **kwargs)
-
- # Compare to the function version you saw earlier:
- def printlog(func):
-     def wrapper(*args, **kwargs):
-         print("CALLING: " + func.__name__)
-         return func(*args, **kwargs)
-     return wrapper
- >>> @PrintLog
- ... def foo(x):
- ...     print(x + 2)
- ...
- >>> @PrintLog
- ... def baz(x, y):
- ...     return x ** y
- ...
- >>> foo(7)
- CALLING: foo
- 9
- >>> baz(3, 2)
- CALLING: baz
- 9
From the point of view of the user, @PrintLog and @printlog work
exactly the same.
Class-based decorators have a few advantages over function-based. For one thing, the decorator is a class, which means you can leverage inheritance. So if you have a family of related decorators, you can reuse code between them. Here’s an example:
- import sys
- class ResultAnnouncer:
-     stream = sys.stdout
-     prefix = "RESULT"
-     def __init__(self, func):
-         self.func = func
-     def __call__(self, *args, **kwargs):
-         value = self.func(*args, **kwargs)
-         self.stream.write(f"{self.prefix}: {value}\n")
-         return value
-
- class StdErrResultAnnouncer(ResultAnnouncer):
-     stream = sys.stderr
-     prefix = "ERROR"
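Used on a couple of made-up functions, they might behave like this (a sketch; in an interactive session, the stderr line from the second example simply appears in the terminal alongside the regular output):
- >>> @ResultAnnouncer
- ... def double(x):
- ...     return x * 2
- ...
- >>> double(21)
- RESULT: 42
- 42
- >>> @StdErrResultAnnouncer
- ... def triple(x):
- ...     return x * 3
- ...
- >>> triple(5)
- ERROR: 15
- 15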
Another benefit is when you prefer to accumulate state in object
attributes, instead of a closure. For example, the countcalls
function decorator above could be implemented as a class:
- class CountCalls:
-     def __init__(self, func):
-         self.func = func
-         self.count = 0
-     def __call__(self, *args, **kwargs):
-         self.count += 1
-         print(f"# of calls: {self.count}")
-         return self.func(*args, **kwargs)
-
- @CountCalls
- def foo(x):
-     return x + 2
Notice this also lets us access foo.count, if we want to check the
count outside of the decorated function. The function version didn’t
let us do this.
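For example (a short sketch, reusing the foo defined just above):
- >>> foo(10)
- # of calls: 1
- 12
- >>> foo(20)
- # of calls: 2
- 22
- >>> foo.count
- 2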
When creating decorators which take arguments, the structure is a
little different. In this case, the constructor accepts not the func
object to be decorated, but the parameters on the decorator line. The
__call__ method must take the func object, define a wrapper
function, and return it - similar to simple function-based decorators:
- # Class-based version of the "add" decorator above.
- class Add:
-     def __init__(self, increment):
-         self.increment = increment
-     def __call__(self, func):
-         def wrapper(n):
-             return func(n) + self.increment
-         return wrapper
You then use it in a similar manner to any other argument-taking decorator:
- >>> @Add(2)
- ... def foo(x):
- ...     return x ** 2
- ...
- >>> @Add(4)
- ... def bar(n):
- ...     return n * 2
- ...
- >>> foo(3)
- 11
- >>> bar(77)
- 158
Any function-based decorator can be implemented as a class-based
decorator; you simply adapt the decorator function itself to __init__,
and wrapper to __call__. It’s possible to design class-based
decorators which cannot be translated into a function-based form, though.
For complex decorators, some people feel that class-based are easier
to read than function-based. In particular, many people seem to find
multiply nested def's hard to reason about. Others (including your
author) feel the opposite. This is a matter of preference, and I
recommend you practice with both styles before coming to your own
conclusions.
Decorators For Classes
I lied to you again. I said decorators are applied to functions and methods. Well, they can also be applied to classes.
(Understand this has nothing to do with the last section’s topic, on implementing decorators as classes. A decorator can be implemented as a function, or as a class; and that decorator can be applied to a function, or to a class. They are independent ideas; here, we are talking about how to decorate classes instead of functions.)
To introduce an example, let me explain Python’s built-in repr()
function. When called with one argument, this returns a string, meant
to represent the passed object. It’s similar to str(); the
difference is that while str() returns a human-readable string,
repr() is meant to return a string version of the Python code needed
to recreate it. So imagine a simple Penny class:
- class Penny:
-     value = 1
-
- penny = Penny()
Ideally, repr(penny) returns the string "Penny()". But that’s not
what happens by default:
- >>> class Penny:
- ...     value = 1
- ...
- >>> penny = Penny()
- >>> repr(penny)
- '<__main__.Penny object at 0x10229ff60>'
You fix this by implementing a __repr__ method on your classes,
which repr() will use:
- >>> class Penny:
- ...     value = 1
- ...     def __repr__(self):
- ...         return "Penny()"
- ...
- >>> penny = Penny()
- >>> repr(penny)
- 'Penny()'
You can create a decorator that will automatically add a __repr__
method to any class. You might be able to guess how it works.
Instead of a wrapper function, the decorator returns a class:
- >>> def autorepr(klass):
- ...     def klass_repr(self):
- ...         return f"{klass.__name__}()"
- ...     klass.__repr__ = klass_repr
- ...     return klass
- ...
- >>> @autorepr
- ... class Penny:
- ...     value = 1
- ...
- >>> penny = Penny()
- >>> repr(penny)
- 'Penny()'
It’s suitable for classes with no-argument constructors, like Penny.
Note how the decorator modifies klass directly. The original class
is returned; that original class just now has a __repr__
method. Can you see how this is different from what we did with
decorators of functions? With those, the decorator returned a new,
different function object.
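You can confirm it in the interpreter - the decorated Penny is still the very same class object (a quick sketch continuing the example above):
- >>> type(penny)
- <class '__main__.Penny'>
- >>> type(penny) is Penny
- True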
Another strategy for decorating classes is closer in spirit: creating a new subclass within the decorator, returning that in its place:
- def autorepr_subclass(klass):
-     class NewClass(klass):
-         def __repr__(self):
-             return f"{klass.__name__}()"
-     return NewClass
This has the disadvantage of creating a new type:
- >>> @autorepr_subclass
- ... class Nickel:
- ... value = 5
- ...
- >>> nickel = Nickel()
- >>> type(nickel)
- <class '__main__.autorepr_subclass.<locals>.NewClass'>
The resulting object’s type isn’t obviously related to the decorated class. That makes debugging harder, creates unclear log messages, and has other unexpected effects. For this reason, I recommend you prefer the first approach.
Class decorators tend to be less useful in practice than those for functions and methods. When they are used, it’s often to automatically generate and add methods. But they are more flexible than that. You can even express the singleton pattern using class decorators:
- def singleton(klass):
-     instances = {}
-     def get_instance():
-         if klass not in instances:
-             instances[klass] = klass()
-         return instances[klass]
-     return get_instance
-
- # There is only one Elvis.
- @singleton
- class Elvis:
-     pass
Note the IDs are the same:
- >>> elvis1 = Elvis()
- >>> elvis2 = Elvis()
- >>>
- >>> id(elvis1)
- 4333747560
- >>> id(elvis2)
- 4333747560
Preserving the Wrapped Function
The techniques in this chapter for creating decorators are time-tested, and valuable in many situations. But the resulting decorators have a few problems:
- Function objects automatically have certain attributes, like __name__, __doc__, __module__, and so on. The wrapper clobbers all of these, breaking any code that relies on them.
- Decorators interfere with introspection - masking the wrapped function’s signature (see the sketch below), and blocking inspect.getsource().
- Decorators cannot be applied in certain more exotic situations - like class methods, or descriptors - without going through some heroic contortions.
The first problem is easily solved using the standard library’s
functools module. It includes a
function called
wraps, which you use like this:
- import functools
- def printlog(func):
-     @functools.wraps(func)
-     def wrapper(*args, **kwargs):
-         print("CALLING: " + func.__name__)
-         return func(*args, **kwargs)
-     return wrapper
That’s right - functools.wraps is a decorator that you use inside
your own decorator. When applied to the wrapper function, it
copies certain attributes from the wrapped function onto the
wrapper. It is roughly equivalent to this:
- def printlog(func):
-     def wrapper(*args, **kwargs):
-         print("CALLING: " + func.__name__)
-         return func(*args, **kwargs)
-     wrapper.__name__ = func.__name__
-     wrapper.__doc__ = func.__doc__
-     wrapper.__module__ = func.__module__
-     wrapper.__annotations__ = func.__annotations__
-     return wrapper
- >>> @printlog
- ... def foo(x):
- ...     "Double-increment and print number."
- ...     print(x + 2)
- ...
- >>> # functools.wraps transfers the wrapped function's attributes
- ... foo.__name__
- 'foo'
- >>> print(foo.__doc__)
- Double-increment and print number.
Contrast this with the default behavior:
- # What you get without functools.wraps.
- >>> foo.__name__
- 'wrapper'
- >>> print(foo.__doc__)
- None
In addition to saving you lots of tedious typing, functools.wraps
encapsulates the details of what to copy over, so you don’t need to
worry if new attributes are introduced in future versions of
Python. For example, the __annotations__ attribute was added in
Python 3; those who used functools.wraps in their Python 2 code had
one less thing to worry about when porting to Python 3.
functools.wraps is actually a convenient shortcut for the more
general functools.update_wrapper. Since wraps only works with
function-based decorators, your class-based decorators must use
update_wrapper instead:
- import functools
- class PrintLog:
-     def __init__(self, func):
-         self.func = func
-         functools.update_wrapper(self, func)
-     def __call__(self, *args, **kwargs):
-         print(f"CALLING: {self.func.__name__}")
-         return self.func(*args, **kwargs)
While useful for copying over __name__, __doc__, and the other
attributes, wraps and update_wrapper do not help with the other
problems mentioned above. The closest to a full solution is Graham
Dumpleton’s
wrapt library.[13] Decorators created using the wrapt
module work in situations that cause normal decorators to break, and
behave correctly when used with more exotic Python language features.
So what should you do in practice?
Common advice says to proactively use functools.wraps in all your
decorators. I have a different, probably controversial opinion, born
from observing that most Pythonistas in the wild do not regularly
use it, including myself, even though we know the implications.
While it’s true that using functools.wraps on all your decorators
will prevent certain problems, doing so is not completely free. There
is a cognitive cost, in that you have to remember to use it - at
least, unless you make it an ingrained, fully automatic habit. It’s
boilerplate which takes extra time to write, and which references the
func parameter - so there’s something else to modify if you change
its name. And with wrapt, you have another library dependency to
manage.
All these amount to a small distraction each time you write a
decorator. And when you do have a problem that functools.wraps or
the wrapt module would solve, you are likely to encounter it during
development, rather than have it show up unexpectedly in
production. (Look at the list above again, and this will be evident.)
When that happens, you can just add it and move on.
The biggest exception is probably when you are using some kind of
automated API documentation tool,[14]
which will use each function’s __doc__ attribute to generate
reference docs. Since decorators systematically clobber that
attribute, it makes sense to document a policy of using
functools.wraps for all decorators in your coding style guidelines,
and enforce it in code reviews.
Aside from situations like this, though, the problems with decorators
will be largely theoretical for most (but not all) developers. If you
are in that category, I recommend optimistically writing decorators
without bothering to use wraps, update_wrapper, or the wrapt
module. If and when you realize you are having a problem that these
would solve for a specific decorator, introduce them then.[15]
[9] Writing decorators builds on the "Advanced Functions" chapter. If you are not already familiar with that material, read it first.
[10] For Java people: this looks just like Java annotations. However, it’s completely different. Python decorators are not in any way similar.
[11] Some authors use the phrase "decorated function" to mean "the function that is decorated" - what I’m calling the "bare function". If you read a lot of blog posts, you’ll find the phrase used both ways (sometimes in the same article), but we’ll consistently use the definitions above.
[12] In a real application, you’d write the average to some kind of log sink, but we’ll use print() here because it’s convenient for learning.
[13] pip install wrapt. See also https://github.com/GrahamDumpleton/wrapt and http://wrapt.readthedocs.org/ .
[14] See https://wiki.python.org/moin/DocumentationTools for a thorough list.
[15] A perfect example of this happens towards the end of the "Building a RESTful API Server in Python" video (https://powerfulpython.com/store/restful-api-server/), when I create the validate_summary decorator. Applying the decorator to a couple of Flask views immediately triggers a routing error, which I then fix using wraps.