Classes and Objects: Beyond The Basics

This chapter assumes you are familiar with Python’s OOP basics: creating classes, defining methods, and using inheritance. We build on this.

As with any object-oriented language, it’s useful to learn about design patterns - reusable solutions to common problems involving classes and objects. A LOT has been written about design patterns. Curiously, though, much of what’s out there doesn’t completely apply to Python - or, at least, it applies differently.

That’s because many of these design-pattern books, articles, and blog posts are for languages like Java, C++ and C#. But as a language, Python is quite different. Its dynamic typing, first-class functions, and other features all mean the "standard" design patterns apply differently - when they apply at all.

So let’s learn what Pythonic OOP is really about.


Properties

In object-oriented programming, a property is a special sort of object attribute - almost a cross between a method and a member variable. The idea is that, when designing the class, you can create pseudo-"member variables" whose reading and writing are managed by special methods.

class Person:
    def __init__(self, firstname, lastname):
        self.firstname = firstname
        self.lastname = lastname
    @property
    def fullname(self):
        return self.firstname + " " + self.lastname

By instantiating this, I can access fullname as a kind of virtual attribute:

>>> joe = Person("Joe", "Smith")
>>> joe.fullname
'Joe Smith'

Notice carefully the members here: there are two attributes called firstname and lastname, set in the constructor. There is also a method called fullname. But after creating the object, we reference joe.fullname as an attribute; we don’t call joe.fullname() as a method.

This is all due to the @property decorator. When applied to a method, this decorator makes it accessible as an attribute rather than as a method: reading joe.fullname invokes the method behind the scenes. In fact, if you try to call it as a method, you get an error - joe.fullname evaluates to a string, which Python then tries (and fails) to call:

>>> joe.fullname()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable

As defined above, fullname is read-only. We can’t modify it:

>>> joe.fullname = "Joseph Smith"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: can't set attribute

In other words, Python properties are read-only by default. Another way of saying this is that @property automatically defines a getter, but not a setter. If you do want fullname to be writable, here is how you define the setter:

class Person:
    def __init__(self, firstname, lastname):
        self.firstname = firstname
        self.lastname = lastname
    @property
    def fullname(self):
        return self.firstname + " " + self.lastname
    @fullname.setter
    def fullname(self, value):
        self.firstname, self.lastname = value.split(" ", 1)

This lets us assign to joe.fullname:

>>> joe = Person("Joe", "Smith")
>>> joe.firstname
'Joe'
>>> joe.lastname
'Smith'
>>> joe.fullname = "Joseph Smith"
>>> joe.firstname
'Joseph'
>>> joe.lastname
'Smith'

The first time I saw this, I had all sorts of questions. "Wait, why is fullname defined twice? And why is the second decorator named @fullname, and what’s this setter attribute? How on earth does this even compile?"

The code is actually correct, and designed to work this way. The @property def fullname must come first. That creates the property to begin with, and also creates the getter. By "create the property", I mean that an object named fullname now exists in the namespace of the class, and that object has a method named setter. This fullname.setter is a decorator that is applied to the next def fullname, christening it as the setter for the fullname property.

It’s okay to not fully understand how this all works. A full explanation relies on understanding both how decorators are implemented and Python’s descriptor protocol, which are beyond the scope of what we want to focus on here. Fortunately, you don’t have to understand how it works in order to use it.

(Besides getting and setting, you can handle the del operation for the object attribute by decorating with @fullname.deleter. You won’t need this very often, but it’s available when you do.)
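To make the deleter concrete, here is one sketch. What del should *mean* for a derived attribute is up to you; the choice here - clearing both underlying attributes - is just a hypothetical example, not the only sensible behavior:

```python
class Person:
    def __init__(self, firstname, lastname):
        self.firstname = firstname
        self.lastname = lastname

    @property
    def fullname(self):
        return self.firstname + " " + self.lastname

    @fullname.deleter
    def fullname(self):
        # One hypothetical choice of what "del" should mean:
        # clear both underlying attributes.
        self.firstname = ""
        self.lastname = ""

joe = Person("Joe", "Smith")
del joe.fullname   # invokes the deleter above
```

After the del statement, joe.firstname and joe.lastname are both empty strings.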

What you see here with the Person class is one way properties are useful: magic attributes whose values are derived from other values. The underlying data stays normalized - fullname is computed on demand rather than stored redundantly - yet you access the property value as an attribute instead of as a method. You’ll see a situation where that’s extremely useful later.

Properties enable a useful collection of design patterns. One - as mentioned - is in creating read-only member variables. In Person, the fullname "member variable" is a dynamic attribute; it doesn’t exist on its own, but instead calculates its value at run-time.

It’s also common to have the property backed by a single, non-public member variable. That pattern looks like this:

class Coupon:
    def __init__(self, amount):
        self._amount = amount
    @property
    def amount(self):
        return self._amount

This allows the class itself to modify the value internally, while preventing outside code from doing so:

>>> coupon = Coupon(1.25)
>>> coupon.amount
1.25
>>> coupon.amount = 1.50
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: can't set attribute

In Python, prefixing a member variable by a single underscore signals the variable is non-public, i.e. it should only be accessed internally, inside methods of that class, or its subclasses.[19] What this pattern says is "you can access this variable, but not change it".
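To see the "internal code can change it" half of this pattern, here is a sketch with a hypothetical double method added to Coupon (not part of the class above, just for illustration):

```python
class Coupon:
    def __init__(self, amount):
        self._amount = amount

    @property
    def amount(self):
        return self._amount

    def double(self):
        # Hypothetical method: inside the class, code may
        # change _amount freely.
        self._amount *= 2

coupon = Coupon(1.25)
coupon.double()
print(coupon.amount)   # 2.5
```

Outside code can still read coupon.amount, but assigning to it raises AttributeError, exactly as before.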

Between "regular member variable" and "read-only" lies another pattern: allow changing the attribute, but validate it first. Suppose my event-management application has a Ticket class, representing tickets sold to concert-goers:

class Ticket:
    def __init__(self, price):
        self.price = price
    # And some other methods...

One day, we find a bug in our web UI, which let some shifty customers adjust the price to a negative value... so we ended up actually paying them to go to the concert. Not good!

The first priority is, of course, to fix the bug in the UI. But how do we modify our code to prevent this from ever happening again? Before reading further, look at the Ticket class and ponder - how could you use properties to make this kind of bug impossible in the future?

The answer: verify the new price is non-negative in the setter:

# Version 1...
class Ticket:
    def __init__(self, price):
        self._price = price
    @property
    def price(self):
        return self._price
    @price.setter
    def price(self, new_price):
        # Don't allow negative prices.
        if new_price < 0:
            raise ValueError("Nice try")
        self._price = new_price

This lets the price be adjusted... but only to sensible values:

>>> t = Ticket(42)
>>> t.price = 24 # This is allowed.
>>> print(t.price)
24
>>> t.price = -1 # This is NOT.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 11, in price
ValueError: Nice try

However, there’s a defect in this new Ticket class. Can you spot what it is? (And how to fix it?)

The problem is that while we can’t change the price to a negative value, this first version lets us create a ticket with a negative price to begin with. That’s because we write self._price = price in the constructor. The solution is to use the setter in the constructor instead:

# Final version, with modified constructor. (Constructor
# is different; code for getter & setter is the same.)
class Ticket:
    def __init__(self, price):
        # instead of "self._price = price"
        self.price = price
    @property
    def price(self):
        return self._price
    @price.setter
    def price(self, new_price):
        # Don't allow negative prices.
        if new_price < 0:
            raise ValueError("Nice try")
        self._price = new_price

Yes, you can reference self.price in methods of the class. When we write self.price = price, Python translates this to a call to the price setter - i.e., the second price() method. This final version of Ticket centralizes all reads AND writes of self._price in the property, which is a useful encapsulation principle in general: centralize any special behavior for a member variable in its getter and setter, even for the class’s internal code. In practice, methods sometimes need to violate this rule; you simply reference self._price and move on. But avoid that where you can, and you will tend to end up with higher-quality code.
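To confirm that the constructor now routes through the setter, here is a quick check (repeating the final Ticket class so the snippet runs on its own):

```python
class Ticket:
    def __init__(self, price):
        self.price = price   # goes through the setter below

    @property
    def price(self):
        return self._price

    @price.setter
    def price(self, new_price):
        # Don't allow negative prices.
        if new_price < 0:
            raise ValueError("Nice try")
        self._price = new_price

t = Ticket(42)        # fine: the setter stores 42 in _price
try:
    Ticket(-1)        # now rejected at construction time
except ValueError as err:
    print(err)        # Nice try
```

Creating a ticket with a negative price is now impossible, whichever path the price takes into the object.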

Properties and Refactoring

Properties are especially valuable when refactoring. Here’s a situation that often plays out. Imagine writing a simple money class:

class Money:
    def __init__(self, dollars, cents):
        self.dollars = dollars
        self.cents = cents
    # And some other methods...

Suppose you put this class in a library which many developers use - people on your current team, perhaps developers on different teams. Or maybe you release it as open source, so developers around the world use and rely on this class.

Now, one day you realize many of Money’s methods - which do calculations on the money amount - can be simpler and more straightforward if they operate on the total number of cents, rather than dollars and cents separately. So you refactor the internal state:

class Money:
    def __init__(self, dollars, cents):
        self.total_cents = dollars * 100 + cents

This minor change creates a MAJOR maintainability problem. Can you spot it?

Here’s the trouble: your original Money has attributes named dollars and cents. And since many developers are using these, changing to total_cents breaks all their code!

money = Money(27, 12)
message = "I have {:d} dollars and {:d} cents."
# This line breaks, because there are no longer
# "dollars" or "cents" attributes.
print(message.format(money.dollars, money.cents))

If no one but you uses this class, there’s no real problem - you can just refactor your own code. But if that’s not the case, coordinating this change with everyone’s different code bases is a nightmare. It becomes a barrier to improving your own code.

So, what do you do? Can you think of a way to handle this situation?

The way out of this mess is with properties. You want two things to happen:

  1. Use total_cents internally, and
  2. All code using dollars and cents continues to work, without modification.

You do this by replacing dollars and cents with total_cents internally, but also creating getters and setters for these attributes. Take a look:

class Money:
    def __init__(self, dollars, cents):
        self.total_cents = dollars * 100 + cents
    # Getter and setter for dollars...
    @property
    def dollars(self):
        return self.total_cents // 100
    @dollars.setter
    def dollars(self, new_dollars):
        self.total_cents = 100 * new_dollars + self.cents
    # And for cents.
    @property
    def cents(self):
        return self.total_cents % 100
    @cents.setter
    def cents(self, new_cents):
        self.total_cents = 100 * self.dollars + new_cents

Now, I can get and set dollars and cents all day:

>>> money = Money(27, 12)
>>> money.total_cents
2712
>>> money.cents
12
>>> money.dollars = 35
>>> money.total_cents
3512

Python’s way of doing properties brings many benefits. In languages like Java, the following story often plays out:

  1. A newbie developer starts writing Java classes. They want to expose some state, so create public member variables.
  2. They use this class everywhere. Other developers use it too.
  3. One day, they want to change the name or type of that member variable, or even do away with it entirely (like what we did with Money).
  4. But that would break everyone’s code. So they can’t.

Because of this, Java developers quickly learn to make all their variables private by default - proactively creating getters and setters for every publicly exposed chunk of data. They realize this boilerplate is far less painful than the alternative, because if everyone must use the public getters and setters to begin with, you always have the freedom to make internal changes later.

This works well enough. But it is distracting, and just enough trouble that there’s always the temptation to make that member variable public, and be done with it.

In Python, we have the best of both worlds. We make member variables public by default, refactoring them as properties if and when we ever need to. No one using our code even has to know.

The Factory Patterns

There are several design patterns with the word "factory" in their names. Their unifying idea is providing a handy, simplified way to create useful, potentially complex objects. The two most important forms are:

  • Where the object’s type is fixed, but we want to have several different ways to create it. This is called the Simple Factory Pattern.
  • Where the factory dynamically chooses one of several different types. This is called the Factory Method Pattern.

Let’s look at how you do these in Python.

Alternative Constructors: The Simple Factory

Imagine a simple Money class, suitable for currencies which have dollars and cents:

class Money:
    def __init__(self, dollars, cents):
        self.dollars = dollars
        self.cents = cents

We looked at this in the previous section, refactoring its attributes - but let’s roll back, and focus instead on the constructor’s interface. This constructor is convenient when we have the dollars and cents as separate integer variables. But there are many other ways to specify an amount of money. Perhaps you’re modeling a giant jar of pennies:

# Emptying the penny jar...
total_pennies = 3274
# // is integer division
dollars = total_pennies // 100
cents = total_pennies % 100
total_cash = Money(dollars, cents)

Suppose your code splits pennies into dollars and cents over and over, and you’re tired of repeating this calculation. You could change the constructor, but that means refactoring all Money-creating code, and perhaps a lot of code fits the current constructor better anyway. Some languages let you define several constructors, but Python makes you pick one.

In this case, you can create a factory function that takes the arguments you want, then creates and returns the object:

# Factory function taking a single argument, returning
# an appropriate Money instance.
def money_from_pennies(total_cents):
    dollars = total_cents // 100
    cents = total_cents % 100
    return Money(dollars, cents)

Imagine that, in the same code base, you also routinely need to parse a string like "$140.75". Here’s another factory function for that:

# Another factory, creating Money from a string amount.
import re
def money_from_string(amount):
    match = re.match(
        r'^\$(?P<dollars>\d+)\.(?P<cents>\d\d)$', amount)
    if match is None:
        raise ValueError("Invalid amount: " + repr(amount))
    dollars = int(match.group('dollars'))
    cents = int(match.group('cents'))
    return Money(dollars, cents)

These are effectively alternate constructors: callables we can use with different arguments, which are parsed and used to create the final object. But this approach has problems. First, it’s awkward to have them as separate functions, defined outside of the class. But much more importantly: what happens if you subclass Money? Suddenly money_from_string and money_from_pennies are worthless. The base Money class is hard-coded.

Python has an elegant solution to these problems, one few other languages can match: the classmethod decorator. Use it like this:

class Money:
    def __init__(self, dollars, cents):
        self.dollars = dollars
        self.cents = cents
    @classmethod
    def from_pennies(cls, total_cents):
        dollars = total_cents // 100
        cents = total_cents % 100
        return cls(dollars, cents)

The function money_from_pennies is now a method of the Money class, called from_pennies. But it has a new argument: cls. When applied to a method definition, classmethod modifies how that method is invoked and interpreted. The first argument is not self, which would be an instance of the class. The first argument is now the class itself. In the method body, self isn’t mentioned at all; instead, cls is a variable holding the current class object - Money in this case. So the last line is creating a new instance of Money:

>>> piggie_bank_cash = Money.from_pennies(3217)
>>> type(piggie_bank_cash)
<class '__main__.Money'>
>>> piggie_bank_cash.dollars
32
>>> piggie_bank_cash.cents
17

Notice from_pennies is invoked off the class itself, not an instance of the class. This already is nicer code organization. But the real benefit is with inheritance:

>>> class TipMoney(Money):
...     pass
...
>>> tip = TipMoney.from_pennies(475)
>>> type(tip)
<class '__main__.TipMoney'>

This is the real benefit of class methods. You define it once on the base class, and all subclasses can leverage it, substituting their own type for cls. This makes class methods perfect for the simple factory in Python. The final line returns an instance of cls, using its regular constructor. And cls refers to whatever the current class is: Money, TipMoney, or some other subclass.

For the record, here’s how we translate money_from_string:

@classmethod
def from_string(cls, amount):
    match = re.match(
        r'^\$(?P<dollars>\d+)\.(?P<cents>\d\d)$', amount)
    if match is None:
        raise ValueError("Invalid amount: " + repr(amount))
    dollars = int(match.group('dollars'))
    cents = int(match.group('cents'))
    return cls(dollars, cents)

Class methods are a superior way to implement factories like this in Python. If we subclass Money, that subclass will have from_pennies and from_string methods that create objects of that subclass, without any extra work on our part. And if we change the name of the Money class, we only have to change it in one place, not three.

This form of the factory pattern is called "simple factory", a name I don’t love. I prefer to call it "alternate constructor". Especially in the context of Python, that name describes well what @classmethod is most useful for - and it suggests a general principle for designing your classes. Look at the complete code of the Money class, and I’ll explain:

import re
class Money:
    def __init__(self, dollars, cents):
        self.dollars = dollars
        self.cents = cents
    @classmethod
    def from_pennies(cls, total_cents):
        dollars = total_cents // 100
        cents = total_cents % 100
        return cls(dollars, cents)
    @classmethod
    def from_string(cls, amount):
        match = re.match(
            r'^\$(?P<dollars>\d+)\.(?P<cents>\d\d)$', amount)
        if match is None:
            raise ValueError("Invalid amount: " + repr(amount))
        dollars = int(match.group('dollars'))
        cents = int(match.group('cents'))
        return cls(dollars, cents)

You can think of this class as having several constructors. As a general rule, you’ll want to make __init__ the most generic one, and implement the others as class methods. Sometimes, that means one of the class methods will be used more often than __init__.

When using a new class, most developers’ intuition is to reach for the default constructor first, without thinking to check for the provided class methods - if they even know about that feature of Python in the first place. So in that situation, you may need to educate your teammates. (Hint: good examples in the class’s code docs go a long way.)
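One way such docs might look - a sketch, with a usage example for the alternate constructor placed right in the class docstring, where help() and IDEs will surface it:

```python
class Money:
    """An amount of US currency.

    Besides the regular constructor, Money can be built from
    a raw count of pennies:

        Money.from_pennies(2712)   # same as Money(27, 12)
    """
    def __init__(self, dollars, cents):
        self.dollars = dollars
        self.cents = cents

    @classmethod
    def from_pennies(cls, total_cents):
        """Alternate constructor: build from a count of pennies."""
        return cls(total_cents // 100, total_cents % 100)
```

Now `help(Money)` shows teammates the alternate constructor before they ever read the source.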

Dynamic Type: The Factory Method Pattern

This next factory pattern, called "Factory Method", is quite different. The idea is that the factory will create an object, but will choose its type from one of several possibilities, dynamically deciding at run-time based on some criteria. It’s typically used when you have one base class, and are creating an object that can be one of several different derived classes.

Let’s see an example. Imagine you are implementing an image processing library, creating classes to read the image from storage. So you create a base ImageReader class, and several derived types:

import abc
class ImageReader(metaclass=abc.ABCMeta):
    def __init__(self, path):
        self.path = path
    @abc.abstractmethod
    def read(self):
        pass # Subclass must implement.
    def __repr__(self):
        return f"{self.__class__.__name__}({self.path})"
class GIFReader(ImageReader):
    def read(self):
        "Read a GIF"
class JPEGReader(ImageReader):
    def read(self):
        "Read a JPEG"
class PNGReader(ImageReader):
    def read(self):
        "Read a PNG"

The ImageReader class is marked abstract, requiring subclasses to implement the read method. So far, so good.

Now, when reading an image file, if its extension is ".gif", I want to use GIFReader. If it is a JPEG image, I want to use JPEGReader. And so on. The logic is:

  • Analyze the file path name to get the extension,
  • choose the correct reader class based on that,
  • and finally create the appropriate reader object.

This is a prime candidate for automation. Let’s define a little helper function:

def extension_of(path):
    position_of_last_dot = path.rfind('.')
    return path[position_of_last_dot+1:]

With these pieces, we can now define the factory:

# First version of get_image_reader().
def get_image_reader(path):
    image_type = extension_of(path)
    reader_class = None
    if image_type == 'gif':
        reader_class = GIFReader
    elif image_type == 'jpg':
        reader_class = JPEGReader
    elif image_type == 'png':
        reader_class = PNGReader
    assert reader_class is not None, \
        "Unknown extension: " + image_type
    return reader_class(path)

Classes in Python can be put in variables, just like any other object. We take full advantage here, by storing the appropriate ImageReader subclass in reader_class. Once we decide on the proper value, the last line creates and returns the reader object.

This correctly-working code is already more concise, readable and maintainable than what some languages force you to go through. But in Python, we can do better. We can use the built-in dictionary type to make it even more readable and easy to maintain over time:

READERS = {
    'gif' : GIFReader,
    'jpg' : JPEGReader,
    'png' : PNGReader,
}
def get_image_reader(path):
    reader_class = READERS[extension_of(path)]
    return reader_class(path)

Here we have a global variable mapping filename extensions to ImageReader subclasses. This lets us readably implement get_image_reader in two lines. Finding the correct class is a simple dictionary lookup, and then we instantiate and return the object. And if we support new image formats in the future, we simply add a line in the READERS definition. (And, of course, define its reader class.)

What if we encounter an extension not in the mapping, like tiff? As written above, the code will raise a KeyError. That may be what we want. Or, closely related, perhaps we want to catch that KeyError and re-raise a different exception.
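One way that re-raise might look. The UnknownImageFormat exception name is made up for this sketch, and the stand-in definitions at the top just let the snippet run on its own - in the real code, READERS and extension_of are the ones defined above:

```python
class UnknownImageFormat(Exception):
    """Hypothetical domain-specific error for unsupported extensions."""

# Minimal stand-ins so the sketch is self-contained.
class PNGReader:
    def __init__(self, path):
        self.path = path

READERS = {'png': PNGReader}

def extension_of(path):
    return path[path.rfind('.') + 1:]

def get_image_reader(path):
    image_type = extension_of(path)
    try:
        reader_class = READERS[image_type]
    except KeyError as err:
        # Re-raise as a domain-specific error; "from err" chains
        # the original KeyError onto the traceback for debugging.
        raise UnknownImageFormat(
            "No reader for ." + image_type + " files") from err
    return reader_class(path)
```

Callers now catch one meaningful exception type, instead of a generic KeyError that leaks the dictionary implementation detail.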

Alternatively, we may want to fall back on some default. Let’s create a new reader class, meant as an all-purpose fallback:

class RawByteReader(ImageReader):
    def read(self):
        "Read raw bytes"

Then you can write the factory like this:

def get_image_reader(path):
    try:
        reader_class = READERS[extension_of(path)]
    except KeyError:
        reader_class = RawByteReader
    return reader_class(path)

Or, more briefly:

def get_image_reader(path):
    return READERS.get(extension_of(path), RawByteReader)

This design pattern is commonly called the "factory method" pattern, which wins my award for Worst Design Pattern Name In History. That name (which appears to originate from a Java implementation detail) fails to tell you anything about what it’s actually for. I myself call it the "dynamic type" pattern, which I feel is much more descriptive and useful.

The Observer Pattern

The Observer pattern provides a "one to many" relationship. That’s vague, so let’s make it more specific.

In the observer pattern, there’s one root object, called the observable. This object knows how to detect some kind of event of interest. It can literally be anything: a customer makes a new purchase; someone subscribes to an email list; or maybe it monitors a fleet of cloud instances, detecting when a machine’s disk usage exceeds 75%. You use this pattern when the code to reliably detect the event of interest is at least slightly complicated; that detection code is encapsulated inside the observable.

Now, you also have other objects, called observers, which need to know when that event occurs, taking some action in response. You don’t want to re-implement the robust detection algorithm in each, of course. Instead, these observers register themselves with the observable. The observable then notifies each observer - by calling a method on that observer - for each event. This separation of concerns is the heart of the observer pattern.

Now, I must tell you, I don’t like the names of things in this pattern. The words "observable" and "observer" are a bit obscure, and sound confusingly similar - especially to those whose native tongue is not English. There is another terminology, however, which many developers find easier: pub-sub.

In this formulation, instead of "observable", you create a publisher object, which watches for events. And you have one or more subscribers who ask that publisher to notify them when the event happens. I’ve found the pattern is easier to reason about when looked at in this way, so that’s the terminology I’m going to use.

Let’s make this concrete, with code.

The Simple Observer

We’ll start with the basic observer pattern, as it’s often documented in design pattern books - except we’ll translate it to Python. In this simple form, each subscriber must implement a method called update. Here’s an example:

class Subscriber:
    def __init__(self, name):
        self.name = name
    def update(self, message):
        print(f'{self.name} got message "{message}"')

update takes a string. It’s okay to define an update method taking other arguments, or even to call it something other than update; the publisher and subscriber just need to agree on the protocol. But we’ll use a string.

Now, when a publisher detects an event, it notifies the subscriber by calling its update method. Here’s what a basic Publisher class looks like:

class Publisher:
    def __init__(self):
        self.subscribers = set()
    def register(self, who):
        self.subscribers.add(who)
    def unregister(self, who):
        self.subscribers.discard(who)
    def dispatch(self, message):
        for subscriber in self.subscribers:
            subscriber.update(message)
    # Plus other methods, for detecting the event.

Let’s step through:

  • A publisher needs to keep track of its subscribers, right? We’ll store them in a set object, named self.subscribers, created in the constructor.
  • A subscriber is added with register. Its argument who is an instance of Subscriber. Who calls register? It could be anyone. The subscriber can register itself; or some external code can register a subscriber with a specific publisher.
  • unregister is there in case a subscriber no longer needs to be notified of the events.
  • When the event of interest occurs, the publisher notifies its subscribers by calling its dispatch method. Usually this will be invoked by the publisher itself, in some other method of the class (not shown) that implements the event-detection logic. It simply cycles through the subscribers, calling .update() on each.

Using these two classes in code is straightforward enough:

# Create a publisher and some subscribers.
pub = Publisher()
bob = Subscriber('Bob')
alice = Subscriber('Alice')
john = Subscriber('John')
# Register the subscribers, so they get notified.
pub.register(bob)
pub.register(alice)
pub.register(john)

Now, the publisher can dispatch messages:

# Send a message...
pub.dispatch("It's lunchtime!")
# John unsubscribes...
pub.unregister(john)
# ... and a new message is sent.
pub.dispatch("Time for dinner")

Here’s the output from running the above:

John got message "It's lunchtime!"
Bob got message "It's lunchtime!"
Alice got message "It's lunchtime!"
Bob got message "Time for dinner"
Alice got message "Time for dinner"

This is the basic observer pattern, and pretty close to how you’d implement the idea in languages like Java, C#, and C++. But Python’s feature set differs from those languages. That means we can do different things.

So let’s explore that. If we leverage Pythonic features, what does that give us?

A Pythonic Refinement

Python’s functions are first-class objects. That means you can store a function in a variable - not the value returned when you call a function, but store the function itself - as well as pass it as an argument to other functions and methods. Some languages support this too (or something like it, such as function pointers), but Python’s strong support gives us a convenient opportunity for this design pattern.
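As a tiny illustration of function objects - nothing here is specific to the observer pattern, the names are invented for the example:

```python
def shout(message):
    return message.upper() + "!"

# Store the function object itself - note: no parentheses,
# so no call happens yet.
announce = shout
print(announce("lunchtime"))     # LUNCHTIME!

# Pass a function as an argument, like any other object.
def apply_twice(func, value):
    return func(func(value))

print(apply_twice(shout, "hi"))  # HI!!
```

This "functions are just objects" property is exactly what the more flexible publisher below relies on.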

The standard observer pattern requires the publisher to hard-code a certain method - usually named update - that the subscriber must implement. But maybe you need to register a subscriber which doesn’t have that method. What then? If it’s your own class, you can probably just add it. Or if you are importing the subscriber class from another library (which you can’t or don’t want to modify), perhaps you can add the method by subclassing it.

Or sometimes you can’t do any of those things. Or you could, but it’s a lot of trouble, and you want to avoid it. What then?

Let’s extend the traditional observer pattern, and make register more flexible. Suppose you have these subscribers:

# This subscriber uses the standard "update"
class SubscriberOne:
    def __init__(self, name):
        self.name = name
    def update(self, message):
        print(f'{self.name} got message "{message}"')
# This one wants to use "receive"
class SubscriberTwo:
    def __init__(self, name):
        self.name = name
    def receive(self, message):
        print(f'{self.name} got message "{message}"')

SubscriberOne is the same subscriber class we saw before. SubscriberTwo is almost the same: instead of update, it has a method named receive. Okay, let’s modify Publisher so it can work with objects of either subscriber type:

class Publisher:
    def __init__(self):
        self.subscribers = dict()
    def register(self, who, callback=None):
        if callback is None:
            callback = who.update
        self.subscribers[who] = callback
    def dispatch(self, message):
        for callback in self.subscribers.values():
            callback(message)
    def unregister(self, who):
        del self.subscribers[who]

There’s a lot going on here, so let’s break it down. Look first at the constructor: it creates a dict instead of a set. You’ll see why in a moment.

Now focus on register:

def register(self, who, callback=None):
    if callback is None:
        callback = who.update
    self.subscribers[who] = callback

It can be called with one or two arguments. With one argument, who is a subscriber of some sort, and callback defaults to None. Inside, callback is set to who.update. Notice the lack of parentheses; who.update is a method object. It’s just like a function object, except it happens to be tied to an instance. And just like a function object, you can store it in a variable, pass it as an argument to another function, and so on.[20] So we’re storing it in a variable called callback.
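Here is that idea in isolation - a bound method stored in a variable remembers its instance (this standalone Subscriber returns the string instead of printing it, just so the result is easy to inspect):

```python
class Subscriber:
    def __init__(self, name):
        self.name = name
    def update(self, message):
        return f'{self.name} got message "{message}"'

bob = Subscriber('Bob')
callback = bob.update      # a method object - no parentheses
# Calling it later needs no reference to bob: the method
# object carries its own "self" internally.
print(callback("It's lunchtime!"))
```

This is why the publisher can invoke callback(message) later without holding a reference to the subscriber at the call site.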

What if register is called with 2 arguments? Here’s how that might look:

  1. pub = Publisher()
  2. alice = SubscriberTwo('Alice')
  3. pub.register(alice, alice.receive)

alice.receive is another method object; inside, this object is assigned to callback. Regardless of whether register is called with one argument or two, the last line inserts callback into the dictionary:

  1. self.subscribers[who] = callback

Take a moment to appreciate the remarkable flexibility of Python dictionaries. Here, you are using an arbitrary instance of either SubscriberOne or SubscriberTwo as a key. These two classes are unrelated by inheritance, so from Python’s viewpoint they are completely distinct types. And for that key, you insert a method object as its value. Python does this seamlessly, without complaint! Many languages would make you jump through hoops to accomplish this.

Anyway, now it’s clear why self.subscribers is a dict and not a set. Earlier, we only needed to keep track of who the subscribers were. Now, we also need to remember the callback for each subscriber. These are used in the dispatch method:

  1. def dispatch(self, message):
  2.     for callback in self.subscribers.values():
  3.         callback(message)

dispatch only needs to cycle through the values, because it just needs to call each subscriber’s update method (even if it’s not called update). Notice we don’t have to reference the subscriber object to call that method; the method object internally has a reference to its instance (i.e. its self), so callback(message) calls the right method on the right object. In fact, the only reason we keep track of subscribers at all is so we can unregister them.

Let’s put this together with a few subscribers:

  1. pub = Publisher()
  2. bob = SubscriberOne('Bob')
  3. alice = SubscriberTwo('Alice')
  4. john = SubscriberOne('John')
  5. pub.register(bob, bob.update)
  6. pub.register(alice, alice.receive)
  7. pub.register(john)
  8. pub.dispatch("It's lunchtime!")
  9. pub.unregister(john)
  10. pub.dispatch("Time for dinner")

Here’s the output:

Bob got message "It's lunchtime!"
Alice got message "It's lunchtime!"
John got message "It's lunchtime!"
Bob got message "Time for dinner"
Alice got message "Time for dinner"

Now, pop quiz. Look at the Publisher class again:

  1. class Publisher:
  2.     def __init__(self):
  3.         self.subscribers = dict()
  4.     def register(self, who, callback=None):
  5.         if callback is None:
  6.             callback = who.update
  7.         self.subscribers[who] = callback
  8.     def dispatch(self, message):
  9.         for callback in self.subscribers.values():
  10.             callback(message)

Here’s the question: does callback have to be a method of the subscriber? Or can it be a method of a different object, or something else? Think about this before you continue…​

It turns out callback can be any callable, provided it has a signature compatible with how it’s called in dispatch. That means it can be a method of some other object, or even a normal function. This lets you register subscriber objects without an update method at all:

  1. # This subscriber doesn't have ANY suitable method!
  2. class SubscriberThree:
  3.     def __init__(self, name):
  4.         self.name = name
  5. # ... but we can define a function...
  6. todd = SubscriberThree('Todd')
  7. def todd_callback(message):
  8.     print(f'Todd got message "{message}"')
  9. # ... and pass it to register:
  10. pub.register(todd, todd_callback)
  11. # And then, dispatch a message:
  12. pub.dispatch("Breakfast is Ready")

Sure enough, this works:

Todd got message "Breakfast is Ready"

Several Channels

So far, we’ve assumed the publisher watches for only one kind of event. But what if there are several? Can we create a publisher that knows how to detect all of them, and let subscribers decide which they want to know about?

To implement this, let’s say a publisher has several channels that subscribers can subscribe to. Each channel notifies for a different event type. For example, if your program monitors a cluster of virtual machines, one channel signals when a certain machine’s disk usage exceeds 75% (a warning sign, but not an immediate emergency); and another signals when disk usage goes over 90% (much more serious, and may begin to impact performance on that VM). Some subscribers will want to know when the 75% threshold is crossed; some, the 90% threshold; and some might want to be alerted for both. What’s a good way to express this in Python code?

Let’s work with the mealtime-announcement code above. Rather than jumping right into the code, let’s mock up the interface first. We want to create a publisher with two channels, like so:

  1. # Two channels, named "lunch" and "dinner".
  2. pub = Publisher(['lunch', 'dinner'])

So the constructor is different; it takes a list of channel names. Let’s also pass the channel name to register, since each subscriber will register for one or more:

  1. # Three subscribers, of the original type.
  2. bob = Subscriber('Bob')
  3. alice = Subscriber('Alice')
  4. john = Subscriber('John')
  5. # Two args: channel name & subscriber
  6. pub.register("lunch", bob)
  7. pub.register("dinner", alice)
  8. pub.register("lunch", john)
  9. pub.register("dinner", john)

Now, on dispatch, the publisher needs to specify the event type. So just like with register, we’ll prepend a channel argument:

  1. pub.dispatch("lunch", "It's lunchtime!")
  2. pub.dispatch("dinner", "Dinner is served")

When correctly working, we’d expect this output:

Bob got message "It's lunchtime!"
John got message "It's lunchtime!"
Alice got message "Dinner is served"
John got message "Dinner is served"

Pop quiz (and if it’s practical, pause here to write Python code): how would you implement this new, multi-channel Publisher?

There are several approaches, but the simplest I’ve found relies on creating a separate subscribers dictionary for each channel. One approach:

  1. class Publisher:
  2.     def __init__(self, channels):
  3.         # Create an empty subscribers dict
  4.         # for every channel
  5.         self.channels = { channel : dict()
  6.                           for channel in channels }
  7.     def register(self, channel, who, callback=None):
  8.         if callback is None:
  9.             callback = who.update
  10.         subscribers = self.channels[channel]
  11.         subscribers[who] = callback
  12.     def dispatch(self, channel, message):
  13.         subscribers = self.channels[channel]
  14.         for callback in subscribers.values():
  15.             callback(message)

This Publisher has a dict called self.channels, which maps channel names (strings) to subscriber dictionaries. register and dispatch are not too different: they simply have an extra step, in which subscribers is looked up in self.channels. I use that variable just for readability, and I think it’s well worth the extra line of code:

  1. # Works the same. But a bit less readable.
  2. def register(self, channel, who, callback=None):
  3.     if callback is None:
  4.         callback = who.update
  5.     self.channels[channel][who] = callback
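One thing the multi-channel listing drops is unregister. Here is a minimal, self-contained sketch of how it might come back, now taking a channel argument (an assumption on my part; you could instead remove the subscriber from every channel at once):

```python
class Publisher:
    def __init__(self, channels):
        # One subscribers dict per channel.
        self.channels = {channel: dict() for channel in channels}
    def register(self, channel, who, callback=None):
        if callback is None:
            callback = who.update
        self.channels[channel][who] = callback
    def unregister(self, channel, who):
        # Remove the subscriber from one channel only;
        # it may still be subscribed to others.
        del self.channels[channel][who]
    def dispatch(self, channel, message):
        for callback in self.channels[channel].values():
            callback(message)

class Subscriber:
    def __init__(self):
        self.heard = []
    def update(self, message):
        self.heard.append(message)

fan = Subscriber()
pub = Publisher(['lunch', 'dinner'])
pub.register('lunch', fan)
pub.dispatch('lunch', "It's lunchtime!")
pub.unregister('lunch', fan)
pub.dispatch('lunch', 'Second call!')
print(fan.heard)   # only the first message arrived
```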

These are some variations of the general observer pattern, and I’m sure you can imagine more. What I want you to notice are the options available in Python when you leverage function objects, and other Pythonic features.

Magic Methods

Suppose we want to create a class to work with angles, in degrees. We want this class to help us with some standard bookkeeping:

  • An angle will be at least zero, but less than 360.
  • If we create an angle outside this range, it automatically wraps around to an equivalent, in-range value.
  • In fact, we want the conversion to happen in a range of situations:

    • If we add 270º and 270º, it evaluates to 180º instead of 540º.
    • If we subtract 180º from 90º, it evaluates to 270º instead of -90º.
    • If we multiply an angle by a real number, it wraps the final value into the correct range.
  • And while we’re at it, we want to enable all the other behaviors we normally expect of numbers: comparisons like "less than", "greater than or equal to", and "==" (i.e., equals); division (which, if you think about it, doesn’t normally require casting into a valid range); and so on.

Let’s see how we might approach this, by creating a basic Angle class:

  1. class Angle:
  2.     def __init__(self, value):
  3.         self.value = value % 360
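As a quick standalone check, you can verify at the interpreter how % normalizes both overshooting and undershooting values (Python's modulo always returns a result with the same sign as the divisor, which is exactly what we need here):

```python
# Overshooting wraps back down into range:
print(540 % 360)   # 180
# Undershooting (a negative angle) wraps up into range:
print(-90 % 360)   # 270
# Values already in range pass through unchanged:
print(45 % 360)    # 45
```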

The modulo operation in the constructor is kind of neat: if you reason through it with a few positive and negative values, you’ll find the math works out correctly whether the angle overshoots or undershoots the range. This meets one of our key criteria already: the angle is normalized to be at least 0 and less than 360. But how do we handle addition? We of course get an error if we try it directly:

>>> Angle(30) + Angle(45)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'Angle' and 'Angle'

We can easily define a method called add or something, which will let us write code like angle3 = angle1.add(angle2). But it’s better if we can reuse the familiar arithmetic operators everyone knows. Python lets us do that, through a collection of object hooks called magic methods. It lets you define classes so that their instances can be used with all of Python’s standard operators. That includes arithmetic (+ - * / //), equality (==), inequality (!=), comparisons (< > >= <=), bit-shifting operations, and even concepts like exponentiation and absolute value.

Few classes will need all of these, but sometimes it’s invaluable to have them available. Let’s see how they can improve our Angle type.

Simple Math Magic

The pattern for each method is the same. For a given operation - say, addition - there is a special method name that starts with double-underscores. For addition, it’s __add__ - the others also have sensible names. All you have to do is define that method, and instances of your class can be used with that operator. These are the magic methods.

When you discuss magic methods in face-to-face, verbal conversation, you’ll find yourself saying things like "underscore underscore add underscore underscore" over and over. That’s a lot of syllables, and you’ll get tired of it fast. So the Python community uses a verbal abbreviation it invented: "dunder". When you say "dunder foo", it means "underscore underscore foo underscore underscore". This isn’t used in writing, because it’s not needed - you can just write __foo__. But at Python gatherings, you’ll sometimes hear people say it. Use it; it saves you a lot of energy when talking.

Anyway, back to dunder add - I mean, __add__. For operations like addition - which take two values, and return a third - you write the method like this:

  1. def __add__(self, other):
  2.     return Angle(self.value + other.value)

The first argument needs to be called "self", because this is Python. The second does not have to be called "other", but often is. This lets us use the normal addition operator for arithmetic:

>>> total = Angle(30) + Angle(45)
>>> total.value
75

There are similar operators for subtraction (__sub__), multiplication (__mul__), and so on:


  • __add__: a + b
  • __sub__: a - b
  • __mul__: a * b
  • __truediv__: a / b (floating-point division)
  • __mod__: a % b
  • __pow__: a ** b

Essentially, Python translates a + b to a.__add__(b); a % b to a.__mod__(b); and so on. You can also hook into bit-operation operators:


  • __lshift__: a << b
  • __rshift__: a >> b
  • __and__: a & b
  • __xor__: a ^ b
  • __or__: a | b

So a & b translates to a.__and__(b), for example. Since __and__ corresponds to the bitwise-and operator (for expressions like "foo & bar"), you might wonder what the magic method is for logical-and ("foo and bar"), or logical-or ("foo or bar"). Sadly, there is none. For this reason, sometimes libraries will hijack the & and | operators to mean logical and/or instead of bitwise and/or, if the author feels the logical version is more important.
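As a toy illustration of that hijacking (my own example, not from any particular library), here is a hypothetical Predicate wrapper that redefines & to mean logical-and:

```python
class Predicate:
    """Wraps a one-argument function that returns True or False."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)
    def __and__(self, other):
        # Hijack & to mean logical-and of two predicates,
        # returning a new composite Predicate.
        return Predicate(lambda x: self.fn(x) and other.fn(x))

positive = Predicate(lambda x: x > 0)
even = Predicate(lambda x: x % 2 == 0)
positive_and_even = positive & even

assert positive_and_even(4)
assert not positive_and_even(-4)
assert not positive_and_even(3)
```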

The default representation of an Angle object isn’t very useful:

>>> Angle(30)
<__main__.Angle object at 0x106df9198>

It tells us the type, and the hex object ID, but we’d rather it tell us something about the value of the angle. There are two magic methods that can help. The first is __str__, which is used when printing a result:

  1. def __str__(self):
  2.     return f"{self.value} degrees"

The print() function uses this, as well as str(), and the string formatting operations:

>>> print(Angle(30))
30 degrees
>>> print(f"{Angle(30) + Angle(45)}")
75 degrees
>>> print("{}".format(Angle(30) + Angle(45)))
75 degrees
>>> str(Angle(135))
'135 degrees'
>>> some_angle = Angle(45)
>>> f"{some_angle}"
'45 degrees'

Sometimes, you want a string representation that is more precise, which might be at odds with a human-friendly representation. Imagine you have several subclasses (e.g., PitchAngle and YawAngle in some kind of aircraft-related library), and want to easily log the exact type and arguments needed to recreate the object. Python provides a second magic method for this purpose, called __repr__:

  1. def __repr__(self):
  2.     return f"Angle({self.value})"

You access this by calling the repr() built-in function (think of it as working like str(), except that it invokes __repr__ instead of __str__), or by using the !r conversion in a format string:

>>> repr(Angle(75))
'Angle(75)'
>>> print('{!r}'.format(Angle(30) + Angle(45)))
Angle(75)
>>> print(f"{Angle(30) + Angle(45)!r}")
Angle(75)

The official guideline is that the output of __repr__ is something that can be passed to eval() to recreate the object exactly. It’s not enforced by the language, and isn’t always practical, or even possible. But when it is, doing so is useful for logging and debugging.
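Here is a quick check of that round-trip property with the Angle class (repeating the relevant definition so the snippet stands alone):

```python
class Angle:
    def __init__(self, value):
        self.value = value % 360
    def __repr__(self):
        return f"Angle({self.value})"

original = Angle(405)          # wraps around to 45
copy = eval(repr(original))    # evaluates the string "Angle(45)"
print(repr(copy))              # Angle(45)
```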

We also want to be able to compare two Angle objects. The most basic comparison is equality, provided by __eq__. It should return True or False:

  1. def __eq__(self, other):
  2.     return self.value == other.value

If defined, this method is used by the == operator:

>>> Angle(3) == Angle(3)
True
>>> Angle(7) == Angle(1)
False

By default, the == operator for objects compares object identity - two objects are equal only if they are literally the same object. That’s rarely useful:

>>> class BadAngle:
...     def __init__(self, value):
...         self.value = value
...
>>> BadAngle(3) == BadAngle(3)
False

The != operator has its own magic method, __ne__. It works the same way:

  1. def __ne__(self, other):
  2.     return self.value != other.value

What happens if you don’t implement __ne__? If you define __eq__ but not __ne__, then the != operator will use __eq__, negating the output. Especially for simple classes like Angle, this default behavior is logically valid. So in this case, we don’t need to define a __ne__ method at all. For more complex types, the concepts of equality and inequality may have more subtle nuances, and you will need to implement both.
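A quick demonstration of that fallback (again repeating the minimal class so this runs on its own):

```python
class Angle:
    def __init__(self, value):
        self.value = value % 360
    def __eq__(self, other):
        return self.value == other.value

# No __ne__ is defined, yet != works by negating __eq__:
print(Angle(3) != Angle(7))   # True
print(Angle(3) != Angle(3))   # False
```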

What’s left are the fuzzier comparison operations; less than, greater than, and so on. Python’s documentation calls these "rich comparison" methods, so you can feel wealthy when using them:

  • __lt__ for "less than" (<)
  • __le__ for "less than or equal" (<=)
  • __gt__ for "greater than" (>)
  • __ge__ for "greater than or equal" (>=)

For example:

  1. def __gt__(self, other):
  2.     return self.value > other.value

Now the greater-than operator works correctly:

>>> Angle(100) > Angle(50)
True

Similarly with __ge__, __lt__, and the rest. If you don’t define these, you get an error:

>>> BadAngle(8) > BadAngle(4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: BadAngle() > BadAngle()

__gt__ and __lt__ are reflections of each other. What that means is that, in many cases, you only have to define one of them. Suppose you implement __gt__ but not __lt__, then do this:

>>> a1 = Angle(3)
>>> a2 = Angle(7)
>>> a1 < a2
True

This works thanks to some just-in-time introspection by the Python runtime. The expression a1 < a2 is semantically equivalent to a1.__lt__(a2). If Angle.__lt__ is indeed defined, that call is executed, and the expression evaluates to its return value.

For normal scalar numbers, n < m is true if and only if m > n. For this reason, if __lt__ does not exist, but __gt__ does, then Python will rewrite the angle comparison: a1.__lt__(a2) becomes a2.__gt__(a1). This is then evaluated, and the expression a1 < a2 is set to its return value.

Note there are situations where this is actually not what you want. Imagine a Point type, for example, with two coordinates, x and y. You want point1 < point2 to be True if and only if point1.x < point2.x, AND point1.y < point2.y. Similarly for point1 > point2. There are many points for which both point1 < point2 and point1 > point2 should both evaluate to False.

For types like this, you will want to implement both __gt__ and __lt__ (and __ge__ and __le__). You might also need to return NotImplemented from the method. This built-in singleton value signals to the Python runtime that the operation is not supported, at least for these arguments.
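Here is one way that might look for the Point type just described (a sketch; note that NotImplemented is returned, not raised):

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __lt__(self, other):
        if not isinstance(other, Point):
            # Signal that we can't handle this comparison; Python will
            # try the reflected operation, then raise TypeError.
            return NotImplemented
        return self.x < other.x and self.y < other.y
    def __gt__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return self.x > other.x and self.y > other.y

p1, p2 = Point(1, 5), Point(2, 3)
# Neither point dominates the other, so both comparisons are False:
print(p1 < p2, p1 > p2)   # False False
```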

Shortcut: functools.total_ordering

The functools module in the standard library defines a class decorator named total_ordering. In practice, for any class which needs to implement all the rich comparison operations, using this labor-saving decorator should be your first choice.

In essence: in your class, you define both __eq__ and one of the comparison magic methods: __lt__, __le__, __gt__, or __ge__. (You can define more than one, but it’s not necessary.) Then you apply the decorator to the class:

  1. import functools
  2. @functools.total_ordering
  3. class Angle:
  4.     # ...
  5.     def __eq__(self, other):
  6.         return self.value == other.value
  7.     def __gt__(self, other):
  8.         return self.value > other.value

When you do this, all missing rich comparison operators are supplied, defined in terms of __eq__ and the one operator you defined. This can save you a fair amount of typing.
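With the decorated Angle class above, every comparison now works even though we only wrote __eq__ and __gt__. A self-contained check:

```python
import functools

@functools.total_ordering
class Angle:
    def __init__(self, value):
        self.value = value % 360
    def __eq__(self, other):
        return self.value == other.value
    def __gt__(self, other):
        return self.value > other.value

# __lt__, __le__, and __ge__ are generated from __eq__ and __gt__:
assert Angle(30) <= Angle(45)
assert Angle(50) >= Angle(400)   # 400 wraps around to 40
assert Angle(10) < Angle(20)
```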

There are a few situations where you won’t want to use total_ordering. One is when the type’s comparison logic is not well-behaved enough for each operator to be inferred from the others via straightforward boolean logic. The Point type described above is an example, as are some types you might write for, say, an abstract-algebra engine.

The other reasons not to use it are (1) performance, and (2) the more complex stack traces it generates can be more trouble than they are worth. Generally, I recommend you assume these are not a problem until proven otherwise. It’s entirely possible you will never encounter one of the involved stack traces. And the relatively inefficient implementations that total_ordering provides are unlikely to matter unless they run deep inside some nested loop. Starting with total_ordering takes little effort, and you can always remove it later and hand-code the other magic methods if you need to.

Rebelliously Misusing Magic Methods

Magic methods are interesting enough, and quite handy when you need them. A realistic currency type is a good example. But depending on the kind of applications you work on, it’s not all that often you will need to define a class whose instances can be added, subtracted, or compared.

Things get much more interesting, though, when you don’t follow the rules.

Here’s a fascinating fact: methods like __add__ are supposed to do addition. But it turns out Python doesn’t require it. And methods like __gt__ are supposed to return True or False. But if you write a __gt__ which returns something that isn’t a bool…​ Python won’t complain at all.

This creates amazing possibilities.

To illustrate, let me tell you about Pandas. As you may know, this is an excellent data-processing library. It’s become extremely popular among data scientists who use Python (like some of you reading this). Pandas has a convenient data type called a DataFrame. It represents a two-dimensional collection of data, organized into rows, with labeled columns:

  1. import pandas
  2. df = pandas.DataFrame({
  3.     'A': [-137, 22, -3, 4, 5],
  4.     'B': [10, 11, 121, 13, 14],
  5.     'C': [3, 6, 91, 12, 15],
  6. })

There are several ways to create a DataFrame; here I’ve chosen to use a dictionary.[21] The keys are column names; the values are lists, which become that column’s data. So each list is, visually, rotated 90 degrees to form a column:

  1. >>> print(df)
  2.      A    B   C
  3. 0 -137   10   3
  4. 1   22   11   6
  5. 2   -3  121  91
  6. 3    4   13  12
  7. 4    5   14  15

The rows are numbered for you, and the columns are nicely labeled in a header. The A column, for example, contains a mix of positive and negative numbers.

Now, one of the many useful things you can do with a DataFrame is filter out rows meeting certain criteria. This doesn’t change the original dataframe; instead, it creates a new dataframe, containing just the rows you want. For example, you can say "give me the rows of df in which the A column has a positive value":

  1. >>> positive_a = df[df.A > 0]
  2. >>> print(positive_a)
  3.     A   B   C
  4. 1  22  11   6
  5. 3   4  13  12
  6. 4   5  14  15

All you have to do is pass the expression "df.A > 0" in the square brackets.

But there’s something weird going on here. Take a look at the line in which positive_a is defined. Do you notice anything unusual there? Anything strange?

Here’s what is odd: the expression "df.A > 0" ought to evaluate to either True or False. Right? It’s supposed to be a boolean value…​ with exactly one bit of information. But the source dataframe, df, has many rows. Realistic dataframes can easily have tens of thousands, even millions, of rows of data. There’s no way a single boolean literal can express which of those rows to keep, and which to discard. How does this even work?

Well…​ turns out, it’s not boolean at all:

  1. >>> comparison = (df.A > 0)
  2. >>> type(comparison)
  3. <class 'pandas.core.series.Series'>
  4. >>> print(comparison)
  5. 0    False
  6. 1     True
  7. 2    False
  8. 3     True
  9. 4     True
  10. Name: A, dtype: bool

Yes, you can do that, thanks to Python’s dynamic type system. Python translates "df.A > 0" into "df.A.__gt__(0)". And that __gt__ method doesn’t have to return a bool. In fact, in Pandas, it returns a Series object (which is like a vector of data), containing True or False for each row. And when that’s passed into df[] - the square brackets being handled by the __getitem__ method - that Series object is used to filter rows.

To see what this looks like, let’s re-invent part of the interface of Pandas. I’ll create a library called fakepandas, which instead of DataFrame has a type called Dataset:

  1. class Dataset:
  2.     def __init__(self, data):
  3.         self.data = data
  4.         self.labels = sorted(data.keys())
  5.     def __getattr__(self, label: str):
  6.         "Makes references like df.A work."
  7.         return Column(label)
  8.     # Plus some other methods.
  8. # Plus some other methods.

If I have a Dataset object named ds, with a column named A, the __getattr__ method makes references like ds.A return a Column object:

  1. import operator
  2. class Column:
  3.     def __init__(self, name):
  4.         self.name = name
  5.     def __gt__(self, value):
  6.         return Comparison(self.name, value,

This Column class has a __gt__ method, which makes expressions like "ds.A > 0" return an instance of a class called Comparison. It represents a lazy calculation, for when the actual filtering happens later. Notice its constructor arguments: a column name, a threshold value, and a callable to implement the comparison. (The operator module has a function called gt, taking two arguments, expressing a greater-than comparison).
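The Comparison class and Dataset.__getitem__ aren't shown at this point, but here is one possible sketch of how the pieces could fit together (my own guess at an implementation, not the article's exact code):

```python
import operator

class Comparison:
    """A lazily stored filter criterion: column name, threshold, operator."""
    def __init__(self, name, value, op):
        self.name = name
        self.value = value
        self.op = op

class Column:
    def __init__(self, name):
        self.name = name
    def __gt__(self, value):
        return Comparison(self.name, value, operator.gt)

class Dataset:
    def __init__(self, data):
        self.data = data
        self.labels = sorted(data.keys())
    def __getattr__(self, label):
        # Called only when normal lookup fails, so ds.A -> Column('A').
        return Column(label)
    def __getitem__(self, comparison):
        # Find the row indices where the criterion holds...
        keep = [i for i, value in enumerate(self.data[comparison.name])
                if comparison.op(value, comparison.value)]
        # ...and build a new Dataset containing just those rows.
        return Dataset({label: [self.data[label][i] for i in keep]
                        for label in self.labels})

ds = Dataset({'A': [-137, 22, -3, 4, 5], 'B': [10, 11, 121, 13, 14]})
filtered = ds[ds.A > 0]
print(filtered.data['A'])   # [22, 4, 5]
```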

You can even support complex filtering criteria like ds[ds.C + 2 < ds.B]. It’s all possible by leveraging magic methods in these unorthodox ways. If you care about the details, there’s an article delving into that.[22] My goal here isn’t to tell you how to re-invent the Pandas interface, so much as to get you to realize what’s possible.

Have you ever implemented a compiler? If so, you know the parsing phase is a significant development challenge. Using Python magic methods in this manner does much of the hard work of lexing and parsing for you. And the best part is how natural and intuitive the result can be for end users. You are essentially implementing a mini-language on top of regular Python syntax, but consistently enough that people quickly become fluent and productive with its rules. And they often won’t even think to ask why the rules seem to be bent; they won’t notice "df.A > 0" isn’t acting like a boolean. That’s a clear sign of success. It means you designed your library so well, other developers become effortlessly productive.

[19] This isn’t enforced by Python itself. If your teammates don’t already honor this widely-followed convention, you’ll have to educate them.

[20] This is all detailed in the "Advanced Functions" chapter.

[21] Which you will rarely do in real code (you will ingest a CSV file or something instead), but it is convenient for demonstrating here.

[22] See . The article explains these ideas in richer detail, and includes the full code of fakepandas and its unit test suite.
