Student Login

Mathematical Physicist Turned Silicon Valley Veteran And O'Reilly Author Reveals:

How To Become The Best Python Developer On Your Team

In Just 5 Hours Per Week, with a supportive community of ambitious, like-minded pros dedicated to mastering your craft.

Are "10X developers" real?

Maybe, maybe not.

But no matter your job title, if you write Python code in your work, you can certainly multiply your effectiveness and success at writing high-quality software... by a factor of 2X, 3X, maybe 5X or more.

This requires you to:

Think Smarter, Not Harder.

How, exactly? The key is an idea from the world of physics, which dates all the way back to Aristotle: the idea of first principles.

"First principles" means the foundational concepts, distinctions, ideas, and mental models that software development is based on. Take for example:

The Strategy Design Pattern

This classic design pattern lets you select from one of several different algorithms at runtime.

The Strategy Pattern is a first principle of software development. Like all design patterns, it's a mental model we use as a "building block" for creating software, independent of programming language.

Typically, you implement Strategy by writing a different class for each possible algorithm - plus other classes to use whatever algorithm you choose. The lines of code add up.

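For contrast, here is a minimal sketch of what a classic class-based Strategy implementation might look like (the class names here are invented for illustration):

```python
from abc import ABC, abstractmethod

# One class per algorithm, plus a context class to use them.
class SortStrategy(ABC):
    @abstractmethod
    def key(self, item):
        ...

class ByValue(SortStrategy):
    def key(self, item):
        return item

class ByAbsoluteValue(SortStrategy):
    def key(self, item):
        return abs(item)

class Sorter:
    def __init__(self, strategy):
        self.strategy = strategy

    def sort(self, items):
        return sorted(items, key=self.strategy.key)

numbers = [7, -2, 3, 12, -5]
by_abs = Sorter(ByAbsoluteValue()).sort(numbers)
# by_abs == [-2, 3, -5, 7, 12]
```

Four classes, to accomplish what Python's function objects let you express in a single line.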
At the same time, each programming language has its own first principles. Python, for example, has function objects. And because Python has this, you can implement the Strategy pattern in Python more easily than in other languages:

>>> numbers = [7, -2, 3, 12, -5]
>>> # Sort the numbers.
>>> sorted(numbers)
[-5, -2, 3, 7, 12]

>>> # Sort them by absolute value.
>>> sorted(numbers, key=abs)
[-2, 3, -5, 7, 12]

>>> # Find the food with highest protein per serving.
>>> foods = [
...     {'name': 'rice', 'calories': 130, 'protein': 2.7},
...     {'name': 'chicken', 'calories': 157, 'protein': 32},
...     {'name': 'eggs', 'calories': 149, 'protein': 10},
...     {'name': 'chocolate', 'calories': 535, 'protein': 7.7},
...     ]
>>> max(foods, key=lambda food: food['protein'])
{'name': 'chicken', 'calories': 157, 'protein': 32}

Other examples of software engineering first principles include:

When you learn to think in terms of first principles...

It's Like A "Cheat Code".

You can see solutions to coding problems that others don't see. Bugs and stack traces that used to baffle you suddenly seem clear and obvious. You surprise yourself by what you can do, and how fast you can do it.

Here's a big secret:

There are first principles of software engineering which do not have standard names and which are not well known, except among the most exceptional performers and thought leaders.

I will tell you some of these "secret" first principles later in this document, and even better, how you can discover your own.

For now, understand this:

When you learn how to think in terms of first principles, and gain knowledge of the first principles... you are suddenly able to do things you could not do before.

Especially with developing software, you get creative insights quickly and repeatedly; routinely solve seemingly intractable problems; and regularly produce surprising new inventions.

The Foundation Is OOP

Most people know that object-oriented programming helps organize your code...

But not many realize it organizes how you think about your code, too.

When you create good classes representing good abstractions, it helps you THINK BETTER about your code. In this form, your code is easier to reason about... with more clarity and better insights, FASTER.

This is related to a well-known principle in psychology called:

Miller's Law

Also called the "7 plus or minus 2" rule, created by Harvard psychologist George A. Miller.

Dr. Miller's publication on this concept is one of the most highly cited papers in the history of psychology. It says that human minds optimally reason in terms of a small number of chunks (yes, "chunk" is the formal term used in cognitive psychology). That number is close to 7 in most people, hence the name of the rule.

The obvious way this applies to programming:

If you have a small set of classes which map well to the business logic of what your program is attempting to do, then reasoning about your codebase and accomplishing your goal...

Becomes Exponentially Easier.

That is because classes represent abstractions we can reason about.

When you choose the right abstractions, and design classes which represent those abstractions well, your thinking is denominated in the most empowering mental model for solving the problem at hand.

You find you can write high-quality software faster, and with a greater chance of successfully solving the problem you want to solve. Your code comes out more elegant and clear: it omits anything unnecessary, with less clutter and cruft.

Take for example: DataFrame, from Pandas:

>>> import pandas as pd
>>> df = pd.DataFrame({
...         'A': [-137, 22, -3, 4, 5],
...         'B': [10, 11, 121, 13, 14],
...         'C': [3, 6, 91, 12, 15],
...     })
>>>
>>> positive_a = df[df.A > 0]
>>> print(positive_a)
    A   B   C
1  22  11   6
3   4  13  12
4   5  14  15

DataFrame is a Python class, and it has changed the world.

But it is made up. Once upon a time, there was no such thing as a DataFrame. Someone created it, empowering countless coders to perform remarkable feats of data processing with far less effort than before.

This means you can create your own abstractions - i.e., classes - that unlock massive productivity for you, your team, and possibly...

The Entire Planet.

Another example: Imagine a class called DateInterval, which represents an interval or range of days. You use it like this:

>>> from datetime import date
>>> interval = DateInterval(date(2050, 1, 1), date(2059, 12, 31))

You can check whether a particular day is in the interval or not:

>>> some_day = date(2050, 5, 3)
>>> another_day = date(2060, 1, 1)
>>> some_day in interval
True
>>> another_day in interval
False

You can ask how many days are in the interval:

>>> len(interval)
3652

You can even use it in a for-loop:

>>> for day in interval:
...     process_day(day)

Here is the source code of DateInterval:

from datetime import (
    date,
    MINYEAR,
    MAXYEAR,
    timedelta,
    )

class DateInterval:
    BEGINNING_OF_TIME = date(MINYEAR, 1, 1)
    END_OF_TIME = date(MAXYEAR, 12, 31)
    
    def __init__(self, start=None, end=None):
        if start is None:
            start = self.BEGINNING_OF_TIME
        if end is None:
            end = self.END_OF_TIME
        if start > end:
            raise ValueError(
                f"Start {start} must not be after end {end}")
        self.start = start
        self.end = end

    @classmethod
    def all(cls):
        return cls(cls.BEGINNING_OF_TIME, cls.END_OF_TIME)

    def __contains__(self, when):
        return self.start <= when <= self.end
    
    def __iter__(self):
        for offset in range(len(self)):
            yield self.start + timedelta(days=offset)

    def __len__(self):
        return 1 + (self.end - self.start).days

Whether you currently understand this Python code is not important. (We can teach you how, as explained below.)

What matters is you understand that you are creating a new abstraction, called a date interval, that empowers you to more efficiently reason about your codebase.

And of course, it also makes it easier to write good code, and to...

Write It Faster.

Classes are also important for more classically understood benefits like code reuse, encapsulation, data hiding, and so on...

But even these factors relate to how we think about our code.

Which brings us to the next big "10X Your Python Skills" topic:

Automated Testing

If OOP is the foundation, writing tests is the supercharger.

When you apply the patterns and best practices of writing unit tests, integration tests, and other test forms, you find it tremendously boosts your capability to create sophisticated, powerful software systems.

Automated testing does not help much when writing small scripts. But who cares about those?

Your greatest contributions come from creating complex software systems that solve hard problems people care about. This is where automated testing...

Becomes A Superpower.

In particular, automated testing will tremendously speed up development. Again, not for small programs; but that is not why you are here.

Automated tests do not just let you write better code faster. They make it possible for you to create advanced software systems you simply could not before. They expand your CAPABILITIES.

The full power of automated testing depends on OOP. You can create simple tests without classes. But the most valuable testing patterns require full use of Python’s object system. This is why OOP comes first.

Imagine you are designing a web application framework. You want view objects which represent the HTTP response, and a way to map incoming URLs to those views. You create specialized View classes for different response types, and create a class called WebApp to coordinate this configuration:

app = WebApp()
app.add_route("/api", JSONView({"answer": 42}))
app.add_route("/about", HTMLView('about.html'))

This object has a get() method to retrieve the HTTP response for a URL:

>>> app.get('/api')
'{"answer": 42}'
>>> app.get('/about')
'<html><body>About Page</body></html>'

There is enough complexity here that if you just start coding it, odds are high bugs will be missed by manual testing. Even worse, at this threshold of complexity, every new feature you add risks breaking something in an unexpected way.

The only way to maintain full correctness is to exhaustively test after each change, which is extremely labor-intensive.

So What Do You Do?

The solution is to start by writing a suite of automated tests, exercising the key functionality. Like this:

import unittest
from webapp import (
    WebApp,
    HTMLView,
    JSONView,
    )

class TestWebapp(unittest.TestCase):
    def test_route(self):
        app = WebApp()
        json_view = JSONView({"alpha": 42, "beta": 10})
        html_view = HTMLView('about.html')
        app.add_route('/api', json_view)
        app.add_route('/about', html_view)

        # The JSON object as a string
        expected = '{"alpha": 42, "beta": 10}'
        actual = app.get('/api')
        self.assertEqual(expected, actual)

        # The contents of about.html
        expected = 'About Page\n'
        actual = app.get('/about')
        self.assertEqual(expected, actual)

Then it becomes straightforward to implement. You simply write code to make your tests pass. And when that is done...

You Feel CONFIDENT In Your Code.

Because your tests assure you that your program is working correctly. Here is one way to implement it (and make the above tests pass):

import abc
import json

class WebApp:
    def __init__(self):
        self._routes = {}
    def add_route(self, url, view):
        self._routes[url] = view
    def get(self, url):
        view = self._routes[url]
        return view.render()
    def urls(self):
        return sorted(self._routes.keys())

class View(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def render(self):
        pass

class HTMLView(View):
    def __init__(self, html_file):
        with open(html_file) as handle:
            self.content = handle.read()

    def render(self):
        return self.content
    
class JSONView(View):
    def __init__(self, obj):
        self.obj = obj

    def render(self):
        return json.dumps(self.obj)

The details of this are less important than understanding the pattern, which is:

This is why we say automated testing is a superpower. It simply puts you in a different tier.

That clears the way for your next 10X Pythonista skill:

Data Scalability

When processing large amounts of data, some programs are responsive, efficient, and rock-solid reliable.

Others hog the machine’s resources, randomly hang without warning, or even crash when you push too much data into them.

The difference comes down to space complexity: the area of algorithm analysis focused on how programs use memory. The goal is for your algorithm to process a large quantity of data while keeping its memory footprint under a reasonable upper bound, independent of input size.

This matters because of how operating systems manage running programs. When a program creates a large data structure, it must request a block of virtual memory from the OS...

But the job of the OS is to allow all programs to run, and prevent any one program from starving the others of enough resources to run.

So the OS does not always give a program the full amount of memory it asks for. If a program asks for too much, the OS will instead “page to disk” - which means giving the program a block of pseudo-memory, written to and read from the persistent storage layer (the hard disk, SSD, etc.) instead of actual memory.

And when this happens...

It Shoves Your Performance Off A Cliff!

The sluggish I/O speeds of disk compared to memory reduce raw performance tremendously, typically by 75% to 90%.

That's right. It can literally make your program 10 times slower.

If a human is interacting with your program directly, they will see it ‘hang’ and appear stuck. In extreme cases, to ensure other programs have the resources they need, the operating system will kill your process entirely.

Those skilled at writing big-data programs learn to code in memory-efficient ways, fitting in some reasonable memory footprint regardless of total data input size.

This is an essential skill for top performers. Big data is here to stay, for everyone; software processing more data than fits into memory is increasingly the norm, not the exception.

How do you accomplish this in Python? The key is a feature called generators. Many Python users have heard of this; you have probably seen the ‘yield’ keyword. But most do not know its depth, including:

This relates to two kinds of first principles: memory-efficient algorithms, which are language-independent; and Python’s memory model, along with its generator feature. The language-level first principles form a bridge to the higher-level, language-independent ones. You need both.

Let's look at an example of when you need generator functions.

This Function Has A Problem:

# Read lines from a text file, choosing
# those which contain a certain substring.
# Returns a list of strings (the matching lines).

def matching_lines_from_file(path, pattern):
    matching_lines = []
    with open(path) as handle:
        for line in handle:
            if pattern in line:
                matching_lines.append(line.rstrip('\n'))
    return matching_lines

This returns a list of strings. It creates a new data structure - the list - whose size scales up in direct proportion to the size of the input.

If the text file is small, or few lines in the file match the pattern, this list will be small. But if the text file is large, with many matching lines, the memory allocated for the list becomes substantial.

The problem is this can easily cross the threshold where it must be paged to disk...

Plummeting Performance Of Your Entire Program!

Here is another approach:

# Generator function
def matching_lines_from_file(path, pattern):
    with open(path) as handle:
        for line in handle:
            if pattern in line:
                yield line.rstrip('\n')

This is called a generator function, signaled by the ‘yield’ in its final line. And without going into details here, what this accomplishes is to...

Transform The Algorithm Completely.

Rather than a list of all matching lines, it is now producing them one at a time. Typically you are using the result in a “for” loop or something similar, where you only process one at a time anyway.

This means your memory footprint is now the size of the longest line in the file, rather than the total size of all matching lines! And this holds true no matter how large an input file you feed it. Suddenly your algorithm is vastly more memory efficient, with NONE of the problems plaguing the first version.

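To see this in action, here is a small self-contained demo of the generator version (the log file and its contents are invented for illustration):

```python
import os
import tempfile

def matching_lines_from_file(path, pattern):
    # Generator version: yields matching lines one at a time.
    with open(path) as handle:
        for line in handle:
            if pattern in line:
                yield line.rstrip('\n')

# Create a small sample log file for the demo.
with tempfile.NamedTemporaryFile('w', suffix='.log', delete=False) as f:
    f.write('INFO start\nERROR disk full\nINFO done\nERROR timeout\n')
    sample_path = f.name

# Only the current line is held in memory, no matter the file size.
matches = list(matching_lines_from_file(sample_path, 'ERROR'))
# matches == ['ERROR disk full', 'ERROR timeout']

os.remove(sample_path)
```

In real code, you would loop over the generator directly; collecting everything into a list, as done here just to show the results, would reintroduce the memory cost.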
The best approach is to proactively infuse all your Python code with memory-efficient Python generator functions, as a standard practice when developing. This naturally makes your programs perform their best, regardless of hidden memory bottlenecks you missed, and regardless of unexpectedly large input sizes.

Of course, this is the foundation; we do not have time to go into the higher-level generator design patterns, or the important architectural principle I call Scalable Composability. Instead, it is time to move on to the next 10X Pythonista topic:

Metaprogramming.

Maybe you have heard this word before. But what does it even mean?

Think of it this way: Most Python code operates on data. Right? That's what we normally do.

And it turns out you can go "meta" here: instead of writing Python code that operates on data, you instead write Python code that operates on other Python code. And the output of that operation is what operates on data.

Or put another way:

(Code -> Code) -> Data
Instead Of
Code -> Data

This is what we mean by "metaprogramming". It is one of the highest-leverage things you can do when creating software.

Coders who use metaprogramming unlock another level. They rapidly create tools and components that other developers could not create in a lifetime.

It is like the difference between riding a tricycle, and flying a jet. You not only "travel" faster; you go places you simply could not reach at all with the lesser vehicle.

You can find examples of metaprogramming in the source code of the most successful Python libraries: Django, Pandas, Flask, Twisted, Pytest, and more. It is no accident that extremely powerful tools rely on metaprogramming.

As an example, imagine this function which makes an HTTP request to an API endpoint:

import requests
def get_items():
    return requests.get(API_URL + '/items')

Suppose the remote API has an intermittent bug, so that 1% of requests fail with an HTTP 500 error (which indicates an uncaught exception or similar error on the remote server). You realize that if you make the same request again, it has a 99% chance of succeeding. So you modify your get_items() function to retry up to 3 times:

def get_items():
    MAX_TRIES = 3
    for _ in range(MAX_TRIES):
        resp = requests.get(API_URL + '/items')
        if resp.status_code != 500:
            break
    return resp

However, your application does not have just this one function. It has many functions and methods which make requests to different endpoints of this API, all of which have the same problem. How can you capture the retry pattern in an easily reusable way?

The best way to do this in Python is with a metaprogramming tool called a decorator. The code for it looks like this:

def retry(func):
    def wrapper(*args, **kwargs):
        MAX_TRIES = 3
        for _ in range(MAX_TRIES):
            resp = func(*args, **kwargs)
            if resp.status_code != 500:
                break
        return resp
    return wrapper

Then you can trivially apply it to the first function, and other functions like it, by simply typing @retry on the line before:

@retry
def get_items():
    return requests.get(API_URL + "/items/")

@retry
def get_item_detail(id):
    return requests.get(API_URL + f"/item-detail/{id}/")

Even better, the retry logic is fully encapsulated in the retry() function. You can raise the number of maximum retries, implement exponential backoff, or make any other changes you want in one place. All decorated functions and methods immediately reflect your updates.

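For instance, here is one way the decorator might be extended to take parameters (a sketch; the parameter names are invented for illustration):

```python
import functools
import time

def retry(max_tries=3, base_delay=0.0):
    # Decorator factory: retry(max_tries=5) returns a decorator.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                resp = func(*args, **kwargs)
                if resp.status_code != 500:
                    break
                # Exponential backoff: wait 1x, 2x, 4x... the base delay.
                time.sleep(base_delay * (2 ** attempt))
            return resp
        return wrapper
    return decorator
```

You would then apply it as @retry(max_tries=5) instead of the bare @retry, and every decorated function immediately picks up the new behavior.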
Metaprogramming has another benefit. In many cases, you find that it allows you to...

Amplify Your Entire Team's Productivity!

The reason is that metaprogramming often allows you to implement solutions to complex and difficult problems in a way that people can easily reuse.

This retry() function - and the @retry decorator it implements - is somewhat advanced code. But your @retry tool is instantly usable by everyone on your team.

Your presence on the team has amplified everyone’s productivity and systematically improved the reliability of the entire application, in a way that everyone recognizes right away.

This sets the stage for the next 10X Pythonista level, called:

Synthesis

Much of your learning is bottom-up. You study specific topics in depth, with a period of intentional focus.

You do this for good reason: because it is efficient. You break each learning curve into manageable chunks, so you can solidly master that one topic, then build on it as you move on to the next. It's just smart to do it this way, focusing on one topic at a time.

But of course, real software isn't structured that way. In a real program, you constantly use all features of Python, all mixed together, all the time. It is smart to learn the building blocks first, but you also must learn how to compose them together.

There Is Another Factor:

Real software, made under deadline pressure with conflicting priorities, is never perfect.

You could of course make it perfect, by investing enough time and energy. But that is not what you do. In practice, you have many priorities, all of which are important, and all of which have deadlines.

The way skilled programmers manage this is by writing code which meets the requirements, to a ‘good enough’ standard of quality. And instead of investing additional hours refining it, you declare that task complete, and move on to one of your other tasks that were due yesterday.

How do you navigate this ‘good enough’ threshold? How do you make these decisions, these judgment calls, when you have so many competing (and sometimes conflicting) priorities? This is an important part of thriving as a technology professional.

As you build this 10X technical foundation, it's time to turn to a non-technical topic:

How To Promote Yourself.

After all, as important as technical skills are, they are simply not enough to secure the best jobs with the best pay. Especially in the current market.

In order to reap the rewards, you must also learn to position yourself as an expert and a thought leader. And the easiest way to do that is to create an open-source software project... but with a twist.

You see, the purpose is not based on what will be fun for you to code up. It will not even be based on what technology you want to learn.

Instead, its purpose is to be a marketing asset. It is marketing YOU.

We call this special open-source project...

Your Artifact.

Your Artifact is a dynamic marketing asset which efficiently signals your elite status to the job market, and attracts the opportunities you want.

Every word of this matters:

Understand an important point:

Some Artifacts Are Better Than Others.

A well-chosen and well-executed Artifact will bring outsized impact for your invested time and energy. It can be tremendously effective at communicating your capabilities to your peers and potential employers.

Your Artifact can be

Creating your Artifact does NOT need to be difficult, or require a lot of work. The notion that success can only come from extreme exertion is simply:

A Limiting Belief!

The career-boosting results of your Artifact are largely independent of the effort and time you put into it.

Also, your Artifact does NOT need to become "famous" to help you; none of mine ever were. It does not need to be something many people know about, like Pandas or Nginx or Docker.

What you want is to create an Artifact which is famous among the right people in your domain of technology - those who can help you attain your career goals. If it is unknown by others, then who cares.

The mere existence of this Artifact has a shocking effect: it makes people assume you are in the top tier, immediately and uncritically. Just as if you had written a book on the topic.

You can create this effect by actually writing a book. But speaking as someone who has done that, writing a book takes massive amounts of time - typically years. In contrast, you can often create the initial version of an Artifact in...

Just One Weekend!

At least, the first "minimum viable" version. That is enough to start influencing peers and strangers to see you in a completely different light.

Creating the right Artifact will change how you are perceived by your entire peer group. It does so more efficiently, immediately, and usefully than any other method. It simply requires the right strategy.

Which leads nicely to the next step:

Building Your Network.

By "network", I mean the human kind. Your network of professional peers.

This becomes a different kind of asset, if you do it right. And one which is increasingly fulfilling and valuable over time.

You might be surprised to know that your Artifact is critically important for building your network.

Of course you can build your network without that...

However, it will grow so much slower, and your network will not be as useful to you.

Without a solid Artifact, many individuals at the top of your profession simply will not give you the time of day. And even people at the "bottom" will not hurry to engage with you.

But when you have created a strong Artifact, backed by your 10X Python coding skills, your universe has a completely different vibe.

People you have looked up to for years reach out to talk with you. You find amazing people tracking you down on their own and engaging with you.

They tell other people about you and what you've done, naturally building your reputation for you, again without you having to lift a finger.

And when you ask for help - for example, when finding a new job...

They JUMP At The Chance To Help You.

And the most wonderful part: it cycles back on itself, in a positive feedback loop.

Those high performers in your expanding network rub off on you. By direct transmission from them, your engineering foundation will grow faster than ever before.

So you can create better assets faster. Which will continue to open even more doors for you, all the time.

This is when you learn to fly!

But you can't get there without doing all of these steps.

You can't realistically build a great network of great people, without creating Artifacts that work for you while you sleep.

And you can't build a great Artifact without a solid engineering foundation.

At this point, you may be asking yourself how to pull all this off.

Powerful Python Elite Bootcamp is for technology professionals who want to develop world-class software engineering skills and Python language expertise. It includes:

It is designed for busy professionals who are working full time, and requires 5-10 hours per week, most of it on your own schedule.

The Elite Bootcamp’s broad structure is illustrated by this Flowchart:

Powerful Python Elite Bootcamp teaches you everything in this document, and more. Best of all, you are joining a community of skilled technology professionals, so you are learning together as a group.

What You Get

What's In The Technical Track:

Apply

Questions And Answers

If you have a question not answered here, ask us by emailing service@powerfulpython.com.

How much time does this take?

We recommend you invest 5-10 hours per week. This is on your own schedule, except when you attend the live Group Mentoring sessions.

Below 5 hours per week, it is hard to keep your momentum, and we recommend you wait until you have more time. More than 10 hours has diminishing returns; it actually works better if you pace it out more gradually, as that seems to help integrate what you learn into your long-term memory.

How long does it take to complete the Elite Bootcamp?

Students who invest 5-10 hours per week typically complete the core technical training in 2-4 months. It may take longer, or less time, depending on your background and how much you prioritize your participation.

The fastest anyone has ever completed Elite has been 3 weeks. That individual was between jobs, quite intelligent, and extremely motivated. In general it is better to take a more gradual pace, but it is possible to progress quickly. On the other hand, sometimes students pause for a week or even a month or more because they get busy.

Once you have completed the core training, the process of creating your Artifacts and participating in the community can go on potentially forever. We have had students active in the Elite Bootcamp for many years.

What are the minimum requirements?

This is not for people new to programming. At a minimum, you should be able to write simple Python programs using functions, dicts and lists, and execute them on the command line.

The best way to self-assess is to do the sample coding exercises. If you can do them without much trouble, you are qualified for Powerful Python Elite Bootcamp.

What IDEs do you support? What operating systems?

PPB is designed for all Python-using technology professionals, who work in wide and diverse ways. As such, we fully support every IDE and editor, and the three major OSes (macOS, Windows, and Linux).

Do you offer certification?

Yes, students earn a digital certificate of completion for each training module they complete. Note that you must have an active subscription to earn a new certificate of completion; however, once earned, the digital certificate does not expire.

When do the Group Mentoring sessions happen?

The current schedule is Tuesdays at 5pm Pacific, and Fridays at noon Pacific. These times will occasionally shift. Also, see the next question.

What if I need help, but cannot attend the session live?

No problem. You can submit your question before the session, and we will answer it in detail during the call. After you watch the recording, if you have any follow-up questions, just ask and we'll help you sort it out.

Are the Group Mentoring sessions recorded?

Yes, each session is recorded and made available to students. This is useful if you cannot attend live, or if you did attend but want to review what we discussed and screen shared.

The full archive is over 150 hours, filled with priceless insights and live coding demonstrations across a wide range of practical, real-world topics in Python, software engineering, data science, and much more. It is arguably the most extensive repository of realistic and advanced discourse for Python professionals in the world.

I have another question.

We are happy to answer. Simply email us at service@powerfulpython.com and we will reply within the next business day.

Apply

Our Professional Students Work At These Companies

What Our Alumni Say

Written Testimonials

Pythonic OOP

Test-Driven Python

Scaling Python With Generators

Next-Level Python

The Powerful Python Book