Recently I started reading No Rules Rules, a book about Netflix’s culture. For me the most exciting point of the book was that in an organisation with high talent density you can start to remove control mechanisms and rules. But this culture of freedom only works when all employees have the right judgement to act in the company’s best interests.

In the past half year, since I started working at Wise (a company which I believe shares these values with Netflix), one of my greatest challenges has been to improve this ability of right judgement.

Principle-driven judgement

One mistake I frequently make is basing (professional) decisions on “principles”. By “principle” I mean a mantra we never question. For example, one of my mantras was:

No one should do a task manually which can be automated.

Based on this mantra I put effort into various automation side projects over the past years, and I think I often helped people make their work more enjoyable, and even made my teams more efficient.

Still, this “rule” is not always right. There are tasks which could, but never should, be automated: for example, because the task is simple while the automation would be expensive, or because the task is executed so rarely that the automation would need to be maintained again before each run.

Other examples of such mantras could be:

  • Your code should have 100% test coverage
  • You should log the state of your application extensively
  • You should have useful documentation for your APIs
  • You should always test your changes before going to production
  • Etc.

Outcome-driven judgement

What I want to achieve is an approach where I always estimate the expected outcome of an action or decision before doing anything. This estimate should be as concrete as possible.

Saying something like:

I want to improve our CI/CD pipeline because it will improve engineering experience in our team

isn’t enough. How will it improve the engineering experience? Will it save us time? Will it make our process more robust? Ideally, the impact should be measurable: for example, “it will cut our average build time from 20 minutes to 5”.

One of the advantages of this approach is that the estimated impact gives you a basis to compare different initiatives in terms of their “usefulness”.

Turning down an initiative

Above I said:

There are tasks which could, but never should, be automated.

But some decisions can be even trickier: there are tasks which could, and even should, be automated. It’s just not that important, because the expected outcome isn’t high enough.

Sometimes it has been hard for me to turn down my own initiatives: ones I was excited about, but which turned out to be not so important. But now I see that it’s not like:

Choosing not to do something

it’s much more like:

Choosing to do something more impactful over what you are working on right now

A good technique could be to keep a prioritised list of possible initiatives, so that the list can serve as a “reference” to compare any new initiative against.
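
To make this concrete, here is a minimal sketch of what such a list could look like (my own illustration, with made-up initiative names and estimates, not something from the book or from Wise). Each initiative carries an estimated recurring benefit and a one-off cost, and the list is ordered by how quickly each initiative pays for itself:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    hours_saved_per_month: float  # estimated recurring benefit
    effort_hours: float           # estimated one-off cost

    def payback_months(self) -> float:
        # Months until the initiative pays for its own effort.
        return self.effort_hours / self.hours_saved_per_month

# A hypothetical backlog; all numbers are rough estimates.
backlog = [
    Initiative("Automate release notes", hours_saved_per_month=2, effort_hours=40),
    Initiative("Speed up CI pipeline", hours_saved_per_month=20, effort_hours=30),
    Initiative("Auto-generate API docs", hours_saved_per_month=5, effort_hours=25),
]

# The prioritised "reference" list: fastest payback first.
for initiative in sorted(backlog, key=Initiative.payback_months):
    print(f"{initiative.name}: pays back in {initiative.payback_months():.1f} months")
```

The exact scoring formula matters less than the habit it enforces: every initiative on the list has to carry a concrete, comparable estimate before it can be prioritised.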