What about Formal Verification?

In the paper A Formally Verified NAT Stack, the authors describe a software-defined network function that is “semantically correct, crash-free, and memory safe.” That’s right: it has no bugs, it will never crash, and they can prove it! But how important is this?

For decades, formal verification has been lurking with a promise of helping us build better systems. Since the beginning of programming there has been steady work on languages, tools, and systems for formal verification: an analytic method of exhaustively proving the correctness of code by rigorously specifying mathematical models of its behaviour.

At the forefront of this effort was Sir C.A.R. Hoare, inventor of the quicksort algorithm and the null reference, and a lifelong labourer on formal specification languages like Z and CSP.

But formal verification has never really caught on in software, and in 1996, at a talk at a theoretical computer science workshop, Hoare conceded:

Ten years ago, researchers into formal methods (and I was the most mistaken among them) predicted that the programming world would embrace with gratitude every assistance promised by formalisation to solve the problems of reliability that arise when programs get large and more safety-critical.

Programs have now got very large and very critical – well beyond the scale which can be comfortably tackled by formal methods. There have been many problems and failures, but these have nearly always been attributable to inadequate analysis of requirements or inadequate management control. It has turned out that the world just does not suffer significantly from the kind of problem that our research was originally intended to solve.

https://en.wikipedia.org/wiki/Tony_Hoare#cite_note-26

This raises the question: what is the problem that their research was intended to solve?

In product development there are two sides of “correctness”:

  • Validation – did we build the right thing? Did we build something that people want to use?
  • Verification – did we build it the right way? Have we made the thing we were trying to make?

Build the right thing; build the thing right…

As Ousterhout writes in A Philosophy of Software Design, “the most fundamental problem in computer science is problem decomposition: how to take a complex problem and divide it up into pieces that can be solved independently.” This is the essence of analysis. You would be forgiven for thinking, as many did during Hoare’s time, that analytic approaches would be successful in managing software development.

In terms of validation, we have seen a number of spectacular failures come out of analytic management approaches like Waterfall. We have seen a lot of software that technically works, but doesn’t meet the user’s needs:

None Pizza, Left Beef

So over the years, we learned to do validation differently.

We created processes that continuously check our work using empirical approaches – methods designed to integrate the testing of assumptions throughout the lifecycle of the product. A few years after Hoare’s speech, this was first articulated as the Agile Manifesto, and we have been building on it ever since. I like to call these approaches Synthetic Management practices (synthetic meaning “finding truth by recourse to experience”).

What about verification?

The book Software Engineering at Google describes the difference between programming (development) and software engineering (development, modification, maintenance). Software engineering is programming integrated across people and over time. I love this quote that highlights the difference: “It’s programming if ‘clever’ is a compliment, it’s software engineering if ‘clever’ is an accusation.” Being correct in the moment is a lot different than scaling correctness across people and over time, allowing the system to change but still be correct.

  • To scale correctness, we need systems of verification, and here we turn heavily to the synthetic approaches. We manage our code using unit tests, functional tests, integration tests, build systems, CI/CD, and so on. Over the years, we have replaced manual QA processes with programmatic forms of verification that operate continuously throughout the lifecycle of our work. These are the systems of software engineering we use to enable a codebase to change reliably across many people over a long time.

So when it comes to verification techniques, we need to consider not whether a technique works, but whether the technique can be used as part of an organizational system of engineering.

Does formal verification have a place in systems of software engineering?

From the paper:

The proof of correctness of the NAT combines theorem proving for data structures code (stateful) and symbolic execution for the rest of the code (stateless). Even though theorem proving requires a lot of human effort, NF developers can reuse the verified data structures for many NFs. Symbolic execution is automated, and easy to run on a new NF

A Formally Verified NAT Stack, p. 9

There are two key points to make here:

  1. Formal verification is squarely analytic. We would need to decompose the problem upfront, and then use one-off manual processes to create code that is ‘eternally’ correct. This does not comport with the continuous, synthetic style of verification that has become so successful.
  2. To pull off the formal verification, a lot of manual effort goes into proving the data structures, and after the proving, the components become ossified, unchangeable. The code is perfect so long as it doesn’t need to change. It’s correct until we need to add new features – and then we have to do the whole “theorem proving” part again. That part is very hard.

The idea that we can reuse data structures without changing them is naïve. Write once, run forever? The real world doesn’t work that way. Code needs to breathe, and the air it breathes comes through the winds of change. (OK, maybe that’s a little too much metaphor.)

In other words, formal verification does not scale with systems of software engineering. It violates the principle that makes synthetic verification so effective: become part of the system. Formal proof lives outside the code, outside the system – it does not “take advantage” of the medium of code.

What to make of formal verification then?

Systems built using the medium of code need programmable proof that lives and executes with the code. Tests are easy to implement as part of the system, and so they have become the primary method of demonstrating “correctness”. Software is synthetic and therefore lends itself to synthetic proof. 
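
To make the contrast concrete, here is a minimal sketch of what “programmable proof” can look like: a property-based test (using the Hypothesis library) that states an invariant of a hypothetical merge() helper and re-checks it on every change, rather than proving it once and setting it in stone. The helper and the property are illustrative, not taken from the NAT paper.

```python
# A hedged sketch: the merge() helper and its invariant are hypothetical.
# The point is that the "proof" is code that runs alongside the code.
from hypothesis import given, strategies as st

def merge(xs, ys):
    """Merge two sorted lists into one sorted list (the code under test)."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]

@given(st.lists(st.integers()), st.lists(st.integers()))
def test_merge_is_a_sorted_permutation(xs, ys):
    # The invariant is re-checked against generated examples on every run.
    assert merge(sorted(xs), sorted(ys)) == sorted(xs + ys)
```

When the data structure changes, the property simply runs again in CI alongside every other test – the verification evolves with the system instead of ossifying outside it.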

But as Bentley suggests in Programming Pearls, formal proofs may never become organizationally useful in software, but they have improved and evolved our understanding of how to write good code. For learning about algorithms and expanding our collective understanding of computer science, formal proofs have value. So yes, there is a place for formal proof in software, just not in the software industry.

Collaboration Calculus

Two of my favourite books show us the yin and yang of success – the individual and the team.


Dan Pink’s Drive describes three key areas of motivation and personal growth:

  • Autonomy: The desire to be self-directed
  • Mastery: The urge to keep getting better at what we do
  • Purpose: The need to work with meaning

Meanwhile, in David Marquet’s Turn the Ship Around we find a set of values for empowered teams:

  • Autonomy: Give people control over their decisions
  • Alignment: Provide the information people need to make the best decisions
  • Transparency: Ensure people communicate their decisions to everyone around them

Putting these together, you get the calculus of collaboration – two sides of the equation for successful teamwork:

On both sides of the equation, you have autonomy: the essential element for both individuals and organizations. Empowered people are at the core of empowered teams.

Autonomy is the ability for individuals to work independently without micromanagement. But of course, autonomy unhinged does not result in success. “Divide the fire, and you will sooner put it out.” If everyone went off and did their own thing, work would be confusing and unproductive.

So autonomy, the key to success, must be modified on both sides of the equation:

  • At the personal level, the effectiveness of autonomy is amplified by mastery and purpose. The more we develop our mastery and work with purpose, the greater we are able to make use of our autonomy.
  • At the team level, autonomy is tempered by the group’s need for alignment and transparency. The more aligned the team is and the more transparent team members are with one another, the better we are able to apply our autonomy to collective problem solving.

As individuals, when we achieve mastery and purpose, we make the most of our autonomy.

As teammates, when we practice transparency and alignment, we give the best of our autonomy.

Putting Pink and Marquet together, you get the collaboration equation at the centre of dynamic learning organizations. Solve for a: solve for autonomy.

The Seven Forms of “Not Waste”

Lean manufacturing has made a really big deal about waste. There’s muda, mura and muri; you’ve got your Type I and Type II; and of course 7, 8 or maybe 9 forms of it, depending on who you ask. The Inuit words for snow have nothing on the Japanese words for waste.

With such an obsession with waste, you might think Taiichi Ohno had some kind of childhood trauma associated with it. After reading a few books on Lean, I’m looking for waste everywhere, and I’m starting to think that some kind of trauma has been inflicted on me!

By casting out waste in all its forms, they say, we can create a smooth flow of work across our production system. Remove all variance, they say, and only then can a system truly create value.

In highly deterministic systems, this thinking wins arguments. And we desperately want the systems that deliver software (organizational and technical) to be highly deterministic. But our systems are resistant to analysis that grants determinism – they are synthetic. We embrace the uncertainty, adjust for instability, and renounce absolutism.

So instead of trying to come up with deterministic categories of things that are waste, maybe it makes sense to think more about things that are NOT waste. Things like:

  1. Running experiments: It’s really hard to run experiments on an active manufacturing line – that would result in a lot of waste. But fortunately for you, code is (relatively) free. We can start all our work with a hypothesis and validate it. Got it wrong? Try again. The reality is that every increment of value begins with an experiment (whether you call it that or not). But people get confused by this – they think that spending the time to see something running in the system is not value-adding … instead of 2 weeks to see what it looks like, we should spend 4 weeks writing specs? This might seem crazy (because it is), yet people still do it.
  2. Writing code: Sometimes the hardest thing to do is get started. Just write the code! Try it out. See what happens. Run the experiments. In software, arguments don’t win arguments, working code wins arguments. Sometimes we need a gentle reminder that software is code, not meetings and decks and spreadsheets. This is your reminder.
  3. Showing your work: It’s never too early to share your work with others. You don’t need to polish it. Don’t need to clean it up. Don’t need to worry if it looks good yet. Make your work visible, as early as possible. Does showing your work scare you? Bring it up with your manager. Leaders should strive to make your engineering environment ego-free and safe to be wrong. The alternative is to rat-hole for days and turn up later with 500 lines of code in a single pull request – that’s waste. Collaborating with a teammate to get some feedback on your work? That’s value. Get that value.
  4. Estimates: Just kidding! Estimates are waste. Stop doing them. “Velocity” is the biggest trick ever played on Agile. The only valid argument that I know of for having a formal practice of estimating every piece of work is to find out if people understand the work differently. But the estimate is still wrong, and the practice is highly vulnerable to corruption and malice and all kinds of evil. If you make a practice of it, it won’t be long before your estimates are weaponized and someone starts bin packing your milestones. #NoEstimates
  5. Retrospectives: You always have time for a retro. The only valid excuse that I know of for not doing the retro is “I forgot”. I will actually accept that answer. Should we really get together for a whole hour and talk about the past? If you want the future to go well, then yes, you should. To find your team’s cadence, experiment and adjust as necessary.
  6. Demos: It’s not done-done until you do the demo-demo. Demos have so much value for the team, but it’s easy to make excuses and skip them to “give people their time back”. What do you think they will be doing with that hour on Friday afternoon that couldn’t be done half-zoned out at the demo? (Yeah, I said it…) Just do the demo and let people see what you are working on (see #3). It’s also a good opportunity for people to build their presentation and leadership skills, which is also value.
  7. Documentation: This is how code scales. I know it’s true because I read about it in this book from Google. You believe Google, don’t you? Software is a Game of Intents. Do not allow intents to get obfuscated. Do not force the reader to dig for intents. Take special care to document anywhere intents are non-obvious. Leave evidence of intended behavior, especially when it is surprising.

So what exactly is waste in a synthetic system? What are all the value-adding steps?

Well, even what Lean might call “defects” might not be worth calling waste in software. As my friend Lucas Siba likes to say, “Mistakes are just high speed learning opportunities.” An excellent reminder comes from this book:

“It is tempting but incorrect to label anything that goes wrong on a project as waste. Human beings make mistakes. A developer may accidentally push code before running the test suite. Our knowledge is limited. A product manager may write an impractical user story because he or she does not know of some particular limitation. We forget. A developer might forget that adding a new type to the system necessitates modifying a configuration file. Whether we conceptualize these sorts of errors as waste is a matter of opinion, but focusing on them is unhelpful because they are often unpredictable.”

Rethinking Productivity in Software Engineering, p. 225

It might have to do with the non-linear structures of synthetic systems. It might have to do with the difficulty of expressing creative work in the form of an equation. (We are not machines!) Models of complex systems are difficult to use for predicting the future: “Past performance does not predict future results,” and so on. No one has come close to defining an ergodic system for measuring software development, and so it’s hard to put labels like “waste” or even “value” on things.

Remember, once upon a time Yahoo banned remote work altogether because … waste (what are they doing now, by the way?). Meanwhile, there are many highly successful organizations, like Sonatype, that are 100% remote. What looks like waste to some is value to others.

So my recommendation is this: don’t focus on trying to eliminate waste, try to find the good in everything. Capitalize on the hidden value, wherever it is. And have fun doing it.

Synthetic Management

Key takeaways:

  1. The synthetic nature of programming has driven the adoption of experiential and empirical development practices, like TDD and continuous integration
  2. Synthetic management approaches like Agile have emerged to support the practices used to build software systems
  3. Synthetic management contrasts with the traditional “analytic” approaches used to run a business.
  4. The tension between the analytic and synthetic sides of the business creates new constraints to the flow of value.
  5. The pendulum is swinging away from managing software like the business, and towards managing the business like software.

Software needs a management model that supports the synthetic nature of programming. To this end, a variety of management practices have emerged to couple with the practices used to build software systems. I like to call this Synthetic Management: capitalizing on the experiences used to produce repeatable value from creative work.

We know this dragon well. It has come in many forms over the years. Big Ball of Mud, The Mythical Man-Month, Worse is Better, the Agile Manifesto; each of these was a step forward in explaining the fundamental truth of our work – that synthetic work needs synthetic management.

Unfortunately, it goes against the patterns that gave us digital technology:

  • The earliest days of programming forced programmers to use analytic patterns to manage how they solved problems. The first computers were so expensive and inaccessible that programmers had to work through their algorithms with pencil and paper before trying anything out (think, Knuth’s Art of Computer Programming). The practice of running experiments to see what works was simply not an option.
  • In those days, programmers had to do the analysis needed to get as close to a solution as possible before committing it to code. And managers had to do the analysis needed to ensure that their incredibly constrained resources were used most efficiently. The pattern of working that developed involved a lot of planning and up-front design. That pattern persists to this day, despite being irrational in the world of cheap computing power.

Synthetic thinking asks us to set aside those traditional roots, leave behind the historical memory of how to solve problems. Instead of formal proofs, we prove things by seeing them work – inside the context of the systems they are intended to work inside.

Management practices like Agile and DevOps de-prioritize the classic analytic approach (think, Project Management). We understand that analysis can only take us so far before we reach the edge of understanding, absent emergent knowledge, collective learning and systemic properties.


Agile practices are predictably useful because in software development there is no substitute for experience. We use tests and demos, pair programming and fast feedback, customer interactions and user research. These are all designed to get our work, as quickly as possible, into the “experiencing” part of the process.

Our systems, environments, organizations, and markets are constantly changing, and our teams need to be equipped with the same responsive and adaptive capacities that we expect our systems to have.

  • To manage software systems we adopt flexible approaches that allow us to experience and learn. The inspecting and adapting of Scrum; the sensing and responding of Cynefin; the observing and orienting of OODA – these practices all embrace the non-linear nature of our systems, and the synthetic approach to building understanding within them.
  • We have expanded our definition of what a software system is to include the team that builds and runs it. This brings together the technical, organizational and cultural elements into a single end-to-end value delivery process. We now recognize that a software system is a symmathesy in which learning is done through emergent networks of people, processes and code. We recognize that synthetic work necessitates a diverse, collectively learning system to get fullness from experiential knowledge and subjective meaning.

To this end, I see our industry struggling to define and refine what we should correctly call Synthetic Management approaches.

These approaches allow our teams to close the gap between the act of creation and the moment of experience, connecting people to their work and to the contexts they inhabit. The next challenge, then, is bringing our work into the orbit of a powerful force that we often call, simply, “the business”.

New Kinds of Constraints

Our fundamental truth is contradicted by the need for business to be stable and analytic, calculating and certain. The business cares about money and metrics, contracts and deadlines. To deliver value to customers, we need to continuously interface with this contrary, yet complementary, set of intentions.


The tension between the synthetic nature of software and the analytic needs of the business creates constraints that put pressure on our work.

These constraints can be either valuable or harmful, depending on how we design our organizational system. They demand trade-offs, sacrifices and debts, which can be managed deliberately, or left ad-hoc and made cumbersome. Note that it’s not the tension itself, but rather the mismanagement of it, that creates problems. The differences are huge:

  • When organizational systems are not optimized to transfer knowledge between these two contexts, it becomes a painful exercise of extraction. We have to spend time in work that provides the analytic side with confidence, but the synthetic side with nothing. We force our work inside opaque reporting abstractions that do not map to our work structures. We use patterns that inadvertently put developers under constant pressure to compromise on their synthetic values.
  • On the other hand, successful organizations actively facilitate the flow between the analytic and synthetic contexts. Interfaces are created for the business to gather information; social learning is used to route around organizational hierarchies; abstractions like OKRs provide direction without disrupting execution. We optimize the organizational system to create feedback loops, allowing us to share our learning and diffuse knowledge easily and appropriately.

The ways that we manage constraints across this Janus-face of business are critical to how we build dynamic learning organizations, which depend on a balanced flow of information and knowledge. But with an understanding of the analytic-synthetic dichotomy in hand, we can think more deeply about how to be effective.

While there is more investigation to be done around how to manage these constraints to flow across the intellectual boundaries of the business, new ways of working have already emerged to help us.

New Ways of Working

In many ways, computers have changed the world’s understanding of what we consider to be “verifiable information”. Code provides us with new ways to discover truths about the world by opening up a synthetic approach to solving problems that were previously only the domain of analysis.

The synthetic management practices born in software have opened up the frontier of ways to manage a business. We now see greater openness to different ideas, and the digital world demands it: we have at the center of work a complex system that demands looking at how we can push new ways of working right across the business.

The pendulum is swinging away from managing software like we manage the business, and towards managing the business like we manage software.

Significant momentum is growing to soften the analytic side of the business, aligning it in a way that interfaces more easily with synthetic thinking.

  • Beyond Budgeting moves us past the constraints of quarterly budgets, Design Thinking popularizes empirical techniques for discovery and experiential learning, Business Agility seeks to align business operations with the nature of complex adaptive systems, and the Teal movement is gaining traction as an alternative way to manage a business as a self-organizing, collective-learning system.

If we create organizations in which we can harness the power of emergent knowledge – if we are successful in designing a synthetic management system for our business – we are rewarded. But if we spend too much time lost in the paralysis of analysis, we fail.

As we break the management moulds to deliver software products, we get the opportunity to break them somewhere else. With software, we have demonstrated success with practices that do not fit the analytic style. Will they work elsewhere? There’s only one way to find out: go and see.


Portions of this article were originally published on InfoQ on Jan 6th, 2020

What does it mean to be “Synthetic”?

Some of the responses to Software is Synthetic included surprise at how the word is used. So what does it mean to be “synthetic”?

1: Classic

Firstly, we should understand “synthetic” in the classical sense, in how we come to know our world.

In this sense, it is a mode of inquiry, a method of solving problems, a way of understanding that uses experience, testing, and experimentation to gather knowledge.

This is how the word was used by the philosophical giants – Kant, Hegel, and Wittgenstein. They put it in opposition to “analytic”. To be analytic is to work with what is knowable independent of experience, which is the basis of logic, mathematics, and philosophy. To be synthetic is to know through experience, to gather information from the world, to synthesize understanding from what we learn.

This is also how the word was used by the early programmers in the 1940s and ’50s when they discovered the medium of code and began using an experiential approach to solve math problems. “Synthetic mathematics,” they called it, playing with algorithms to test hypotheses and generate proofs from the results of experiments.

Western society was, for centuries, firmly devoted to the analytic approach, the presumption being that if we can root out all the “formulae” that underlie the world, we can understand it completely. To that end, analytic thinking favors in-depth planning and formal proofs, and it dominated the culture of technology long before the invention of computers.

But modern software development has reversed the presumption, and Agile methods have emerged as a practice of synthetic thinking, a commitment to the value of experiencing our work.

When we solve problems with software, we primarily manage our work using this approach: if we want to find out if something will work, we need to see it, we need to experience it – this is how we come to know and understand what we are creating.

2: Modern

Here we have the standard usage of the word today – the result of synthesis, the process of bringing together parts to form a unique whole, creating something new. In this sense, to be “synthetic” is to be a compound of simple elements, constructed intentionally, built for a purpose.

On the one hand you might think materials, plastics or petrochemicals, and yes, to be synthetic connotes something artificial, not occurring naturally.

On the other hand, also think of systems of all kinds: mechanical systems like engines or factories, or social systems like governments and businesses.

Either way, these “things” are the result of synthesis, bringing together discrete parts that unlock capabilities when they work together. Our lives are deeply entwined with them, we work with them, we are a part of them, they shape our lives.

But understanding them? Figuring out how to use them or how to change them? That’s another story.

Systems Thinking has sprung up in the last century as an effort to understand the dynamics of the phenomena in which we live and work. The discipline studies how systems change, what forces operate on them, and why they change in unexpected ways.

The big idea is: systems are composed of many parts, but those parts do not define them, and, most importantly, they cannot be fully predicted by an analysis of those parts. To know a system, we need to look not at its parts but rather at what it is a part of. To learn how to change a system, we return to the classic definition of our word: we need to use the synthetic approach to problem-solving.

What makes software unique is that our materials are also synthetic. In other words, the parts that our systems are composed of are themselves artificial, unlike systems of the world, which rest on the stable structures of reality. In many of its forms, modern software development is able to detach from and ignore the laws of physics that govern the hardware it runs on. This means that we are not bound by the same kinds of constraints that guide creation in the physical world.

In software, we assemble our forces, create our atomicities, and build upon these invented dynamics. Our systems are malleable, their properties easily changeable, emerging and decaying. I have often heard experienced software engineers telling their product managers, “It’s just software, we can make the system do anything you want.” We are not bound by ‘rules’ other than the ones we create for ourselves. Don’t like ’em? Create new rules. They are only a pull request away.

3: Digital 

Finally, software is synthetic in the post-modern sense, in terms of our context: human, all too human. We should not see people as independent of technology. We should understand our systems as cybernetic – part social, part technical, a symbiosis of interconnected forces.

The boundaries between code, infrastructure, tools, people, and organizations are blurred and constantly shifting. Different parts of the system need to be learning from each other, trying to build off the properties that emerge as boundaries shift and interactions create new possibilities.

Here is where the “create new rules” part of software becomes interesting. We created many of our rules through collective decisions. Take for example our core networking protocols like TCP or DNS. These have grown through the RFC process and have not changed much in the last 20 years. Some of these protocols are despised and antiquated, arcane and unworkable, and yes, nearly impossible to change.

We can change these rules insofar as we can influence the social part of our systems to use them. Despite the emergence of newer and arguably better networking protocols, we can’t use them unless everyone is using them, and adoption is very, very hard. So here we are, our cybernetic constraints become the limits to our synthetic systems. We have become locked into using relatively ancient technology.

This is also why many of our management practices in software are now focused on how to create software systems that can be successfully (joyfully!) maintained for years.

In sum

The synthetic nature of software dominates it.

Every component is developed in a unique context that shapes its existence in its system, an intentional existence, tied to a particular mode of understanding.

Every change to a component must be evaluated by experiencing it as part of a system to understand the “why” behind it.

The development of modern software is a practice of navigating through this complex web of whys, weaving them together into systems that solve our very human problems, subject to all the limitations that we would expect to find in our very human existence.

Software is Synthetic

This article was originally published in full at InfoQ on Jan 6th, 2020

Key Takeaways

  1. “Synthetic” means to find truth through experience, to use experiential practices to build understanding.
  2. Software systems demand synthetic problem solving – we test, observe, and experiment to validate our work and ascertain its value.
  3. The history of building software systems is a history of discovering and formalizing synthetic ways of working into practices that guide our process of creation.

A new kind of proof

Long before machine learning algorithms beat the world champions of Go, before searching algorithms beat the grandmasters of Chess, computers took their first intellectual victory against the human mind when they were used to solve the Map Problem.

The solution was the first time a significant achievement in the world of pure mathematics was found using a computer. This startled the world, upending a sacred belief in the supremacy of the human mind, while revealing a new pattern of digital problem solving.

The Map Problem asks how many colours are needed to colour a map so that regions with a common boundary will have different colours.

Simple to explain, easy to visually verify, but difficult to formally prove, the Map Problem captured the imagination of the world in many ways. At first, for the elegant simplicity of its statement, and then, for over a hundred years in the elusiveness of its solution. Finally, for the brutal yet novel approach used to produce the result.

The Map Problem is foundational in graph theory, since a map can be turned into a planar graph that we can analyze. To solve the problem, a discharging method can be used to find an unavoidable set of configurations – at least one of which must appear in any planar map – and each of those configurations can then be tested for reducibility, to see how many colours are actually needed.

If we are able to test every configuration in the unavoidable set and find that only four colours are ever required, then we will have the answer. But the act of reducing and testing configurations by hand is extremely laborious, perhaps even barbaric. When the Map Problem was first proposed in 1852, this approach could not possibly be considered.

Of course, as computing power grew, a number of people caught on to the fact that the problem was ripe for exploitation. We can create algorithms that have the computer do the work for us – we can programmatically see if any counterexample exists where a map requires more than four colours. In the end, that’s exactly what happened. And so we celebrated: four colours suffice! Problem solved?
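
If you want to experience a tiny slice of that approach yourself, a minimal sketch might look like the following: an exhaustive, brute-force check that a small planar graph can be coloured with four colours but not three. The graph is a hypothetical example, nowhere near the scale or rigour of the actual Appel–Haken proof, but it shows the “let the machine check every case” style of demonstration.

```python
# A hedged sketch, not the real proof: exhaustively test whether a small
# planar graph admits a proper k-colouring by trying every assignment.
from itertools import product

def colourable(edges, nodes, k=4):
    """Return True if the graph has a proper k-colouring (exhaustive search)."""
    for assignment in product(range(k), repeat=len(nodes)):
        colour = dict(zip(nodes, assignment))
        if all(colour[a] != colour[b] for a, b in edges):
            return True
    return False

# Example graph: a wheel (a hub joined to a 5-cycle), which needs 4 colours.
nodes = ["hub", "a", "b", "c", "d", "e"]
edges = [("hub", n) for n in "abcde"] + [
    ("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")]

print(colourable(edges, nodes, k=3))  # False: three colours are not enough
print(colourable(edges, nodes, k=4))  # True: four colours suffice
```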

These are not the proofs you are looking for

Mathematics is analytical, logical. When examining a proof, we want to be able to work through the solution. With the Map Problem, mathematicians desired just such a proof, something beautiful, something that reveals a structure to explain the behavior of the system, something that gives an insight into the inner workings of the problem.

When the solution was presented at the 1976 American Mathematical Society summer meeting, the packed room was nonplussed. They did not get something beautiful – they instead got printouts. Hundreds and hundreds of pages of printouts, the results of thousands of tests.

Was this even a proof? The conference was divided. Critics said the solution was not a proof at all, but rather a “demonstration”. They said it was an affront to the discipline of mathematics to use the scientific method rather than pure logic. Many were unprepared to relinquish their life’s work to a machine.

The anxiety over the solution to the Map Problem highlights a basic dichotomy in how we go about solving problems: the difference between analytic and synthetic approaches.

The solution to the Four Color Theorem was not analytic: it was synthetic – it arrived at truth by observation, exhaustively working through all the discrete cases. It was an empirically-derived answer that used the capability of computers to work through every possibility. Analysis was used to prove the reducibility of the graphs, but the solution was produced from the programmatic observation of every unavoidable configuration: encoded experiences, artificial as they may be. There is no proof in the traditional sense – if you want to check the work, you need to check the code.

But this is exactly how we solve all problems in code. We use a dual-synthetic approach of creation and verification: we synthesize our code, and then prove it synthetically. We leverage the unique capabilities afforded by the medium of code to test, observe, experience, and “demonstrate” our results as fast as possible. This is software.

Software is synthetic

Look across the open plan landscape of any modern software delivery organization and you will find signs of it, this way of thinking that contrasts sharply with the analytic roots of technology.

Near the groves of standing desks, across from a pool of information radiators, you might see our treasured artifacts – a J-curve, a layered pyramid, a bisected board – set alongside inscriptions of productive principles. These are reminders of agile training past, testaments to the teams that still pay homage to the provided materials, having decided them worthy and made them their own.

What makes these new ways of working so successful in software delivery? The answer lies in this fundamental yet uncelebrated truth – that software is synthetic.

Software systems are creative compounds, emergent and generative; the product of elaborate interactions between people and technology. These are not the orderly, analytic worlds that our school-age selves expected to find. Full of complexity and uncertainty, we have to use a different way to determine whether a solution is “correct”.

How “work” works in these synthetic systems is different. We use different approaches, ones that control the “synthetic-ness” and use it to our advantage. When we learn to see this – when we shift our mental models about what the path to certainty looks like – we start to make better decisions about how to organize ourselves, what practices to use, and how to be effective with them.

We are accustomed to working on systems that yield to analysis, and where we find determinism in the physical world, we can get real value from deep examination and formal verification. Since we can’t use a pull request to change the laws of physics (and a newer, better version of gravity will not be released mid-cycle), it’s worth spending the time to plan, analyze, and think things through. Not to mention the significant barriers to entry – there is a cost to building things with physical materials that demands certainty before committing.

Software, being artificial, offers a unique ability to get repeatable value from the uncertainty of open-ended synthesis, but only if we are willing to abandon our compulsion to analyze.

There is no other medium in the world with such a direct and accessible capability for generating value from emergent experiences. In code we weave together ideas and intentions to solve our very real problems, learning and uncovering possibilities with each thread, allowing knowledge to emerge and pull us in different directions.

The synthetic way of working shapes the daily life of development, starting with the most basic action of writing code and moving all the way up into the management of our organizations. To understand this, let’s work our way through the process.

Will it work?

“Most programmers think that with the above description in hand, writing the code is easy. They’re wrong. The only way you’ll believe this is by putting down this column right now and writing the code yourself. Try it.”

on “The Challenge of Binary Search” – Programming Pearls, p. 61
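
Bentley’s challenge is worth taking literally. Here is a hedged sketch of the exercise: write the search yourself, then let the machine check it against the obvious linear search over thousands of random inputs – proving the code by running it rather than by reasoning about it on paper.

```python
# A minimal sketch of Bentley's exercise: binary search plus a brute-force
# oracle check, the "go and see" form of proof this article argues for.
import random

def binary_search(items, target):
    """Return an index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        elif items[mid] > target:
            hi = mid - 1
        else:
            return mid
    return -1

# Compare against the obvious membership test on many random inputs.
for _ in range(10_000):
    xs = sorted(random.sample(range(100), random.randint(0, 20)))
    t = random.randrange(100)
    found = binary_search(xs, t)
    assert (found == -1 and t not in xs) or xs[found] == t
```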

Acts of creation are processes of synthesis, and software is primarily a creative act. It is the practice of bringing together millions of discrete elements to create something new and different. In the process of combining these elements we have the opportunity to make something with both beauty and function.

We start with a set of discrete primitives, elements, and structures. We use structure and flow to bring them together into algorithms. In this coupling we produce new elements, discrete in their own right, gathering new properties and capabilities. Arranging algorithms, we create runtimes and libraries, scripts and executables. The joining of these brings forth systems – programs and applications, infrastructure and environments. From these we make products, give birth to businesses, create categories, and build entire industries.

Though our systems are cobbled together from the complex synthesis of millions of tiny pieces, they hang together under the gravity of people and their intentions. “As a user …” But those intentions may come into conflict, and the conflicts only become visible when we put things together.

Our understanding of the systems we are building emerges and changes throughout the process of creation (this is the root of much technical debt, by the way). We accept that we cannot truly know our system until we see it: we learn from the understanding that rises bit by bit, commit by commit, as we watch our ideas coming to life. “Will it work?”

To answer this question, we write the code, we run it, we test it: we go and see.

This process of creation demands verification through experience. This is how we find the truth of our synthesis. With computers we can do this continuously, we can automate it. In the medium of code, the “experiencing” is cheap and readily available to us. Validating our assumptions is as simple as running the code. This recursive recourse to experience is the fundamental organizing principle in our process of creation.

In synthetic systems, the first thing we need to do is to experience our work, to learn from our synthesis. As we give instructions to the system, we need to give the system the ability to teach us. The greater the uncertainty of our synthesis, the more we have to learn from our experiences.

So we optimize our daily routines for this kind of learning. Test driven development, fast feedback, the test pyramid, pair programming; these are all cornerstones of good software development because they get us closer to experiencing the changes we make, in different ways, within the context of the whole system. Any practice that contributes to or adds dimensions to how we experience our changes will find itself in good company here.

Does it still work?

As a system grows to include large numbers of components and contributors, its dynamics become increasingly unpredictable. When we make a change in a complex system there is no amount of analysis that will allow us to determine with certainty whether the change will work exactly as intended.

In complex systems we are always walking at the edge of our understanding. We are working our way through a complicated networked system, a web of converging intentions, and changes to one part routinely have unintended consequences elsewhere. Not only do we need to test our changes, but we need to know if we broke anything else. “Does it still work?”

To answer this question, we push the synthetic approach across the delivery lifecycle and into adjacent domains.

We externalize our core workflow, using support systems to validate and maintain the fidelity of our intentions as our work moves through the value stream. We create proxies for our experience – systems that “do the experiencing” for us and report back the results. We use continuous integration to verify our intentions and get a shared view of our code running as a change to the entire system. We use continuous delivery to push our changes into production-like, source-controlled environments. And we use our pipelines to maintain constancy and provide fast feedback as our work moves into progressively larger boundaries of the system.
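
As a minimal sketch of such a proxy (the stage names and commands here are hypothetical, not a prescription), a pipeline can start as nothing more than a script that runs each verification stage in order and stops at the first failure:

```python
# A hedged sketch of a "proxy for experience": run each verification stage
# in order and stop at the first failure. Stage commands are hypothetical.
import subprocess
import sys

STAGES = [
    ("unit tests",        ["pytest", "tests/unit"]),
    ("integration tests", ["pytest", "tests/integration"]),
    ("build image",       ["docker", "build", "-t", "app:candidate", "."]),
]

for name, cmd in STAGES:
    print(f"--- {name}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{name} failed: the change does not yet 'still work'")

print("all stages green: the pipeline has experienced the change for us")
```

Real pipelines live in CI systems rather than local scripts, but the shape is the same: the pipeline does the experiencing so that every contributor doesn’t have to.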

The last decade has seen an ever-expanding set of these practices that externalize and proxy our individual experiences: test automation, repeatable builds, continuous integration, continuous delivery, and so on. We put many of these under the banner of DevOps, which we can understand simply as synthetic thinking working its way down the value chain. The concept has proven itself so useful that our industry has developed a compulsion to extend the concept further and further: DevSecOps, DevTestOps, BizDevOps, NetDevOps, etc.

And it makes sense why we would want to create these new ontologies, these combined categories of understanding. We want to build a single view of what it means to be “done” or to “test” or to “create value”, one that combines the different concepts that our disciplines use to experience the world. These are examples of our industry looking to extend what is intuitively useful: the capability to apply synthetic thinking across the software value stream.

Why doesn’t it work?

Once we have our software running in production, we need to extend our synthetic way of working one more time – into the observability of our system. Our primary goal is to build systems that are designed to discover problems early and often so we can learn and improve.

As our system grows, services grind together like poorly fit gears. Intentions conflict, assumptions unravel, and the system begins to operate in increasingly unexpected ways. Rather than spending intellectual capital trying to predict all possible failure modes, we have learned to use practices that allow us to see deeply into the system, detect anomalies, run experiments, and respond to failure.

Observability is required when it becomes difficult to predict the behavior of a system and how users will be impacted by changes – in other words, when we need to ensure that our interactions with system phenomena align with customer experiences.

Making the system observable involves a practice of combining context, information, and specific knowledge about the system to create the conditions for understanding. The investment we make is in our capability to integrate knowledge – the collection of logs, metrics, events, traces and so on, correlated to their semantics and intent. We approach these problems empirically too, using experiments to discover, hypothesize and validate our intuitions.
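
A minimal sketch of what “correlated to their semantics and intent” can mean in practice (the field names here are hypothetical, not a standard schema): emit structured events that carry the why alongside the what, so that correlating them later is cheap.

```python
# A hedged sketch: structured events that record intent alongside observation.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

def emit(event, **fields):
    """Emit one structured event as a JSON log line."""
    logging.info(json.dumps({"event": event, "ts": time.time(), **fields}))

# The intent ("why") travels with the observation ("what"):
emit("payment.retry",
     trace_id="abc123",
     attempt=3,
     reason="gateway returned 503; retrying per idempotent-payment policy")
```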

Planning for failure involves an equally synthetic set of practices. The field of resilience engineering has been gaining momentum in the software industry, asking, how can we build a system that allows us to cope with surprise and learn from our environment? We can build our systems to allow emergent behavior to teach us, while we build our organizations with the adaptive capacity to respond and change.

Chaos engineering also embraces the experimental approach – rather than trying to decompose the system and decipher all its faults, let’s get our hands dirty. Let’s inject faults into the system and watch how it fails. Inspect, adapt, repeat. This is an experience-driven approach that has seen a lot of success building scaled software systems.
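
In that spirit, and only as a toy sketch (not any particular chaos tool’s API), fault injection can start as small as wrapping a hypothetical downstream call, making it fail some fraction of the time, and watching how the caller copes:

```python
# A hedged sketch of fault injection: a decorator that makes a call fail
# randomly so we can observe how the rest of the system degrades.
import random

def flaky(probability):
    """Decorator that injects a failure with the given probability."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < probability:
                raise TimeoutError("injected fault: simulated downstream timeout")
            return fn(*args, **kwargs)
        return inner
    return wrap

@flaky(probability=0.2)
def fetch_profile(user_id):
    return {"user": user_id, "plan": "basic"}  # stand-in for a real service call

# Run the experiment and observe: does the caller degrade gracefully?
for attempt in range(10):
    try:
        print(fetch_profile(42))
    except TimeoutError as err:
        print(f"attempt {attempt}: {err}")
```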

With operational practices like these, we embrace the fact that our systems are complex and our work will always carry dark debt, no matter how good the code coverage is. Our best recourse is to give ourselves the ability to react quickly to unexpected events or cascading changes.


The solution to the Map Problem was impactful because it changed the world’s understanding of what we consider to be “verifiable information”. It showed us new ways to discover things about the world, and it did so by using the newly emerging synthetic approach to solving problems in the medium of code. But it also demanded that a generation of mathematicians update their understanding of what it means to “solve” a problem.

In software engineering, the separation has been less explicit. We have, unknowingly, been slowly shifting our model of how to solve problems towards the synthetic. To understand why we have been successful with these approaches, we need to be explicit about the model we are using, and that is why recognizing the synthetic nature of software will be so important to the successful management of software engineering.

Hello World

I threw this site together because I write about a lot of different things and want a place to put them. There are really two motivations:

  • Goal #1: stay motivated to write. If I just write in a notebook, I don’t get the feeling of finishing anything. All the stuff in my notebook is “in progress”. I need a way to say “I’m done”. Even if I come back to edit it later, there is a psychological shift that you make when something physically moves from in progress to done. I want to take advantage of that.
  • Goal #2: find like-minded people. If I don’t put stuff out there, I won’t get any feedback, I won’t be able to take advantage of the network effects of knowledge systems. This is something I believe in, so I need to practice what I preach and start writing in a public place where that can happen.

So with that said… Hello world!

Picture credit to my co-worker Lucas Siba