Delaying Decisions in Continuous Architecture

Delaying Design Decisions Yields Better Results!

In the previous installment of our “Continuous Architecture” blog as well as in our book, we discussed how to effectively capture requirements.

The second Principle of Continuous Architecture provides guidance to make architectural and design decisions that satisfy quality attribute requirements, rather than focusing exclusively on functional requirements. Functional requirements often change, especially if we are architecting one of those “Systems of Engagement” delivered over a mobile device, whose user interface is likely to evolve rapidly in response to changing customer needs, competitive pressure, and ever-evolving mobile technology.

Even quality attribute requirements are subject to change, or at least to underestimation. When a mobile or web application’s popularity goes “viral”, even the most carefully crafted applications can collapse under the unexpected load. In addition, performance and availability targets may be vaguely described, since Service Level Agreements (SLAs) and Service Level Objectives (SLOs) are not always clear. A common practice is to err on the side of conservatism when stating those objectives, which can result in unrealistic requirements.

We recommend making design decisions based on known facts – not guesses. In our book and elsewhere in this blog, we describe how to leverage a Six-Sigma technique (Quality Function Deployment or QFD for short) to make sound architecture and design decisions. One of the advantages of the QFD process is that it encourages architects to document the rationale behind architecture and design decisions and to base decisions on facts, not fiction.
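To illustrate the idea of basing decisions on documented, weighted criteria, here is a minimal sketch of a QFD-style decision matrix in Python. The quality attributes, weights, and candidate options below are invented for the example and are not drawn from the book; the point is simply that the rationale (weights and scores) is recorded alongside the decision.

```python
# A minimal sketch of a QFD-style decision matrix: each architecture option is
# scored against weighted quality attribute requirements, and the rationale
# (weights and scores) stays with the decision.
# The attributes, weights, and options are illustrative assumptions only.

quality_attributes = {          # weight reflects stakeholder priority (1-5)
    "performance":   5,
    "availability":  4,
    "modifiability": 3,
    "cost":          2,
}

options = {                     # how well each option satisfies each attribute (1-9)
    "single relational database": {"performance": 6, "availability": 5, "modifiability": 4, "cost": 8},
    "sharded data store":         {"performance": 8, "availability": 7, "modifiability": 5, "cost": 4},
}

def weighted_score(option_scores):
    """Sum of (stakeholder weight x satisfaction score) across quality attributes."""
    return sum(weight * option_scores[attr] for attr, weight in quality_attributes.items())

# Rank the options; the matrix itself documents why the winner was chosen.
for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores)}")
```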

An interesting parallel can be drawn between the principle of delaying design decisions and simulated annealing, a probabilistic technique for solving optimization problems.

Image processing is one area where this technique is used. To clean up a noisy image, instead of applying a deterministic model, you iterate over the image thousands of times. At each iteration you make a decision for a particular pixel based on the values of its neighbors. The interesting part is that in the early iterations you allow a high degree of uncertainty in that decision – in other words, you make a probabilistic guess. As the iterations progress, you restrict the uncertainty of the probabilistic jump. The image cools down, just like steel cooling down when you anneal it – hence the term simulated annealing.
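For readers who like to see the mechanics, here is a minimal sketch of simulated annealing applied to denoising a binary (0/1) image in Python. The energy function, weights, and cooling schedule are illustrative assumptions rather than a reference implementation; the essential point is the temperature that starts high (allowing uncertain, even “bad”, decisions) and cools slowly.

```python
# A minimal sketch of simulated annealing for denoising a binary image.
# The energy favors pixels that agree with the observed (noisy) value and
# with their four neighbors. Weights and schedule are illustrative only.
import math
import random

def energy_delta(img, noisy, x, y, rows, cols, data_weight=2.0, smooth_weight=1.0):
    """Change in energy if pixel (x, y) is flipped (0 <-> 1)."""
    current, flipped = img[x][y], 1 - img[x][y]
    delta = data_weight * ((flipped != noisy[x][y]) - (current != noisy[x][y]))
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < rows and 0 <= ny < cols:
            delta += smooth_weight * ((flipped != img[nx][ny]) - (current != img[nx][ny]))
    return delta

def denoise(noisy, iterations=200_000, start_temp=4.0, cooling=0.99997):
    """Flip random pixels, accepting 'worsening' flips less and less often."""
    rows, cols = len(noisy), len(noisy[0])
    img = [row[:] for row in noisy]          # start from the noisy image
    temperature = start_temp
    for _ in range(iterations):
        x, y = random.randrange(rows), random.randrange(cols)
        delta = energy_delta(img, noisy, x, y, rows, cols)
        # Early on (high temperature) we accept uncertain, even worsening, flips;
        # as the temperature cools, the decisions become increasingly deterministic.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            img[x][y] = 1 - img[x][y]
        temperature *= cooling   # cool slowly: cooling too fast "cracks" the result
    return img
```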

How does this relate to architectural decisions? Simply put, if you cool down the image too quickly, it cracks – just like steel cracking if you cool it too quickly. The same concept applies to architectural decisions: if you make too many decisions too early in the process, your architecture will fracture.

Below is a picture of a simulated annealing example, showing the results of two parameter sets used in the algorithm (bottom left and bottom right quadrants).

[CA Figure 2-5: simulated annealing example]

As we move away from waterfall application development lifecycles involving large requirements documents and toward the rapid delivery of a viable product, we need to create Minimum Viable Architectures (MVAs) to support the rapid delivery of those products. This concept is discussed in more detail in Chapter 7 of our book and in one of our blog entries.

To that end, limiting the budget spent on architecting is a good thing; it forces the team to think in terms of a Minimum Viable Architecture that starts small and is expanded only when absolutely necessary. Too often, a team will solve problems that don’t exist, yet fail to anticipate a crippling challenge that kills the application. Getting to an executable architecture quickly, and then evolving it, is essential for modern applications.


Please check our blog at https://pgppgp.wordpress.com/ and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.
