Tagged: Continuous Architecture

  • pierrepureur 10:44 pm on March 25, 2016 Permalink | Reply
    Tags: Continuous Architecture

    Delaying Decisions in Continuous Architecture 

    Delaying Design Decisions Yields Better Results!

    In the previous installment of our “Continuous Architecture” blog as well as in our book, we discussed how to effectively capture requirements.

    The second Principle of Continuous Architecture guides us to make architectural and design decisions that satisfy quality attribute requirements, rather than focusing exclusively on functional requirements. Functional requirements change frequently, especially if we are architecting one of those “Systems of Engagement” delivered over a mobile device, whose user interface is likely to change often in response to changing customer needs, competitive responses, and ever-evolving mobile technology.

    Even quality attribute requirements are subject to change, or at least to underestimation. When a mobile or web application’s popularity goes “viral,” even the most carefully crafted applications can collapse under the unexpected load. In addition, performance and availability targets may be described only vaguely, as Service Level Agreements (SLAs) and Service Level Objectives (SLOs) are not always clear. A common practice is to err on the side of conservatism when stating those objectives, which may result in unrealistic requirements.

    We recommend making design decisions based on known facts – not guesses. In our book and elsewhere in this blog, we describe how to leverage a Six-Sigma technique (Quality Function Deployment or QFD for short) to make sound architecture and design decisions. One of the advantages of the QFD process is that it encourages architects to document the rationale behind architecture and design decisions and to base decisions on facts, not fiction.
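    To make this concrete, here is a minimal sketch of a QFD-style prioritization matrix in Python: candidate design options are scored against weighted quality attribute requirements, and the scores become part of the documented rationale for the decision. It is a deliberate simplification of the full QFD process, and the attributes, weights, and candidate options shown are invented purely for illustration.

    ```python
    # Minimal sketch of a QFD-style prioritization matrix.
    # The quality attributes, weights, and candidate options below are
    # illustrative assumptions, not values taken from the book.

    quality_attributes = {          # attribute -> stakeholder weight (1-5)
        "performance": 5,
        "availability": 4,
        "modifiability": 3,
        "cost": 2,
    }

    # How well each candidate design option satisfies each attribute (1-9 scale,
    # higher = better satisfaction; for "cost", a high score means low cost).
    options = {
        "in-memory cache layer": {"performance": 9, "availability": 3, "modifiability": 5, "cost": 3},
        "read replica database": {"performance": 5, "availability": 9, "modifiability": 3, "cost": 5},
        "CDN for static assets": {"performance": 7, "availability": 7, "modifiability": 7, "cost": 7},
    }

    def weighted_score(scores: dict) -> int:
        """Sum of (attribute weight x option score) across all quality attributes."""
        return sum(quality_attributes[attr] * scores[attr] for attr in quality_attributes)

    # Rank the options and print the scores, so the decision rationale is recorded as facts.
    for name, scores in sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
        print(f"{name}: {weighted_score(scores)}")
    ```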

    An interesting parallel can be made between the principle of delaying design decisions and simulated annealing, a probabilistic technique used for solving optimization problems.

    Image processing is one area where this technique is used. Basically, if you want to clean up a noisy image, instead of applying a deterministic model you iterate over the image thousands of times. At each iteration you make a decision for a particular pixel based on the values of its neighbors. The interesting part is that in the early iterations you allow a high degree of uncertainty in that decision – i.e. you make a probabilistic guess. As the iterations progress, you restrict the uncertainty of the probabilistic jump. The image cools down, just like steel cooling down when you anneal it – hence the term simulated annealing.

    How does this relate to architectural decisions? Simply put, if you cool down the image too quickly it cracks – just like steel cracking if you cool it down too quickly. The same concept applies to architectural decisions: if you make too many decisions early in the process, your architecture will fracture.
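    The cooling idea is easy to see in code. Below is a generic simulated annealing sketch in Python – not the image-denoising algorithm itself – with a toy cost function, neighbor move, and cooling rate chosen purely for illustration. Early on, the high temperature lets the search accept many “wrong” moves; as the temperature drops, its decisions become firm.

    ```python
    import math
    import random

    def simulated_annealing(cost, start, neighbor, t_start=10.0, t_end=0.01, cooling=0.995):
        """Generic simulated annealing loop: high temperature -> many probabilistic
        'guesses'; low temperature -> decisions become firm."""
        current, temperature = start, t_start
        while temperature > t_end:
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            # Always accept better moves; accept worse moves with a probability
            # that shrinks as the temperature (uncertainty) decreases.
            if delta < 0 or random.random() < math.exp(-delta / temperature):
                current = candidate
            # Cooling too fast "cracks" the result: the search gets stuck early.
            temperature *= cooling
        return current

    # Toy usage: find the minimum of a bumpy one-dimensional function.
    f = lambda x: x * x + 10 * math.sin(x)
    best = simulated_annealing(f, start=5.0, neighbor=lambda x: x + random.uniform(-1, 1))
    print(best, f(best))
    ```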

    Below is a picture of a simulated annealing example, showing the results of two parameter sets used in the algorithm (bottom left and bottom right quadrants).

    [Figure: CA Figure 2-5]

    As we move away from waterfall application development lifecycles involving large requirements documents and toward the rapid delivery of a viable product, we need to create Minimum Viable Architectures (MVAs) to support that delivery. This concept is discussed in more detail in Chapter 7 of our book and in one of our blog entries.

    To that end, limiting the budget spent on architecting is a good thing; it forces the team to think in terms of a Minimum Viable Architecture that starts small and is expanded only when absolutely necessary. Too often, a team will solve problems that don’t exist, and yet fail to anticipate a crippling challenge that kills the application. Getting to an executable architecture quickly, and then evolving it, is essential for modern applications.

     

    Please check our blog at https://pgppgp.wordpress.com/ and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.

     
  • pierrepureur 12:11 am on February 18, 2016 Permalink | Reply
    Tags: Continuous Architecture

    Requirements in Continuous Architecture: Let’s Clarify! 


    Clarifying Requirements is Important

    In the previous installment of our “Continuous Architecture” blog as well as in our book, we discussed an effective approach for capturing requirements. We suggested that thinking in terms of testable hypotheses instead of traditional requirements enables teams following the Continuous Architecture approach to quickly deliver systems that evolve in response to user feedback and meet or even exceed users’ expectations. But are we sure that we always understand what the system stakeholders want, even when they are able to precisely quantify those requirements?

    Philippe Kruchten tells the following story about the importance of clarifying requirements. Back in 1992, Philippe was leading the architecture team for the Canadian Automated Air Traffic System (CAATS), and the team had a requirement of “125 ms to process a new position message from the Radar Processing System, from its arrival in the Area Control Center until all displays are up to date”.

    Here is how Philippe tells the story: “After trying very hard to meet the 125 ms for several months, I was hiking one day, looking at a secondary radar slowly rotating (I think it was the one on top of Mount Parke on Mayne Island, just across from Vancouver Airport). I thought… “mmm, there is already a 12-20 second lag in the position of the plane, why would they bother with 125 ms…?”


    Note: primary radar uses the echo of an object. Secondary radar sends a “who are you?” message, and the aircraft responds automatically with its ID and altitude (http://en.wikipedia.org/wiki/Secondary_surveillance_radar). It looks like this: https://www.youtube.com/watch?v=Z0mpzIBWVG

    I knew all this because I am myself a pilot…

    Then I thought: “in 125 ms, how far across the screen can an aircraft move, assuming a supersonic jet… and full magnification?” Some back-of-the-envelope computation gave me… about 1/2 pixel! When I located the author of the requirement, he told me: “mmm, I allocated 15 seconds to the radar itself (rotation), 1 second for the radar processing system, 4 seconds for transmission through various microwave equipment; that left 1 second in the ACC. Breaking this down between all the equipment in there – router, front end, etc. – it left 125 ms for your processing, guys: updating and displaying the position…” These may not have been his exact words as this happened a long time ago, but this was the general line… Before agile methodologies made it a “standard,” it was useful for architects to have direct access to the customer, often on site… and, in this case, to be able to speak the same language (French).”
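    The back-of-the-envelope computation is easy to reproduce. The sketch below uses assumed, illustrative numbers for the aircraft speed and the display scale – they are merely in the right ballpark, not values from the CAATS project – yet it reaches the same conclusion of roughly half a pixel of movement in 125 ms.

    ```python
    # Back-of-the-envelope check of the "1/2 pixel" argument.
    # All figures below are assumed, illustrative values - not CAATS data.

    aircraft_speed_m_s = 340.0   # roughly Mach 1 at altitude (a "supersonic jet")
    latency_s = 0.125            # the 125 ms requirement

    distance_m = aircraft_speed_m_s * latency_s   # ~42.5 m travelled in 125 ms

    # Assume a fully magnified sector view: ~185 km (100 NM) across a ~2000-pixel display.
    sector_width_m = 185_000.0
    display_width_px = 2000
    metres_per_pixel = sector_width_m / display_width_px   # ~92.5 m per pixel

    print(f"Distance covered in 125 ms: {distance_m:.1f} m")
    print(f"Movement on screen: {distance_m / metres_per_pixel:.2f} pixels")  # ~0.46 pixel

    # The latency budget the requirement's author described (in seconds):
    budget = {"radar rotation": 15, "radar processing": 1, "transmission": 4, "ACC total": 1}
    upstream_lag = budget["radar rotation"] + budget["radar processing"] + budget["transmission"]
    print(f"Lag before the message even reaches the ACC: {upstream_lag} s")
    ```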

    Philippe’s story clearly stresses that it is important for an Architect to question everything, and not to assume that requirements as stated are absolute.

     

    Please check our blog at https://pgppgp.wordpress.com/ and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.

     