Tagged: Enterprise Architecture

  • pierrepureur 10:44 pm on March 25, 2016 Permalink | Reply
    Tags: Enterprise Architecture

    Delaying Decisions in Continuous Architecture 

    Delaying Design Decisions Yields Better Results!

    In the previous installment of our “Continuous Architecture” blog as well as in our book, we discussed how to effectively capture requirements.

    The second Principle of Continuous Architecture guides us to make architectural and design decisions that satisfy quality attribute requirements, rather than focusing exclusively on functional requirements. Functional requirements change frequently, especially if we are architecting one of those “Systems of Engagement” delivered over a mobile device, whose user interface is likely to keep changing in response to changing customer needs, competitive responses, and ever-evolving mobile technology.

    Even quality attribute requirements are subject to change, or at least to underestimation. When a mobile or web application’s popularity goes “viral”, even the most carefully crafted applications can collapse under the unexpected load. In addition, performance and availability targets may be vaguely described, as Service Level Agreements (SLAs) and Service Level Objectives (SLOs) are not always clear. A common practice is to err on the side of conservatism when describing those objectives, which may result in unrealistic requirements.

    We recommend making design decisions based on known facts – not guesses. In our book and elsewhere in this blog, we describe how to leverage a Six-Sigma technique (Quality Function Deployment or QFD for short) to make sound architecture and design decisions. One of the advantages of the QFD process is that it encourages architects to document the rationale behind architecture and design decisions and to base decisions on facts, not fiction.
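
    As an illustration of how a QFD-style evaluation makes that rationale explicit, here is a minimal Python sketch of a weighted decision matrix. The quality attributes, weights, candidate options, and scores are hypothetical placeholders, not taken from the book.

    ```python
    # A QFD-style weighted decision matrix: each architecture option is scored
    # against weighted quality attribute requirements, and the weighted totals
    # make the decision rationale explicit and repeatable.
    # All attributes, weights, options, and scores are illustrative.

    weights = {"performance": 5, "scalability": 4, "maintainability": 3, "cost": 4}

    # Scores of each candidate option per quality attribute (1 = poor, 5 = strong).
    options = {
        "monolith on existing platform": {"performance": 4, "scalability": 2,
                                          "maintainability": 2, "cost": 5},
        "loosely coupled services":      {"performance": 3, "scalability": 5,
                                          "maintainability": 4, "cost": 3},
    }

    def weighted_score(scores: dict) -> int:
        """Sum of (attribute weight x option score) over all quality attributes."""
        return sum(weights[attr] * score for attr, score in scores.items())

    for name, scores in options.items():
        print(f"{name}: {weighted_score(scores)}")

    # Keeping the weights and scores alongside the outcome preserves the
    # rationale behind the decision, which is the point of the QFD exercise.
    ```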

    An interesting parallel can be made between the principle of delaying design decisions and simulated annealing, a probabilistic technique used for solving optimization problems.

    Image processing is one area where this technique is used. Basically, if you want to clean up a noisy image, instead of applying a deterministic model you iterate over the image thousands of times. At each iteration you make a decision for a particular pixel based on the values of its neighbors. The interesting part is that in the early iterations you allow a high degree of uncertainty in that decision – i.e. you make a probabilistic guess. As the iterations evolve, you restrict the uncertainty of the probabilistic jump. So the image cools down, just like steel cooling down when you anneal it – hence the term simulated annealing.
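
    For readers who want to see the mechanics, here is a minimal simulated annealing sketch in Python for denoising a binary (-1/+1) image. The cooling schedule, coupling strength, and iteration count are illustrative assumptions, not a specific published algorithm.

    ```python
    import math
    import random

    def denoise(noisy, iterations=100_000, t_start=3.0, t_end=0.05, beta=2.0):
        """Simulated-annealing denoising of a binary image (values -1/+1).

        Early iterations run at a high temperature, so "bad" pixel flips are
        often accepted; as the temperature cools, the image settles down,
        mirroring the annealing analogy above. All parameters are illustrative.
        """
        h, w = len(noisy), len(noisy[0])
        img = [row[:] for row in noisy]  # working copy
        for step in range(iterations):
            # exponential cooling schedule from t_start down to t_end
            t = t_start * (t_end / t_start) ** (step / iterations)
            i, j = random.randrange(h), random.randrange(w)
            neighbours = sum(
                img[i + di][j + dj]
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= i + di < h and 0 <= j + dj < w
            )
            # energy change of flipping pixel (i, j): we prefer agreement with
            # the neighbours and with the observed noisy value
            delta = 2 * img[i][j] * (beta * neighbours + noisy[i][j])
            if delta <= 0 or random.random() < math.exp(-delta / t):
                img[i][j] = -img[i][j]
        return img

    # toy usage: an 8x8 image of +1s with two noisy -1 pixels
    noisy = [[1] * 8 for _ in range(8)]
    noisy[2][3] = noisy[5][6] = -1
    restored = denoise(noisy, iterations=20_000)
    ```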

    How does this relate to architectural decisions? Simply put, if you cool down the image too quickly it cracks – just like steel cracking if you cool it down too quickly. The same concept applies to architectural decisions: if you make too many decisions early on in the process, your architecture will fracture.

    Below is a picture of a simulated annealing example, showing the results of two parameter sets used in the algorithm (bottom left and bottom right quadrants).

    CA Figure 2-5

    As we move away from waterfall application development lifecycles involving large requirements documents and evolve toward the rapid delivery of a viable product, we need to create Minimum Viable Architectures (MVAs) to support the rapid delivery of those products. This concept is discussed in more detail in Chapter 7 of our book and in one of our blog entries.

    To that end, limiting the budget spent on architecting is a good thing; it forces the team to think in terms of a Minimum Viable Architecture that starts small and is only expanded when absolutely necessary. Too often, a team will solve problems that don’t exist, and yet fail to anticipate a crippling challenge that kills the application. Getting to an executable architecture quickly, and then evolving it, is essential for modern applications.

     

    Please check our blog at https://pgppgp.wordpress.com/ and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.

     
  • pierrepureur 12:11 am on February 18, 2016 Permalink | Reply
    Tags: Enterprise Architecture

    Requirements in Continuous Architecture: Let’s Clarify! 

    CA Book Cover Small 2

    Clarifying Requirements is Important

    In the previous installment of our “Continuous Architecture” blog as well as in our book, we discussed an effective approach for capturing requirements. We suggested that thinking in terms of testable hypotheses instead of traditional requirements enables teams following the Continuous Architecture approach to quickly deliver systems that evolve in response to users’ feedback and meet or even exceed users’ expectations. But are we sure that we always understand what the system stakeholders want, even when they are able to precisely quantify those requirements?

    Philippe Kruchten tells the following story about the importance of clarifying requirements. Back in 1992, Philippe was leading the Architecture team for the Canadian Air Traffic Control System (CAATS), and the team had a requirement of “125 ms time to process a new position message from the Radar Processing System, from its arrival entry in the Area Control Center till all displays are up-to-date”.

    Here is how Philippe tells the story: “After trying very hard to meet the 125 ms for several months, I was hiking one day, looking at a secondary radar slowly rotating (I think it was the one on the top of Mount Parke on Mayne Island, just across from Vancouver Airport). I thought … “mmm, there is already a 12-20 second lag in the position of the plane, why would they bother with 125ms…?”

    radar

    Note: primary radar uses an echo of an object. Secondary radar sends a message “who are you?” and the aircraft responds automatically with its ID and its altitude (http://en.wikipedia.org/wiki/Secondary_surveillance_radar). It looks like this: https://www.youtube.com/watch?v=Z0mpzIBWVG

    I knew all this because I am myself a pilot…

    Then I thought:  “in 125ms, how far across the screen can an aircraft go, assuming a supersonic jet… and full magnification”. Some back of the envelope computation gave me…about 1/2 pixel! When I had located the author of the requirement, he told me: “mmm, I allocated 15 seconds to the radar itself (rotation), 1 second for the radar processing system, 4 seconds for transmission, through various microwave equipment, that left 1 second in the ACC, breaking this down between all equipment in there, router, front end, etc.., it left 125ms for your processing, guys, updating and displaying the position…” These may not have been his exact words as this happened a long time ago, but this was the general line… Before agile methodologies made it a “standard”, it was useful for the architects to have direct access to the customer, often on site… And in this case being able to speak the same language (French)”.
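
    To make the back-of-the-envelope arithmetic concrete, here is a small Python sketch. The aircraft speed, sector width, and display resolution are our own assumptions for illustration, not figures from the CAATS project.

    ```python
    # Rough check of the "about 1/2 pixel" estimate; every number is an assumption.
    MACH_1_MS = 340.0              # speed of sound in m/s (approximate)
    jet_speed = 2 * MACH_1_MS      # assume a Mach-2 supersonic jet
    latency_s = 0.125              # the 125 ms requirement

    distance_m = jet_speed * latency_s             # ~85 m travelled in 125 ms

    sector_width_nm = 256          # assumed width of the displayed sector
    sector_width_m = sector_width_nm * 1852        # nautical miles -> metres
    pixels_across = 2048           # assumed horizontal display resolution

    metres_per_pixel = sector_width_m / pixels_across
    pixels_moved = distance_m / metres_per_pixel

    print(f"aircraft moves {distance_m:.0f} m in 125 ms "
          f"= {pixels_moved:.2f} pixels on screen")
    # -> roughly a third to a half of a pixel, depending on the magnification
    ```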

    Philippe’s story clearly stresses that it is important for an Architect to question everything, and not to assume that requirements as stated are absolute.

     

    Please check our blog at https://pgppgp.wordpress.com/ and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.

     
  • pierrepureur 12:09 am on February 8, 2016 Permalink | Reply
    Tags: Enterprise Architecture

    Continuous Architecture and Requirements 

    CA Book Cover Small 2

    Don’t Think Functional Requirements, Think Faster Time to Feedback

    As we discussed in Chapter 1 of the “Continuous Architecture” book and elsewhere in this blog, capturing and managing requirements accurately and in a timely manner – especially Quality Attribute Requirements – is an essential part of the Continuous Architecture approach. Traditionally, the approach to capturing and managing requirements has been based on conducting interviews with subject matter experts (SMEs) in order to document the requirements that the system must satisfy, often in voluminous documents which are hard to read and analyze. Ideally, the interviewees should be actual or prospective users of the system being developed, but in practice the interviewers often have to settle for representatives from the business who believe that they are familiar with the way the system is (or will be) used.

    Requirements Interviews Dilbert

    Collecting requirements from user representatives who attempt to guess the needs of real users, as well as the best ways to satisfy those needs, often results in systems that fall short of real users’ expectations. In addition, there is often a significant time interval between the interviews and the delivery of the system, and this may lead to further disappointment as requirements may change due to evolving business conditions.

    As Forrester Research Principal Analyst Kurt Bittner states, “The problem is the SME paradigm itself. No one person can represent the needs of all users, no matter how hard they try. The problem goes deeper: The conscious mind often cannot express what is really needed, and only knows what it doesn’t like when it sees it. As a result, the surest path to success is to put something out there that minimally satisfies some need, sometimes called a minimum viable product, and then improve upon that in rapid cycles.”

    The objective of the Continuous Architecture approach is to enable the rapid delivery of a minimum viable product that may be designed to satisfy some need or validate some hypothesis, and will continuously evolve as feedback from the users is received. As we describe in the book and in this blog, we achieve this objective by creating a “minimum viable architecture” that also continuously evolves as user feedback is received, and enables the delivery of a system that meets or even exceeds its users’ expectations.

    Please check our blog at https://pgppgp.wordpress.com/ and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.

     
  • pierrepureur 12:36 am on January 20, 2016 Permalink | Reply
    Tags: Enterprise Architecture

    When Should Continuous Architecture Be Used? 

    With respect to release cycles and deployment mechanisms, modern architectures may need to function at multiple speeds. An IT organization is going to have a “portfolio” of applications, some of which can support high rates of change (due to their loosely coupled architectures), and some of which cannot be released very fast because they are tightly coupled and therefore tend to be brittle.

    A “Continuous Architecture approach” takes this into account and is able to deal with different release rates. For example, Systems of Engagement (especially mobile and cloud applications) lend themselves to a Continuous Delivery approach. Systems of Record and Systems of Operation are usually not a good fit for Continuous Delivery, as traditional tightly coupled architectures often require substantial rewrites in order to keep pace with evolving business needs, and enhancements are often postponed until the need becomes critical.

    As we discussed in an earlier post, Continuous Architecture is not a formal methodology. We think of it as an approach to adapt formal architecture thinking and discipline to an ever changing and evolving world. Our approach is driven by six simple principles (see figure below) and supporting tools.

    CA Figure 2-1

    As we describe in our book as part of our discussion of Principle 4 (Architect for Change – Leverage “The Power of Small”), loosely coupled architectures enable rapid changes and provide a significant advantage when “future-proofing” architectures. However, very few projects have the luxury of starting from a blank slate, and frequently architects have to deal with legacy, monolithic systems which often contain tightly coupled components. Uncoupling monolithic architectures is often a significant challenge, but not an impossible one, and we offer some techniques in our book to deal with it.

    Using a Continuous Delivery approach, IT teams can rapidly and safely implement new requirements from the business by creating a regular, controlled, and highly automated delivery pipeline. With that approach, an IT organization is able to move from a traditional model where business partners specify requirements to one based on “testable hypotheses”. This in turn creates a new, high-speed IT function that sits alongside the legacy IT function. The high-speed IT function can focus on the systems of engagement for a few business areas and provide a lot of value to the business. However, we believe that the Continuous Architecture approach isn’t limited to Agile and Continuous Delivery projects. We have used it on projects using iterative and even waterfall methodologies, and it produces excellent results!
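
    As a rough illustration of what a “regular, controlled, and highly automated delivery pipeline” means in practice, here is a conceptual Python sketch. The stage names and gates are hypothetical and not tied to any specific CI/CD tool.

    ```python
    # Conceptual sketch of a gated delivery pipeline; every stage and check
    # below is a hypothetical placeholder.
    from typing import Callable, List, Tuple

    Stage = Tuple[str, Callable[[], bool]]

    def run_pipeline(stages: List[Stage]) -> bool:
        """Run each stage in order and stop at the first failing gate."""
        for name, gate in stages:
            print(f"running: {name}")
            if not gate():
                print(f"pipeline stopped: {name} failed")
                return False
        print("release candidate promoted to production")
        return True

    pipeline: List[Stage] = [
        ("build and package",            lambda: True),
        ("unit tests",                   lambda: True),
        ("quality attribute tests",      lambda: True),  # performance, security, etc.
        ("deploy to staging",            lambda: True),
        ("validate testable hypothesis", lambda: True),  # did users respond as expected?
        ("deploy to production",         lambda: True),
    ]

    run_pipeline(pipeline)
    ```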

    Please check other entries in this blog and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.

    CA Book Cover Small 2

     
  • pierrepureur 12:09 am on December 15, 2015 Permalink | Reply
    Tags: Enterprise Architecture

    Applying Continuous Architecture in Practice 

     

    Continuous Architecture is a set of principles and supporting tools.

    We do not aim to define a detailed architecture methodology or development process. Our main objective is to share a set of core principles and tools we have seen work in real-life practice. So applying Continuous Architecture is really about understanding the principles and applying them in the context of your environment. While doing this, you can also decide which tools you would want to recommend.

    We are responding to the current challenge of creating a solid architectural foundation in the world of agile and Continuous Delivery. However, that does not mean that applying Continuous Delivery is a prerequisite for adopting the Continuous Architecture approach. We realize that some companies may not be ready to adopt agile methodologies. Moreover, even if a company is fully committed to agile methodologies, there may be situations, such as working with a third-party software package, where other approaches such as iterative or incremental development may be more appropriate.

    CA Figure 1-5

    Does this mean that Continuous Architecture would not work in this situation? Absolutely not. This is one of the key benefits of the “Toolbox” approach: its contents can be easily adapted to work with iterative or incremental approaches instead of Agile.

    Continuous Architecture also operates in two dimensions: time and scale.

    CA Figure 1-6

    The time dimension addresses how we enable architectural practices in a world of increasingly rapid delivery cycles, while the scale dimension looks at the level we are operating at (such as project, line of business, or enterprise). We believe that the Continuous Architecture principles apply consistently at all scales, but the level of focus and the tools used might vary.

    Please check our blog at https://pgppgp.wordpress.com/ and our “Continuous Architecture” book (http://www.store.elsevier.com/9780128032848) for more information about Continuous Architecture.

     
  • pierrepureur 1:20 am on December 2, 2015 Permalink | Reply
    Tags: Enterprise Architecture

    The Benefits of Continuous Architecture 

    CA Book Cover Small 2

    The cost-quality-time triangle is a well-known project management aid that basically states the key constraints of any project.

    CA Figure1-4c

    The basic premise is that it is not possible to optimize all three corners of the triangle; you have to pick two of the corners and sacrifice the third.

    We do not claim that Continuous Architecture solves this problem, but the triangle does provide a good context for thinking about the benefits of Continuous Architecture. If we identify good architecture as representing quality in a software solution, then with Continuous Architecture we have a mechanism that helps us balance time and cost. Another way of saying this is that Continuous Architecture helps us balance time and cost constraints while not sacrificing quality.

    The time dimension is a key aspect of Continuous Architecture. We believe that architectural practices should be aligned with Agile practices and not contradict them. In other words, we are continuously developing and improving the architecture rather than doing it once and creating the Big Architecture up Front (BARF). As we discuss in detail in our book (“Continuous Architecture” – http://www.store.elsevier.com/9780128032848) and elsewhere in this blog, Continuous Architecture puts special emphasis on Quality Attributes (Principle 2: Focus on Quality Attributes, not on functional requirements). We believe that cost is one of the Quality Attributes that is often overlooked but is critical in making the correct architectural decisions.

    Continuous Architecture does not solve the cost-quality-time triangle, but it gives us tools to balance it while maintaining quality. An element that the cost-quality-time triangle does not address is sustainability. Most large enterprises have a complex technology and application landscape as a result of years of business change and IT initiatives. Agile and Continuous Development practices focus on delivering solutions and ignore addressing this complexity. Continuous Architecture tackles this complexity and strives to create a sustainable model for individual software applications as well as the overall enterprise.

    Applying Continuous Architecture at the individual application level enables a sustainable delivery model and a coherent technology platform resilient against future change. Applying Continuous Architecture at the enterprise level enables increased efficiency in delivering solutions and a healthy ecosystem of common platforms.

     
  • pierrepureur 11:47 pm on August 4, 2015 Permalink | Reply
    Tags: Enterprise Architecture

    Continuous Architecture and the Quality Assurance Group 

    Several companies, including Pivotal Labs and Microsoft[1], have eliminated their formal Quality Assurance groups and moved the testing function back to the developers. Those companies believe that moving the testing function back to the developers, together with the appropriate automated tools to deploy and test software, empowers them and enables them to produce higher quality software. When developers are responsible for testing their software and supporting it in production, they become concerned with how hard their applications are to deploy, test, and run, and not just with how quickly they can write software.

    However this approach may be too radical for some companies, and we believe that there is still a role for Quality Assurance groups in a Continuous Delivery world. The key is to ensure that the testing group collaborates closely with Development and Operations as part of the DevOps process.

    According to Bret Pettichord’s 2007 Schools Of Software Testing talk[2], testers can be grouped into the following five “Schools”:

    • Analytic School: sees testing as rigorous and technical with many proponents in academia
    • Standard School: sees testing as a way to measure progress with emphasis on cost and repeatable standards
    • Quality School: emphasizes process, policing developers and acting as the gatekeeper
    • Context-Driven School: emphasizes people, seeking bugs that stakeholders care about (Pettichord aligns himself with that school)
    • Agile School: uses testing to prove that development is complete; emphasizes automated testing

    Testing groups aligned with the Agile or the Context-Driven schools are likely to be the most supportive of the Continuous Architecture approach as well as of the Continuous Delivery process, while testing groups aligned with the three other schools may have a challenge adapting to that process. When testers act as gatekeepers as emphasized in the “Quality School”, they negatively impact the collaboration between development, operations and testing which is at the core of the “DevOps” process.

    Please refer to Pettichord’s 2002 article, “Don’t Become The Quality Police”[3], for a discussion of how positioning the testing group in the “process police” role may generate confrontation and could degrade relationships with development and operations.

    Do you still have a formal Quality Assurance group, and have you successfully implemented Continuous Delivery? We would love to read your observations – please drop us a note!

    Notes:

    [1] http://www.bloomberg.com/news/articles/2015-02-19/microsoft-ceo-nadella-looks-to-future-beyond-windows

    [2] https://www.prismnet.com/~wazmo/papers/four_schools.pdf

    [3] http://www.stickyminds.com/article/dont-become-quality-police

     
  • pierrepureur 11:29 pm on July 14, 2015 Permalink | Reply
    Tags: Enterprise Architecture

    The Value of (Continuous) Architecture 


    What is the real value of architecture? We think of architecture as an enabler for the delivery of valuable software. Software architecture’s concerns, namely quality attribute requirements such as performance, maintainability, scalability, and security, are at the heart of what makes software successful.

    A comparison to building architecture may help illustrate this concept. Stone arches are one of the most successful building architecture constructs. Numerous bridges built by the Romans around 2,000 years ago using stone arches are still standing – for example, the Pont du Gard, built in the first century AD. How were stone arches built at that time? A wooden frame known as “centring” was first constructed in the shape of an arch. The stonework was built up around the frame, and finally a keystone was set in position. The keystone gave the arch strength and rigidity. The wooden frame could then be removed, and the arch was left in position. The same technique was later used in the Middle Ages when constructing arches for Gothic cathedrals.

    CA Figure 11-2

    Source: http://www.bbc.co.uk/history/british/launch_ani_build_arch.shtml

    We think of software architecture as the “centring” for building successful software “arches”. When the Romans built bridges using this technique, we do not believe that anybody worried about the aesthetics or the appearance of the “centring”. Its purpose was the delivery of a robust, strong, reliable, usable, and long-lasting bridge.

    Similarly, we believe that the value of software architecture should be measured by the success of the software it is helping to deliver, not by the quality of its artifacts. Sometimes architects use the term “value evident architecture” to describe a set of software architecture documents that they created and are really proud of, and that development teams should not (ideally) need to be sold on in order to use. However, we are somewhat skeptical about these claims – can you really evaluate a “centring” until the arch is complete, the keystone has been put in place, and the bridge can be used safely?

     
  • pierrepureur 11:48 pm on July 6, 2015 Permalink | Reply
    Tags: Enterprise Architecture

    How to Evolve Continuous Architecture over Time? Think “Minimum Viable Architecture” 


    Let’s assume that a team has successfully developed and implemented an application by following the six Continuous Architecture principles. Now we’ll turn our attention to their next challenge – how do they evolve the architecture to cope with the unavoidable requirement changes that are already piling up on them? This is where they need to leverage a “Minimum Viable Architecture” strategy.

    Let’s first explain what we mean by “Minimum Viable Architecture”. It is closely associated with the concept of a “Minimum Viable Product”, so we’ll start with a brief overview of that idea.

    What Exactly is a “Minimum Viable Product”?

    A “Minimum Viable Product” can be defined as follows:

    In product development, the minimum viable product (MVP) is the product with the highest return on investment versus risk (…)

    A minimum viable product has just those core features that allow the product to be deployed, and no more. The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. It is a strategy targeted at avoiding building products that customers do not want, that seeks to maximize the information learned about the customer per dollar spent (from Wikipedia, the free encyclopedia; see also: S. Junk, “The Dynamic Balance Between Cost, Schedule, Features, and Quality in Software Development Projects”, Computer Science Dept., University of Idaho, SEPM-001, April 2000; Eric Ries, “What is the minimum viable product?”, Venture Hacks interview, Lessons Learned, March 23, 2009; “Perfection By Subtraction – The Minimum Feature Set”, SyncDev, http://www.syncdev.com/index.php/minimum-viable-product/; Ryan Holiday, “The single worst marketing decision you can make”, The Next Web, 1 April 2015; Eric Ries, “Minimum Viable Product: a guide”, August 3, 2009).

    The concept of Minimum Viable Product has been actively promoted by proponents of Lean and Agile approaches, and it certainly has worked very well at several startups. The concept sounds attractive at first – being able to quickly and inexpensively create a product to gauge the market before investing time and resources into something that may not be successful is a great idea.

    However, in a highly regulated industry like Insurance or Banking, the concept of Minimum Viable Product has limitations – some product capabilities such as regulatory reporting, security and auditability are not optional and cannot be taken out of scope. Also software vendors routinely launch their products as “alpha” or “beta” versions, but very few Financial Services companies would consider launching anything but a production ready version, especially to external audiences.

    Of course some other features such as some inquiry screens or activity reports may be omitted from the initial release, but those features are usually easy and inexpensive to build so taking them out of scope for the initial release may not save much time or money.

    In addition, implementing new products may involve leveraging existing capabilities implemented in older back-end systems (such as rate quoting in Insurance), and interfacing with those systems is likely to represent a significant portion of the effort required to create a new product – unless those interfaces have already been encapsulated by developing reusable services as part of a previous effort. Unfortunately, that’s not often the case, and teams attempting to implement a Minimum Viable Product in Financial Services companies often struggle with defining a product that has enough capabilities to be moved to production – yet which is also small enough to be created quickly and with a minimal investment of time and money.

    What about Minimum Viable Architecture?

    On the other hand, using a Minimum Viable Architecture strategy is an effective way to bring a product to market faster and at lower cost. Let’s examine a sample Quality Attributes Utility Tree to clarify this point:

    CA Figure 7-12

    Under each of those Quality Attributes are specific Quality Attribute Refinements – for example, “Latency” further refines “Performance”. In addition, each Quality Attribute Refinement is illustrated by an Architecture Scenario, expressed in terms of Stimulus/Response/Measurement. The Architecture Scenarios themselves are a very effective way to express Quality Attribute Requirements, since they are concrete and measurable, and should be easy to implement in a prototype.
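
    Here is a minimal sketch, in Python, of how a Quality Attribute Refinement and its Architecture Scenario could be recorded so that they stay concrete and measurable. The example values are hypothetical, not taken from the book’s utility tree.

    ```python
    # Illustrative structures for a utility-tree entry; field contents are made up.
    from dataclasses import dataclass

    @dataclass
    class ArchitectureScenario:
        stimulus: str     # what happens to the system
        response: str     # how the system should react
        measurement: str  # how we verify the response

    @dataclass
    class QualityAttributeRefinement:
        attribute: str    # e.g. "Performance"
        refinement: str   # e.g. "Latency"
        scenario: ArchitectureScenario

    latency = QualityAttributeRefinement(
        attribute="Performance",
        refinement="Latency",
        scenario=ArchitectureScenario(
            stimulus="A user submits a quote request during peak hours",
            response="The quote is returned to the user",
            measurement="95th percentile response time under 2 seconds",
        ),
    )
    ```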

    There is also a time/release dimension to Quality Attributes Analysis that answers the following questions:

    • How many concurrent users will be on the system at initial launch?
    • How many concurrent users will be on the system within the first 6 months?
    • How many concurrent users will be on the system within the first year?
    • How many transactions per second is the system expected to handle at initial launch?
    • How many transactions per second is the system expected to handle within the first 6 months?
    • How many transactions per second is the system expected to handle within a year?

    This time dimension can be represented in the Quality Attributes Utility Tree as shown below:

    CA Figure 7-13
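
    Here is a hedged sketch of how that time/release dimension could be attached to quality attribute targets so the Minimum Viable Architecture is sized for the nearest horizon only; the horizons and numbers are illustrative, not from the book.

    ```python
    # Hypothetical time-phased quality attribute targets; all figures are made up.
    targets = {
        "Scalability / concurrent users": {
            "at launch": 500,
            "first 6 months": 2_000,
            "first year": 5_000,
        },
        "Performance / transactions per second": {
            "at launch": 50,
            "first 6 months": 150,
            "first year": 300,
        },
    }

    def target_for(attribute: str, horizon: str) -> int:
        """Return the target to architect for at a given horizon."""
        return targets[attribute][horizon]

    # Size the initial architecture for launch, not for the year-one guess.
    print(target_for("Scalability / concurrent users", "at launch"))  # -> 500
    ```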

    Many architects consider the worst-case scenario when designing a system – for example, they would ask their business partners for the “maximum number of concurrent users the system should be able to support” without mentioning a time frame – and add a “safety margin” on top of that number, just to be on the safe side. Unfortunately, they do not realize that the number of concurrent users provided by the business is likely to be an optimistic guess (business partners would like to believe that every new system is going to be a big success!) – unless the system that they are architecting replaces an existing system and usage volumes are precisely known.

    As a result, they end up architecting the new system to handle an unrealistic number of concurrent users which may not be reached for a few years, and sometimes add unnecessary complexity (such as caching components) to their design. We recommend instead adopting a “Minimum Viable Architecture” approach based on realistic estimates at launch time, and evolving that architecture based on actual usage data. Also remember that technology becomes more efficient over time, and keep Principle 3 in mind: delay design decisions until they are absolutely necessary, and design the architecture based on facts, not guesses!

    A useful strategy is to limit the budget spent on architecting. This forces the team to think in terms of a Minimum Viable Architecture that starts small and is only expanded when absolutely necessary.

     
  • muraterder 8:29 pm on September 17, 2014 Permalink | Reply
    Tags: Enterprise Architecture

    Architects’ Unit of Work – Architectural Decisions 

    If you ask most people what the most visible output of architecture is, they will most likely point to a fancy diagram that highlights the key components and their interactions. Usually, the more color and complexity, the better. Ideally, it should be too difficult to read on a normal page and require a special large-scale printer to produce. Though such diagrams give the authors and readers a false sense of being in control, they normally have no impact on driving any architectural change.

    The most important output of any architectural activity is the set of decisions made along the product development journey. It comes as a surprise that so little effort is spent on arriving at and documenting architectural decisions in a consistent and understandable manner.

    Most architectural decisions have the following elements:

    • Problem Statement: State the problem we are addressing and its context.
    • Motivation: Explain why we need to make the decision at this point in time.
    • Constraints: Clearly articulate all constraints related to the decision – architecture is in essence about finding the optimal solution within the constraints given to us.
    • Requirements: As stated in Principle 2 (Focus on Quality Attributes, not on Functional Requirements), it is important to explicitly document non-functional requirements.
    • Alternatives: List the alternative solutions to the problem and clearly describe the pros and cons of each.
    • Decision: Clearly articulate the final decision, leaving no room for ambiguity.
    • Rationale: Outline the thought process that resulted in the decision.

    Finally, there is one piece of critical information required: Who made this decision, and when? Appropriate accountability and visibility of decisions are a must:
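
    As a minimal sketch of how these elements, plus the accountability fields, could be captured in a structured decision record, consider the following Python example; the field names and content are illustrative, not a formal standard.

    ```python
    # Illustrative architectural decision record; every value below is made up.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ArchitecturalDecision:
        problem_statement: str
        motivation: str
        constraints: List[str]
        requirements: List[str]   # quality attribute requirements
        alternatives: List[str]   # each with its pros and cons
        decision: str
        rationale: str
        decided_by: str           # who made the decision
        decided_on: str           # when it was made
        depends_on: List[str] = field(default_factory=list)  # related decisions

    adr = ArchitecturalDecision(
        problem_statement="How should the new quoting service integrate with the legacy rating engine?",
        motivation="The first release needs quotes from the existing engine.",
        constraints=["The rating engine only exposes a nightly batch interface"],
        requirements=["Quote latency under 2 seconds at the 95th percentile"],
        alternatives=["Wrap the batch interface", "Build a real-time rating service"],
        decision="Build a real-time rating service",
        rationale="Wrapping the batch interface cannot meet the latency requirement.",
        decided_by="Lead architect and integration working group",
        decided_on="2014-09-17",
    )
    ```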

    In his excellent book Visual Explanations, Edward Tufte provides an incisive analysis of the decision to launch the Space Shuttle Challenger. Over 10 pages of detailed analysis, Tufte demonstrates the multiple failings of communication that resulted in the shuttle being launched at too low a temperature, leading to the fatal O-ring failure. The day before the launch, the rocket engineers prepared a presentation outlining the temperature concerns. As Tufte describes: “The charts were unconvincing; the arguments against the launch failed; the Challenger blew-up.” One of the most interesting points raised by Tufte was that the title chart and all the other displays used did not provide the names of the people who prepared the material.

    We would also recommend not only documenting architectural decisions, but also defining upfront the architectural decisions you need to make and identifying the dependencies between them. The following is an example of the architectural decisions required to define an integration approach for a specific business area:

    Architectural Decision
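
    As a hypothetical sketch of listing the integration decisions up front and capturing the dependencies between them, the snippet below derives an order in which the decisions unblock one another; the decision names are illustrative only.

    ```python
    # Hypothetical decision dependency graph for an integration approach.
    from graphlib import TopologicalSorter  # Python 3.9+

    decisions = {
        "integration style (messaging vs. REST)": set(),
        "message broker selection": {"integration style (messaging vs. REST)"},
        "canonical message format": {"integration style (messaging vs. REST)"},
        "error handling and retry policy": {"message broker selection",
                                            "canonical message format"},
    }

    # The topological order shows which decisions unblock the others, while
    # Principle 3 still lets us delay each decision until it is actually needed.
    print(list(TopologicalSorter(decisions).static_order()))
    ```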

    It is appropriate at this point to remember Principle 3: Delay design decisions until they are absolutely necessary. What we are saying here is not in conflict with this principle. It is important to clearly understand all the architectural decisions that you need to make – or, as Donald Rumsfeld put it, the “known unknowns”. Then, as more data becomes available, you can start making and documenting the decisions.

    Finally, communicating the list of architectural decisions – past, present, and future – has tremendous value. A good practice is to make all the architectural decisions and the background dialogue public, using the social collaboration tools available in the enterprise. Why not let everyone finally understand what architects spend their time on? Basically, the unit of work of architecture is the architectural decision. The difference between what Enterprise Architects and Solution Architects do is nothing more than the level of abstraction.

     
    • FM 1:30 pm on September 21, 2014 Permalink | Reply

      The decision log has been a key artifact I’ve learned to use over the years. I believe it captures not just the architect’s thought process but should reflect the zeitgeist and various constraints of the time. We’ve used it successfully recently to manage a consistent decision cadence in agile projects with little certainty and high discovery.

      You have a few more decision attributes in your list that I think I will incorporate into our next set of initiatives.


      • muraterder 6:47 pm on September 23, 2014 Permalink | Reply

        Great to see that the decision logs are being successfully utilized in agile projects.


    • pkruchten 10:33 pm on September 26, 2014 Permalink | Reply

      Other views on Architectural design decisions (ADD) include:
      J. Tyree and A. Akerman, “Architecture Decisions: Demystifying Architecture,” IEEE Software, vol. 22(2), pp. 19-27, 2005. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1407822&tag=1
      and mine, here:
      http://pkruchten.files.wordpress.com/2009/07/kruchten-2004-design-decisions.pdf

      One of the key elements is the many relationships between design decisions.

      Several tools have been tried to help capture and manage ADDs, without great success.
      Philippe


    • Charles T. Betz (@CharlesTBetz) 1:08 pm on September 29, 2014 Permalink | Reply

      I would not be quite so dismissive of the power of architecture diagrams. Have written about this here: http://www.lean4it.com/2014/09/thoughts-on-agile-and-enterprise-architecture.html


    • muraterder 9:25 pm on September 29, 2014 Permalink | Reply

      Very good point about the value of pictures and images, which relates to how critical collaboration and communication are to the success of the architecture function in an enterprise. I have added an additional post that touches on this topic.

