Recent Updates

  • pierrepureur 1:20 am on December 2, 2015 Permalink | Reply

    The Benefits of Continuous Architecture 

    CA Book Cover Small 2

    The cost-quality-time triangle is a well-known project management aid that states the key constraints of any project.

    CA Figure1-4c

    The basic premise is that it is not possible to optimize all three corners of the triangle; you can pick any two corners but must sacrifice the third.

    We do not claim that Continuous Architecture solves this problem, but the triangle does present a good context to think about benefits of Continuous Architecture. If we identify good architecture as representing quality in a software solution, then with Continuous Architecture, we have a mechanism that helps us balance time and cost. Another way of saying this is that Continuous Architecture helps us balance time and cost constraints while not sacrificing quality.

    The time dimension is a key aspect of Continuous Architecture. We believe that architectural practices should be aligned with Agile practices and not contradict them. In other words, we are continuously developing and improving the architecture rather than doing it once and creating the Big Architecture up Front (BARF). As we discuss in detail in our book, “Continuous Architecture”, and elsewhere in this blog, Continuous Architecture puts special emphasis on Quality Attributes (Principle 2: Focus on Quality Attributes, not on functional requirements). We believe that cost is one of the Quality Attributes that is often overlooked but is critical in making the correct architectural decisions.

    Continuous Architecture does not solve the cost-quality-time triangle, but it gives us tools to balance it while maintaining quality. An element that the cost-quality-time triangle does not address is sustainability. Most large enterprises have a complex technology and application landscape as a result of years of business change and IT initiatives. Agile and Continuous Development practices focus on delivering solutions but largely ignore this complexity. Continuous Architecture tackles this complexity and strives to create a sustainable model for individual software applications as well as for the overall enterprise.

    Applying Continuous Architecture at the individual application level enables a sustainable delivery model and a coherent technology platform resilient against future change. Applying Continuous Architecture at the enterprise level enables increased efficiency in delivering solutions and a healthy ecosystem of common platforms.

  • pierrepureur 11:47 pm on August 4, 2015 Permalink | Reply

    Continuous Architecture and the Quality Assurance Group 

    Several companies, including Pivotal Labs and Microsoft[1], have eliminated their formal Quality Assurance groups and moved the testing function back to the developers. Those companies believe that moving the testing function back to the developers, together with the appropriate automated tools to deploy and test software, empowers them and enables them to produce higher quality software. When developers are responsible for testing their software and supporting it in production, they become concerned with how hard their applications are to deploy, test and run, and not just with how quickly they can write software.

    However, this approach may be too radical for some companies, and we believe that there is still a role for Quality Assurance groups in a Continuous Delivery world. The key is to ensure that the testing group collaborates closely with Development and Operations as part of the DevOps process.

    According to Bret Pettichord’s 2007 Schools Of Software Testing talk[2], testers can be grouped into the following five “Schools”:

    • Analytic School: sees testing as rigorous and technical with many proponents in academia
    • Standard School: sees testing as a way to measure progress with emphasis on cost and repeatable standards
    • Quality School: emphasizes process, policing developers and acting as the gatekeeper
    • Context-Driven School: emphasizes people, seeking bugs that stakeholders care about (Pettichord aligns himself with that school)
    • Agile School: uses testing to prove that development is complete; emphasizes automated testing

    Testing groups aligned with the Agile or the Context-Driven schools are likely to be the most supportive of the Continuous Architecture approach as well as of the Continuous Delivery process, while testing groups aligned with the three other schools may have a challenge adapting to that process. When testers act as gatekeepers as emphasized in the “Quality School”, they negatively impact the collaboration between development, operations and testing which is at the core of the “DevOps” process.

    Please refer to Pettichord’s 2002 article, “Don’t Become The Quality Police”[3], for a discussion of how positioning the testing group in the “process police” role may generate confrontation and could degrade relationships with development and operations.

    Do you still have a formal Quality Assurance group, and have you successfully implemented Continuous Delivery? We would love to read your observations – please drop us a note!





  • muraterder 8:35 pm on July 22, 2015 Permalink | Reply  

    Open Source and Continuous Architecture 

    Setting technology standards is one of the more common activities that Enterprise Architecture groups fulfill. Setting standards is challenging enough when considering commercially available products. The last fifteen years have seen the emergence of Open Source software, which has increased the challenges in this space. According to the Open Source Initiative, the term is defined as:

    Open source software is software that can be freely used, changed, and shared (in modified or unmodified form) by anyone. Open source software is made by many people, and distributed under licenses that comply with the Open Source Definition.

    Commercially, Open Source has been quite successful and has acted as a truly disruptive force within the software industry. At its best, Open Source can be considered to embody all the good things about the internet age. It demonstrates the ability of individuals to collaborate to achieve a common goal. The common wisdom is that multiple people addressing the same problem solve it more elegantly and with fewer defects. The Open Source movement has also taught us quite a lot about how communities can collaborate effectively across different geographies and time zones. Though seemingly very democratic, it is interesting to note that the concept of a “benevolent dictator” (also known as the “committer”) has been key to quite a few of the successful Open Source initiatives. Briefly, the benevolent dictator can be considered the architect, or the person responsible for the conceptual integrity of the solution.

    The impact of Open Source can be seen from the market-leading solutions it has given rise to, from Linux to the multiple projects under the Apache Foundation (Tomcat, Camel and Hadoop, to name a few well-known ones). From a commercial perspective, Open Source has also created new business models, where companies provide versions of Open Source software that are supported in a more traditional software model. These have been instrumental in making large organizations feel comfortable using Open Source technology. You could say that in the future we may get to a point where there are only two types of software left: open source and Software as a Service (i.e. software delivered via the Cloud).

    Does Open Source have any significance from an architectural approach perspective? From one perspective, Open Source code has been a huge asset for software reuse. Effectively used, Open Source components can significantly reduce the time to market of a software development team. On the other hand, Open Source technology creates significant challenges in terms of introducing unknowns into your software code base. This can happen from multiple angles:

    • Commercially, the teams might not be aware of what Open Source licensing they are operating under. This could cause significant headaches if it is found that the organization is not fully adhering to the license conditions – for example, legal headaches around indemnification when signing a new contract for Open Source support.
    • The development teams might not be downloading the most recent version of the Open Source component, or more likely they will not be keeping their version of the component up to date with newer versions. This can result in non-performant or defective code existing in the code base.
    • A special challenge is around security risks. There is no guarantee that all security flaws will be fixed rapidly and the alerting mechanisms for known security breaches can be suboptimal.
    • Finally, there is always the possibility of the community losing interest in the Open Source initiative. This can leave a portion of your code base orphaned, with no one really supporting it or understanding it fully. From one perspective this is no different from the legacy Cobol code bases existing in organizations.
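    These risks are easier to manage when Open Source usage is visible. As a minimal sketch (the requirements-file format and the "==" pinning convention are illustrative assumptions, not a prescription), a small script can flag dependencies that are not pinned to an exact version, so the team can review whether each component is current:

```python
# Sketch: flag dependencies that lack an exact version pin, so the
# team can review whether each Open Source component is up to date.
# The requirements format and "==" convention are assumptions.

def unpinned(requirements_lines):
    """Return requirement lines that lack an exact '==' version pin."""
    flagged = []
    for line in requirements_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if "==" not in line:
            flagged.append(line)
    return flagged

reqs = ["requests==2.31.0", "flask", "# a comment", "lxml>=4.0"]
print(unpinned(reqs))  # ['flask', 'lxml>=4.0']
```

    A real tracking process would also record the license of each component, but even this small visibility step surfaces the drift described above.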

    From a Continuous Architecture perspective, we are very supportive of the ethos of Open Source initiatives. We believe that both the code itself and the community models it espouses are tremendously valuable. We would encourage organizations to embrace Open Source and apply some of the Open Source development approaches internally as well, commonly known as Internal Open Source models. An area where this approach can be used is for orphaned Enterprise Services that the enterprise no longer wants to maintain because there is not enough demand for them. If a few groups still want to use those services, they can continue to maintain them using this internal open source model.

    When implementing Open Source within a commercial enterprise, a controlled approach to tracking Open Source usage and addressing the challenges highlighted above is required. We believe that following Continuous Architecture principles like Principle 4 (Architect for Change – Leverage “The Power of Small”) and Principle 1 (Architect Products – Not Just Solutions for Projects) will help in addressing these challenges. Both of these principles enable you to manage the scope and context of your code base so that you can keep track of the areas where you use Open Source more effectively. In addition, open source software tends to be more cloud-friendly than proprietary software. This is where applying Principle 5 (Architect for Build, Test and Deploy) comes in handy.

    • Robert Baty 9:03 pm on July 24, 2015 Permalink | Reply

      Good post, at my current employer we have wrestled with Open Source for some time now. We have a process to review the licensing, track the usage and ensure the open source is from a more reputable provider like Apache, however, it still seems lacking.

      Developers we hire nowadays expect to be able to download and use whatever tool they like and get frustrated working at a big shop with some of the bureaucracy and security policies in place. How do you balance the need to allow creativity and innovation in solving technical problems with the corporate governance of large organizations?


  • pierrepureur 11:29 pm on July 14, 2015 Permalink | Reply

    The Value of (Continuous) Architecture 


    What is the real value of architecture? We think of architecture as an enabler for the delivery of valuable software. Software architecture’s concerns – quality attribute requirements such as performance, maintainability, scalability and security – are at the heart of what makes software successful.

    A comparison to building architecture may help illustrate this concept. Stone arches are one of the most successful building architecture constructs. Numerous bridges built by the Romans around 2000 years ago using stone arches are still standing – for example, the Pont du Gard, built in the first century AD. How were stone arches built at that time? A wooden frame known as “centring” was first constructed in the shape of an arch. The stone work was built up around the frame, and finally a keystone was set in position. The keystone gave the arch strength and rigidity. The wooden frame could then be removed, and the arch was left in position. The same technique was later used in the Middle Ages when constructing arches for Gothic cathedrals.

    CA Figure 11-2


    We think of software architecture as the “centring” for building successful software “arches”. When the Romans built bridges using this technique, we do not believe that anybody worried about the aesthetics or the appearance of the “centring”. Its purpose was the delivery of a robust, strong, reliable, usable and long-lasting bridge.

    Similarly, we believe that the value of software architecture should be measured by the success of the software it helps to deliver, not by the quality of its artifacts. Architects sometimes use the term “value evident architecture” to describe a set of software architecture documents they have created and are really proud of – documents that development teams should (ideally) not need to be sold on in order to use the architecture. However, we are somewhat skeptical about these claims: can you really evaluate a “centring” until the arch is complete, the keystone has been put in place and the bridge can be used safely?

  • pierrepureur 11:48 pm on July 6, 2015 Permalink | Reply

    How to Evolve Continuous Architecture over Time? Think “Minimum Viable Architecture” 


    Let’s assume that a team has successfully developed and implemented an application by following the six Continuous Architecture principles. Now we’ll turn our attention to their next challenge – how do they evolve the architecture to cope with the unavoidable requirement changes that are already piling up on them? This is where they need to leverage a “Minimum Viable Architecture” strategy.

    Let’s first explain what we mean by “Minimum Viable Architecture”. The term is closely related to the “Minimum Viable Product”, so we’ll start with a brief overview of that idea.

    What Exactly is a “Minimum Viable Product”?

    A “Minimum Viable Product” can be defined as follows:

    In product development, the minimum viable product (MVP) is the product with the highest return on investment versus risk (…)

    A minimum viable product has just those core features that allow the product to be deployed, and no more. The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. It is a strategy targeted at avoiding building products that customers do not want, that seeks to maximize the information learned about the customer per dollar spent (from Wikipedia, the free encyclopedia; see also S. Junk, “The Dynamic Balance Between Cost, Schedule, Features, and Quality in Software Development Projects”, Computer Science Dept., University of Idaho, SEPM-001, April 2000; Eric Ries, “What is the minimum viable product?”, Venture Hacks interview, March 23, 2009; “Perfection By Subtraction – The Minimum Feature Set”, SyncDev; Ryan Holiday, “The single worst marketing decision you can make”, The Next Web, 1 April 2015; and Eric Ries, “Minimum Viable Product: a guide”, August 3, 2009).

    The concept of the Minimum Viable Product has been actively promoted by proponents of Lean and Agile approaches, and it certainly has worked very well at several startups. The concept sounds attractive at first – being able to quickly and inexpensively create a product to gauge the market before investing time and resources into something that may not be successful is a great idea.

    However, in a highly regulated industry like Insurance or Banking, the concept of the Minimum Viable Product has limitations – some product capabilities, such as regulatory reporting, security and auditability, are not optional and cannot be taken out of scope. Also, software vendors routinely launch their products as “alpha” or “beta” versions, but very few Financial Services companies would consider launching anything but a production-ready version, especially to external audiences.

    Of course some other features such as some inquiry screens or activity reports may be omitted from the initial release, but those features are usually easy and inexpensive to build so taking them out of scope for the initial release may not save much time or money.

    In addition, implementing new products may involve leveraging existing capabilities implemented in older back-end systems (such as rate quoting in Insurance), and interfacing with those systems is likely to represent a significant portion of the effort required to create a new product – unless those interfaces have already been encapsulated by developing reusable services as part of a previous effort. Unfortunately, that’s not often the case, and teams attempting to implement a Minimum Viable Product in Financial Services companies often struggle with defining a product that has enough capabilities to be moved to production – yet which is also small enough to be created quickly and with a minimal investment of time and money.

    What about Minimum Viable Architecture?

    On the other hand, using a Minimum Viable Architecture strategy is an effective way to bring a product to market faster with lower cost. Let’s examine a sample Quality Attributes Utility Tree to clarify this point:

    CA Figure 7-12

    Under each of those Quality Attributes are specific Quality Attribute Refinements – for example, “Latency” further refines “Performance”. In addition, each Quality Attribute Refinement is illustrated by an Architecture Scenario, expressed in terms of Stimulus/Response/Measurement. The Architecture Scenarios themselves are a very effective way to express Quality Attribute Requirements, since they are concrete and measurable, and should be easy to implement in a prototype.
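    A minimal sketch of how such a scenario could be captured as structured data follows; the field names and sample values are our own illustrative assumptions, not a standard notation:

```python
# Sketch of an Architecture Scenario as structured data: a Quality
# Attribute ("Performance") is refined ("Latency") and made measurable
# via Stimulus / Response / Measurement. All names and values here are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ArchitectureScenario:
    quality_attribute: str
    refinement: str
    stimulus: str
    response: str
    measurement: str

latency = ArchitectureScenario(
    quality_attribute="Performance",
    refinement="Latency",
    stimulus="User submits a loan application during peak hours",
    response="Confirmation page is rendered",
    measurement="95th percentile response time under 2 seconds",
)
print(latency.refinement, "refines", latency.quality_attribute)
```

    Because each scenario carries a concrete measurement, it can be turned directly into a test against a prototype.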

    There is also a time/release dimension to Quality Attributes Analysis that answers the following questions:

    • How many concurrent users will be on the system at initial launch?
    • How many concurrent users will be on the system within the first 6 months?
    • How many concurrent users will be on the system within the first year?
    • How many transactions per second is the system expected to handle at initial launch?
    • How many transactions per second is the system expected to handle within the first 6 months?
    • How many transactions per second is the system expected to handle within a year?

    This time dimension can be represented in the Quality Attributes Utility Tree as shown below:

    CA Figure 7-13
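    A minimal sketch of how such time-phased targets might be recorded alongside a Quality Attribute (all numbers are invented for illustration):

```python
# Sketch: attaching a time/release dimension to a Quality Attribute
# target, so the architecture handles launch-time load first and is
# evolved as real usage grows. All numbers are invented.

concurrent_user_targets = {
    "at launch": 500,
    "within 6 months": 2_000,
    "within 1 year": 5_000,
}

def target_for(phase: str) -> int:
    """Return the concurrent-user target for a given delivery phase."""
    return concurrent_user_targets[phase]

# Architect for the launch target first, not the year-one worst case.
print(target_for("at launch"))  # 500
```

    The point of the structure is that the later targets stay visible without forcing the initial architecture to satisfy them on day one.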

    Many architects consider the worst case scenario when designing a system – for example, they would ask their business partners for the “maximum number of concurrent users the system should be able to support” without mentioning a time frame – and add a “safety margin” on top of that number, just to be on the safe side. Unfortunately, they do not realize that the number of concurrent users provided by the business is likely to be an optimistic guess (business partners would like to believe that every new system is going to be a big success!), unless the system that they are architecting replaces an existing system and usage volumes are precisely known.

    As a result they end up architecting the new system to handle an unrealistic number of concurrent users which may not be reached for a few years, and sometimes add unnecessary complexity (such as caching components) to their design. We recommend instead adopting a “Minimum Viable Architecture” approach based on realistic estimates at launch time, and evolving that architecture based on actual usage data. Also keep in mind that technology becomes more efficient over time, and keep Principle 3 in mind: Delay design decisions until they are absolutely necessary, and design the architecture based on facts, not guesses!

    A useful strategy is to limit the budget spent on architecting. This forces the team to think in terms of a Minimum Viable Architecture that starts small and is only expanded when absolutely necessary.

  • muraterder 9:52 pm on February 2, 2015 Permalink | Reply  

    Business Context for Architecture – Using the Right Term? 

    In 2004 we published an article in IEEE IT Professional titled ‘Defining Business Requirements Quickly and Accurately’. In hindsight we probably should have called it ‘Defining the Business Context for Architecture’.

    Eleven years have passed since this article, but we believe that the key tenets of the article are still valid. But have the terms of the IT industry changed? In particular, I am interested in views on use-cases vs. user stories.

    In 2004 we used the concept of use-cases to define how we can provide a dynamic view of the business. However, these days, though the term use-case is still widely used, we believe that it has lost its core meaning. Similar to Kleenex and Xerox, the brand is so successful that it is used to represent something more general than originally intended.

    Do we really find many software development teams that create use-case models and write use-case descriptions? Probably not. Instead everyone is focused on user stories and agile development, or at least pretends to be. According to Wikipedia, a user story is defined as:

    “In software development and product management, a user story is one or more sentences in the everyday or business language of the end user or user of a system that captures what a user does or needs to do as part of his or her job function.”

    The challenge with this is the scope of a user story. A few sentences are a great way of documenting requirements on an index card, but are they sufficient to operate at a larger scope? We address this by grouping user stories into themes, which we call user story themes.
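    A minimal sketch of this grouping (the stories and theme names are invented for illustration):

```python
# Sketch: grouping individual user stories into "user story themes"
# so they can be reasoned about at a larger scope. Stories and theme
# names are invented for illustration.
from collections import defaultdict

stories = [
    ("Loan Application", "As an applicant, I want to submit my details"),
    ("Loan Application", "As a lender, I want to review an application"),
    ("Funds Disbursement", "As a lender, I want to disburse an approved loan"),
]

themes = defaultdict(list)
for theme, story in stories:
    themes[theme].append(story)

for theme, grouped in themes.items():
    print(f"{theme}: {len(grouped)} stories")
```

    Each theme then plays roughly the role a use-case played in our 2004 article: a named unit of business behavior larger than a single index card.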

    So, should we still use the term use-cases or switch to user story themes? Let us now present our key idea about providing business context to an architecture to illustrate our point.

    The first step in developing an architecture is to understand the functional scope of the product or project we are building. We use value chains to better understand and describe this functional scope. The objectives of developing a value chain are to:

    • Understand the “why” and “what” of the business. Why are we in business? What are the core activities that provide value to our clients and us?
    • Understand the “how” of the business. How do we actually execute within the core activities of our business?
    • Understand the “whom” of the business. Who are the internal and external users of our technology?

    The role of the “Value Chain” is to depict activities (“chevrons”) of the Enterprise that are involved in generating value, as well as the supporting areas. The following example depicts a sample value chain for a loan servicing organization.

    Value Chain

    A value chain provides an overview of the business by depicting major processes that collectively generate value for the organization and its clients. A value chain is a long lasting view of the business. As long as the core business does not change, the same value chain can be used by the organization for years. One of the first authors to introduce the Value Chain concept was Michael Porter in 1985.

    The value chain is great at setting the high-level context, but to drive architectural definitions we want to animate the value chain. For this, in 2004 we proposed using use-cases, a concept first introduced by Ivar Jacobson in 1992. For the rest of this blog we will write “use-cases / user story themes” where we previously had use-cases.

    Let’s use the loan servicing organization from our previous example to further expand. We know from the value chain that the first two chevrons are Loan Application and Funds Disbursement. But what actually happens and who is involved in these activities? We can get this information by looking at the “use-cases / user story themes” for each chevron.

    Value Chain and Use Case

    As can be seen from the above diagram, we now know more about how our business operates. We have determined three main events (Apply for Loan, Disburse Loan and Pay Fees), which we will call “use-cases / user story themes”, and eight roles (Applicant, Guarantor, Lender, Credit Bureau, Core Processing, Loan Origination, Servicing and Designated Recipient), which we will call actors. The “use-cases / user story themes” tell us the “how” and the actors the “who” of the business.

    Going back to our Loan Servicing Organization, we find that we have five external actors, the Applicant, Lender, Guarantor, Credit Bureau and Designated Recipient. From this we can understand the core business revolves around connecting three of these actors, the Applicant, Lender and Guarantor. We can also infer that the organization in question is not a lender itself, but acts as a broker between the lender, the applicant and the guarantor of the loan.

    We can go further and create a high-level sequence diagram outlining the major components that will be involved in the “use-case / user story theme”:

    Sequence diagram

    This sequence diagram highlights the need for one major system for Originations and two supporting components: an Imaging Service and a Formatting Service. You can see that we have started making architectural decisions already in the process of drawing a sequence diagram. When we make decisions on such matters it is not usually within the confines of one sequence diagram, but across the breadth of the overall architecture. Other sequence diagrams we would have developed would have pointed out the need for common services such as the Imaging and Formatting services.
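    The collaboration implied by the sequence diagram can be sketched as components in code; the class and method names below are invented for illustration, not the actual design:

```python
# Sketch of the collaboration in the sequence diagram: an Originations
# system delegating to shared Imaging and Formatting services. All
# class and method names are illustrative assumptions.

class ImagingService:
    def store(self, document: str) -> str:
        """Store a document image and return its identifier."""
        return f"image-id-for-{document}"

class FormattingService:
    def format(self, data: dict) -> str:
        """Render structured data as a simple key=value string."""
        return ", ".join(f"{k}={v}" for k, v in data.items())

class Originations:
    def __init__(self, imaging: ImagingService, formatting: FormattingService):
        self.imaging = imaging
        self.formatting = formatting

    def apply_for_loan(self, applicant: str, document: str) -> str:
        # The sequence: store the supporting document, then format
        # the application record using the shared service.
        image_id = self.imaging.store(document)
        return self.formatting.format({"applicant": applicant, "doc": image_id})

originations = Originations(ImagingService(), FormattingService())
print(originations.apply_for_loan("Alice", "id-proof"))
```

    Notice that the decision to share the Imaging and Formatting services across systems is an architectural one, visible even in this tiny sketch.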

    Now that we have provided a brief overview of how we want to animate the value chain, we can come back to our question.

    Should we stick with the concept of use-cases? I believe that use-cases provide an excellent approach to identifying scope and keeping the focus on the end user. However, will the term be misunderstood?

    Or, should we acknowledge the relevance of user stories and group them into user story themes?

    Finally, does it really matter? You might say one challenge with the software industry is that we keep on re-inventing the wheel and leave behind key lessons at each ‘iteration’ of fashionable terms.

  • muraterder 9:23 pm on September 29, 2014 Permalink | Reply  

    Collaboration and Communication – 50% of the Work 

    “Ivory tower” is a label quite often applied to architects within an enterprise. They can be seen as people who draw pretty diagrams and add no value. The overall objective is to make architects part of the organization rather than perceived as a separate entity.

    Trying to extract value from architects can result in quite interesting scenarios. At one point, an enterprise decided that all architects needed to be formally part of a central organization. This was to ensure that all architects were trained and applied a consistent approach, but in reality it was more of a “power play” by a senior manager.

    About six months after this change, the senior manager realized he had a group of a couple of hundred people and was getting challenged by his management about the value his architects provided. To make sure that he could justify the group, he decreed that every architect had to charge their time to actual projects. This resulted in a scenario where anyone in IT who launched a project suddenly found a queue of 5-6 architects knocking on their door – from the data architect to the network architect. Obviously, this did little to increase trust in the architecture function.

    Most architects focus their effort on ‘building’ the architecture. This is natural, since it is normally fairly technical people who enjoy building systems that become architects. However, ‘socializing’ the architecture is equally (if not more) important. In summary: improving the perception of the architecture function is 50% of an architect’s job and requires soft skills.

    What do real architects do?

    The recent decades have seen the rise of the ‘celebrity’ architect. They are successful due to their ability to sell an expensive vision, fund it, and also execute it. If you look at the opening page of the Gehry Partners website, you see the following quote:

    Gehry Partners employs a large number of senior architects who have extensive experience in the technical development of building systems and construction documents, and who are highly qualified in the management of complex projects.

    The most interesting part of the above extract is the emphasis on the management of complex projects – normally not seen as a core competency of IT architects. Maybe it should be?


    The Dancing House by Frank Gehry and Vlado Milunic in Prague.

    There is no easy cook-book for improving communication around the architecture function. However, focusing on the following communication tools would be a good starting point:

    1. Vision and Storyboard: This is stating the obvious – you have to know what your product is before you can try to market it. It is important for an architecture function to clearly identify how it is adding value to the enterprise. This is usually done at the senior management level via a set of PowerPoint presentations. While we agree that a vision and mission statement for architecture is required, once the architecture is articulated its value should be self-evident.
    2. Communication Channels: The architecture function in an enterprise has to communicate with all sets of stakeholders. This ranges from the CxO level, who has to be convinced of the value of the architecture, to the DBA who is trying to figure out whether to use stored procedures. Creating channels for open and bi-directional communication with the different stakeholder groups is required to reach out to such a wide variety of interests.
    3. Artifacts and Tools: One of our favorite definitions of architecture is: “A bag of tools and the wisdom to know when to use them”. Crystallizing and packaging architecture artifacts and tools so that they can be easily digested and used by different stakeholders is another key success factor. The most important principle here is self-service. In other words, the artifacts and tools should not require the reader to have a PhD to understand them.
    4. Process: At the end of the day, architecture activities live within a process within the enterprise. It is important to understand their impact on the SDLC and change management.
    5. Financials: The ability to articulate value in financial terms is the holy grail of architecture. This can become an all-consuming task if taken to the nth degree of detail. However, a level of financial awareness is a must.

    As can be seen from the above points, focusing on the collaboration and communication aspects of architecture is not an effort to be left as an after-hours exercise. It should be taken seriously, and any architecture function in a large enterprise should have dedicated roles focusing on these aspects.

    The table below provides a simple overview of how the communication tools can be applied to different stakeholder groups. It has been color coded: green represents most relevant, amber somewhat relevant and grey not relevant at all.


    As can be seen, different tools and techniques are required to address different stakeholder groups. It is worth noting that the only tool highly relevant for all stakeholder groups is the establishment of the correct communication channels.

    • Jeff Ryan 9:36 pm on September 29, 2014 Permalink | Reply

      Nice post. I lived through a scenario similar to the one outlined, where architects were brought into a central organization and had to learn how to show value to stakeholders. I was surprised not to see business partners listed as stakeholders architecture must gain alignment and support with, particularly since architects scope and frame efforts from business/technical perspectives and need to have credibility with the business. Showing value to business partners can accelerate acceptance by other IT roles…


      • muraterder 7:17 pm on September 30, 2014 Permalink | Reply

        Hi Jeff,

        Very good comment on business partners. You are totally right that they are a very critical stakeholder. The table was focused on a sample set, and it looks like I have fallen into the IT trap of focusing internally more than externally – just what I was trying to avoid.

        From one perspective business users seem to understand architecture better – they naively assume that IT should be acting in a structured manner. However, if you try to play the business case/financial benefit angle they can be very critical.


  • muraterder 8:29 pm on September 17, 2014 Permalink | Reply
    Tags: , , , ,   

    Architects Unit of Work – Architectural Decisions 

    If you ask most people what the most visible output from architecture is, they will most likely point to a fancy diagram that highlights the key components and their interactions. Usually, the more color and complexity the better; ideally it is too difficult to read on a normal page and requires a special large-format printer to produce. Though such diagrams give their authors and readers the false sense of being in control, they normally have no impact on driving any architectural change.

    The most important output of any architectural activity is the set of decisions made along the product development journey. It comes as a surprise that so little effort is spent on arriving at and documenting architectural decisions in a consistent and understandable manner.

    Most architectural decisions have the following elements:

    • Problem Statement: State the problem we are addressing and its context.
    • Motivation: Explain why we need to make the decision at this point in time.
    • Constraints: It is important to clearly articulate all constraints related to a decision – architecture is in essence about finding the optimal solution within the constraints given to us.
    • Requirements: As stated in Principle 2: Focus on Quality Attributes – not on Functional Requirements, it is important to explicitly document non-functional requirements.
    • Alternatives: List the alternative solutions to the problem, clearly describing the pros and cons of each.
    • Decision: Clearly articulate the final decision, leaving no room for ambiguity.
    • Rationale: Outline the thought process that resulted in the decision.
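    These elements can be captured in a lightweight, machine-readable decision record. The sketch below uses a Python dataclass whose fields mirror the list above; the class name, field names and example values (taken loosely from the WebShop scenario discussed elsewhere on this blog) are illustrative assumptions, not a prescribed format:

    ```python
    from dataclasses import dataclass

    @dataclass
    class ArchitecturalDecision:
        """A lightweight record for one architectural decision."""
        problem_statement: str        # the problem and its context
        motivation: str               # why the decision must be made now
        constraints: list[str]        # boundaries the solution must respect
        requirements: list[str]       # quality attributes / NFRs (Principle 2)
        alternatives: dict[str, str]  # option -> pros/cons summary
        decision: str                 # the unambiguous final choice
        rationale: str                # thought process behind the choice
        decided_by: str               # accountability: who made it
        decided_on: str               # and when (ISO date)

    adr = ArchitecturalDecision(
        problem_statement="How should WebShop services talk to back-end systems?",
        motivation="The integration layer must be chosen before service build-out.",
        constraints=["Existing messaging infrastructure", "Quarterly release cycle"],
        requirements=["Latency under 500 ms", "Testable in isolation"],
        alternatives={"Point-to-point": "simple but brittle",
                      "Message bus": "decoupled but operationally heavier"},
        decision="Use the message bus with versioned message contracts.",
        rationale="Minimizes coupling (Principle 4) and eases future change.",
        decided_by="WebShop architecture group",
        decided_on="2014-09-17",
    )
    print(adr.decision)
    ```

    A record like this can live in version control next to the code it governs, so the decision log evolves at the same cadence as the product.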

    Finally, there is one more piece of critical information required: who made this decision, and when? Appropriate accountability and visibility of decisions are a must:

    In his excellent book Visual Explanations, Edward Tufte provides an incisive analysis of the decision to launch the Space Shuttle Challenger. Over 10 pages of detailed analysis, Tufte demonstrates the multiple failings of communication that resulted in the shuttle being launched at too low a temperature, leading to the fatal O-ring failure. The day before the launch, the rocket engineers prepared a presentation outlining the temperature concerns. As Tufte describes: “The charts were unconvincing; the arguments against the launch failed; the Challenger blew-up.” One of the most interesting points raised by Tufte was that the title chart and all other displays used did not provide the names of the people who prepared the material.

    We would also recommend not only documenting architectural decisions, but defining the architectural decisions you need to make upfront and identifying the dependencies between them. Following is an example for the architectural decisions required in defining an integration approach for a specific business area:

    Architectural Decision

    It is appropriate to remember at this point Principle 3: Delay design decisions until they are absolutely necessary. What we are saying here is not in conflict with this principle. It is important to clearly understand all the architectural decisions that you need to make – or, as Donald Rumsfeld said, the “known unknowns”. Then, as more data becomes available, you can start making and documenting the decisions.
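    One way to make the upfront list of decisions and their dependencies concrete is to treat them as a small graph and derive the order in which they become makeable – each decision is unblocked only once its prerequisites are decided, which is consistent with Principle 3. The decision names below are hypothetical, loosely following the integration example above:

    ```python
    from graphlib import TopologicalSorter

    # Hypothetical integration decisions; each maps to the set of
    # decisions it depends on (its predecessors).
    dependencies = {
        "transport protocol": set(),
        "message format": {"transport protocol"},
        "error handling": {"message format"},
        "monitoring approach": {"transport protocol"},
    }

    # static_order() yields prerequisites before dependents, giving the
    # earliest order in which each decision could responsibly be made.
    order = list(TopologicalSorter(dependencies).static_order())
    print(order)
    ```

    The point of the sketch is not the tooling but the discipline: the graph names the known unknowns explicitly, and the ordering shows which decisions can still safely be delayed.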

    Finally, communicating the list of architectural decisions – past, present and future – has tremendous value. A good practice is to be public with all the architectural decisions and all the background dialogue, utilizing the social collaboration tools available in the enterprise. Why not make everyone finally understand what architects spend their time on? Basically, the unit of work of architecture is the architectural decision. The difference between Enterprise Architects and Solution Architects is nothing more than the level of abstraction at which they make those decisions.

    • FM 1:30 pm on September 21, 2014 Permalink | Reply

      The decision log has been a key artifact I’ve learned to use over the years. I believe it captures not just the architect’s thought process but should reflect the zeitgeist and various constraints of the time. We’ve used it successfully recently to manage a consistent decision cadence in agile projects with little certainty and high discovery.

      You have a few more decision attributes in your list that I think I will incorporate into our next set of initiatives.


      • muraterder 6:47 pm on September 23, 2014 Permalink | Reply

        Great to see that the decision logs are being successfully utilized in agile projects.


    • pkruchten 10:33 pm on September 26, 2014 Permalink | Reply

      Other views on Architectural design decisions (ADD) include:
      J. Tyree and A. Akerman, “Architecture Decisions: Demystifying Architecture,” IEEE Software, vol. 22(2), pp. 19-27, 2005.
      and mine, here:

      One of the key elements is the many relationships between design decisions.

      Several tools have been tried to help capture and manage ADDs, without great success.


    • Charles T. Betz (@CharlesTBetz) 1:08 pm on September 29, 2014 Permalink | Reply

      I would not be quite so dismissive of the power of architecture diagrams. Have written about this here:


    • muraterder 9:25 pm on September 29, 2014 Permalink | Reply

      Very good point about the value of pictures and images, which relates to how critical collaboration and communication are to the success of the architecture function in an enterprise. I have added an additional post that touches on this topic.


  • pierrepureur 10:41 pm on September 2, 2014 Permalink | Reply
    Tags: , , , ,   

    Continuous Architecture Principles (6 of 6) 

    This is (finally) the last installment in our discussion of the six principles of Continuous Architecture:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    In the previous pages, we discussed Principles 1, 2, 3 , 4 & 5. This page will discuss Principle 6 – how to organize teams!

    Principle 6: Model the organization of your teams after the design of the system you are working on, in order to promote interoperability

    The first five principles dealt with process and technology – but what about the third dimension of every IT project we are involved in: people? How does Continuous Architecture impact the organization of software delivery teams – or, conversely, how can we better architect our teams to support Continuous Delivery? First, some scoping considerations: for Continuous Architecture to be effective, we need to include all constituents in the delivery team – not only those responsible for the build activities (designers and developers) but also the groups responsible for testing and deployment, as well as the groups responsible for requirements (Principle 4). Collaboration is a key element of Continuous Architecture.

    Going back to our hypothetical example from the discussion of Principle 5: the IT group in the case study initially attempted to organize the “WebShop” project resources as a number of Agile teams, each focused on one of the layers of the system – User Interfaces, Mid-Tier Services, Database Services, and Back-end Interface Services – and quickly discovered that this approach was counterproductive. Organizing teams in layers did not promote collaboration; it created communication issues. In addition, the adoption of Agile techniques varied by team: the UI team decided to be as “Agile” as possible, while the team responsible for the services interfacing with back-end systems elected to follow their traditional “waterfall” approach. Using several SDLC approaches simultaneously created more misunderstandings. Since back-end interface services weren’t ready for testing until the end of their waterfall development cycle, the UI and mid-tier developers had to “stub out” their calls to those services during iteration testing, which prevented them from testing with realistic test cases and resulted in the majority of defects being discovered in the QA phase. This in turn caused significant unplanned rework to fix those defects. Finally, working sessions between the teams were few and far between, and that lack of communication contributed to additional misunderstandings, resulting in more defects.

    What happened here to our hapless IT team is that they were the victims of what is known in the IT community as “Conway’s Law”. Back in 1968 (according to Wikipedia), computer programmer Melvin Conway introduced what became “Conway’s Law” at the National Symposium on Modular Programming: “organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations”.  This was further elaborated by James O. Coplien and Neil B. Harrison as follows: “If the parts of an organization (e.g. teams, departments, or subdivisions) do not closely reflect the essential parts of the product, or if the relationship between organizations do not reflect the relationships between product parts, then the project will be in trouble… Therefore: Make sure the organization is compatible with the product architecture” (Coplien and Harrison – July 2004 – Organizational Patterns of Agile Software Development).  Putting Conway’s Law to work for you – and not against you – means organizing your teams after the design of the system you are working on, in order to promote interoperability. In this case study, it would have been much better to organize the teams vertically (i.e. by transaction type) rather than by architecture layer.
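    Conway alignment can even be checked mechanically. The sketch below – with invented component and team names – flags every component dependency that crosses a team boundary, since each such crossing implies a team-to-team communication channel that must actually exist in the organization:

    ```python
    # Hypothetical mapping of components to the teams that own them.
    component_team = {
        "search-ui": "quote-team",
        "quote-service": "quote-team",
        "policy-ui": "policy-team",
        "policy-service": "policy-team",
        "backend-gateway": "integration-team",
    }

    # Hypothetical component dependencies (caller, callee).
    dependencies = [
        ("search-ui", "quote-service"),
        ("quote-service", "backend-gateway"),
        ("policy-ui", "policy-service"),
        ("policy-service", "backend-gateway"),
    ]

    # Each dependency that crosses team boundaries needs an explicit
    # communication path between the two teams.
    cross_team = [(a, b) for a, b in dependencies
                  if component_team[a] != component_team[b]]
    print(cross_team)
    ```

    A vertically organized team (by transaction type) minimizes the size of this list; the layered organization in the case study maximizes it, which is exactly where the misunderstandings came from.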

    In Summary: The Principles of Continuous Architecture At A Glance:

    In this blog, we defined “Continuous Architecture” as an architecture style that follows the following six Principles:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    Continuous Architectures are designed to have the following capabilities: they are resilient to change, they are testable, and they can respond to feedback – in fact, they are driven by feedback. The following pages will discuss how Continuous Architecture begins, how it evolves, and the role of an architect in Continuous Architecture, so stay tuned!

  • pierrepureur 11:25 pm on August 28, 2014 Permalink | Reply
    Tags: , , , ,   

    Continuous Architecture Principles (5 of 6) 

    We are almost done with our discussion of the six principles of Continuous Architecture:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    In the previous two pages, we discussed Principles 1, 2, 3 & 4. This page will discuss Principle 5 – where things get even more interesting.

    Principle 5: Architect for Build, Test and Deploy

    So far the first four principles for Continuous Architecture are not specific to projects using the Continuous Delivery approach. This changes with the fifth Principle – “Architect for Build, Test and Deploy”. Adopting a Continuous Delivery model implies that all phases of the Software Development Life Cycle (SDLC) need to be optimized for Continuous Delivery. Adopting an Agile Methodology such as SCRUM is a good first step towards Continuous Delivery – but it’s not enough by itself.

    Let’s use a simple, hypothetical example to illustrate this point. An IT group in a large US Financial Services corporation has just received a request from their business partners to build and implement a new web-based on-line system that allows prospective customers to compare one of their products to the competition’s offerings – we will call this the “WebShop” system. The group has an excellent track record of delivering projects on time and within budget, and their focus is on stability and security rather than time to market. Major software releases follow a quarterly schedule, while minor releases are delivered on a monthly basis. Historically, they have been using a “waterfall” approach for their Software Development Life Cycle (SDLC), with some recent attempts at moving toward a more iterative approach (really a “fast waterfall”) and even some Agile pilots for small projects. Their infrastructure is optimized for this delivery schedule – they use fixed, pre-defined “silos” (common to all applications) for their centralized Quality Assurance (QA) testing group to test new versions of application software before deployment to production as part of the release schedule.

    The IT organization decides to use some of the Agile techniques as well as a Continuous Integration[1] approach to building software, and delivers the new system on time, with few defects. Their only concern is that despite adopting Agile techniques and Continuous Integration practices – including some automated tests as part of each build – the vast majority of defects are still found during QA testing. In addition, deployment of application software to the various testing environments and to the production environment is still a long and error-prone process.

    After the system has been delivered, their business partners want to quickly make several changes to the user interfaces (UIs), and change some business rules. They also want to have multiple versions of the application in production, to test various hypotheses and understand which UI configuration works better with their target audience (this approach is known as “Champion/Challenger” testing or A/B testing). The IT group discovers that their skills and processes are not well adapted to this new situation. Adopting Agile techniques enables them to optimize the “Design/Build” phase of the SDLC but their cross-system integration, testing and deployment processes (described as the “last mile” in the picture below) remain a bottleneck, and prevent them from achieving their rapid delivery goal.
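    The “Champion/Challenger” (A/B) idea mentioned above has a simple mechanical core: deterministically route each user to one variant so that each version is measured against a stable audience. A minimal sketch, assuming hash-based bucketing (the variant names and the 10% challenger share are illustrative):

    ```python
    import hashlib

    def assign_variant(user_id: str,
                       variants=("champion", "challenger"),
                       challenger_share=0.10) -> str:
        """Deterministically assign a user to the champion or challenger UI.

        The same user always lands in the same bucket, so results for each
        variant are measured against a consistent slice of the audience.
        """
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
        return variants[1] if bucket < challenger_share else variants[0]

    # The assignment is stable across calls for the same user:
    print(assign_variant("customer-42") == assign_variant("customer-42"))  # True
    ```

    Supporting this in production is precisely where the IT group's single-pipeline, silo-based deployment process breaks down: running two versions side by side requires deployment machinery that can target a subset of users, not just replace the whole application.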

     Principle 5-1

     Attempts at speeding up the existing integration, testing and deployment processes by taking shortcuts result in errors and production issues. In effect, they are forcing a process designed to release software on a monthly cycle to release on a weekly (or even faster) cadence, and the process no longer functions smoothly.

    What could they have done differently? We believe that it is important to design an architecture optimized for the whole SDLC process, not just the “Design/Build” phase of the process. In Continuous Architecture, the architect needs to take into account the integration, testing and deployment requirements – and Use Cases and Scenarios are an excellent way to document those requirements. Each iteration includes a design/build, integration/testing and deployment component.

     Principle 5-2

     In practice, this is achieved by designing small, API-testable services and components. It also means keeping coupling between components to an absolute minimum (remember Principle 4 and the Robustness Principle). Finally, it means avoiding putting business logic in hard-to-test areas such as the messaging infrastructure.
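    A hedged sketch of what “small, API-testable” means in practice: the business logic sits behind a plain function interface with no dependency on messaging, database or UI plumbing, so a CI build can exercise it directly in milliseconds. The comparison rule below is invented purely for illustration of the WebShop scenario:

    ```python
    def compare_products(our_price: float, competitor_prices: list[float]) -> dict:
        """Pure, API-testable comparison logic for the hypothetical WebShop.

        No queue, database, or UI dependency: the entire behavior is reachable
        through this one call, so it can be tested as part of every build.
        """
        if not competitor_prices:
            raise ValueError("need at least one competitor price")
        cheapest = min(competitor_prices)
        return {
            "our_price": our_price,
            "cheapest_competitor": cheapest,
            "we_are_cheapest": our_price < cheapest,
        }

    result = compare_products(19.99, [24.50, 21.00])
    print(result["we_are_cheapest"])  # True
    ```

    Had the same rule been buried in a message-transformation step, verifying it would have required standing up the whole messaging stack – exactly the hard-to-test placement the principle warns against.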

    The goal here is to keep testing and deployment in mind at all times – and to avoid the “We’ll fix it in testing” attitude so often used to justify defective code in order to make an artificial deadline. Defects are waste – it takes time and effort to create them, find them and fix them!

    [1] See for example Martin Fowler’s blog for an excellent introduction to Continuous Delivery
