Updates from August, 2014

  • pierrepureur 11:25 pm on August 28, 2014

    Continuous Architecture Principles (5 of 6) 

    We are almost done with our discussion of the six principles of Continuous Architecture:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    In the previous pages, we discussed Principles 1, 2, 3 & 4. This page will discuss Principle 5 – where things get even more interesting.

    Principle 5: Architect for Build, Test and Deploy

    So far, the first four principles of Continuous Architecture are not specific to projects using the Continuous Delivery approach. This changes with the fifth Principle – “Architect for Build, Test and Deploy”. Adopting a Continuous Delivery model implies that all phases of the Software Development Life Cycle (SDLC) need to be optimized for it. Adopting an Agile methodology such as Scrum is a good first step towards Continuous Delivery – but it’s not enough by itself.

    Let’s use a simple, hypothetical example to illustrate this point. An IT group in a large US Financial Services Corporation has just received a request from their business partners to build and implement a new web-based online system to allow prospective customers to compare one of their products to the competitors’ offerings – we will call this the “WebShop” system. They have an excellent track record of delivering projects on time and within budget, and their focus is on stability and security rather than time to market. Major software releases follow a quarterly schedule, while minor releases are delivered on a monthly basis. Historically, they have been using a “waterfall” approach for their Software Development Life Cycle (SDLC), with some recent attempts at moving toward a more iterative approach (really a “fast waterfall”) and even some agile pilots for small projects. Their infrastructure is optimized for this delivery schedule – they use fixed, pre-defined “silos” (common to all applications) for their centralized Quality Assurance (QA) testing group to test the new versions of their application software before it is deployed in production as part of their release schedule.

    The IT organization decides to use some of the Agile techniques as well as a Continuous Integration[1] approach to building software, and delivers the new system on time, with few defects. Their only concern is that despite adopting Agile techniques and Continuous Integration practices – including some automated tests as part of each build – the vast majority of defects are still found during QA testing. In addition, deployment of application software to the various testing environments and to the production environment is still a long and error-prone process.
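    To make the “automated tests as part of each build” idea concrete, here is a minimal sketch of a build verification test in Python. The WebShop `compare_products` function and its fields are purely hypothetical – the point is that the check runs on every build, long before code reaches the QA silo.

```python
import unittest

def compare_products(ours: dict, competitor: dict) -> dict:
    """Hypothetical WebShop logic: return the fields where the two offerings differ."""
    return {k: (ours[k], competitor.get(k)) for k in ours if ours[k] != competitor.get(k)}

class BuildVerificationTest(unittest.TestCase):
    """Runs as part of every Continuous Integration build (python -m unittest)."""

    def test_differences_are_reported(self):
        result = compare_products({"rate": 1.9, "fee": 0}, {"rate": 2.5, "fee": 0})
        self.assertEqual(result, {"rate": (1.9, 2.5)})

    def test_identical_products_compare_equal(self):
        self.assertEqual(compare_products({"fee": 0}, {"fee": 0}), {})
```

    Even a handful of such tests, wired into the build, moves defect detection earlier in the life cycle than the QA silo allows.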

    After the system has been delivered, their business partners want to quickly make several changes to the user interfaces (UIs), and change some business rules. They also want to have multiple versions of the application in production, to test various hypotheses and understand which UI configuration works better with their target audience (this approach is known as “Champion/Challenger” testing or A/B testing). The IT group discovers that their skills and processes are not well adapted to this new situation. Adopting Agile techniques enables them to optimize the “Design/Build” phase of the SDLC but their cross-system integration, testing and deployment processes (described as the “last mile” in the picture below) remain a bottleneck, and prevent them from achieving their rapid delivery goal.
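    One common mechanism behind “Champion/Challenger” (A/B) testing is deterministic bucketing, so that the same user always sees the same UI version. The sketch below illustrates the idea, assuming hash-based assignment; the variant names and the 90/10 split are purely illustrative.

```python
import hashlib

def assign_variant(user_id: str,
                   variants=("champion", "challenger"),
                   split=(90, 10)) -> str:
    """Deterministically bucket a user into a UI variant.

    Hashing the user ID (rather than choosing randomly) guarantees a
    stable experience across visits, which A/B analysis depends on.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % sum(split)
    threshold = 0
    for variant, weight in zip(variants, split):
        threshold += weight
        if bucket < threshold:
            return variant
    return variants[-1]
```

    With this in place, running multiple versions in production becomes a routing decision rather than a deployment event – but only if the architecture and deployment pipeline can support several live versions at once.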

     [Figure: Principle 5-1 – the “last mile”: cross-system integration, testing and deployment]

     Attempts at speeding up existing integration, testing and deployment processes by taking short cuts result in errors and production issues. In effect, they are attempting to force a process designed to release software on a monthly cycle to release on a weekly (or even faster) cadence, and the process is no longer functioning smoothly.

    What could they have done differently? We believe that it is important to design an architecture optimized for the whole SDLC process, not just the “Design/Build” phase of the process. In Continuous Architecture, the architect needs to take into account the integration, testing and deployment requirements – and Use Cases and Scenarios are an excellent way to document those requirements. Each iteration includes a design/build, integration/testing and deployment component.

     [Figure: Principle 5-2 – each iteration includes a design/build, integration/testing and deployment component]

     In practice, this is achieved by designing small, API-testable services and components. It also means keeping coupling of components to an absolute minimum (remember Principle # 4 and the Robustness Principle). Finally, it means avoiding putting business logic in hard-to-test areas such as the messaging infrastructure.
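    A minimal sketch of what “small, API-testable” can mean in practice: the business logic lives in a pure function that can be tested through its API alone, while the messaging adapter stays a thin shell around it. All names and discount rates here are hypothetical.

```python
def apply_discount(price: float, customer_tier: str) -> float:
    """Business logic kept out of the messaging layer:
    a pure function, testable through its API alone."""
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    if customer_tier not in rates:
        raise ValueError(f"unknown tier: {customer_tier}")
    return round(price * (1 - rates[customer_tier]), 2)

def handle_message(message: dict) -> dict:
    """The transport adapter is a thin shell; it contains no rules of its own."""
    return {"price": apply_discount(message["price"], message["tier"])}
```

    Because the adapter contains no logic, tests never need a running message broker – they exercise `apply_discount` directly.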

    The goal here is to keep testing and deployment in mind at all times – and to avoid the “We’ll fix it in testing” attitude so often used to create defective code in order to make an artificial deadline. Defects are waste – it takes time and effort to create them, find them and fix them!

    [1] See for example Martin Fowler’s blog at http://martinfowler.com/bliki/ContinuousDelivery.html for an excellent introduction to Continuous Delivery.

  • pierrepureur 11:31 pm on August 26, 2014

    Continuous Architecture Principles (4 of 6) 

    Let’s continue our discussion of the six principles of Continuous Architecture:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    In the previous pages, we discussed Principles 1, 2 & 3. This page will discuss Principle 4 – and this is where things get really interesting, so please read on!

    Principle 4: Architect for Change – Leverage “The Power of Small”

    The third “Principle for Continuous Architecture” implies that the architecture and the design of the system will change as requirements – especially non-functional requirements – emerge. Change is unavoidable, and as more architecture and design decisions are made, the structure of the system will start to evolve. But how does an architect create a design based on the few non-functional requirements known at the beginning of a project (Principles 1 and 2) that is nevertheless resilient to change? This is where our fourth Principle comes to the rescue. Specifically, our recommendation is to design an architecture based on smaller, loosely coupled components – and to replace (not change) those components as new requirements emerge. Using loosely coupled components isn’t a new idea – it has been around since at least the 1980s. In fact, David Parnas’ work on information hiding and modular programming back in the early 1970s introduced this concept, and the Simula language implemented similar ideas in the 1960s.

    But what is loose coupling? According to Wikipedia, “In computing and systems design a loosely coupled system is one in which each of its components has, or makes use of, little or no knowledge of the definitions of other separate components”, and this concept was introduced by Karl Weick in 1976. Coupling refers to the degree of direct knowledge that one component has of another.  The objective of using loose coupling is to reduce the risk that a change made within one component will result in an unintended behavior in another component. Limiting interconnections between components can help isolate problems and simplify testing, maintenance and troubleshooting.

    A loosely coupled system can be easily broken down into specific parts, which are easier to understand and replace. One consideration to keep in mind when using this design strategy is that loose coupling can create challenges when a high degree of interaction between components is required. For example, in some mobile applications a high degree of element interaction is necessary for synchronization in real time – so like any other design approach loose coupling needs to be used carefully, with a full understanding of the requirements. Another benefit of loose coupling and replaceability is that they enable the delay of design decisions, and even the reversal of some decisions, by substituting different implementations of services.  In order to do this, the services must be partitioned to appropriately separate concerns.
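    The substitution idea can be sketched in code. Below, the rest of the system depends only on a contract (a hypothetical `QuoteStore`); the in-memory implementation is good enough to delay the database decision, and can later be replaced by a database-backed class without touching any consumer.

```python
from abc import ABC, abstractmethod

class QuoteStore(ABC):
    """The contract: consumers know this interface and nothing else."""

    @abstractmethod
    def save(self, quote_id: str, amount: float) -> None: ...

    @abstractmethod
    def load(self, quote_id: str) -> float: ...

class InMemoryQuoteStore(QuoteStore):
    """First iteration: a replaceable implementation that delays the
    database decision until it is absolutely necessary (Principle 3)."""

    def __init__(self):
        self._data = {}

    def save(self, quote_id: str, amount: float) -> None:
        self._data[quote_id] = amount

    def load(self, quote_id: str) -> float:
        return self._data[quote_id]

def record_quote(store: QuoteStore, quote_id: str, amount: float) -> float:
    """A consumer: coupled to the contract, not to any implementation."""
    store.save(quote_id, amount)
    return store.load(quote_id)
```

    Swapping `InMemoryQuoteStore` for a persistent implementation reverses an earlier decision without rippling through the rest of the system – exactly the replaceability this principle is after.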


    In a nutshell, architecting for change requires a focus on interoperability and avoiding coupling. Keeping the Robustness Principle (see below) in mind is a good Continuous Architecture practice!

    The Robustness Principle, also known as Postel’s Law
    This principle is defined as “Be conservative in what you do, be liberal in what you accept from others” – often reworded as “Be conservative in what you send, be liberal in what you accept”[1]. In other words, code that sends commands or data to other machines (or to other programs on the same machine) should conform completely to the specifications, but code that receives input should accept non-conformant input as long as the meaning is clear.
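    Postel’s Law might look like this in code – a lenient parser paired with a strict formatter. The currency-handling rules below are illustrative, not from any particular specification.

```python
def parse_amount(raw: str) -> float:
    """Liberal in what we accept: tolerate whitespace, a leading
    currency symbol and thousands separators, as long as the
    meaning is clear."""
    cleaned = raw.strip().lstrip("$").replace(",", "")
    return float(cleaned)

def format_amount(value: float) -> str:
    """Conservative in what we send: always emit the one
    canonical form, with exactly two decimal places."""
    return f"{value:.2f}"
```

    For example, `parse_amount(" $1,250.5 ")` accepts a sloppy input, while `format_amount` always sends back the strict form `"1250.50"`.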

    But what about using small components? According to Wikipedia, “The Unix philosophy emphasizes building short, simple, clear, modular, and extendable code that can be easily maintained and repurposed by developers other than its creators. The philosophy is based on composable (as opposed to contextual) design”. “Micro-services” are a good example of applying this design philosophy beyond the Unix operating system. Using this approach, many services are designed as small, simple units of code with as few responsibilities as possible (a single responsibility would be optimal), but leveraged together they can become extremely powerful. The “Micro-Service” approach can be thought of as a refinement of Service Oriented Architectures (SOAs); please see for example Martin Fowler’s and James Lewis’ blog at http://martinfowler.com/articles/microservices.html for an excellent introduction to Microservices.

    Amazon is of course a good illustration of this approach, as they are strongly committed to micro-services. Using this design philosophy, the system needs to be architected so that each of its capabilities can be consumed independently and on demand. The concept behind this design approach is that applications should be built from components that do a few things well, are easy to understand, and are small enough to be thrown away and replaced should requirements change.

    In practice, building a system entirely from micro-services today could be a stretch, since integration and deployment challenges can become overwhelming as the number of micro-services grows. However, significant progress is being made in managing and deploying micro-services, and it is very likely that building a large system entirely with micro-services will become a viable proposition in the near future. In the meantime, micro-services can be leveraged for designing those parts of the system that are most likely to change – thereby making the entire application more resilient to change. Micro-services are a critical tool in the Continuous Architecture toolbox, as they enable loose coupling of services as well as replaceability – and therefore quick and reliable delivery of new functionality.

    Continuous Architecture vs. “Emergent Architecture”
    Please note that we are not talking about the “Emergent Architecture” process of some Agile projects here. Those projects apply Agile Manifesto Principle #11 (“The best architectures, requirements and designs emerge from self-organizing teams”), and this results in architectures that usually require significant amounts of refactoring as new requirements emerge. This approach may be appropriate for smaller projects, but for larger systems some amount of architecture planning and governance is required up front to ensure that Quality Attributes are met and that appropriate infrastructure is available in time to support development, testing and implementation activities.

    [1] Definition from Wikipedia

  • pierrepureur 4:07 pm on August 24, 2014

    Continuous Architecture Principles (3 of 6) 

    In this page, we are continuing our discussion of the six principles of Continuous Architecture:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    In the previous two pages, we discussed Principles 1 & 2. This page will discuss Principle 3, and the remaining principles will be discussed in subsequent pages.

    Principle 3: Delay design decisions until they are absolutely necessary

    The second “Principle for Continuous Architecture” provides us with guidance to make architectural and design decisions to satisfy quality attribute requirements, and not to focus exclusively on functional requirements. This is of course a good idea (why would we be recommending it in this blog if it wasn’t?) since functional requirements are likely to change often – especially if we are architecting one of those “Systems of Engagement” delivered over a mobile device with a User Interface that’s likely to be obsolete very shortly after it was specified!

    But what about quality attribute requirements? Surely performance and availability requirements are not likely to change often, and our business partners should be able to describe them accurately early enough in the project life cycle to drive accurate and appropriate design decisions. Unfortunately, that’s not often the case. Performance and availability targets may be vaguely described, as Service Level Agreements (SLAs) or Service Level Objectives (SLOs) are not always clear. A common practice is to err on the side of conservatism when describing those objectives, which may result in unrealistic requirements.

    Philippe Kruchten tells the following story about the importance of clarifying requirements. Back in 1992, Philippe was leading the architecture team for the Canadian Automated Air Traffic System (CAATS), and the team had a requirement of “125 ms to process a new position message from the Radar Processing System, from its arrival in the Area Control Center till all displays are up-to-date”. In Philippe’s words: “After trying very hard to meet the 125 ms for several months, I was hiking one day, looking at a secondary radar slowly rotating (I think it was the one on the top of Mount Parke on Mayne Island, just across from Vancouver Airport). I thought, ‘mmm, there is already a 12-20 second lag in the position of the plane, why would they bother with 125 ms?’
    Note: a primary radar uses an echo of an object. A secondary radar sends a message ‘who are you?’ and the aircraft responds automatically with its ID and its altitude (http://en.wikipedia.org/wiki/Secondary_surveillance_radar). It looks like this: https://www.youtube.com/watch?v=Z0mpzIBWVG4 (I knew all this because I am myself a pilot).

    Then I thought: ‘in 125 ms, how far across the screen can an aircraft go, assuming a supersonic jet and full magnification?’ Some back-of-the-envelope computation gave me… about 1/2 pixel! When I had located the author of the requirement, he told me: ‘mmm, I allocated 15 seconds to the radar itself (rotation), 1 second for the radar processing system, 4 seconds for transmission through various microwave equipment; that left 1 second in the ACC. Breaking this down between all the equipment in there – router, front end, etc. – it left 125 ms for your processing, guys: updating and displaying the position.’ These may not have been his exact words as this happened a long time ago, but this was the general line.
    Before agile methodologies made it a ‘standard’, it was useful for the architects to have direct access to the customer, often on site – and in this case, being able to speak the same language (French).”
    Philippe’s story clearly stresses that it is important for an architect to question everything, and not to assume that requirements as stated are absolute.
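    For the curious, the back-of-the-envelope computation can be reproduced. The speed and display-scale figures below are our own assumptions for illustration, not values from the CAATS project.

```python
# Rough reconstruction of the "about 1/2 pixel" estimate.
# All figures are assumptions, not values from the CAATS project.
aircraft_speed_m_s = 600    # supersonic jet, roughly Mach 2 at altitude
latency_s = 0.125           # the 125 ms processing budget
meters_per_pixel = 150      # assumed display scale at full magnification
                            # (e.g. ~150 km of airspace across ~1000 pixels)

distance_m = aircraft_speed_m_s * latency_s    # 75 m covered in 125 ms
pixels_moved = distance_m / meters_per_pixel
print(f"{pixels_moved:.2f} pixels")            # on the order of half a pixel
```

    Whatever exact numbers are assumed, the aircraft moves a fraction of a pixel in 125 ms – which is exactly why the requirement deserved to be questioned.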

     “Modifiability” is especially hard to quantify or to describe – how do you measure the capability of a system to respond to changes that are not yet known? Responding to a poorly defined “modifiability” or “configurability” requirement may lead the architect to unnecessarily introduce complex components. For example, it may seem appropriate for an architect to include a Rules Engine in her design in order to implement a set of business rules, future-proofing the solution against any change or addition to those rules. However, the same architect may discover a few years later that this kind of flexibility wasn’t really necessary. One of the issues associated with Rules Engines is that they tend to take control of the architecture over time, as the Rules Engine often ends up being tightly coupled with nearly every aspect of the architecture. Those “changing” business rules could perhaps have been better implemented in a traditional programming language, since their low rate of change and low complexity didn’t justify the added time and expense associated with a Rules Engine. When following this approach (i.e. writing rules in the code itself), it is useful to identify the rules (perhaps with some sort of comment) so they can be easily located later.
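    One way to keep business rules in ordinary code and still be able to locate them later is to tag them, for example with a small registry decorator. This is a sketch; the rule ID and fee amounts are invented.

```python
# A lightweight registry: every tagged rule can be found later by ID,
# without the coupling cost of a full Rules Engine.
BUSINESS_RULES = {}

def business_rule(rule_id: str):
    """Decorator that tags a plain function as a business rule."""
    def register(fn):
        BUSINESS_RULES[rule_id] = fn
        return fn
    return register

@business_rule("BR-017: waive monthly fee for balances over 10,000")
def monthly_fee(balance: float) -> float:
    return 0.0 if balance > 10_000 else 12.0
```

    If the rate of change later turns out to justify it, the tagged functions form a ready-made inventory of exactly which rules would move into a Rules Engine.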

    The following diagram depicts two examples of commonly used 3-Tier Architectures: one without a Rules Engine (rules are embedded in each Tier, either as configuration or as part of the application code) and one with a Rules Engine. In the second example, the Rules Engine is tightly coupled with each Tier – we may face a long and expensive project should we ever need to replace or eliminate it.

     [Figure: Two 3-Tier Architectures – without and with a Rules Engine]
    Our recommendation is to make design decisions based on known facts – not guesses. A Six-Sigma technique (Quality Function Deployment, or QFD for short) can be leveraged to make sound architecture and design decisions. Please see “QFD in the Architecture Development Process” – IT Professional (IEEE Computer Society publication), November/December 2003 – for more details on this approach.

    One of the advantages of the QFD process is that it encourages architects to document the rationale behind architecture and design decisions and to base decisions on facts, not fiction.

    As we move away from waterfall application development life cycles involving large requirement documents and evolve toward the rapid delivery of a Minimum Viable Product (MVP), we need to create Minimum Viable Architectures (MVAs) to support MVPs. This concept will be discussed in more detail in subsequent pages of this blog.

    To that effect, limiting the budget spent on architecting is a good thing; it forces the team to think in terms of a Minimum Viable Architecture that starts small and is only expanded when absolutely necessary. Too often, a team will solve problems that don’t exist, and yet fail to anticipate a crippling challenge that kills the project. Getting to an executable architecture quickly, and then evolving it, is essential for modern applications.

  • pierrepureur 10:34 pm on August 20, 2014

    Continuous Architecture Principles (2 of 6) 

    In the previous page, we introduced the six principles of Continuous Architecture:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    We also discussed Principle 1. This page will discuss Principle 2, and the other principles will be discussed in subsequent pages.

    Principle 2: Focus on Quality Attributes – not on Functional Requirements
    Requirements for any IT system can be classified in the following two categories:

    1. Functional Requirements: Functional requirements describe the business capabilities that the system must provide, as well as its behavior at run-time. Common approaches for documenting functional requirements include Use Cases (if an iterative methodology is being used) and User Stories (if an agile methodology is being used)
    2. Non-Functional Requirements: these requirements describe the “Quality Attributes” that the system must meet in delivering functional requirements. They are usually classified into “Quality Attribute Requirements” (defined as “qualifications of the functional requirements or of the overall product”) and “Constraints” – which are “design decisions with zero degrees of freedom”. The following diagram depicts a set of sample Quality Attributes which we may identify as part of our Requirements:

    [Figure: Quality Attribute Utility Tree]

    Intuitively, we gravitate towards Functional Requirements. For example, Philippe Kruchten in his seminal article[1] recommends focusing on a “small subset of important scenarios – instances of use cases – to show that the elements of the four views work together seamlessly… The scenarios are in some sense an abstraction of the most important requirements”, which are generally assumed to be Functional Requirements. Those requirements tend to be well documented and carefully reviewed by the business stakeholders, while quality attribute requirements tend to be more succinct (usually provided as a simple list that may fit on a single page) and perhaps not as carefully scrutinized. Real life examples of poorly documented Quality Attribute requirements include:

    • The system must operate 24/7
    • The system must be extremely user-friendly
    • The system must be very fast
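    Vague statements like these can be restated as testable quality attribute scenarios. Below is one way to capture them as structured records; the field names follow the common stimulus/environment/response pattern, and the numbers are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """A measurable restatement of a vague quality requirement."""
    attribute: str
    stimulus: str
    environment: str
    response_measure: str

# "The system must be very fast", restated so it can actually be tested:
latency = QualityAttributeScenario(
    attribute="Performance",
    stimulus="User submits a product comparison request",
    environment="Normal load, 500 concurrent sessions",
    response_measure="95th percentile response time under 2 seconds",
)
```

    The value of the structure is that each field forces a question the one-line version leaves unanswered: fast at what? under what load? measured how?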

    However, “Quality Attribute Requirements” often have a more significant impact on the architecture of a product. Specifically, the architecture of a system determines how well the non-functional requirements will be implemented by our system. The architect makes architectural and design decisions in order to implement quality attributes, and those decisions often are compromises, since a decision made to better implement a given quality attribute may negatively impact the implementation of other quality attributes. Therefore, accurately understanding quality attribute requirements is one of the most critical prerequisites to adequately designing a system. Use Cases and scenarios (traditionally used to document functional requirements) are a very useful tool to capture quality attribute requirements as well.

    What about functional requirements? Surely, they must have a significant impact on the architecture of our systems? Of course, functional requirements define the work that the system must do – but not how it does it. Functional requirements generally have quality attributes associated with them – for example, in terms of performance, availability or cost. It is entirely possible to design a system that meets all of its functional requirements, yet fails to meet its performance or availability goals, costs too much money to develop and maintain, and is too hard to change. If we limit ourselves to designing for functional requirements without taking into account quality attribute requirements, we end up with a large number of candidate architectures. Designing for quality attribute requirements enables us to limit the candidate architectures to a few choices – and usually one candidate will satisfy all of our requirements.

    [1]  “The 4+1 View Model of Architecture”, IEEE, November 1995

  • pierrepureur 10:23 pm on August 18, 2014

    Continuous Architecture Principles 

    The Principles of Continuous Architecture

    The concept of Continuous Architecture arose as a response to the need for an architectural approach that is able to support very rapid delivery cycles as well as more traditional delivery models. Due to advances in Software Engineering as well as the availability of new enabling technologies, the approach used for architecting modern systems that require agility (our “systems of engagement”) is different from the approach we used to architect older systems (such as our “systems of record” and “systems of operation”). As a result, we can deliver modern applications rapidly, sustainably and with high quality using a Continuous Delivery (see Note 1) approach – which is enabled by using a Continuous Architecture approach. But what is Continuous Architecture? We believe that it can be defined as an architectural style that follows six simple principles:

    1. Architect Products – not Projects
    2. Focus on Quality Attributes – not on Functional Requirements
    3. Delay design decisions until they are absolutely necessary
    4. Architect for Change – Leverage “The Power of Small”
    5. Architect for Build, Test and Deploy
    6. Model the organization of your teams after the design of the system you are working on

    We will be discussing these principles in detail starting with Principle 1 in this blog page. The other principles will be discussed in subsequent pages.

    1. Architect Products – not Projects

    Architects generally tend to fall into two broad categories: those who concentrate on projects (“Solution Architecture”, which is considered a tactical, shorter-term approach) and those who work on company-wide strategies (“Enterprise Architecture”, which is seen as a strategic, longer-term approach). While a Project-only focus may lead to making short-term architecture decisions, an Enterprise-only focus may be perceived as theoretical and impractical, especially in the Application domain. After all, who can accurately predict which technologies our Systems of Engagement in Financial Services will leverage a few years from now to interface with prospects and customers? Will they be using tablets, smart phones, intelligent watches, glasses or even some new device still waiting to be invented?

    How can we solve this challenge? Projects are temporary in nature, and meant to deliver a well-defined result. In reality, projects rarely exist in isolation – they are often part of a larger endeavor (sometimes called a “Program”) that exists in order to enhance a product or service. Looking at the architecture required by a project in isolation can be misleading, and hides the need for a longer-term product-level architecture, which is more strategic in nature than the project level architecture.  

    Practically speaking, creating and maintaining an architecture is an expensive task, so it makes a lot of sense to leverage it across multiple projects. In that context, a product-level architecture can be reused many times, avoiding rework and promoting planned software reuse – as opposed to random reuse paradigms that seldom work (see Note 2). Using this approach, individual projects are instantiated from a common product architecture.

    According to Forrester Research, “a distinctive, value-based approach to software development has emerged, identifiable by a high-performing class of “product-centric” development teams that characteristically support their company’s value chain, partner with both their customers and business stakeholders, and own the business results that their software delivers” (see Note 3). As IT organizations evolve from today’s project-centric focus to a product-centric focus, it is critical for Software Architecture to lead the way by focusing on products. Continuous Architecture’s focus on products and product lines blurs the distinction between Solution Architects and Enterprise Architects. As Continuous Architecture becomes a reality, architectures evolve into strategic assets that can be leveraged continuously and rapidly to deliver business value.


    1. See Martin Fowler’s blog dated 30 May 2013 (http://martinfowler.com/bliki/ContinuousDelivery.html) for a great overview of Continuous Delivery. See also Jez Humble’s and David Farley’s book “Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation” (Addison-Wesley Signature Series) for more information on this topic.
    2. For an in-depth discussion of this topic, please refer to “Refactoring: Improving the Design of Existing Code” by Martin Fowler, Kent Beck, John Brant and William Opdyke (Jul 8, 1999).

    3. Forrester Research – “Product-Centric Development Is A Hot New Trend”, December 23, 2009.

  • pierrepureur 11:38 pm on August 7, 2014

    What is Continuous Architecture? 

    This blog contains the details you need to know about a new architectural approach called “Continuous Architecture” that enables Continuous Delivery, and explains how you can leverage it in practice to deliver applications rapidly, sustainably and with high quality.

    The current trend in the industry is away from Enterprise Architecture (EA). We do not believe that the pendulum will swing back to traditional EA, and there is a need for an architectural approach that can support modern application delivery methods, such as Continuous Delivery, while providing them with a broader architectural perspective.

    We call this approach “Continuous Architecture”. It is about using the appropriate tools to make the right decisions and support approaches such as Continuous Integration, Continuous Testing and Continuous Delivery. Using this approach will significantly reduce Time to Value (TTV) for the projects and products that IT is delivering.

    The pace of innovation – especially software driven innovation – is accelerating exponentially. Smartphones have taken over the Internet, smart devices are creating digital experiences unimagined just a few years ago, and cloud computing has become ubiquitous. The days when a successful IT department could release software once a year or even a quarter are over, and while the release pace has accelerated, there is still much room for improvement. Time scales have compressed, customer expectations have soared, and releasing software daily – the ultimate goal of “Continuous Delivery” – is no longer a dream. Leading companies are already doing it, and their competitors are racing to catch up.

    Continuous Delivery is the logical evolution of Agile. It has always been a part of some Agile approaches, but most Agile adoption in the past few years has seemed to focus on optimizing workflow within development teams rather than across the whole Software Development Life Cycle (SDLC) process. The real goal is to go faster – meaning to get solutions into the hands of customers faster. Continuous Delivery does not replace Agile; rather, it enables Agile to deliver on its promise of faster delivery of true business value. Implementing Continuous Delivery requires challenging traditional IT processes including application testing, software deployment and software architecture.

    Little information has been published to date about the impact of Continuous Delivery on Software Architecture, and this blog will address exactly that: it will provide a broad architectural perspective for Continuous Delivery, and describe a new architectural approach (Continuous Architecture) that supports and enables Continuous Delivery.

    The theme of this blog is to present a new approach to system architecture called “Continuous Architecture” (CA): how to use the appropriate tools to make the right decisions and support Continuous Delivery (CD), Continuous Integration (CI) and Continuous Testing (CT). This blog will explain how CA can be used for:

    • Creating an architecture that can evolve with applications, that may begin by either refactoring or through traditional architectural analysis, that is testable, that can respond to feedback and in fact is driven by feedback
    • Making enterprise architecture real
    • Making solution architecture sustainable
    • Creating real world, actionable, useful strategies