Updates from July, 2015

  • muraterder 8:35 pm on July 22, 2015 Permalink | Reply  

    Open Source and Continuous Architecture 

    Setting technology standards is one of the more common activities that Enterprise Architecture groups fulfill. Setting standards is challenging enough when considering commercially available products. The last fifteen years has seen the emergence of Open Source software that has increased the challenges in this space. According to the Open Source Initiative, the term is defined as:

    Open source software is software that can be freely used, changed, and shared (in modified or unmodified form) by anyone. Open source software is made by many people, and distributed under licenses that comply with the Open Source Definition.

    Commercially, Open Source has been quite successful and has acted as a truly disruptive force within the software industry. At its best, Open Source can be considered to embody all the good things about the internet age. It demonstrates the ability of individuals to collaborate to achieve a common goal, the common wisdom being that multiple people addressing the same problem solve it more elegantly and with fewer defects. The Open Source movement has also taught us quite a lot about how communities can collaborate effectively across different geographies and time zones. Though seemingly very democratic, it is interesting to note that the concept of a “benevolent dictator” (also known as the “committer”) has been key to quite a few of the successful Open Source initiatives. Briefly, the benevolent dictator can be considered the architect, or the person responsible for the conceptual integrity of the solution.

    The impact of Open Source can be seen from the market-leading solutions it has given rise to, from Linux to the multiple projects under the Apache Foundation (Tomcat, Camel and Hadoop, to name a few well-known ones). From a commercial perspective, Open Source has also created new business models, where companies provide versions of Open Source software that are supported in a more traditional software model. These models have been instrumental in making large organizations feel comfortable using Open Source technology. You could say that in the future we may get to a point where there are only two types of software left: Open Source and Software as a Service (i.e. software delivered via the Cloud).

    Does Open Source have any significance from an architectural perspective? On one hand, Open Source code has been a huge asset for software reuse; effectively used, Open Source components can significantly reduce the time to market of a software development team. On the other hand, Open Source technology creates significant challenges by introducing unknowns into your software code base. This can happen from multiple angles:

    • Commercially, the teams might not be aware of which Open Source licensing terms they are operating under. This could cause significant headaches if it is found that the organization is not fully adhering to the license conditions – for example, legal headaches around indemnification when signing a new contract for Open Source support.
    • The development teams might not be downloading the most recent version of the Open Source component, or more likely they will not be keeping their version of the component up to date with newer versions. This can result in non-performant or defective code existing in the code base.
    • A special challenge is around security risks. There is no guarantee that all security flaws will be fixed rapidly and the alerting mechanisms for known security breaches can be suboptimal.
    • Finally, there is always the possibility of the community losing interest in the Open Source initiative, resulting in a portion of your code base being orphaned, with no one really supporting it or understanding it fully. From one perspective, this is no different from the legacy COBOL code bases existing in many organizations.
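
    The tracking challenges above can be made concrete with a small sketch: a hypothetical inventory check that flags stale components and unapproved licenses. The component names, versions and the approved-license list are illustrative assumptions, not a real tool's output.

```python
# Hypothetical Open Source component inventory; all names, versions
# and the approved-license list are illustrative only.
APPROVED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

components = [
    {"name": "tomcat", "used": "7.0.50", "latest": "8.0.24", "license": "Apache-2.0"},
    {"name": "somelib", "used": "1.2.0", "latest": "1.2.0", "license": "GPL-3.0"},
]

def audit(components):
    """Return human-readable findings for outdated or risky components."""
    findings = []
    for c in components:
        if c["used"] != c["latest"]:
            findings.append(f"{c['name']}: version {c['used']} is behind latest {c['latest']}")
        if c["license"] not in APPROVED_LICENSES:
            findings.append(f"{c['name']}: license {c['license']} is not on the approved list")
    return findings

for finding in audit(components):
    print(finding)
```

    Even a simple check like this, run as part of the build, makes Open Source usage visible instead of leaving it scattered across the code base.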

    From a Continuous Architecture perspective, we are very supportive of the ethos of Open Source initiatives. We believe that the code itself, as well as the community models it espouses, are tremendously valuable. We would encourage organizations to embrace Open Source and to apply some of the Open Source development approaches internally as well, commonly known as Internal Open Source models. An area where this approach can be used is for orphaned Enterprise Services that the enterprise no longer wants to maintain because there is not enough demand for them. If a few groups still want to use those services, they can continue to maintain them using an Internal Open Source model.

    When implementing Open Source within a commercial enterprise, a controlled approach is required in terms of tracking Open Source usage and addressing the challenges highlighted above. We believe that following the Continuous Architecture principles, such as Principle 4 (Architect for Change – Leverage “The Power of Small”) and Principle 1 (Architect Products – Not Just Solutions for Projects), will help in addressing these challenges. Both of these principles enable you to manage the scope and context of your code base so that you can keep track of the areas where you use Open Source more effectively. In addition, Open Source software tends to be more cloud-friendly than proprietary software; this is where applying Principle 5 (Architect for Build, Test and Deploy) comes in handy.

    • Robert Baty 9:03 pm on July 24, 2015 Permalink | Reply

      Good post. At my current employer we have wrestled with Open Source for some time now. We have a process to review the licensing, track the usage and ensure the Open Source comes from a more reputable provider like Apache; however, it still seems lacking.

      Developers we hire nowadays expect to be able to download and use whatever tool they like and get frustrated working at a big shop with some of the bureaucracy and security policies in place. How do you balance the need to allow creativity and innovation in solving technical problems with the corporate governance of large organizations?


  • pierrepureur 11:29 pm on July 14, 2015 Permalink | Reply

    The Value of (Continuous) Architecture 


    What is the real value of architecture? We think of architecture as an enabler for the delivery of valuable software. Software architecture’s concerns, namely quality attribute requirements such as performance, maintainability, scalability and security, are at the heart of what makes software successful.

    A comparison to building architecture may help illustrate this concept. Stone arches are one of the most successful constructs in building architecture. Numerous bridges built by the Romans around 2,000 years ago using stone arches are still standing – for example, the Pont du Gard, built in the first century AD. How were stone arches built at that time? A wooden frame known as “centring” was first constructed in the shape of an arch. The stonework was built up around the frame, and finally a keystone was set in position. The keystone gave the arch strength and rigidity. The wooden frame could then be removed, and the arch was left in position. The same technique was later used in the Middle Ages when constructing arches for Gothic cathedrals.

    CA Figure 11-2

    Source: http://www.bbc.co.uk/history/british/launch_ani_build_arch.shtml

    We think of software architecture as the “centring” for building successful software “arches”. When the Romans built bridges using this technique, we do not believe that anybody worried about the aesthetics or the appearance of the “centring”. Its purpose was the delivery of a robust, strong, reliable, usable and long-lasting bridge.

    Similarly, we believe that the value of software architecture should be measured by the success of the software it is helping to deliver, not by the quality of its artifacts. Sometimes architects use the term “value evident architecture” to describe a set of software architecture documents that they created and are really proud of, and that development teams should not (ideally) need to be sold on in order to use. However, we are somewhat skeptical about these claims – can you really evaluate a “centring” until the arch is complete, the keystone has been put in place and the bridge can be used safely?

     
  • pierrepureur 11:48 pm on July 6, 2015 Permalink | Reply

    How to Evolve Continuous Architecture over Time? Think “Minimum Viable Architecture” 


    Let’s assume that a team has successfully developed and implemented an application by following the six Continuous Architecture principles. Now we’ll turn our attention to their next challenge – how do they evolve the architecture to cope with the unavoidable requirement changes that are already piling up on them? This is where they need to leverage a “Minimum Viable Architecture” strategy.

    Let’s first explain what we mean by “Minimum Viable Architecture”. This term is often associated with the concept of a “Minimum Viable Product”, so we’ll start with a brief overview of that idea.

    What Exactly is a “Minimum Viable Product”?

    A “Minimum Viable Product” can be defined as follows:

    In product development, the minimum viable product (MVP) is the product with the highest return on investment versus risk (…)

    A minimum viable product has just those core features that allow the product to be deployed, and no more. The product is typically deployed to a subset of possible customers, such as early adopters that are thought to be more forgiving, more likely to give feedback, and able to grasp a product vision from an early prototype or marketing information. It is a strategy targeted at avoiding building products that customers do not want, that seeks to maximize the information learned about the customer per dollar spent.

    (From Wikipedia, the free encyclopedia; see also: S. Junk, “The Dynamic Balance Between Cost, Schedule, Features, and Quality in Software Development Projects”, Computer Science Dept., University of Idaho, SEPM-001, April 2000; Eric Ries, Venture Hacks interview, March 23, 2009: “What is the minimum viable product?”; SyncDev, “Perfection By Subtraction – The Minimum Feature Set” (http://www.syncdev.com/index.php/minimum-viable-product/); Ryan Holiday, “The single worst marketing decision you can make”, The Next Web, 1 April 2015; Eric Ries, “Minimum Viable Product: a guide”, August 3, 2009.)

    The concept of Minimum Viable Product has been actively promoted by proponents of Lean and Agile approaches, and it has certainly worked very well at several startups. The concept sounds attractive at first: being able to quickly and inexpensively create a product to gauge the market before investing time and resources into something that may not be successful is a great idea.

    However, in a highly regulated industry like Insurance or Banking, the concept of a Minimum Viable Product has limitations: some product capabilities, such as regulatory reporting, security and auditability, are not optional and cannot be taken out of scope. Also, software vendors routinely launch their products as “alpha” or “beta” versions, but very few Financial Services companies would consider launching anything but a production-ready version, especially to external audiences.

    Of course, some other features, such as inquiry screens or activity reports, may be omitted from the initial release, but those features are usually easy and inexpensive to build, so taking them out of scope for the initial release may not save much time or money.

    In addition, implementing new products may involve leveraging existing capabilities implemented in older back-end systems (such as rate quoting in Insurance), and interfacing with those systems is likely to represent a significant portion of the effort required to create a new product – unless those interfaces have already been encapsulated by developing reusable services as part of a previous effort. Unfortunately, that’s not often the case, and teams attempting to implement a Minimum Viable Product in Financial Services companies often struggle with defining a product that has enough capabilities to be moved to production – yet which is also small enough to be created quickly and with a minimal investment of time and money.

    What about Minimum Viable Architecture?

    Using a Minimum Viable Architecture strategy, on the other hand, is an effective way to bring a product to market faster and at lower cost. Let’s examine a sample Quality Attributes Utility Tree to clarify this point:

    CA Figure 7-12

    Under each of those Quality Attributes are specific Quality Attribute Refinements – for example, “Latency” further refines “Performance”. In addition, each Quality Attribute Refinement is illustrated by an Architecture Scenario, expressed in terms of Stimulus/Response/Measurement. The Architecture Scenarios themselves are a very effective way to express Quality Attribute Requirements, since they are concrete and measurable, and should be easy to implement in a prototype.

    There is also a time/release dimension to Quality Attributes Analysis that answers the following questions:

    • How many concurrent users will be on the system at initial launch?
    • How many concurrent users will be on the system within the first 6 months?
    • How many concurrent users will be on the system within the first year?
    • How many transactions per second is the system expected to handle at initial launch?
    • How many transactions per second is the system expected to handle within the first 6 months?
    • How many transactions per second is the system expected to handle within a year?
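
    One way to make this time dimension explicit is to record each Architecture Scenario together with per-horizon targets. The sketch below is a minimal illustration; the attribute names, stimulus wording and numeric targets are invented for the example, not taken from any real utility tree.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """A Quality Attribute Scenario: stimulus, response, and measurable targets."""
    attribute: str   # e.g. "Performance"
    refinement: str  # e.g. "Latency"
    stimulus: str
    response: str
    targets: dict    # time horizon -> measurable target

# Illustrative scenario with a target per release horizon.
checkout_latency = Scenario(
    attribute="Performance",
    refinement="Latency",
    stimulus="User submits an order at peak load",
    response="Order confirmation rendered",
    targets={
        "launch": "200 concurrent users, p95 < 2s",
        "6 months": "1,000 concurrent users, p95 < 2s",
        "1 year": "2,500 concurrent users, p95 < 2s",
    },
)

# The team architects for the "launch" target first, and revisits the
# later horizons as real usage data comes in.
print(checkout_latency.targets["launch"])
```

    Keeping the horizons alongside each scenario makes it obvious which target the current release actually has to meet.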

    This time dimension can be represented in the Quality Attributes Utility Tree as shown below:

    CA Figure 7-13

    Many architects consider the worst-case scenario when designing a system – for example, they would ask their business partners for the “maximum number of concurrent users the system should be able to support” without mentioning a time frame – and add a “safety margin” on top of that number, just to be on the safe side. Unfortunately, they do not realize that the number of concurrent users provided by the business is likely to be an optimistic guess (business partners would like to believe that every new system is going to be a big success!), unless the system that they are architecting replaces an existing system and usage volumes are precisely known.

    As a result, they end up architecting the new system to handle an unrealistic number of concurrent users that may not be reached for a few years, and sometimes add unnecessary complexity (such as caching components) to their design. We recommend instead adopting a “Minimum Viable Architecture” approach based on realistic estimates at launch time, and evolving that architecture based on actual usage data. Also keep in mind that technology becomes more efficient over time, and keep Principle 3 in mind: delay design decisions until they are absolutely necessary, and design the architecture based on facts, not guesses!
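
    The difference between worst-case sizing and a phased, Minimum Viable Architecture approach can be shown with back-of-the-envelope arithmetic. All of the figures below (the forecast and the safety margin) are made up purely for illustration.

```python
# Illustrative concurrent-user forecast per time horizon; numbers are invented.
forecast = {"launch": 200, "6 months": 1000, "1 year": 2500}
safety_margin = 1.5

# Worst-case approach: size up front for the one-year guess plus a margin.
worst_case_capacity = int(max(forecast.values()) * safety_margin)  # 3750 users

# Minimum Viable Architecture: size for launch, then grow with observed data.
mva_capacity = int(forecast["launch"] * safety_margin)  # 300 users

print(worst_case_capacity, mva_capacity)
```

    In this made-up example the worst-case design carries more than twelve times the launch-day capacity from day one, which is exactly the kind of speculative investment the Minimum Viable Architecture approach defers until real usage data justifies it.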

    A useful strategy is to limit the budget spent on architecting. This forces the team to think in terms of a Minimum Viable Architecture that starts small and is only expanded when absolutely necessary.

     