Author: Chuck Conway

Efficiency with Algorithms, Performance with Data Structures – Chandler Carruth

"Software is getting slower more rapidly than hardware becomes faster." – Niklaus Wirth, "A Plea for Lean Software"

  • Niklaus Wirth – Algorithms + Data Structures = Programs

There are two sides to the performance coin:

Efficiency through Algorithms – How much work is required by a task.

  • Improving efficiency involves doing less work.
  • An efficient program is one that does the minimum (that we’re aware of) amount of work to accomplish a given task.

Performance through Data Structures – How quickly a program does its work.

  • Improving performance involves doing work faster.
  • But there is no such thing as "performant"; the word doesn't even exist in the English language.
  • There is essentially no point at which a program cannot do work any faster… until you hit Bremermann’s limit…
  • What does it mean to improve the performance of software?
    • The software is going to run on a specific, real machine
    • There is some theoretical limit on how quickly it can do work
  • Try to light up all the transistors, get the most work done as fast as possible.

It All Comes Back to Watts

  • Every circuit not used on a processor is wasting power
  • Don’t reduce this to the absurd: it clearly doesn’t make sense to light up more parts of the CPU without improving performance!


Efficiency Through Algorithms

  • Complexity theory and analysis
  • Common across higher-level languages, etc.
  • Very well understood by most (I hope)
  • Improving algorithmic efficiency requires finding a different way of solving the problem.
  • Algorithmic complexity is a common mathematical concept that spans languages and processors. Don’t get lazy; you have to pay attention to your algorithms.
  • Example: "it’s about doing less work" – analyze the current approach and find ways to do it more efficiently. Substring search is a classic progression:
    • Initially, you might have a basic O(n^2) algorithm
    • Next, we have a Knuth-Morris-Pratt (a table to skip)
    • Finally, we have a Boyer-Moore (use the end of the needle)
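The progression above can be sketched in C++. This is an illustrative sketch, not code from the talk: a naive O(n·m) scan next to the standard library's Boyer-Moore searcher (C++17), which precomputes skip tables from the needle and compares from its end.

```cpp
#include <algorithm>
#include <functional>
#include <string>

// Naive substring search: at every position, compare the whole needle.
std::size_t find_naive(const std::string& hay, const std::string& needle) {
    for (std::size_t i = 0; i + needle.size() <= hay.size(); ++i)
        if (hay.compare(i, needle.size(), needle) == 0)
            return i;
    return std::string::npos;
}

// Boyer-Moore: skip tables built from the needle let mismatches jump
// many positions at once, because matching starts from the needle's end.
std::size_t find_boyer_moore(const std::string& hay, const std::string& needle) {
    std::boyer_moore_searcher searcher(needle.begin(), needle.end());
    auto it = std::search(hay.begin(), hay.end(), searcher);
    return it == hay.end() ? std::string::npos
                           : static_cast<std::size_t>(it - hay.begin());
}
```

Both return the same positions; the efficiency win is in how much of the haystack each one has to touch.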

Do Less Work by Not Wasting Effort, Example 1

std::vector<X> f(int n) {
  std::vector<X> result;
  for (int i = 0; i < n; ++i)
    result.push_back(X(...));
  return result;
}

Improvement: initialize the collection size up front with reserve(), so the vector doesn't repeatedly reallocate and copy as it grows.

std::vector<X> f(int n) {
  std::vector<X> result;
  result.reserve(n);
  for (int i = 0; i < n; ++i)
    result.push_back(X(...));
  return result;
}

Do Less Work by Not Wasting Effort, Example 2

X *getX(std::string key,
        std::unordered_map<std::string, std::unique_ptr<X>> &cache) {
  if (cache[key])
    return cache[key].get();
  cache[key] = std::make_unique<X>(...);
  return cache[key].get();
}

Improvement: retain a reference to the cache entry so the hash lookup happens only once instead of up to four times.

X *getX(std::string key,
        std::unordered_map<std::string, std::unique_ptr<X>> &cache) {
  std::unique_ptr<X> &entry = cache[key];
  if (entry)
    return entry.get();
  entry = std::make_unique<X>(...);
  return entry.get();
}

Always do less work!

Design APIs to Help

Performance and Data Structures = "Discontiguous Data Structures are the root of all (performance) evil"

  • Just say "no" to linked lists

CPUs Have a Hierarchical Cache System

Data Structures and Algorithms

  • They’re tightly coupled, see Wirth’s books
    • You have to keep both factors in mind to balance between them
  • Algorithms can also influence the data access pattern regardless of the data structure used.
  • Worse is better: bubble sort and cuckoo hashing
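The "just say no to linked lists" advice is about memory layout: a contiguous std::vector streams through the cache hierarchy, while each std::list node is a separate heap allocation chased through pointers. A minimal sketch, where the containers hold identical values and only the traversal pattern differs:

```cpp
#include <list>
#include <numeric>
#include <vector>

// Contiguous storage: traversal walks straight through cache lines,
// and the prefetcher can see the access pattern coming.
long long sum_vector(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// Node-based storage: each step dereferences a pointer to a node that
// may live anywhere on the heap, so most steps risk a cache miss.
long long sum_list(const std::list<int>& l) {
    return std::accumulate(l.begin(), l.end(), 0LL);
}
```

Both functions compute the same result; on large inputs the vector version is typically far faster purely because of data layout.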

Rapid Development – Steve McConnell

2: Weak personnel. After motivation, either the individual capabilities of the team members or their relationship as a team probably has the greatest influence on productivity (Boehm 1981, Lakhanpal 1993). Hiring from the bottom of the barrel will threaten a rapid-development effort. In the case study, personnel selections were made with an eye toward who could be hired fastest instead of who would get the most work done over the life of the project. That practice gets the project off to a quick start but doesn’t set it up for rapid completion. LOCATION: 1793

Uncontrolled problem employees. Failure to deal with problem personnel also threatens development speed. This is a common problem and has been well-understood at least since Gerald Weinberg published Psychology of Computer Programming in 1971. Failure to take action to deal with a problem employee is the most common complaint that team members have about their leaders (Larson and LaFasto 1989). In Case Study: Classic Mistakes, the team knew that Chip was a bad apple, but the team lead didn’t do anything about it. The result—redoing all of Chip’s work—was predictable. LOCATION: 1798

Heroics. Some software developers place a high emphasis on project heroics (Bach 1995). But I think that they do more harm than good. In the case study, mid-level management placed a higher premium on can-do attitudes than on steady and consistent progress and meaningful progress reporting. The result was a pattern of scheduling brinkmanship in which impending schedule slips weren’t detected, acknowledged, or reported up the management chain until the last minute. A small development team and its immediate management held an entire company hostage because they wouldn’t admit that they were having trouble meeting their schedule. An emphasis on heroics encourages extreme risk taking and discourages cooperation among the many stakeholders in the software-development process. LOCATION: 1808

Some managers encourage heroic behavior when they focus too strongly on can-do attitudes. By elevating can-do attitudes above accurate-and-sometimes-gloomy status reporting, such project managers undercut their ability to take corrective action. They don’t even know they need to take corrective action until the damage has been done. As Tom DeMarco says, can-do attitudes escalate minor setbacks into true disasters (DeMarco 1995). LOCATION: 1820

Unrealistic expectations. One of the most common causes of friction between developers and their customers or managers is unrealistic expectations. In Case Study: Classic Mistakes, Bill had no sound reason to think that the Giga-Quote program could be developed in 6 months, but that’s when the company’s executive committee wanted it done. Mike’s inability to correct that unrealistic expectation was a major source of problems. LOCATION: 1851

11: Lack of user input. The Standish Group survey found that the number one reason that IS projects succeed is because of user involvement (Standish Group 1994). Projects without early end-user involvement risk misunderstanding the projects’ requirements and are vulnerable to time-consuming feature creep later in the project. LOCATION: 1873

13: Wishful thinking. I am amazed at how many problems in software development boil down to wishful thinking. How many times have you heard statements like these from different people: "None of the team members really believed that they could complete the project according to the schedule they were given, but they thought that maybe if everyone worked hard, and nothing went wrong, and they got a few lucky breaks, they just might be able to pull it off." "Our team hasn’t done very much work to coordinate the interfaces among the different parts of the product, but we’ve all LOCATION: 1887

20: Shortchanged upstream activities. Projects that are in a hurry try to cut out nonessential activities, and since requirements analysis, architecture, and design don’t directly produce code, they are easy targets. On one disastrous project that I took over, I asked to see the design. The team lead told me, "We didn’t have time to do a design." LOCATION: 1949

The results of this mistake—also known as "jumping into coding"—are all too predictable. In the case study, a design hack in the bar-chart report was substituted for quality design work. Before the product could be released, the hack work had to be thrown out and the higher-quality work had to be done anyway. Projects that skimp on upstream activities typically have to do the same work downstream at anywhere from 10 to 100 times the cost of doing it properly in the first place (Fagan 1976; Boehm and Papaccio 1988). If you can’t find the 5 hours to do the job right the first time, where are you going to find the 50 hours to do it right later? LOCATION: 1956

22: Shortchanged quality assurance. Projects that are in a hurry often cut corners by eliminating design and code reviews, eliminating test planning, and performing only perfunctory testing. In the case study, design reviews and code reviews were given short shrift in order to achieve a perceived schedule advantage. As it turned out, when the project reached its feature-complete milestone it was still too buggy to release for 5 more months. This result is typical. Shortcutting 1 day of QA activity early in the project is likely to cost you from 3 to 10 days of activity downstream (Jones 1994). This shortcut undermines development speed. LOCATION: 1969

24: Premature or overly frequent convergence. Shortly before a product is scheduled to be released, there is a push to prepare the product for release—improve the product’s performance, print final documentation, incorporate final help-system hooks, polish the installation program, stub out functionality that’s not going to be ready on time, and so on. On rush projects, there is a tendency to force convergence early. Since it’s not possible to force the product to converge when desired, some rapid-development projects try to force convergence a half dozen times or more before they finally succeed. The extra convergence attempts don’t benefit the product. They just waste time and prolong the schedule. LOCATION: 1987

26: Planning to catch up later. One kind of reestimation is responding inappropriately to a schedule slip. If you’re working on a 6-month project, and it takes you 3 months to meet your 2-month milestone, what do you do? Many projects simply plan to catch up later, but they never do. You learn more about the product as you build it, including more about what it will take to build it. That learning needs to be reflected in the reestimated schedule. LOCATION: 2001

28: Requirements gold-plating. Some projects have more requirements than they need, right from the beginning. Performance is stated as a requirement more often than it needs to be, and that can unnecessarily lengthen a software schedule. Users tend to be less interested in complex features than marketing and development are, and complex features add disproportionately to a development schedule. LOCATION: 2023

29: Feature creep. Even if you’re successful at avoiding requirements gold-plating, the average project experiences about a 25-percent change in requirements over its lifetime (Jones 1994). Such a change can produce at least a 25-percent addition to the software schedule, which can be fatal to a rapid-development project. LOCATION: 2027

What To and Not To Test – Mark Seemann

Blog Post:

The Purpose of Testing

  • To exercise the API and the design. A class that is hard to test is a poorly designed interface.
  • To prevent regressions.

The Cost of Regressions

Dimensions of the Risk

  • The likelihood of the event.
  • The impact of the event.

To reduce risk, you either decrease the likelihood or the impact of the event.

What is the impact of the error if it happens? If it’s a dev app, there is almost zero risk. If it’s a Voyager probe, it could doom the project.

Udemy – TDD Class


Tests should be FIRST:

  • Fast
  • Independent
  • Repeatable
  • Self-Validatable
  • In-Time, written before production code

Types of Tests

  • Unit Tests – verify the behavior of a unit under test in isolation
  • Integration Tests – verify the behavior of either a part of the system or a whole system. These tests are often brittle.
  • Acceptance Tests – verify the software from the user’s point of view.

Frameworks & Tools

  • SpecFlow – BDD
  • xUnit – unit testing
  • Moq – Mocking
  • NCrunch
  • Resharper


TDD Techniques

  • Triangulation
    • We generalize the implementation when we have two or more test cases. We add test cases until the right way of implementation starts to emerge.
  • Faking
    • Faking means returning constants and gradually replacing the value with variables and evolving the solution.
  • Obvious Implementation
    • Sometimes the implementation appears obvious, but as you start to poke at it with tests, it begins to crumble.
  • Grabbing the gold
    • You start by writing unit tests that cover the core functionality right away, and it fails on the edge cases.
    • Suggestion: Write as many tests as possible before approaching the core functionality. Check for max values, empty strings, null values, min values… etc.
    • Uncle Bob’s Suggestions: Write tests exactly in the following order: exceptional, degenerate, and ancillary
      • Degenerate cases are those which cause the core functionality to do "nothing"
      • Ancillary behaviors are those which support the core functionality
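A hypothetical illustration of faking and triangulation (the function and values are made up, not from the course): the first test case below could be passed by a fake that simply does `return 3;`. Adding a second test case with different inputs is what forces the constant to generalize into real logic.

```cpp
// Hypothetical unit under test. With only one test case, the fake
// implementation `return 3;` would pass. The second test case
// (triangulation) forces the generalized implementation: a + b.
int add(int a, int b) {
    return a + b;
}
```

The point is not the arithmetic; it is that each new test case narrows what the implementation can get away with.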

The Future of Software Engineering – Mary Poppendieck

One database acts as a magic integrator of everything else: in a traditional application (circa 2019), most applications store all their data in a central store. For example, user data and application data are stored in the same database.

Moving Beyond a Database

"We need to learn how to do application architecture with APIs, not databases."

  • Think of APIs as architecture; they localize architecture. It’s just me and that other server over there, not me and those 50 other services.

  • APIs have local persistence.

  • Think about how we connect to multiple APIs to access stored data. There isn’t one big database to pull data from; we have to piece together the data for our system to work.

Big Data Systems are inherently distributed and their architectures must explicitly handle

  • partial failures
  • concurrency
  • consistency
  • replication
  • communication latencies

Resilient Architectures

  • Replicate data to ensure availability in the case of failure
  • Design components to be
    • stateless
    • replicated
    • tolerant of failures of dependent services

Thinking in Antifragile

  • An antifragile system gets better when it is attacked
  • Chaos monkey

Change in thinking

  • Stop thinking in serial (sync) execution and start thinking in concurrent (async) execution
  • We need to (re)learn event-driven programming instead of procedural programming

Software Engineering in the Cloud



We poke the system, and then respond to the small contained poke.

Continuous Delivery

  • Deploying to the trunk
    • Having no branching
  • The code is always production ready
  • Deploying all the time
  • Release is turning something on by a switch (enable and disable features)
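The "release is turning something on by a switch" idea can be sketched with a minimal, hypothetical feature-flag store (names are illustrative): the code ships dark, and flipping the flag is the release.

```cpp
#include <string>
#include <unordered_map>

// Minimal feature-flag sketch: deployment puts the code in production,
// but a feature only becomes visible when its flag is switched on.
class FeatureFlags {
public:
    void set(const std::string& name, bool enabled) { flags_[name] = enabled; }

    bool enabled(const std::string& name) const {
        auto it = flags_.find(name);
        return it != flags_.end() && it->second;  // unknown flags default to off
    }

private:
    std::unordered_map<std::string, bool> flags_;
};
```

Defaulting unknown flags to off is the safety property: deploying new code changes nothing until someone deliberately flips the switch.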


End to End feedback

  • Monitor the code and its health
  • Get feedback from the customers

Get the product feedback to the stakeholders.

Likely 2/3 of the features are unnecessary.

Move from delivery Teams to problem solving teams.

  • A delivery team is given an order and gets things done, like ordering food at a restaurant.
  • A Problem Solving team
    • Here’s the goal, here’s the metric, figure out how to improve the metric.

Experimentation and Learning

Conformity Bias: Those who think they are in the minority self-silence


Book Recommendations

Why Scaling Agile Doesn’t Work – Jez Humble

Water-Scrum-Fall

  • Dave West

Design and product discovery are usually defined upfront, which is not agile. Inserting an agile development process in the middle doesn’t deliver all the benefits, especially when testing and deployment are not part of the process.

Estimating the amount of work

What Should We do

  • Don’t optimize for the case where we are tight
  • Focus on value, not cost
  • create feedback loops to validate assumptions
  • make it economic to work in small batches – working in small batches allows us to get the process working (i.e. faster feedback loops).
  • enable an experimental approach to product dev
    • Allows developers to test a hypothesis on a feature and get almost immediate feedback.


  • Users don’t know what they want; they know what they don’t want. Requirements are the wants of the HiPPO (the Highest-Paid Person’s Opinion).
    • A lot of the time the value isn’t defined. The "So that…" definition of value is never discussed or defined.

Tool Impact Mapping

A tool that maps the impact of the outcome. We work back from the outcome to the code of adding the feature.

The important thing is the shared understanding of the thing you’re building, not the artifacts that agile emphasizes. This means the developers understand the "Why" behind something, instead of tasks being tossed over the cubicle wall.

Pick the task with the smallest amount of work for the biggest outcome. "Minimize output, maximize outcome." – Jeff Patton

Hypothesis-driven delivery


Hypothesis-driven delivery is about delivering things in a way that will satisfy our customers. The hard part is discovering what customers want without sinking a ton of money into the discovery, because then you might get sucked into the sunk-cost fallacy: you’ve put a ton of money into a solution, so you conclude it must be the correct approach.

Do Less

Only 1/3 of the features improved the key metric. As Jez pointed out, this means two-thirds of the work done doesn’t bring value. In fact, some of the work brings negative value to the company. This means that two-thirds of the time, someone could have spent the time on the beach and had the same effect.

Feature branching is a poor man’s modular architecture.

HP Firmware Case Study

The Case Study: A Practical Approach to Large-Scale Agile Development – Gruver, Young, Fulghum


  1. Get Firmware off the critical path
  2. 10x productivity increase.

The incremental approach doesn’t just apply to software, but how we improve our processes and how we improve our companies. It’s how we improve anything.

The Four Steps of Improvement Kata Model

If you’re not constantly improving, you are degrading.

To get started, start with your job and then start networking with people in the company to understand their pain points.

Don’t fight stupid, make more awesome. – Jesse Robbins

Tactical Design Patterns

Apply the design pattern when the design is ready to accept it.

  • Don’t force a design pattern onto a problem. Let the code evolve until it’s obvious the design pattern is needed. It is better to be late than early.

  • This idea comes back to my thoughts on keeping things simple and not over-architecting code. Don’t add it until it’s a requirement.

Multiple Iterations

  • The first attempt is to produce the tailored design
    • Design patterns do not fit into this phase
  • Then refactor to reach a better design
    • this is where design patterns fit well.

Principle of Least Surprise

In real code, design patterns are generally more complex than they appear in examples and textbooks.

Software Architecture Fundamentals: Understanding the Basics

Lessons Learned as an architect

  1. Negotiation and political skills are often more important for an architect than technical skill.
  2. It’s all about the data.
  3. The world’s best architectures are not the perfect ones, but rather the feasible ones.

What is software design?

Our plan, our design is the complete source code.

Neal Ford: One of the reasons software is so complex is that we don’t have constraints like the real world does, so there are no limits on what we can do.


"The final goal of any traditional engineering activity is some type of documentation."

"When the design effort is complete, the design documentation is turned over to the manufacturing team"

Fixing a manufacturing defect after goods leave the factory is expensive, so much effort is put into eliminating defects before the final goods are created. In software, compiling (manufacturing the software) is so cheap that it’s cheaper to manufacture (compile) the software and then test that the result matches our expectations. This takes the place of the pre-manufacture rigor found in the manufacturing industry.

In the software world we live in the world of bits, not atoms. This makes changes over time much easier than in the world of atoms.

The plan in the manufacturing world is the source code in the software world.

"Given that software designs are relatively easy to turn out and essentially free to build, an unsurprising revelation is that software designs tend to be incredibly large and complex." – Jack Reeves

Each piece of software is a handcrafted piece. We don’t have the componentization that the real-world has.

In traditional engineering, plans are engineered up front; large amounts of time and money are invested to get them right before implementation. In software, manufacturing is so cheap that we can build first and then test to ensure it’s implemented correctly.

"Software may be cheap to build, but it is incredibly expensive to design." – Jack Reeves

You want to architect for change. This does not mean mass abstractions; it means keeping things simple. If you can make changing the architecture less expensive, you gain the flexibility to evolve your application over time.

Software always becomes iterative. You will always need to change or fix the software; the difference is how tight your feedback loop is. In waterfall it might be a long, drawn-out process; in agile it’s a short feedback loop.

Architecture also becomes iterative. No one architecture/approach can solve every problem. Understand why things work, not mechanics of what things are doing.

Separate goals from approaches.

Architecture isn’t an equation to be solved; it’s a snapshot of a process. It’s always in flux, it’s always changing, a perfect architecture always fails when it meets the real world.

Architecture is coupled to process (especially, continuous delivery). It is coupled to the technical solution and to the process. This is what makes architecture so difficult.

Course Outline: The 3 Areas of Software Architecture

  1. Application
    • Techniques for change
    • patterns & anti-patterns
    • tools & documentations
  2. Integration
    • overview
    • architectural styles
  3. Enterprise
    • introduction to enterprise architecture

Architecture cross-cutting concerns

  • Soft Skills
  • Continuous Delivery
  • Understanding large codebases

Soft Skills

Expectation of an Architect

  1. Analyze technology, industry, and market trends and keep current with those latest trends.
  2. Analyze the current technology environment and recommend solutions for improvement.
    • Look at the full life-cycle of the project. It is the responsibility of the architect to make sure it works. This includes things like process and continuous delivery.
  3. Ensure compliance with the architecture.
    • This is an ongoing responsibility of an architect.
  4. Have exposure to multiple and diverse technologies, platforms, and environments.
  5. Possess exceptional interpersonal skills, including teamwork, facilitation, and negotiation
  6. Define the architecture and design principles to guide technology decisions for the enterprise.
    • Don’t dictate specific technology choices.
  7. Understand the political climate of the enterprise and be able to navigate the politics.

Architecture Aspects

  • Leadership and communication
  • technical knowledge
  • business domain knowledge
  • methodology and strategy

Continuous Delivery – "Fast, automated feedback on the production readiness of your application every time there is a change — to code, infrastructure, or configuration." Software is always in a production-ready state. Any change triggers a build, including software patches on a server or installing new software on the server.

  • integrate early and often
    • The longer you put it off, the worse it gets at an exponential level.
    • At one point in history, people would go off and create their own bits of software. After a short, or sometimes long, time, they would return and try to integrate the different pieces. More often than not, the pieces were very difficult, if not impossible, to put together. It would often take months to assemble all the pieces on a big project.
  • As with most things in software, the longer you put it off, the harder it gets.
  • Continuous Delivery is the final stage of CI.

continuous delivery ideal

  • software is always production ready
  • deployments are reliable and commonplace
  • everyone can self-service deployments
  • releases occur according to business needs, not operational constraints


Refactoring

A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.

Any fool can write code that a computer can understand. Good programmers write code that humans can understand. – Martin Fowler

Technical debt is the sum of the following coding shortcuts:

  • Duplication
  • Excess Coupling
  • Quick Fixes
  • Hacks

Why Refactor?

  • Improves Design
  • Improves Readability
  • Reveals Defects
  • Helps You Program Faster

Other than when you are very close to a deadline, however, you should not put off refactoring because you haven’t got time. Experience with several projects has shown that a bout of refactoring results in increased productivity. Not having enough time usually is a sign that you need to do some refactoring. – Martin Fowler

Refactoring Principles

  • Keep it Simple
  • Keep it DRY
  • Make It Expressive
  • Reduce Overall Code
  • Separate Concerns
  • Appropriate Level of Abstraction

Refactor Tips

  • Keep Refactorings Small
  • One At a Time
  • Make a Checklist
  • Make a ‘later’ list
  • Check in Frequently
  • Add Test Cases
  • Review the Results

Premature optimization is the root of all evil. – Donald Knuth

Coding Philosophy

  • Keep it simple.
  • Deliver value.
  • Avoid over-engineering.
  • Avoid gold plating your code.

Principle of Least Surprise

  • Do what users expect
  • Be Simple
  • Be Clear
  • Be Consistent
  • Design API to behave as programmers would expect.

Organizing Code Smells into Groups

  • The Bloaters
    • Do-it-all god objects
      • Small, lean codebases can be changed quickly
    • Long Methods
      • Keep methods small
        • 10 lines or fewer
        • Avoid regions
  • The Object-Orientation Abusers
  • The Change Preventers
  • The Dispensables
  • The Couplers
  • The obfuscators
  • Environment Smells
  • Test Smells
  • Architecture Smells

Refactoring Long Methods

  • Extract Method
    • Introduce Parameter Object
    • Replace Temp with Query
    • Replace Method with Method Object
  • Compose Method
    • Place bits of functionality into smaller methods.
    • The result is a method that has a number of method calls.
  • Replace Nested Conditional with Guard Clause
  • Replace Conditional Dispatcher (switch statement) with Command Pattern
  • Move Accumulation to Visitor
  • Replace Conditional Logic with Strategy Pattern
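As one worked example from the list above, here is what "Replace Nested Conditional with Guard Clause" looks like. The payroll names are hypothetical, loosely following Fowler's well-known example:

```cpp
// Before: nested conditionals bury the normal case two levels deep.
double pay_amount_nested(bool is_dead, bool is_retired, double base) {
    double result;
    if (is_dead) {
        result = 0.0;
    } else {
        if (is_retired) {
            result = base * 0.5;
        } else {
            result = base;
        }
    }
    return result;
}

// After: guard clauses handle the special cases up front and return early,
// leaving the normal case unindented at the bottom.
double pay_amount(bool is_dead, bool is_retired, double base) {
    if (is_dead) return 0.0;
    if (is_retired) return base * 0.5;
    return base;
}
```

The behavior is identical; only the shape changes, which is exactly what makes it a refactoring.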

Refactor Large Class

  • Violating the Single Responsibility Principle
  • Too many instance variables
  • Too many private methods (Iceberg class)
  • Lack of cohesion
    • Some methods work with some instance variables, others with others
    • Compartmentalized class

Refactoring Large Classes Strategy

  • Extract Method (and hopefully combine logic)
  • Extract Class
  • Extract Subclass/Extract Interface
  • Replace Conditional Dispatch with Command
  • Replace State-Altering Conditionals with State Machine
  • Replace Implicit Language with Interpreter Pattern

Primitive Obsession

  • Over-use of primitives, instead of better abstractions, results in excess code to shoehorn the type. The primitives themselves can contain invalid states and therefore must be checked every time the value is set or used.

    • Guard clauses and validation are often required each time the value of the type changes.
    • Less intention-revealing code.
  • Refactor away from Primitive Obsession

    • Replace Data value with Object
    • Replace Type Code with Class
    • Replace Type Code with Subclass
    • Extract Class
    • Introduce Parameter Object
    • Replace Array with Object
    • Replace State-Altering Conditionals with State
    • Replace Condition Logic with Strategy
  • Examples of a Primitive Data Obsession

    • ZIP / Postal Codes
    • Phone Numbers
    • Social Security Numbers
    • Telephone Numbers
    • Money
    • Age
    • Temperature
    • Address
    • Credit Card Information
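A sketch of "Replace Data Value with Object" applied to one of the examples above. The ZipCode class and its five-digit rule are illustrative assumptions: the point is that validation happens once, in the constructor, instead of in guard clauses everywhere a raw string is passed around.

```cpp
#include <cctype>
#include <stdexcept>
#include <string>

// Hypothetical value object wrapping a US ZIP code. A ZipCode that exists
// is valid by construction, so downstream code needs no re-validation.
class ZipCode {
public:
    explicit ZipCode(const std::string& value) : value_(value) {
        if (value.size() != 5)
            throw std::invalid_argument("ZIP must be exactly 5 digits");
        for (char c : value)
            if (!std::isdigit(static_cast<unsigned char>(c)))
                throw std::invalid_argument("ZIP must be exactly 5 digits");
    }

    const std::string& str() const { return value_; }

private:
    std::string value_;
};
```

Compare this to passing `std::string zip` everywhere: each consumer would have to re-check the invariant, and invalid states could slip through between checks.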

Long Parameter List

  • May indicate a procedural rather than object-oriented programming style

  • Related Smells

    • Message Chains
    • MiddleMan
  • Rules of Thumb
    • No more than three arguments
    • No output arguments
      • Not intuitive
    • No flag arguments
      • Means the method is doing more than one thing
    • No selector arguments
      • Means the method is doing more than one thing
  • In general

    • Prefer more, smaller, well-named methods to fewer methods with behavior controlled by parameters
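A sketch of "Introduce Parameter Object" for a long parameter list; the booking names are hypothetical. Related arguments that always travel together become a single object, which also brings the call back under the three-argument rule of thumb:

```cpp
#include <string>

// Before (hypothetical): makeBooking(name, street, city, zip, nights)
// After: the guest-related arguments travel together as one object.
struct GuestDetails {
    std::string name;
    std::string street;
    std::string city;
    std::string zip;
};

// The parameter object shrinks the signature and gives the argument
// group a name that can grow behavior of its own later.
std::string make_booking(const GuestDetails& guest, int nights) {
    return guest.name + " for " + std::to_string(nights) + " nights";
}
```

The struct often turns out to be a missing domain concept, which is why this refactoring frequently leads to Extract Class next.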

Evolutionary Architecture – Rebecca Parsons

What does good look like? What characteristics are good, and what characteristics are bad?

Design and Architecture – big-‘A’ Architecture is how different parts of the system communicate with each other: What is our security? What is our persistence strategy? Design covers the things that are closer to the code. How the system interacts with devices is Architecture.

What are the principles of Evolutionary Architecture?

  1. Last responsible Moment
    • Delay decisions as long as you can, but no longer
    • Maximizes the information you have
    • Minimizes technical debt from complexity
    • Decide early what your drivers are, and prioritize decisions accordingly
    • Evolutionary, neither emergent nor based on guesswork
  2. Architect and develop for evolvability
    • Sensible breakdown of functionality
    • Consider data lifecycle and ownership
      • Who has access to it
      • How long should we keep it.
    • Appropriate coupling
      • Coupling is inevitable
        • Two systems that must communicate, there has to be coupling.
    • Lightweight tooling and documentation
    • Develop for Evolvability
      • Software internal quality metrics focusing on ease of change
        • You don’t want to optimize for writing code; you want to optimize for understanding code
        • Once I have these metrics, how do I focus my attention on making a piece of code more evolvable?
        • Look for change hotspots
        • Measure continually, focus on trends.
          • Gives visibility into changes and software evolution
    • Reversibility
      • Change that decision later down the road.
  3. Postel’s Law
    • How systems communicate with each other.
    • Be conservative in what you send, if you put it out there people will use it, because you are now committed.
    • Be liberal in what you receive. If you only want a phone number but receive the entire address, only validate the phone number. You don’t want to be sensitive to changes that don’t impact you.
    • Use version numbering when you have to break a contract.
  4. Architect for testability
    • If you think about testability in your architecture, the side effect is a well architected system. Things that tend to make your system hard to tests also tend to make it hard to change and maintain.
    • Single Responsibility Principle – Example: Messaging infrastructure used for messaging, not business logic. Testing business logic embedded in Messaging is hard. Separate the two and test business logic outside the message infrastructure.
    • Map your business components to the business concepts. When the business requests changes, in a business concept if your components are similarly mapped the change will be less impactful.
    • Automated build and deployments is an essential underpinning
  5. Conway’s Law
    • Organizations design systems reflecting their communication structures
    • Broken communications imply complex integration
    • Silos often result in broken communication
    • If you don’t want your product to look like your organization, change your organization (or your product)
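Postel's Law above can be sketched as a tolerant reader, with hypothetical names: the incoming payload may carry a whole address, but we read and validate only the one field we depend on, so unrelated changes to the payload cannot break us.

```cpp
#include <map>
#include <optional>
#include <string>

// Tolerant reader: pull out only the field we need; ignore everything else.
// Missing or empty "phone" is our only failure case; extra keys are fine.
std::optional<std::string>
read_phone(const std::map<std::string, std::string>& payload) {
    auto it = payload.find("phone");
    if (it == payload.end() || it->second.empty())
        return std::nullopt;
    return it->second;
}
```

Because the reader never enumerates or rejects unknown keys, the sender can add fields freely without a coordinated upgrade.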

Techniques for implementing evolutionary architecture

  • Database Refactoring
    • Decompose big change into series of small changes
    • Each change is a refactoring/migration pair (or triple if you include access code)
      • This should be easy, but data gets dirty over time, making it difficult.
    • Changes compose in the same way functions compose
    • And of course, version control the changes
    • And apply in the various environments during promotion
  • Continuous Delivery
    • Automate environments and configuration
      • This is the underpinning that makes everything work.
    • Automate builds and deployments and use continuous integration
    • Automate testing at all levels
    • Deployments should be boring!!
    • Just because you CAN release at any time, doesn’t mean you HAVE to
  • Choreography
    • How different parts of the system work together, as opposed to orchestration
    • Scripted outcomes and vision
    • Individuals perform to the vision without a conductor
    • Distributes authority about interactions
    • Introduces new kinds of failure scenarios
  • Contract Testing
    • Acceptance tests at the systems interface
    • Documents assumptions made
    • Maximizes parallel independent work
    • Used in conjunction with Postel’s Law
    • One traditional role of Enterprise Architect

Evolutionary Architecture

  • Define your architectural fitness function
    • Must think about this upfront
      • What is most important to you
        • What is your security threat landscape?
        • What are your scaling requirements?
        • What are your performance requirements?
        • What are the things you can’t compromise on?
  • Delay your decisions as long as you can
  • Understand various forms of technical debt
  • Implement evidence based re-use.
  • Create and maintain the testing safety net

Book Recommendations

  • Patterns of Enterprise Application Architecture
  • Enterprise Integration Patterns
  • Craft of being a Architecture
  • Data Architecture

Rebecca Parsons – Publishing a book on

Technical debt is some drag on your development process because of some technical issue.
