Lessons Learned: The Quality of Design is not Fuzzy
Design Quality can be measured, which is the only way to make a design evaluation less personal and therefore more useful. In a previous blog, I mentioned that design is inextricably tied to ego, meaning that criticism is usually taken personally. One of the most important lessons that I have learned in IT is that there is NO ONE good design. The true measure of a design is how well it meets the stated requirements of the stakeholders. In this blog, I will describe ways of arriving at such measures and why they matter.
Design Quality can be Articulated
So what is quality anyway? A former CEO of mine once told a group of his architects that our focus on quality was a hopeless task, as quality was such a fuzzy concept. He was concerned about where we were focusing our efforts on fixed-time and fixed-price projects. I knew from experience that his assertion was incorrect, but to my dismay at the time, I could not express why. So I did some digging and found out that Design Quality can and should be measured. The Software Engineering Institute has done an exceptional job of educating the IT community in this area over the last 15 years. The bottom line: when operational requirements are expressed as qualities, they can be measured either qualitatively (e.g. the need for flexible workflow) or quantitatively (e.g. process 10,000 messages per second with no queue). Functional requirements can also be expressed as qualities, but this blog will focus on operational qualities because they tend to be neglected until very late in the project life cycle.
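To make the quantitative side concrete, here is a minimal sketch of turning a stated throughput requirement into an observed measure. The 10,000 messages-per-second figure comes from the example above; `process_message` and the sample feed are invented placeholders, not a real implementation:

```python
import time

# The stated quantitative requirement: sustain 10,000 messages per second.
REQUIRED_MSGS_PER_SEC = 10_000

def process_message(msg: str) -> str:
    """Placeholder for the real normalize/augment work on one feed message."""
    return msg.upper()

def measured_throughput(messages: list[str]) -> float:
    """Process every message and return the observed messages per second."""
    start = time.perf_counter()
    for msg in messages:
        process_message(msg)
    elapsed = time.perf_counter() - start
    return len(messages) / elapsed

# An invented sample feed, only to exercise the measurement.
feed = ["bid 101.25"] * 50_000
throughput = measured_throughput(feed)
meets_requirement = throughput >= REQUIRED_MSGS_PER_SEC
```

A qualitative quality (say, "the workflow is flexible enough") cannot be reduced to a number this way, which is exactly why both kinds of measures need to be written down and owned.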
Quality is in the Eyes of the Stakeholder
I felt an urgent need to understand and then meet operational requirements early in my career because I programmed in assembly language, building applications that read, normalized and augmented market data feeds. As a result, Low Latency, High Throughput and very High Availability were on my mind as explicit qualities, defined to meet the operational requirements of maintaining a reliable information service that companies would pay for. I later realized that most of my colleagues did not need to design applications under such tight operational constraints, and so for them these types of qualities were implicit or non-existent.
I also had to worry about adding new market data streams to our platform with very little advance notice, and so procedural flexibility and modular re-usability were critical implicit requirements (i.e. the explicit requirement came 3 days before the new data feed was due online, when my manager stated “it’s not like you have anything to do this weekend”). These business-driven requirements were implicit qualities, meaning that they were expected to be built in with everything else, on time and within budget.
Most of my colleagues in the late 1970’s and early 1980’s spent their time making printed reports look and feel intuitive to their user base. This was a very important set of qualities that weren’t on my radar at all. My stakeholders were the consumers of the added value data I produced, and they just wanted that transformed data transmitted fast, reliably and accurately. They would make the pretty reports themselves.
Measuring Qualities is a Rigorous Art
An application’s operational and functional requirements represent how business demand must be met to avoid the perception of failure (i.e. not all failure comes from downtime). Designing to meet these requirements usually means dealing with constraints and trade-offs, as mentioned in my previous blog on design. The real dilemma stems from the following questions:
- What qualities should be measured?
- How should the measures be represented?
- Who should have the final say on the measurement?
As mentioned above, some qualities are not only explicit but quantifiable (e.g. message throughput), yet there are lots of subtleties even regarding throughput (e.g. how many peaks are there and how long does each one last). These subtleties can have significant impact on a design. Then there are implicit and qualitative measures (e.g. is the UI intuitive enough to promote usage; can a workflow be re-sequenced without recompiling and redeploying the software). In these instances, Design Context is the guiding principle: How do the stated requirements and their quality measures guide the design in the business, component, data or hardware context? Can the design be judged based on the qualities that have been chosen? If all the stated qualities are met, will the stakeholders consider this design a success? These questions must be addressed in order to decide what tactics to use when solving a design problem.
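One lightweight way to keep those three questions honest is to record each quality alongside how it is measured and who has the final say. The schema and the sample entries below are my own illustrative assumptions, not a standard notation:

```python
from dataclasses import dataclass

# Illustrative sketch: one record per quality, capturing what is measured,
# how the measure is represented, and who signs off. Field names and the
# example values are invented for this sketch.
@dataclass(frozen=True)
class QualityMeasure:
    quality: str        # what quality is measured
    measure: str        # how the measure is represented
    quantitative: bool  # hard number vs. qualitative judgment
    final_say: str      # who has the final say on the measurement

requirements = [
    QualityMeasure("throughput", "10,000 msgs/sec at peak, no queue", True, "operations"),
    QualityMeasure("flexibility", "re-sequence a workflow without redeploying", False, "business stakeholders"),
]

# Quantitative entries can be tested automatically; qualitative ones
# must be judged by the named owner.
automatable = [q.quality for q in requirements if q.quantitative]
```

Writing the measures down this way also exposes implicit qualities: if a quality the business expects has no row, no measure, and no owner, it will surface only when it is missed.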
Measuring Qualities – A Question of Balance
Utilizing the power of qualities engages the art of balance. Meeting an extreme subset of requirements will likely cause other requirement subsets to be missed. I received a lesson in that ugly reality early in my career when confronted with extremely low latency, very high throughput and very high availability all at once. Something had to give, and the business drivers determined which gave FIRST. In my market data world of the late 1970’s, high-availability software tactics were given lower priority because the hardware wasn’t that reliable to begin with; all the software tactics in the world were not going to prevent a disk crash. Performance was therefore given the highest priority, and availability was addressed in a brute-force manner: triple redundancy of the production platform was considered acceptable.
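That priority call can be sketched as a simple ranking driven by business-driver weights. The weights below are invented for illustration, not the real figures from that platform:

```python
# Hypothetical business-driver weights for the market data platform
# described above; the numbers are invented for this example.
driver_weights = {
    "low latency": 0.45,
    "high throughput": 0.35,
    "high availability": 0.20,  # demoted: the hardware failed anyway
}

# Rank qualities by weight; the lowest-ranked quality is the first to "give".
ranked = sorted(driver_weights, key=driver_weights.get, reverse=True)
gives_first = ranked[-1]
```

Real balancing is rarely this linear, but making the weights explicit forces the business stakeholders to state which quality they are prepared to sacrifice first.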
In today’s world of rapidly changing client-facing applications, perfect reliability might take a back seat to maximum flexibility, though Amazon wouldn’t think so. Understanding the business context is critical when balancing qualities in a design, and the business stakeholders need to weigh in on these decisions. These decisions have consequences that need to be conveyed and understood; they can have powerful effects on a company’s brand.
IT Design – Applying Capabilities to meet Quality Measures
The current move to Cloud, including the ability to use a predefined Platform as a Service (PaaS), is making some of these difficult trade-off decisions a little less severe. Now, the design phase can include a decision process regarding which new platforms and technical capabilities to test and choose, as opposed to the traditional process of designing every piece of technical capability through the creation of a software tactic. This shift from building to buying components and platforms will change the expectations of what can be built in a constrained period of time. Once the critical capability decisions have been made, it becomes possible to consider specific PaaS products such as EMC’s Pivotal, among others.