Human Scale Software Architecture

In the physical built world there is the concept of “human scale” architecture, in other words, architecture that has been designed explicitly with the needs and constraints of humans in mind: humans who are typically between a few feet and seven feet tall, will only climb a few flights of stairs at a time, and so on.

What has been discovered in the physical construction of human scale architecture is that it is possible to build buildings that are more livable and more desirable to live in, that are more maintainable, that can evolve and be turned to different uses over time, and that need not be torn down far short of their potential useful life. We bring this concept to the world of software and software architecture because we feel that some of the great tragedies of the last ten or fifteen years have been the attempts to build and implement systems that are far beyond human scale.

Non-human scale software systems

There have been many reported instances of “runaway” projects: mega-projects and projects that collapse under their own weight. The much-quoted Standish Group reports that projects over $10 million in total cost have close to a 0% chance of finishing successfully, with success defined as delivering most of the promised functions within some reasonable percentage of the original budget.

James Gosling, the father of Java, recently reported that most Java projects have difficulty scaling beyond one million lines of code. Our own observations of such mega-projects as the Taligent project, the San Francisco project, and various others are that tens of thousands, or in some cases hundreds of thousands, of classes in a class library are not only unwieldy for any human to comprehend and manage but dysfunctional in and of themselves.

Where does the “scale” kick in?

What is it about these systems that exceeds the reach of humans? Unlike buildings, where the scale is proportional to the size of our physical frames, information systems have no such boundary or constraint. What we have are cognitive limits. George Miller famously pointed out in the mid-1950s that the human mind can retain only seven, plus or minus two, objects in its immediate short-term memory. That is a very limited range of cognitive ability. We have discovered that short-term memory can be greatly aided by visual aids and the like (see our paper, “The Magic Number 200 +/- 50”), but even then there are definite limits in the realm of visual acuity and field of vision.

Leveling the playing field

What data modelers discovered long ago, although in practice they had a difficult time disciplining themselves to implement it, was that complex systems needed to be “leveled,” i.e., partitioned into levels of detail such that at each level a human could comprehend the whole. We need this for our enterprise systems now. The complexity of existing systems is vast, and in many cases there is no leveling mechanism.
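As a minimal sketch of what leveling looks like in practice, consider the toy model below. The subject areas and entity names are invented for illustration; the point is only that no single level asks the reader to hold more than a handful of things in mind at once.

```python
# A hypothetical "leveled" enterprise model: a few subject areas at the top,
# each expandable into its own small set of entities.
enterprise_model = {
    "Party":     ["Person", "Organization", "PartyRole", "PartyRelationship"],
    "Product":   ["Product", "ProductCategory", "PriceSchedule"],
    "Agreement": ["Contract", "ContractLine", "Amendment"],
    "Event":     ["Order", "Shipment", "Payment"],
}

# At the top level a reader sees only a handful of subject areas...
print(list(enterprise_model))          # ['Party', 'Product', 'Agreement', 'Event']

# ...and drills into one area only when its detail is actually needed.
print(enterprise_model["Agreement"])   # ['Contract', 'ContractLine', 'Amendment']
```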

The Enterprise Data Model: Not Human Scale

Take, for instance, the corporate data model. Many corporations constructed one in the 1980s or 1990s. Very often they started with a conceptual data model, which was then transformed into a logical data model and eventually found its way into a physical data model: an actual implemented set of tables, columns, and relationships in databases. And while there may have been some leveling or abstraction in the conceptual and logical models, there is virtually none in the physical implementation. There is merely a partitioning, which has usually occurred either by the accidental selection of projects or by the accidental selection of packages to acquire and implement.

As a result, we very often have the very same concept implemented in different applications under different names, or sometimes similar concepts under different names. In any case, what is implemented or purchased is very often a large, flat model consisting of thousands, and usually tens of thousands, of attributes. Any programmer, and many users, must understand what all or many of these attributes are, how they are used, and how they relate to each other in order to safely use the system or make modifications to it. Understanding thousands or tens of thousands of attributes is at the edge of human cognitive ability, and generally is done only by a handful of people who devote themselves to it full time.
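A contrived illustration of the duplication described above. The field names and values are hypothetical, not drawn from any real system; the point is that the same concept surfaces under different names in separately built applications, and nothing in either schema records that fact.

```python
# The same "customer" concept carried under different names in two applications.
billing_customer = {"cust_no": "C-1047", "cust_nm": "Acme Corp", "cr_limit": 50_000}
crm_account      = {"acct_id": "C-1047", "acct_name": "Acme Corp", "credit_cap": 50_000}

# Anyone writing code that spans both systems must simply know that cust_no and
# acct_id refer to the same thing; that knowledge lives only in people's heads.
print(billing_customer["cust_no"] == crm_account["acct_id"])  # True, but no schema says so
```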

Three approaches to taming the complexity

Divide and Conquer

One of the simplest ways of reducing complexity is to divide the problem up. This only works if, after you have made the division, you no longer need to understand the rest of the parts in detail. Merely dividing an ERP system into modules generally does not reduce the scope of the complexity that needs to be understood.
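A small sketch of the criterion, assuming a hypothetical inventory module: the division pays off only when other parts of the system can rely on a narrow interface and ignore the internals behind it.

```python
class InventoryService:
    """The only thing other modules need to understand about inventory."""

    def quantity_on_hand(self, sku: str) -> int:
        # Public interface: one question, one answer.
        return self._lookup(sku)

    def _lookup(self, sku: str) -> int:
        # Internal detail (warehouses, reservations, reorder logic) stays hidden;
        # a real implementation would live here.
        return {"WIDGET-1": 42}.get(sku, 0)

# A caller needs to understand one method, not the module's internals.
print(InventoryService().quantity_on_hand("WIDGET-1"))  # 42
```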

Useful Abstraction

By abstracting we gain two benefits: first, there are fewer things to know and deal with; second, we can concentrate on behavior and rules that apply to the abstraction. Rather than deal separately with twenty types of licenses and permits (as one of our clients was doing), it is possible to treat all of them as special cases of a single abstraction. For this to be useful, two more things are needed: there must be a way to distinguish the variations without having to deal with the differences all the time, and it must be possible to deal with the abstraction without invoking all the detail.
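A minimal sketch of such an abstraction. The field names and rules are hypothetical: twenty permit types collapse into one Permit, shared behavior is defined once on the abstraction, and a discriminator is consulted only by code that genuinely cares about a variation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Permit:
    permit_type: str   # "building", "liquor", "signage", ... the variation, when it matters
    issued: date
    term_days: int

    # Behavior defined once on the abstraction applies to every variation.
    def expires(self) -> date:
        return self.issued + timedelta(days=self.term_days)

    def is_active(self, today: date) -> bool:
        return today <= self.expires()

# Most code deals only with the abstraction and never inspects permit_type.
p = Permit("liquor", date(2024, 1, 1), 365)
print(p.is_active(date(2024, 6, 1)))  # True
```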

Just in Time Knowledge

Instead of learning everything about a data model up front, with proper tools we can defer learning about part of the model until we need it. This requires an active metadata repository that can explain the parts of the model we don’t yet know in terms that we do know.
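A minimal sketch of the idea, with an invented repository: definitions are looked up at the moment a user meets an unfamiliar part of the model, expressed in terms they already know, rather than learned up front. The entries and field names below are hypothetical.

```python
metadata = {
    "cr_limit": {
        "definition": "Maximum unpaid balance a customer may carry.",
        "defined_in_terms_of": ["customer", "balance"],
    },
    "acct_id": {
        "definition": "Identifier for a customer account; the same concept as cust_no.",
        "defined_in_terms_of": ["customer"],
    },
}

def explain(term: str) -> str:
    # Explain a model element on demand, in terms the reader already knows.
    entry = metadata.get(term)
    return entry["definition"] if entry else f"No definition recorded for {term!r}."

# Learn about a field only when we actually encounter it.
print(explain("cr_limit"))
```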

Written by Dave McComb
