It Isn’t Architecture Until It’s Built

It’s our responsibility as architects to make sure our work is implemented. We’ve been dealing a lot lately with questions about what makes a good architecture, what should be in an architecture, what’s the difference between a technical architecture and an information architecture, etc. But somewhere along the line we failed to emphasize perhaps one of the most important points in this business.

“It isn’t architecture until it’s built.” While that seems quite obvious when it’s stated, it’s the kind of observation that we almost need to have tattooed or at least put on our wall where we won’t forget it. It’s very easy to invest a great deal of time in elegant designs and even plans. But until and unless the architecture gets implemented it isn’t anything at all; it’s just a picture. What does get implemented has an architecture. It may not be very good architecture. It may not lend itself to easy modification or upgrade or any of the many virtues that we desire in our “architected” solutions. But it is architecture.

So the implication for architects is that we need to dedicate whatever percentage of our time is necessary to ensure that the work gets implemented. It’s really very binary. You belong to an architectural group; maybe you are the only architect, or maybe there are five people in your group. In either case, if your work product results in a change to the information architecture, the benefits can be substantial. Almost any architectural group could be justified by a 10 or 20% improvement, and frankly, in most shops a 50 to 90% improvement in many areas is possible. So on the one side, if a new architecture gets adopted at all, it’s very likely to have a high payback. But the flip side is that a new architecture, no matter how elegant, is not worth anything if it’s not implemented, and the company would be acting rationally if it terminated all the architects.

The implication is that as architects we need to determine the optimal blend between designing the “best architecture” and investing our time in the various messy and political activities that ensure an architecture will get implemented. These range from working through governance procedures, to making sure that management is clear about the vision, to continually returning to the cost-benefit advantages. The specifics are many and varied. In some organizations you may be lucky enough not to have to invest a great deal of your time to get a new architecture adopted and implemented. Perhaps you’re fortunate enough to have insightful leadership or a culture that is eager to embrace a new architecture. If that’s the case, you might get away with spending 10 or 20% of your time ensuring that your architecture is being implemented and spend the vast majority on developing, designing, and enhancing the architecture itself.

However, in many organizations, life for the architect will not be that easy. You might find it profitable to spend nearly half your time in activities meant to promote the adoption of the architecture. Certainly you should never pass up an opportunity to make a presentation or help goad a developer along. Indeed, the stakes are high enough that, given an opportunity to present, you would do well to invest disproportionately in the quality of the presentation: a perfunctory presentation about the status of a particular technical standard is not likely to move developers or management to adopt it, and you may need to return to the theme over and over again.

As someone once pointed out to me, in matters such as this the optimal amount of communication is to over-communicate. The rule is: when you’re absolutely certain that you’ve communicated an idea so many times, so thoroughly, and so exhaustively that it just is not possible that anyone could tolerate hearing it any more, that’s probably just about right. Experience says that when we think we’ve communicated something thoroughly and repeatedly, three-fourths of the audience has still not internalized the message. People are busy; messages bounce off them and need to be repeated over and over and over. And you’ll find that each part of your audience will accept and internalize a message in a different way, from a different presentation, and at a different rate. I’m continually amazed when I get feedback from a particular stakeholder at some meeting. The coin finally drops and they become an advocate for some position we’d taken. In some cases I’ve gone back and realized that we’d presented it many times to that person and somehow one time it finally took. In other cases we realized that, in fact, we hadn’t already presented it to that individual: we thought we’d covered everyone, but they weren’t in certain meetings, or we’d repeated something so many times we assumed everybody must have heard it when, in fact, that was not the case at all.

In closing, I’d like to recommend that every architect make a little plaque and put it near their desk that says: “It isn’t architecture until it’s built.” That might help you decide what you are going to do tomorrow morning when you come into work.

Written by Dave McComb

Event-Driven Architecture

Event-driven architecture is the latest buzzword in the enterprise architecture space.

If you’ve been reading the trade press lately, you no doubt have come across the term event-driven architecture as the latest buzzword in the enterprise architecture space.

So you dig around to find out just what this event-driven architecture is. You’ll find that event-driven architecture (EDA) is an application architecture style defined primarily by the interchange of real-time or near-real-time messages or events.

Astute readers of this web site and our white papers, attendees at our seminars, and of course our clients will recognize this as exactly what we have been espousing for years as what a good service-oriented architecture looks like. You may recall our Enterprise Message Modeling architecture, which prominently featured publishing and whose event analysis defined the key messages being sent from application to application. You may recall our many exhortations to use “publish and subscribe” approaches for message dispatch whenever possible. You may recall us relying on events to populate managed replicated stores for just this purpose.

So, you might ask, why does the industry need a new acronym to do what it should have been doing all along?

First, a bit of history. In the 1960s MRP (Material Requirements Planning) was born. To the best of my knowledge, the first commercial implementation was at the GE Sylvania television plant. The system started from the relatively simple idea that a complex Bill of Material could be exploded and time-phased to create a set of requisitions for either inventory parts or purchased parts. But these early systems went considerably beyond that and “closed the loop”: checking inventory, lead times, etc. After the successes of these early systems, a number of packaged software vendors began offering MRP software. However, to hit the lowest common denominator and make the product as simple as possible, these products very often did not “close the loop”; they did not factor in changes in demand to already existing schedules, and so on. Then a mini-industry sprang up around APICS, the American Production and Inventory Control Society, to help practitioners deal with these systems. What it soon proposed was that these MRP systems needed to be “closed loop.” Sure enough, a few vendors did produce “closed loop” systems. This created a marketing problem. The response was MRPII and a change in the acronym; it now stood for Manufacturing Resource Planning.

The message became “MRPII is what everyone needs,” and most of the education and marketing was about the shortcomings of the earlier MRP systems. Of course, the earlier MRP systems were, for the most part, just bad implementations, not something more primitive in the way that we look at Paleolithic art.

And so it is with SOA. Apparently what has happened is that the Web services movement has become associated with service-oriented architecture. However, most practitioners are comfortable using Web services as a simple replacement for the remote procedure call (RPC). As a result, many organizations are finding their good intentions of SOA being sucked down into a distributed request/reply environment, which is neither satisfying the issues the architecture was meant to address nor delivering on its promises: loose coupling and the commoditization of shared services.
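
To make the contrast concrete, here is a minimal, hypothetical sketch in Python (the names EventBus and “order.created” are invented; no particular middleware is implied) of the publish-and-subscribe style we have in mind. The point is the decoupling: the publisher announces that something happened and knows nothing about its consumers, whereas an RPC caller must know about and invoke each downstream service itself.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """A toy in-process event bus; real systems would use messaging middleware."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher has no idea who (if anyone) is listening.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
bus.subscribe("order.created", lambda e: print("billing saw order", e["id"]))
bus.subscribe("order.created", lambda e: print("warehouse saw order", e["id"]))

# Request/reply would couple the caller to billing and warehouse directly;
# here it simply records the fact that an order was created.
bus.publish("order.created", {"id": 42})
```

Adding a third consumer later requires no change to the publisher at all, which is precisely the loose coupling the architecture was meant to deliver.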

Perhaps it’s inevitable we’ll have to deal with new acronyms like EDA. But if you’ve been tuned in here for a while, think of EDA as SOA done right.

Written by Dave McComb


Strategy and Your Stronger Hand

Those of us in the complex sale sector need to be aware that volume operations from adjacent marketplaces will soon enter ours.

The December 2005 issue of the Harvard Business Review has excellent articles by two of my favorite business authors, Geoffrey Moore (“Strategy and Your Stronger Hand”) and Clayton Christensen (“Marketing Malpractice: The Cause and the Cure,” which is applicable as we start looking at commercializing Semantic Technology).

Moore’s article has many fresh insights; chief among them is that companies have a dominant business model. The model does not depend on the industry they are in, nor on their age or size. He likens this to our dominant handedness, and as the editor pointed out on the editorial page, “It’s easier to convert a shortstop into an outfielder than it is to change a southpaw into a righty.”

Some firms’ dominant model is “volume operations” and for others it is “complex systems.” The first relies on many customers, brands, advertising, channels, and compelling offers. The latter relies on targeted customers and the integration of third-party products into total solutions. For each, the grass often looks greener in the other model, but almost no business succeeds when it attempts to change models.

The rhythm of most high tech sectors is that the complex sale companies forge new territories and solve unique customer problems. The volume companies come in later and try to commoditize the solution. To survive, the complex sale companies need to do two things simultaneously: defend, for as long as possible, the position they have already won, and move up the solution chain and incorporate the newly commoditized components into an even more interesting solution.

The one thing they need to avoid is trying to convert their own early wins into volume opportunities. What does this have to do with semantics? We are just beginning the commercial rollout of this technology. We will have all the fits and starts of any new high-tech sector. We have an opportunity to be a bit more self-aware.

Those of us in the complex sale sector need to be aware that volume operations from adjacent marketplaces will soon enter ours. We need to be continually vigilant about incorporating rather than competing, and moving on up the solution chain. Consumers of this technology have the opposite challenge: how to recognize which aspects of their problems require “complex” solutions and which aspects are ripe to be solved with “volume” solutions.


The Zachman Framework

Shortly after you start your inquiry about software architecture, or enterprise architecture as it is often called, you will come across the Zachman Framework.

The Zachman Framework is a product of John Zachman who has been championing this cause for at least 15 years, first with IBM and then on his own. As with so many things in this domain, we have an opinion on the Zachman Framework and are more than willing to share it.

What Is the Zachman Framework?

First, though, let’s describe just what the Zachman Framework is. John Zachman believes, as we do, that software is a large-scale human-created artifact, and as such we may learn a considerable amount from the analogy between software and other large-scale artifacts of human creation. In the early days of software development, there were many who felt that perhaps software was a creative activity more akin to writing or artistry, or perhaps craftsmanship on a small scale.

However, this idea has mostly faded as the scale and interconnectedness of the pieces have continued to increase. Some of John’s early writings compared software or enterprise architecture to the architectures needed to support airframe manufacturing, aircraft, rockets, and the like. His observation was that in order to deal with the complexity of the problem in these domains, people have historically divided the problem into manageable pieces. The genius of John’s approach has been in the orthogonality of the dimensions along which he divided the framework.

The Zachman Framework is displayed as a matrix. However, John is careful to point out that it is not a “matrix” but a “schema” and should not be extended or modified, as in his belief it is complete in its depth and breadth.

The orthogonal dimensions referred to above are shown in rows and columns. In the rows are the points of view of the various stakeholders of the project or the enterprise. At the highest level, for instance, is the architecture from the point of view of the owner of the project or the enterprise, and as we transform the architecture through the succeeding rows we gradually get to a more and more refined scope, as would be typical of the people who need to implement the architecture or, eventually, the products of the architecture. In a similar way, the columns are orthogonal dimensions; in this case, John refers to them as the interrogatories. Each column is an answer to one of Rudyard Kipling’s six honest serving-men: who, what, when, where, how, and why. Each column is a different take on the architecture; for instance, the “what” column deals with information, the “things” about which the system is dealing.

In the aircraft analogy, it would be the materials and the parts. Likewise, the “how” column refers primarily to functions or processes; the “where” to distribution and networking; the “when” to scheduling, cycle times, and workflow; the “who” to the people and organizations involved in each of the processes; and the “why” to strategy, eventually down to business rules.

Behind this framework, then, are models which allow you to describe an architecture or any artifact (a specific design of a part, a product, a database table, or whatever) within the domain of that cell. Many people have the misperception that at the high level there is less detail and that you add detail as you come down the rows. As John is very fond of pointing out, you need “excruciating” detail at every level; what occurs in the transition from row to row is the addition of implementation constraints that make the thing buildable.

John has been a tireless champion of this cause, and from that standpoint we have him to thank for pointing out that this is an issue, and furthermore for championing it and keeping it in the forefront of discussion for a long, long period of time. He’s been instrumental in making sure that senior management understands the importance and the central role of enterprise architecture.

What the Zachman Framework Is Not

At this point, though, we need to point out that the Zachman Framework is not an architecture, and the construction of models behind the framework is not, in and of itself, an architecture. It may be a way to describe an architecture, and it may be a very handy way of gathering and organizing the input you need for an architectural definition project, but it is not an architecture, nor is it a methodology for creating one. We believe the framework is an excellent vehicle for explaining, communicating, and understanding either a current architecture or a proposed architecture.

However, it is our belief that a software architecture, much like a building architecture or an urban plan, is a created and designed artifact that can only be described and modeled after it has been created, and that the act of modeling it is not the act of creating it. So in closing, to reconcile our approach with the Zachman Framework, we would say, first, that we have a methodological approach to creating enterprise software architecture. Second, we have considerable experience in actually performing this and creating architectures that people have used to develop and implement systems. Third, the architectures we have designed and developed can be modeled, described, and communicated using the Zachman Framework. But that does not mean that they were, or in our opinion even could be, created through a methodological modeling process as suggested by the Zachman Framework.

Architecture and Planning

“Action without planning is folly but planning without action is futile.”

In this write-up, we explore the intimate connection between architecture and planning. At first blush, they seem to be completely separate disciplines. On closer examination, they appear to be two sides of the same coin. But in the final examination, we find that they are intimately intertwined yet still separate and potentially independent. The motivation for this paper was an observation that much of our work deals with system planning of some variety, and yet there is virtually nothing on our web site on this topic.

On one level that may be excusable. There is nothing drastically new about our brand of planning that distinguishes it from planning as it has been practiced for decades. On the other hand, system architectures typically are new and evolving, and there are new observations to be made. But there’s more to it than that. We have so baked planning into our architectural work that we no longer notice that it’s there. This paper is the beginning of an attempt to extricate the planning and describe it as a subdiscipline of its own. Are architecture and planning the same thing? Can we have one without the other? This is where we begin our discussion.

Certainly, we can have planning without architecture. Plenty of trivial planning is done without architecture: we can plan a trip to the store or a vacation without dealing with architecture. We can even do a great deal of business planning, even system planning, as long as the implicit assumption is that new projects will continue using the existing architecture. But can we have architecture without planning? Well, certainly it’s possible to do some architectural work without planning.

There are two major ways this can come to be. One is that we can allow developers to develop whatever architecture they want without subjecting it to a planning process. The end product of this is the ad hoc or accidental changes that so characterize the as-built architectures we find. The other way, which is just as common, is to allow an architectural group to define an architecture without requiring that they determine how we get from where we are to where we want to be. Someone once said, “Action without planning is folly, but planning without action is futile.” The architect who does architectural work without doing any planning is really just participating in an exercise in futility.

An intentional architecture requires a desired “to be” state, where some aspect of software development, maintenance or operation is better than it currently is. There are many potential aspects to the better state in the “to be” architecture: it could be less risky, it could be more productive, it could scale better, it could be more flexible, it could be easier for end-users to use, it could be more consistent, etc.

What they all share is that they are not the same as what exists now, and migrating from the “as is” to the “to be” requires planning. In the nineties, we seemed able to get away with a much more simplistic view of planning: “rip and replace” was the order of the day once you determined what the target architecture looked like. Most organizations now have far too much invested in their legacy systems to contemplate a “rip and replace” strategy to improve either their architectures or their applications. As a result, the onus is on the architects to determine incremental strategies for shifting the existing architecture to the desired one, since the company must continue to run through the potentially long transition period.

The constraints of the many interim stages of the evolving architecture and applications create many challenges for the planner. In some ways, it’s much like the widening of a heavily trafficked highway: it would be quite simple to widen it if we could merely get all the traffic off of it, but given that we can’t, there is often an extremely elaborate series of detours, each of which has to be planned, implemented, and executed. In conclusion, I think we can see that architecture desperately needs planning. Indeed, the two are inseparable. While planning can certainly live on in the absence of architecture, architecture will not make any meaningful progress in any established company without an extreme commitment to planning.

By Dave McComb

Response Time Quanta

How do we perceive software response time? (I’m indebted to another author for this insight, but unfortunately I cannot credit him or her because I’ve lost the reference and can’t find it either in the pile of papers I call an office or on the Internet. So, if anyone is aware whose insight this was, please let me know so I can acknowledge them.)

Basic Thesis

  • In most situations, a faster response from a system (whether it is a computer system or a human system) is more desirable than a slower one.
  • People develop strategies for dealing with their experience of and expectation of response times from systems.
  • Attempts to improve response time will not even be perceived (and therefore will be effort wasted) unless the improvement crosses a threshold to where the user changes his or her strategy.

These three observations combine to create a situation where the reaction to response time improvement is not linear: a 30% improvement in response time may produce no effect, while a 40% improvement may have a dramatic effect. It is this “quantum-like” effect that gave rise to the title.

First Cut Empirical Model – No Overlaps

Our first cut of the model lumps each response into a non-overlapping range. As we’ll observe later, it is not likely that simple; however, it is surprising how far you can get with this.

Each quantum is listed below with its response-time range, examples, the typical user perception, and the user’s response or strategy.

  • Simultaneous (less than 1/10 second). Examples: mouse cursor movement on a fast system, selection highlighting, turning on an incandescent light bulb. Perception: users believe that the two things are the same thing, that there is no indirection; moving the mouse is moving the cursor, the click directly selects the item, the switch turns on the light. Strategy: transparency; users are not aware there is an intermediary between their action and the result.
  • Instant (1/10 to 1/2 second). Examples: scrolling, dropping a physical object. Perception: a barely perceptible difference between the stimulus and the response, but just enough to realize the stimulus causes the effect. Strategy: users are aware but in control; their every action is swiftly answered with a predictable response; no strategy required.
  • Snappy/Quick (1/2 to 2 seconds). Examples: opening a new window, pulling down a drop-down list, turning on a fluorescent light. Perception: must pay attention, “did I click that button?” (Have you ever spun the knob on a bedside lamp in a hotel, thinking it wasn’t working, when you were just too fast for the fluorescent?) Strategy: a brief pause to prevent initiating the response twice; requires conscious attention to what you are doing, which distracts from the direct experience.
  • Pause (2 to 10 seconds). Examples: a good web site on a good connection; the time for someone to orally respond to a question. Perception: I have a few seconds to focus my attention elsewhere; I can plan what I’m going to do next, start another task, etc.; frustration if it’s not obvious the activity is in progress (an hourglass is needed). Strategy: think of or do something else; many people now click on a web link and then task-switch to another program or look at their watch; this was the time when data entry people would turn the page to get to the next document.
  • Mini Task (10 to 90 seconds). Examples: launching a program, shutting down, asking someone to pass something at the dinner table. Perception: this task is going into the background until it is complete; time to start another task (but not multiple other tasks); time for a progress bar. Strategy: you’re obligated to do something else to avoid boredom; pick up the phone, check your to-do list, engage in conversation, etc.
  • Task (90 seconds to 10 minutes). Examples: a long compile, turning on your computer, rewinding a video tape. Perception: not only do I start another task of comparable length, I also expect some notification that the first task is complete (a dialog box, the click the video makes). Strategy: this is where the user starts another task, very often changing context (leaving the office, getting on the phone, etc.); however, the second task may be interruptible when the first task finishes.
  • Job (10 to 60 minutes). Examples: a very long compile, doing a load of laundry. Perception: the job is long enough that it is not worth hanging around until it is complete. Strategy: plan ahead; do not casually start a process that will take this long until you have other filler tasks planned (lunch, a meeting, something to read, etc.); come back when you’re pretty sure it will be done.
  • Batch process (1 to 12 hours). Examples: an old-fashioned MRP or large report run, an airplane flight. Perception: deal with the schedule more than monitoring the actual event in progress. Strategy: schedule these.
  • Wait (1/2 to 3 days). Examples: a response to email, a reference-check call back, dry cleaning. Perception: I potentially have too many of these at once; I’ll lose track of them if I don’t write them down. Strategy: to-do lists.
  • Project (3 days to 4 months). Examples: a software project, a marketing campaign, gardening. Perception: this is too long to wait to find out what is happening. Strategy: active statusing at periodic intervals.

My contention is that once a user recognizes a situation and categorizes it into one of these quanta, they will adopt the appropriate strategy. For many of the strategies they won’t notice if the response time has improved, until and unless it improves enough to cause them to change strategies. Getting a C++ compile time down from 4 minutes to 2 minutes likely won’t change anyone’s work habits, but going to a Pause or Snappy turnaround, as in a Java IDE, will. In many cases the strategy obviates any awareness of the improvement. If I drop my car at the car wash before lunch and pick it up afterward, I’ll have no idea whether they improved the throughput such that what used to take 40 minutes now takes only 15. However, a drive-through that takes only 10 minutes might cause me to change how I do car washes.
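
To make the model concrete, here is a small Python sketch of the first-cut classifier. The boundary values come straight from the list above, and the compile example shows why a 2x speed-up inside a quantum goes unnoticed.

```python
# Non-overlapping first-cut model: map a response time in seconds to a quantum.
# Boundaries are the upper edges of the ranges listed above.
QUANTA = [
    (0.1, "Simultaneous"),
    (0.5, "Instant"),
    (2, "Snappy/Quick"),
    (10, "Pause"),
    (90, "Mini Task"),
    (600, "Task"),              # 10 minutes
    (3600, "Job"),              # 60 minutes
    (43200, "Batch process"),   # 12 hours
    (259200, "Wait"),           # 3 days
]

def quantum(seconds: float) -> str:
    for upper_bound, name in QUANTA:
        if seconds < upper_bound:
            return name
    return "Project"

# The quantum, not the raw number, is what users react to: a compile that
# drops from 4 minutes to 2 minutes stays a "Task", so habits don't change.
assert quantum(240) == quantum(120) == "Task"
# Dropping under 2 seconds crosses into "Snappy/Quick", and strategies shift.
assert quantum(1.5) == "Snappy/Quick"
```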

Overlapping Edges

While I think the quantum effect is quite valid, I don’t believe the categories are quite as precise as I suggested, and I think they may vary as someone moves up and down the hierarchy. For instance, a 2.5-second response time may in some contexts be considered snappy.

Implications

I think this has implications for systems design as well as business design. The customer-facing part of a business presents a response time to the customer. The first implication is that any project (software, hardware, network improvement, or business process reengineering) should have a response time goal, with a rationale behind it, just as valid as any other requirement of the project. Where an improvement is desired, the improvement should cross at least one quantum threshold, and the benefit ascribed to doing so should be documented. IBM made hay in the 1970s with studies showing that dramatic productivity gains from sub-second response time on their systems more than made up for the increased cost of hardware. What was interesting was that the mathematical savings from the time shaved off each transaction wasn’t enough to justify the change; rather, users worked with their systems differently (i.e., they were more engaged) when the response time went down.

Some implications:

  • Call center response time: if you expect the call will be a “job” (more than 10 minutes), you will plan it much more carefully.
  • Online ordering: when products arrive first thing the next morning and people expect that, they deal with ordering by setting up reminders that something will arrive.
  • Installation programs: unless the install is a “mini task” and can be done in-line (like getting a plug-in), you need to make sure that all the questions can be answered up front so the install can then run in the background. Many writers of installation programs wrongly believe that asking the user questions throughout the installation process will make it feel snappy. Nobody thinks that; they expect it to be a “task” and would like to turn their attention elsewhere. However, if they do something else, come back, and find the install stopped because it was waiting for more input, they get pissed (it was supposed to be done when they got back to it).

Written by Dave McComb

Time Zones

Reflections on low-level ontology primitives.

We had a workshop last week on gist (our minimalist upper ontology). As part of the aftermath, I decided to get a bit more rigorous about some of the lowest-level primitives. One of the basic ideas about gist is that you may not be able to express every distinction you might want to make, but at least what you do exchange through gist will be understood and unambiguous. In the previous version of gist I had some low-level concepts, like distance, which was a subtype of magnitude. And there was a class distanceUnit, which was a subclass of unitOfMeasure. And a unit of measure has a property that points to a conversion factor (i.e., how to convert from that unit of measure to the base unit of that “dimension”). But what occurred to me just after the workshop is that two applications or two organizations communicating through gist could still create problems by picking different bases (i.e., if one said its base for distance was the meter and another the foot, they have a problem).
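
Here is a minimal sketch of the problem, with invented Python class names (Magnitude, UnitOfMeasure) that only loosely echo the gist concepts: each unit carries a conversion factor to its dimension’s base unit, but if two systems choose different base units, the same quantity normalizes to different numbers.

```python
from dataclasses import dataclass

@dataclass
class UnitOfMeasure:
    name: str
    dimension: str
    conversion_factor: float  # multiplier to this system's chosen base unit

@dataclass
class Magnitude:
    value: float
    unit: UnitOfMeasure

    def in_base(self) -> float:
        return self.value * self.unit.conversion_factor

# System A picked the meter as its base unit for distance...
meter_a = UnitOfMeasure("meter", "distance", 1.0)
# ...while System B picked the foot, so its factor for a meter is 3.2808.
meter_b = UnitOfMeasure("meter", "distance", 3.2808)

# The same 100 meters "normalizes" to different numbers in each system:
print(Magnitude(100.0, meter_a).in_base())  # 100.0  (in meters)
print(Magnitude(100.0, meter_b).in_base())  # 328.08 (in feet)
# Unless both sides agree on the base units (e.g., the NIST/SI bases),
# exchanged base-unit values are ambiguous.
```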

This was pretty easily solved by going to NIST, and getting the best thinking on what these dimensions should be and what the base unit of each dimension should be. Looking at it, I don’t think there ought to be much problem with people adopting these. Emboldened, I thought I would do the same for time.

For starters, universal time seems to be the way to go. However, many applications record time in local time, so we need some facility to recognize that and provide an offset. Here’s where the problem came in, and maybe you, dear readers, can help. After about an hour of searching the web, the best I could find for a standard in this area is something called the tz database. While you can look up various cities, I didn’t see anything definitive on what geographical regions make up each of the time zones. To make things worse, the abbreviations for time zones are not unique; for instance, there is an EST in North America and one in Australia. If anyone has a thought in this area, I’m all ears.
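
For what it’s worth, here is how the tz database looks from code today. The sketch uses Python’s zoneinfo module, which is backed by the IANA tz database; zones are keyed by region/city names precisely because the bare abbreviations collide. (Note that more recent releases of the database render the Australian abbreviation as AEST, partly to reduce the very ambiguity mentioned above.)

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; wraps the IANA tz database

# The database keys are geographic names, not abbreviations...
ny = datetime(2024, 1, 15, 12, 0, tzinfo=ZoneInfo("America/New_York"))
bne = datetime(2024, 1, 15, 12, 0, tzinfo=ZoneInfo("Australia/Brisbane"))

# ...because abbreviations are ambiguous: both of these zones were
# historically abbreviated "EST", with very different UTC offsets.
print(ny.tzname(), ny.utcoffset())    # EST, offset -05:00
print(bne.tzname(), bne.utcoffset())  # AEST (formerly EST), offset +10:00
```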

Semantisize

Semantic technology resources

I was alerted to this site, www.semantisize.com, by a comment. It’s pretty cool. You can while away a lot of time on this site, which is rounding up lots of podcasts, videos, etc., all related to Semantic Technology. I got a kick out of a video of Eric Schmidt taking a question from the floor on “What is Web 3.0?” Schmidt’s answer: “I think you [the questioner] just made it up.”

Part 6: Definitions are even more important than terms are

In recent posts, we stated that while terms are less important than concepts, and they mean nothing from a formal semantics perspective, they are very important for socializing the ontology. The same is true of text definitions, but even more so. Just like terms, the text definitions and any other comments have zero impact on the inferences that will be sanctioned by the ontology axioms. However, from the perspective of communicating meaning (i.e., semantics) to a human being, they play a very important role. Many of the people that want to understand the enterprise ontology will mainly be looking at the terms and the text definitions, and never see the axioms.

Text definitions help the human get a better idea of the intended semantics of a term, even for those who choose to view the axioms as well. For those interested in the axioms, the text helps clarify the meaning and makes it possible to spot errors in the axioms; for example, the text may imply something that conflicts with or is very different from what the axioms say. The text definitions also say things that are too difficult, or unnecessary, to say formally with axioms. Other comments that are not definitions but should still be included in the ontology are examples and counterexamples: things that are true about a concept but are not part of defining it. Collectively, all this informal text that is hidden from the inference engine contributes greatly to human understanding of the ontology, which is on the critical path to putting the ontology to use.
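
As an illustration (a sketch using the rdflib library, with a made-up example.com ontology and class): the subClassOf triple below is an axiom a reasoner will act on, while the rdfs:comment holding the text definition and a counterexample is invisible to inference yet does most of the communicating.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.com/ontology/")  # hypothetical ontology IRI
g = Graph()

g.add((EX.Invoice, RDF.type, OWL.Class))
# Axiom: contributes to inference (every Invoice is a Document).
g.add((EX.Invoice, RDFS.subClassOf, EX.Document))
# Annotation: zero effect on inference, large effect on human understanding.
g.add((EX.Invoice, RDFS.comment, Literal(
    "A request for payment, issued by a supplier to a customer, itemizing "
    "goods or services delivered. Counterexample: a receipt, which records "
    "a payment already made.")))

print(g.serialize(format="turtle"))
```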


Part 2: Don’t let terms get in the way!

It frequently happens that a group of experts use a term so differently that they just cannot agree on a single meaning or definition. This problem arises in spades in the area of ‘risk’. For example, in traditional operational risk management (ORM), when you measure risk, you multiply the probability of a loss times the amount of the loss. In the modern view of ORM, risk is a measure of loss at a level of uncertainty; the modern definition of risk requires both exposure and uncertainty[1]. So you get two different numbers if you measure risk from these different perspectives. One can go round and round with a group of experts trying to agree on a definition of ‘risk’ and generate a lot of heat with little illumination. But when we shift our attention from the term and instead start looking for underlying concepts that everyone agrees on, we don’t have to look very far. When we found them, we expressed them in simple, non-technical terms to minimize ambiguity. Here they are:

  1. Something bad might happen
  2. There is a likelihood of the bad thing happening
  3. There are undesirable impacts whose nature and severity varies (e.g. financial, reputational)
  4. There is a need to take steps to reduce the likelihood of the bad thing happening, or to reduce the impact if it does happen.
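
To see how differently the two camps combine these ingredients, here is a deliberately simplified sketch with made-up numbers; the “modern” line is a caricature of the idea that risk requires uncertainty, not an actuarial formula.

```python
probability_of_loss = 1.0  # the bad thing is certain to happen
loss_amount = 1_000_000.0  # severity of the impact if it does

# The traditional ORM view calls this number "risk";
# the modern view calls the very same number "expected loss".
expected_loss = probability_of_loss * loss_amount  # 1,000,000

# In the modern view, risk requires uncertainty: a 100% probability
# means exposure but no uncertainty, hence zero risk.
has_uncertainty = 0.0 < probability_of_loss < 1.0
modern_risk = expected_loss if has_uncertainty else 0.0  # 0

print(expected_loss, modern_risk)  # same facts, two different "risk" numbers
```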

After many discussions and no agreement on a definition of the term ‘risk’, we wrote down these four things and asked the experts: “When you are talking about risk, are you always talking about some combination of these four things?” The “yes” was unanimous. The experts differ on how to combine them and what to call them. For example, the modern view and the traditional view of risk each combine these underlying concepts in different ways to define what they mean by ‘risk’. In the modern view, if the probability of loss is 100%, there is no risk, because there is no uncertainty. The concept that is called ‘risk’ in the traditional view is called ‘expected loss’ in the modern view, but it is the same underlying concept. Compared to wading through the muck and the mire of trying to agree on terms, focusing on the underlying concepts using simple, non-jargon terms is like a hot knife going through cold butter.

Terms get in the way of a happy marriage too! How many times have you disagreed with your partner on the meaning of a word? It’s more than just semantics; it’s often emotional too. I believe we are all divided by a common language, in that no two people use words to mean exactly the same thing, even everyday words like “support” or “meeting”. I have learned that it is easier to learn and use the language of my spouse than it is to convince her that the term I use is the right one (despite the seductive appeal of the latter).


[1] “A New Approach for Managing Operational Risk: Addressing the Issues Underlying the 2008 Global Financial Crisis,” sponsored by the Joint Risk Management Section, Society of Actuaries, Canadian Institute of Actuaries, and Casualty Actuarial Society.

For further reading, refer to Michael Uschold’s additional posts in this series.
