How do we perceive different software response times?
(I'm indebted to another author for this insight, but unfortunately I cannot credit him or her because I've lost the reference and can't find it either in the pile of papers I call an office or on the Internet. So, if anyone is aware whose insight this was, please let me know so I can acknowledge them.)
- In most situations, a faster response from a system (whether it is a computer system or a human system) is more desirable than a slower one.
- People develop strategies for dealing with their experience of and expectation of response times from systems.
- Attempts to improve response time will not even be perceived (and therefore will be effort wasted) unless the improvement crosses a threshold to where the user changes his or her strategy.
These three observations combine to create a situation where the reaction to response time improvement is not linear: a 30% improvement in response time may produce no effect, while a 40% improvement may have a dramatic effect. It is this "quantum-like" effect that gave rise to the title.
First Cut Empirical Model – No Overlaps
Our first cut of the model lumps each response into a non-overlapping range. As we'll observe later, it is likely not that simple; however, it is surprising how far you can get with this approach.
| Quanta name | Response time | Example | User perception | User response/strategy |
| --- | --- | --- | --- | --- |
| Simultaneous | Less than 1/10 second | Mouse cursor tracking on a fast system, selection highlight, turning on an incandescent light bulb | Users believe the two things are one and the same, that there is no indirection: moving the mouse *is* moving the cursor, the click directly selects the item, the switch turns on the light. | Transparency. Users are not aware there is an intermediary between their action and the result. |
| Instant | 1/10 – 1/2 second | Scrolling, dropping a physical object | Barely perceptible gap between the stimulus and the response, but just enough to realize the stimulus causes the effect. | Users are aware but in control: their every action is swiftly answered with a predictable response. No strategy required. |
| Snappy/Quick | 1/2 – 2 seconds | Opening a new window, pulling down a drop-down list, turning on a fluorescent light | Must pay attention: "did I click that button?" (Have you ever spun the knob on a bedside lamp in a hotel, thinking it wasn't working, when you were just too fast for the fluorescent?) | Brief pause, to avoid initiating the response twice. Requires conscious attention to what you are doing, which distracts from the direct experience. |
| Pause | 2 – 10 seconds | A good web site on a good connection; the time for someone to orally respond to a question | "I have a few seconds to focus my attention elsewhere." Users can plan what to do next, start another task, etc. Frustration if it's not obvious the activity is in progress (hourglass needed). | Think of or do something else. Many people now click on a web link and then task-switch to another program, look at their watch, or something else. This was the time when data entry people would turn the page to get to the next document. |
| Mini task | 10 – 90 seconds | Launching a program, shutting down, asking someone to pass something at the dinner table | The task goes into the background until it is complete. Time to start one other task (but not multiple other tasks). Time for a progress bar. | You're obligated to do something else to avoid boredom: pick up the phone, check your to-do list, engage in conversation, etc. |
| Task | 90 seconds – 10 minutes | A long compile, turning on your computer, rewinding a video tape | Not only do users start another task of comparable length, they also expect some notification that the first task is complete (a dialog box, the click the video makes). | Users start another task, very often changing context (leaving the office, getting on the phone, etc.); the second task may be interrupted when the first task finishes. |
| Job | 10 – 60 minutes | A very long compile, doing a load of laundry | The job is long enough that it is not worth hanging around until it is complete. | Plan ahead; do not casually start a process that will take this long until you have other filler tasks planned (lunch, a meeting, something to read, etc.). Come back when you're pretty sure it will be done. |
| Batch process | 1 – 12 hours | Old-fashioned MRP or a large report run, an airplane flight | Users deal with the schedule more than monitoring the actual event in progress. | Schedule these. |
| Wait | 1/2 – 3 days | Response to email, a reference-check call back, dry cleaning | "I potentially have too many of these at once. I'll lose track of them if I don't write them down." | To-do lists. |
| Project | 3 days – 4 months | A software project, a marketing campaign, gardening | This is too long to wait to find out what is happening. | Active status checks at periodic intervals. |
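The thresholds above can be sketched as a simple lookup. This is only an illustration, not part of the original model: the function name, the choice of seconds as the unit, and the treatment of each upper bound as exclusive are my own assumptions; the quanta names and boundaries come straight from the table.

```python
# Classify a response time (in seconds) into the quanta from the table above.
# Boundaries are taken directly from the table; "4 months" is approximated
# as 120 days. Upper bounds are treated as exclusive (an assumption).

QUANTA = [
    (0.1,              "Simultaneous"),
    (0.5,              "Instant"),
    (2,                "Snappy/Quick"),
    (10,               "Pause"),
    (90,               "Mini task"),
    (10 * 60,          "Task"),
    (60 * 60,          "Job"),
    (12 * 60 * 60,     "Batch process"),
    (3 * 24 * 3600,    "Wait"),
    (120 * 24 * 3600,  "Project"),
]

def quantum(seconds):
    """Return the quanta name for a response time in seconds."""
    for upper, name in QUANTA:
        if seconds < upper:
            return name
    return "Beyond project"
```

Note that a 4-minute compile and a 2-minute compile both land in "Task" — the quantum effect in miniature: the improvement is invisible until a boundary is crossed.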
My contention is that once users recognize a situation and categorize it into one of these quanta, they will adopt the corresponding strategy. For many of the strategies they won't notice if the response time has improved, until and unless it improves enough to cause them to change strategies. Getting a C++ compile time down from 4 minutes to 2 minutes likely won't change anyone's work habits, but getting to a Pause or Snappy/Quick turnaround, as in a Java IDE, will. In many cases the strategy obviates any awareness of the improvement. If I drop my car at the car wash before lunch and pick it up afterward, I'll have no idea whether they improved their throughput so that what used to take 40 minutes now takes only 15. However, a drive-through that takes only 10 minutes might cause me to change how I do car washes.
While I think the quantum effect is quite valid, I don't believe the categories are quite as precise as I have suggested, and I think they may vary as someone moves up and down the hierarchy. For instance, a 2.5-second response time may in some contexts be considered snappy.
I think this has implications for systems design as well as business design; the customer-facing part of a business presents a response time to the customer. The first implication is that any project (software, hardware, or network improvement, or business process reengineering) should have a response time goal, with a rationale behind it, just as valid as any other requirement of the project. Where an improvement is desired, it should be required to cross at least one quantum threshold, and the benefit ascribed to doing so should be documented. IBM made hay in the 70's with studies showing that dramatic productivity gains from sub-second response time on their systems more than made up for the increased cost of hardware. What was interesting was that the mathematical savings from the time shaved off each transaction weren't enough to justify the change; rather, users worked with their systems differently (i.e., they were more engaged) when the response time went down.

Some further implications:

- Call center response time: if you expect the call will be a "job" (more than 10 minutes), you will plan it much more carefully.
- Online ordering: when products arrive first thing the next morning and people expect that, they deal with ordering by setting up reminders that something will arrive.
- Installation programs: unless the install is a "mini task" and can be done in-line (like getting a plug-in), you need to make sure all the questions can be answered up front so the install can then run in the background. Many writers of installation programs wrongly believe that asking the user questions throughout the installation process will make it feel snappy. Hello -- nobody thinks that; they expect a "task" and would like to turn their attention elsewhere. However, if they do something else, come back, and find the install stopped because it was waiting for more info, they get pissed (it was supposed to be done when they got back to it).