The Scott Adams Meltdown: Anatomy of a Disaster
AskTog, February 2006
Scott Adams, father of the Dilbert comic strip, recently had the misfortune to spend a number of hours editing comments for his blog only to have them disappear without a trace, much to his surprise, shortly after he published them. Veky wants to know who was at fault. His question implies that it just might have been Scott himself.
Scott Adams moderated 500 comments to his blog and then deleted them permanently despite prominent warnings about permanent deletion. Whose fault was it?
~Veky
A chain of five errors led to Scott Adams losing his work. Not one of those errors was his. They had been made months and even years before Scott Adams ever started work on his blog. His was an accident waiting to happen, an accident that has almost certainly befallen a large number of other individuals who have had the misfortune to use the same software.
When I was becoming a pilot, my instructor stressed that aircraft accidents typically are the result of a series of unfortunate errors and events, each of which in isolation would have been survivable, but, piling as they did one atop the other, resulted in tragedy.
For example, en route to Tahoe, my instructor and I landed at a small Sierra Nevada fly-in campground. Another aircraft had landed earlier, and the pilot discovered a mechanical problem that prevented him from taking off. He asked if we would give him a ride to Tahoe, so he could call for help. (We were way out of cell phone territory.)
My instinct was to immediately say, “yes.” Our aircraft was fitted out for four passengers, after all. My instructor, however, gave him a clear, “no,” telling him that he would arrange for someone to come to his aid, but that the guy wasn't getting aboard our airplane. Sure, it was a hot day, which reduces lift, and we were at a somewhat high altitude (though little higher than Denver, Colorado) but still, it seemed embarrassingly rude.
My instructor's response continued to seem rude right up until I was on final for South Lake Tahoe airport, at 6750 feet, and hit a downdraft. We began to drop like a rock. I made the runway, but with no altitude to spare. Had we had a passenger on board, we and our aircraft would have been strewn across a broken granite landscape. Almost certainly, none of us would have survived.
In my case, three events would have led up to that crash:
1. A hot day and a high-altitude airport, both of which cut into the aircraft's performance.
2. The addition of another passenger to an already fully-loaded plane.
3. A severe downdraft on final approach.
Numbers one and three were unavoidable. Number two was avoidable, and, fortunately, my instructor had the experience and judgment to break the chain by refusing to add a third passenger to what was already a fully-loaded plane for the prevailing conditions. (Small plane manufacturers are permitted to fit planes with more seats than can be used safely under more extreme conditions, such as high altitude and high heat.) Today, I would break the chain, too.
Quite often, pilot error is a factor in plane crashes. Pilots can be trained and retrained, so spotting pilot error is an important component of aviation accident investigation. We, however, gain no benefit from seeing our pilots, our users, as at fault. Our job is to build systems that either eliminate or survive pilot error. Scott Adams is an experienced "pilot," with thousands of hours of technology use behind him. For him to lose this amount of information, something must be seriously flawed in the design, and it's the design we must examine.
Let's look at what went wrong in the case of Scott Adams's blog. What were the cascading errors?
Scott Adams believed that there were two documents holding his comments. The first was what he termed the “temporary holding" database, useful only until he “published” the 500 comments to what he assumed was a second, publicly-accessible database.
The designers, on the other hand, thought there was only one database, on account of there really was only one database.
Per Don Norman's conceptual model, the designer develops a “Design Model” of the software and communicates it through the “System Image”: the look, feel, and behavior of the interface. The user, in turn, attempts to recreate the Design Model through experiencing the software. This understanding then forms the user's assumptions, the filter through which they will interpret all further instructions.
[Figure: Norman's Conceptual Model, illustrated by Laurie Vertelney]
The communication failed in this case, as evidenced by Scott's dangerously wrong User Model, but why didn't he realize his mistake?
Users can fail in two different ways in their attempts to accurately recreate the Design Model:
1. They may fail to form any coherent model at all, leaving them confused and aware that they do not understand the system.
2. They may form a model that seems complete and consistent, but is wrong.
In the first case, the user has a lot of questions and is generally on notice that things could go awry. In the second case, the users believe they have successfully "solved the puzzle." They have no questions since they don't realize that the reconstructed model is wrong. This second case is the dangerous one: The Designer and Users are working from two different sets of assumptions, neither knows it, and neither takes any action to correct it. The users, for minutes, days, or even months, systematically misinterpret what the designer “says.”
Consider the following instructions for entering a swimming pool:
Now, consider what would happen if you thought these instructions were, instead, for descending into the crater of a volcano.
If users only worked from the information contained in the System Image, they would likely create reasonably accurate User Models. They don't. Rather, they intermingle their past experiences with information extracted from the System Image when building the User Model. When their past experiences fail to jibe with the system image, or when the system image is ambiguous, users are highly likely to misinterpret the designer's intention.
Start with proper field studies to ensure that you understand your users' previous experiences, so that your System Image either tracks those experiences or clearly communicates where it parts with them.
Designers, knowing users depend on previous experience, attempt to invoke specific experiences by using metaphors.
Metaphors, accurately applied and properly communicated, can greatly accelerate the process of building an accurate User Model. Objects like the trashcan, for example, enable users to instantly grasp a deep understanding of a computer feature they may have never previously encountered.
Metaphors must be true to the original. You can, with some success, improve upon the original, but you must never create something that is either “less than” or that simply ignores the most fundamental features of the original. If you do so, you will confuse the users every time, always to their detriment.
An example of “less than” can be seen in the spread of the trashcan icon into individual applications. Usually, throwing something in an application's trashcan instantly and permanently destroys it. That's clearly “less than” either a real trashcan or a desktop trashcan, where you can pull stuff out within a reasonable length of time.
Ignoring/changing the fundamental features of the original is just wrong. That's how Scott Adams was victimized:
“Publish” has had a very specific meaning in the public's mind since at least the 1800s. It consists of the mass replication and distribution of a document, with the original document, the draft, having little or no further usefulness after publication has been accomplished. Scott Adams is keenly aware of this real-world publish model, being a published author, used to supplying drafts and computer images that are then reproduced in the millions.
At some point in the past, some people associated with databases, almost certainly a group of systems programmers, decided to drastically redefine “publish” from the traditional “mass replicate” to “set a little flag.” That one naive move set the stage for tens of thousands of disasters.
(BOCTAOE. The blog people could make the argument that, if we go back to the 1400s and before, publish had a much looser definition, one that could even encompass “setting a little flag.” I invite them to use that more generous interpretation as long as all their users were born before 1400. Otherwise, they should stick with the contemporary, more narrow, understanding.)
Choose only metaphors you can implement in a manner sufficiently analogous to the real world that users do not misinterpret the System Image. If you are forced to use a defective metaphor, like the database “publication” metaphor, explicitly and repeatedly tell new users the differences between the definition with which they are likely familiar and the redefined meaning within your application or weblication.
A confirmation (warning) dialog must use wording that ensures a prudent person will not be able to misinterpret the meaning. In this case, the dialogs kept warning Scott Adams about destroying what he considered now-useless information. Of course, he Okayed them. Who wants to keep useless information? Had the dialogs said, “Remove from publication and Destroy the only copy of this information in existence?” he probably would have reconsidered his decision. (The confirmation should have also offered help that further explained the difference between the real-world understanding of publication and the blogging application's use of the term.)
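To make that concrete, here is a rough sketch of what such a confirmation might look like in code. Everything in it is illustrative; the function name, the message text, and the use of a plain browser confirm dialog are my assumptions, not the blog software's actual interface.

```typescript
// Illustrative sketch only: the names and the dialog mechanism are assumptions,
// not the blog software's real API. The point is the wording, which names the
// true consequence instead of implying a disposable "temporary" copy.
function confirmDestructiveRemoval(commentCount: number): boolean {
  const message =
    `Remove these ${commentCount} comments from publication and ` +
    `permanently destroy the only copy of them in existence?\n\n` +
    `There is no separate "published" database; this is the only one.`;
  // window.confirm stands in for whatever dialog facility the software uses.
  return window.confirm(message);
}
```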
The blog programmers didn't invent this error. Microsoft, for years and years, has been sabotaging new users with their “Save Changes?” dialog that greets the user when they attempt to close a document window. I had a friend of generally high intelligence, but new to computers, who spent eight hours creating a long and complex document, one he assumed was being saved continuously, just as his work had always been when he used a typewriter or pen and paper.
Right before he was ready to quit, he decided to make a few changes to what he had written. Then he clicked on the close box, only to be asked, “...Save changes?” He considered what he had typed in the last few minutes and decided that it was better before his changes. He clicked, “No.” Instantly, his last eight hours of work were irrevocably deleted.
The word, “changes,” invokes the revising/remodeling experiences of new users. People reserve that word exclusively for changing something that already exists. The laziness of the Microsoft programmers in not having alternate wording for a previously-unsaved document cost my friend and probably tens of thousands of other new users a lot of hard work.
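A sketch of the fix, with all names assumed for illustration: choose the close-time wording based on whether the document has ever been saved, so a brand-new document never asks about discarding “changes” to something that does not yet exist on disk.

```typescript
// Minimal sketch, assumed names throughout: the prompt distinguishes a
// never-saved document from one that already exists on disk.
interface Doc {
  title: string;
  hasBeenSavedBefore: boolean;
}

function closePrompt(doc: Doc): string {
  if (!doc.hasBeenSavedBefore) {
    return `"${doc.title}" has never been saved. Save it now, ` +
           `or throw away everything you have typed?`;
  }
  return `Save the changes you made to "${doc.title}" before closing?`;
}
```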
Any time you inject ambiguity (noise) into the System Image, you are causing your users distress or worse. In the case of Microsoft, they have probably caused users a loss of hundreds of thousands of hours of productivity over the years, all to save an hour or less of programming, a pretty poor trade-off.
Work with a writer to develop all your dialog wordings. Then ask people other than designers and engineers to feed back to you what the dialog means. When you discover an ambiguity, rework and retest the dialog until ambiguity ceases to exist.
Often, developers wanting to avoid undo will throw in a confirmation dialog instead. Confirmation dialogs are only effective in the odd case; confirmations that pop up every single time an operation is completed are quickly ignored, with the user learning, for example, to click, then press Return, instead of just clicking. The only effect of such dialogs is to make the developers feel good: “The users may be screwing up, but we warned them, so it is their own fault.”
No, it isn't.
Any time your user loses any work, consider it your fault, and figure out how to prevent it from happening to anyone else.
For some bizarre reason, we seem to have settled on always, always, always giving users undo for such critical operations as deleting a single character. When users delete an entire document, however, we offer no possible recovery. That, in the real world, would be evidence of insanity. In the earliest days of the computer world, it was sometimes unavoidable. Now, it is inexcusable, particularly since large-scale undo is often so easy to implement with the use of a little magic.
The key to magic is that there are two performances: The one the magician is actually doing, and the one you think he is doing. If they coincide, the magic doesn't work. One of the keys to this separation is time: The actual manipulation (removal of the ball from under the cup, moving the coin from the top of the table to underneath it, etc.) either occurs before you think it did (anticipation) or after you think it did (premature consumption).
This separation of illusion from reality can work to our advantage as well. In the case of a deletion, there are really three steps:
1. The user asks for the deletion and confirms it.
2. The document disappears; its window blinks out.
3. The system actually destroys the information.
What if we left out step three, at least for a while? The user can now experience that familiar sinking feeling that only occurs after we tell them the document is gone, go to the Edit menu, discover to their delight an active Undo and, magically, get all their information back. It requires no reconstruction of anything. We simply reopen the same window we blinked out.
In many cases, you won't be able to give the user the ability to undo for very long, sometimes not even past the user's next action. Even this allows the user to recover from accidental keystrokes or mouse clicks, all without putting undue burden on the programmer.
In the case of removing 500 records permanently from a database, however, leave them around for a little while longer, long enough for the user to become aware of his or her error. In this case, flag each “deleted” record, causing it to no longer be displayed, but still available for undo. If it is the entire database, flag the whole document, keeping it around at least until the end of the session.
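Here is a minimal sketch of that “flag, don't destroy” approach. The names and the in-memory store are assumptions for illustration; a real blog engine would do the same thing against its own storage layer.

```typescript
// Minimal sketch of "flag, don't destroy." All names are illustrative.
interface CommentRecord {
  id: number;
  body: string;
  deleted: boolean; // hidden from every listing, but still recoverable
}

class CommentStore {
  private records = new Map<number, CommentRecord>();
  private lastDeletion: number[] = []; // ids removed by the most recent action

  add(record: CommentRecord): void {
    this.records.set(record.id, record);
  }

  visibleComments(): CommentRecord[] {
    return [...this.records.values()].filter(r => !r.deleted);
  }

  // "Delete" only sets a flag, so Undo can bring everything back instantly.
  deleteComments(ids: number[]): void {
    this.lastDeletion = ids;
    for (const id of ids) {
      const rec = this.records.get(id);
      if (rec) rec.deleted = true;
    }
  }

  undoLastDeletion(): void {
    for (const id of this.lastDeletion) {
      const rec = this.records.get(id);
      if (rec) rec.deleted = false;
    }
    this.lastDeletion = [];
  }

  // Actual destruction is deferred, e.g. to the end of the session.
  purgeDeleted(): void {
    for (const [id, rec] of this.records) {
      if (rec.deleted) this.records.delete(id);
    }
  }
}
```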
Do consider privacy and security tradeoffs when offering such functionality. Any time the user believes information is gone, but it isn't, mischief can ensue. Most users, however, are now aware that even information they have explicitly and irrevocably removed can be reconstructed by any 17-year-old kid with the proper track and sector disk utility.
Ensure that you first offer universal Undo. Only then, display confirmation dialogs for unusual activities.
The first four errors, in this case, are pretty fundamental, and no human-computer interaction designer should have made any of them. Still, errors, particularly in failing to recognize ambiguity, are going to creep in. Our safety net is user testing, something obviously missing in this case. I doubt the blogging developers would have had to study more than a handful of users before uncovering Scott's “temporary holding database” User Model. One can only hope that, with that realization, the team would have made significant changes to ensure the flawed interpretation was no longer possible.
In my thirty years of programming/designing, I have never seen a serious design flaw like this reach the light of day where even a minimal usability program was in place.
Do usability testing, even if it is informal and cursory. Even testing a single user representative of your intended audience can be a real eye-opener.
This was a preventable accident. If any one of these design errors had not been committed, Scott Adams would likely have 500 more comments and fewer gray hairs.
The Scott Adams Meltdown might be a great example of a failed design process, but it might also be a significant failure on the part of the user. Since we’re only privy to the he-said/she-said details, we can’t be sure if Scott had completed the tutorial, read the manual, or at least practiced a couple of posts before totally imploding.
Consider someone buying a manual transmission car for the first time and driving it 400 km in first gear. Everything appears to be working fine but things couldn’t be further from the truth. We don’t even need to be as extreme to find an example of users abusing the designers. On a can of stain there is almost always a warning reading “test in a small, inconspicuous space for actual colour results.”
I also think that years of steadily improving UI are gradually taking their toll on society. I might even go as far as to say that 10 years ago we wouldn’t have blamed a company for our incorrect use of their product. Well, except for that hot coffee we spilled in our laps. Man, that coffee was too hot!
Best regards,
Jacob K
Let's examine your two examples, however. Vis-à-vis the transmission example: first, the automobile industry spent 35 years developing the automatic transmission precisely because it realized that people were having problems with manual transmission operation. It was one of the two most important innovations in automotive history, the other, even more important, being the electric starter. Second, the person driving 400 km in first gear receives 400 km worth of high-quality feedback. Few people of legal driving age would be likely to drive more than half a block with a car stuck in first.
The can-o'-stain example is different: The designers cannot predict the outcome of use of the product on differing surfaces. One half of the marriage of stain to surface is completely beyond their control. No amount of testing will help because the potential surfaces number in the millions, and nothing short of applying the product will reveal its effect.
Everything about the blog task is well within the control of the blog designers. They know everything about the "stain" and everything about the "surface." If they failed to test, as is apparent in this case, that is their fault, not the users'. Users cannot and should not be expected to "practice a couple of posts" or do an equivalent stain test when encountering a new piece of software. Nor do I think that would have worked in this case. I suspect that whether he published one comment or all 500 that had flowed in from readers since he'd last checked, deleting the one-and-only database would have taken them all out.
Finally, I agree with your assessment of the direction American law has taken. When I was young, the standard for liability was based on the "prudent man." Would a prudent man have slipped and fallen? Would a prudent man have poured hot coffee in his lap through inattention?
Now, the standard is the "least abled." People collect millions for slipping and falling even though they had to climb over a fence and were in a drunken stupor at the time. Again, however, in this case, Scott remains blameless even by the old standard. Scott was and is a prudent man. He did read the screen instructions carefully. Then, he equally carefully destroyed what he believed to be a temporary database. This was not only an egregious example of bad design and flawed methodology, it would have been easily fixed, as suggested by the next letter.
To convey that "publish" means "set a little flag", the blog programmers could instead call it "make [comments] visible", which suggests that you are changing an aspect of this comment, not making a copy.
~nils
As for methodology, I would have first done some basic user testing. That would have revealed the problem. Subsequently, I would have made these sorts of wording changes, then tested again. However, this time, I would have explained to people that the program worked exactly as Scott assumed it worked, then seen if they turned around to me and said, "this seems to be saying it works a different way." If that happened consistently, then I'd know the program was now transmitting the design model clearly.
Finally, I must mention that there is always a second option when design model and user model clash: You can change the design model. In this case, that would mean maintaining, in fact, two different databases. That, in my opinion, would not be a good option in this case, but, in other cases, a change in the design model to conform to what users expect can be very effective. For example, if users expect a word processor to at least have the functionality of paper and pencil, one could add Continuous Save.
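As a rough illustration of Continuous Save (the saving function and its signature are assumptions, not any particular word processor's API): every edit simply schedules a save a moment later, so the user's paper-and-pencil expectation, that what they wrote stays written, holds.

```typescript
// Illustrative sketch of Continuous Save: every edit schedules a save shortly
// afterwards, so nothing the user types is ever more than a moment from disk.
// saveToDisk and its signature are assumptions, not a real API.
function makeContinuousSaver(
  saveToDisk: (text: string) => Promise<void>,
  delayMs = 1000,
) {
  let pending: ReturnType<typeof setTimeout> | undefined;
  return (currentText: string) => {
    if (pending !== undefined) clearTimeout(pending);
    pending = setTimeout(() => void saveToDisk(currentText), delayMs);
  };
}

// Hypothetical usage: call on every change event in the editor.
// const onEdit = makeContinuousSaver(text => saveDraft("draft.txt", text));
// onEdit(editorContents);
```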
Have a comment about this article? Send a message to Tog.