Performance can be divided into two considerations: machine performance and user performance.
Machine performance is the traditional province of engineering, and engineers routinely apply many approaches and techniques toward improving it. Increased machine performance typically speeds processes, increases machine efficiency, and reduces cost-per-transaction.
Increases in machine performance typically produce roughly equal increases in human performance, but exceptions do exist. First, the entire "machine" must improve, not just one layer. We have had the strange situation over the last 20 years of computing power going up more than 1000-fold, while the eventual user experience has, in some cases, actually slowed down due to bloated operating systems and applications. (In 1978, it took me three and a half minutes to load the OS and an application off cassette tape into my Apple II. My Mac often takes upwards of five minutes to reach the same point.)
Second, a tension can exist between the need for user performance and the natural inclination of engineers to maximize machine performance. For example, human performance is best served by having data available at the user's fingertips, without having to wait for lengthy downloads or fetches. The traditional method for achieving this is to prefetch data. When the user may take any of several paths, one may prefetch a lot of data that is never actually looked at, a clear collision between machine and user performance.
Fortunately, many techniques exist to increase human performance that do not require throwing hardware at the problem.
Business owners/managers want their employees to do the most amount of work in the least amount of time for the least amount of money. Consumers want to interact with the computer in a way that is both efficient and pleasant. Two approaches, reducing the actual time a task takes and reducing the time it seems to take, result in leveraged increases in human performance well beyond those achieved through machine-performance improvements alone.
While few would argue with the importance of these approaches, applying them is not as simple as it may sound. It requires both careful analysis and a willingness to pay the price in development time and even machine performance to pull them off. The balance of this section will discuss techniques for applying these approaches.
Users perform three tasks when using traditional machines: they make judgments, they supply the machine with data, and they manipulate the machine.
For example, when using an automobile, users decide where they want to go. Then they supply themselves with the data necessary to formulate a route to get there, drawn from sources such as road maps and radio traffic reports. Finally, they manipulate the steering wheel, accelerator, and brake pedals, giving moment-to-moment instructions to the machine in order to carry out the task. This is not a strict sequence. Exigencies may cause them to decide on a change in route, requiring new information, etc.
More sophisticated than the automobile is the modern mechanical home sewing machine. The user might decide to embroider a pattern of some sort along one edge of a garment. Rather than having to manipulate the machine directly, as would be the case in the automobile, the user can select a plastic cam wheel to do most of the manipulating for them. The selection of the particular stitching pattern represents the user's judgment of what would make the garment look most attractive and appropriate. The plastic cam wheel itself carries the data necessary to carry out that judgment. The user's machine manipulation is reduced to guiding the fabric in a straight line while the needle is moved about by the machine to achieve the desired result. In the latest sewing machines, a computer takes over the job of the mechanical cam mechanism. The user just sets a dial to the desired pattern (the judgment) without having to enter the data, which is already contained within.
User-performance is maximized by attacking each of the three steps, reducing the need for decision-making, enabling the machine to gather its own data, and cutting back on the amount of machine-manipulation necessary to achieve the goal. Let's take them in reverse order.
Consider the 35mm camera. The only judgment the average user cares to make is who and what will appear in the viewfinder and, ultimately, the picture. This explains the overwhelming popularity of point-and-shoot cameras that handle the wealth of machine manipulations necessary to achieve a properly lighted and focused result.
Such cameras prevent the kind of low-level decision-making that a professional might make. In return, the odds of a non-professional getting a usable picture rise enormously. Today, even most professional gear offers the possibility of point-and-shoot, in the realization that by the time the photographer has made the many low-level judgments possible and translated them into the mechanics of the camera, the event to be captured may well be over.
Software often displays the same mechanical complexity as real machines, demanding that the user serve the machine instead of the other way around. Anyone who has ever updated system software is aware of how complex and demanding such a task can be, even though few, if any, actual judgments need be made. (Usually, when users must make judgments, the OS suppliers fail to offer the information necessary to do so, resulting in a lot of haphazard guessing.)
Some existing tasks may be an intricate blending of machine-manipulation and decision-making. For example, 35mm professional cameras have two rings surrounding the lens. One ring sets the aperture: the wider the aperture, the more light floods into the camera. The other ring sets the exposure time: the longer the time, the more light floods into the camera. The mechanics are that if you want a properly-exposed picture, you must set an aperture and speed combination that lets just the right amount of light into the camera. However, there are many such "correct" combinations, from a narrow aperture and slow speed to a wide aperture and fast speed.
Judgment comes into play in deciding whether you want a little light over a long time or a lot of light over a short time. Why choose one or the other? When a lens is "closed down" (set to minimum light), you get a lot of depth of field, meaning that everything from the foreground to the background will be in sharp focus. On the other hand, if any kind of movement is going on, you are liable to end up with streaks and smudges, as your subject moves during the exposure time. Therefore, to capture a pack of race horses at full gallop, the professional photographer forgoes depth of field and sets his or her camera at fast exposure with the lens wide open. As long as the third ring on the lens (the focus ring) has been set correctly, the horse will be in focus, even if the background is not.
Professional photographers, having learned what balance of depth of field to speed will produce a desired result, mentally compute the roughly correct numbers based on the desired exposure indicated by their light meter. They then set either the aperture or the speed ring, then crank the other ring around until the light meter indicates proper exposure. If the resulting ratio is not as expected, they refine the process until they get it just right. By then, the next horse race may be in full swing.
This complex mechanical interface could be replaced with a single ring that simply went from high depth of field to high speed, always maintaining proper exposure. Such a system would continue to support the decision-making capability of the user, while eliminating the machine-manipulation portion of the operation.
(Professionals also sometimes purposely throw off the exposure either for effect or because they intend to push the film or they are using a technique known as bracketing. A second ring could give them that power back. While we would again have two rings, they would track the thinking process of the photographer, rather than the mechanics of a lens system, still a major win.)
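To make the arithmetic behind that single ring concrete, here is a minimal sketch in TypeScript. The f-stop limits are assumptions chosen for illustration; the exposure relation EV = log2(N²/t), where N is the f-number and t the shutter time in seconds, is the standard one.

```typescript
// A sketch of a single-ring exposure control. The f-stop limits below are
// assumptions; a real lens would use its own limits and click-stops.

const MIN_F = 2;   // wide open: maximum speed, minimum depth of field
const MAX_F = 22;  // closed down: maximum depth of field, slowest speed

/**
 * meteredEv: exposure value the light meter calls for.
 * ring: 0 = maximum depth of field ... 1 = maximum speed.
 * stops: deliberate compensation; positive overexposes (e.g., to push film).
 */
function exposeSingleRing(meteredEv: number, ring: number, stops = 0) {
  // Interpolate the f-number logarithmically between the lens's limits.
  const logF = Math.log2(MAX_F) + ring * (Math.log2(MIN_F) - Math.log2(MAX_F));
  const fNumber = 2 ** logF;

  // Solve EV = log2(N^2 / t) for t, so exposure stays correct at any setting.
  const ev = meteredEv - stops;
  const shutterSeconds = fNumber ** 2 / 2 ** ev;

  return { fNumber, shutterSeconds };
}

// Racehorses at full gallop: ring all the way to "speed", no compensation.
console.log(exposeSingleRing(12, 1)); // ≈ f/2 at ≈ 1/1000 second
```

One ring drives both settings at once, so the photographer chooses the trade-off that matters, depth of field versus speed, and the exposure takes care of itself.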
What can we learn from this? When analyzing a design, constantly separate out those components of an operation that are machine-manipulation vs. the more abstract task of telling the machine something it otherwise couldn't know, either in the form of external data or user-judgments. Then eliminate or automate the machine-manipulation, leaving the user to supply only the judgments and data the machine cannot gather for itself.
Three approaches can increase performance in data entry by minimizing the amount of information the user must enter by hand: drawing on previously-entered information, gathering only the information actually needed, and obtaining information by means other than the user.
Approach one is most effective when the previously-entered information can be depended upon to be up to date and accurate enough for the task. Otherwise, the user can quickly expend any time savings in comparing the old with the new, looking for disparities.
Approach one depends on necessary information being available when needed. At first glance, when dealing with the natural latency of the web, this seems quite impractical. How can the contents of thousands of records be instantly available at a remote client site? Fortunately, in many cases, it doesn't have to be. Rather, what the user needs to know most immediately is whether that information is available at all. If it is, the user can proceed to enter the specifics of the current instance, rather than the background information. By the time the user is finished with that, the system will likely have had time to transmit the background information, should the user need to review it.
To make such a system work, careful attention must be paid to gathering the identifier information as soon as possible. In a typical case, the user may enter a patient's social security number first thing. Then, that number can be checked against a local list of numbers. If there is no match, it is a new patient and a new record. If there is a match, the user can skip ahead.
Where even this limited local storage is vetoed, strongly consider having the user enter the identifier information for the next patient when starting the current patient. This will give the system time to fetch the full record for the next patient and have it ready for instant viewing. Depending on the tasks and the designed workflow, entering subsequent identifiers could either be done just one record ahead or as a separate pre-pass through the entire batch.
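Here is a minimal sketch of the identifier-first pattern described above, assuming a locally synced list of patient IDs and a hypothetical /api/patients endpoint:

```typescript
// A sketch of identifier-first prefetching. The endpoint, record shape,
// and locally synced ID list are all hypothetical.

interface PatientRecord { id: string; [field: string]: unknown; }

const localPatientIds: Set<string> = new Set(); // synced list of known IDs
const prefetched = new Map<string, Promise<PatientRecord>>();

function onIdentifierEntered(patientId: string): "new" | "existing" {
  if (!localPatientIds.has(patientId)) {
    return "new"; // no match: start a fresh record immediately
  }
  // Existing patient: start fetching the full record now, in the background,
  // so it is ready by the time the user might want to review it.
  if (!prefetched.has(patientId)) {
    prefetched.set(
      patientId,
      fetch(`/api/patients/${patientId}`).then(r => r.json())
    );
  }
  return "existing"; // the user skips ahead to the current visit's specifics
}

// Later, if the user asks to review the background information:
async function reviewBackground(patientId: string): Promise<PatientRecord> {
  return prefetched.get(patientId)!; // usually resolved well before this call
}
```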
Approach two, minimizing data entry, can often be quite difficult to achieve, for a reason that is often unexpected. The first thing many clients will attempt to do, once they detect that a new system is going to save them lots of time and money, is to reduce the efficiency of the system just as much as possible, so they can give all that time and money back. It's not that they really want to maintain an inefficient system; it's just that they recognize that now, for the first time, they can gather all that extra, secondary information that was prohibitively expensive to gather before. Work with your clients and show them exactly how much gathering that information is going to cost them. They will make the final decision; your job is to ensure that it is an informed decision.
Approach three, obtaining information by other means, is worth considerable effort. Make sure that you are looking at the "big picture." For example, one means of getting information off paper forms and into the computer is to put the forms through an optical character recognition system. Such systems are costly and, depending on the cleanliness and redundancy of the incoming information, may require more hand-work than they save.
Take a step back: Where are these paper forms coming from? Another machine? Consider how to eliminate the paper step entirely. Even if it can't be accomplished overnight, can it be in one year, two years, five years? What can you do now to begin a process that will result in enormous labor savings over the long haul?
Decision-making should be reduced in a similar way:
1. Automate decisions that are really just applications of standard rules.
2. Ensure that the remaining decisions pertain to the task, not the machine.
3. Give the user all the information necessary to make an informed decision.
4. Remove extraneous material.
5. Make high-probability answers visible and easy to choose.
Much of what passes for decision-making is really decision-reporting: employees working from a standard set of rules apply those rules to the current record. Step one recognizes that much of this activity can be taken over by a sufficiently sophisticated machine, removing the human operator from the process completely.
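A minimal sketch of what such decision-reporting automation might look like; the record fields and the rules themselves are invented for illustration:

```typescript
// A sketch of replacing decision-reporting with rule application.

interface Claim { amount: number; preApproved: boolean; inGoodStanding: boolean; }

type Outcome = "approve" | "deny" | "refer";
type Rule = (c: Claim) => Outcome | null;

// The standard set of rules employees were applying by hand:
const rules: Rule[] = [
  c => (c.preApproved ? "approve" : null),
  c => (!c.inGoodStanding ? "deny" : null),
  c => (c.amount <= 500 ? "approve" : null),
];

// The machine now does the decision-reporting; only records no rule
// covers are referred to a human for genuine decision-making.
function decide(claim: Claim): Outcome {
  for (const rule of rules) {
    const outcome = rule(claim);
    if (outcome !== null) return outcome;
  }
  return "refer";
}
```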
In step two, ensure that the remaining decisions actually pertain to the task, not the machine. If the user must decide whether or not to grant a request, that has to do with the task. If the user must decide whether to use method A to grant the request or method B to grant the same request, that likely has to do with the machine.
Most designers do not advocate limiting users to only a single way of doing every task. Indeed, the entire freedom proposition of the graphical user interface is that the designer provides the environment and the user decides how to traverse that environment. Maze walls are eliminated in favor of open spaces with a variety of tracings showing the way successful travelers have gone before.
The aim of such design is to have people eventually settle on a way of work that is comfortable to them. That is very different from ending up in a situation where users are deciding each and every time they reach the same fork in the road as to which way they will turn. That indicates a defective design with a heavy cost in human performance.
Step three ensures that the user is given all the information necessary to make an informed decision. Often in software, you see users being asked a question that most cannot possibly answer without going somewhere else for information. This usually implies an application that was never user-tested; the designers, knowing enough to form a judgment, just assumed everyone else did, too.
Step four, removal of extraneous material, is extremely important. Many web pages today bristle with dozens of links. The web browsers themselves offer scores of buttons and menu options. What's a poor user to do? Usually, select the wrong option. If the purpose is to browse, with no real ambition except that on the part of the web host to bring in advertising revenue, such serendipitous wandering is just great. On the other hand, if the purpose is to churn out work, such bumbling about can be a disaster.
Habitual users will eventually learn what's signal and what's noise, what's a true path and what's a yawning chasm, no matter how noisy the interface. They will be slowed, but they will not be stopped. In single-use applications, however, such a situation can be fatal.
Step five: Users should be able to see and get to high-probability answers easily. Quite often, designers will present users with an ill-understood choice and two equally-likely-looking answers, even though one of those two answers may be wrong for all but a handful of users. Instead, present the choice so the odds become clear.
Instead of:

Left-footed hippopotamus?   Yes   No   Sometimes

Say:

Few users need a left-footed hippopotamus. Turn it on?   ( ) Yes   (•) No
Not only is the question clear, the expected answer is clear.
Another similar "gotcha" is to ask the user a question about an obscure option, one that requires the user to learn everything about the option just to discover they don't want or need it. If it is obscure, keep it that way. Hide it under an "advanced" or similar label, and offer users a "restore normal values" option if they insist on messing around with the advanced options, then find themselves in trouble.
(On the other hand, don't do what Apple does, which is to hide it so thoroughly no one can find it. The user shouldn't have to learn through word-of-mouth that the way to get to the advanced options is to hold down the control key while alternately ringing their doorbell and flicking the light on and off in their refrigerator.)
That a computer suffers a delay is no reason to visit that delay on the user. By spinning off an asynchronous background task, we can decouple the machine's "experience" from the user's experience, keeping the user working away without interruption.
Print operations over LANs have been asynchronous for more than 15 years. Users click Print and go about their business while the task takes place in the background. Printing was attacked early on because:
- Print jobs take a relatively long time.
- They require no user involvement once launched.
- Their duration is unpredictable.
If the printer was on a high-speed network with no one ahead of the user in the queue, things could go pretty fast. However, if someone had just launched a 300-page document, the user's machine could be frozen for an extended period of time.
We now have an analogous situation on the web. Web round-trips fulfill the same criteria: they take a relatively long time, particularly in a high-productivity environment; they require no user involvement in the process; and, on the public network, one cannot predict whether the delay will be five seconds or a minute.
Any operation that fulfills the above criteria and can be split off as a separate task, should be.
If a user must confirm patient eligibility to submit a form, that eligibility check should be initiated in the background while the user goes on about his or her work. It should not suddenly lock up the system for half a minute or more.
If a lengthy form must be transmitted after the user clicks Submit, it should be transmitted in the background while the user moves on to the next form.
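A minimal sketch of both patterns, assuming hypothetical eligibility and form endpoints; the essential property is that neither call blocks the user's progress to the next form:

```typescript
// A sketch of splitting both operations into background tasks.

interface EligibilityForm { patientId: string; fields: Record<string, string>; }

// Kick off the eligibility check as soon as the identifier is known,
// surfacing the result whenever it arrives instead of freezing the screen.
function startEligibilityCheck(patientId: string, notify: (msg: string) => void) {
  fetch(`/api/eligibility/${patientId}`)
    .then(r => r.json())
    .then(({ eligible }) => {
      if (!eligible) notify(`Patient ${patientId} is not eligible; review needed.`);
    })
    .catch(() => notify(`Eligibility check for ${patientId} failed; will retry.`));
}

// Transmit the lengthy form in the background; the user moves on immediately.
function submitForm(form: EligibilityForm, notify: (msg: string) => void) {
  fetch("/api/forms", { method: "POST", body: JSON.stringify(form) })
    .then(r => {
      if (!r.ok) notify(`Form for ${form.patientId} was rejected; see queue.`);
    })
    .catch(() => notify(`Form for ${form.patientId} did not transmit; queued for retry.`));
}
```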
Tasks that require asynchronicity must be enumerated in the Functional Spec to avoid performance surprises down the road. Asynchronicity is a serious engineering task, but it is far easier to accomplish when planned for, rather than waiting for the inevitable client complaints to roll in.
All the above techniques and approaches attack the measurable time it takes a user to perform a task. Customer complaints, however, arise as often from a "feeling" that a process is too slow as from actual analysis.
A classic example occurred in the 1930s in New York City, where "users" in a large new high-rise office building consistently complained about the wait times at the elevators. The engineers consulted concluded that there was no way to speed up the elevators or to increase their number or capacity. A designer was then called in, and he was able to solve the problem.
What the designer understood was that the real problem was not that wait time was too long, but that the wait time was perceived as too long. The designer solved the perception problem by placing floor-to-ceiling mirrors all around the elevator lobbies. People now engaged in looking at themselves and in surreptitiously looking at others, through the bounce off multiple mirrors. Their minds were fully occupied and time flew by.
In one study of this phenomenon (Tognazzini, Tog on Interface, 1992), users were asked to do the same task using the keyboard and the mouse. The keyboard version was powerfully engaging, in the manner of many videogames, requiring the user to make many small decisions. The mouse version of the task was far less engaging, requiring no decisions and only low-level cognitive engagement.
Each and every user performed the task significantly faster using the mouse: 50% faster, on average.
Interestingly, each and every user reported that they did the task much faster using the keyboard, exactly contrary to the objective evidence of the stopwatch.
The most obvious take-away message from this is that people's subjective beliefs as to what is or is not quick are highly suspect. No matter how heartfelt the belief, until a stopwatch shows it is true, do not accept personal opinion about speed and efficiency as fact. Instead, user-test.
The user's perception of time, however, even when dead wrong, is of extreme importance to the designer. Those elevators weren't any faster after the mirrors were installed, and it took just as long for folks to get to work. However, the building management office was blessed with a significant upturn in efficiency, because it was no longer fielding a raft of irate phone calls.
The one central strategy for reducing subjective time is to keep the user engaged. When inevitable pauses occur in the workflow, because you must make a server round-trip before the user can proceed, for example, make sure that the user is engaged and entertained. The ideal engagement is engagement with the task being performed. Before leaving for the server, give the user something to read that will set them up for the next task.
Time indicators: The formulae below all depend on use of a time indicator, and the possible time indicators range from most to least desirable.
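As an example at the desirable end of the range, here is a minimal sketch of a time-remaining estimate; it assumes roughly steady progress, and the smoothing factor is an invention to keep the displayed number from jumping around:

```typescript
// A sketch of a time-remaining indicator extrapolated from progress so far.

function timeRemainingMs(startedAtMs: number, fractionDone: number): number {
  if (fractionDone <= 0) return Number.POSITIVE_INFINITY; // nothing to go on yet
  const elapsed = Date.now() - startedAtMs;
  // If 40% took 20 seconds, the remaining 60% should take about 30 more:
  // remaining = elapsed * (1 - done) / done.
  return elapsed * (1 - fractionDone) / fractionDone;
}

let smoothed = 0;
function displayEstimate(startedAtMs: number, fractionDone: number): string {
  const raw = timeRemainingMs(startedAtMs, fractionDone);
  smoothed = smoothed === 0 ? raw : 0.7 * smoothed + 0.3 * raw; // dampen jitter
  return `About ${Math.ceil(smoothed / 1000)} seconds remaining`;
}
```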
Sometimes a queue of work has run dry, and users are expected to wait a variable amount of time for new work to show up. In such instances, they should be able to indicate just once that they would like more work, rather than having to sit there repeatedly pressing the Enter key. Acknowledge their request and tell them they need not press again. If they are one of several people waiting for work, tell them their position in the queue.
Pieceworkers without such feedback will often develop the superstition that repeatedly requesting new work will somehow help. This wastes their time and can lead to repetitive strain injury (RSI) problems. A good feedback mechanism can relieve the anxiety.
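A minimal sketch of such a feedback mechanism, assuming hypothetical work-queue endpoints:

```typescript
// A sketch of single-request queue feedback: one acknowledged request
// instead of repeated Enter-pressing, with the place in line kept visible.

async function requestMoreWork(workerId: string, show: (msg: string) => void) {
  const res = await fetch("/api/work/request", {
    method: "POST",
    body: JSON.stringify({ workerId }),
  });
  const { position } = await res.json();

  // Acknowledge immediately, so there is no temptation to press again.
  show(`Request received; you are number ${position} in line. No need to ask again.`);

  // Keep the position current until the work actually arrives.
  const timer = setInterval(async () => {
    const status = await (await fetch(`/api/work/status/${workerId}`)).json();
    if (status.workReady) {
      clearInterval(timer);
      show("Your new work has arrived.");
    } else {
      show(`Still waiting; you are number ${status.position} in line.`);
    }
  }, 5000);
}
```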