Usability: An Annotated Bibliography

This annotated bibliography was compiled by students in Dale Sullivan’s and Michael Moore’s HU4628 Reading and Usability course in the Scientific and Technical Communication Program at Michigan Tech, Spring 2001.

Contributors: Lucas Baker, Cheryl Ball, Nicholas Bateman, Kara Dodge, Nathan McKimpson, Jonathan Pechta, Curtis Petersen, Michael Robertson, Dale Sullivan, Melinda Vanderbok, Denny Wagner.

N.B. These annotations are written in précis form.

Adler, Paul S. and Terry A. Winograd. “The Usability Challenge.” Usability: Turning Technologies into Tools. Oxford UP, 1992, 3-14.

Adler and Winograd use this chapter of the book they edited to explain the relevance of usability in the design process. They stress the danger of not taking time to do usability tests at all, as well as the danger of not positioning users and testers appropriately in relation to the design process and timeline. They argue that instead of trying to make machines so perfect that users become unnecessary, usability testers need to design tests that make technology better for the users. They stress that all persons involved with technology (its creation, documentation, and use) need to have equally valued positions in their relationship to the technology. Their purpose is to stress the importance of doing usability testing while ensuring collaborative and meaningful relationships between designers, documenters, and users. Adler and Winograd’s audience is likely an academic one, because their arguments are theoretical in nature and do not offer practical solutions to the problems they identify.

Andre, Terrance and Aaron Schopper. “Designing for Human Error.” Human Factors in System Design. British Columbia Teacher, 1997, 165-195.

Andre and Schopper discuss the significance of human error and suggest that errors should be viewed from both social and technical perspectives. By comparing and analyzing the causes and consequences of many dangerous accidents involving system-design failures, they attempt to convince designers to create systems that are both error-tolerant and error-resistant, in order to limit the consequences of human error. The audience for this chapter is designers of Air Force equipment interfaces and other heavy equipment.

Barnett, Mark. “Testing a Digital Library of Technical Manuals.” IEEE Transactions on Professional Communication (41.2) June 1998, 116-22.

In this article, Barnett claims that since digital libraries are now available, more usability tests must be done to help developers understand practical uses of, as well as problems with, current digital libraries. Barnett provides background on three digital libraries, noting that only one other such usability test had been done on any of them. He compares and contrasts the usability of a paper library and a digital library of technical documents, a restriction of library materials that the previous usability test did not make. “The study employs a classical experimental design with two tests” (118), and participants executed each test in both libraries. The test was conducted to provide information on the time differences in doing each task in each library, on the advantages and disadvantages of each library, and on which library users prefer. This usability study and article were designed to help the digital library developers who read this journal.

Bethke, Frederick J. Measuring the Usability of Publications. IBM, 1982.

Bethke claims that measuring readers’ perception of a technical document’s ease of use would be a better approach than measuring the time and energy spent on the document. He explains that a baseline for all the manuals needed to be defined by questioning known users about a random selection of IBM manuals. The purpose of this article is to show how users perceive the ease of use of the manuals in order to make better manuals in the future. The report was written about IBM manuals specifically, so while its information could be useful to all technical communicators, the main audience is those technical communicators who work for IBM.

Bravo, Ellen. “The Hazards of Leaving Out the Users.” Participatory Design: Principles and Practices. Hillsdale, NJ: Erlbaum, 1993, 3-11.

Bravo addresses why “real” users must be asked for input not only on the design of the technologies they use on the job, but also on how their jobs should be designed. She argues that too often, persons who are low in the employment hierarchy (such as secretaries and other positions typically held by poorly paid women) are not given a valued place to voice their opinions on the jobs they do and the technologies used to get those jobs done. Her purpose is to call attention to this problem. She intends to make women at all levels of employment a part of usability testing, so as to better not only their own work situations but also those of others (men and women) in similar situations. Bravo’s audience is composed of usability testers and designers who believe in and practice participatory design. Her audience could value theoretical discussions of the position of users in usability testing, but would also likely be practitioners looking for input on whose needs are not being addressed in the work that designers do.

Cherry, Joan M. and So-Ryang Jackson. “Online Help: Effects of Content and Writing Style on User Performance and Attitudes.” IEEE Transactions on Professional Communication (32) December 1989, 294-299.

In the December 1989 issue of IEEE Transactions on Professional Communication, Cherry and Jackson suggest that content and writing style do not greatly affect overall user performance, but that they do play a role in the efficiency and user satisfaction of online help systems. After describing their motivation for conducting this test (earlier work suggesting that the content of online help is more important than its format, along with personal observations), Cherry and Jackson conducted both an interactive task scenario and a hard-copy review using two versions of online help developed for screen design software on a mid-range IBM system. The purpose of this test was to assist online help developers at IBM in creating a set of empirically based guidelines for developing more efficient online help systems. This article is aimed primarily at professionals developing online help systems who want to improve the usefulness of the help system and create a more satisfying experience for the user.

“Methods.” Usability First (http://www.usabilityfirst.com/methods/index.txl) post date: 2001; download date: April 23, 2001.

The authors present an overview of various methods of evaluating usability. The methods described are cognitive walkthrough, focus groups, GOMS, prototyping, task analysis, usability inspection, and user testing. The authors present this information in order to promote usability in design. The site appears to be aimed at website and software designers. 

Diani, Marco. “The Social Design of Office Automation.” Design Issues (3.2) (pre-1989; date unknown), 73-82.

Marco Diani argues in his article for a “science of design” that would consider and account for the changing trends in office automation caused by the rise of computer-based technologies. Diani argues that a “science of design” is necessary to homogenize the multi-disciplinary research on office automation so that the following factors can be used to help researchers come to a consensus regarding the commonalities of automation: “(1) how office work changes in nature and quality, (2) when and how people modify their own social identity to cope with ‘new’ jobs and technological tools, (3) which skills and qualifications have to be abandoned and which ones have to be developed, and (4) what outcomes will result from different strategies of office automation in large organizations.” Diani’s purpose is to remind his audience that humans are still important factors in office design, and that scientists and others who study/research how and why offices should be designed in particular ways must equally consider the effects of office design on the people who work in these settings. Diani intends his audience to be scholars and researchers who might be interested in office design from a scientific and functional research perspective. I say “functional” as opposed to “usable” since it seems that Diani invites researchers to understand the notion of usability in relation to compassion for office workers who are now part of a computer-based technology revolution in their offices. 

Dieli, Mary. “The Usability Process: Working with Iterative Design Principles.” IEEE Transactions on Professional Communication, Special Issue on Usability Testing (32), December 1989, 272-278.

Dieli reviews the usability testing and engineering literature current in 1989, along with a specific usability test, in order to show the constraints that testers face in real-world situations and to offer solutions for overcoming those constraints in testing environments. She argues that document designers, technical communicators, and human factors engineers all face the constraints of short deadlines and the incomplete condition of testing materials early in the development process. She argues that such constraints can lead to a loss of quality in the testing analysis and results, but that awareness of those constraints, and overcoming them by way of tradeoffs, can lead to better testing in the future. Her purpose is to further the field of usability testing, which in many ways was still emerging at the time the article was written, so that testers could do better tests, and so that persons receiving the results, or designers being encouraged to have their products tested, would give usability tests more credence in the design and documentation process. Dieli’s audience would be practitioners in the field of technology: technical communicators, human factors engineers, and designers. Because of the nature of the journal where this article was published, her audience would be more practically than theoretically oriented.

Grice, Roger A. and Lenore S. Ridgway. “A Discussion of Modes and Motives for Usability Evaluation.” IEEE Transactions on Professional Communication (32), December 1989, 230-237.

Grice and Ridgway describe several types of usability evaluation for testing documents, suggesting what each type is used for and explaining when it should be used in the process of developing documents. They explain that testing can be used to explore, to verify, or to compare documents; that testing can be done formally or informally; and that feedback can be used immediately to “fix” documents or collected and used later to develop principles guiding future document production. Their apparent purpose is to introduce the reader to a variety of usability test methods and to help the reader know which to employ, why, and when. Since this article appears in IEEE Transactions on Professional Communication and since it seems very practical in its orientation, its implied audience seems to be practitioners: technical communicators in industry who need to test their documents and convince their employers that such testing is worth the extra time and expense involved.

Guillemette, Ronald A. “Usability in Computer Documentation Design: Conceptual and Methodological Considerations.” IEEE Transactions on Professional Communication (32) December 1989, 217-225.

In the December 1989 issue of IEEE Transactions on Professional Communication, Guillemette suggests several factors that document designers should take into account when designing effective documents, and then gives a brief description of these factors and how they can influence a document. Guillemette breaks his list of factors into two main groups, human factors involved with design and the concept of usability, and then describes how a document can be designed to minimize the effects of these factors on its effectiveness. The purpose of this paper is to inform professional designers about outside factors that can influence their documents and to provide them with suggestions for improving future documents. The article is written for technical communication professionals who develop documentation and are concerned about how outside influences can affect its usability.

Holtzblatt, Karen and Sandra Jones. “Contextual Inquiry: A Participatory Technique for System Design.” Participatory Design: Principles and Practices. Hillsdale, NJ: Erlbaum, 1993, 177-210.

In this chapter, Holtzblatt and Jones suggest that contextual inquiry is a viable usability technique for incorporating user participation into system design. The authors divide their description and defense of contextual inquiry into six sections: a brief perspective on usability, a detailed discussion of the principles of contextual inquiry, suggestions on how to conduct a contextual interview, suggestions on how to analyze the results, ideas for using the technique with other participatory design strategies, and a final push for using contextual inquiry throughout the development cycle. Clearly this is an informational text meant to describe a particular participatory design technique in order to provide usability specialists with yet another method to use. The book in which this chapter appears is a collection of techniques and discussions of participatory design, so the audience is certainly intended to be a mix of usability experts and members of the academic community looking to enhance their knowledge of usability studies.

Hom, James. “The Usability Methods Toolbox.” The Usability Methods Toolbox. (http://www.best.com/~jthom/usability/) post date: 1996; download date: May 2, 2001.

Hom describes several types of usability testing methods, suggesting the cases in which each method should be employed. For each method, he answers the following questions: “What is it?” (a description of the method), “How do I do it?” (how to conduct a usability test using the method), “When should I use this method?” (when the method should be employed), and “Who can tell me more?” (a list of links to related articles and books). He explains the different methods of usability testing in order to provide an informational site that can be accessed and used by beginners in the usability field.

Jastrzebowski, Wojciech. “An Outline of Ergonomics, or the Science of Work based upon Truths Drawn from the Science of Nature.” Jastrzebowski: First Treatise on Ergonomics (1857).

The author outlines the central concepts of ergonomics. Jastrzebowski defines and/or classifies ergonomics, the science of work, and useful work, and lists the four “chief considerations” that concern all useful work. The author proposes this outline in order to present work as an object of academic or scientific investigation. This was written in 1857 for the readers of Nature and Industry.

Lentz, Leo and Menno De Jong. “The Evaluation of Text Quality: Expert-Focused and Reader-Focused Methods Compared.” IEEE Transactions on Professional Communication (40.3) September 1997, 224-33.

In this article, Lentz and De Jong address the question of whether experts, including subject-matter experts and technical writers, “can [effectively] predict the results of a reader-focused text evaluation” (224). Lentz and De Jong compare and contrast an expert-focused test method with a reader-focused test method, applying both to a government brochure on alcohol and comparing the reader problems each method identified. The authors record the problems the experts believe readers will experience against the problems readers actually experienced in order to answer the question of expert problem predictability. The intended audience is usability testers who must decide which test method is best suited for evaluating reading problems.

Marion, Craig. “Quality, Usability, and the Ontological Argument for the Existence of God.” Usability Interface (http://www.stcsig.org/usability/) post date: October 2000; download date: April 22, 2001.

The author of this article argues that the quality of a usable product is as important as the functionality of the product. Marion starts his argument with a narrative of his first philosophy class, remembering the arguments for the existence of God as made by Anselm and later taken on by Immanuel Kant. Kant’s argument hinged on the quality of real versus imaginary sauerbraten. And it is this definition of quality, Marion says, that relates directly to the usability versus functionality of products for users. Marion’s purpose in his roundabout argument is to foreground the importance of quality when testing for usability. His claims stretch to meet the typical audience of Usability Interface, which is geared more toward quantitative research and results than toward qualitative or theoretical ideas.

Microsoft. “What is Usability at Microsoft?” Microsoft Usability: What is Usability at Microsoft
(http://www.microsoft.com/usability/faq.htm) post date: September 27, 2000; download date: April 22, 2001.

On its website, Microsoft claims that its usability testing is done by a wide range of individuals from different departments. The company further claims that the subjects it chooses come from all across the nation and from every walk of life. The website explains how applicants are chosen and what happens on both sides of a test. It presents this information in order to show the steps Microsoft takes to ensure customer satisfaction with its products. The website is written for anyone interested in knowing how Microsoft tests the usability of its products.

Mirel, Barbara. “Critical Review of Experimental Research on the Usability of Hard Copy Documentation.” IEEE Transactions on Professional Communication (34.2) June 1991, 109-122.

Mirel argues that certain experimental studies of hard copy documentation do not yield firm conclusions due to methodological shortcomings. Mirel examines 22 experimental usability studies that appeared between 1980 and 1989 and argues that research design flaws cast doubt upon the conclusions these studies reach. Mirel offers 13 recommendations in order to persuade usability researchers to conduct their studies with greater methodological rigor. The audience for this article consists of usability researchers.

Mirel, Barbara. “Product, Process, and Profit: The Politics of Usability in a Software Venture.” ACM Journal of Computer Documentation (24.4) November 2000, 185-203.

Mirel tells the story of a software development project in which she worked as a human-factors expert. By tracing the stages of the development process, showing political battles between system-centered developers and user-centered developers, Mirel shows how tensions may grow when members of the same team do not place the same value on user-centered design. She seems to want to show that inflexibility leading to political battles, even though it may produce a more user-centered product, can cause such lasting damage to relationships among workers that the gain is not worth the cost. Appearing as it does in the ACM Journal of Computer Documentation, the article’s intended audience seems to be composed of both academic and professional readers interested in the field of technical communication in the computer industry. 

Ostrander, Elaine. “Usability Evaluations: Rationale, Methods, and Guidelines.” Intercom (46.6) June 1999, 18-21.

In the June 1999 issue of Intercom, Ostrander argues that performing usability evaluations on documentation can make it more accessible to its intended audience. She describes seven justifications for performing usability evaluations, discusses how these relate to Donald Kirkpatrick’s four levels of evaluation, and lists some general guidelines for each of these levels of evaluation. The purpose of this paper is to persuade documentation creators to include usability evaluations as part of their normal documentation development cycle in order to increase the accessibility of the documentation for its intended audience. This article is written for technical communication professionals who develop product documentation on a routine basis, primarily for computer applications.

Rubin, Jeffrey. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Wiley, 1994.

Rubin claims that the way products are made has changed to make them more usable because of the increase in usability testing. Since usability testing is a newer discipline, many people do not understand it, so Rubin writes this book as a guide to how usability testing is done. Throughout the book, Rubin illustrates how multiple types of usability tests are conducted, shows what responsibilities a person must take on when running a usability test, and shows how these tests are done to make products more user-centered. Rubin wrote the book to give people who are conducting a usability test a practical guide to how such tests should be done. The book was written for anyone new to the discipline, whether students, new professionals, or experienced professionals who are new to usability testing.

Rosenbaum, Stephanie. “Usability Evaluations Versus Usability Testing: When and Why?” IEEE Transactions on Professional Communication (32) December 1989, 210-216.

With this article, Stephanie Rosenbaum argues that it is unwise to jump right into usability testing, suggesting instead that all usability projects would benefit from starting with a usability evaluation. Rosenbaum presents her argument by defining and distinguishing usability evaluations and usability testing, discussing the important role a usability specialist plays in usability projects, explaining why testing is not enough by itself, and finally describing the process of usability evaluations and how they fit with testing and other methods of obtaining information on a product’s usability. Rosenbaum wrote this article to help her peers in the usability field expand their methodology, and therefore their results. Featured in a professional communication journal, this article is definitely aimed at technical communicators, especially those interested in usability.

Schellens, Peter Jan and Menno De Jong. “Revision of Public Information Brochures on the Basis of Reader Feedback.” Journal of Business and Technical Communication (11.4) October 1997, 482-501.

Schellens and De Jong argue that revising documents on the basis of pretesting can be made more effective if such revisions are based on the revision practices of professionals. They support this claim by reporting the results of a study they conducted in which they used the “plus-minus” method to conduct reader response interviews. Having discovered points in public information brochures where readers had problems with comprehension, acceptance, appreciation, or confidence in completeness and relevance, the authors then gave these passages to professional writers and correlated the kinds of revisions they made with the problems in the drafts. By reporting their procedure in this study, the authors offer a research strategy which they believe will improve present practice. Because the article appears in the Journal of Business and Technical Communication, a journal devoted to research, the intended audience appears to be other researchers in the field of technical communication, probably academics.

Schriver, Karen A. “What Document Designers Can Learn from Readers.” Dynamics in Document Design. Wiley Computer Publishing Co., 1997, 443-495.

Schriver summarizes the rest of the book and states that document designers need to understand how people think and feel as they engage with documents. Designers must make conscious choices when designing, and the most effective way for them to make the reader’s job easier is through reader feedback. Schriver analyzes the results of two studies to support her claim: one on feedback-driven audience analysis and one on a teaching method for improving document designers’ ability to anticipate readers’ problems. Schriver’s purpose is to encourage document designers to think consciously about how a reader will respond to a document and to make subsequent revisions. Her audience is members of the scientific and technical communication field, including students and practitioners.

Schulz, Erin, Judith Ramey, Maarten Van Alphen, and William Rasnake. “Discovering User-Generated Metaphors Through Usability Testing.” IEEE Transactions on Professional Communication (40.4) December 1997, 255-264.

The authors of this article argue that user-generated metaphors are more relevant to the design process and usability testing than previously thought. They claim that during a usability test on the Fluke ScopeMeter, which analyzes electrical signals, they discovered that users’ procedures in approaching the task scenarios differed according to their experience with similar equipment, and that those differences were based on the metaphors users brought to the new equipment. The authors note that novice users took a propositional approach to the task scenarios, using a step-by-step method of analyzing the outcomes of each hypothetical push of a button on the ScopeMeter. The intended audience is a range of human-computer interaction specialists, from those in liberal arts fields to those in engineering and computer science. This reading is supported by the authors’ references in the background portion of the article, which range from book metaphors in hypertext to the typing skills of expert typists.

Sienot, Matthijs. “Pretesting Web Sites: A Comparison Between the Plus-Minus Method and the Think-Aloud Method for the World Wide Web.” Journal of Business and Technical Communication (11) October 1997, 469-482.

In this 1997 special issue on formative evaluation, Matthijs Sienot suggests that the plus-minus method is more useful for pretesting web sites than the think-aloud method because it encouraged the testers to act as evaluators rather than users, as was the case with the latter method. Sienot presents his comparison of the two methods through a study of how forty web-experienced participants used the two methods to test a tourism web site. This study leads to his conclusion that the plus-minus method is better suited for web site pretesting since it was the most successful at detecting “as many different kinds of reader problems as possible.” By presenting his findings and conclusions, Sienot clearly aims to offer web site developers an effective way to detect problems with their sites during the design process. Since this journal has readers from both the technical communication and business worlds, this article seems suited to corporate web developers and testers, although it is certainly not limited to this audience. 

Sullivan, Patricia. “Beyond a Narrow Conception of Usability Testing.” IEEE Transactions on Professional Communication (32.4) December 1989, 256-264.

Patricia Sullivan argues that the interpretation of data from usability testing should be expanded to cover a range of possible feedback rather than a single narrow concept, because different investigative processes yield different, and often large, amounts of diverse information, which may be more natural and broad. She supports this argument by noting that the way a usability test is set up and carried out shapes the information taken from the testing environment, which can help us learn more about usability and the testing process. In essence: “What is it that we are doing in relation to others who study usability, and what might we need to be doing?” The apparent purpose of the article is to define and investigate the different varieties of usability testing and then evaluate the components of that testing in order to employ new methods, expanding usability testing and the data that can be collected from it. The article appears in IEEE Transactions on Professional Communication and is directed toward an audience of professionals who conduct usability testing and those interested in its impacts and the data it produces.

Van der Geest, Thea, and Lisette Van Gemert. “Review as a Method for Improving Professional Texts.” Journal of Business and Technical Communication: Special Issue, Formative Evaluation of Texts (11.4) October 1997, 433-450.

Van der Geest and Van Gemert, in an article appearing in the Journal of Business and Technical Communication’s special issue on the formative evaluation of text, suggest that a more structured review process will add to the effectiveness of the review. The authors describe a systematic (four-step) review method and compare it with other methods of formative evaluation. They base their recommendation on the goals, practices, and effectiveness of the review processes followed by people in three examined areas: a survey of writers, an interview with a text production expert, and a case study involving engineers. Van der Geest and Van Gemert hope to show, by examining the goals and practices followed by the three study groups, that “a clear shared view on the goals and topics of a review” will “improve the effectiveness of the review cycle.” The intended audience is reviewers and reviewees.

Wagner, J. M. and Spyridakis, J. H. “The Relevance of Reliability and Validity to Usability Testing.” IEEE Transactions on Professional Communication (32) December 1989, 265-272.

Wagner and Spyridakis suggest that “the concepts of reliability and validity are relevant to usability testing” and that “a concern for reliability and validity will enhance the credibility and effectiveness of usability tests.” The authors note that humans are, by their very nature, “unreliable instruments,” and they construct an argument that reliability and validity are “ignored constraints.” The authors evaluate the five most common usability testing methods (protocols, surveys, interviews, expert reviews, and direct observation) and point out areas of weakness, backed by empirical support, for their claim that reliability and validity are not being recognized as important. The purpose of this article is to make usability testers aware of reliability and validity as important factors to consider when conducting future usability tests, in order to add to the credibility of their research and their field overall. The intended audience is usability testers.

Zimmerman, Donald E., Michel L. Muraski, and Michael D. Slater. “Taking Usability Testing to the Field.” Technical Communication (46.4) November 1999, 495-500.

The authors explain that usability testing and the fundamental principles behind it can be applied to a variety of products outside a standard usability testing environment. They claim that little documentation and few case studies have been produced to help other technical communicators through the process of running a usability test on a product that is not computer software or hardware. The authors offer the article as a case study for other technical communicators to follow as an example when performing a usability test outside a standard environment, in the field. The article appears in the November 1999 issue of Technical Communication and offers guidance for technical communicators who are new to usability testing.
