Usability engineering methods for software developers
The role of engineering is to apply scientific knowledge to produce working systems that are economically devised and fulfill specific needs. Our software usability group has adapted engineering techniques to the design of user interfaces.
To understand user needs, engineers must observe people while they are actually using computer systems and collect data from them on system usability. Observation and data collection can be approached in several ways. Our group uses these methods to gather information directly from users, not through secondhand reports. We use these methods to study the usability of current versions of our products, competitive systems, prototypes of new systems, and manual paper-based systems.
Our software usability engineering process evolves as we use it in product development. At present, the process consists of three principal activities. These development activities are parallel, not sequential. We do not view user-interface design as a separate and initial part of the development process but as an ongoing process in system development.
These usability engineering techniques apply to most software development environments and are most effective in improving software usability when applied together.
During interviews of users actually working with their systems, we ask about their work, about the details of their system interfaces, and about their perception of various aspects of the system.
The user and the engineer work together to reveal how the user experiences the system as it is being used. Ideally, the number of interviews conducted per product depends on how much data is being generated in each succeeding interview.
The interview process stops when new interviews no longer reveal much new usability data. In practice, resource and time limitations may stop the interview process before this point. In any event, our approach is to start with a small number of interviews (four or fewer) with people in various jobs. We use these interviews to determine how many and what type of users will be most useful for uncovering new usability data.
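As a rough illustration of this saturation-style stopping rule, the sketch below (in Python, with hypothetical issue labels and an arbitrary threshold, none of which come from the process described above) flags when the latest interview adds little that earlier interviews had not already revealed.

```python
# Hypothetical sketch of a saturation-style stopping rule for contextual
# interviews: stop once a new interview contributes few usability issues
# that earlier interviews had not already revealed.

def saturated(interviews: list[set[str]], min_new_issues: int = 2) -> bool:
    """Return True when the latest interview adds fewer than
    min_new_issues issues not seen in any earlier interview."""
    if len(interviews) < 4:                 # start with a handful of interviews
        return False
    already_seen = set().union(*interviews[:-1])
    newly_found = interviews[-1] - already_seen
    return len(newly_found) < min_new_issues

# Each set holds the usability issues observed in one interview (labels invented).
sessions = [
    {"deep menus", "jargon labels", "slow search"},
    {"jargon labels", "no undo"},
    {"slow search", "print layout"},
    {"jargon labels", "slow search"},       # nothing new -> saturation reached
]
print(saturated(sessions))                  # True
```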
Data on ongoing experience provides a richer source of ideas for interface design than data on summary experience. For example, data collected from field studies has revealed the importance of interface transparency to users.
A transparent interface allows the user to focus on the task rather than on the use of the interface. Our understanding of transparency as a fundamental usability concept comes from an analysis of data on ongoing experience. Some interface techniques can help keep the user in the flow of work, thus increasing interface transparency. One example can be drawn from a workstation application for desktop publishing.
With pop-up menus, users do not have to move their eyes and hands to a static menu area to issue commands, making them an effective interface feature for experienced users. We will consider using pop-up menus in new workstation software applications when we believe their use will keep the user in the flow of work. We have developed our understanding of transparency by observing people using a variety of applications in different jobs.
Transparency is an aspect of usability that we find across many different contexts. In developing new products, it is also important to consider the diversity of environments in which people will use the system. Different users in different contexts have different usability needs.
All these aspects influence the usability of a system for each individual. As with other products, software systems are used in the field in ways not anticipated by the designers. Because the context in which a system is used is so important, we interview a variety of users who use particular products to perform different tasks. We look for common elements of usability for groups of people, as well as the distinctive elements of usability for individual users.
Interviewers bring a focus, or background, to their visits with users. The focus determines what is revealed and what remains hidden during a visit. The engineer needs to enter an interview with a focus appropriate to his or her goals. For example, in some visits an engineer may need to look for new product ideas; in others, the engineer may need ideas to improve an existing product.
To avoid losing data, interviewers should not try to extensively analyze their data during the session. We use two-person teams, where one team member concentrates on the interview and the second member records the data.
Contextual interviews rapidly generate large amounts of data. To generate such data, interviewers need to concentrate on their relationships with users and understand what users do during the session. Whenever possible, we videotape interviews. If users are unwilling to have their work videotaped, we audiotape the session while the second team member takes detailed notes to supplement the taped information.
The two team members meet after the interview to reconstruct an accurate record of events. Even without any taping or note-taking, engineers can learn a great deal from user visits. Although the detail from the interview may not be remembered, the understanding gained during the interview is still a valuable source of insight. Studying users provides a rich, holistic understanding of how people experience software systems. However, each person will have his or her own interpretation of user experience as it relates to usability.
Keeping these understandings private and unarticulated can have two undesirable results. First, team members work toward different and sometimes mutually exclusive goals. Our group constructs shared, measurable definitions of usability in the form of operational usability specifications.
Each attribute is associated with a measuring method and a range of values that indicates success and failure. Five items are defined for each attribute: the measuring technique, the metric, the worst-case level, the planned level, and the best-case level. The measuring technique defines the method used to measure the attribute. Details of the measuring technique not shown in Table 1 accompany the brief description in the summary table. There are many different techniques for measuring usability attributes.
We have usually measured usability attributes by asking users to perform a standardized task in a laboratory setting. We can then use this task as a benchmark for comparing usability attribute levels of different systems. Initial users were Digital employees who had experience with the VMS operating system and the Digital Command Language but not with conferencing systems.
The users completed their initial evaluations using Likert-style questionnaires after they finished the benchmark task. Error recovery was measured by a critical-incident analysis. In the analysis, we used questionnaires and interviews to collect information about costly errors (critical incidents) made by users of the prototype versions of the VAX NOTES software.
The metric specifies how an attribute is expressed as a measurable quantity. For the initial-use attribute, the metric was the number of successful interactions in the first 30 minutes of the benchmark task. For the initial-evaluation attribute, we scored the questionnaire on a scale ranging from 0 (strongly negative) to 100 (strongly positive), with 50 representing a neutral evaluation.
The worst-case and planned levels define a range from failure to meet minimum acceptable requirements to meeting the specification in full. It is easier to specify a range of values than a single value for success and failure.
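To make the five items concrete, here is a small hypothetical sketch of how one row of such a specification could be recorded and checked. The attribute name echoes the initial-use example above, but the technique string and the level values are invented for illustration, not taken from the actual specification table.

```python
from dataclasses import dataclass

# Hypothetical record for one row of an operational usability specification,
# holding the five items named above: measuring technique, metric, and the
# worst-case, planned, and best-case levels. All values are illustrative.

@dataclass
class UsabilityAttribute:
    name: str
    measuring_technique: str
    metric: str
    worst_case: float
    planned: float
    best_case: float

    def assess(self, observed: float) -> str:
        """Place an observed measurement within the specified range."""
        if observed < self.worst_case:
            return "fails minimum acceptable requirements"
        if observed < self.planned:
            return "between worst-case and planned levels"
        return "meets or exceeds the planned level"

initial_use = UsabilityAttribute(
    name="initial use",
    measuring_technique="standardized benchmark task in a laboratory setting",
    metric="successful interactions in the first 30 minutes",
    worst_case=5, planned=10, best_case=20,   # invented levels, not the real ones
)
print(initial_use.assess(12))                 # meets or exceeds the planned level
```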
The form of the interview can be adjusted to respond to the user and encourage elaboration.
Abstract: During the development of a medical image viewer for exploring 3D datasets acquired by modern scanning technologies, the need for an evaluation of the user interface arose. The evaluation was conducted with two methods: a Heuristic Evaluation (an evaluation by usability experts) and a Thinking Aloud test (novice users exploring a user interface), both carried out on a paper prototype.
A reduced version of this program, meant to run on handheld computers for discussing diagnoses with patients, was also tested using a Thinking Aloud test. The results of the evaluations showed that the prototypes were generally fit for use. A point where some work is still necessary is navigation within the dataset, especially in 3D. For the evaluation, the mobile device was assumed to be a Personal Digital Assistant (PDA); however, a successful design should make it equally suited to other hardware.
After analyzing the problem and determining a solution, the theoretical possibilities were explored using a paper mockup. By first using a non-professional test group to evaluate the solution in an informal setting, it was possible to identify and eliminate weaknesses in the structure. A second and more rigorous test was then made with a general practitioner, a member of the target user group.
The outcome of this test was documented and will form the foundation of a number of alterations in the basic design. Throughout, the basic tenets of HCI were applied in order to ensure that the resulting user interface design conformed to the requirements of the target user group.
Abstract: Designing user interfaces is commonplace nowadays, but what people often describe as designing for end users is merely a nice-looking interface without well-structured usability. Unfortunately, user-centered design is not yet common practice. Yet well-structured interaction design is the ultimate key to the success of an application. In this article we show how user-interface evaluation methods can give you a great deal of constructive feedback.
This feedback is essential for effective design. By applying the following methods, you can obtain this feedback with relatively little effort in relation to the benefit. Abstract: Imagine being in a foreign city and having to use the public transport system.
Visiting a city for the very first time, you hardly know its transport system, so a little interactive help would be welcome. Providing exactly this is the goal of the system introduced in this paper.
Abstract: While reading or searching current information such as news articles, geographical or historical background information often proves useful. This background information is mostly static, like historical information, but has to be linked to the current articles to allow users to browse for further information.
Within the application Zeitgeist, users can read current news linked to this background information. Additionally, a retrieval mechanism allows searching for information that is not linked. The main navigation component, the Zeitleiste (timeline), visualizes information in its temporal context.
This paper presents the evaluation of this application. We then evaluate the results of both usability engineering methods and conclude that the methods complement each other and should be used in combination. The most concrete conclusion is the need for a more sophisticated help system. The importance of working with actual end users also becomes evident in this paper.
Andreas Holzinger - Usability Engineering Methods. One of the basic lessons we have learned in the area of HCI is that usability must be considered before prototyping takes place. Heuristic Evaluation (HE) is an inspection method. It involves having usability specialists judge whether each dialogue element follows established usability principles (Nielsen and Mack, 1994). The original approach is for each individual evaluator to inspect the interface alone.
Only after all the evaluations have been completed are the evaluators allowed to communicate and aggregate their findings. This is important in order to ensure independent and unbiased evaluations. During a single evaluation session, the evaluator goes through the interface several times, inspects the various dialogue elements, and compares them with a list of recognized usability principles.
Different versions of HE are currently available, some of which, for example, have a cooperative character. The heuristics to be used need to be carefully selected so that they reflect the specific system being inspected; this holds especially for Web-based services, where additional heuristics become increasingly important.
Usually 3 to 5 expert evaluators are necessary (a cost factor); less experienced people can perform an HE, but the results are not as good. At the same time, this version of HE is appropriate in some situations, depending on who is available to participate.
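As a rough sketch of how independent findings might be pooled only after every evaluator has finished, the following hypothetical example groups reported problems by interface element and heuristic; the evaluator names, heuristics, and severity ratings are invented for illustration and are not part of any particular HE variant.

```python
from collections import defaultdict

# Hypothetical pooling of independent heuristic-evaluation findings.
# Each evaluator inspects the interface alone; notes are merged only after
# all sessions are finished. Heuristics and severities (0-4) are invented.

findings = {
    "evaluator_1": [("search page", "visibility of system status", 3),
                    ("checkout", "error prevention", 4)],
    "evaluator_2": [("checkout", "error prevention", 3),
                    ("help page", "help and documentation", 2)],
    "evaluator_3": [("checkout", "error prevention", 4)],
}

def aggregate(findings):
    """Group problems reported by several evaluators and average severity."""
    grouped = defaultdict(list)
    for notes in findings.values():           # merged only after all sessions
        for element, heuristic, severity in notes:
            grouped[(element, heuristic)].append(severity)
    report = [(element, heuristic, len(scores), sum(scores) / len(scores))
              for (element, heuristic), scores in grouped.items()]
    # Rank by how many evaluators saw the problem, then by mean severity.
    return sorted(report, key=lambda row: (-row[2], -row[3]))

for element, heuristic, count, severity in aggregate(findings):
    print(f"{element}: {heuristic} (reported by {count}, mean severity {severity:.1f})")
```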