ListPro.io: Helping you write professional-grade lists

How to use a coding schema on raw data from moderated usability tests on software applications
(Qualitative Research)

Using a coding schema in qualitative research, such as moderated usability tests on software applications, is an effective way to organize and analyze raw data. Here's a simplified guide to follow:


Step 1: Understand Your Data and Define Your Coding Schema


Before you can apply a coding schema, you must first understand your data thoroughly. In moderated usability tests, your data may include detailed notes, audio recordings, or video transcripts of users' interactions and feedback. These raw data contain valuable insights about the user's experience, such as difficulties encountered, features appreciated, and suggestions for improvement.


To define your coding schema, you need to identify the key themes and patterns you aim to investigate. This could be categories such as navigation issues, interface design feedback, bug reports, or specific feature feedback. Your coding schema should be comprehensive enough to cover all potential themes you expect to see in your data.


Step 2: Code Your Data


In this step, you will apply your coding schema to your raw data. This usually involves reading, watching, or listening to your usability test data and assigning appropriate codes to relevant pieces of information. It's a process of labeling parts of your data according to the themes they represent. This can be done manually, but various qualitative data analysis software packages (like NVivo, Atlas.ti, or MAXQDA) can also help streamline the process.
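For teams that want tool support beyond manual tagging, a first pass at this labeling step can be sketched in a few lines of Python. The codes and keywords below are hypothetical; keyword matching only suggests candidate codes, which a human coder then confirms, rejects, or refines:

```python
# Minimal sketch: keyword-assisted first-pass coding of transcript excerpts.
# The schema below is hypothetical; in practice, codes are applied by a human
# reviewer (often in a tool like NVivo), with keywords serving only as hints.

CODE_KEYWORDS = {
    "navigation_issue": ["can't find", "where is", "lost"],
    "frustration": ["annoying", "frustrating", "why isn't"],
    "positive_feedback": ["that was easy", "i like", "works well"],
}

def suggest_codes(excerpt: str) -> list[str]:
    """Return candidate codes whose keywords appear in the excerpt."""
    text = excerpt.lower()
    return [code for code, words in CODE_KEYWORDS.items()
            if any(w in text for w in words)]

print(suggest_codes("This is so annoying, I can't find the settings page"))
```

A human coder would still review every suggestion; keyword matching misses sarcasm, paraphrase, and context.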


Remember, coding is not a one-time process. It's iterative. As you code your data, you might find new themes emerging that were not initially included in your coding schema. When that happens, you need to revisit your coding schema, refine it, and apply the changes consistently across your data.


Step 3: Analyze and Interpret Coded Data


Once your data is coded, the next step is to analyze and interpret your findings. Look for patterns, similarities, and differences in your coded data. This will help you draw conclusions about the usability of the software application under test. Lists are a useful format for summarizing recurring findings.
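A simple way to start looking for such patterns is to count how often each code occurs, and across how many participants. A minimal Python sketch, using hypothetical coded excerpts:

```python
from collections import Counter

# Hypothetical coded data: one (participant, code) pair per coded excerpt.
coded_excerpts = [
    ("P1", "navigation_issue"), ("P1", "frustration"),
    ("P2", "navigation_issue"), ("P3", "navigation_issue"),
    ("P3", "positive_feedback"),
]

# How often each code occurs overall...
code_counts = Counter(code for _, code in coded_excerpts)
# ...and how many distinct participants each code affected.
participants_per_code = Counter(code for _, code in set(coded_excerpts))

for code, n in code_counts.most_common():
    print(f"{code}: {n} excerpts, {participants_per_code[code]} participants")
```

An issue that appears many times for a single participant is a different finding from one that appears once for every participant, so both counts are worth reporting.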


You might find recurring issues in certain areas, or user suggestions that can influence future design improvements. Your analysis should provide actionable insights that contribute to enhancing the user experience of the software application.


Remember, a coding schema is a tool to help you organize and understand your data, but it's your interpretation and insights that will drive improvements in your software application.


Step 4: Present Your Findings


Finally, your findings need to be communicated effectively to the relevant stakeholders. Present your results in a clear, organized manner, focusing on the key insights and suggestions for improvement. Use visual aids, like graphs and charts, to illustrate patterns and trends. Ultimately, your findings should contribute to informed decision-making about the software application's design and usability.


---


Example study structure

Analyzing data from a moderated usability test involves several steps to identify patterns and insights that can help improve the user experience of a product or service. Here's a general process you can follow:


1. Data Collection: In a moderated usability test, a facilitator interacts with participants, guides them through tasks, and asks questions to better understand their experiences. Your data can come in many forms such as audio or video recordings, notes taken during the session, screen recordings, etc.


2. Transcription: If the usability test was recorded, the next step is to transcribe the audio. Transcribing the sessions helps you review and analyze the sessions thoroughly. There are tools available that can automate this process.


3. Data Coding: Coding is a process to categorize or tag parts of your data to make them easier to analyze. Start by creating a coding scheme based on what you're looking for. This could be based on behaviors, comments, or actions taken during the test. Coding should be done for each participant individually.


4. Data Analysis: After coding, you analyze the data to identify patterns and insights. This involves reviewing your notes, transcriptions, and coding, and then identifying issues and opportunities. You can use qualitative data analysis techniques such as thematic analysis, content analysis, or grounded theory for this. Also, if you've collected any quantitative data, such as task completion rates, error rates, or time-on-task, use appropriate statistical analysis to understand the significance of these results.


5. Findings and Recommendations: Summarize your findings and make recommendations based on your analysis. This could be in the form of a report or presentation. It should include key insights, supporting evidence (like quotes or clips from the usability test), and recommendations for improvement.


6. Sharing Results: Share your results with the relevant stakeholders. This might include developers, designers, product managers, or executives. Make sure your presentation is tailored to the audience so they can understand and take action on your findings.


Remember, the goal of analyzing moderated usability test data is to understand the user experience and to uncover usability issues and opportunities for improvement. Always focus your analysis on the objectives set out before the usability test was conducted.


On Data Coding


Data coding in qualitative research involves categorizing and tagging your data to make it easier to analyze. In the context of a moderated usability test, this means reviewing your data (transcriptions, notes, videos, etc.) and labeling or 'coding' parts of it based on specific themes or patterns you're interested in. For example, you might code instances of 'user confusion,' 'navigation errors,' or 'successful task completion.'


Creating a 'codebook' is a common practice, which is essentially a list of all the codes you're using, along with definitions for each one. This helps keep your coding consistent and allows others to understand your coding process.
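A codebook can be kept as a simple structured list. The fields and entries below are illustrative, not a standard format:

```python
from dataclasses import dataclass

# Sketch of a codebook entry; field names and entries are hypothetical.
@dataclass
class CodebookEntry:
    code: str
    definition: str
    example: str        # a short excerpt illustrating the code
    category: str       # e.g. "User Behaviors", "User Emotions"

codebook = [
    CodebookEntry("user_confusion",
                  "Participant appears or states they are unsure what to do",
                  '"What am I supposed to do here?"',
                  "User Emotions"),
    CodebookEntry("navigation_error",
                  "Participant takes a wrong path through the interface",
                  "Opened Settings when the task required the Profile page",
                  "User Behaviors"),
]

for entry in codebook:
    print(f"{entry.code} [{entry.category}]: {entry.definition}")
```

In practice a spreadsheet works just as well; what matters is that every code has a written definition and an example, so multiple coders apply it the same way.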


As you code your data, you may notice new themes emerging that you hadn't considered before. This is a normal part of the process and it's okay to add new codes as you go along. The goal is to create a set of data that is manageable and can be reviewed systematically.


On Data Analysis


Once your data is coded, it's time to analyze it. The goal of data analysis in this context is to identify patterns, draw conclusions, and provide actionable insights.


For qualitative data, you can use techniques such as:


 Thematic analysis: This involves identifying recurring themes in your data. For example, if 'user confusion' was a code you used often, that's a theme. You'll want to dig deeper to understand why users were confused and how that impacted their experience.


 Content analysis: This is a method for interpreting meaning from the content of text data. It involves looking at the presence of certain words, themes, or concepts within your data.


 Grounded theory: This is a research method that involves generating theory from data. You start with observations (like your coded usability test data) and then move to forming a theory that explains these observations.
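The first mechanical step in thematic analysis is gathering all the excerpts for each code in one place, so the analyst can read across them and interpret why the theme occurs. A minimal sketch, with hypothetical codes and quotes:

```python
from collections import defaultdict

# Hypothetical coded excerpts: (code, quote) pairs from the transcripts.
coded = [
    ("user_confusion", "What does this button do?"),
    ("user_confusion", "I'm not sure where to go next."),
    ("task_success", "Done! That was quick."),
]

# Group every quote under its code so each theme's evidence sits together.
themes = defaultdict(list)
for code, quote in coded:
    themes[code].append(quote)

for theme, quotes in themes.items():
    print(f"{theme} ({len(quotes)} excerpts):")
    for q in quotes:
        print(f"  - {q}")
```

The interpretation itself, i.e. explaining why users were confused and what to change, remains a human task; the grouping just makes the evidence reviewable.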


For quantitative data, such as task completion times or error rates, you'll want to use appropriate statistical analysis methods. Descriptive statistics like mean, median, mode, and range can help summarize your data, while inferential statistics can help you understand whether your findings are likely to apply to a larger population.
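For the descriptive statistics mentioned above, Python's standard library is enough. A small sketch with hypothetical time-on-task data:

```python
import statistics

# Hypothetical time-on-task measurements (seconds) for one task.
times = [42, 55, 38, 120, 47, 51, 49, 44]

# The outlier (120 s) pulls the mean well above the median, which is why
# usability reports often favor the median for time-on-task.
print("mean:  ", round(statistics.mean(times), 1))   # 55.8
print("median:", statistics.median(times))           # 48.0
print("range: ", max(times) - min(times))            # 82
```

With the small sample sizes typical of moderated tests (often 5 to 10 participants), treat inferential statistics with caution and report them alongside the qualitative evidence.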


Remember that it's important to triangulate your data - that is, don't rely on just one source of data or one method of analysis. Combining qualitative and quantitative data can provide a more complete picture of the user experience.



---


Examples of coding schemas


When setting up a code schema for usability tests on software, the codes can be broadly categorized into four main sections:


1. Task Performance

2. User Behaviors

3. User Emotions

4. System Usability

Let's dive into each category with some detailed examples of the possible codes you might use.


1. Task Performance


This category involves codes related to how well a participant performs the tasks they're given.


Some examples:

 Task Completion - Whether the participant was able to complete the task or not.

 Task Success - Levels of success for completed tasks (e.g., Fully successful, partially successful, not successful).

 Task Duration - How long the participant took to complete the task.

 Task Sequence - The path or steps the participant took to complete the task.

 Error Frequency - Number of errors the participant made during the task.

 Help Seeking - Instances when the participant sought help or hints.


The "Task Performance" category is a critical one in usability testing, as it directly pertains to how well users are able to complete the tasks they've been assigned within the software or system.


Here's a bit more detail on each sub-category within Task Performance:


1. Task Completion: This code tracks whether or not the participant was able to complete the task they were assigned. Typically, this might be coded as a binary outcome (e.g., "completed" or "not completed"), but it could also include a "partial completion" code for situations where the participant only partially completed the task.

2. Task Success: This is a more detailed look at how well the participant was able to complete the task. Even if a task is completed, it might not have been done correctly or efficiently. For example, "fully successful" could mean the task was completed correctly on the first attempt, "partially successful" could mean the task was completed but with some errors or inefficiencies, and "not successful" could mean the task was completed incorrectly.

3. Task Duration: This is a measure of how long it took the participant to complete the task. This is usually measured from the time the task is introduced to the time it is completed. Task duration can provide insights into the complexity of tasks or indicate potential usability issues that may be slowing users down.

4. Task Sequence: This looks at the specific steps or path the participant took to complete the task. This can be especially important when looking at software usability, as there may be more efficient paths that users are not taking. It might be coded as "optimal path" (if the user took the most efficient possible route), "inefficient path" (if the user took extra unnecessary steps), or "incorrect path" (if the user went in the wrong direction entirely).

5. Error Frequency: This counts the number of errors made by a participant during a task. An "error" could be anything from a misclick to a misunderstanding of instructions. Noting the frequency and types of errors can give you insight into common mistakes and areas of the system that may be confusing or misleading to users.

6. Help Seeking: This code tracks instances where the participant sought help or hints to complete a task. This could be referring to a help guide, using a "hint" feature, or asking the moderator for assistance. Regular help seeking might indicate that the task or system is not intuitive enough.


As you're coding and analyzing this data, you'll likely start to see patterns emerging that can give you valuable insights into how users interact with the system and where they might be having trouble. This can then inform design decisions to improve the system's usability.
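These task-performance codes lend themselves to simple quantitative summaries. A minimal sketch, using hypothetical per-participant results:

```python
# Hypothetical per-participant results for one task.
results = [
    {"participant": "P1", "completed": True,  "duration_s": 62,  "errors": 1},
    {"participant": "P2", "completed": True,  "duration_s": 45,  "errors": 0},
    {"participant": "P3", "completed": False, "duration_s": 180, "errors": 4},
    {"participant": "P4", "completed": True,  "duration_s": 71,  "errors": 2},
]

completed = [r for r in results if r["completed"]]
completion_rate = len(completed) / len(results)

# Mean duration is usually reported for successful attempts only, since
# failed attempts often end at an arbitrary cut-off time.
mean_duration = sum(r["duration_s"] for r in completed) / len(completed)
total_errors = sum(r["errors"] for r in results)

print(f"completion rate: {completion_rate:.0%}")
print(f"mean duration (successes): {mean_duration:.1f}s")
print(f"total errors: {total_errors}")
```

These numbers are most useful when paired with the qualitative codes, e.g. a low completion rate explained by the "incorrect path" and "help seeking" excerpts behind it.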


2. User Behaviors


This category involves coding the various behaviors and actions of users as they interact with the system. Some examples:


 Navigation - User's actions related to moving around within the system (e.g., Backtracking, looping, smooth progression).


 User Interaction - Specific interactions with the system (e.g., Clicking, scrolling, typing).

 Shortcut Use - Whether the user used shortcuts and if so, which ones.

 Problem Discovery - Instances when the user identified a problem or issue.

 Workarounds - Instances where the user found a workaround to solve a problem or complete a task.

User Behaviors is a broad category that looks at how users interact with the software. It includes physical interactions with the interface, decision-making processes, strategies for accomplishing tasks, and any other behaviors observed during testing. Here are some possible codes:


1. Navigation: This refers to how users move within the software. This could include:

    Linear Navigation: Moving through the software in the intended sequence.

    Non-linear Navigation: Skipping around, rather than following a set path.

    Backtracking: Returning to a previous point in the software.

    Looping: Repeatedly returning to the same point in the software.

    Lost: Unable to find the correct path and seeming unsure about where to go next.


2. User Interaction: This covers physical interactions with the software. For example:

    Clicking: Using the mouse to select or interact with elements in the software.

    Scrolling: Moving up and down or side to side within a page or area of the software.

    Typing: Inputting information via keyboard.

    Dragging/Dropping: Moving items within the software.

    Zooming: Increasing or decreasing the size of the viewable area.


3. Shortcut Use: If your software includes keyboard shortcuts or other ways to accomplish tasks more quickly, you might have codes like:

    Shortcut Used: Successfully using a shortcut.

    Shortcut Attempted: Trying but failing to use a shortcut.

    Shortcut Ignored: Not using a shortcut, even when it would have been beneficial.


4. Problem Discovery: Users often discover problems or bugs during usability testing. This could include:

    Problem Identified: User explicitly identifies a problem.

    Problem Encountered: User experiences a problem but doesn't necessarily articulate it.

    Workaround Used: User finds a way around a problem, either by using a feature in an unintended way or through some other creative solution.


5. Strategy Use: Users often develop strategies for using software, especially when tasks are complex or the interface is challenging. This could include:

    Planning: User takes time to think about how they'll approach a task before starting it.

    Trial and Error: User tries different actions to see what works, without a clear plan.

    Consistent Strategy: User applies the same strategy across multiple tasks.

    Changing Strategy: User switches strategies in the middle of a task.


Remember, these are just examples. The actual behaviors you observe may vary depending on your specific software and tasks. As you review your usability test data, you may discover new behaviors that you hadn't anticipated and need to add to your code schema.


3. User Emotions


This category involves coding the emotional reactions of users. These codes are often derived from users' comments, facial expressions, tone of voice, etc. Some examples:

 Frustration - Instances when the user expressed or exhibited frustration.

 Confusion - Instances when the user appeared or stated they were confused.

 Satisfaction - Instances when the user expressed satisfaction or delight.

 Disappointment - Instances when the user expressed disappointment.

 Surprise - Instances when the user expressed surprise (positive or negative).

When coding user emotions, it's essential to consider the wide range of emotional responses users might have while interacting with a product. This requires attention to non-verbal cues such as tone of voice and facial expressions, as well as explicit comments from users.


1. Frustration: Frustration often emerges when users encounter roadblocks or when the system does not respond as expected. You might note raised voices, signs of agitation, or explicit comments such as "This is so annoying!" or "Why isn't this working?"


2. Confusion: Signs of confusion might include long pauses, furrowed brows, or questions like "What am I supposed to do here?" or "Where am I supposed to go next?" Users might also express confusion indirectly, by making incorrect assumptions about how the system works.


3. Satisfaction: Signs of satisfaction might include smiles, relaxed postures, or positive comments like "That was easy!" or "I really like how this works." This code could be especially useful for identifying features of your product that users find particularly enjoyable or helpful.


4. Disappointment: Disappointment can occur when the system fails to meet users' expectations. Users might sigh, look disappointed, or say things like "I thought this would be better" or "This isn't what I was expecting."


5. Surprise: This could be either positive (e.g., delight) or negative (e.g., shock). Look for widened eyes, raised eyebrows, or comments like "Wow, I wasn't expecting that!"


6. Anxiety: Users might feel anxious if they're uncertain about how to use the system or afraid of making mistakes. Signs of anxiety could include nervous laughter, fidgeting, or comments like "I'm afraid I'm going to mess this up."


7. Boredom: If a task is monotonous or the user is not engaged, they might show signs of boredom. Look for signs like yawning, a lack of focus, or comments like "This is kind of boring."


8. Relief: After completing a difficult task or overcoming a challenge, users might express relief. They might sigh, relax their posture, or say something like "I'm glad that's over."


Coding user emotions can be a bit more subjective than coding other types of data, as it often relies on interpretation of non-verbal cues and indirect comments. As with any coding, it's essential to maintain consistency and to clearly define each code in your codebook. You might also consider having multiple people code the data independently and then comparing their codes to ensure reliability.
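That comparison between independent coders is often quantified with percent agreement and Cohen's kappa, which corrects for agreement expected by chance. A minimal sketch, assuming two coders who each assigned one hypothetical code per excerpt:

```python
from collections import Counter

# Hypothetical: two coders independently assigned one code per excerpt.
coder_a = ["frustration", "confusion", "confusion", "satisfaction",
           "frustration", "confusion", "satisfaction", "frustration"]
coder_b = ["frustration", "confusion", "frustration", "satisfaction",
           "frustration", "confusion", "satisfaction", "satisfaction"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected chance agreement: product of each coder's marginal proportions,
# summed over all codes (the standard Cohen's kappa formulation).
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(f"percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```

Kappa values above roughly 0.6 are conventionally read as substantial agreement; where agreement is low, coders typically discuss the disagreements and tighten the codebook definitions.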


4. System Usability


This category involves codes related to the usability of the system itself. Some examples:


 Usability Issues - Specific issues with the system's usability (e.g., Complex navigation, inconsistent terminology, poor feedback).

 Design Issues - Issues related to the visual design of the system (e.g., Cluttered layout, unclear icons, inappropriate color scheme).

 Functionality Issues - Problems related to the system's functionality (e.g., Broken features, missing features).

 Content Issues - Issues related to the content within the system (e.g., Incorrect information, unclear instructions, irrelevant content).

 Accessibility Issues - Problems related to accessibility (e.g., Poor color contrast, small clickable targets, inaccessible for screen readers).


Here's more detail on each of the sub-categories within the "System Usability" category:


1. Usability Issues: These codes refer to problems related to the general usability of the system. For example:

    Complex Navigation: Instances where the user has difficulty finding their way around the system due to complex or unclear navigational structures.

    Inconsistent Terminology: Instances where inconsistent terminology leads to user confusion or misunderstanding.

    Poor Feedback: Instances where the system fails to provide adequate feedback to the user's actions, leaving them unsure of what to do next.


2. Design Issues: These codes refer to issues related to the visual design and layout of the system. For example:

    Cluttered Layout: Instances where the layout of the screen is overly complex, leading to cognitive overload or difficulty focusing on key elements.

    Unclear Icons: Instances where icons are not clear or intuitive, causing users to misunderstand their function.

    Inappropriate Color Scheme: Instances where the color scheme hinders usability, such as poor color contrast making text hard to read.


3. Functionality Issues: These codes refer to problems related to the system's features and functionality. For example:

    Broken Features: Instances where a feature doesn't work as expected or at all.

    Missing Features: Instances where the user expects a feature that is not present in the system.

    Unexpected Behavior: Instances where the system behaves in a way that the user doesn't expect, leading to confusion or errors.


4. Content Issues: These codes refer to issues related to the content of the system. For example:

    Incorrect Information: Instances where the information provided by the system is incorrect, misleading, or out of date.

    Unclear Instructions: Instances where instructions or guidance provided by the system are unclear or ambiguous.

    Irrelevant Content: Instances where the content presented is not relevant or useful to the user, causing confusion or frustration.


5. Accessibility Issues: These codes refer to problems related to the system's accessibility for people with disabilities. For example:

    Poor Color Contrast: Instances where poor color contrast makes content hard to see for users with visual impairments.

    Small Clickable Targets: Instances where clickable elements are too small, making them difficult to interact with for users with motor impairments.

    Inaccessible for Screen Readers: Instances where content is not accessible to screen readers, making it unusable for users with visual impairments.

Remember that these are just examples, and your actual codes might be different based on the specifics of your system and the goals of your usability test. The important thing is to capture data that will help you identify and understand the usability issues with your system, so you can improve it in future iterations.