Taxonomy is the study of the general principles of scientific classification, or the orderly classification of things according to their presumed relationships. In other words, taxonomy is the process of describing how things are related by putting them in groups. So what does taxonomy have to do with user experience? For plenty of organizations, it’s everything.
Let’s imagine you’re planning a little get-together: you’re having some friends over for an arts & crafts night where you’ll be doing some knitting, sewing, quilting, and so on. You hop on your computer and go to the website of your favorite local arts & crafts store to shop for supplies. The first thing you need is yarn; you aren’t too sure what kind, but you know you need some yarn. If you look at the example below, you clearly have a decision to make. Within which of these menu categories would you expect to find yarn? Is it ‘Crafts & Hobbies’? Is it ‘Knitting & Crochet’? Is it ‘Sewing & Fabric’? Or is it something else entirely?
While some items on a website are easy to find, something like yarn may be a bit more difficult. It’s therefore up to the company to devote the time and resources necessary to improve the findability of every product on its website.
Nothing frustrates web users more than poor navigation and confusing content structure. Per recent data from Google’s Consumer Barometer, the majority of consumers are looking for something specific when they search a website [Figure 1]. Additionally, while price is often the most important purchase influencer [Figure 2], consumers can’t see the price of a product they can’t find on the website. The principal concern for businesses is that if consumers come to their website to find something and repeatedly fail to find it, they will simply leave and go somewhere else.
Source: Consumer Barometer with Google – The Connected Consumer Survey 2014 / 2015
So how can navigation and content structure problems be avoided? Well, the best and most fundamental tactic used to improve site navigation and content structure is to conduct a tree test of your site content. Tree testing is a usability technique for evaluating the findability of products and information on a website.
Take the aforementioned arts & crafts website example – you have a website that is organized into a hierarchy (a “tree”) of primary categories and within each of those are sub-categories. A well-organized website is one that makes it easy for the user to navigate through the categories and any sub-categories that follow in order to find what they are looking for. The tree shown in this example below would look something like: Beads & Jewelry > Beads > Strung Beads
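That category hierarchy can be modeled as a simple tree. Here is a minimal Python sketch of the idea, with illustrative category names borrowed from the arts & crafts example (the structure is hypothetical, not the actual site’s):

```python
# A site's category hierarchy modeled as a nested dict (a "tree").
# Category names are illustrative, based on the arts & crafts example.
TREE = {
    "Beads & Jewelry": {
        "Beads": {"Strung Beads": {}, "Loose Beads": {}},
        "Jewelry Making": {},
    },
    "Knitting & Crochet": {"Yarn": {}, "Needles & Hooks": {}},
}

def find_path(tree, target, path=()):
    """Depth-first search for `target`; returns the click path or None."""
    for label, children in tree.items():
        current = path + (label,)
        if label == target:
            return list(current)
        found = find_path(children, target, current)
        if found:
            return found
    return None

print(find_path(TREE, "Strung Beads"))
# → ['Beads & Jewelry', 'Beads', 'Strung Beads']
```

A tree test essentially asks participants to perform this search by hand: the path a user actually clicks can then be compared against the intended path.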
How to Conduct a Tree Test
A typical tree test involves several tasks for study participants to complete. To give you a quick sense of what a tree test looks like, I’ve built out an actual task scenario using the aforementioned arts & crafts website. Before we look at the example, though, here’s some general information explaining how tree tests are set up.
Users are shown a welcome message thanking them for taking the time to participate in the study.
Users are often told the expected length of the study, which is typically 15-20 minutes at most.
Lastly, it’s good practice to let users know that their answers are very valuable in helping to organize the content on your website and that there are no right or wrong answers, as it’s the content being tested, not their ability.
Users are presented with a list of links and they are asked to find a certain item.
Users click through the links in the tree until they reach a point where they feel confident they would find the item they were asked to locate.
Users are informed that if they want to go back for any reason, they can simply click on the link above where they currently are in the process.
Thank You Message
After users complete the task(s), they should be presented with a thank-you message expressing appreciation for their participation and letting them know they’re finished and it’s safe to close the browser.
Tree Test Example
As you can see with the above example, users are given a task to find a specific item and then they are shown a set of options to choose from. Within each of those initial options is a set of sub-options and within those sub-options are more sub-options. Depending on the item they are being asked to find, and also depending on how deep the content structure of the site is built out, the number of sub-options and categories will vary.
So, once you’re finished collecting the data from your tree test study, how do you analyze the results? Well, it’s quite simple and it’s fascinating how much you can learn. You would be able to observe and analyze key data points such as:
Number of Direct Successes – The number of participants who located the item on their first try, without going back at any point.
Number of Indirect Successes – The number of participants who successfully located the item but navigated back at some point before ultimately finding the correct path.
Number of Direct Fails – The number of participants who went down the wrong path and selected an option other than the one where the item was located.
Number of Indirect Fails – The number of participants who navigated back at some point and ultimately selected an option other than the one where the item was located.
Time on Task Metrics
You can obtain the mean (average), median, and mode of the time it took participants to complete each task in the study.
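The outcome categories and time-on-task metrics above can be computed in a few lines. Here is a minimal Python sketch using hypothetical result records (the field names are illustrative, not any tool’s actual export format):

```python
from statistics import mean, median, mode

# Hypothetical tree-test results: one record per participant for a single
# task. `correct` = ended on the right node; `went_back` = used the back
# link at least once; `seconds` = time on task.
results = [
    {"correct": True,  "went_back": False, "seconds": 24},
    {"correct": True,  "went_back": True,  "seconds": 51},
    {"correct": False, "went_back": False, "seconds": 33},
    {"correct": True,  "went_back": False, "seconds": 24},
    {"correct": False, "went_back": True,  "seconds": 60},
]

def outcome(r):
    """Classify a record into one of the four tree-test outcomes."""
    if r["correct"]:
        return "direct success" if not r["went_back"] else "indirect success"
    return "direct fail" if not r["went_back"] else "indirect fail"

counts = {}
for r in results:
    counts[outcome(r)] = counts.get(outcome(r), 0) + 1

times = [r["seconds"] for r in results]
print(counts)
print(mean(times), median(times), mode(times))
```

Most tree-testing tools report these numbers for you, but seeing the classification spelled out makes it clear that the four outcomes are just the cross of correct/incorrect with backtracked/didn’t backtrack.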
When building your tree test study, you can also add a question after specific tasks asking users for qualitative feedback, such as why they selected the option they chose, or whether they have any suggestions for making the overall process easier.
Now that you’re equipped with some knowledge on tree testing and have some fresh examples to reference, take a look at your website and ask yourself if your site’s content is organized in a way that it’s providing the best possible experience for your users. Provide your customers with a pleasant user experience, help them find what they’re looking for quickly and easily, and you’ll be on your way to reaping countless benefits.
This blog post is an introduction to Participatory Design (PD) and the methodologies that encompass PD. This is the first in a series of PD themed blog posts, so stay tuned for the next installment!
Participatory Design, User-Centered Design, and Human-Centered Design all refer to methods that involve users and stakeholders in the iterative design process, in hopes of meeting the wants, needs, and affordances of end users. Participatory Design can be implemented in a variety of ways depending on what type of information the team is trying to capture; from design requirements to usability, the choice is yours.
Participatory Design was initially used in the design and development of computer applications and systems in Scandinavia and was referred to as Cooperative Design (Bødker et al., 2004). As the theory moved westward to the US, the term Participatory replaced Cooperative due to the nature of the first applications in business and the need to stress the vested interest of the participants.
The primary goal of PD is to help provide greater consideration and understanding of the needs and wants of system users. Participatory Design can be used to carefully integrate the needs, perspectives, and contexts of stakeholders, therefore, increasing the likelihood of diffusion, adoption, and impact of the resulting user-centered system.
Consider, for example, the design of a new mobile yellow pages application created to target certain populations and connect users with providers. Wouldn’t it make sense to involve the end users of this application from the outset of the project? Absolutely! Again, PD can be implemented in a variety of forms; for this example, let’s assume we begin by asking our end users to participate in a design needs session, where the design team meets with end users and fleshes out the necessary design requirements for the mobile app. From the beginning of the project, the users will have their voices heard and incorporated into the design of the final system.
Iterative usability testing is paramount to the success of any system, and this is another point where users can help the design team shape the usability of the system. By conducting iterative usability tests, perhaps as short weekly lean UX sprints, the design team and engineers can quickly test and iterate on the design of a new system, and be agile in the process.
IDEO has put together its own version of a ‘Human Centered Design Toolkit’. Check it out. Lots of cool techniques, tips, and more to get yourself in the HCD head space.
Remember: by incorporating your users’ feedback throughout the creation of your system, you are moving toward a better-designed and more widely adopted system for all stakeholders.
In the technologically advanced and incredibly mobilized world we live in today, there’s constant pressure on organizations and businesses to provide customers with a great mobile user experience. According to Google’s Consumer Barometer and the Connected Consumer Survey (2014 / 2015), 57% of the population currently uses a smartphone. Moreover, smartphones play an integral role throughout various phases of product research. Simply put, people are using their smartphones to read about your business and your products, making it imperative that your mobile site be very user-friendly.
Source: Consumer Barometer with Google – The Connected Consumer Survey 2014 / 2015
So, how do businesses ensure that the mobile experience they’re providing their customers is a great one? A good place to start is to conduct a mobile usability expert review.
At its core, a usability expert review is an inspection of your site conducted by a usability specialist in order to identify potential usability issues. It is one of the most in-demand, cost-effective usability techniques. Expert reviews are a great way to identify glaring usability blunders; they are quick, inexpensive, and provide an immediate sanity check on your user experience.
I recently conducted a mobile expert review of three auto manufacturer mobile websites (MiniUSA, SmartUSA, and Fiat) in order to assess their overall user experience and ease of use. I used a handful of usability metrics and assigned scores to each of them in order to determine which mobile site was the most user-friendly. Here are some of the top-level findings and results from my review.
General: Mobile-Centric Usability Concerns – Is the site optimized for mobile?
Home / Start Page – Are key tasks easy to locate on the home / start page?
Navigation – Are there convenient and obvious ways to move between pages and sections and is it easy to return to the homepage?
Search – Is it easy to locate the search box? Can you easily filter/refine search results?
Task Criteria – Is the info on the site presented in a simple, natural and logical order?
The search icon was quick and intuitive to locate on the MiniUSA site – Quick access to search is a must these days. The MiniUSA site was the clear winner in this respect, as SmartUSA and Fiat failed to provide a search feature on their homepages.
Uncommon, small CTAs were problematic on the SmartUSA site – Several CTAs, such as ‘meet me’, ‘back to menu’, and ‘find your smart’, proved quite confusing, as it’s not clear where users would be taken if they tapped them. The touch targets were also very small and difficult to tap accurately.
The homepage on the Fiat site provided minimal direction – It was not intuitive where to begin when looking to buy or lease an automobile. Additionally, while the burger menu was easy to see and access, its options were far too vague for users to know where to go next to continue their search.
Now that I’ve shared a few examples from my own expert review, here are some tips for conducting one yourself. While a full usability test of your mobile site is the ideal route, a quick expert review is still a great start!
Tips for Conducting a Mobile Expert Review
Identify the critical goals and tasks of your mobile site – It is imperative that you identify the primary goal(s) of your site so that you know which usability issues are wreaking the most havoc on your bottom line. For example, if you are in the clothing business and have seen a recent decline in online t-shirt sales, a crippling usability issue may be preventing users from completing the checkout process, hence the decline in sales. In the e-commerce world, shopping cart abandonment is an extremely widespread problem. By conducting an expert review, you’ll be able to uncover the specific errors at major touch points within the checkout process that are impeding your customers from completing their purchase.
Define your typical users via a customer persona – The majority of websites, mobile sites, and applications have typical users who share a relatively similar set of skills and expertise when it comes to critical tasks. It’s your organization’s job to identify a “persona”: a fictional representation of your typical user or customer. Constructing and modifying your mobile site based on your specific customer personas will allow you to tailor site attributes such as terminology, information architecture, and navigation schema precisely to the customers who will interact with your site most often.
Don’t just look at your site, go use it! – This is where the hands-on review takes place. Since you’ve already identified the critical goals and tasks of your site, as well as your customer personas, you can now put yourself in your customers’ shoes and walk through those critical tasks yourself, one at a time, as if you were the customer – all the way down to completing the t-shirt purchase (to use the aforementioned clothing business example).
Now that you’re equipped with some tips for how to conduct a great usability expert review, you can grab your smartphone and put this recently acquired knowledge to work. Your managers, business owners, stakeholders, and most importantly your customers, will surely thank you!
Are you a User Experience professional who uses online survey tools to deliver insights? If so, you’re in luck! For the last four years, I’ve been working extensively with various online survey tools to deliver everything from simple one-off consumer surveys to large-scale multinational competitive benchmarking tests. Throughout that time, I’ve had endless opportunities to experiment with different design methods and survey tools – and to make mistakes and learn from them – so that you don’t have to. In this article, I’d like to share some of the pitfalls in designing and programming these studies so that you can avoid them in your next survey. Proper survey design can save you countless hours of frustration when it comes time to analyze the data and deliver your report.
Not Scripting to Your Reporting Tool
Sometimes when you script a survey, you want branching pathways for users who are successful or not, so you create multiple paths. For instance, in UserZoom, you can have a “Success Questionnaire” and an “Error Questionnaire” depending on a particular question. If you only want to look at the success group and the failure group individually, that’s a perfectly sound approach. However, if you want to look at any of those answers cumulatively in the results, you’ll now force yourself to manually compile answers to the same question from those two questionnaires. If you do this across multiple tasks and multiple studies, suddenly you’ll find yourself doing busywork that could have been avoided, had you taken some time to assess how these results would look in the reporting tool. If you’re unsure, run a preview or soft launch with yourself as the participant, and see how the data looks. This could save you hours of time when you get to the analysis phase, trust me!
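One way to sidestep that manual compilation is to normalize both branches into a single table at analysis time. Here is a minimal Python sketch of the idea, assuming your exported rows can be labeled by branch (all field names here are hypothetical, not UserZoom’s actual export format):

```python
# Hypothetical exports from the two branched questionnaires; each row is
# one participant's answer to the same underlying question ("q_ease").
success_rows = [{"pid": 1, "q_ease": 6}, {"pid": 2, "q_ease": 5}]
error_rows   = [{"pid": 3, "q_ease": 2}]

# Merge both branches into one table, keeping the branch as a column
# so answers can be analyzed cumulatively or per group.
combined = (
    [dict(r, branch="success") for r in success_rows]
    + [dict(r, branch="error") for r in error_rows]
)

all_ease = [r["q_ease"] for r in combined]
print(len(combined), sum(all_ease) / len(all_ease))
```

Of course, the cleaner fix is the one described above: script the survey so a shared question isn’t duplicated across branches in the first place, and this post-hoc merge never becomes necessary.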
Not Making the Most of a Tool’s Capabilities
Knowledge of the survey tool you’re using is extremely valuable when scripting. For example, many survey tools allow you to tag your questions with an alphanumeric code, allowing easier identification when you export the results. Taking a moment to label your questions with clear, concise tags will make your analysis phase easier and less prone to errors.
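As an illustration, a consistent tag scheme can also be parsed back automatically during analysis. Here is a minimal Python sketch, assuming a hypothetical `T<task>_Q<question>_<topic>` naming convention (an invented example, not any specific tool’s format):

```python
import re

# Hypothetical tag convention: "T2_Q3_ease" = task 2, question 3, topic "ease".
TAG = re.compile(r"^T(?P<task>\d+)_Q(?P<q>\d+)_(?P<topic>\w+)$")

def parse_tag(tag):
    """Split a question tag into (task number, question number, topic)."""
    m = TAG.match(tag)
    if not m:
        raise ValueError(f"untagged column: {tag}")
    return int(m.group("task")), int(m.group("q")), m.group("topic")

print(parse_tag("T2_Q3_ease"))  # → (2, 3, 'ease')
```

With tags like these, exported columns can be grouped by task or by topic programmatically instead of being matched up by eye.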
Script Once, QA Twice (or more!)
Okay, the old adage is “measure twice, cut once,” but you get the picture. It’s important to lock everything down before you go into field. If you make sure you have everything you need before gathering results, you avoid common pitfalls like leaving out a key question, or any number of logic issues that could tank a survey. Survey software typically makes it difficult to change a script once the study has launched, so you could end up throwing away data from days of fielding. That’s why I recommend at least two QA passes: one from the panelist or participant perspective, and one from the client perspective. Ideally, the QA will be done by another member of your team, not the person who wrote the script; experienced survey designers know it’s easy to develop blind spots for your own script. The first pass should take the participant’s point of view, making sure the instructions make sense to someone with no knowledge of what is being tested. The second pass should verify logic and setup and, more importantly, map the study design back to your goals or your client’s. This added verification can prevent costly mistakes and lost time when the study goes live.
The Kitchen Sink
Finally, the kitchen sink. It’s tempting to shove everything you can into a survey – especially when pressure mounts from your client and stakeholders – but remind them that the most elegant surveys avoid unnecessary questions and focus on what is most important. It’s of paramount importance to minimize survey fatigue, a real problem that lowers response rates and quality. A good rule of thumb for the longest surveys is 20-25 minutes maximum, and even that is stretching it. Even at 20 minutes, there is a large drop-off in comment quality near the end of the survey, and you may end up throwing out results that would have been valid in a 15-20 minute survey. Ask yourself, or your client: “Do we want 50 questions of middling-to-poor data, or 35 questions with high-quality responses?”
That’s all for now. I hope you’ve found this educational, or at the very least, entertaining! Subscribe to our newsletter for monthly articles and updates.
Last week a gift arrived at our doorstep. Wooden martini glasses. Impressed, excited (to drink a martini from a beautiful wooden glass!), and a tad confused we opened the envelope. It read: “Happy 5th Anniversary, Key Lime Interactive. May we someday have an opportunity to send you a gift of gold.”
One of our very first customers recalled the fact that 5 years ago this month Key Lime Interactive was born and in celebration sent us a gift for our traditional “Wooden” Anniversary.
5 years. Half a decade. How it flew! We’ve seen significant growth in revenue, in clients and industries served, in team members (and in their beautiful growing families), in office space, in geographical reach, in available tools, and in demanded products. It’s been quite a journey, and we’re still looking forward so intently that we nearly missed our own birthday!
Revenue: 10x what it was in 2009.
Clients: We once had a small list of a few Fortune 5000 folks; today we’ve worked with 56 different clients (and this doesn’t count the wonderful clients we interact with via our agency relationships – thanks, agency friends!), 40% of which are Fortune 500!
10% of our clients have integrated us into their organization and hold retainers with KLI; we just adore our clients.
We’ve opened an official office in NYC, based on the demand of said client base and the KLI talent we have in the Northeast.
We’ve begun hiring West Coast talent to cater to our clients in the Bay Area; that’s been working out just perfectly. We’re continuing to hire – interested?
We joined the UXFellows and are pleased that 1/3 of our work supports global research needs.
Which insurance provider offers the most comprehensive mobile capabilities? How do they compare to their competition? Which provider shows the best understanding of how to deliver a useful and easy mobile experience?
KLI compared State Farm, Allstate, GEICO and Progressive and their respective mobile sites and applications in a mobile competitive review study. Each property (mobile site, iPhone App, Android App) was compared against a list of standard auto insurance tasks to assess their capabilities and features.
Join us at 2pm EST on May 19th to hear the detailed report on how each company ranks and who produces the best mobile experience to date. Register here.