Home & Renters Insurance Competitive Analysis


Home & Renters Insurance Mobile UX Competitive Analysis

(Published biannually, in March and September)

This mobile UX competitive benchmark report goes far beyond a basic heuristic evaluation or expert review. Thanks to our innovation arm, Key Lime Labs, these insights are now available in a comprehensive platform designed for business intelligence leaders, market analysts, UX/CX strategists, and product managers who want on-demand access to the competition’s mobile customer experience.

Home & Renters Insurance Competitive Index Overview

Key Lime Interactive’s (KLI) Home/Renter’s Insurance Mobile Competitive Index Report reviews eight of the largest home and renter’s insurance companies in the U.S.:

  • Allstate
  • Farmers Insurance
  • Liberty Mutual
  • Nationwide
  • State Farm
  • The Hartford
  • Travelers
  • USAA

KLI’s methodology is unique because it incorporates consumer preferences and goes far beyond a basic heuristic evaluation or expert review. An important part of our analysis is a 500-person survey of smartphone owners, who are asked to identify the features and capabilities they consider critical for a successful experience with the mobile site or app offered by their home/renter’s insurance company.

KLI’s intent in creating this third-party syndicated report is to:

    1. Provide consumer-driven data, based on the results of a current consumer survey, to help guide insurance companies as they prioritize which features to implement.
    2. Summarize how the insurance companies differentiate themselves from their competitors through the capabilities and features that they offer.

View Competitive Benchmark Overview: Home/Renters Insurance Competitive Overview

Mobile UX Methodology

To create an overall score, we combine a capabilities assessment with user feature importance ratings. The capabilities assessment is created by examining the feature coverage of insurance sites and applications. User ratings are determined by a consumer survey and card sort.

Our review of the primary mobile properties includes a full verification of the insurance companies’ capabilities. The unmodified score represents feature coverage, that is, the company’s offerings by category. This is a binary evaluation reflecting whether each criterion is met.

Additional information about KLI’s methodology is provided in the full report. Request the Detailed Report.

Incorporating User Feedback

KLI conducted a consumer survey and card sort (n=500) to gather feedback about how customers prioritized features when using a home/renter’s insurance company’s mobile property. Individual feature scores were then weighted by their value to customers. The goal is to provide a metric of relative importance so that the highest-scoring company is also the one providing customers’ desired features.
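The weighting described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical feature names and weights, not KLI’s actual scoring model: each capability is a binary has-it-or-not check, and the score is the consumer-importance-weighted share of capabilities the company offers.

```python
# Minimal sketch of a weighted capability score (hypothetical data,
# not KLI's actual scoring model). Each feature is a binary check;
# weights represent relative importance from a consumer survey.

FEATURE_WEIGHTS = {
    "secure_login": 0.9,
    "bill_payment": 0.8,
    "claims": 0.7,
    "locate_agent": 0.3,
}

def weighted_score(capabilities: dict) -> float:
    """Return a 0-100 score: binary feature coverage weighted by
    how much consumers value each feature."""
    total = sum(FEATURE_WEIGHTS.values())
    earned = sum(w for f, w in FEATURE_WEIGHTS.items() if capabilities.get(f))
    return round(100 * earned / total, 1)

# A company offering every feature except "locate an agent":
print(weighted_score({
    "secure_login": True,
    "bill_payment": True,
    "claims": True,
    "locate_agent": False,
}))  # prints 88.9
```

Because the weights come from the survey, a company that skips a low-importance feature (here, agent lookup) loses far fewer points than one missing a high-importance feature such as secure login.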

 

Summary of Capabilities & Features within UX Analysis

  • Secure Log In Process
  • Access Policy Info
  • Account Settings/Management
  • Bill Payment
  • Get Quote
  • Claims
  • Alerts
  • Help/Self Service
  • Locate an Agent
  • App Interface
  • Customer Support
  • Social Media

View More Mobile UX Reports:

Mobile Banking Benchmark Report

The report features key areas that drive user satisfaction and quickly identifies areas of improvement to satisfy existing and prospective customers. KLI’s research experts highlight opportunities based on a rigorous, independent evaluation that ultimately identifies trends and opportunities that move the entire industry forward.

View Details

Credit Card

These insights summarize new, successful, and well-received features across the finance industry.

View Now

Auto Insurance

The evaluation includes a general discussion about what these companies are doing for their existing and prospective customers via mobile-optimized websites and apps (iOS and Android). Data is captured from actual customers and includes screenshots of features from behind the login screens of each provider.

Learn More

Cruise Competitive Index

Each year the benchmark will examine sites’ top problem areas from the previous year and any site redesigns/changes made subsequently, to answer the question: Have the sites made changes that have improved or worsened the user experience?

View Insights

Brand Ratings

Curious who the top-ranking brands in your vertical are? Here’s a sneak peek of the top 3, but click below to request a full UX competitive analysis:

  1. State Farm (Mobile Web Score: 86%)
  2. USAA (Mobile Web Score: 78%)
  3. Allstate (Mobile Web Score: 63%)

Unbiased Third-party Mobile UX Research

Get the competitive intelligence you need to stay ahead in your industry. Competitive Insights is an intuitive competitive intelligence platform that incorporates the features and capabilities consumers consider critical for a successful experience, including:

 

  • A complete listing of overall rankings
  • A detailed scoring breakdown of all capabilities and features reviewed
  • Screenshots from behind the login screens of all competitors’ sites & apps
  • A comparative list of current mobile capabilities available on each mobile app
  • Detailed descriptions of how users can complete tasks on the mobile device
  • User feedback collected regarding feature preferences
  • Identification of key success factors, best-in-class features, areas for improvement, and why some companies are lagging behind

Mobile UX Competitive Insights Subscription

Best-in-class Mobile CX Comparisons

Gain insights into typical behaviors and context of use for your product, with year-over-year brand performance ratings across mobile web and mobile apps on iOS and Android.

Play-by-Play Screenshots

Get full access to a library of screenshots of your competitors’ latest mobile designs, indexed by feature, brand, and release date.

Dynamic Change Tracking Tables

See when top brands in your industry update, remove, or add mobile features.

Features Ranked By Consumers

View results from over 500 active consumers on the mobile features they value most in your industry.

Chart your Course for a Better Customer Experience Today

Meet with a KLI Team Member

 


"These rankings acknowledge our continued focus on providing both current and prospective State Farm customers with the same great experience, regardless of how they come to us"

Patty Gaumond / State Farm

“Our C-suite asked us to find an A-Team… Having worked with Key Lime at a previous employer, I knew that there was no better team than them for actionable insights and strategic design recommendations.”

VP of Digital Strategy & Innovation / Miami Heat

“…This methodology provides the details we need in order to focus our development plans in the coming year....”

VP of eCommerce / Norwegian Cruise Line


Survey Design Sins to Avoid

by Phil McGuinness

 


Are you a User Experience professional who uses online survey tools to deliver insights? If so, you’re in luck! For the last four years, I’ve been working extensively with various online survey tools to deliver everything from simple one-off consumer surveys to large-scale multinational competitive benchmarking tests. Throughout that time, I’ve had endless opportunities to experiment with different design methods and survey tools – and to make mistakes and learn from them – so that you don’t have to. In this article, I’d like to share some of the pitfalls in designing and programming these studies that you can avoid in your next survey. Proper survey design can save you countless hours of frustration when it comes time to analyze the data and deliver your report.

  1. Not Scripting to Your Reporting Tool

Sometimes when you script a survey, you want branching pathways for users who are successful or not, so you create multiple paths. For instance, in UserZoom, you can have a “Success Questionnaire” and an “Error Questionnaire” depending on a particular question. If you only want to look at the success group and the failure group individually, that’s a perfectly sound approach. However, if you want to look at any of those answers cumulatively in the results, you’ll now force yourself to manually compile answers to the same question from those two questionnaires. If you do this across multiple tasks and multiple studies, suddenly you’ll find yourself doing busywork that could have been avoided, had you taken some time to assess how these results would look in the reporting tool. If you’re unsure, run a preview or soft launch with yourself as the participant, and see how the data looks. This could save you hours of time when you get to the analysis phase, trust me!

 

  2. Not Making the Most of a Tool’s Capabilities

Knowledge of the survey tool you’re using is extremely valuable when scripting. For example, many survey tools allow you to tag your questions with an alphanumeric code, allowing easier identification when you export the results. Taking a moment to label your questions with clear, concise tags will make your analysis phase easier and less prone to errors.

 

  3. Script Once, QA Twice (or more!)

Okay, the old adage is measure twice, cut once, but you get the picture. It’s important to lock everything down before you go into field. If you make sure that you have everything you need before gathering results, you avoid common pitfalls like leaving out a key question, or any number of logic issues that could tank a survey. Survey software typically makes it difficult to change a script once the study has launched, so you could end up throwing away data from days of fielding. That’s why I recommend at least two QA passes: one from the panelist or participant perspective, and one from the client perspective. Ideally, these passes are done by another member of your team, not the person who wrote the script; experienced survey designers know that it’s easy to develop blind spots for your own script. The first pass should take the participant’s point of view, making sure the instructions make sense to someone with no knowledge of what is being tested. The second pass should verify logic and setup but, more importantly, map the study design back to your goals or your client’s. This added verification can prevent costly mistakes and lost time when the study goes live.

 


  4. The Kitchen Sink

Finally, the kitchen sink. It’s tempting to shove everything you can into a survey – especially when pressure mounts from your client and stakeholders – but remind them that the most elegant surveys avoid unnecessary questions and focus on what is most important. It’s of paramount importance to minimize survey fatigue, a real problem that lowers response rate and quality. A good rule of thumb for the longest surveys is a maximum length of 20-25 minutes, and even that is stretching it. Even at 20 minutes there is a large drop-off in comment quality near the end of the survey. You may end up throwing out results that would have been valid in a 15-20 minute survey. Ask yourself, or your client, “Do we want 50 questions of middling to poor data, or 35 questions with high-quality responses?”

 

That’s all for now. I hope you’ve found this educational, or at the very least, entertaining! Subscribe to our newsletter for monthly articles and updates.

 

New Director of Quantitative Research Leads Competitive Initiatives

KLI couldn’t be happier to welcome Dana Bishop to our team as our new Director of Quantitative Research. Dana has been working in the field of user research for 20 years and brings extensive experience with a variety of research methods. Above all, Dana has perfected the art and science of creating simple yet highly informative large-scale online user experience research studies. Her graceful orchestration of traditional scaled questions and directed tasks results in detailed feedback, thoughtful analysis, and compelling evidence that informs design for clients far and wide.
Prior to joining Key Lime Interactive, Dana was lead researcher and manager of Keynote Systems’ Competitive Research group. While at Keynote, Dana led longitudinal quantitative research studies across numerous verticals and global markets for companies such as Carnival, Expedia, Travelocity, Wells Fargo, U.S. Bank, Yahoo!, and State Farm Insurance. Dana began her career in the 1990s in San Francisco, where she spent three years at Charles Schwab & Co. conducting a nationwide field study and weekly in-lab sessions with customers, as well as running usability testing for edu-tainment software in school environments.
After just three short months as part of the KLI team, Dana’s expertise is in high demand! Custom studies are exceeding client expectations and, all the while, Dana and other Key Limers are preparing the following types of reports for incremental release:
KLI Competitive Research. Naturally, with the addition of Dana to the Key Lime team, we’ve both expanded and refined our competitive research. Dana is spearheading several existing and new reports that fall under the following categories:
Competitive Index:
Currently, our Auto Insurance Competitive Index and our Mobile Banking Competitive Index are widely used by nearly all top players in their respective industries. For this research, KLI runs a survey to deeply understand the perceptions, beliefs, needs, and desires of users on their mobile devices (both web and apps) in the context of a given industry, then indexes and compares capabilities across major players, ultimately ranking them and revealing strengths and opportunities for the industry and for individual companies. Inquire about the purchase of either of these reports, or suggest an index for your industry…
Competitive Benchmark Studies
Additionally, KLI publishes annual Cruise Competitive Benchmark results in June each year. This is a task-based assessment of the leading cruise industry websites by users (a mix of first-time and experienced users). The study analyzes the user experience in trying to learn about the cruise line, find a cruise of interest, and book online. It measures the user experience in terms of satisfaction, site reliability and performance, as well as NPS and likelihood to return and purchase. Dana’s keen understanding of what the cruise industry needs and pays attention to when executing sound design changes is part of what makes this benchmark study novel and desired. The study capitalizes on the value proposition offered by the various brands: Are they selling the ratio of cost to experience well to their digital consumers? Are they painting a clear picture that informs decisions and promotes action? At present, leaders in the industry are working with Dana to refine the June release to include exactly what they’ve been missing. Want to be involved in that conversation? Have ideas for a similar study in a different vertical? Learn more…
Custom Competitive Benchmark Studies
To take this one step farther and truly meet the demands of KLI clients, Dana is leading the development of Custom Competitive Benchmark studies for several clients in the retail, travel, medical, and financial industries. These studies are quite similar to the general Competitive Benchmark studies in that they are also task-based assessments, by users, of sites within a given industry. They also focus on which site(s) provide the best user experience, but differ in that they allow companies to custom-design aspects of the study along with KLI researchers. Companies can “customize” by selecting the competitors they are most interested in benchmarking themselves against, as well as having input on the tasks users complete and the timing of when the study fields. Need to benchmark yourself against competitors in your industry? Learn more…