In 2016, a European gas-station chain hired HappyOrNot, a small Finnish startup, to measure customer satisfaction at its hundred and fifty-plus outlets. One gas station rapidly emerged as the leader, and another as the distant laggard. But customer satisfaction can be influenced by factors unrelated to customer service, so, to check, the chain’s executives swapped the managers at the best and worst performers. Within a short time, the store at the top of the original list was at the bottom, the store at the bottom was at the top, and one of the managers was looking for work.
By the standards of traditional market research, HappyOrNot’s analysis was simplistic in the extreme. There were no comment cards, customer surveys, focus groups, or reports from incognito “mystery shoppers.” There was just crude data collected by customer-operated devices that looked almost like Fisher-Price toys: freestanding battery-powered terminals with four big push buttons—dark green and smiley, light green and less smiley, light red and sort of frowny, dark red and very frowny. As customers left a store, a small sign asked them to rate their experience by pressing one of the buttons (very happy, pretty happy, pretty unhappy, or very unhappy), and that was all.
What HappyOrNot’s gas-station data lacked in substance, though, they made up for in volume. A perennial challenge in polling is gathering responses from enough people to support meaningful conclusions. The challenge grows as the questions become more probing, since people who have the time and the inclination to fill out long, boring surveys aren’t necessarily representative customers. Even ratings on Amazon and on Walmart.com, which are visited by millions of people every day, are often based on so few responses that a single positive or negative review can affect customer purchases for months. In 2014, a study of more than a million online restaurant reviews, on sites including Foursquare, GrubHub, and TripAdvisor, found that the ratings were influenced by a number of “exogenous” factors, unrelated to food quality—among them menu prices (higher is better) and the weather on the day the reviews were written (worse is worse).
A single HappyOrNot terminal can register thousands of impressions in a day, from people who buy and people who don’t. The terminals are self-explanatory, and customers can use them without breaking stride. In the jargon of tech, giving feedback through HappyOrNot is “frictionless.” And, although the responses are anonymous, they are time-stamped. One client discovered that customer satisfaction in a particular store plummeted at ten o’clock every morning. Video from a closed-circuit security camera revealed that the drop was caused by an employee who began work at that hour and took a long time to get going. She was retrained, and the frowns went away.
Last year, a Swedish sofa retailer hired HappyOrNot to help it understand a sales problem in its stores. Revenues were high during the late afternoon and evening but low during the morning and early afternoon, and the retailer’s executives hadn’t been able to figure out what their daytime employees were doing wrong. The data from HappyOrNot’s terminals surprised them: customers felt the most satisfied during the hours when sales were low, and the least satisfied during the hours when sales were high. The executives realized that, for years, they’d looked at the problem the wrong way. Because late-day revenues had always been relatively high, the executives hadn’t considered the possibility that they should have been even higher. The company added more salespeople in the afternoon and evening, and earnings improved.
HappyOrNot was founded just eight years ago, but its terminals have already been installed in more than a hundred countries and have registered more than six hundred million responses—more than the number of online customer ratings ever posted on Amazon, Yelp, or TripAdvisor. HappyOrNot is profitable, and its revenues have doubled each year for the past several years; its clients have a habit of inquiring whether, by chance, the company is for sale—significant accomplishments for a still tiny enterprise whose leaders say that their ultimate goal is to change not just the way people think about customer satisfaction but also the way they think about happiness itself.
by David Owen, New Yorker
Image: HappyOrNot