The market research industry has faced a lot of change lately. From the rise of do-it-yourself research to increased telemetry availability to the growing number of people taking (or attempting to take) every survey on mobile, it can sometimes feel overwhelming to keep up. I read an article today about applying the kaizen principle to daily life (as opposed to the grand sweeping goals set on January 1 that get thrown out more often than they are achieved). It made me wonder whether this principle of small steps to achieve big changes might also apply to the research industry.
Smaller bites are easier to swallow
One of the biggest hurdles to making changes to studies is convincing the client (or even the supplier) that the change needs to be made in the first place. Take a scenario where a 30-minute semi-annual study is seeing declining response rates, most likely due to the length of the study and the lack of optimization for the mobile experience. It is easy to say the entire study needs to change: shorten it and redesign it for mobile. That works toward the goal of increased response rates, but it ignores the need to keep tracking certain pieces of information deemed important by stakeholders who have become increasingly attached to getting a read on that information twice a year. And yet, the changes need to be made.
How can this be done using small steps? First, acknowledge all of the needs of the study: stakeholders’ need for data; the ability to keep tracking information over time; accurate data; a representative audience. Now acknowledge the needs of the survey participants: a relevant topic; the ability to provide feedback accurately; a compelling reason to provide feedback; a survey that is not terribly time-consuming; and one that is easy to take on mobile.
What small steps can you take in the next wave of the study to accommodate these needs? Start by reviewing the last set of analyses against the questions being asked, to identify questions that nobody cares about anymore (generally legacy questions that made sense at one point, or that mattered to a stakeholder who has since moved on, or whose data turned out to be less helpful than imagined). Trim those questions out, and you are already taking a step toward shortening the study. Then look at third-party data sources such as telemetry and determine which questions could be replaced with measured data. Again, another great step toward shortening the study.
These suggestions are likely to go over well with everyone involved. After all, if nobody cares about the data being collected, or (even better) if history shows the data gathered from a question is either irrelevant or more accurately captured through third-party measurement, then what’s the point of keeping the question in the study?
In the next wave, you can look at improving the mobile experience. (Who knows, trimming the survey may well have improved the mobile experience already!)
Benefits of the small-steps approach
As with life goals, trying to do too much at once often ends in frustration and failure. Everyone gets overwhelmed and nervous that the data will be compromised by all of the changes being requested, so nothing changes and the study continues as before. By taking smaller steps, perhaps over a longer period of time, the chances of success increase. The changes being introduced become less scary for everyone involved, take less time, and can be monitored more easily.
I will admit it can be frustrating to move so slowly, but I suspect that some of these changes will have cascading positive effects.
I’d be curious to hear whether you’ve seen this small-steps approach used effectively, versus the “tear the band-aid off” approach, or vice versa. Have you seen a study shift dramatically to accommodate all of the changes in the industry landscape and succeed? I’m focusing on trackers, of course, as those are the studies most affected by these changes in data availability and survey modes. But if you’ve seen the approach applied more generally in your company, I’d be interested to know how it worked out.