Artificial intelligence has received a ton of attention lately, along with other forward-looking technologies like blockchain and microtech. I recently had the pleasure of listening to a talk about AI that felt far more realistic than most, and I found my dreamer self battling it out with my practical self.
Suddenly, I thought, “Oh, that would be a great blog post.”
So, here I am.
The Dream
Technology has long been touted as the answer to all sorts of things. It’s supposed to be the neutral, objective party. It’s supposed to take inputs, not care about them, and generate outputs. We’re supposed to be able to accurately predict weather, sports results, purchase behaviors, and eventually even the answers to survey questions.
For market research, it’s supposed to make our jobs easier and, increasingly, maybe even obsolete (see the predicted survey answers above).
We’re supposed to be able to gather all sorts of data on people easily, then rely on machine learning and whatnot to tell us how to run the next advertising campaign, and maybe even write it for us. We can then sit back and watch the revenues pile up.
We’re gonna be rich!!! Rich and barely working, I tell you!
The Reality
First, can we look at the market research industry right now? So many of us are still fighting to convince each other, let alone our stakeholders, that surveys need to become more engaging and mobile-friendly.
We’re still trying to figure out this whole telemetry world, from what data we’re actually gathering to what questions we can actually ask of that data.
We’re still trying to correctly apply gamification to our research, or at least add it to our research toolsets.
We’re still working on becoming more consultative: not just producing data reports, but reaching that seemingly ever-elusive goal of delivering insights.
That’s not even taking into account the issues with B2C and B2B research. It doesn’t account for the disparate data sources we still can’t tie together because we lack a unique identifier, let alone the privacy and security issues we’d face if we tried.
We also have the fact that humans are the ones programming the AI, so whatever biases we have, we’re passing on to our robot overlords.
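To make that concrete, here’s a minimal sketch with entirely made-up data. The groups, the labels, and the “majority vote” rule are all hypothetical stand-ins for a real model, but the mechanism is the point: if the historical decisions we train on were biased, the machine faithfully learns the bias, not the truth.

```python
from collections import Counter

# Toy, hypothetical training data: past human decisions per group.
# The skew against group_b is baked into the history itself.
training_data = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "approve"),
    ("group_b", "reject"),  ("group_b", "reject"),  ("group_b", "approve"),
]

def train(data):
    """'Learn' the majority label per group -- a stand-in for any model."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in votes.items()}

model = train(training_data)
print(model)  # {'group_a': 'approve', 'group_b': 'reject'} -- bias in, bias out
```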
When it comes to data collection, we also have to be wary of what feels like the same question being asked across studies, except the scale direction changed in this one, so now we can’t tie the two studies together and claim a collective view of that one metric. Or maybe one study changed the wording just a bit, but it was enough to change how respondents interpreted the question.
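For the scale-direction problem specifically, there is a narrow fix, but only under a big assumption. Here’s a minimal sketch with made-up responses, assuming both studies used the same 1-to-5 scale with identical wording and only the direction flipped; if the wording changed too, no amount of reverse-coding saves you.

```python
# Hypothetical data: both studies asked the "same" satisfaction question
# on a 1-to-5 scale, but Study B ran it in the opposite direction.
study_a = [5, 4, 4, 3, 5]  # 1 = very dissatisfied ... 5 = very satisfied
study_b = [1, 2, 1, 3, 2]  # 1 = very satisfied ... 5 = very dissatisfied

def reverse_code(responses, scale_min=1, scale_max=5):
    """Flip a Likert item so both studies point the same direction."""
    return [scale_min + scale_max - r for r in responses]

# Only now is a pooled average even arguably meaningful.
combined = study_a + reverse_code(study_b)
print(sum(combined) / len(combined))  # 4.2
```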
And then there’s the fact that most of our stakeholders don’t even know how to frame their business questions correctly to get the data they actually need to feed their decision-making processes. We could solve everything mentioned earlier and have an amazingly comprehensive dataset, but if we’re asking the wrong questions of it, we still end up with the wrong answers being used.
And again, we’re the ones doing the programming, so if we teach the machine to ask question X and use the output to give us prediction Y... what happens when question X is the wrong question?
So, while I love keeping an eye on the future, my reality angel keeps reminding me we still have such a long way to go.