Integrating AI into the Market Research Reporting Process
Our team’s initiative to test AI tools that support our human-centric research continues… This time, we asked ourselves, “Which AI tools can support deliverable production?”
Our recent study on Belonging for San Francisco State offered an ideal opportunity to evaluate deliverable options. We started with our thoroughly analyzed, human-authored report, which served as a benchmark against the machine-generated versions.
This is a necessary step when testing new solutions: be sure you have a grounding point to start from and to compare your results against!
We then fed data into multiple AI platforms to give the tools a synthesis starting point. Some platforms (typically generalist LLMs) could also take in additional contextual documents (e.g., background documents, client information, participant segmentation, desired output structure).
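If you’re scripting this step rather than pasting documents into a chat window, a minimal sketch of bundling contextual documents into one labeled prompt block might look like the following. The filenames here are hypothetical placeholders, not our actual project files:

```python
from pathlib import Path

# Hypothetical filenames; substitute your own background materials.
CONTEXT_FILES = [
    "background_brief.md",
    "client_overview.md",
    "participant_segments.md",
    "desired_output_structure.md",
]

def build_context(paths=CONTEXT_FILES):
    """Concatenate supporting documents into one clearly labeled context block."""
    sections = []
    for name in paths:
        text = Path(name).read_text(encoding="utf-8")
        sections.append(f"--- {name} ---\n{text}")
    return "\n\n".join(sections)
```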
It’s so important to consider multiple tools (an ‘AI tech stack’, if you will) because you often need more than one to arrive at the optimal result.
Steps we took in this case:
- We gave Claude prompts such as: “Rewrite the following marketing research report, inclusive of survey results and respondent quotes, into an engaging, informative, and professional marketing white paper for higher-ed audiences about the importance of Belonging among Latinx students.” It worked magic refining the language in our original version, and it even suggested new headings and a title for consideration. (A scripted version of this prompting appears after the list.)
- Since we’ve learned from past tests that AI tools tend to condense content, we had to re-prompt Claude to get the depth we needed, with follow-ups such as: “Re-draft the pasted document of research findings again, but in a longer, more detailed format. It should fill 10 pages of a Word document written at 14-point font.”
- Once the revisions were made, we turned to Gamma.app to help visualize and lay out the content. AI-generated images can be a little funny or off, so we found it more useful to use real images and to select “Free to Use” to avoid copyright issues.
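For teams that want to script the Claude steps above rather than work in the chat interface, here is a minimal sketch using Anthropic’s Python SDK. The model alias, input filename, and token limits are assumptions for illustration, not necessarily what we used:

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment
MODEL = "claude-3-5-sonnet-latest"  # assumed model alias; use whatever is current

# Hypothetical filename for the human-authored benchmark report.
report_text = open("benchmark_report.txt", encoding="utf-8").read()

# Turn 1: the rewrite prompt from the first step above.
history = [{
    "role": "user",
    "content": (
        "Rewrite the following marketing research report, inclusive of survey "
        "results and respondent quotes, into an engaging, informative, and "
        "professional marketing white paper for higher-ed audiences about the "
        "importance of Belonging among Latinx students.\n\n" + report_text
    ),
}]
draft = client.messages.create(model=MODEL, max_tokens=4096, messages=history)
draft_text = draft.content[0].text

# Turn 2: re-prompt for depth, since models tend to condense.
history.append({"role": "assistant", "content": draft_text})
history.append({
    "role": "user",
    "content": (
        "Re-draft the pasted document of research findings again, but in a "
        "longer, more detailed format. It should fill 10 pages of a Word "
        "document written at 14-point font."
    ),
})
longer = client.messages.create(model=MODEL, max_tokens=8192, messages=history)
print(longer.content[0].text)
```

Keeping both turns in one message history is what lets the re-prompt refer back to the first draft instead of starting from scratch.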
For this particular project, we generated the report five times to get multiple options to consider. We took all the versions with a grain of salt, reviewing them with a careful human eye and revising and reframing many things by hand along the way to ensure we were delivering human-level quality (and accuracy!).
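Producing those multiple candidates can also be scripted. Continuing the sketch above (it reuses the client, MODEL, and history names from that example), a simple loop with a nonzero temperature yields varied drafts to review side by side:

```python
from pathlib import Path

# Request several independent drafts; nonzero temperature makes them vary.
drafts = []
for i in range(5):
    response = client.messages.create(
        model=MODEL,
        max_tokens=8192,
        temperature=1.0,
        messages=history,
    )
    drafts.append(response.content[0].text)
    # Save each candidate for side-by-side human review.
    Path(f"draft_{i + 1}.md").write_text(drafts[-1], encoding="utf-8")
```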
Overall, this is one good set of tools that can be very useful for generating reports and visualizing information in new ways, especially when design isn’t a researcher’s specialty or when timelines are condensed.
If you’re curious about the output from this process, check out the resulting white paper here. And if you’re a brand in need of diverse and compelling deliverables, the KNow team (and our bot allies) are ready to step in and create content for your team! Send us your deliverable challenges at admin@knowresearch.com.