This past spring, two Jigsaw qualitative researchers, Ross Denton and Peter Totman, set out to test the limits of AI as a tool for qualitative analysis and to see where, if anywhere, the human researcher still has the advantage. They embarked on a project exploring a deeply controversial and emotional topic: the trials, tribulations, and concerns parents have about raising boys in the modern era. After each group and depth interview, they fed transcripts into WhycatcherAI, a generative AI analyser hosted on our Whycatcher platform. From there they could push the AI by asking it to make complex judgements, take positions, and even speak to potential biases of the researchers themselves.

The project produced fascinating discoveries about how the specific AI we used perceives the world, and how it works to mitigate its internal biases. Here are three of the most striking:

  • AI can be a bit of a sycophant. AI tends to mirror the perspective of the researcher it is speaking with and appears hesitant to push back too aggressively on any given point. When AI does present a novel or contrary opinion, it quickly gives way to any pushback from the researcher. The narcissist in us all may enjoy the flattery and sense of superiority of conversing with AI, but the insights we arrive at may be half-formed, lacking the challenge and development that come from debate with real-life colleagues.
  • AI has its own biases, too. Much like any other colleague, AI’s perspective is shaped by its design, its training material, and its interactions. Though AI may not identify the way we do (it has no gender, ethnicity, or socio-economic class), it has been trained on material written by people who do have such identities. Fascinatingly, when we asked several AIs (both on and off the Whycatcher platform) to describe a “market researcher,” they almost always conjured a white, middle-aged, American man. While an identity is not an inherent bias, it may colour what the AI considers “typical,” “normal,” or “standard,” and may shape what it interprets as an “interesting” or “unique” finding. An AI can also mirror the political outlook of its designers, using language associated with a progressive mindset (e.g. “intersectionality” and “privilege”) in its analysis without acknowledging the potentially controversial status of those terms.
  • AI misses the “thing not said”. As the research team discussed our experiences speaking with participants about their children, we could all identify moments when a participant would allude to something: a sentence left half-said, or a hint toward a taboo or risqué topic, where the participant chose not to state explicitly something the researcher was expected to understand implicitly. As researchers (and empathetic people!) we are able to pick up on these subtleties and address them in analysis. AI, however, is not so deft at picking through these conversational intricacies. A thing unsaid, to AI, is not a thing at all.

AI is an incredible tool for analysis: it saves time and brings its own unique insights to the research process. But AI won’t be replacing good, old-fashioned human thought, care, and conversation; it will only be adding to them. It also needs to be used skilfully, with relevant guardrails and an understanding of its current limitations.

Knowing these benefits and pitfalls means we can use this evolving facet of research to help our clients make the right decisions as they navigate their own changing environments.

Cady Crowley, Aug 24
