FIELD NOTE

The Problem with AI Insights, Part 2: If You Must, Here’s Some Notes on How

Dec 16, 2025

In the previous article, we explored some of the non-technical reasons why generative AI tools fail to produce genuine insights about human behaviour. AI lacks the contextual, cultural, and theoretical grounding that human researchers bring to the interpretive process. AI systems, in their current forms, cannot do the analytical work of a human-centric researcher because they are too error-prone, too limited to be effective analysts, and incorrectly deployed when they are left to do the work themselves.

Unlocking AI’s Potential by Situating It Within a Human-Led Research Process

But simply pointing out current problems doesn't move us forward. If current AI systems cannot create genuine insights on their own, we ask: how can AI be deployed meaningfully within a well-designed, human-led research process? How can we use it as a constrained and supervised tool, not as an autonomous analyst? Using AI systems in this way makes them useful tools for human analysis, not a liability to the entire endeavor.

With this in mind, we will lay out what forms of AI can be used effectively, where in a human-led process they can be best used, and what this means for businesses looking to use AI to help them understand the world outside their front doors.

Much of what we must cover is really a simple matter of how to correctly understand AI tools, what adaptations to existing research processes need to be made to employ them most effectively, and how to avoid what I call the “hammer as screwdriver problem.” This last point is extremely important because it is an endemic problem in market research and in business research in general. It involves a systematic, and systemic, misuse of research methods and tools.

Picking the Right Tools for the Job

The problem lies in the tendency for businesses to use the wrong tool to get a task accomplished. This is like using a hammer to drive in a screw – possible, but destructive and inefficient. It is better to understand the tool and the task and to select the correct tool, which in this case would be a screwdriver.

The same logic applies to a business research context. This could mean selecting the right researcher, research approach, methodology, agency, etc. For example, asking a data analyst to conduct a scenario process and peer ten years into the future is less optimal than just having a foresight strategist do it. Using a survey to develop insights on design questions is less optimal than conducting some good ethnographic research. Using AI to develop insights on human behaviour presents the same problem: treating general-purpose AI systems as if they were trained specialists is using the wrong tool for the task.

The task is fairly well understood by specialist researchers. Social scientists of many stripes are already very good at exploring the nuances and wonders of humans doing their thing. So, for us right now, the first operation in incorporating AI into an insights process is to develop a perspective on what AI is, what it is not, and how it can be used as a tool in this process.

Understanding the Many Forms of AI

First we have to avoid describing AI as a single thing; there are many kinds of AI. Thus, AI should be discussed as AI systems or AI tools. There are distinct types of AI systems – NLP (natural language processing) systems, LLMs (large language models), and a wide variety of generative media systems, analytical systems, sensory systems, and reasoning systems. The major commercial tools like Gemini, ChatGPT, Claude, and the like combine several of these into single platforms, but the underlying systems are quite different, each with its own jobs and capacities.

The tendency for those spinning the AI hype is to talk about these systems as if they are one single technology, when they are in fact quite distinct and different technologies with their own limitations. Calling a system “AI” rather than being more precise about what type of AI it is and what it does perpetuates the misrepresentation of AI as an all-intelligent form of general AI (a super smart, thinking and doing machine), when what we are actually dealing with is something much more limited.

For our purposes, this means we can leave the big LLMs and their hype-masters aside for now and focus first on a narrower application of specific tools to accomplish an important task – supporting a researcher in a social-scientific, applied research process.

Making It the Right Tool for the Task via Relational Design

If an AI system is not going to be doing all the work of a researcher, what is it going to be doing? A fully automated data analysis and insights generation process remains a distant dream for now – if it is desirable at all. It is foolish to ask an AI system to reliably conduct an elaborate research process on its own and expect quality on par with that of a trained human expert.

But if we incorporate an AI system into a human-led process, the power of these tools is unlocked. Instead of being the researcher, the AI systems are now tools accomplishing important tasks, or functions, in a longer process. They bring scale, speed, and precision to many steps. These things make the human researcher better and faster at accomplishing their task.

To incorporate AI into this process, we need to define what tasks in the research process an AI is expected to execute, how the researcher is meant to interact with the AI and employ it as a tool, and what outcomes we expect the AI to generate – in terms of structure, content, depth, and quality. This requires a relational and workflow design in which the characteristics and boundaries of human-machine interaction are defined.
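To make this concrete, here is a minimal sketch of what such a relational/workflow design artifact might look like when written down. The task names, fields, and review steps are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

# A minimal sketch of a relational/workflow design artifact. The names
# (AITask, expected_output, human_review) are illustrative, not a standard.
@dataclass
class AITask:
    name: str                      # e.g. "transcription", "keyword indexing"
    inputs: list[str]              # data the tool is allowed to touch
    expected_output: str           # structure and depth the researcher expects
    human_review: str              # who checks the output, and when
    out_of_scope: list[str] = field(default_factory=list)  # explicit boundaries

workflow = [
    AITask(
        name="transcription",
        inputs=["interview audio"],
        expected_output="verbatim transcript with speaker labels",
        human_review="researcher spot-checks against audio before coding",
        out_of_scope=["summarising", "interpreting tone"],
    ),
    AITask(
        name="keyword indexing",
        inputs=["cleaned transcripts"],
        expected_output="per-source index of researcher-defined terms",
        human_review="researcher validates hits against the raw text",
        out_of_scope=["drawing conclusions across sources"],
    ),
]
```

Writing the boundaries down like this is the point: the tool's remit, and the human checkpoint attached to it, are decided before the tool ever runs.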

Defining Roles, Responsibilities, and Audience

Relational design of an AI application requires a consideration of the special skills involved in the research process – both those skills possessed by the human researcher and the task-specific skills expected of the AI system. The same is true for the audience. Who exactly is an AI collaborating with, and who is the audience of its outputs – i.e. the discourse in which its outputs are expected to participate? A binding perspective needs to be defined for both.

Fundamentally, it is the human researcher who sets the terms of what the overall task is, and how it should be accomplished. They define what kind of research is being done and set the terms of the scale and scope. Given that we are discussing commercial insights here, we will use the context of a qualitative research study to ground us in our examples.

Solving the Context Problem

The process of qualitative research is not rocket science, but it has many nuances and difficulties. The task is to establish a research problem (most often defined by a client), conduct some research with real people living their lives out in the world, render these encounters into some form of useable data, analyze this data, and then tell the story of how this analysis provides insights into people's lives and why this matters for a company. Immediately, one can see that AI does not have a lot of roles to play here.

For one thing, an AI system cannot deal with real people living their lives out in the real world. The place, space, and physicality of the research context – the "broader field" – is something to which an unembodied AI usually does not have access. Embodied and situated knowledge is, however, a central element of social-scientific research styles such as ethnography, and while much of this can be captured and mediated via written descriptions, photos, video, and audio, the visual and sensory experience an ethnographer gains in the field develops a life beyond field notes or transcripts.

Context is heavily set by the tacit knowledge and taken-for-granted things that the people under scrutiny – including the researcher – know how to make sense of. Most AI systems are still pretty bad at reading between the lines and understanding inference and meaning, especially when it involves the slippery world of humor and innuendo. AI is best used in a limited context that can be fully unravelled to provide a structured framework and boundaries of interpretation. So, without access to the context, or any facility with it, the AI system will be mostly limited to the role of rendering the encounters into data. This means using an NLP system to produce transcripts and sort all of the interviews, photos, video, and other artifacts from the field.
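As one concrete illustration, here is a minimal sketch of that rendering step, assuming the open-source Whisper speech-to-text model is available for transcription. The folder layout and file types are illustrative assumptions rather than a prescribed pipeline:

```python
import shutil
from pathlib import Path

import whisper  # open-source speech-to-text; one option among several

# A minimal sketch: transcribe interview audio and sort field artifacts
# by type so a human researcher can review them. Paths are illustrative.
model = whisper.load_model("base")

field_dir = Path("fieldwork/raw")
out_dir = Path("fieldwork/sorted")

for item in field_dir.iterdir():
    suffix = item.suffix.lower()
    if suffix in {".wav", ".mp3", ".m4a"}:
        result = model.transcribe(str(item))
        dest = out_dir / "transcripts" / f"{item.stem}.txt"
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(result["text"], encoding="utf-8")
    elif suffix in {".jpg", ".png", ".mp4", ".mov"}:
        dest = out_dir / "media" / item.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(item, dest)
```

Nothing here interprets anything; the output is raw material that still has to be read, checked, and made sense of by the researcher.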

Using AI for More Accessible Tasks that Can be Automated

But an AI system does have skills that the human researcher lacks. This is the ability to do small tasks across a large number of instances. What this means is an AI system can aid in the collating of information across a vast number of sources. For most of the companies and charlatans claiming to do AI insights now, this is the sum total of their analysis. They summarize and collate information across sources and call it a day.

However, this need not be a bad thing. Used properly, it is incredibly useful – for example, searching for a list of items defined by the researcher across a large number of sources, a task that would take a human months to do. But again, the AI system is used in the narrowest fashion. It is aiding in the synthesis, not doing it by itself.
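A minimal sketch of that kind of narrow, supervised collation might look like the following, where the term list, file paths, and output format are illustrative assumptions:

```python
import csv
import re
from pathlib import Path

# A minimal sketch of narrow collation: scan every transcript for a
# researcher-defined list of terms and log where each one appears.
terms = ["repair", "warranty", "second-hand", "hand-me-down"]
sources = Path("fieldwork/sorted/transcripts").glob("*.txt")

with open("term_index.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["source", "term", "mentions"])
    for path in sources:
        text = path.read_text(encoding="utf-8").lower()
        for term in terms:
            hits = len(re.findall(r"\b" + re.escape(term) + r"\b", text))
            if hits:
                writer.writerow([path.name, term, hits])

# The resulting index is a starting point for the researcher's own reading,
# not a finding in itself.
```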

Finally, the storytelling that makes the qualitative, indeed ethnographic, process tick can be aided by generative AI systems. While we do not recommend ever letting an AI system write for you, these tools can create additional assets like photos, videos, sound effects, even 3D renders to help tell the story.

However, all of these generated assets need to be properly prompted, tweaked, edited, and rendered. This work is best done in collaboration with a graphic designer who also specializes in prompt engineering. But the point here is that another type of AI can be of great service in the important work of speaking to an audience and helping them understand the value and meaning of the insights.

Deployment in the Insights Generation Itself Requires Caution

So far, we have discussed things that several AI systems can do well at a few key points in the insights development process. But can any AI systems participate in the analytical process and the writing of insights?

The first step in the deployment of an AI tool in insights generation is to decide if this system will add anything beneficial to the process. If it is just to add a technological layer, or simply to add speed into the system, it is best to ask if this is really necessary. However, if the researchers are dealing with an overwhelming volume of inputs (interviews, written responses, hours of video), then AI has a role to play in making short work of it all. Process comes first, tech comes second.

But this brings us to the second consideration: what are the inputs the AI system will encounter? Is it credibly able to handle them, or do they need "cleaning" beforehand so that it does not have to make too many decisions? An AI system of any kind needs clean, or at least well-considered, data, and needs to be instructed in how that data is to be used. It cannot be left to manage these inputs itself and still be useful in an analytical procedure. So, the deployment questions here are: will the data need to be cleaned, and how much does the system need to know in order to do its job properly? It is often the case that these two processes will take longer than just doing the work the old-fashioned way, meaning the utility of the AI system is very low.
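As an illustration of what such "cleaning" can involve, here is a minimal sketch in which the cleaning rules are decided by the researcher rather than left to the model. The specific rules and file names are assumptions for the example only:

```python
import re

# A minimal sketch of pre-cleaning a transcript before an AI system sees it.
# Which fillers to strip and how speaker labels are normalised are research
# decisions made explicit by the human, not choices left to the model.
FILLERS = re.compile(r"\b(um+|uh+|erm+)\b[,.]?\s*", flags=re.IGNORECASE)

def clean_line(line: str) -> str:
    line = FILLERS.sub("", line)                 # drop verbal fillers
    line = re.sub(r"\s{2,}", " ", line).strip()  # collapse stray whitespace
    # Normalise speaker labels like "INTERVIEWER :" -> "Interviewer:"
    line = re.sub(r"^(\w+)\s*:", lambda m: m.group(1).capitalize() + ":", line)
    return line

with open("transcript_raw.txt", encoding="utf-8") as f:
    cleaned = [clean_line(l) for l in f if l.strip()]
```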

The next step in deploying an AI model to do analytical work is to expand its work beyond synthesis and make its efforts sit within a process. This involves making sure it is not replacing the human-led effort or making analytical choices by itself. It needs to be taught how to build the basic elements of an insight (description, explanation, data, quotes, etc.) and then how to draw conclusions from them. This requires opportunities for evaluation, assessment, and correction – all of which require transparency into, and oversight of, what it is doing.
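One way to make those basic elements, and the human checkpoint, explicit is to give the tool a fixed structure to fill in. The following is a minimal sketch; the field names and sign-off rule are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from typing import Optional

# A minimal sketch of the "basic elements of an insight" as a structure the
# tool must fill in and a human must sign off on. Field names are illustrative.
@dataclass
class InsightDraft:
    description: str                 # what was observed
    explanation: str                 # why it might be happening (theory-led)
    supporting_quotes: list[str]     # verbatim evidence from the data
    data_sources: list[str]          # which interviews/artifacts it rests on
    reviewed_by: Optional[str] = None  # unset until a researcher has checked it

    def ready_for_reporting(self) -> bool:
        # No draft leaves the process without evidence and human sign-off.
        return bool(self.supporting_quotes) and self.reviewed_by is not None
```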

Transparency and Evaluation are Essential

AI transparency is a sticky subject. While there are many approaches and meanings to be assessed, basically it breaks down into three questions. Can we see what it is doing as it is doing it and determine if it fits into our established processes? Can we understand this process? Can we evaluate what it has done and make corrections? If the answer is yes to all three, then the AI system has utility in the analytical process. If the answer is no to even one of these questions, then the AI system has no utility in analysis.

An insight into human behaviour must incorporate an explanation of why it is happening. Often this means engagement with frameworks coming from social theories of many stripes. Basic summarization and comparisons of similarities and differences are not examples of this kind of engagement. Consequently, none of the current LLMs are able to do this work.

Not For Analysis, But Good at Other Things

Given its current restrictions, AI is not an analyst – it is a tool with a very specific job, not a source of insight. Treating it as an analyst or autonomous insight engine would be a category error – giving it a role it cannot handle. AI is an assistant to an analyst at best. Given these constraints, AI – as an assistant tool – must be supervised by a human, especially when its workings are not completely transparent (as in explainable AI). Results must be subjected to constant evaluation. Otherwise, the system risks producing plausible-sounding but misleading results. This means oversight and collaboration are the final two elements to consider.

There are many smaller forms of AI tools available on the market that assist researchers productively by eliminating laborious tasks such as transcription, indexing, or coding for keywords. They excel at automation, not autonomy: these useful forms of AI offer automation for limited, clearly defined tasks in the research process. But they lack the epistemic capacity for more complex tasks, such as autonomous interpretation and proper triangulation. To be useful in the generation of insights, they must enhance the vision, knowledge, and analytical potential of the user. This means they must operate as a partner, able to work within limits, to take instruction, to be precise, and to get out of the way when their limits are reached.

AI Systems are Tools, Not Colleagues

An AI tool must then be endlessly controllable. It must show its work, respond to adaptations of its basic functions, and be precise and clear in its outputs. Much like a screwdriver is the extension of a user’s hand and not the power behind the task, the AI tool must be adaptable to the way a human user works. It is not an equal partner. It is not a junior partner. It is a tool.

Contrary to the hype about agentic forms of AI, all forms of AI are just tools. It's how we use them, and in what context, that matters. Hence, AI should be used within processes designed by human experts who understand both the limitations of the tool and the requirements of good research. Using AI to define a process would therefore be foolish, because a human problem and a process to solve it come first; technology that helps with this comes second. AI should serve a human process, but it should not define one. Putting technology ahead of methodological thinking would be dangerous from a scientific research perspective.

This means that the human-designed process and the methods employed must be up to the task of implementing AI where needed. So it is up to us to define how to use AI tools properly, and where to draw the line. Sometimes, doing certain steps yourself will not only yield significantly better results, but also be more meaningful for the researchers involved from an epistemic perspective.

Expert Systems Are A Possible Solution

Relational and task-specific design matters. The usefulness of AI depends on how it is employed – the tasks it is used for determine how it should be integrated into a workflow and how its limitations are accounted for. Not all AI is suitable for all tasks, and we would prefer a narrowly defined expert system.

Paradoxically, this suggests that an older model of AI system would work better in the insights generation process. Using LLMs within an expert system framework would build control and oversight into the design of the system, rather than relying on the LLM to work that out by itself. An expert system model would also allow the efficiencies in certain areas, like transcription, coding, and comparison, to happen within extremely controlled conditions. The analytical process would also benefit from this control because the operations of the LLM would only need to conform to the limits set up in the expert system's design parameters. Transparency would become an evaluation procedure, rather than the open-ended assessment problem it is with unfettered LLMs.
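A minimal sketch of what such an expert-system-style wrapper could look like follows. Here call_llm is a placeholder for whatever model API is actually in use, and the whitelisted operations and validation rules are illustrative assumptions:

```python
# A minimal sketch of an expert-system-style wrapper around an LLM. The LLM
# is only invoked for whitelisted operations, with fixed prompts and hard
# validation rules; everything else is refused. call_llm stands in for
# whatever model API is actually in use.

ALLOWED_OPERATIONS = {
    "summarise_transcript": {
        "prompt": "Summarise the following transcript in neutral, descriptive "
                  "language. Do not infer motives or draw conclusions.\n\n{text}",
        "max_output_words": 200,
    },
    "compare_terms": {
        "prompt": "List sentences in the following text that mention any of "
                  "these terms: {terms}. Quote them verbatim.\n\n{text}",
        "max_output_words": 400,
    },
}

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in the model API of your choice here.")

def run_operation(name: str, **kwargs) -> str:
    if name not in ALLOWED_OPERATIONS:
        raise ValueError(f"Operation '{name}' is outside the system's remit.")
    spec = ALLOWED_OPERATIONS[name]
    output = call_llm(spec["prompt"].format(**kwargs))
    if len(output.split()) > spec["max_output_words"]:
        raise ValueError("Output exceeded the agreed bounds; flag for review.")
    return output  # still subject to human evaluation before use
```

The design choice is that the boundaries live in the wrapper, written by the research team, not in the model's own behaviour.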

Conclusion

Human insights remain irreplaceable for now. Insight comes from human interpretation, grounded in context, theory, and reflexivity. AI insights are ultimately just the musings of a system too divorced from human thought and action to provide solid analyses of what people do. If humans are not doing the real interpretive work anymore, the results are likely going to be superficial, hollow, and potentially harmful if used for decision-making.

An AI-aided insight process, however, is fully within our reach – but only if we design a process that makes sense, incorporates the tools properly, and provides them with the inputs and constraints to succeed.

Throughout the history of automation, the temptation to let tools define our approach is strong. But chasing automation for its own sake leads to methodological shallowness. We must resist technological determinism and return to a practice-centered, problem-driven orientation. If AI is to have a place in insight work, it must be co-designed with social-scientific researchers, not imposed on them. We need collaborative infrastructures that blend technical innovation with domain expertise – not one-size-fits-all platforms.

Insight work also constitutes an act of responsibility. When we delegate interpretive power to machines without oversight, we risk not only bad outcomes but also ethical failure. The human researcher remains accountable for what is known, said, and done in the name of understanding.

At the end of the day, AI insights are just insights about humans. Anyone who lets the machine do the "thinking" for them misses out on unravelling the complexity of the human condition themselves. The best question to ask is not “What can AI do?” but “What should humans do – and how can machines assist without interfering?”


© 2025 Human Futures