Novel analysis of green claims made on products – July 2025

Practice area: Data & technology
Client: Which?
Published: 2 July, 2025
Keywords: AI Analysis, consumer research, LLM, Novel methods, Sustainability

London Economics worked with Which? to analyse environmental claims on products. Green claims made by businesses can influence consumers’ decisions to purchase a product. However, consumers often struggle to verify the accuracy or authenticity of environmental claims, which creates an incentive for businesses to make misleading green claims. This can lead to consumers buying products they believe are more sustainable than they actually are, and paying a premium for them. It also undermines businesses that genuinely invest in sustainability, weakening their competitive advantage and discouraging other businesses from acting more sustainably.

The research used a novel methodology that deployed large language models to identify green claims in product descriptions and to assess the claims made about a sample of 1,000 products against the Competition and Markets Authority’s Green Claims Code. The model was asked to apply an assessment framework of 24 checks based on five of the six principles in the Green Claims Code. The framework was tested and developed iteratively on a pilot sample to ensure the questions were robust, and was supplemented by consumer tests.
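
The study does not disclose its prompts, model choice or the exact wording of the 24 checks, but the screening step can be sketched in outline. The Python example below is a minimal, hypothetical illustration only: it assumes an OpenAI-style chat completions API and uses two invented checks standing in for the real framework.

```python
# Minimal sketch of an LLM-based green-claims screening step.
# Assumptions (not from the study): an OpenAI-style chat API, a toy
# two-check framework standing in for the real 24-check framework,
# and JSON-formatted model answers.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative checks only; the study's actual 24 checks are not reproduced here.
CHECKS = {
    "truthful_accurate": "Is every environmental claim specific and verifiable?",
    "clear_unambiguous": "Is the claim free of vague terms such as 'eco-friendly'?",
}

def assess_product(description: str) -> dict:
    """Ask the model to list green claims and apply each check to them."""
    prompt = (
        "You are reviewing a retail product description for environmental "
        "(green) claims.\n"
        f"Description: {description}\n\n"
        "Step 1: List any green claims verbatim (empty list if none).\n"
        "Step 2: For each check below, answer 'pass', 'fail' or 'n/a' and "
        "give a one-sentence justification.\n"
        + "\n".join(f"- {name}: {question}" for name, question in CHECKS.items())
        + "\nRespond as JSON with keys 'claims' and 'checks'."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not the study's choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    result = assess_product(
        "100% eco-friendly bamboo toothbrush, kind to the planet."
    )
    print(json.dumps(result, indent=2))
```

In a full pipeline, each product description would be run through all of the checks and the verdicts aggregated by Green Claims Code principle; the iterative pilot testing described above is where prompts and check wording would be refined.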

The research found that from an initial sample of just over 8,800 product descriptions, more than a fifth contained at least one green claim (22%). The research then analysed the green claims made in a reduced sample of 1,000 products that were representative of consumer spending patterns. This analysis found that only 16% of these products did not fail any checks, while 62% of the products failed checks related to at least two of the Green Claims Code’s principles.

Beyond the findings themselves, the work was a significant step for LE in integrating generative AI into our methodology and using it for substantive analysis. Just a couple of years ago, this kind of analysis would have required a large effort by human reviewers. Given the novelty of the approach, we implemented a thorough testing and verification framework, using human evaluators as a benchmark. With the ongoing progress in LLM capabilities, we expect their performance on individual analysis tasks to improve and the scope for applying these models in our work to keep growing.
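
As an illustration of what benchmarking against human evaluators can look like, the short sketch below compares model and human verdicts on a single check using raw agreement and Cohen's kappa. The labels, the choice of scikit-learn and the metrics are assumptions for illustration; the study's actual verification framework is not described in this detail.

```python
# Sketch of benchmarking LLM check outcomes against human evaluators.
# The labels below are invented; the study's evaluation data and
# agreement thresholds are not published here.
from sklearn.metrics import cohen_kappa_score

# Outcome of one check ("pass"/"fail") on a small pilot sample,
# as judged by the model and by a human reviewer.
llm_labels   = ["pass", "fail", "fail", "pass", "fail", "pass", "fail"]
human_labels = ["pass", "fail", "pass", "pass", "fail", "pass", "fail"]

# Share of products where the two verdicts are identical.
agreement = sum(a == b for a, b in zip(llm_labels, human_labels)) / len(llm_labels)
# Agreement corrected for what would be expected by chance.
kappa = cohen_kappa_score(llm_labels, human_labels)

print(f"Raw agreement: {agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```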

Read the full study here.