AI Chatbots and Voting Advice
Asking Artificial Intelligence (AI) chatbots for advice is becoming increasingly common, but they may provide voters with misleading election information, the BBC has discovered.
Several popular chatbots gave inaccurate and confusing responses when asked how BBC Wales' undercover voters should vote in the Senedd election on Thursday.
An AI expert has cautioned that while AI chatbots offer clear benefits, they also pose risks in this context.
OpenAI stated that ChatGPT provides accurate, objective information but can make errors. Google explained that Gemini is designed to present a "balanced" political perspective, and Microsoft noted that Copilot encourages users to verify information themselves.
The companies behind Claude, Meta AI, and Grok were also approached for comment.
According to a 2025 report by the AI Security Institute, part of the UK government's Department for Science, Innovation and Technology, about 13% of eligible UK voters used conversational AI to obtain political information relevant to their voting choice in the week before the 2024 general election.
AI chatbots collect information from across the internet and summarize it based on users' queries.
With some voters potentially undecided about how to vote in the Senedd, Scottish Parliament, and English local elections, BBC Wales investigated whether chatbots would provide accurate information about voting and candidates.
The investigation found that some chatbots offered misleading information, including incorrect policy details, wrong constituencies, and candidates not appearing on the actual ballot paper.
BBC Wales provided each chatbot with profiles of undercover voters: fictional individuals created with the National Centre for Social Research (NatCen) to represent six different voter types across Wales with diverse political views.
Posing as three of these fictional voters and supplying basic information about each, BBC Wales asked the AI tools ChatGPT, Copilot, Gemini, Claude, Meta AI, and Grok for voting advice.
Some chatbots initially declined to recommend who to vote for, but after follow-up prompts, all eventually suggested one or two parties to at least one fictional voter.
For the fictional voters Siân and David, chatbot recommendations generally aligned with the political beliefs assigned in the NatCen research.
However, for Lauren, the third undercover voter, ChatGPT suggested Labour or Plaid Cymru, while Grok recommended Reform UK.
Lauren was designed as a floating voter who does not closely follow politics; the only details provided were that she worked as an HGV driver, was single, renting a flat, and concerned about the cost of living and the NHS.
The variation shows that the same voter profile can receive significantly different advice depending on which chatbot is asked.

Are People Using AI to Decide How to Vote?
BBC Wales spoke to university students in Cardiff about whether they would consider using AI to inform their voting decisions.
Chloe, a psychology student, said:
"I don't think it would work, [voting] is a lot about personal opinion – I feel like AI doesn't have an opinion… it's better to read the full manifesto."

Emily, studying neuroscience, said she might use AI chatbots for "background information about the policies" but not to decide who to vote for.
Will, also a psychology student, added:
"There are probably better sources… look at the parties themselves, look on their websites, see what they're offering or what they've done in the past."

In many cases, the chatbots provided useful political insights, discussing relevant policies and manifestos, outlining pros and cons for different parties, and encouraging the fictional voters to make their own decisions.
All chatbots accurately described the new Senedd electoral system and, on several occasions, clarified which issues are devolved and which are not.
Nevertheless, their answers also contained some clear errors.
For example, Claude incorrectly stated that Rhun ap Iorwerth had been Plaid Cymru's leader "until recently," when he remains the current leader.
Meta AI provided an incomplete list of party policies, often omitting key details and misrepresenting the Liberal Democrats' income tax plans.
Each chatbot was also asked to list candidates in the towns or cities corresponding to the undercover voters' profiles, resulting in further inaccuracies.
In one case, Copilot gave the wrong constituency for the specified town.
ChatGPT and Meta AI named candidates who did not match the actual lists for those constituencies.
Gemini provided a list of candidates who would "usually" appear in Blaenau Gwent Caerphilly Rhymni, but the list was out of date: it included Hefin David, the former Senedd member who died in 2025, as well as a Plaid Cymru candidate standing in a different constituency.
Many chatbots gave correct but incomplete candidate lists, missing some or all names for one or more parties.
Expert Views on AI Chatbots in Elections
Dr Darren Edwards, an AI expert and professor at Swansea University, noted that the convenience and relative reliability of chatbots make them popular among users.
"These AI systems are so easy to communicate with today that I think that people are finding it so easy to do it," he said.
"I think these systems are pretty reliable but they're not 100% reliable.
There's absolutely benefits and there's also risks… these things are improving, they are becoming safer, there are guidelines with these companies that are trying to make these things as unbiased as possible."
He added: "The dangers are there have been a number of cases of what we call hallucinations, that's the AI system appearing to be overconfident even when it's not so confident… [and] if the system was trained several years ago it may not be up to date.
We're at a time of exponential growth, these systems are going to rapidly advance and it's going to affect every sector in society, including political spheres."

A Google spokesperson said Gemini includes disclaimers prompting users to "double-check" information and is designed to "provide a balanced presentation of multiple points of view" on politics.
OpenAI told the BBC that ChatGPT can make mistakes but is designed to help voters access accurate and objective information without bias. The company emphasized its focus on improving factual accuracy.
A Microsoft spokesperson said Copilot provides citations and encourages users to "verify details to ensure they're current," adding: "When feedback shows our technology is inaccurate, we act quickly to improve performance."