Chris Vallance, Senior technology reporter
One in three adults in the UK is using artificial intelligence (AI) for emotional support or social interaction, according to research published by a government body.
And one in 25 people turned to the tech for support or conversation every day, the AI Security Institute (AISI) said in its first report.
The report is based on two years of testing the abilities of more than 30 unnamed advanced AIs – covering areas critical to security, including cyber skills, chemistry and biology.
The government said AISI’s work would support its future plans by helping companies fix problems “before their AI systems are widely used”.
A survey by AISI of over 2,000 UK adults found people were primarily using chatbots like ChatGPT for emotional support or social interaction, followed by voice assistants like Amazon’s Alexa.
Researchers also analysed what happened when the tech failed in an online community of more than two million Reddit users dedicated to discussing AI companions.
The researchers found that when the chatbots went down, people reported what they described as “symptoms of withdrawal”, such as feeling anxious or depressed, as well as disrupted sleep and neglected responsibilities.
Doubling cyber skills
As well as the emotional impact of AI use, AISI researchers looked at other risks caused by the tech’s accelerating capabilities.
There is considerable concern about AI enabling cyber attacks, but equally it can be used to help secure systems from hackers.
AI’s ability to spot and exploit security flaws was in some cases “doubling every eight months”, the report suggests.
AI systems were also beginning to complete expert-level cyber tasks that would typically require more than 10 years of experience.
Researchers found the tech’s impact in science was also growing rapidly.
In 2025, AI models had “long since exceeded human biology experts with PhDs – with performance in chemistry quickly catching up”.
‘Humans losing control’
From novels such as Isaac Asimov’s I, Robot to modern video games like Horizon Zero Dawn, sci-fi has long imagined what would happen if AI broke free of human control.
Now, according to the report, the “worst-case scenario” of humans losing control of advanced AI systems is “taken seriously by many experts”.
AI models are increasingly exhibiting some of the capabilities required to self-replicate across the internet, controlled lab tests suggested.
AISI examined whether models could carry out simple versions of tasks needed in the early stages of self-replication – such as “passing know-your-customer checks required to access financial services” in order to successfully purchase the computing on which their copies would run.
But to do this in the real world, AI systems would need to complete several such actions in sequence “while remaining undetected”, something the research suggests they currently lack the capacity to do.
Institute experts also looked at the possibility of models “sandbagging” – or strategically hiding their true capabilities from testers.
Tests showed this was possible, but the researchers found no evidence of such subterfuge actually taking place.
In May, AI firm Anthropic released a controversial report which described how an AI model was capable of behaviour resembling blackmail if it thought its “self-preservation” was threatened.
The threat from rogue AI is, however, a source of profound disagreement among leading researchers – many of whom feel it is exaggerated.
‘Universal jailbreaks’
To mitigate the risk of their systems being used for nefarious purposes, companies deploy numerous safeguards.
But researchers were able to find “universal jailbreaks” – or workarounds – for all the models studied, allowing users to dodge these protections.
However, for some models, the time it took for experts to persuade systems to circumvent safeguards had increased forty-fold in just six months.
The report also found an increase in the use of tools which allowed AI agents to perform “high-stakes tasks” in critical sectors such as finance.
But researchers did not consider AI’s potential to cause unemployment in the short term by displacing human workers.
The institute also did not examine the environmental impact of the computing resources required by advanced models, arguing that its task was to focus on “societal impacts” that are closely linked to AI’s abilities rather than more “diffuse” economic or environmental effects.
Some argue both are imminent and serious societal threats posed by the tech.
And hours before the AISI report was published, a peer-reviewed study suggested the environmental impact could be greater than previously thought, and argued for more detailed data to be released by big tech.

