Dr Sue Roberts from the Faculty of Humanities and Social Sciences writes for The Conversation.
The UK government plans to criminalise the creation of sexually explicit deepfakes, in which images or videos of people are blended with pornographic material using artificial intelligence (AI) to produce seemingly authentic content. While it is already an offence to share this kind of material, it's not illegal to create it.
Where children are concerned, however, most of the changes being proposed don't apply. It's already an offence to create explicit deepfakes of under-18s, courtesy of existing legislation, which anticipated the way technology has progressed by outlawing computer-generated imagery.
This was confirmed in a landmark case in October, in which Hugh Nelson was jailed for 18 years for creating and sharing such deepfakes for customers who would supply him with the original innocent images.
The same law could almost certainly also be used to prosecute someone using AI to generate images of paedophilia without drawing on images of "real" children at all. Such images can increase the risk of offenders progressing to sexually abusing children. In Nelson's case, he admitted to encouraging his customers to abuse the children in the photographs they had sent him.
Having said all this, it's still a struggle to keep up with the ways in which advances in technology are being used to facilitate child abuse, both in terms of the law and the practicalities of upholding it. A report by the Internet Watch Foundation, a UK-based charity focused on this area, found that people are creating explicit AI child images at a "frightening rate".
Legal problems
The government's plans will close one loophole around images of children that was a feature of the Nelson case. Those who obtain AI tools with the intention of creating depraved images will automatically be committing an offence, even if they don't go on to create or share such images.
Beyond this, however, the technology still creates lots of challenges for the law. For one thing, such images or videos can be copied and shared many times over. Many of these can never be deleted, particularly if they are outside UK jurisdiction. The children involved in a case like Nelson's will grow up and the images will still be in the digital world, ready to be shared again and again.
This speaks to the challenges involved in legislating for a technology that crosses borders. Making the creation of such images illegal is one thing, but the UK authorities can't track and prosecute everywhere. They can only hope to do that in partnership with other countries. Reciprocal arrangements do exist, but the government clearly needs to be doing everything it can to extend them.
Meanwhile, it's not illegal for software companies to train an algorithm to produce child deepfakes in the first place, and perpetrators can hide where they are based by using proxy servers or third-party software. The government could certainly consider legislating against software providers, even if the international dimension again makes these things more difficult.
Then there are the online platforms. The Online Safety Act placed the responsibility for curbing harmful content on their shoulders, which arguably gives them more power than is wise.
In fairness, Ofcom, the communications industry regulator, is talking tough. It has required platforms to carry out risk assessments or face penalties that can be as much as 10% of revenues. Some campaigners fear this won't lead to harmful material being removed, but time will tell. Certainly, saying that the internet is ungovernable and that AI grows faster than we can keep up will not suffice when the UK government has a duty to protect vulnerable people such as children.
Beyond legislation
Another issue is that among people in the public sector, there is a lack of understanding of, and a fear around, AI and its applications. I see this through being in regular contact with numerous senior policymakers and police officers in my teaching and research. Many don't really understand the threats posed by deepfakes, or even the digital footprint they can have.
This chimes with a report by the National Audit Office in March 2024, which suggested that the British public sector is largely not equipped to respond to, or use, AI in the delivery of public services. The report found that 70% of staff didn't have the necessary skills to handle these issues. This points to a need for the government to tackle this gap by educating staff.
Decision-makers in government also tend to be drawn from older generations. Even the digitally literate can be taken in by this technology, but part of the solution has to be ensuring age diversity in the skills pool for shaping policies around AI and deepfakes.
Finally, there is the issue of police resourcing. My police contacts tell me how hard it is to stay on top of the latest shifts in technology in this area, not to mention the international dimension. It's difficult at a time when public funding is under such pressure, but the government has to look at increasing resources in this area.
It is vital that the future of AI-assisted imagery is not allowed to take precedence over child protection. Unless the UK addresses its legislative gaps and the skills issues in the public sector, there will be more Hugh Nelsons. The speed of technological change and the international nature of these problems make them especially difficult, but still, much more can be done to help.
Sue Roberts, Senior Lecturer in Public Management and Course Leader, Masters in Public Administration
This article is republished from The Conversation under a Creative Commons license. Read the original article.