Tag Archives: AI

Risk-based or sector led? How we can expect the government to regulate AI

Elon Musk’s AI chatbot, Grok, has received significant backlash in recent weeks after its ability to create sexualised images of women and children generated widespread media headlines. The scale of the public outcry has sharpened concerns about how quickly AI capabilities are outpacing existing safeguards. This has increased pressure on the government to more stringently regulate AI, which is reshaping industries at an unprecedented pace, bringing both opportunities and risks.

Prime Minister Keir Starmer previously suggested that the government would move away from the last Conservative administration’s ‘pro-innovation regulatory framework’ for AI, as set out in its white paper on AI published in 2023. Instead, Starmer has publicly emphasised the need for an overarching regulatory framework with additional protections in specific areas. He has also expressed concerns about the potential risks and impacts of AI, while acknowledging its transformative potential for society. In January 2025, the government published its AI Opportunities Action Plan, which set out its ambitions to use AI to ‘turbocharge’ economic growth and create AI growth zones to speed up planning processes for AI infrastructure.

The government’s approach to AI differs from the EU’s risk-based framework, which classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Each category has a different set of regulations and requirements for organisations developing or using AI systems. UK-based organisations with operations in the EU or those deploying AI systems within the bloc are likely to fall under the jurisdiction of the EU AI Act, requiring UK organisations to keep abreast of legislative changes and any potential future misalignments between the UK and EU in this area.

Although Starmer has pledged to turn the UK into an ‘AI superpower’, ministers have so far struggled to find the right balance between regulation and harnessing AI’s economic potential. At the end of 2024, the government proposed relaxing copyright laws to allow developers to train AI models on any material they can legally access. The plans received widespread criticism from creatives and high-profile musicians, who would have been required to opt out of having their work used. Ministers have since acknowledged that the move was misguided and announced that the associated legislation would be delayed while they develop a more extensive policy framework.

It is likely that we will see new legislation announced in the form of an AI and Copyright Bill at the King’s speech, which is due to take place in May 2026. This presents an opportunity for businesses to engage with the government at a key stage of the policymaking process.

The legislation is likely to focus on safety, copyright protections, and transparency. The government has been clear that it does not want to introduce measures that could drive AI investment out of the UK. Appearing before the Digital and Communications Committee in January 2026, technology secretary Liz Kendall stated that many of the larger AI companies are opposed to ‘onerous burdens’, suggesting the government is likely to adopt a cautious approach in its efforts to more stringently regulate AI to avoid deterring potential investment in the UK.

This means we can expect the government to attempt to tread a line between the EU’s risk-based framework and the deregulatory approach taken in the US in order to strike the right balance between innovation and oversight. Despite both the EU and UK focussing on principles such as accountability and transparency, the diverging approaches observed so far in practice mean a consistent approach to the regulation of AI is unlikely, at least in the near term.

If you would like to discuss AI regulation in more detail, please reach out to Annabelle Black at annabelle@gkstrategy.com.

Education and Digital Revolution: AI under Labour

The government is embracing the evolving landscape of artificial intelligence (AI) and attempting to integrate it into the education system. Improving mainstream education and increasing accessibility for young people have been central to Labour’s agenda, with one of the five key manifesto missions being ‘breaking barriers to opportunity’. To address challenges in mainstream schools, ministers are focused on issues such as teacher recruitment and retention. However, in the current economic and political climate, immediate solutions are limited, bar the initial 5.5% teacher pay rise in September 2024. To address these shortfalls in the long term, the government is exploring innovative ways to make the teaching profession more appealing and improve the overall efficiency of educational provision, including the use of AI to support teachers and school administrators.

As the government recognises the potential risks for young children when accessing AI, the introduction of AI into the classroom will be a teacher- and administrator-facing policy. To mitigate these risks, the government has committed to implementing safeguards, including age restrictions on who can use AI tools and filtering and monitoring standards to ensure schools have the appropriate restrictions in place. However, with appropriate regulation, there is potential for expanding the use of AI tools to student-facing use in supervised educational environments. Stakeholders and developers should anticipate these restrictions, and the potential expansion from a teacher-facing policy to one that includes students, when developing AI models for educational settings.

AI models in education will focus on generative AI, with applications across various teaching and learning functions, such as creating educational resources, curriculum planning, feedback, revision activities, administrative tasks and supported personalised learning. The government is also likely to encourage the introduction of other AI tools outside of the classroom that can enhance efficiency in schools and reduce administrative burdens. The new technologies and tools will likely require additional skills training for teachers and support staff. Organisations that provide the necessary training in this area, alongside the development of AI, are likely to be viewed favourably by government and schools.

To ensure a safe and responsible introduction of AI into the classroom, the government is collaborating with the educational technology sector, experts and academics. As part of this dialogue, the government is piloting the EdTech Evidence Board to analyse the impact of edtech tools on teaching and learning. The Chartered College of Teaching is delivering the initial pilot scheme and is inviting organisations in the edtech sector to submit projects to the board later this year. This is an opportune moment for education service providers and stakeholders to engage with policymakers, demonstrating how their products can support the government’s educational objectives.

We’d be delighted to share our thoughts on what the government’s approach to AI and edtech could mean for you and how you can engage with the ongoing dialogue. Please contact mariella@gkstrategy.com if you would like to discuss the reforms with the GK team.

Artificial Intelligence: ‘The Future of Defence Capability’

Is tech political?


It is not unreasonable to take the position that technology firms – at the cutting edge of innovation through new product and service development – should steer well away from the complex world of politics. However, this overlooks the reality that the policy and regulatory decisions underpinning the sector’s operating environment are, by their nature, political, and therefore engaging proactively with policymakers to help shape that future environment makes good business sense.

The range of current Government workstreams on technology and digital issues is vast: the future of the UK’s data and privacy regime, innovation and digital regulation, online safety, competition in digital markets, cyber security, AI technologies, digital tax and online advertising. A significant amount of thinking is going on across Government about how policy and regulation should respond, and how to promote growth in this and other sectors where ministers see opportunities for the UK to develop a competitive edge in the post-Brexit environment.

Indeed, for SMEs or newer entrants to the market, the risks of sitting back are even greater, as established players and those with the loudest voices look either to maintain the status quo or to shape the regulatory environment in their favour, while heavy-handed regulators risk creating a stifling environment for growth.

The tech sector is the fastest growing in the UK economy, but there is no monopoly of wisdom within Government about how best to tackle the challenges it faces, and the risks of unintended consequences are significant. It is essential, therefore, that technology firms communicate effectively about the value of tech and work with the Government to shape the policy and regulatory environment in a way that supports long-term growth. Tech is political – and so it ought to be. But it is essential that companies take advantage of opportunities to be part of the conversation within Government and beyond.

Our team has significant experience of advising technology companies, helping them to engage with policymakers on a range of digital policy issues. If you would be interested in a conversation, please contact Will Blackman at will@gkstrategy.com