
Regulating AI: what should the tech sector expect?

Businesses may need to adapt to prepare for future legislation.


Data and digital technologies are transforming the way we live, work, and learn

In the tech world, data is often described as the ‘new oil’. Why? Because it’s seen as a valuable commodity with the power to transform business in today’s digital economy. An asset that, when harnessed correctly, could unlock substantial value.

Data can create new opportunities for business growth across sectors, and as we move forward it’s imperative that we match our technological progress with robust ethical frameworks and regulations.

There are necessary protections in place to ensure people’s data is kept safe and used appropriately. Existing UK data laws that help protect consumers include the Data Protection Act 2018, which controls how personal information is used by organisations, businesses, or the government. It is the UK’s implementation of the General Data Protection Regulation (GDPR).

Artificial Intelligence: new UK legislation is being considered

While there are no UK laws that were explicitly written to regulate AI, the technology is partially regulated through a patchwork of legal and regulatory requirements built for other purposes, which now also capture uses of AI.

UK data protection law includes specific requirements around ‘automated decision-making’ and the broader processing of personal data, which also covers processing for the purpose of developing and training AI technologies. The upcoming Online Safety Bill also has provisions specifically concerning the design and use of algorithms.

The Government’s AI White Paper, published in late 2022, was a much-anticipated introduction to how AI might be regulated. The paper takes a principles-based approach, setting out how the rapid expansion in the use of AI will be monitored and aiming to ensure that any regulatory response is proportionate.

The UK government says it is committed to developing a pro-innovation position on governing and regulating AI, and recognises the potential benefits of the technology. This should mean that any rules developed to address future risks and opportunities will both support businesses in understanding how they can incorporate AI systems and assure consumers that such systems are safe and robust.

What does AI regulation look like in other jurisdictions?

The EU has put the finishing touches to its first AI regulation, the AI Act, which aims to establish a legal framework for the development, deployment, and use of AI systems in the EU.

In the US, tech leaders were summoned to the White House in May to discuss responsible AI development. They covered three areas specifically:

(1) The need for companies to be more transparent with policymakers, the public, and others about their AI systems.

(2) The importance of being able to evaluate, verify, and validate the safety, security, and efficacy of AI systems.

(3) The need to ensure AI systems are secure from malicious actors and attacks.

As we move into a new era of automation and generative AI, here are some key considerations for security, regulation, compliance, and people management.

Seven considerations for businesses using AI systems

  1. We need to better understand the problems we are trying to solve. Implementing AI will be significantly more effective if the problem is well understood first. AI cannot be used competently without effective prompting and will not always produce perfect results.
  2. Build a talented, multidisciplinary team that can test the AI system thoroughly and improve the distribution of data.
  3. Start with small, manageable projects, and build from there.
  4. Prioritise ethics and transparency and consider how to explain the use of AI systems – what you’ve asked it to do and why. Does this meet your ethical requirements?
  5. Foster a culture of experimentation and learning, especially within bigger businesses where there may be a deeper cultural change required.
  6. Focus on governance, data quality and transparency.
  7. Stay on top of regulatory developments.

Seven principles regulators might focus on

In its AI White Paper, the government says it expects well-governed AI to be used with due consideration to concepts of fairness and transparency. Similarly, it expects all actors in the AI lifecycle to appropriately manage risks to safety and to provide for strong accountability.

Some of the principles regulators might look at in your sector include:

  1. Ensure AI is used safely and securely
  2. Ensure data accuracy
  3. Retain oversight
  4. Ensure AI systems are used responsibly and ethically
  5. Provide clarity on ownership of second-hand outputs created using AI
  6. Offer transparency so users can make informed decisions on how they use AI
  7. Assign ownership to data


For more technology insight and practical tips visit: All Insights articles.

This material is published by NatWest Group plc (“NatWest Group”), for information purposes only and should not be regarded as providing any specific advice. Recipients should make their own independent evaluation of this information and no action should be taken, solely relying on it. This material should not be reproduced or disclosed without our consent. It is not intended for distribution in any jurisdiction in which this would be prohibited. Whilst this information is believed to be reliable, it has not been independently verified by NatWest Group and NatWest Group makes no representation or warranty (express or implied) of any kind, as regards the accuracy or completeness of this information, nor does it accept any responsibility or liability for any loss or damage arising in any way from any use made of or reliance placed on, this information. Unless otherwise stated, any views, forecasts, or estimates are solely those of NatWest Group, as of this date and are subject to change without notice. Copyright © NatWest Group. All rights reserved.
