
Accountability and consistency in AI regulation

Organisations must establish their own guidelines for the ethical and responsible use of artificial intelligence (AI), writes Lauren Trevelyan, principal consultant at Altair

The housing sector, known for its strict regulation and its focus on quality and consumer rights, now faces significant challenges as it grapples with advanced technologies such as generative AI. With increasing bad press and scare-mongering around AI, it’s clear that we urgently need a robust regulatory framework.

The UK government has made some impressive strides with the establishment of the Artificial Intelligence Safety Institute (AISI), the first state-backed organisation focused on AI safety for the public. This initiative aims to develop the socio-technical infrastructure necessary to understand and proactively manage AI risks.

By collaborating with the Centre for Data Ethics and Innovation (now called the Responsible Technology Adoption Unit), the Ada Lovelace Institute, the Alan Turing Institute and programmes like Bridging Responsible AI Divides and Responsible AI UK, the AISI is leading foundational AI safety research.


Even with these efforts, there’s still a significant gap in common standards for AI quality and consistency. The AISI is attempting to address this. The government’s white paper on AI focuses on supporting innovation through a flexible framework rather than rigid regulations. However, since this framework isn’t statutory, and its effectiveness is under periodic review, it leaves us wondering how the housing sector’s voice will be heard in these evaluations.

It’s currently unclear how many housing sector providers are complying with emerging standards such as those set out in ‘A pro-innovation approach to AI regulation’. This government policy paper encourages regulators to look at technical ISO standards when aligning AI usage to the framework principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

As AI references in sector-specific regulations are few and far between, and the existing policy paper requires a high level of interpretation and technical expertise, it remains difficult to see how many organisations can comply, or indeed are complying. This might be a challenge you’re already facing in your organisation.

The government’s research paper points out relevant use cases for the sector, such as the use of AI systems to accelerate the recruitment process. It also highlights issues like poor data quality, whether that’s misleading data being fed into the AI system or biased and stereotypical data being reproduced.

While it is suggested that AI can free us from tedious tasks, it also brings ethical and governance challenges, especially as AI autonomy increases.

It may be that you have already encountered some ethical dilemmas or data-quality issues with AI in your housing organisation. A key goal of AI regulation is to boost public trust, but current regulations don’t address crucial issues like data access, which are vital for training large language models (LLMs). This gap highlights the need for a sector-specific approach to AI regulation, considering the unique risks and needs of social housing when compared with other industries.

The Digital Regulation Cooperation Forum (DRCF) has been instrumental in aligning regulations and fostering dialogue among regulators. It has published a vision for a coordinated approach to digital regulation, covering the benefits, harms and auditing of algorithms.

However, the DRCF was not created to support all the functions necessary for implementing a comprehensive AI regulatory framework. 

The government’s commitment to a digital sandbox for AI is promising, particularly in areas with high AI investment and industry demand. However, the reliance on cross-regulator collaboration means there needs to be a clear leader to issue joint guidance.

The AI Standards Hub outlines a governance framework encouraging responsible practices, emphasising principles like safety, security, transparency, fairness, accountability and contestability. A layered approach to AI technology standards is proposed:

Layer 1: Adoption of sector-agnostic standards to support the implementation of cross-sectoral principles.

Layer 2: Adaptation of these governance practices to address specific risks raised by AI in particular contexts. 

Central coordination is necessary to navigate trade-offs, like minimising bias while protecting privacy. The government’s pro-innovation approach to AI regulation must balance these needs.

The lack of comprehensive regulation and the complexity of LLMs mean that housing associations and tenants face increased ambiguity and risk. LLMs are often trained on generic datasets that may not reflect the diverse communities served by the housing sector. Without specific regulatory guardrails, the sector relies heavily on interpreting emerging ethical standards and a thorough understanding of AI training processes.

While specific regulation for AI is clearly lacking, it is essential for organisations to create their own guidelines and policies for using it internally. Since the rules are still being developed, organisations cannot rely on external direction to ensure AI is used responsibly and ethically. Boards need to consider how AI is used in their organisations.

They must set out clear principles for fairness and transparency, protect data privacy and security, create reliable procedures for using AI, invest in employee training, and involve stakeholders to address their concerns. By taking these steps, organisations can use AI responsibly, reduce risks, build trust and lead the way in ethical technology use. 

There is encouraging movement in the development of explainable AI (XAI), which aims to improve trust in AI decision-making. However, the housing sector, known for its rigorous assessment processes, requires tailored regulations to ensure responsible AI deployment.
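
To make the idea of explainability concrete, the short sketch below applies the open-source SHAP library to a hypothetical repairs-prioritisation model. The dataset, features and model here are invented purely for illustration and are not drawn from any housing provider’s system; the point is simply to show the kind of per-decision breakdown that XAI tooling can produce.

# Illustrative only: a hypothetical repairs-prioritisation model and a
# SHAP explanation of one of its predictions. The features, data and
# model are invented for demonstration purposes.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Invented property data: age, repair history and whether damp was reported.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "property_age_years": rng.integers(1, 120, size=500),
    "previous_repairs": rng.integers(0, 15, size=500),
    "damp_reported": rng.integers(0, 2, size=500),
})
# Invented label: was the repair treated as urgent?
y = ((X["damp_reported"] == 1) | (X["previous_repairs"] > 8)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes an individual prediction to the input features, so staff
# can see which factors pushed a case towards being flagged as urgent.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
print(contributions)

Surfacing which factors pushed an individual case towards a particular outcome is one practical route towards the ‘appropriate transparency and explainability’ principle referenced earlier, although translating such output into plain-language explanations for tenants and staff remains a challenge.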

The sector needs a call to arms: it must demand better regulatory frameworks that reflect the unique needs and risks of social housing. This means advocating for improved data access, transparency and accountability from AI providers. By ensuring AI systems are trained on relevant, high-quality datasets, we can harness AI’s benefits while mitigating its risks.

This demand for better regulation should be a genuine cross-sector response, requiring coordination to ensure that the sector’s voice doesn’t get lost among other industries.

As AI technologies evolve rapidly, we must proactively engage with sector regulatory bodies to shape a framework that ensures safety, fairness and accountability. Only through a collaborative and informed approach can we navigate the complexities of AI and leverage its potential to improve housing services.

Lauren Trevelyan, principal consultant, Altair
