Regulatory Frameworks for Responsible AI Innovation in a Corporate Setting: Bridging Ethical Governance and Technological Advancement

Lead Research Organisation: NOTTINGHAM TRENT UNIVERSITY

Abstract

Context
Artificial Intelligence (AI) is a transformative force with global reach, yet its rapid development has outpaced regulatory frameworks, raising pressing ethical and social concerns. Algorithmic bias, lack of transparency, and accountability gaps risk exacerbating inequities and eroding public trust. As leaders in technology and governance, the UK and US face a critical challenge: their fragmented regulatory approaches—from the UK’s principles-based Pro-Innovation White Paper to the US’s sectoral NIST AI Risk Management Framework—struggle to address AI’s transnational risks. This project investigates how UK-US collaboration can harmonise legal frameworks to incentivise ethical AI governance, mitigate systemic harms, and institutionalise transparency in a corporate setting, positioning both nations as pioneers of globally interoperable, human-centric regulation.
Challenge
Balancing AI innovation with public welfare is complicated by the technology’s global scalability and complexity. Current regulations—reactive, sectoral, or principles-based—fail to address culturally specific harms, such as biased corporate algorithms. Generative AI exacerbates risks like misinformation, greenwashing, and opaque decision-making in companies. Adaptive governance models are urgently needed to reconcile innovation with accountability, equitable redress, and safeguards against systemic inequities.
Aims and Objectives
The project aims to develop evidence-based frameworks for Responsible AI—systems prioritising ethics, transparency, fairness, and accountability—through four objectives (mirroring the four workstreams):

Mapping the Regulatory Landscape: Compare global AI governance approaches to identify gaps in rights protections and synthesise best practices from UK and US frameworks in a corporate setting, yielding insights applicable to AI regulation more broadly.
AI Regulation Participatory Research: Conduct multi-stakeholder workshops with policymakers, industry, and civil society to co-design adaptive strategies, defining key challenges and principles for agile governance frameworks that align AI innovation with ethics amid technological shifts.
Sectoral Pilot and Vulnerability Analysis (Finance Industry): Evaluate AI accountability gaps in the financial industry—a sector with broad AI adoption and far-reaching impact. Identify vulnerable parties affected by AI applications and tailor recommendations to mitigate those vulnerabilities.
Generative AI Governance: Propose protocols for transparency (e.g., AI content labelling) and intellectual property safeguards to counter misinformation and copyright risks.

Potential Applications and Benefits

Policy Reform: Develop sector-specific regulatory templates for finance, including harmonised audit protocols and liability frameworks to address AI-driven harms (e.g., biased loan denials).
Industry Compliance: Create enforceable transparency standards (e.g., mandatory audits for credit algorithms) to align financial institutions with anti-bias laws and reduce litigation.
Citizen Safeguards: Establish legal redress pathways (e.g., appeals processes for AI-denied services) to protect marginalised groups affected by opaque systems.
Scholarly Impact: Build an open-access repository of financial-sector case studies on AI liability, informing global debates on law-centric governance.
Global Governance: Propose a blueprint for cross-jurisdictional regulatory coherence in AI application, balancing innovation with accountability to mitigate systemic risks.

By leveraging the UK’s civil society expertise and US tech-sector dynamism, this project will strengthen participatory accountability models. The finance pilot exemplifies how transatlantic cooperation can bridge regulatory gaps, enforce fairness, and institutionalise redress. Outcomes will empower policymakers to advance AI governance aligned with human rights, ensuring innovation reinforces democratic values, public trust, and equitable justice.
