
In January 2026, the UK Parliament’s Joint Committee on Human Rights convened a series of oral evidence sessions examining the regulation of AI and its implications for human rights. The sessions formed part of the Committee’s ongoing inquiry into human rights principles and the regulation of AI, bringing together expert perspectives on how emerging technologies can be developed and deployed in a manner that respects fundamental rights.
During the first session, held on 14 January 2026, witnesses provided evidence on how human rights protections can be integrated into AI systems at every stage of their development and use. Discussion focused on the need for effective monitoring and evaluation mechanisms once AI technologies are operational, as well as on the importance of transparency and accountability to users and individuals affected by automated decision-making. Particular emphasis was placed on ensuring that safeguards are not limited to the design phase but remain effective throughout the lifetime of AI systems.
The Committee examined the distinct challenges that AI presents for human rights, including risks related to data protection, bias, transparency and accountability. Evidence explored how these risks can be addressed within the private sector and how human rights considerations can be embedded at the core of AI design, governance and oversight. The importance of continuous scrutiny was highlighted as a means of identifying potential harms early and ensuring that appropriate remedies are available.
As part of its inquiry, the Joint Committee on Human Rights also took oral evidence from Google on 21 January 2026, focusing on the responsibilities of large technology companies in the development and deployment of AI. The Committee heard evidence from Google’s Global Head of Human Rights, who has been with the company for over a decade and founded Google’s Human Rights Program.
Google now plays a central role in the development and implementation of AI tools used widely by the public. It operates the most widely used search engine in the UK and has recently introduced automated “AI Overviews” displayed prominently within search results. Its AI portfolio also includes generative tools such as Gemini, advanced research systems developed by DeepMind and image-generation technology that enables users to create and edit high-quality images from prompts or simple sketches.
During the session, the Committee questioned Google on how AI can be developed and used in a manner that protects human rights. Discussion focused on whether the UK’s existing regulatory framework strikes an appropriate balance between encouraging innovation and safeguarding fundamental rights.
Committee members further explored issues including age-appropriate design, the prevention of biased outcomes and whether individuals should have a right to know when they are interacting with AI systems. The evidence also addressed the relevance of the UN Guiding Principles on Business and Human Rights, ongoing developments in large language models and generative AI tools, and the core human rights implications of AI technologies, particularly in relation to privacy, non-discrimination, data quality and responsibility for decision-making.
Taken together, these evidence sessions demonstrate the Committee’s continued commitment to ensuring that emerging technologies develop within a framework that upholds human rights and public accountability. Our team attended the January 2026 sessions and observed the oral evidence presented to the Committee.