NEW ZEALAND JUDICIARY PUBLISHES GUIDELINES ON THE USE OF AI IN COURTS AND TRIBUNALS

The New Zealand judiciary has published guidelines on the use of artificial intelligence (AI) in Courts and Tribunals. There are three guidelines in total, each geared towards a different audience:

·        Judges, judicial officers, Tribunal members and judicial support staff;

·        Lawyers; and

·        Non-lawyers, such as self-represented litigants, lay-advocates and McKenzie friends.

The guidelines follow on from the commissioning of a Judicial Artificial Intelligence Advisory Group by Chief Justice Helen Winkelmann in April 2023, and align with the judiciary’s “overall responsibility for the integrity of the administration of justice and court processes”.

The use of AI is an increasing reality in legal contexts, both internationally and in New Zealand. While AI brings with it new possibilities and potential benefits for the legal world and those who engage with it, it also carries risks and challenges, such as the advent of fictitious, AI-generated case authorities. The new guidelines respond to this reality by offering “practical advice” on the appropriate use of AI and how to navigate its “inherent” risks.

The guidelines focus on ‘generative AI’ or ‘GenAI’ programmes, such as ChatGPT, which use pattern recognition and probability to generate new content (such as text) based on already held data. The guidelines note the potential benefit of these programmes for certain tasks, such as summarising text. However, the guidelines also stress their limitations, including that the content they generate is not sourced from authoritative databases. Instead, GenAI bases its content on the information it has been trained on and the data its users have entered. As the guidelines note, this poses issues in relation to privacy, confidentiality, and suppression, as any information entered into a GenAI programme could become public.

The way that GenAI sources information also raises potential issues around accuracy, reliability, and even misinformation. Any errors or bias contained in the sources and data that the AI draws on may carry over into the newly generated content. Bearing this in mind, the guidelines caution users to be aware of and address any ethical issues, particularly as “GenAI chatbots generally do not account for New Zealand’s cultural context, nor specific cultural values and practices of Māori and Pasifika”.

Overall, the new guidelines stress caution around the use of AI. For lawyers, the guidelines serve as a reminder that their professional and ethical obligations under the Lawyers and Conveyancers Act 2006 and the Lawyers and Conveyancers Act (Lawyers: Conduct and Client Care) Rules 2008 continue to apply when engaging with this new technology. This includes taking reasonable steps to ensure that information provided to Courts and Tribunals is accurate, and adhering to confidentiality, suppression and privacy considerations.

Article written by Jessica MacPherson and Farzana Nizam
