Latest news with #legalFramework


Mail & Guardian
03-07-2025
- Business
Humans must be the decision-makers when AI is used in legal proceedings
The use of artificial intelligence in arbitration may support justice, but it cannot replace those who are tasked with safeguarding it.

South Africa has no legal framework to govern the use of artificial intelligence (AI) in alternative dispute resolution (ADR) proceedings, placing the principles of fairness, accountability, transparency and confidentiality at risk when machines join the table. To offer much-needed direction for parties and tribunals integrating AI into adjudications and arbitration, the Association of Arbitrators (Southern Africa) (AASA) issued guidelines in May 2025 on the use of AI tools in this environment.

AI tools are already embedded in arbitration proceedings in South Africa and are assisting with numerous legal tasks. Such tasks include collating and sequencing complex case facts and chronologies; managing documents and expediting the review of large volumes of content; conducting legal research and sourcing precedents; drafting text (for example, legal submissions or procedural documents); and facilitating real-time translation or transcription during hearings.

While the new AI guidelines are neither exhaustive nor a substitute for legal advice, they provide a helpful framework to promote responsible AI use, protect the integrity of proceedings and balance innovation with ethical awareness and risk management. As a starting point, the guidelines stress the importance of parties reaching agreement upfront on the use of AI, including whether the arbitrator or tribunal has the power to issue directives regarding its use.

The use of AI in arbitration can easily create confidentiality and data-security risks. One of the key advantages of arbitration over public court proceedings is the confidentiality it offers, and irresponsible use of AI by the parties or the tribunal can threaten that confidentiality and expose the parties to risk.

AI tools also come with technical limitations and uneven reliability. They can produce flawed or 'hallucinated' results, especially in complex or novel fact patterns, leading to misleading outputs or fabricated references; AI tools are well known to fabricate case-law citations when answering legal questions.

The AI guidelines highlight core principles that should be upheld whenever AI is used. Tribunals and arbitrators must remain accountable and must not cede their adjudicative responsibilities to software: humans ultimately bear responsibility for the outcome of a dispute. Ensuring confidentiality and security is another key principle; public AI models, for example, sometimes use inputs for further 'training', which raises the risk that sensitive information could inadvertently be exposed. Transparency and disclosure are also important, and parties and tribunals should consider whether AI usage needs to be disclosed to all participants. Finally, fairness in decision-making is paramount: biases in training data can produce biased or inaccurate AI-generated outputs, so human oversight of any AI-driven analysis is indispensable to ensure just and equitable results.

The guidelines advise tribunals to adopt a transparent approach to AI usage throughout proceedings, whether the technology is deployed by the tribunal itself or by the parties. Tribunals should also consider obtaining explicit agreement on whether, and how, AI-based tools may be used, and determine upfront whether disclosure of their use is required.
Safeguarding confidentiality should be considered upfront and throughout the proceedings, with agreement reached on which information may be shared with which AI tools so that the parties are protected. During hearings, any AI-driven transcription or translation services should be thoroughly vetted to preserve both accuracy and confidentiality, and all parties should have equal access to AI tools so that no party is prejudiced.

Ultimately, the arbitrator's or adjudicator's independent professional judgment must determine the outcome of any proceeding, even if certain AI-generated analyses or texts help shape the final award. As disputes become ever more data-intensive and technological solutions proliferate, parties, counsel and tribunals must consider how best to incorporate AI tools into their processes. The guidelines affirm that human adjudicators remain the ultimate decision-makers.

Vanessa Jacklin-Levin is a partner and Rachel Potter a senior associate at Bowmans South Africa.


The Guardian
17-06-2025
Facial recognition technology needs stricter regulation
The Metropolitan police's recognition of the value in 'some sort of framework or statutory guidance' for live facial recognition is welcome (Live facial recognition cameras may become 'commonplace' as police use soars, 24 May). However, it is not just police use of this technology that needs a clear legal framework. Despite the scale and speed of its expansion, there is still no specific law providing a basis for live facial recognition or other emerging biometric technologies, whether these are used in the public or private sector.

Biometric surveillance is expanding rapidly, not just in policing but across society: in train stations, schools and supermarkets. Newer biometric systems go further, claiming to infer people's emotional states, raising serious concerns about their accuracy, ethics and legality.

In 2020, the court of appeal – in the UK's only judgment on live facial recognition to date – found that a deployment by South Wales police was unlawful. It identified deficiencies in the legal framework and set out minimum standards for lawful use. Since then, a patchwork of voluntary guidance has emerged. New research from the Ada Lovelace Institute has found that this patchwork is inadequate in practice, creating legal uncertainty, putting fundamental rights at risk and undermining public trust. Crucially, we find that non-police uses, such as those in the private sector or involving inference, are subject to even fewer safeguards and so stand on far shakier legal ground.

Governance is simply not keeping up with technological adoption and advancement. Policymakers must act. We urgently need new legislation to establish a comprehensive framework covering all forms of biometric surveillance and inference – not just police use – and an independent regulator to oversee and enforce it.

Michael Birtwistle
Associate director, the Ada Lovelace Institute


Bloomberg
06-06-2025
- Politics
'Reverse Discrimination' Ruling Is a Win for the Rule of Law
White individuals and straight people do not need to meet a higher burden of proof than members of minority groups to prevail in employment discrimination suits, the Supreme Court held Thursday. The immediate effect is to make so-called 'reverse discrimination' claims easier to bring. However, the decision also solidifies the existing legal framework for workplace discrimination — a framework that the court's ultra-conservative justices would like to upend. The result is not so much a win for conservatives or liberals as for legal stability.

The case, Ames v. Ohio, arose when a straight White woman employed by the Ohio Department of Youth Services applied for a management position, which instead went to a lesbian candidate. She was subsequently demoted, and her old job was given to a gay man. Ames sued, alleging these decisions amounted to employment discrimination.