Home-Grown success: AI revolutionising health tech through a powerful cross-sector partnership

Mail & Guardian · 7 hours ago
last week joined a remarkable community of innovators and healthcare professionals attending
The feedback from the audience, especially from clinicians, was a powerful reminder that when innovation is driven by purpose, it can make a real difference.

Central to this story is the inspiring journey behind Healthbridge's partnership with
The founders' paths ran from shared roots at boarding school in Cape Town to, coincidentally, working at the same Cape Town venture capital firm after completing their degrees. They both left the VC firm to pursue Nora, originally designed as an educational technology tool that helps students learn by using artificial intelligence to automatically generate study materials from course resources such as lecture videos, slides, required readings and case studies.

About two months after launching Nora Education, the team was introduced to Healthbridge, which was looking to integrate similar technology into its clinical platform, for example using AI to generate clinical documentation from recordings of patient-provider conversations. This introduction, and the lengthy discussions that followed, then evolved into

Since the launch of its pilot phase at the start of this year, more than 265 clinicians have started using the solution, with over 41,000 consultations processed.

Related Articles

Humans must be the decision-makers when AI is used in legal proceedings

Mail & Guardian

6 hours ago

The use of artificial intelligence in arbitration may support justice, but it cannot replace those who are tasked with safeguarding it.

South Africa has no legal framework to govern artificial intelligence (AI) use in alternative dispute resolution (ADR) proceedings, placing at risk the preservation of the principles of fairness, accountability, transparency and confidentiality when machines join the table. To offer much-needed direction for parties and tribunals integrating AI into adjudications and arbitration, the Association of Arbitrators (Southern Africa) (AASA) issued AI guidelines in May 2025 on the use of AI tools in this environment.

AI tools are already embedded in arbitration proceedings in South Africa and are assisting with numerous legal tasks, including collating and sequencing complex case facts and chronologies; managing documents and expediting the review of large volumes of content; conducting legal research and sourcing precedents; drafting text (for example, legal submissions or procedural documents); and facilitating real-time translation or transcription during hearings.

While the new AI guidelines are neither exhaustive nor a substitute for legal advice, they provide a helpful framework to promote responsible AI use, protect the integrity of proceedings and balance innovation with ethical awareness and risk management. As a starting point, the guidelines stress the importance of parties reaching agreement upfront on the use of AI, including whether the arbitrator or tribunal has the power to issue directives regarding its use.

The use of AI in arbitration proceedings can easily create confidentiality and data-security risks. One of the key advantages of arbitration over public court proceedings is the confidentiality it offers. Irresponsible use of AI by the parties or the tribunal can threaten that confidentiality and expose the parties to risk.

AI tools also have technical limitations and wavering reliability: they can produce flawed or 'hallucinated' results, especially in complex or novel fact patterns, leading to misleading outputs or fabricated references. AI tools are well known to fabricate case-law references when answering legal questions.

The AI guidelines highlight core principles that should be upheld whenever AI is used. Tribunals and arbitrators must remain accountable and must not cede their adjudicative responsibilities to software; humans ultimately bear responsibility for the outcome of a dispute. Ensuring confidentiality and security is also a key principle: public AI models sometimes use user inputs for further 'training', which raises the risk that sensitive information could inadvertently be exposed. Transparency and disclosure are likewise important, and parties and tribunals should consider whether AI usage needs to be disclosed to all participants. Finally, fairness in decision-making is paramount. AI-generated outputs can carry underlying biases or inaccuracies stemming from their training data, so human oversight of any AI-driven analysis is indispensable to ensure just and equitable results.

The guidelines advise tribunals to adopt a transparent approach to AI usage throughout proceedings, whether deployed by the tribunal itself or by the parties. Tribunals should consider obtaining explicit agreement on whether, and how, AI-based tools may be used, and determine upfront whether disclosure of the use of AI tools is required. Safeguarding confidentiality should be considered upfront and throughout the proceedings, with agreement on what information may be shared with which AI tools so that parties are protected. During hearings, any AI-driven transcription or translation services should be thoroughly vetted to preserve both accuracy and confidentiality.

Equal access to AI tools for all parties should be ensured so that no party is prejudiced. Ultimately, the arbitrator's or adjudicator's independent professional judgment must determine the outcome of any proceeding, even if certain AI-generated analyses or texts help shape the final award. As disputes become ever more data-intensive and technological solutions proliferate, parties, counsel and tribunals must consider how best to incorporate AI tools into their processes. The guidelines affirm that human adjudicators remain the ultimate decision-makers.

Vanessa Jacklin-Levin is a partner and Rachel Potter a senior associate at Bowmans South Africa.

'Half SA's Covid-19 patients suffered long-term mental problems'

The Herald

9 hours ago

Researchers have found that more than half of South Africans infected with Covid-19 experienced lasting mental and cognitive health issues long after their recovery, some for up to two years later. The research, conducted by the University of Cape Town and published in the journal Brain, Behaviour & Immunity - Health, followed 97 people who tested positive for Covid-19 during the first three waves of the pandemic. These people, ranging from those who had no symptoms to those who were critically ill, were interviewed at least six months after they were infected to assess ongoing neuropsychiatric symptoms such as anxiety, fatigue and memory problems.

Lead researcher Prof Jonny Peter said the team found that illness severity did not necessarily predict who would go on to experience these long-term effects; even people who had mild or no symptoms reported problems months later. 'Nearly half of the participants showed signs of cognitive or memory difficulties on standard screening tests, and over 50% reported ongoing fatigue or mental health challenges,' he said. 'The team also looked for early warning signs in the blood — specific proteins or markers taken during the patients' illness that might help predict who would develop these persistent symptoms.' Peter said no clear patterns emerged.
