
Orchestrating Mental Health Advice Via Multiple AI-Based Personas Diagnosing Human Psychological Disorders
In today's column, I examine a newly identified, innovative approach to using generative AI and large language models (LLMs) for medical-related diagnoses, and then share a simple mini-experiment I performed to explore the approach's efficacy in a mental health therapeutic analysis context. The upshot is that the approach involves using multiple AI personas in a systematic and orchestrated fashion. The method is worthy of additional research and possibly of being adapted into day-to-day mental health therapy practice.
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas accompany these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes; see the link here.
If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.
Orchestrating AI Personas
One of the perhaps least leveraged capabilities of generative AI and LLMs is their ability to computationally simulate a kind of persona. The idea is rather straightforward. You tell the AI to pretend to be a particular type of person or to exhibit an outlined personality, and the AI attempts to respond accordingly. For example, I made use of this feature by having ChatGPT undertake the persona of Sigmund Freud and perform therapy as though the AI were mimicking or simulating what Freud might say (see the link here).
You can tell LLMs to pretend to be a specific person. The key is that the AI must have sufficient data about the person to pull off the mimicry. Also, your expectations about how good a job the AI will do in such a pretense mode need to be soberly tempered, since the AI might end up far afield. An important aspect is not to assume or believe that the AI will be precisely like the person. It won't be.
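To make the mechanics concrete, here is a minimal sketch of how such a persona instruction might be issued programmatically. This is merely one hedged illustration, not a prescribed method: it assumes the OpenAI Python SDK, the model name is a placeholder, and the prompt wording is simply one plausible way to phrase the pretense.

```python
# Minimal sketch: asking an LLM to adopt a specific persona (illustrative only).
# Assumes the OpenAI Python SDK; model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona_instruction = (
    "Pretend to be Sigmund Freud. Respond as Freud plausibly might, "
    "based on his published writings. If you are unsure what he would "
    "say, say so rather than inventing a position."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system", "content": persona_instruction},
        {"role": "user", "content": "I keep dreaming that I am late for a train."},
    ],
)
print(response.choices[0].message.content)
```

Note that the system message carries the persona while the user message carries the dialogue, which is the conventional way to keep the pretense stable across turns.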
Another angle to using personas is to broadly describe the nature of the persona that you want the AI to pretend to be. I previously did a mini-experiment of having ChatGPT pretend to be a team of mental health therapists that confer when undertaking a psychological assessment (see the link here). None of the personas represented a specific person. Instead, the AI was broadly told to make use of several personas representing a group of therapists.
There are a lot more uses of AI personas.
I'll list a few. A mental health professional who wants to improve their skills can carry on a dialogue with an LLM that is pretending to be a patient, a handy means of enhancing the therapist's psychological analysis acumen (see the link here). Here's another example. When doing mental health research, you can tell AI to pretend to be hundreds or thousands of respondents to a survey. This isn't necessarily equivalent to using real people, but it can be a fruitful way to gauge what kind of responses you might get and how to prepare accordingly (see the link here and the link here).
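As a hedged sketch of that survey idea, the simulation can be little more than a loop that varies the persona description on each call. The respondent attributes, model name, and SDK usage below are my own placeholder assumptions, not a method drawn from the research cited above.

```python
# Minimal sketch: simulating many survey respondents with varied personas.
# Illustrative only; attributes, model, and prompts are placeholder assumptions.
import random
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

AGES = ["in their 20s", "in their 40s", "in their 60s"]
ATTITUDES = ["skeptical of therapy", "enthusiastic about therapy", "undecided"]

def simulate_respondents(question: str, n: int = 100) -> list[str]:
    """Ask the same survey question of n differently described personas."""
    answers = []
    for i in range(n):
        persona = (f"Pretend to be survey respondent #{i}, an adult "
                   f"{random.choice(AGES)} who is {random.choice(ATTITUDES)}.")
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": question},
            ],
        )
        answers.append(response.choices[0].message.content)
    return answers
```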
And so on.
Latest Research Uses AI Personas
A recently posted research study innovatively used AI personas in the realm of performing medical diagnoses. The study, entitled 'Sequential Diagnosis with Language Models' by Harsha Nori, Mayank Daswani, Christopher Kelly, Scott Lundberg, Marco Tulio Ribeiro, Marc Wilson, Xiaoxuan Liu, Viknesh Sounderajah, Jonathan Carlson, Matthew P. Lungren, Bay Gross, Peter Hames, Mustafa Suleyman, Dominic King, and Eric Horvitz (arXiv, June 30, 2025), made these salient remarks (excerpts):
The study identifies some interesting twists on how to make use of AI personas.
The crux is that they had one AI persona that served as a diagnostician, another that fed a case history to the AI-based diagnostician, and yet another that acted as an assessor of how well the clinical diagnosis was proceeding. That's three AI personas set up to aid in performing a medical diagnosis on the various case studies presented to the AI.
The researchers opted to go further with this promising approach by having a panel of AI personas that performed medical diagnoses. They decided to have five AI personas that would each, in turn, confer while stepwise undertaking a diagnosis. The names given to the AI personas generally suggested what each one was intended to do, consisting of Dr. Hypothesis, Dr. Test-Chooser, Dr. Challenger, Dr. Stewardship, and Dr. Checklist.
Without unduly anthropomorphizing the approach, using a panel of AI personas is analogous to having a panel of medical doctors confer about a medical diagnosis. Each AI persona has a designated specialty, and the panel walks through the patient's case history so that each specialty takes its turn during the diagnosis.
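To show the shape of such an orchestration, here is a minimal Python sketch of a turn-taking panel. It is illustrative only and not the researchers' actual implementation; the persona names come from the study, while the role descriptions are my paraphrases, and the SDK usage and model name are placeholder assumptions.

```python
# Minimal sketch of a turn-taking panel of AI personas (illustrative only).
# Assumes the OpenAI Python SDK; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Persona names are from the research study; role descriptions are paraphrased.
PANEL = {
    "Dr. Hypothesis": "Maintain a ranked list of the most likely diagnoses.",
    "Dr. Test-Chooser": "Propose the next most informative test or question.",
    "Dr. Challenger": "Argue against the current leading diagnosis.",
    "Dr. Stewardship": "Flag unnecessary or costly steps.",
    "Dr. Checklist": "Verify that no routine step has been skipped.",
}

def run_panel(case_history: str, turns: int = 3) -> list[str]:
    """Have each persona comment in turn, accumulating a shared transcript."""
    transcript = [f"Case history:\n{case_history}"]
    for turn in range(1, turns + 1):
        for name, role in PANEL.items():
            response = client.chat.completions.create(
                model="gpt-4o",  # placeholder model choice
                messages=[
                    {"role": "system",
                     "content": f"You are {name}. Your job: {role}"},
                    {"role": "user",
                     "content": "\n\n".join(transcript)
                                + f"\n\nTurn {turn}: give your brief input."},
                ],
            )
            transcript.append(f"[{name}, turn {turn}] "
                              + response.choices[0].message.content)
    return transcript
```

Maintaining a single shared transcript is the simplest way to let each persona see what the others have said; the research setup is more elaborate, but the turn-taking principle is the same.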
Orchestration In AI Mental Health Analysis
I thought it might be interesting to try a similar form of orchestration in a mental health analysis context. I welcome researchers trying this same method in a more robust setting so that we can get a firmer grasp on the ins and outs of employing such an approach. My effort was just a mini-experiment to get the ball rolling.
I used a mental health case history, a vignette publicly posted by the American Board of Psychiatry and Neurology (ABPN), which entails a fictionalized patient undergoing a psychiatric evaluation.
It is a handy instance since it has been carefully composed and analyzed, and it serves as a formalized test question for budding psychiatrists and psychologists. The downside is that because the vignette is widely known and posted on the Internet, there is a chance that any generative AI used to analyze this case history has already scanned the case and its posted solutions.
Researchers who want to do something similar to this mini-experiment will likely need to come up with entirely new and unseen case histories. That would prevent the AI from 'cheating' by already having potentially encountered the case.
Overview Of The Vignette
The vignette has to do with a man in his forties who had previously been under psychiatric care and has recently been exhibiting questionable behavior. As stated in the vignette: 'For the past several months, he has been buying expensive artwork, his attendance at work has become increasingly erratic, and he is sleeping only one to two hours each night. Nineteen years ago, he was hospitalized for a serious manic episode involving the police.' (source: ABPN online posting).
I made use of a popular LLM and told it to invoke five personas, somewhat on par with the orchestration approach noted above, consisting of:
After entering a prompt defining those five personas, I then had the LLM proceed to perform a mental health analysis concerning the vignette.
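Since the exact persona definitions matter less than the overall structure, here is a hedged sketch of what such a persona-defining prompt could look like. The persona names below are hypothetical stand-ins of my own invention, not necessarily those used in the mini-experiment, and the SDK usage and model name are again placeholder assumptions.

```python
# Hedged sketch of a single prompt that defines a five-persona panel and then
# submits the vignette. Persona names here are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

panel_setup = """You will simulate a panel of five conferring personas:
1. Hypothesizer - keeps a ranked list of candidate assessments.
2. Question-Chooser - proposes the next most informative question.
3. Challenger - argues against the current leading assessment.
4. Stewardship - flags unnecessary or risky steps.
5. Checklist - verifies that standard assessment steps are covered.
Proceed in turns. At each turn, summarize where the panel stands."""

vignette = "..."  # the ABPN vignette text would be pasted here

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system", "content": panel_setup},
        {"role": "user", "content": f"Analyze this case:\n{vignette}"},
    ],
)
print(response.choices[0].message.content)
```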
Orchestration Did Well
Included in my instructions to the LLM was that I wanted to see the AI perform the diagnosis across a series of turns. At each turn, the panel was to summarize where it stood in the analysis and tell me what it had done so far. This is a means of having the AI generate a kind of explanation or indication of what its computational reasoning process entails.
As an aside, be careful in relying on such computationally concocted explanations, since they may have little to do with what the internal tokenization mechanics of the LLM were actually doing; see my discussion of noteworthy cautions at the link here.
I provided the LLM persona panel with questions that are associated with the vignette. I then compared the answers from the AI panel with those that have been posted online and are considered the right or most appropriate answers.
To illustrate what the AI personas panel came up with, here's the initial response about the overall characteristics of the patient at the first turn:
The analysis ended up broadly matching the posted solution. In that sense, the AI personas panel did well. Whether this was due to genuine diagnostic performance versus having previously scanned the case history is unclear. When I asked directly whether the case had been seen previously, the LLM denied that it had already encountered the case.
Don't believe an LLM that tells you it hasn't scanned something. The LLM might be unable to ascertain that it has scanned the content. Furthermore, in some instances, the AI might essentially lie and tell you that it hasn't seen a piece of content, a kind of cover-up, if you will.
Leaning Into AI Personas
AI personas are an incredibly advantageous capability of modern-era generative AI and LLMs. Using AI personas in an orchestrated fashion is a wise move. You can get the AI personas to work as a team. This can readily boost the results.
One quick issue that you ought to be cognizant of is that if a single LLM is undertaking all the personas, you might not be getting exactly what you thought you were getting. An alternative approach is to use separate LLMs to represent the personas. For example, I could connect five different LLMs and have each simulate one of the personas that I used in my mini-experiment. The idea is that by using separate LLMs, you avoid the chance that a single LLM lazily double-deals by not really trying to invoke distinct personas. An LLM can be sneaky that way.
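As a hedged sketch of that alternative, each persona can be bound to its own independently configured client and model. The model names below are placeholders; in practice, each persona could even point at a different vendor's API via that vendor's own client.

```python
# Hedged sketch: one separately configured client/model per persona, so no
# single LLM plays every role. Model names are placeholders; clients could
# also differ by vendor or base_url in a real multi-LLM setup.
from openai import OpenAI

# Separate client/model pairs, one per persona (hypothetical persona names).
PERSONA_BACKENDS = {
    "Hypothesizer": (OpenAI(), "gpt-4o"),       # placeholder model choice
    "Challenger": (OpenAI(), "gpt-4o-mini"),    # placeholder model choice
}

def ask_persona(name: str, role: str, context: str) -> str:
    """Route a persona's turn to its own dedicated backend."""
    client, model = PERSONA_BACKENDS[name]
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": f"You are {name}. {role}"},
            {"role": "user", "content": context},
        ],
    )
    return response.choices[0].message.content
```

The design point is isolation: because each persona answers from its own model instance, no single LLM can quietly collapse the panel into one undifferentiated voice.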
A final thought for now.
A remark often attributed to Mark Twain puts it well: 'Synergy is the bonus that is achieved when things work together harmoniously.' The use of orchestration with AI personas can achieve a level of synergy that otherwise would not be exhibited in these types of analyses. That being said, you can sometimes have too many cooks in the kitchen.
Make sure to utilize AI persona orchestration suitably, and you'll hopefully get sweet sounds and delightfully impressive results.
