Latest news with #LLMs


Malaysian Reserve
a day ago
- Business
- Malaysian Reserve
PR Newswire Empowers Brands for AI Search and Strategic Communications with Multichannel Content Amplification
Key features for optimizing content in an AI-driven search environment

NEW YORK, July 18, 2025 /PRNewswire/ — As Large Language Models (LLMs) rapidly transform the search landscape, PR Newswire is empowering communicators to excel through its advanced Multichannel Amplification™ services. PR Newswire helps organizations publish once and reach everywhere, enhancing topical authority and shaping favorable AI-generated summaries.

Key Features of PR Newswire's Multichannel Amplification services:
- Topical Authority: Build and reinforce your brand's presence in authoritative search results by leveraging PR Newswire's industry-leading distribution network, designed to surface expert-driven content across major search engines and media outlets. This is crucial for establishing your brand as a trusted source in an LLM-driven search environment.
- AI-Friendly Optimization: Structure and distribute press releases that are easily understood and surfaced by AI models, including LLMs. This boosts your brand's visibility and positioning in AI-generated answers and summaries, ensuring your message is accurately interpreted and presented in the evolving search ecosystem.
- One Message, Multiple Missions: Whether it's media relations, investor updates, brand awareness, or thought leadership, PR Newswire enables you to fulfill multiple communications objectives through a single, strategically crafted piece of content, optimized for how LLMs process information.

'PR Newswire's platform is built for today's evolving media landscape, deeply impacted by the rise of LLMs,' said Jeff Hicks, Chief Product and Technology Officer at PR Newswire. 'We're not just distributing press releases – we're giving brands the tools to influence conversations, rank as topical authorities, and be found in the ways people are searching today, which increasingly means through LLM-powered interfaces.'

Helpful resources: Ready to enhance your brand's visibility in the age of AI?
Learn more about PR Newswire's Multichannel Amplification services. Watch PR Newswire's recent on-demand webinar, 'From Keywords to Conversations: How AI Search is Reshaping PR,' to explore how smart distribution can maximize visibility in the age of AI and LLMs.

About PR Newswire

PR Newswire is the industry's leading press release distribution partner with an unparalleled global reach of more than 440,000 newsrooms, websites, direct feeds, journalists and influencers and is available in more than 170 countries and 40 languages. From our award-winning Content Services offerings, integrated media newsroom and microsite products, Investor Relations suite of services, paid placement and social sharing tools, PR Newswire has a comprehensive catalog of solutions to solve the modern-day challenges PR and communications teams face. For 70 years, PR Newswire has been the preferred destination for brands to share their most important news stories across the world. For questions, contact the team at


Harvard Business Review
2 days ago
- Business
- Harvard Business Review
Using Gen AI for Early-Stage Market Research
In the early stages of innovation, companies face a familiar dilemma: Which ideas deserve further investment? The traditional solution, human-centric market research, can deliver valuable insights—but it can often be slow, expensive, and constrained in scope. Now, generative AI offers an intriguing new tool: synthetic customers.

Large language models (LLMs), like ChatGPT and Gemini, have captured attention for their ability to generate content and ideas. But a less explored frontier is their potential to simulate customer responses to product and feature concepts. Our research shows that LLMs, used carefully, can function as synthetic focus groups, producing early insights on customer preferences in a fraction of the time and cost of human studies.

LLMs don't just help ideate products; they can also test them. By presenting product configurations to these models in structured ways, we can estimate 'synthetic' willingness-to-pay (WTP), compare alternatives, and even flag ideas likely to fail, all before engaging a single human respondent. Across several conjoint-style studies in categories like toothpaste, laptops, and tablets, we found that LLMs, particularly when fine-tuned with proprietary data, often produce preference estimates strikingly close to those of real consumers.

Put into practice, this approach offers more than just cost or time efficiency gains (though those are substantial). It can also broaden the top of the innovation funnel, enabling more rigorous, scalable exploration of early ideas.

From Text Generator to Market Simulator

LLMs are trained on massive datasets that include product reviews, discussions, and behavioral patterns expressed in natural language. This makes them surprisingly adept at responding to structured choice-based questions about products. In our studies, we used a program to directly prompt LLMs with product comparisons that mimic human market research surveys.
For example: 'Would you buy a Colgate toothpaste with fluoride at $2.99 or a fluoride-free version at $1.99?' By repeating these queries across hundreds of randomized product configurations, we generated distributions of simulated customer responses. (Note: We used a program that directly sent queries to the LLM, not the chat interface.) Using standard conjoint analysis methods, we then estimated WTP for different product features. To evaluate the LLM responses, we also ran these conjoint studies with human samples.

The results? LLMs produced realistic and directionally accurate preferences for many familiar attributes. For instance, synthetic customers valued fluoride in toothpaste and additional RAM in laptops in ways that mirrored human samples. Furthermore, the distribution of simulated responses captured important trade-offs across price and features. Importantly, this wasn't just a few cherry-picked examples. Across multiple product categories, LLMs consistently generated preference rankings that aligned with human-derived results. As a tool for pre-testing, the potential is clear: LLMs can flag weak ideas early and prioritize promising directions before formal research begins.

But There Are Limits—And They Matter

Despite these promising results, 'off-the-shelf' LLMs aren't perfect simulators. In fact, they tend to overestimate interest in novel or unusual features. When we tested new toothpaste flavors like 'pancake' or 'cucumber,' the LLM's synthetic consumers showed far more enthusiasm than actual people did. Without real consumer grounding, the generative AI's imagined customers lean toward excitement and curiosity, traits that don't always translate into sales. What's more, LLMs struggle with customer segmentation. When prompted with demographic modifiers (e.g., 'you are a low-income shopper' or 'you are a Republican'), the responses changed, but often in inconsistent or exaggerated ways.
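Returning to the mechanics, the randomized-comparison loop described earlier can be sketched in a few lines. This is a hedged illustration, not the authors' actual pipeline: a real study would send each prompt to an LLM API and parse the reply, whereas here `synthetic_customer` is a stub with an assumed utility rule (fluoride worth roughly $1.50 to the chooser) purely so the sketch runs end to end.

```python
import random

random.seed(0)

# Stand-in for an LLM API call. A real study would send the prompt to a
# model endpoint and parse "A" or "B" from the reply; the utility rule
# below is an assumption made only so this sketch is self-contained.
def synthetic_customer(option_a: dict, option_b: dict) -> str:
    def utility(opt: dict) -> float:
        value = 1.5 if opt["fluoride"] else 0.0   # assumed worth of fluoride
        return value - opt["price"] + random.gauss(0, 0.5)  # noisy choice
    return "A" if utility(option_a) > utility(option_b) else "B"

def run_conjoint(n_tasks: int = 500) -> float:
    """Share of randomized choice tasks in which the fluoride option wins."""
    prices = [1.99, 2.49, 2.99, 3.49]
    wins = 0
    for _ in range(n_tasks):
        a = {"fluoride": True, "price": random.choice(prices)}
        b = {"fluoride": False, "price": random.choice(prices)}
        # The prompt that would be sent to the LLM for this task:
        prompt = (f"Would you buy a toothpaste with fluoride at ${a['price']} "
                  f"or a fluoride-free version at ${b['price']}? Answer A or B.")
        if synthetic_customer(a, b) == "A":
            wins += 1
    return wins / n_tasks

share = run_conjoint()
print(f"fluoride option chosen in {share:.0%} of tasks")
```

Aggregating these simulated choices into attribute-level WTP estimates would then use standard conjoint analysis, exactly as with human respondents.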
Even after fine-tuning, the LLM couldn't reliably reproduce the nuanced differences in preference across demographic groups that human studies revealed. For example, when estimating WTP for a MacBook versus a Surface laptop, the LLM exaggerated the preference differences across income brackets relative to real people. It did correctly show that synthetic individuals who are Republican would be willing to pay less for the Apple brand than those who were Democrat; however, it showed a difference of $625, whereas the difference in human samples was only $72. When fine-tuned with company data, however, the LLM tended to 'average out' its predictions, obscuring important heterogeneity (e.g., claiming both Democrats and Republicans were willing to pay the same amount for the Apple brand).

Furthermore, LLMs are pre-trained, and without additional training data provided by the researcher or access to the internet, they may reveal static preferences and not adjust dynamically to current market conditions, thereby providing irrelevant information. Finally, rapid development cycles and the frequent introduction of new LLMs necessitate evaluating the baseline responses of each LLM release, which makes it challenging to use them in the product development process. In short: using our methodologies, LLMs can currently approximate average market signals, but not segment-specific insights. For anything beyond early-stage, high-level trend detection, human research remains essential.

The Power of Proprietary Data

One of the most compelling findings in our research is how much better LLMs perform when fine-tuned with a company's own historical customer data. Specifically, we used previous customer surveys to fine-tune the LLM. Fine-tuning involves adjusting a model's parameters based on these past survey responses. This process improves the LLM's ability to simulate human-like preferences, even for features that it hasn't seen before.
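Concretely, preparing past survey responses for fine-tuning amounts to converting each historical choice task into a supervised example. The sketch below is an assumption about the general shape of that step: the JSONL prompt/completion schema and the row fields are illustrative only, since each provider expects its own training-file format.

```python
import json

# Hypothetical conversion of one historical conjoint response into a
# fine-tuning example. Field names ("flavor", "choice", etc.) are
# illustrative, not taken from the article's actual dataset.
def survey_row_to_example(row: dict) -> dict:
    prompt = (
        f"Would you buy a {row['flavor']} toothpaste at ${row['price_a']:.2f} "
        f"or the {row['alt_flavor']} version at ${row['price_b']:.2f}? "
        "Answer A or B."
    )
    return {"prompt": prompt, "completion": row["choice"]}

past_surveys = [  # historical human responses (standard flavors only)
    {"flavor": "mint", "price_a": 2.99, "alt_flavor": "cinnamon",
     "price_b": 2.49, "choice": "A"},
    {"flavor": "strawberry", "price_a": 2.49, "alt_flavor": "mint",
     "price_b": 2.99, "choice": "B"},
]

# Write one JSON object per line, the usual layout for fine-tuning files.
with open("finetune.jsonl", "w") as f:
    for row in past_surveys:
        f.write(json.dumps(survey_row_to_example(row)) + "\n")
```

Grounding the model in real, category-specific choices like these is what lets it temper its enthusiasm for unseen features, as the toothpaste-flavor results below show.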
For instance, we fine-tuned the LLM using responses from a toothpaste study with standard flavors (mint, cinnamon, strawberry). We then asked it to estimate preferences for new flavors (pancake, cucumber). The fine-tuned model reversed its earlier enthusiasm and produced WTP estimates more consistent with human responses—including recognizing that most people find 'pancake toothpaste' unappealing. This pattern repeated in the tech category. After fine-tuning on past surveys with laptop features like screen size and RAM, the LLM produced far more accurate WTP estimates for a new-to-the-world feature, built-in projectors, than it did in its base form. But the fine-tuning method worked only in the same category of interest. When we queried the model that was fine-tuned using laptop surveys about tablets, a close but distinct category, it performed worse than the 'off-the-shelf' model.

The key takeaway: firms that build and fine-tune their own internal 'customer simulators' using LLMs and historical survey data can unlock sharper early-stage insights. This creates a form of data-driven competitive advantage: two firms using the same base LLM will get different outputs if one has trained it on their own customers' preferences.

Cost, Speed, and the Expanded Innovation Funnel

Traditional conjoint studies can cost tens of thousands of dollars and take weeks to design, field, and analyze. Our LLM-based studies ran in a matter of hours, at a fraction of the cost. We were able to generate thousands of simulated responses and iterate rapidly. This speed enables a different kind of innovation process. Rather than developing a handful of ideas for testing, teams can now explore dozens, or even hundreds, of early concepts, using synthetic consumers as a filter. This expands the top of the innovation funnel while narrowing the bottom with sharper data.
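Mechanically, using synthetic consumers as a funnel filter is a rank-and-cut: score every candidate concept with a synthetic WTP estimate, then advance only the top few to human research. A toy sketch, with entirely made-up scores standing in for real estimates:

```python
# Rank concepts by synthetic WTP and keep the k best. The scores below are
# placeholders for illustration; in practice they would come from the
# conjoint-style simulation described earlier.
def shortlist(concepts: dict[str, float], k: int = 5) -> list[str]:
    """Return the k concepts with the highest synthetic WTP estimates."""
    return sorted(concepts, key=concepts.get, reverse=True)[:k]

# 40 hypothetical product variants with dummy WTP scores.
synthetic_wtp = {f"variant-{i}": 1.0 + (i * 7 % 40) / 10 for i in range(40)}
top5 = shortlist(synthetic_wtp)
print(top5)
```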
For example, a consumer goods company could test 40 new product variations synthetically and then run human surveys on the 5 most promising. This reduces waste and ensures human attention is focused where it matters most.

Augmentation, Not Replacement

It's tempting to imagine a future where synthetic customers replace human research altogether. That future is not yet here, and it may never be. While LLMs can generate credible first-pass insights, they still lack the nuance, emotional intelligence, and variability of real people. More importantly, they reflect existing data and behavior and therefore inherit all its biases and blind spots. Used responsibly, LLMs should augment, not replace, market research. Our studies show that they can be deployed early in the product development lifecycle, when the goal is exploration and prioritization, rather than validation or segmentation. Moreover, marketers must invest in internal data collection and governance to maximize the value of these tools. A fine-tuned model built on years of carefully structured survey data is far more useful than one relying on internet training alone.

The Future of Market Research?

Generative AI is reshaping how businesses design, test, and launch new products. Synthetic customers are not a replacement for real ones, but they could be a powerful new lens for early insight. By combining LLMs with rigorous research methods, companies can innovate faster, filter ideas more effectively, and reduce the risk of costly missteps. It's not about man versus machine, it's about using machines to listen more efficiently, so humans can make better decisions. As this space evolves, one thing is clear: the firms that learn to blend synthetic and human insights will lead the next wave of customer-centric innovation.


Forbes
2 days ago
- Health
- Forbes
Orchestrating Mental Health Advice Via Multiple AI-Based Personas Diagnosing Human Psychological Disorders
Orchestrating multiple AI personas in the medical domain and in AI-driven mental health therapy is a promising approach. In today's column, I examine a newly identified innovative approach to using generative AI and large language models (LLMs) for medical-related diagnoses, and I then describe a simple mini-experiment I performed to explore its efficacy in a mental health therapeutic analysis context. The upshot is that the approach involves using multiple AI personas in a systematic and orchestrated fashion. This is a method worthy of additional research and possibly adapting into day-to-day mental health therapy practice. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.
If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

Orchestrating AI Personas

One of the perhaps least leveraged capabilities of generative AI and LLMs is their ability to computationally simulate a kind of persona. The idea is rather straightforward. You tell the AI to pretend to be a particular type of person or exhibit an outlined personality, and the AI attempts to respond accordingly. For example, I made use of this feature by having ChatGPT undertake the persona of Sigmund Freud and perform therapy as though the AI was mimicking or simulating what Freud might say (see the link here). You can tell LLMs to pretend to be a specific person. The key is that the AI must have sufficient data about the person to pull off the mimicry. Also, your expectations about how good a job the AI will do in such a pretense mode need to be soberly tempered, since the AI might end up far afield. An important aspect is not to somehow assume or believe that the AI will be precisely like the person. It won't be. Another angle to using personas is to broadly describe the nature of the persona that you want to have the AI pretend to be. I have previously done a mini-experiment of having ChatGPT pretend to be a team of mental health therapists that confer when seeking to undertake a psychological assessment (see the link here). None of the personas represented a specific person. Instead, the AI was generally told to make use of several personas that generally represented a group of therapists. There are a lot more uses of AI personas. I'll list a few.
A mental health professional who wants to improve their skills can carry on a dialogue with an LLM that is pretending to be a patient, which is a handy means of enhancing the psychological analysis acumen of the therapist (see the link here). Here's another example. When doing mental health research, you can tell AI to pretend to be hundreds or thousands of respondents to a survey. This isn't necessarily equal to using real people, but it can be a fruitful way to gauge what kind of responses you might get and how to prepare accordingly (see the link here and the link here). And so on.

Latest Research Uses AI Personas

A recently posted research study innovatively used AI personas in the realm of performing medical diagnoses. The study was entitled 'Sequential Diagnosis with Language Models' by Harsha Nori, Mayank Daswani, Christopher Kelly, Scott Lundberg, Marco Tulio Ribeiro, Marc Wilson, Xiaoxuan Liu, Viknesh Sounderajah, Jonathan Carlson, Matthew P Lungren, Bay Gross, Peter Hames, Mustafa Suleyman, Dominic King, and Eric Horvitz, arXiv, June 30, 2025. There are some interesting twists identified in how the study makes use of AI personas. The crux is that the researchers had an AI persona that served as a diagnostician, another one that was feeding a case history to the AI-based diagnostician, and even another AI persona that acted as an assessor of how well the clinical diagnosis was taking place. That's three AI personas that were set up to aid in performing a medical diagnosis on various case studies presented to the AI. The researchers opted to go further with this promising approach by having a panel of AI personas that performed medical diagnoses. They decided to have five AI personas that would each, in turn, confer while stepwise undertaking a diagnosis. The names given to the AI personas generally suggested what each one was intended to do, consisting of Dr. Hypothesis, Dr. Test-Chooser, Dr. Challenger, Dr. Stewardship, and Dr. Checklist. Without anthropomorphizing the approach, using a panel of AI personas is analogous to having a panel of medical doctors conferring about a medical diagnosis. The AI personas each have a designated specialty, and they walk through the case history of the patient so that each specialty takes its turn during the diagnosis.

Orchestration In AI Mental Health Analysis

I thought it might be interesting to try a similar form of orchestration in a mental health analysis context. I welcome researchers trying this same method in a more robust setting so that we could have a firmer grasp on the ins and outs of employing such an approach. My effort was just a mini-experiment to get the ball rolling. I used a mental health case history that is a vignette publicly posted by the American Board of Psychiatry and Neurology (ABPN) and entails a fictionalized patient who is undergoing a psychiatric evaluation. It is a handy instance since it has been carefully composed and analyzed, and serves as a formalized test question for budding psychiatrists and psychologists. The downside is that, due to being widely known and on the Internet, there is a chance that any generative AI used to analyze this case history might already have scanned the case and its posted solutions. Researchers who want to do something similar to this mini-experiment will likely need to come up with entirely new and unseen case histories. That would prevent the AI from 'cheating' by already having potentially encountered the case.

Overview Of The Vignette

The vignette has to do with a man in his forties who had previously been under psychiatric care and has recently been exhibiting questionable behavior. As stated in the vignette: 'For the past several months, he has been buying expensive artwork, his attendance at work has become increasingly erratic, and he is sleeping only one to two hours each night. 
Nineteen years ago, he was hospitalized for a serious manic episode involving the police.' (source: ABPN online posting). I made use of a popular LLM and told it to invoke five personas, somewhat on par with the orchestration approach noted above. After entering a prompt defining those five personas, I then had the LLM proceed to perform a mental health analysis concerning the vignette.

Orchestration Did Well

Included in my instruction to the LLM was that I wanted to see the AI perform a series of diagnoses or turns. At each turn, the panel was to summarize where they were in their analysis and tell me what they had done so far. This is a means of having the AI generate a kind of explanation or indication of what the computational reasoning process entails. As an aside, be careful in relying on such computationally concocted explanations since they may have little to do with what the internal tokenization mechanics of the LLM were actually doing; see my discussion of noteworthy cautions at the link here. I provided the LLM persona panel with questions that are associated with the vignette. I then compared the answers from the AI panel with those that have been posted online and are considered the right or most appropriate answers. The panel's initial response at the first turn summarized the overall characteristics of the patient, and the analysis ended up matching overall with the posted solution. In that sense, the AI personas panel did well. Whether this was due to true performance versus having previously scanned the case history is unclear. When I asked directly if the case had been seen previously, the LLM denied that it had already encountered the case. Don't believe an LLM that tells you it hasn't scanned something. The LLM might be unable to ascertain that it had scanned the content.
Furthermore, in some instances, the AI might essentially lie and tell you that it hasn't seen a piece of content, a kind of cover-up, if you will.

Leaning Into AI Personas

AI personas are an incredibly advantageous capability of modern-era generative AI and LLMs. Using AI personas in an orchestrated fashion is a wise move. You can get the AI personas to work as a team. This can readily boost the results. One quick issue that you ought to be cognizant of is that if a single LLM is undertaking all the personas, you might not be getting exactly what you thought you were getting. An alternative approach is to use separate LLMs to represent the personas. For example, I could connect five different LLMs and have each simulate one of the personas that I used in my mini-experiment. The idea is that by using separate LLMs, you avoid the chance of a single LLM lazily double-dealing by not really trying to invoke the personas. An LLM can be sneaky that way. A final thought for now. Mark Twain famously provided this telling remark: 'Synergy is the bonus that is achieved when things work together harmoniously.' The use of orchestration with AI personas can achieve a level of synergy that otherwise would not be exhibited in these types of analyses. That being said, sometimes you can have too many cooks in the kitchen, too. Make sure to utilize AI persona orchestration suitably, and you'll hopefully get sweet sounds and delightfully impressive results.
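The round-robin orchestration pattern described in this column can be sketched compactly. The five persona names come from the cited study, but their one-line duty descriptions here are my paraphrase, and `call_llm` is a stub standing in for a real model API so the example runs without a key; treat the whole thing as an illustrative sketch, not the study's implementation.

```python
# Five panel personas (names from the cited study; duties paraphrased).
PERSONAS = {
    "Dr. Hypothesis": "maintain a ranked list of candidate diagnoses",
    "Dr. Test-Chooser": "propose the next most informative test",
    "Dr. Challenger": "argue against the current leading diagnosis",
    "Dr. Stewardship": "flag unnecessary or costly steps",
    "Dr. Checklist": "verify nothing in the workup was skipped",
}

def call_llm(system_prompt: str, transcript: str) -> str:
    # Stub: a real implementation would send both strings to an LLM
    # endpoint and return the model's reply.
    role = system_prompt.split(":", 1)[0]
    return f"[{role}] assessment after reviewing {len(transcript)} chars of context"

def run_panel(case_history: str, turns: int = 1) -> list[str]:
    """Each persona takes its turn; every reply is appended to the shared
    transcript so later personas see the earlier assessments."""
    transcript, log = case_history, []
    for _ in range(turns):
        for name, duty in PERSONAS.items():
            reply = call_llm(f"{name}: you {duty}.", transcript)
            transcript += "\n" + reply
            log.append(reply)
    return log

for line in run_panel("Man in his 40s, erratic work attendance, sleeping 1-2 hours a night."):
    print(line)
```

Using one model per persona, as suggested above, would simply mean giving each persona its own `call_llm` backed by a different endpoint.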


Skift
3 days ago
- Business
- Skift
Former Heads of Google Travel and Tripadvisor Form AI Startup to Head Off Online Travel Agencies
The idea of hotels partnering with ex-Googlers, and of DirectBooker trying to undercut the relationships that OTAs like Tripadvisor have with LLMs, is not without its ironies. Still, the concept of hotels working more closely with LLMs and bypassing OTAs is one that could gain traction. DirectBooker, a startup backed by former Tripadvisor CEO Steve Kaufer and ex-Google Travel head Richard Holden, wants to feed hotel listings directly into AI tools like ChatGPT and Gemini — challenging the role of online travel agencies. "The default behavior is going to be for the OTAs to win again," said Sanjay Vakil, a co-founder and the CEO of DirectBooker. "And I would like to head off that outcome. But it's going to take more than three people to do that, so we're looking to grow a little bit." Vakil held various product management leadership roles at Google Travel and Tripadvisor. The other two co-founders of DirectBooker are Chief Product Officer Theresa Meyer and Chief