
AI will be ready to manage your money in 5 years: Andrew Lo
Now Lo, a finance professor at the Massachusetts Institute of Technology and leading AI expert, believes the same kind of technology that nailed the stock call could soon do far more. Not just dispense advice, but manage money, balance risk, tailor strategies, and meet one of finance's highest duties: acting in a client's best interest. Within five years, he predicts, large language models will have the technical capability to make real investment decisions on behalf of clients.
Lo, 65, has long bridged the worlds of finance and technology. He co-founded QLS Advisors, a firm that applies machine learning to health care and asset management, and helped pioneer quantitative investing when it was still viewed as fringe. He believes that generative AI, despite its flaws, is fast approaching the capacity to parse complex market dynamics, weigh long-term risks, and earn the kind of trust typically reserved for human advisers. "This could be in the form of the so-called agent AI where we have agents that are working on our behalf and making decisions on our behalf in an automated fashion," Lo said in an interview. "I believe that within the next five years we're going to see a revolution in how humans interact with AI."
The idea still sounds radical on Wall Street, where ChatGPT-style tools are mostly confined to junior-level work such as data collection and analysis. Yet Lo's vision goes further: under the right regulatory guardrails, AI could evolve from a hard-working but rigid research assistant into something that meets one of finance's highest bars, the fiduciary standard.
Related Articles


News18, 35 minutes ago
Your ChatGPT Therapy Sessions Are Not Confidential, Warns OpenAI CEO Sam Altman
OpenAI CEO Sam Altman has raised concerns about maintaining user data confidentiality in sensitive conversations, as millions of people, including children, have turned to AI chatbots like ChatGPT for therapy and emotional support.

In a recent episode of This Past Weekend, a podcast hosted by Theo Von on YouTube, Altman replied to a question about how AI fits into the current legal system, cautioning that users should not expect confidentiality in their conversations with ChatGPT, given the lack of a legal or policy framework to protect sensitive information shared with the chatbot.

"People talk about the most personal sh*t in their lives to ChatGPT. People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT."

Altman added that the question of confidentiality and privacy for conversations with AI should be addressed urgently. "So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up," the Indian Express quoted Altman as saying. This means that conversations with ChatGPT about mental health, emotional advice, or companionship are not private and can be produced in court or shared with others in the event of a lawsuit.
Unlike end-to-end encrypted apps such as WhatsApp or Signal, which prevent third parties from reading your chats, OpenAI can access your conversations with ChatGPT and use them to improve the AI model and detect misuse. OpenAI says it deletes free-tier ChatGPT conversations within 30 days, but may retain them for legal or security reasons. Adding to privacy concerns, OpenAI is currently fighting a lawsuit brought by The New York Times, under which a court order requires the company to preserve the conversations of millions of ChatGPT users, excluding enterprise customers.

First Published: July 26, 2025, 22:27 IST


Economic Times, 44 minutes ago
Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren't legally private, warns Sam Altman
Synopsis: OpenAI CEO Sam Altman has warned that conversations with ChatGPT are not legally protected, unlike those with therapists, doctors, or lawyers. In a podcast with Theo Von, Altman explained that users often share deeply personal information with the AI, but current laws do not offer confidentiality. This means OpenAI could be required to hand over user chats in legal cases. He stressed the need for urgent privacy regulations, as the legal system has yet to catch up with AI's growing role in users' personal lives.

Many users may treat ChatGPT like a trusted confidant, asking for relationship advice, sharing emotional struggles, or even seeking guidance during personal crises. But Altman has warned that unlike conversations with a therapist, doctor, or lawyer, chats with the AI tool carry no legal confidentiality.

During a recent appearance on This Past Weekend, a podcast hosted by comedian Theo Von, Altman said that users, particularly younger ones, often treat ChatGPT like a therapist or life coach. However, he cautioned that the same legal safeguards that protect personal conversations in professional settings do not extend to AI.

Altman explained that legal privileges, such as doctor-patient or attorney-client confidentiality, do not apply when using ChatGPT. If there's a lawsuit, OpenAI could be compelled to turn over user chats, including the most sensitive ones. "That's very screwed up," Altman admitted, adding that the lack of legal protection is a major gap that needs urgent attention.

Altman believes that conversations with AI should eventually be treated with the same privacy standards as those with human professionals. He pointed out that the rapid adoption of generative AI has raised legal and ethical questions that didn't even exist a year ago.
Von, who expressed hesitation about using ChatGPT due to privacy concerns, found Altman's warning validating. The OpenAI chief acknowledged that the absence of clear regulations could be a barrier for users who might otherwise benefit from the chatbot's assistance. "It makes sense to want privacy clarity before you use it a lot," Altman said, agreeing with Von's skepticism.

According to OpenAI's own policies, conversations from users on the free tier can be retained for up to 30 days for safety and system improvement, though they may sometimes be kept longer for legal reasons. This means chats are not end-to-end encrypted like on messaging platforms such as WhatsApp or Signal. OpenAI staff may access user inputs to optimize the AI model or monitor misuse.

The privacy issue is not just theoretical. OpenAI is currently involved in a lawsuit with The New York Times, which has brought the company's data storage practices under scrutiny. A court order related to the case has reportedly required OpenAI to retain and potentially produce user conversations, excluding those from its ChatGPT Enterprise customers. OpenAI is appealing the order, calling it an overreach.

Altman also highlighted that tech companies are increasingly facing demands to produce user data in legal or criminal cases. He drew parallels to how people shifted to encrypted health tracking apps after the U.S. Supreme Court's Roe v. Wade reversal, which raised fears about digital privacy around personal choices.

While AI chatbots like ChatGPT have become a popular tool for emotional support, the legal framework surrounding their use hasn't caught up. Until it does, Altman's message is clear: users should be cautious about what they choose to share.

