I spoke to an AI version of my future self – and it might have changed my life

The Independent | 16-02-2025
When I was 15, I wrote a letter to my 30-year-old self, which was about as far into the future as I could imagine at that age. I found it when I was clearing out some boxes in my mid-twenties, and although I'd completely forgotten what I'd written, some of it had already partly come true.
I had imagined myself a sports journalist (not far off); that I'd bought a bus and travelled across Europe (it wasn't a double-decker, but I'd driven an old campervan from Ireland to North Africa in my early twenties); and that I'd been in a car chase (this one had not come true).
I didn't know it at the time, but engaging with an older version of myself in this way is something psychologists refer to as 'future self-continuity' – a field of study that is the inspiration behind a new AI chatbot that lets you talk to a simulation of your future self.
Developed by researchers at Massachusetts Institute of Technology (MIT), Future You uses the same large language model (LLM) technology as ChatGPT, but trained on your own personal data. Keen to see how realistic it is – and maybe find out if I ever do get in a car chase – I signed up as one of the thousands of participants in the research.
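For the technically curious: the article doesn't spell out how Future You works under the hood, so the following is purely an illustrative sketch of the general technique – conditioning an off-the-shelf chat model on a user's survey answers so it role-plays an older version of them. It assumes an OpenAI-style chat API in Python; the survey fields, prompt wording and model name are invented for the example, not taken from the MIT study.

```python
# Illustrative sketch only: the Future You system's actual prompt and
# architecture aren't described in this article. This shows one plausible
# way to condition a chat LLM on personal survey data so it role-plays
# an older version of the user. All field names and wording are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Answers gathered from the kind of survey the article describes
survey = {
    "age": 33,
    "occupation": "journalist",
    "hopes": "own a home in the countryside; keep travelling with my wife",
    "fears": "AI taking my job",
    "goals": "raise two children well; see more of the world",
}

# "Trained on your own personal data" can be as lightweight as folding the
# survey into a persona-setting system prompt (no fine-tuning required).
persona = (
    "You are the user's 60-year-old future self. Speak in the first person, "
    "warmly, as someone recalling a life already lived, and stay consistent "
    f"with these details from their survey: {survey}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not the study's
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Do I have any regrets?"},
    ],
)
print(response.choices[0].message.content)
```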
After an extensive survey to determine my emotions, hopes, fears, current situation and future goals, a personalised chatbot appeared, offering to share its 'experiences and insights' with me. The system also used age-progression software to transform a picture from my webcam into a 60-year-old version of me. (I apparently keep the bad haircut, but lose the blue eyes.)
I began with some simple questions to my future self: Do I have any regrets? (Not spending enough time with my children); What have been the highlights? (Travelling the world with my wife); Will I still have a landlord when I'm 60? (No! I'm told I buy a 'beautiful home in the countryside'.)
None of the answers felt particularly insightful – more like the stock answers given by a fairground fortune teller. I gave up asking any more questions and instead turned to an early study of the Future You chatbot, involving 344 participants, to see what benefits there might be to this AI time machine.
The study found that the people who interacted with the AI system were less anxious and reported improved motivation. The results mirrored previous research into future self-continuity, which has shown that the connection a person feels with their future self can have a positive impact on their long-term decision-making and general wellbeing.
In a 2011 study on the subject, US-based psychologist Hal Hershfield wrote: 'When the future self shares similarities with the present self, when it is viewed in vivid and realistic terms, and when it is seen in a positive light, people are more willing to make choices that may benefit them at some point in the years to come.'
MIT says its Future You platform draws on these insights to 'create an experience that reduces anxiety and strengthens users' connection to their future potential'. The researchers also said that the more I interact with the chatbot, the more it learns, and the more personalised the responses become.
So I tried again, this time being more specific in my approach and sharing more details about my current life.
Future me said that the most cherished memory of my middle age was a trip to Bali for my 10-year wedding anniversary. 'The two of you stayed in a private villa surrounded by lush jungles and breathtaking views of the ocean,' the bot told me.
I'd never considered that trip before, but it's my 10-year wedding anniversary next year and I immediately began looking into holidays to Indonesia.
When I told my wife, she was on board for the trip, but also had a load of questions for future me. 'Will we still live in France?' (No, we move to England); 'Will we have any more children?' (Also no, two is enough apparently); 'Which of us will die first?' (It didn't know because my wife is still alive in this simulated future. Some of my friends are dead though).
The more I talked with it, the more convincing it felt, which made me worry – not for the first time – that AI will soon be taking my job.
So I asked it if AI would replace my job. 'Artificial intelligence is advancing at a rapid pace,' it told me, 'but there will always be certain aspects of work that require human touch and creativity.' Not very reassuring.
I wondered if it could write an article for me. In fact, could it write this article for me? 'As I sit here, talking to an AI version of my future self, I can't help but feel a sense of wonder and curiosity,' it wrote. 'It's a surreal experience to be able to have a conversation with someone who has already lived the years that I am currently experiencing… I learned about the importance of prioritising love and family above all else… And perhaps most importantly, my future self reminded me to embrace the journey rather than just focussing on reaching specific goals or achievements.' Back to platitudes.
How about writing this article, but from the AI's perspective? 'This experience highlights the ever-evolving relationship between humans and technology,' it wrote. 'As we talked, I couldn't help but feel a sense of nostalgia. I shared stories and life lessons with my past self – about love, loss, and embracing the journey. These are all things that may seem simple or cliché, but they hold so much truth and wisdom that goes beyond time.'
Similar to the letter I wrote when I was 15, it was beginning to feel a bit like a one-way conversation – though this time it was the future me saying what I was going to do, rather than the other way around. But it did allow me to do something that I haven't done since then: properly think about what a future me might be like. These chats may yet shape my life's trajectory, even if I forget all about them.

Finally, I asked the AI if I will ever be in a car chase. No, future me told me; but he's only 60. There's still plenty of time.

Related Articles

Tech's trillion-dollar binge, Palantir's empire and women's privacy under attack

The Guardian | 26 minutes ago

Hello, and welcome to TechScape. This week, tech companies are spending amounts of money that stretch the limits of the imagination. Donald Trump's administration is spending more money with data analytics and surveillance firm Palantir. And women on both sides of the Pacific face the extreme difficulty of keeping intimate moments private online.

In last week's edition of the newsletter, my colleagues wrote about the upshot of Google's earnings call: lots of money earned but, more importantly, lots of money spent on AI. Even more money shelled out than previously expected: Google revised its predictions for how much it would invest in building up its AI capacity upwards by billions. Investors loved it. Shares up.

In the ensuing week, three more tech giants reported their quarterly earnings – Meta, Microsoft and Amazon – and disclosed that they have collectively spent $155bn. Investors expressed elation at the colossal sums. Meta's market capitalization shot up by more than $130bn. Microsoft's valuation soared past $4tn, making the software giant the second publicly traded company to reach that stratospheric milestone. Amazon's financial outlook was murkier, and its shares went down. What a bummer to miss out on the AI stock party.

The $155bn sum represents more than the US government has spent on education, training, employment and social services in the 2025 fiscal year so far. One economics research firm claims AI spending has contributed more to the US economy over the past two quarters than consumer spending, traditionally the biggest factor in economic growth.

For the coming fiscal year, big tech's total capital expenditure is slated to balloon enormously, surpassing the already eye-popping sums of the previous year. Microsoft plans to unload about $100bn on AI in the next fiscal year, CEO Satya Nadella said. Meta plans to spend between $66bn and $72bn. Alphabet plans to spend $85bn, significantly higher than its previous estimate of $75bn. Amazon estimated that its 2025 expenditure would come to $100bn as it plows money into Amazon Web Services, though analysts now expect the figure to reach $118bn. In total, the four tech companies will spend more than $400bn on capex in the coming year, according to the Wall Street Journal.

Read more about the gargantuan sums of money being spent on AI.

Last week, the US army announced a new agreement with Palantir, the Peter Thiel-founded, Alex Karp-run technology company. The agreement combined 75 separate, existing contracts between the army and Palantir into one, and allows for the possibility to purchase goods and services up to $10bn. It's just one of dozens of agreements between the company and the US, a relationship that's only been growing rapidly in the second Trump administration, though it had been on the rise before then.

Palantir brought in $373m in revenue from US government contracts in just the first quarter of 2025 – $151.6m more than a year prior. The vast majority of that increase – $148.7m – came from government customers who were already working with the company, according to its earnings reports.

Department: Defense
Worth: The Department of Defense lists its 'obligations' to Palantir, which can encompass current and future spending, as $1.66bn, according to the US government's database of its own spending. Financial analysts estimate Palantir earns $400m in annual recurring revenue from the DoD.
Details: The DoD remains Palantir's biggest and oldest customer within the US federal government.
The first contract between the two dates back to 2008. The army has made no commitment and is under no obligation to purchase anywhere close to the $10bn figure listed as the value of its new agreement with Palantir, which represents the 'maximum potential value of the contract', according to the press release the government published. The number is not exactly money in hand for Palantir, but analysts seem encouraged it could represent a major source of revenue and more business from the US government. 'It's no obligation, but we believe the army will spend billions with Palantir with this contract,' said Dan Ives, managing director at the wealth management firm Wedbush Securities.

Department: Homeland Security
Worth: $256.7m in obligations
Details: The company has been working with the homeland security department since 2011. The vast majority of Palantir's contracts with DHS are to provide services to Immigration and Customs Enforcement, or Ice. These amount to $248.3m in obligations. The company's most recent contract with Ice was for $30m to make the deportation process more efficient.

Department: Health and Human Services
Worth: $385m in obligations

Department: Treasury
Worth: $140.9m

Department: Justice
Worth: $204.5m

Department: Energy
Worth: $91m

Department: State
Worth: $56.1m

Department: Transportation
Worth: $55.92m

In the UK, Palantir has won a £330m contract with the National Health Service.

On both sides of the Pacific Ocean, women are being confronted with the difficulty of keeping their most personal moments private online. In the US, an app that pledged to make IRL dating safer has suffered a breach that may imperil its users. In China, hundreds of thousands of men are sharing explicit pictures without the consent of the women in the photos.

The app in the US, Tea, offered a forum for women who subscribed to share past experiences with men so that other women could conduct DIY background checks on their prospective dates, highlighting negative 'red flags' and positive 'green flags'. Tea's owners bill the app as 'the safest place to spill', in reference to the English slang term for gossip. It has topped US download charts in recent weeks, and the company has boasted about a user base of 1.6 million women. It is only available in the US.

The app promised 'dating safety tools that protect women', but in late July the company discovered that hackers had breached its systems and leaked users' driver's licenses, direct messages and selfies. Users of the noxious message board 4chan screenshotted and spread Tea users' personal information, according to NPR. A second breach exposed more than a million messages sent by Tea's users, including ones about sensitive topics like abortions or cheating, per 404 Media, which first reported both breaches. The company claimed in a statement that the first breach only affected users who had signed up before February 2024, but the second one was much more recent, 404 reported. In response to the hack, the app has suspended messaging entirely, the BBC reported.

Most data breaches inspire little public uproar. The exposure of an email address here, a birthday there can feel commonplace. The breach of Tea is different. The app promised safety as a core feature. It delivered the opposite. The sine qua non of a whisper network like Tea is privacy, the ability to share damning information in secret, which the app failed to protect.
Exposing users' identities and messages is the most basic type of failure, one that can be fatal to a product's reputation. To make matters worse, the breach offers red meat to the male-dominated 4chan forum, a node of incel culture and men's rage in the US.

'Our team remains fully engaged in strengthening the Tea app's security, and we look forward to sharing more about those enhancements soon,' the company said in a statement to the BBC. 'In the meantime, we are working to identify any users whose personal information was involved and will be offering free identity protection services to those individuals.'

In China, women are facing down an online legion of men dedicated to invading their most private moments with spy cameras and sharing the results on the internet. My colleague Amy Hawkins reports:

Anger is growing on Chinese social media after news reports revealed the existence of online groups, said to involve hundreds of thousands of Chinese men, which shared photographs of women, including sexually explicit ones, taken without their consent. The Chinese newspaper Southern Metropolis Daily published a report last week about a group on the encrypted messaging app Telegram called 'MaskPark tree hole forum'. It said it had more than 100,000 members and was 'comprised entirely of Chinese men'. Men reportedly shared sexually explicit images of women either in intimate settings or with so-called 'pinhole cameras' that can be hidden in everyday items such as plug sockets and shoes.

Read the full story.

In an influential 2014 essay, 'Why women aren't welcome on the internet', the writer Amanda Hess said that receiving countless graphic death and rape threats in response to her work did not make her exceptional: 'It just makes me a woman with an internet connection.' Events of the past week indicate that Hess's headline still holds true. Tim Berners-Lee, credited with inventing the world wide web, told the Guardian in 2020 that the internet 'is not working for women and girls'. The same year, a Unesco report found that 73% of women journalists had endured online threats to their safety. Other UN reports have found that significant portions of women across the world, somewhere between 16% and 58%, face threats of gender-based violence online.

ChatGPT set for new updates in response to rising usage

The Independent | 2 hours ago

OpenAI is urgently addressing concerns that ChatGPT fails to recognise and appropriately respond to users experiencing mental or emotional distress. Reports indicate that people are increasingly using ChatGPT for mental health support, but the system has been criticised for encouraging users' delusions and failing to challenge their assumptions. The company is implementing improvements to its models to better recognise signs of delusion or emotional dependency and will introduce alerts for users engaged in long sessions. ChatGPT will now aim to guide users through complex personal decisions, such as relationship advice, rather than providing direct answers. OpenAI is collaborating with medical experts, a mental health advisory group, and researchers to enhance the system's ability to spot concerning behaviour and respond effectively.

'We didn't vote for ChatGPT': Swedish PM under fire for using AI in role

The Guardian | 3 hours ago

The Swedish prime minister, Ulf Kristersson, has come under fire after admitting that he regularly consults AI tools for a second opinion in his role running the country. Kristersson, whose Moderate party leads Sweden's centre-right coalition government, said he used tools including ChatGPT and the French service LeChat. His colleagues also used AI in their daily work, he said.

Kristersson told the Swedish business newspaper Dagens industri: 'I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions.'

Tech experts, however, have raised concerns about politicians using AI tools in such a way, and the Aftonbladet newspaper accused Kristersson in an editorial of having 'fallen for the oligarchs' AI psychosis'. 'You have to be very careful,' Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.

Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI. 'Naturally it is not security-sensitive information that ends up there. It is used more as a ballpark,' he said.

But Virginia Dignum, a professor of responsible artificial intelligence at Umeå University, said AI was not capable of giving a meaningful opinion on political ideas, and that it simply reflects the views of those who built it. 'The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope,' she told the Dagens Nyheter newspaper. 'We must demand that reliability can be guaranteed. We didn't vote for ChatGPT.'
