7 lessons I learned about end-of-life planning when my mother died, as a financial advisor
Shaw, a financial advisor, learned crucial lessons about end-of-life planning and caregiving.
Her biggest lessons include the importance of Medigap, healthcare proxies, and life insurance.
This as-told-to essay is based on a conversation with Melissa Shaw, a 46-year-old financial advisor in Palo Alto, California. It has been edited for length and clarity.
I've been a financial advisor since 2011 and have worked at Teachers Insurance and Annuity Association of America, or TIAA, as a wealth management advisor for over seven years.
I help clients with estate and incapacity planning, but I encountered completely different issues when my own mother became terminally ill and I became her primary caregiver in October 2024.
Her diagnosis was sudden. Doctors found stage four cancer that had metastasized to her back, causing a fracture. Within weeks, my family moved her from Las Vegas to Northern California to be closer to me.
She died by the end of December — it was a two-month ordeal.
Becoming her caregiver was emotionally intense
Initially, she seemed fine, but she declined rapidly. It was shocking and unexpected.
I visited the hospital daily and took on the bulk of decision-making responsibilities. Thankfully, TIAA offers generous caregiver benefits and flexibility, and I had savings to help cover unexpected costs.
I've learned many valuable lessons through this experience about end-of-life planning.
1. Medicare supplemental plans are essential
When she enrolled in Medicare at 65, my mom opted for a Medigap (Medicare Supplement Insurance) plan instead of a Medicare Advantage plan, and that decision proved vital.
Her Medigap plan covered the 20% of medical costs that Original Medicare didn't, and it applied to any doctor or procedure Medicare approved, with no referrals or prior authorizations. Every doctor she saw was relieved she had it.
If you or a loved one is approaching 65 — especially with ongoing health issues — I strongly recommend researching Medigap options during the Medigap Open Enrollment Period, when insurers can't deny coverage or charge more due to pre-existing conditions.
2. Assign a designated healthcare decision-maker ASAP
My mom hadn't designated a healthcare decision-maker, so I couldn't make health decisions for her. When her health rapidly declined in the last three weeks of her life, she was barely cognizant and, luckily, managed a scribbled signature for a necessary procedure.
I started preparing a power of attorney (POA) and healthcare proxy, but by the time the paperwork was ready, she was no longer mentally competent to sign it. She had signed an advance directive form with the hospital when she started cancer treatment, which allowed me to make some decisions on her behalf.
I learned how imperative it is to name a healthcare proxy, at any age.
3. Banking may not be easily accessible
After she died, we were unable to access her bank account funds for 45 days due to a waiting period intended to protect creditors. Luckily, she had a term life insurance policy that paid out quickly to help cover immediate expenses.
Additionally, she didn't name a beneficiary for the bank accounts, which is a common mistake. Many assume that checking accounts don't need beneficiaries, but even modest balances may end up in probate, which can be a significant hassle.
Also, the bank was unable to share her transaction history, so I had no way of knowing which bills had already been paid.
4. Sign up for life insurance
We received her life insurance proceeds quickly; all that was required was a death certificate.
Clients may want to consider life insurance as a source of liquidity at death to cover immediate expenses, such as funeral costs and bills.
5. Prepare for end-of-life costs
I was surprised by how expensive it is to bury someone. We were quoted up to $25,000 for burial plots in California.
Even cremation, which we chose, came to around $23,000 after including the niche (a final resting spot to house cremated remains) and the funeral. Prepaying or researching in advance can prevent financial issues.
6. Prepare for the difficulties of caretaking
I spent many nights in the hospital with my mom. Her condition changed from day to day; it was an emotional roller coaster.
Balancing work, caregiving, and my own emotional health was difficult. I'm married, and my kids were 5 and 7 years old. I wasn't seeing them regularly during the two months she was sick. Luckily, TIAA offered eight weeks of caregiver leave.
Many caregivers only have access to unpaid leave through the Family and Medical Leave Act (FMLA), so it's important to plan for potential income loss. If you can take paid leave, do it, because juggling work and caregiving on top of the emotional toll is tough.
7. Wills aren't everything
Wills are essential for securing guardianship and expressing personal wishes, but they don't guarantee that all your assets will be transferred correctly.
Retirement accounts, such as IRAs or 403(b)s, are typically passed by beneficiary designations, rather than through wills or trusts. Many other assets are passed via trusts. You should work with both a financial advisor and an estate attorney to discuss your needs.
I did the best I could, but if I could do things differently, I would've taken an official leave from work to focus solely on caring for my mother.
Read the original article on Business Insider