
Can AI help you win your March Madness bracket? One disruptor bets $1 million on 'yes' (and Houston)
The day could be coming soon.
In an experiment that a) was bound to happen, b) might actually make us all look smarter and c) should probably also scare the daylights out of everyone, a successful CEO-turned-disruptor is running a $1 million March Madness bracket challenge that pits his AI programmers' picks against those belonging to one of the world's best-known sports gamblers.
'We're not a crystal ball,' says Alan Levy, whose platform, 4C Predictions, is running this challenge. 'But it's going to start to get very, very creepy. In 2025, we're making a million-dollar bet with a professional sports bettor, and the reason we feel confident to do that is because data, we feel, will beat humans.'
Levy isn't the only one leveraging AI to help people succeed in America's favorite pick 'em pool — one that's become even more lucrative over the past seven years, after a Supreme Court ruling led to the spread of legalized sports betting to 38 states.
ChatGPT, a chatbot developed by OpenAI, is hawking its services to help bracket fillers more easily find stats and identify trends. Not surprisingly, it makes no promises.
'With upsets, momentum shifts, and basketball's inherent unpredictability, consistently creating a perfect bracket may still come down to luck,' said Leah Anise, a spokesperson for OpenAI.
Also making no promises, but trying his hardest, is Sheldon Jacobson, a computer science professor at the University of Illinois who has spent years trying to build a better bracket through science; he might have been AI before AI.
'Nobody predicts the weather,' he explained in an interview back in 2018. 'They forecast it using chances and odds.'
$1 million on the line in AI vs. Sean Perry showdown
Levy's angle is that he's willing to wager $1 million that the AI bracket his company produces can beat that of professional gambler Sean Perry.
Among Perry's claims to fame was his refusal to accept a four-way split in a pot worth $9.3 million in an NFL survivor pool two years ago. The next week, his pick, the Broncos, lost to New England and he ended up with nothing.
But Perry has wagered and won millions over his career, using heaps of analytics, data and insider information to try to find an edge that, for decades, has been proprietary to casinos and legal sportsbooks — the kind of advantage that allows them to build all those massive hotels.
Levy says his ultimate goal is to bring that advantage to the average Joe — either the weekly football bettor who doesn't have access to reams of data, or the March Madness bracket filler who goes by feel or by which team's mascot he likes best.
'The massive thesis is that the average person are playing games that they can never win, they're trading stocks where they can never win, they're trading crypto where they can never win,' Levy said. '4C gives people the chance to empower themselves. It's a great equalizer. It's going to level the playing field for everyone.'
But can AI predict the completely unexpected?
It's one thing to find an edge, quite another to take out every element of chance — every halfcourt game-winner, every 4-point-a-game scorer who goes off for 25, every questionable call by a ref, every St. Peter's, Yale, FAU or UMBC that rises up and wins for reasons nobody quite understands.
For those who fear AI is leading the world to bad places, Levy reassures us that when it comes to sports, at least, the human element is always the final decider — and humans can do funny and unexpected things.
That's one of many reasons that, according to the NCAA, there's a 1 in 120.2 billion chance of a fan with good knowledge of college basketball going 63 for 63 in picking the games. It's one of many reasons that almost everyone has a story about their 8-year-old niece walking away with the pot because she was the only one who picked George Mason, or North Carolina State, or VCU, to make the Final Four.
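The arithmetic behind that long-shot figure is simple to sketch. Here's a back-of-envelope calculation (a simplification — it assumes each of the 63 games is an independent pick, which real tournaments aren't) showing both the pure coin-flip odds and the per-game accuracy implied by the NCAA's 1-in-120.2-billion estimate:

```python
# Back-of-envelope bracket math, assuming 63 independent picks.

COIN_FLIP_ODDS = 2 ** 63           # 50/50 guessing every game: ~9.2 quintillion to 1
NCAA_INFORMED_ODDS = 120.2e9       # NCAA's figure for a knowledgeable fan

# Implied per-game accuracy p for the informed fan, from p ** 63 = 1 / odds
p = (1 / NCAA_INFORMED_ODDS) ** (1 / 63)

print(f"coin-flip odds: 1 in {COIN_FLIP_ODDS:,}")
print(f"implied per-game accuracy: {p:.1%}")
```

Even the "knowledgeable fan" number works out to getting roughly two picks in three right — every round, with no off nights and no glass slippers.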
'You can't take the element of fun and luck out of it,' Levy said. 'Having said that, as AI develops, it's going to get creepier and creepier and the predictions are going to get more and more accurate, and it's all around data sets.'
Levy suggests AI is no three-headed monster, but rather, an advanced version of 'Moneyball' — the classic book-turned-movie that followed Oakland A's GM Billy Beane's groundbreaking quest to leverage data to build a winning team.
Now, it's all about putting all that data on steroids, trying to minimize the impact of luck and glass slippers, and building a winning bracket.
'We've got to understand that this technology is meant to augment us,' Levy said. 'It's meant to make our lives better. So, let's encourage people to use it, and even if it's creepy, at least it's creepy on our side.'
The AI's side in this one: Houston to win it all. Perry, the gambler, is going with Duke.
___
