Paralysed man regains movement in breakthrough trial

Yahoo | 24-02-2025
A Kent man paralysed from the waist down has partially regained bowel and bladder control thanks to breakthrough UK research.
Dan Woodall, from Rainham, was paralysed in 2016 when he fell from a bypass after a night out.
The 33-year-old was one of 10 people to take part in a recent Pathfinder2 trial, funded by charity Spinal Research, using electrical stimulation to "excite" the spinal cord and attempt to create movement.
"[The trial] gave me back control over muscle groups I never thought I'd move again, including my right hamstrings and hip flexors," he said.
"I've also regained some bowel and bladder control - something I was told in hospital after my accident might never happen.
"Just knowing when you want to use the toilet is such a massive thing for your independence and mental health.
"The fact that the gains have continued after the trial is really encouraging and I can't wait to see where this goes."
The participants took part in 120 sessions using the technology - known as ARC-EX therapy.
All saw significant improvements in upper body strength, torso control and balance, according to Spinal Research.
Spinal Research chairwoman Tara Stewart said: "This therapy is not a silver bullet.
"It works on spared spinal tissue so results will vary widely and it does need to be paired with proper active rehabilitation over a consistent period of time.
"Even so, this is a game changing moment. It's now time to stop talking about spinal cord injury as being incurable and to stop telling people with this injury that nothing can be done."
The peer-reviewed study has been published in Neuromodulation: Technology at the Neural Interface.


Related Articles

AI's Achilles Heel—Puzzles Humans Solve in Seconds Often Defy Machines

Scientific American | 3 days ago

There are many ways to test the intelligence of an artificial intelligence: conversational fluidity, reading comprehension or mind-bendingly difficult physics. But some of the tests that are most likely to stump AIs are ones that humans find relatively easy, even entertaining. Though AIs increasingly excel at tasks that require high levels of human expertise, this does not mean that they are close to attaining artificial general intelligence, or AGI. AGI requires that an AI can take a very small amount of information and use it to generalize and adapt to highly novel situations. This ability, which is the basis for human learning, remains challenging for AIs.

One test designed to evaluate an AI's ability to generalize is the Abstraction and Reasoning Corpus, or ARC: a collection of tiny, colored-grid puzzles that ask a solver to deduce a hidden rule and then apply it to a new grid (a toy sketch of this task format follows the interview). Developed by AI researcher François Chollet in 2019, it became the basis of the ARC Prize Foundation, a nonprofit that administers the test, which is now an industry benchmark used by all major AI models. The organization also develops new tests and has routinely been using two of them (ARC-AGI-1 and its more challenging successor, ARC-AGI-2). This week the foundation is launching ARC-AGI-3, which is specifically designed for testing AI agents and is based on making them play video games.

Scientific American spoke to ARC Prize Foundation president, AI researcher and entrepreneur Greg Kamradt to understand how these tests evaluate AIs, what they tell us about the potential for AGI, and why they are often challenging for deep-learning models even though many humans tend to find them relatively easy.

[An edited transcript of the interview follows.]

What definition of intelligence is measured by ARC-AGI-1?

Our definition of intelligence is your ability to learn new things. We already know that AI can win at chess. We know they can beat Go. But those models cannot generalize to new domains; they can't go and learn English. So what François Chollet made was a benchmark called ARC-AGI: it teaches you a mini skill in the question, and then it asks you to demonstrate that mini skill. We're basically teaching something and asking you to repeat the skill that you just learned. So the test measures a model's ability to learn within a narrow domain. But our claim is that it does not measure AGI because it's still in a scoped domain [in which learning applies to only a limited area]. It measures that an AI can generalize, but we do not claim this is AGI.

How are you defining AGI here?

There are two ways I look at it. The first is more tech-forward, which is "Can an artificial system match the learning efficiency of a human?" Now what I mean by that is after humans are born, they learn a lot outside their training data. In fact, they don't really have training data, other than a few evolutionary priors. So we learn how to speak English, we learn how to drive a car, and we learn how to ride a bike, all things outside our training data. That's called generalization. When you can do things outside of what you've been trained on, we define that as intelligence.

Now, an alternative definition of AGI that we use is when we can no longer come up with problems that humans can do and AI cannot; that's when we have AGI. That's an observational definition. The flip side is also true, which is as long as the ARC Prize or humanity in general can still find problems that humans can do but AI cannot, then we do not have AGI. One of the key factors about François Chollet's benchmark... is that we test humans on them, and the average human can do these tasks and these problems, but AI still has a really hard time with it. The reason that's so interesting is that some advanced AIs, such as Grok, can pass any graduate-level exam or do all these crazy things, but that's spiky intelligence. It still doesn't have the generalization power of a human. And that's what this benchmark shows.

How do your benchmarks differ from those used by other organizations?

One of the things that differentiates us is that we require that our benchmark be solvable by humans. That's in opposition to other benchmarks, where they do "Ph.D.-plus-plus" problems. I don't need to be told that AI is smarter than me; I already know that OpenAI's o3 can do a lot of things better than me, but it doesn't have a human's power to generalize. That's what we measure on, so we need to test humans. We actually tested 400 people on ARC-AGI-2. We got them in a room, we gave them computers, we did demographic screening, and then gave them the test. The average person scored 66 percent on ARC-AGI-2. Collectively, though, the aggregated responses of five to 10 people will contain the correct answers to all the questions on the ARC2.

What makes this test hard for AI and relatively easy for humans?

There are two things. Humans are incredibly sample-efficient with their learning, meaning they can look at a problem and, with maybe one or two examples, they can pick up the mini skill or transformation and they can go and do it. The algorithm that's running in a human's head is orders of magnitude better and more efficient than what we're seeing with AI right now.

What is the difference between ARC-AGI-1 and ARC-AGI-2?

So ARC-AGI-1, François Chollet made that himself. It was about 1,000 tasks. That was in 2019. He basically did the minimum viable version in order to measure generalization, and it held for five years because deep learning couldn't touch it at all. It wasn't even getting close. Then the reasoning models that came out in 2024, from OpenAI, started making progress on it, which showed a step-level change in what AI could do. Then, when we went to ARC-AGI-2, we went a little bit further down the rabbit hole in regard to what humans can do and AI cannot. It requires a little bit more planning for each task. So instead of getting solved within five seconds, humans may be able to do it in a minute or two. There are more complicated rules, and the grids are larger, so you have to be more precise with your answer, but it's the same concept, more or less.... We are now launching a developer preview for ARC-AGI-3, and that's completely departing from this format. The new format will actually be interactive. So think of it more as an agent benchmark.

How will ARC-AGI-3 test agents differently compared with previous tests?

If you think about everyday life, it's rare that we have a stateless decision. When I say stateless, I mean just a question and an answer. Right now all benchmarks are more or less stateless benchmarks. If you ask a language model a question, it gives you a single answer.

There's a lot that you cannot test with a stateless benchmark. You cannot test planning. You cannot test exploration. You cannot test intuiting about your environment or the goals that come with that. So we're making 100 novel video games that we will use to test humans, to make sure that humans can do them, because that's the basis for our benchmark. And then we're going to drop AIs into these video games and see if they can understand this environment that they've never seen beforehand. To date, with our internal testing, we haven't had a single AI be able to beat even one level of one of the games.

Can you describe the video games here?

Each "environment," or video game, is a two-dimensional, pixel-based puzzle. These games are structured as distinct levels, each designed to teach a specific mini skill to the player (human or AI). To successfully complete a level, the player must demonstrate mastery of that skill by executing planned sequences of actions.

How is using video games to test for AGI different from the ways that video games have previously been used to test AI systems?

Video games have long been used as benchmarks in AI research, with Atari games being a popular example. But traditional video game benchmarks face several limitations. Popular games have extensive training data publicly available, lack standardized performance evaluation metrics and permit brute-force methods involving billions of simulations. Additionally, the developers building AI agents typically have prior knowledge of these games, unintentionally embedding their own insights into the solutions.
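To make the ARC task format described above concrete, here is a minimal sketch in Python. It assumes the publicly documented ARC layout of "train" and "test" lists of input/output grids of small integers (each integer encoding a colour); the toy task, the "mirror" rule and the helper functions are invented for illustration and are not items from the actual benchmark.

from typing import Dict, List

Grid = List[List[int]]

# Toy ARC-style task whose hidden rule is "mirror each grid left-to-right".
# The layout (a "train" list of input/output pairs plus a held-out "test"
# pair) mirrors the ARC JSON format; the grids themselves are made up.
toy_task: Dict[str, List[Dict[str, Grid]]] = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 4, 0]], "output": [[0, 4, 3]]},
    ],
    "test": [
        {"input": [[5, 0, 0], [0, 6, 0]], "output": [[0, 0, 5], [0, 6, 0]]},
    ],
}

def mirror_left_right(grid: Grid) -> Grid:
    # Candidate rule induced from the training pairs: reverse every row.
    return [list(reversed(row)) for row in grid]

def rule_explains_training(task: Dict[str, List[Dict[str, Grid]]]) -> bool:
    # A solver keeps a candidate rule only if it reproduces every training output.
    return all(mirror_left_right(pair["input"]) == pair["output"]
               for pair in task["train"])

if __name__ == "__main__":
    assert rule_explains_training(toy_task)
    # Apply the induced rule to the unseen test input, as a solver would.
    prediction = mirror_left_right(toy_task["test"][0]["input"])
    print(prediction == toy_task["test"][0]["output"])  # prints: True

This train-then-apply structure is what Kamradt describes as teaching a mini skill and asking the solver to demonstrate it; ARC-AGI-3 replaces this one-shot, stateless format with interactive game levels, where the rule has to be discovered through sequences of actions rather than from a pair of example grids.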

Missing in Action: The National Space Council

Politico | 3 days ago

WELCOME TO POLITICO PRO SPACE. We've made it through another surprisingly busy summer week. But, hey, one lucky buyer snagged the biggest Mars meteorite to ever land on Earth. In more serious news, the Senate and House united in opposing major NASA cuts. Now it's up to the White House to decide whether it will listen. And rumors are flying around the space industry about whether anyone actually wants to lead the National Space Council. Do you want to helm it? Email me at sskove@ with tips, pitches and feedback, and find me on X at @samuelskove. And remember, we're offering this newsletter for free over the next few weeks. After that, only POLITICO Pro subscribers will receive it.

The Spotlight

No one wants to run the National Space Council, if you believe the rumors rocketing through space circles that at least three people have declined the job. The reality is … more earthly. I broke the news in early May that the White House would restaff the council, which coordinates space policy across the federal government. The group is viewed as influential, in part because by statute the vice president chairs it. Two months after the decision, the administration still hasn't made any staffing announcements. The White House did not respond to my request for comment on what progress it has made in standing up the council, which consists of an executive secretary and several officials.

Rumor mill: Industry circles have filled the void with all manner of speculation. Four industry officials, granted anonymity to discuss private conversations, said they had heard that former Space Force Gen. Jay Raymond, ex-National Space Council executive secretary Scott Pace, and former Office of Space Commerce head Kevin O'Connell all declined the position. So I asked them. The gossip appears to have been just that. Raymond said he had not been offered the job and had no plans to return to government service. O'Connell said he had not been approached. Pace said he had no plans to go back.

Status check: It's quite possible the White House simply hasn't started the process of choosing an executive secretary. The National Space Council, and space in general, is usually pretty low on the agenda of new administrations. Chirag Parikh, the executive secretary for the Biden administration's National Space Council, didn't assume the role until eight months after former President Joe Biden took office. The attention of the executive branch is also focused on other pressing issues, from Ukraine to the recent passage of President Donald Trump's reconciliation bill. There's also the matter of choosing a full-time NASA administrator.

Give us a sign: The swirl of rumors may have more to do with the space industry's eagerness to believe the White House cares about space. Many in the space sector are supportive of reviving the National Space Council. O'Connell, the former Office of Space Commerce head, said he hoped it would get going soon so officials could tackle issues such as the Golden Dome missile defense shield and advancing the space economy. The House appropriations subcommittee that covers civil space voted this week for almost $2 million in funds for a council. But even if the search for staff is in full swing, it's a bit like finding a needle in a haystack. Any candidate must have experience with space issues, be ready to defend the administration's controversial space policy, be willing to forgo a lucrative private sector job, and have no ties to Trump's adversaries. That list includes Democrats, Elon Musk, and former Vice President Mike Pence, who headed the Space Council under the first Trump administration. For now, space enthusiasts may just have to cross their fingers and wait.

Galactic Government

ALL TOGETHER NOW: Both the House and Senate issued a clear 'no thanks' to steep White House cuts to NASA, presaging a political battle if the White House tries to bully its version through. The administration has proposed a nearly 25 percent cut to the agency. But the Senate appropriations subcommittee voted along partisan lines on Thursday to fund NASA at $24.9 billion, the same as in 2025. The split was due in part to a disagreement over a bill provision unrelated to NASA. Chair Jerry Moran (R-Kan.) told me last week that the bill would be a 'normal' appropriation. Ranking member Chris Van Hollen (D-Md.) said it would fund NASA science programs at $7.3 billion, the same as in 2025 and a rejection of the White House's proposed $3.4 billion cut. The House subcommittee that oversees NASA also voted this week for a budget on par with previous years at $24.8 billion. The House budget differs from 2025 in that it would boost space exploration by $2 billion and cut science funding by $1 billion. Democrats voiced opposition to the cuts to science programs.

What next: What happens now is anyone's guess. The White House could seek to push the cuts through anyway. But that would pick a political fight with the administration's Republican allies, most notably NASA supporter Ted Cruz (R-Texas), a potentially bruising battle for a few billion dollars.

Military

EYE IN THE SKY: Commercial satellite companies, take heart. The House Armed Services Committee this week voted to increase funding for a Space Force program that uses the businesses' spy photos. The Space Force effort, dubbed the Tactical Surveillance, Reconnaissance and Tracking Program, supplies commercial imagery to deployed forces and was used to help soldiers evacuate from Niger in 2024. The House National Defense Authorization Act would raise its funding by $10 million and turn the pilot program into a more permanent $50 million one.

Why it matters: Commercial satellite imagery companies, in a rare public outcry, protested proposed White House cuts to National Reconnaissance Office contracts for the companies' imagery. This is particularly key to Ukraine, which relies on U.S. commercial satellite pictures for its battle plans. But even if the companies lose clients as part of the White House cuts, they could gain some funding through the Space Force program.

The Reading Room

Musk's SpaceX Plans Share Sale That Would Value Company at About $400 Billion: Bloomberg
Lawmakers Want DoD Briefings on Nuke Propulsion, VLEO, Commercial PNT: Payload
Space Force sets guidelines prioritizing military missions as launch demand surges: SpaceNews
The ISS is nearing retirement, so why is NASA still gung-ho about Starliner?: Ars Technica

Event Horizon

MONDAY: NASA will hold a news conference on the joint U.S.-Indian Synthetic Aperture Radar (NISAR) satellite.

TUESDAY: The American Institute of Aeronautics and Astronautics' ASCEND 2025 conference starts in Las Vegas. The Space Foundation holds the 'Innovate Space: Global Economic Summit.' The Mitchell Institute hosts a webinar with Space Force Brig. Gen. Jacob Middleton.

Making Moves

Andrew Lock has joined the public policy team at Project Kuiper, Amazon's constellation of low-Earth orbit satellites. He most recently was principal at Monument Advocacy and was a staffer in both the House and Senate.

Footballer's career ended prematurely because of unnecessary procedure

Yahoo | 4 days ago

A former Premier League footballer's career came to a 'premature end' due to an unnecessary procedure carried out by a leading surgeon, the High Court has been told.

Ex-Wolverhampton Wanderers striker Sylvan Ebanks-Blake, 39, had surgery after he broke his left leg during a match against Birmingham City in April 2013. He alleges that during the operation to fix his leg, the surgeon, Professor James Calder, also performed procedures, including cleaning out the joint and removing some cartilage, that 'gave rise to inflammation' and sped up the development of osteoarthritis in his ankle. He also says the surgeon failed to properly inform him of the risks associated with the procedure. Prof Calder is defending the claims and denies that there was a lack of time for the footballer to weigh up his options.

In written submissions, Simeon Maskrey KC, representing Ebanks-Blake, said on Wednesday: 'The onset of symptoms and the development and acceleration of osteoarthritis brought the claimant's footballing career to a premature end.' Mr Maskrey said the footballer had suffered a previous ankle injury and, although this resulted in some 'stiffness', he had learned to adapt and it caused him no pain. He continued: 'The proposed procedure carried with it the significant risk that it would render the ankle symptomatic.' Mr Maskrey also told the court the surgery consent process was 'wholly inadequate', and Ebanks-Blake was given 'no opportunity of considering his options'. He added that had his client been told that 'wait and see' was a reasonable option, which ran the risk of the ankle becoming symptomatic and needing later intervention, 'he would have taken that risk'. Mr Maskrey said it was for the court to decide whether Ebanks-Blake 'was provided with sufficient information to provide informed consent'.

Martin Forde KC, representing the surgeon, said in written submissions: 'It is arguable that if Professor Calder had done anything other than what he did do, he would have been negligent for not dealing with the loose fragments and unstable cartilage.' He continued: 'The defendant's position is that through his judgment and skill he prolonged the claimant's career. The claimant's case quite clearly is that his career was curtailed.' Mr Forde also told the court that Ebanks-Blake's witness statement is the earliest indication of his 'dissatisfaction', and that before this he had made positive comments about his recovery from injury. He added: 'Far from curtailing the claimant's career, the defendant will argue that his clinical skills prolonged the career of a professional player who suffered a very serious injury.' Mr Forde told the court that after the surgery, Ebanks-Blake continued playing football for a number of years, retiring in 2019.

The case before Mrs Justice Lambert is due to conclude on Tuesday, July 22.
