Hull hospitals scan more patients with AI technology

BBC News · 12 hours ago
Staff at NHS hospitals in Hull said AI technology had cut MRI scan times, allowing them to see more patients.

Hull University Teaching Hospitals NHS Trust said the software used algorithms to reduce background noise, helping to achieve sharper images in a shorter time.

Karen Bunker, head of imaging, said: "This means we can reduce the scanning time on certain sequences, but still get the same imaging quality."

The software has been installed at Hull Royal Infirmary and Castle Hill Hospital and will also be introduced at Scunthorpe General Hospital and Diana, Princess of Wales Hospital in Grimsby.
The Air Recon Deep Learning (ARDL) software was installed on the hospitals' existing MRI machines. Staff said it was cutting between 10 and 15 minutes from average scan times.

A routine MRI head scan used to take 30 minutes but now takes 20, the trust said, while a prostate scan now takes 30 minutes instead of 45. The trust added it could now scan 31 lumbar spine patients over a 12-hour period, instead of 21 before.

Ms Bunker said: "People who struggle with claustrophobia or those with learning disabilities, who previously couldn't tolerate a scan, are finding they are able to endure the shorter scan times."

Staff also reported that fewer children needed a general anaesthetic to get through a scan.

Related Articles

Context Engineering for Financial Services: By Steve Wilcockson

Finextra · an hour ago

The hottest discussion in AI right now, at least the one not about agentic AI, is how "context engineering" matters more than prompt engineering: how you give AI the data and information it needs to make decisions, and why it cannot (and must not) be a solely technical function.

"'Context' is actually how your company operates; the ideal versions of your reports, documents & processes that the AI can use as a model; the tone & voice of your organization. It is a cross-functional problem." So says renowned tech influencer and Wharton School associate professor Ethan Mollick. He in turn cites fellow tech influencer Andrej Karpathy on X, who in turn cites Tobi Lutke, CEO of Shopify: "It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM."

The three together - Mollick, Karpathy and Lutke - make for a powerful triumvirate of tech influencers. Karpathy consolidates the subject nicely. He emphasizes that in real-world, industrial-strength LLM applications, the challenge is filling the model's context window with just the right mix of information. He thinks of context engineering as both a science - because it involves structured systems, system-level thinking, data pipelines, and optimization - and an art, because it requires intuition about how LLMs interpret and prioritize information. His analysis reflects two of my predictions for 2025: one highlighting the increasing impact of uncertainty, the other a growing appreciation of knowledge.

Tech mortals offered further useful comments on the threads, two of my favorites being:

"Owning knowledge no longer sets anyone apart; what matters is pattern literacy - the ability to frame a goal, spot exactly what you don't know, and pull in just the right strands of information while an AI loom weaves those strands into coherent solutions."

"It also feels like 'leadership' Tobi. How to give enough information, goal and then empower."

I love the AI loom analogy, partly because it corresponds with one of my favorite data descriptors, the "contextual fabric". I like the leadership positivity too, because the AI looms and contextual fabrics are led and empowered by humanity.

Here's my spin, to take or leave. Knowledge, based on data, isn't singular; it's contingent, contextual. Knowledge, and thus the contextual fabric of data in which it is embedded, is ever-changing, constantly shifting, dependent on situations and needs. I believe knowledge is shaped by who speaks, who listens, and what about. That is, to a large extent, led by power and the powerful. Whether in Latin, science, religious education, finance and now AI, what counts as "truth" is often a function of who gets to tell the story. It's not just about what you know, but how, why, and where you know it, and who told you it. But of course it's not that simple; agency matters - the peasant can become an abbot, the council-house schoolgirl can become a Nobel prize-winning scientist, a frontier barbarian can become a Roman emperor. For AI, on one hand truth is held by the big tech firms and grounded in their biases; on the other, it's democratizing, in that all of us and our experiences help train and ground AI, in theory at least. I digress.

For AI-informed decision intelligence, context will likely be the new computation, making GenAI tooling more useful than an oft-hallucinating stochastic parrot while enhancing traditional AI - predictive machine learning, for example - to be increasingly relevant and affordable for the enterprise.

Context Engineering for FinTech

Context engineering - the art of shaping the data, metadata, and relationships that feed AI - may become the most critical discipline in tech. This is gold for those of us in the FinTech data engineering space, because we're the dudes helping you create your own context.
I'll explore how five different contextual approaches, all representing data-engineering-relevant vendors I have worked for - technical computing, vector-based, time-series, graph and geospatial platforms - can support context engineering.

Parameterizing with Technical Computing

Technical computing tools - think R, Julia, MATLAB and Python's SciPy stack - can integrate domain-specific data directly into the model's environment through structured inputs, simulations, and real-time sensor data, normally as vectors, tables or matrices. For example, in engineering or robotics applications, an AI model can be fed contextual information such as system dynamics, environmental parameters, or control constraints. The model can then make decisions that are not just statistically sound but also physically meaningful within the modeled system. Such tools can dynamically update the context window of an AI model, for example in scenarios like predictive maintenance or adaptive control, where AI must continuously adapt to new data. By embedding contextual cues - historical trends, operational thresholds, or user-defined rules - they help ground the model's outputs in the specific realities of the task or domain.

Financial Services Use Cases

Quantitative Strategy Simulation: Simulate trading strategies and feed results into an LLM for interpretation or optimization.

Stress Testing Financial Models: Run Monte Carlo simulations or scenario analyses and use the outputs to inform LLMs about potential systemic risks.

Vectors and the Semantics of Similarity

Vector embeddings are closely related to the linear algebra of technical computing, but they bring semantic context to the table. Typically stored in so-called vector databases, they encode meaning into high-dimensional space, allowing AI to retrieve through search not just exact matches but conceptual neighbors. They thus allow for multiple stochastically arranged answers, not just one.
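As a toy illustration of that similarity idea, here is a minimal sketch of embedding-based retrieval. The hashed bag-of-words `embed` function is a crude, deterministic stand-in for a trained embedding model, and the snippets are invented; a production system would use a real encoder and a vector database.

```python
import zlib

import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hashed bag-of-words: a deterministic toy stand-in for a trained
    # embedding model, used only to make the sketch runnable.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec


def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    # On unit-length vectors, cosine similarity is just a dot product.
    q = embed(query)
    scored = sorted(((float(q @ embed(d)), d) for d in docs), reverse=True)
    return [d for _, d in scored[:k]]


# Invented support snippets standing in for a vector store's contents.
docs = [
    "Chargeback filed after a duplicate card transaction",
    "Customer asks how to reset an online banking password",
    "Suspicious wire transfer flagged for manual review",
]
context = top_k("possible fraud on a wire payment", docs)
```

The retrieved `context` list is what would be spliced into the LLM's prompt; conceptual neighbors surface even when no document matches the query verbatim.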
Until recently, vector embeddings and vector databases have been the primary providers of enterprise context to LLMs, shoehorning all types of data into searchable mathematical vectors. Their downside is their brute-force, compute-intensive approach to storing and searching data. That said, they use similar transfer learning approaches - and deep neural nets - to those that drive LLMs. As expensive, powerful, brute-force vehicles of Retrieval-Augmented Generation (RAG), vector databases don't simply store documents but understand them, and they have an increasingly proven place in enabling LLMs to ground their outputs in relevant, contextualized knowledge.

Financial Services Use Cases

Customer Support Automation: Retrieve similar past queries, regulatory documents, or product FAQs to inform LLM responses in real time.

Fraud Pattern Matching: Embed transaction descriptions and retrieve similar fraud cases to help the model assess risk or flag suspicious behavior.

Time-Series, Temporal and Streaming Context

Time-series database and analytics providers - and in-memory and columnar databases that can organize their data structures by time - specialize in knowing about the when. They can ensure temporal context - the heartbeat of many use cases in financial markets as well as IoT and edge computing - grounds AI at the right time with time-denominated sequential accuracy. Streaming systems like Kafka and Flink can also serve as the real-time central nervous systems of financial event-based systems. It's not just about having access to time-stamped data, but about analyzing it in motion, enabling AI to detect patterns, anomalies, and causality as close as possible to real time. In context engineering, this is gold. Whether it's fraud that happens in milliseconds or sensor data populating insurance telematics, temporal granularity can be the difference between insight and noise, with context stored and delivered by what some might see as a data timehouse.
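To make the temporal idea concrete, here is a minimal sketch (with made-up prices) of flagging out-of-pattern points in a series before handing them to a model as context. The rolling z-score rule is an illustrative choice, not any vendor's method.

```python
import statistics


def rolling_anomalies(series: list[float], window: int = 5, z: float = 3.0) -> list[int]:
    """Return indices whose value deviates more than `z` standard
    deviations from the mean of the preceding `window` points."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu = statistics.mean(past)
        sigma = statistics.stdev(past)
        if sigma and abs(series[i] - mu) / sigma > z:
            flagged.append(i)
    return flagged


# Synthetic price series with one obvious spike at index 7.
prices = [100.0, 100.2, 99.9, 100.1, 100.0, 100.2, 100.1, 112.0, 100.3, 100.1]
anomalies = rolling_anomalies(prices)
```

Only the flagged timestamps and their surrounding values need to enter the context window, which is what keeps temporal grounding affordable: the model explains the anomaly rather than scanning the whole stream.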
Financial Services Use Cases

Market Anomaly Detection: Injecting real-time price, volume, and volatility data into an LLM's context allows it to detect and explain unusual market behavior.

High-Frequency Trading Insights: Feed LLMs with microsecond-level trade data to analyze execution quality or latency arbitrage.

Graphs That Know Who's Who

Graph and relationship-focused providers play a powerful role in context engineering by structuring and surfacing relationships between entities that would otherwise stay hidden in raw data. For large language models (LLMs), graph platforms can dynamically populate the model's context window with relevant, interconnected knowledge - such as relationships between people, organizations, events, or transactions. They enable the model to reason more effectively, disambiguate entities, and generate responses grounded in a rich, structured understanding of the domain. Graphs can act as a contextual memory layer through GraphRAG and Contextual RAG, ensuring that the LLM operates with awareness of the most relevant and trustworthy information. For example, graph databases - or other environments, such as Spark, that can store graph data types in accessible file formats like Parquet on HDFS - can be used to retrieve a subgraph of relevant nodes and edges based on a user query, which can then be serialized into natural language or structured prompts for the LLM. Platforms that focus graph context around entity resolution and contextual decision intelligence can enrich the model's context with high-confidence, real-world connections - especially useful in domains like fraud detection, anti-money laundering, or customer intelligence. Think of them as Shakespeare's Comedy of Errors meets Netflix's Department Q: two Antipholuses and two Dromios rather than one of each in Comedy of Errors, only one Jennings brother to investigate in Department Q's case, and where does Kelly MacDonald fit into anything?
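The subgraph-to-prompt step above can be sketched as follows. The triple store, entities and relations are all invented stand-ins for a real graph database; a production system would run a graph query rather than a list scan.

```python
# Toy relationship triples; every entity and relation is invented.
EDGES = [
    ("Acme Ltd", "director", "J. Smith"),
    ("J. Smith", "director", "Beta Holdings"),
    ("Beta Holdings", "account_at", "Offshore Bank X"),
    ("Acme Ltd", "supplier", "Gamma GmbH"),
]


def neighbourhood(entity: str, hops: int = 2) -> list[tuple[str, str, str]]:
    # Expand outwards from the entity, collecting reachable nodes,
    # then keep every edge whose endpoints were both reached.
    nodes = {entity}
    for _ in range(hops):
        for s, _, o in EDGES:
            if s in nodes or o in nodes:
                nodes.update((s, o))
    return [(s, r, o) for s, r, o in EDGES if s in nodes and o in nodes]


def to_prompt(triples: list[tuple[str, str, str]]) -> str:
    # Serialize the subgraph into plain-text lines an LLM can read.
    return "\n".join(f"{s} --{r}--> {o}" for s, r, o in triples)


subgraph = neighbourhood("Acme Ltd")
context_block = to_prompt(subgraph)
```

Handing the model `context_block` instead of raw records is the GraphRAG move: the multi-hop link from Acme Ltd to Offshore Bank X arrives pre-resolved, so the LLM reasons over relationships rather than rediscovering them.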
Entity resolution and graph context can help resolve and connect such entangled entities in a way that more standard data repositories and analytics tools struggle with. LLMs cannot function without correct and contingent knowledge of people, places, things and the relationships between them, though to be sure, many types of AI can also help discover the connections and resolve entities in the first place.

Financial Services Use Cases

AML and KYC Investigations: Surface hidden connections between accounts, transactions, and entities to inform LLMs during risk assessments.

Credit Risk Analysis: Use relationship graphs to understand borrower affiliations, guarantors, and exposure networks.

Seeing the World in Geospatial Layers

Geospatial platforms support context engineering by embedding spatial awareness into AI systems, enabling them to reason about location, proximity, movement, and environmental context. They provide rich, structured data layers (e.g., terrain, infrastructure, demographics, weather) that can be dynamically retrieved and injected into an LLM's context window. This allows the model to generate responses that are not only linguistically coherent but also geographically grounded. For example, in disaster response, a geospatial platform can provide real-time satellite imagery, flood zones, and population density maps. This data can be translated into structured prompts or visual inputs for an AI model tasked with coordinating relief efforts or summarizing risk. Similarly, in urban planning or logistics, geospatial context helps the model understand constraints like traffic patterns, zoning laws, or accessibility. In essence, geospatial platforms act as a spatial memory layer, enriching the model's understanding of the physical world and enabling more accurate, context-aware decision-making.

Financial Services Use Cases

Branch Network Optimization: Combine demographic, economic, and competitor data to help LLMs recommend new branch locations.
Climate Risk Assessment: Integrate flood zones, wildfire risk, or urban heat maps to evaluate the environmental exposure of mortgage and insurance portfolios.

Context Engineering Beyond the Limits of Data, Knowledge & Truths

Context engineering, I believe, recognizes that data is partial, and that knowledge - and perhaps truth, or truths - needs to be situated, connected, and interpreted. Whether through graphs, time-series, vectors, technical computing platforms, or geospatial layering, AI depends on weaving the right contextual strands together. Where AI represents the loom, the five types of platforms I describe are like the spindles, needles, and dyes, drawing on their respective contextual fabrics of ever-changing data and driving threads of knowledge - contingent, contextual, and ready for action.
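As a closing sketch of the geospatial idea, here is the kind of distance-to-hazard fact a spatial layer might inject into a model's context. The coordinates are invented, and the haversine helper is a generic formula rather than any platform's API.

```python
import math


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance between two (lat, lon) points in kilometres.
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


# Invented coordinates: a mortgaged property and a flood-zone centroid.
prop_lat, prop_lon = 53.745, -0.336
zone_lat, zone_lon = 53.740, -0.330

dist = haversine_km(prop_lat, prop_lon, zone_lat, zone_lon)
# A one-line spatial fact like this can be prepended to the model's prompt.
spatial_fact = f"Property lies {dist:.2f} km from the nearest mapped flood zone."
```

The point is not the trigonometry but the translation: a geometric relationship becomes a plain-language statement the LLM can weigh alongside financial data when assessing portfolio exposure.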

Why thousands of women diagnosed with ADHD could really be suffering from iron deficiency - that can easily be solved by taking this simple supplement

Daily Mail · 3 hours ago

For as long as Josie Heath-Smith can remember, she has suffered from brain fog, fatigue and an inability to concentrate. There were also the 'debilitating' periods of hyper-fixation. Josie, 44, explains: 'I'd swing from being completely unable to focus – at work I'd drift off whenever anyone tried to explain anything – to staying up all night obsessively focused on a single task. It was always something random, like putting up a shelving unit at 4am. With two kids, the cycle left me completely burnt out.'

More than 100 NHS doctors warn Wes Streeting to honour promise to roll out screening clinics for osteoporosis - or risk thousands of preventable deaths

Daily Mail · 3 hours ago

More than 100 leading NHS doctors have warned Health Secretary Wes Streeting that failing to honour his pre-election commitment to urgently roll out screening clinics for osteoporosis risks causing thousands of preventable deaths.

In a letter shared exclusively with The Mail on Sunday, the medics and healthcare workers say tens of thousands of people with the bone-thinning disease are 'slipping through the net' because of a postcode lottery of care - and 2,500 may have needlessly died in the last 12 months alone - because of a failure by the Government to prioritise tackling the crisis.

Mr Streeting told this newspaper that one of his first acts in Government would be to publish a plan to roll out Fracture Liaison Services, or FLS, across all parts of the country. These vital services, which require no new equipment, represent a gold standard in the early diagnosis of the condition and would mean everyone over 50 who breaks a bone could be screened for the disease and given bone-preserving drugs - potentially saving thousands of people from serious injury, disability and an early death.

But a year on from that promise, the Government now says FLS will not be available everywhere until 2030. The Government reiterated its commitment to funding the clinics in its 10-Year Health Plan, published on Thursday, but the doctors say any further delay will cause 'harm' to patients - and that officials have still failed to reveal how they plan to achieve the rollout.

The doctors, part of the Royal Osteoporosis Society's clinical network who are backing the charity's campaign for universal FLS, wrote: 'Late diagnosis of osteoporosis leads to avoidable fractures, loss of independence, long-term disability and, in many cases, premature death.

'A properly functioning FLS catches patients after their first fracture, enabling early diagnosis and access to proven, cost-effective treatments. They are a classic example of driving the shift from sickness to prevention.

'However, the current postcode lottery for FLS provision means tens of thousands are slipping through the net. Each year that this remains unaddressed, an estimated 2,500 more people die following hip fractures that could have been prevented.

'Any delay in implementation of this important policy will cost lives.'

Shadow Health Secretary Edward Argar also called on Mr Streeting to deliver his pledge 'right away, with no more delays'. Mr Argar said: 'I'm calling on the Health Secretary to set out an action plan with a clear timetable, starting now, for the roll-out of FLS to all areas, something which will not only help thousands of people with osteoporosis but will also be saving the NHS money within a few years.'

Osteoporosis affects around 3.5 million people in the UK and causes bones to thin and weaken, leading to fractures. Most people are only diagnosed after breaking several bones, but in hospitals which have FLS, patients attending with fractures can be screened for osteoporosis with a bone density test called a DEXA scan. Only half of NHS Trusts in England currently have FLS, and rolling it out is estimated to cost £30 million. Osteoporosis-related fractures have cost the British economy more than £142 million since last July. The Mail on Sunday has been running a War on Osteoporosis campaign to make FLS universal.

Craig Jones, chief executive of the Royal Osteoporosis Society, said: 'This is a preventative model that's tried and tested, ready to go, and capable of delivering savings before the middle of this Parliament. We welcome inclusion in the 10 Year Plan and now call for a speedy implementation plan so we can protect patients and save lives.'
