
Latest news with #imageprocessing

QT Imaging Leverages NVIDIA L40 GPU Acceleration to Power its Next-Gen Breast Imaging Reconstruction Software

Yahoo · Business · 25-06-2025

– New software release delivers substantial image processing time improvements –

– Faster image availability enables radiologists to review and interpret QTI breast scans more quickly, supporting earlier clinical decision-making –

– Image fidelity and diagnostic accuracy are maintained, in addition to the significant processing speed gains –

NOVATO, Calif., June 25, 2025--(BUSINESS WIRE)--QT Imaging Holdings, Inc. (OTCQB: QTIH), a medical device company engaged in research, development, and commercialization of innovative body imaging systems, is pleased to announce its latest image reconstruction software update, version 4.4.0, which delivers a substantial reduction in QTscan™ image processing time, improving user throughput and overall efficiency. The software update was developed leveraging NVIDIA's L40 GPU, powered by the Ada Lovelace architecture.

The U.S. FDA-cleared QTI Breast Acoustic CT™ scanner is the first non-invasive breast imaging technology that provides a true 3D image of the breast anatomy without compression, contrast administration, or harmful ionizing radiation. Despite the processing speed gains achieved via the version 4.4.0 software update, QTscan breast image fidelity and diagnostic accuracy are uncompromised, and image quality remains best-in-class, with a resolution similar to magnetic resonance imaging (MRI).

Designed to meet the needs of busy breast imaging centers, the software update ensures minimal downtime and seamless integration into existing workflows. Accordingly, it enhances scalability for institutions expanding their QTI breast screening programs, ensuring consistent performance across increasing patient volumes.

"This latest software release reflects direct input from radiologists and clinical partners, underscoring our commitment to continuous innovation based on real-world use," said QTI's CEO, Dr. Raluca Dinu.
"Faster image availability enables radiologists to review and interpret QTI scans more quickly, supporting earlier clinical decision-making. And accelerated processing reduces wait times for results, contributing to a smoother and more reassuring patient journey."

The QTI team continues to innovate by leveraging proprietary machine learning algorithms not only to accelerate image processing via the version 4.4.0 software update, but also to actively reduce scanning time, a key focus area in ongoing development. Version 4.4.0 is optimized for systems utilizing NVIDIA's L40 GPUs and remains fully backward-compatible with earlier QTI system hardware, ensuring broad accessibility through scalable, software-driven architecture.

About QT Imaging Holdings, Inc.

QT Imaging Holdings, Inc. is a public (OTCQB: QTIH) medical device company engaged in research, development, and commercialization of innovative body imaging systems using low-frequency sound waves. QT Imaging Holdings, Inc. strives to improve global health outcomes. Its strategy is predicated upon the fact that medical imaging is critical to the detection, diagnosis, and treatment of disease and that it should be safe, affordable, accessible, and centered on the patient's experience. For more information, please visit the company's website. Breast Acoustic CT™ is a trademark of an affiliate of QT Imaging Holdings, Inc.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended.
Such statements contain words such as "will" and "expect," or the negative thereof or comparable terminology, and include (without limitation) statements regarding the presentation and performance of the QT Imaging Breast Acoustic CT imaging technology, including the software update, plans for QT Imaging Holdings, new product development and introduction, and product sales growth and projected revenues. Forward-looking statements involve certain risks and uncertainties, and actual results may differ materially from those discussed in any such statement. These risks include, but are not limited to: research results from the use of the QT Imaging Breast Acoustic CT Scanner; the ability of QT Imaging Holdings to sell and deploy the QT Imaging Breast Acoustic CT Scanner; the ability to extend product offerings into new areas or products; the ability to commercialize technology; unexpected occurrences that deter the full documentation and "bring to market" plan for products; trends and fluctuations in the industry; changes in demand and purchasing volume of customers; unpredictability of suppliers; the ability to attract and retain qualified personnel; and the ability to move product sales to production levels. Additional factors that could cause actual results to differ are discussed under the heading "Risk Factors" and in other sections of QT Imaging Holdings' filings with the SEC, and in its other current and periodic reports filed or furnished from time to time with the SEC. All forward-looking statements in this press release are made as of the date hereof, based on information available to QT Imaging Holdings as of the date hereof, and QT Imaging Holdings assumes no obligation to update any forward-looking statement.

Contacts

For media inquiries, please contact:
Stephen Kilmer
Head of Investor Relations
Direct: (646) 274-3580

Astronomy has a major data problem – simulating realistic images of the sky can help train algorithms

Yahoo · Science · 23-06-2025

Professional astronomers don't make discoveries by looking through an eyepiece like you might with a backyard telescope. Instead, they collect digital images with massive cameras attached to large telescopes. Just as you might have an endless library of digital photos stored in your cellphone, many astronomers collect more photos than they would ever have the time to look at.

Instead, astronomers like me look at some of the images, then build algorithms and later use computers to combine and analyze the rest. But how can we know that the algorithms we write will work, when we don't even have time to look at all the images? We can practice on some of the images, but one new way to build the best algorithms is to simulate some fake images as accurately as possible. With fake images, we can customize the exact properties of the objects in the image. That way, we can see if the algorithms we're training can uncover those properties correctly.

My research group and collaborators have found that the best way to create fake but realistic astronomical images is to painstakingly simulate light and its interaction with everything it encounters. Light is composed of particles called photons, and we can simulate each photon. We wrote a publicly available code to do this called the photon simulator, or PhoSim.

The goal of the PhoSim project is to create realistic fake images that help us understand where distortions in images from real telescopes come from. The fake images help us train programs that sort through images from real telescopes. And the results from studies using PhoSim can also help astronomers correct distortions and defects in their real telescope images.

But first, why is there so much astronomy data in the first place? This is primarily due to the rise of dedicated survey telescopes. A survey telescope maps out a region on the sky rather than just pointing at specific objects.
These observatories all have a large collecting area, a large field of view and a dedicated survey mode to collect as much light over a period of time as possible. Major surveys from the past two decades include the SDSS, Kepler, Blanco-DECam, Subaru HSC, TESS, ZTF and Euclid.

The Vera Rubin Observatory in Chile has recently finished construction and will soon join those. Its survey begins soon after its official 'first look' event on June 23, 2025. It will have a particularly strong set of survey capabilities. The Rubin observatory can look at a region of the sky all at once that is several times larger than the full Moon, and it can survey the entire southern celestial hemisphere every few nights.

A survey can shed light on practically every topic in astronomy. Some of the ambitious research questions include: making measurements about dark matter and dark energy, mapping the Milky Way's distribution of stars, finding asteroids in the solar system, building a three-dimensional map of galaxies in the universe, finding new planets outside the solar system and tracking millions of objects that change over time, including supernovas.

All of these surveys create a massive data deluge. They generate tens of terabytes every night – that's millions to billions of pixels collected in seconds. In the extreme case of the Rubin observatory, if you spent all day long looking at images equivalent to the size of a 4K television screen for about one second each, you'd be looking at them 25 times too slow and you'd never keep up. At this rate, no individual human could ever look at all the images. But automated programs can process the data.

Astronomers don't just survey an astronomical object like a planet, galaxy or supernova once, either. Often we measure the same object's size, shape, brightness and position in many different ways under many different conditions. But more measurements do come with more complications.
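The "25 times too slow" claim is easy to sanity-check with a back-of-the-envelope calculation. The inputs below are assumptions for illustration (the article doesn't state the exact figures): roughly 20 terabytes of images per night at about 2 bytes per pixel, versus a viewer inspecting one 4K frame per second around the clock.

```python
# Back-of-the-envelope check of the "you'd never keep up" claim.
# All numeric inputs are assumptions for illustration, not Rubin's exact specs.
BYTES_PER_NIGHT = 20e12          # assume ~20 TB of image data per night
BYTES_PER_PIXEL = 2              # assume 16-bit pixels
PIXELS_4K = 3840 * 2160          # pixels in one 4K frame
SECONDS_PER_DAY = 86_400

pixels_collected = BYTES_PER_NIGHT / BYTES_PER_PIXEL
pixels_viewed = PIXELS_4K * SECONDS_PER_DAY   # one 4K frame per second, all day
shortfall = pixels_collected / pixels_viewed  # how far behind the viewer falls
```

With these assumed inputs the viewer falls behind by a factor of roughly 14; the article's factor of 25 presumably reflects the observatory's actual data volume, but either way the conclusion stands: no individual could keep up.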
For example, measurements taken under certain weather conditions or on one part of the camera may disagree with others at different locations or under different conditions. Astronomers can correct these errors – called systematics – with careful calibration or algorithms, but only if we understand the reason for the inconsistency between different measurements. That's where PhoSim comes in. Once corrected, we can use all the images and make more detailed measurements.

To understand the origin of these systematics, we built PhoSim, which can simulate the propagation of light particles – photons – through the Earth's atmosphere and then into the telescope and camera. PhoSim simulates the atmosphere, including air turbulence, as well as distortions from the shape of the telescope's mirrors and the electrical properties of the sensors. The photons are propagated using a variety of physics that predict what photons do when they encounter the air and the telescope's mirrors and lenses. The simulation ends by collecting electrons that have been ejected by photons into a grid of pixels, to make an image. Representing the light as trillions of photons is computationally efficient and an application of the Monte Carlo method, which uses random sampling.

Researchers used PhoSim to verify some aspects of the Rubin observatory's design and estimate how its images would look. The results are complex, but so far we've connected the variation in temperature across telescope mirrors directly to astigmatism – angular blurring – in the images. We've also studied how high-altitude turbulence in the atmosphere that can disturb light on its way to the telescope shifts the positions of stars and galaxies in the image and causes blurring patterns that correlate with the wind. We've demonstrated how the electric fields in telescope sensors – which are intended to be vertical – can get distorted and warp the images.
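The Monte Carlo idea described above can be illustrated with a toy sketch (this is not PhoSim itself, and every parameter here is an assumption): each photon from a point source receives a random angular kick from atmospheric turbulence plus a wind-correlated shift, and the photons are then binned into a pixel grid to form an image.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_star_image(n_photons=200_000, npix=32, pixel_scale=0.2,
                        seeing_fwhm=0.7, wind_shift=(0.05, 0.0)):
    """Toy Monte Carlo in the spirit of PhoSim: each photon from a point
    source gets a random Gaussian deflection (turbulence) plus a small
    wind-correlated shift, then lands on a CCD-like pixel grid.
    Angles are in arcseconds; all parameter values are illustrative."""
    sigma = seeing_fwhm / 2.355                      # Gaussian sigma from FWHM
    # Atmospheric turbulence: independent random angular kick per photon
    dx = rng.normal(0.0, sigma, n_photons) + wind_shift[0]
    dy = rng.normal(0.0, sigma, n_photons) + wind_shift[1]
    # "Sensor": bin photon arrival positions into pixels (counts per pixel)
    half = npix * pixel_scale / 2
    image, _, _ = np.histogram2d(dx, dy, bins=npix,
                                 range=[[-half, half], [-half, half]])
    return image

img = simulate_star_image()
peak = np.unravel_index(img.argmax(), img.shape)   # brightest pixel
```

Because the wind shift moves every photon the same way, the star's image lands slightly off-center, mimicking the wind-correlated position shifts the article describes; a real simulator tracks far more physics per photon (refraction, mirror shape, sensor fields), but the random-sampling skeleton is the same.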
Researchers can use these new results to correct their measurements and better take advantage of all the data that telescopes collect. Traditionally, astronomical analyses haven't worried about this level of detail, but the meticulous measurements with the current and future surveys will have to. Astronomers can make the most out of this deluge of data by using simulations to achieve a deeper level of understanding.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: John Peterson, Purdue University

Read more:
  • How the Hubble Space Telescope opened our eyes to the first galaxies of the universe
  • Astronomy's 10-year wish list: Big money, bigger telescopes and the biggest questions in science
  • Dark energy may have once been 'springier' than it is today − DESI cosmologists explain what their collaboration's new measurement says about the universe's history

John Peterson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

VeriSilicon's AI-ISP Custom Chip Solution Enables Mass Production of Customer's Smartphones

National Post · Business · 12-06-2025

Providing architecture design, software-hardware co-development, and mass production support, and enhancing AI-powered imaging

SHANGHAI — VeriSilicon recently announced that its AI-ISP custom chip solution has been successfully adopted in a customer's mass-produced smartphones, reaffirming the company's comprehensive one-stop custom silicon service capabilities in AI vision processing.

VeriSilicon's AI-ISP custom chip solution can integrate proprietary or third-party Neural Network Processing Unit (NPU) IP and Image Signal Processing (ISP) IP. By combining traditional image processing techniques with AI algorithms, it significantly enhances image and video clarity, dynamic range, and environmental adaptability. The chip solution offers flexible configurations with RISC-V or Arm-based processors, supports MIPI image input/output interfaces, provides LPDDR5/4X memory integration capability, and is compatible with common peripheral interfaces such as UART, I2C, and SDIO. This makes the solution highly adaptable for deployment across various applications including smartphones, surveillance systems, and automotive electronics.

For this collaboration, VeriSilicon designed a low-power AI-ISP system-on-chip (SoC) based on the RISC-V architecture, tailored to the customer's specific requirements. It also included a FreeRTOS real-time Software Development Kit (SDK). The customized SoC was fully optimized for seamless interoperability with the customer's main processor platform and has since been successfully deployed in multiple smart devices, achieving large-scale production. This success highlights VeriSilicon's robust capabilities in heterogeneous computing, software-hardware co-optimization, and system-level integration and verification.
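The hybrid approach described above, classic ISP stages followed by a learned stage on the NPU, can be sketched in a few lines. This is a hypothetical illustration, not VeriSilicon's pipeline: the "NPU denoiser" is stood in for by a simple box filter, where a real AI-ISP would run a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def classic_isp(raw, gain=1.5, gamma=2.2):
    """Traditional ISP stages: digital gain, clipping, gamma correction."""
    x = np.clip(raw * gain, 0.0, 1.0)
    return x ** (1.0 / gamma)

def npu_denoise(img, k=3):
    """Stand-in for the NPU's learned denoiser: a k-by-k box filter.
    In a real AI-ISP this stage would be a trained network on the NPU."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):                 # accumulate shifted copies...
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)                # ...and average them

# Simulated noisy sensor readout (grey frame with Gaussian noise)
raw = np.clip(0.4 + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
processed = npu_denoise(classic_isp(raw))
```

The design point the press release emphasizes is exactly this split: deterministic, well-understood stages stay in fixed-function ISP hardware, while the content-adaptive enhancement (denoising, dynamic range, low light) is offloaded to the NPU.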
'AI-powered imaging has become a key differentiator in the competitive smartphone market, driving increasing demand for high-performance and low-power image processing solutions,' said Wiseway Wang, Executive Vice President and General Manager of the Custom Silicon Platform Division at VeriSilicon. 'With full-spectrum capabilities ranging from IP licensing and chip architecture design to system-level software and hardware development, tape-out, packaging and testing, as well as mass production, VeriSilicon offers end-to-end custom silicon services leveraging its extensive design service experience and proven mass production capabilities. The successful mass production of this customer's chip further validates our strength in high-end silicon design services. Moving forward, we will continue to innovate and improve our offerings, empowering customers to accelerate the launch of differentiated products with efficient, high-quality custom chip solutions.'

Award-winning Chinese radar expert Li Chunsheng dies aged 62 at conference

South China Morning Post · Science · 22-05-2025

The leading Chinese radar expert Li Chunsheng has died at the age of 62. Li, a professor at the school of electronic information engineering of Beihang University in Beijing, became ill last week at a conference hosted by the China Radar Industry Association in Hefei, the capital of the eastern province of Anhui. An obituary published by his university on Monday said he had 'devoted himself to developing China's aerospace industry'.

Li specialised in synthetic aperture radar, with a particular focus on image processing and methods to enhance the quality of the images produced. His university said he had made 'systematic and significant contributions' to the development of the technology and that as a teacher he had helped cultivate top talent in the aerospace remote sensing sector.

Li also held positions in a number of national military and aerospace organisations, including the Central Military Commission's science and technology committee. He published more than 80 academic papers and four monographs, and served as team leader and chief scientist on many major national research projects, including the National High-Tech Programme and National Basic Research Programme.
