Latest news with #EmbeddedVisionSummit


Korea Herald
26-05-2025
- Business
- Korea Herald
Nota AI Demonstrates On-Device AI Breakthrough at Embedded Vision Summit 2025 in Collaboration with Qualcomm AI Hub
SEOUL, South Korea, May 26, 2025 /PRNewswire/ -- Nota AI, a global leader in AI optimization, showcased its latest edge AI innovations alongside Qualcomm Technologies, Inc. at the Embedded Vision Summit 2025, held May 20–22 in Santa Clara, California. The Embedded Vision Summit is a prominent global conference for innovators incorporating computer vision and AI into products, attended by more than 70 companies and over 1,400 industry experts worldwide.

Nota AI prominently featured its collaboration with Qualcomm Technologies, highlighting how its proprietary AI model optimization platform, NetsPresso®, has been optimized for use with the Qualcomm® AI Hub. Both companies used video presentations at their booths to demonstrate the efficiency and scalability gains achieved through this collaboration. Nota AI's CTO, Tae-Ho Kim, expanded on these advancements in a Qualcomm Technologies-hosted Deep Dive Session, detailing how the integrated platforms streamline the workflow for developing and deploying AI models on edge devices. "This collaboration shows how we're making edge AI deployment faster, lighter, and more efficient," said Tae-Ho Kim, CTO of Nota AI. "We're excited to deepen our collaboration with Qualcomm Technologies and extend our reach across global edge and IoT applications."

Nota AI also unveiled the NetsPresso Optimization Studio, the latest enhancement to its AI model optimization platform, NetsPresso®. Optimization Studio offers an intuitive, visual interface designed to simplify AI model optimization: developers can quickly inspect the layer-level details and performance metrics needed for efficient quantization, enabling rapid, data-driven decisions based on measurements from the actual target device.

Also featured was Nota Vision Agent (NVA), a generative AI-based video analytics solution. NVA enables real-time video event detection, natural language video search, and automated report generation, helping enterprise users maximize situational awareness and operational efficiency. The solution has already proven its commercial viability through a recent supply agreement with the Dubai Roads and Transport Authority (RTA), a first for a Korean company in this domain.

Meanwhile, on May 22, Nota AI filed for a preliminary IPO listing, making it the first AI optimization company from Korea to do so via the country's technology-special track. The IPO plan is attracting significant market attention, backed by Nota AI's robust global expansion and strong product competitiveness. Earlier in April, Nota AI was also recognized as one of the "Top 100 Global Innovative AI Startups" by the global market research firm CB Insights. Looking ahead, Nota AI plans to accelerate its presence across key global markets, including the Middle East, Southeast Asia, and Europe.
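For readers unfamiliar with the quantization workflow the Optimization Studio announcement refers to, the following is a minimal, generic sketch using standard PyTorch post-training dynamic quantization. It is purely illustrative and does not use NetsPresso or Qualcomm AI Hub APIs; the toy model and timing harness are invented for the example.

# Illustrative sketch only: generic PyTorch post-training quantization,
# not NetsPresso's actual API. It shows the kind of size/latency trade-off
# a tool like Optimization Studio helps developers evaluate per layer.
import time
import torch
import torch.nn as nn

model = nn.Sequential(            # stand-in for a real edge vision model
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: weights stored as int8, activations
# quantized on the fly. Linear layers are the usual first candidates.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    t0 = time.perf_counter(); _ = model(x);     fp32_ms = (time.perf_counter() - t0) * 1e3
    t0 = time.perf_counter(); _ = quantized(x); int8_ms = (time.perf_counter() - t0) * 1e3

print(f"fp32 forward: {fp32_ms:.3f} ms, int8 forward: {int8_ms:.3f} ms")

In practice the decision of which layers to quantize, and how aggressively, depends on per-layer accuracy sensitivity and on latency measured on the target device, which is the data the announcement says Optimization Studio surfaces.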


Business Wire
21-05-2025
- Business
- Business Wire
Plainsight Introduces Open Source OpenFilter for Scalable Computer Vision AI
SANTA CLARA, Calif.--(BUSINESS WIRE)-- (Embedded Vision Summit, booth #518) -- Plainsight, the leader in automated infrastructure for data-centric AI pipelines, today announced the launch of OpenFilter, an open source project that simplifies and accelerates the development, deployment, and scaling of production-grade computer vision applications. Available now under the Apache 2.0 license, OpenFilter introduces an innovative "filter" abstraction that combines code and AI models into modular components that developers can assemble into vision pipelines.

Plainsight will demonstrate OpenFilter at the Embedded Vision Summit at booth #518. CEO Kit Merker will present on Thursday, May 22, with a talk titled 'Beyond the Demo: Turning Computer Vision Prototypes into Scalable, Cost-Effective Solutions.'

'OpenFilter has revolutionized how we deploy vision AI for our manufacturing and logistics clients,' said Priyanshu Sharma, Senior Data Engineer at BrickRed Systems. 'With its modular filter architecture, we can quickly build and customize pipelines for tasks like automated quality inspection and real-time inventory tracking, without having to rewrite core infrastructure. This flexibility has enabled us to deliver robust, scalable solutions that meet our clients' evolving needs, while dramatically reducing development time and operational complexity.'

OpenFilter directly addresses the challenges enterprises face when deploying AI computer vision in production. Its frame deduplication and priority scheduling reduce GPU inference costs, and its advanced abstractions shorten deployment timelines from weeks to days. Its extensible architecture future-proofs investments, making it easy to adapt to audio, text, and multimodal AI, and positioning OpenFilter as a foundational platform for scalable, agentic computer vision systems.

Bridging the Prototype-to-Production Gap
Traditional computer vision projects often stall due to fragmented tooling and scalability challenges. OpenFilter addresses this with:
- Open Source Core: Apache 2.0-licensed runtime with pre-built filters for common tasks (tracking, cropping, segmentation).
- Filter Runtime: Manages video inputs (RTSP, webcams, image files), processing, and output routing to databases, MQTT, or APIs.
- Modular Pipelines: Assemble filters for tasks like object detection, deduplication, or alerts into reusable workflows.
- Flexible Deployment: Deploy filters across CPUs, GPUs, or edge devices, optimizing resource costs.
- Broad Model Support: Integrate PyTorch, OpenCV, or custom models (e.g., YOLO) while avoiding vendor lock-in.

OpenFilter Use Cases:
- Manufacturing: Automated quality inspection, defect and foreign object detection, and fill level monitoring on production lines.
- Retailers and Food Service: Drive-through analytics, cup and condiment counting, and real-time inventory tracking.
- Logistics and Supply Chain: Vehicle tracking, automated inventory management, and workflow automation.
- Agriculture: Precision farming and livestock monitoring through drone and camera footage analysis.
- Security: People counting, surveillance automation, and safety protocol enforcement.
- IoT and Edge: Event detection and alerting.

'Filters are the building blocks for operationalizing vision AI,' said Andrew Smith, CTO of Plainsight.
'Instead of wrestling with brittle pipelines and bespoke infrastructure, developers can snap together reusable components that scale from prototypes to production. It's how we make computer vision feel more like software engineering – and less like science experiments.'

'OpenFilter is a leap forward for open source, giving developers and data scientists a powerful, collaborative platform to build and scale computer vision AI,' said Chris Aniszczyk, CTO, CNCF. 'Its modular design and permissive Apache 2.0 license make it easy to adapt solutions for everything from agriculture and manufacturing to retail and logistics, helping organizations of all types and sizes unlock the value of vision-based AI.'

'OpenFilter is the abstraction the AI industry has been waiting for. We're making it possible for anyone – not just experts – to turn camera data into real business value, faster and at lower cost,' said Plainsight CEO Kit Merker. 'By treating vision workloads as modular filters, we give developers the power to build, scale, and update applications with the same ease and flexibility as modern cloud software. This isn't just about productivity, it's about democratizing computer vision, unlocking new use cases, and making AI accessible and sustainable for every organization. We believe this is the foundation for the next wave of AI-powered transformation.'

Availability
OpenFilter is available today under the Apache 2.0 license. Enterprises can join the Early Access Program for the commercial version of OpenFilter. Learn more at

About Plainsight
Plainsight empowers businesses to unlock actionable insights from visual data. Its open source and commercial solutions serve industries like manufacturing, agriculture, and security, prioritizing privacy, scalability, and responsible AI. Headquartered in Kirkland, Washington, Plainsight is backed by distributed systems pioneers from Amazon, Google, and Microsoft.
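To make the "filter" abstraction described above concrete, here is a minimal sketch of the general idea in Python: composable filter objects chained into a pipeline. This is not OpenFilter's actual API; the Filter class, the run_pipeline helper, and the stand-in crop/detect steps are hypothetical names invented for illustration.

# Illustrative sketch only, not the OpenFilter API: each "filter" is a reusable
# step, and a pipeline is just an ordered composition of filters over frames.
from dataclasses import dataclass
from typing import Callable, Dict, List
import numpy as np

Frame = Dict[str, object]   # carries the image plus any metadata filters add

@dataclass
class Filter:
    name: str
    fn: Callable[[Frame], Frame]

    def __call__(self, frame: Frame) -> Frame:
        return self.fn(frame)

def run_pipeline(filters: List[Filter], frames):
    """Push each frame through every filter in order and yield the result."""
    for frame in frames:
        for f in filters:
            frame = f(frame)
        yield frame

# Example filters: a crop step and a fake detection step standing in for a
# real model (e.g., a YOLO or OpenCV-based detector).
crop = Filter("crop", lambda fr: {**fr, "image": fr["image"][50:200, 50:200]})
detect = Filter("detect", lambda fr: {**fr, "detections": [{"label": "box", "score": 0.9}]})

synthetic = ({"image": np.zeros((480, 640, 3), dtype=np.uint8), "ts": i} for i in range(3))
for out in run_pipeline([crop, detect], synthetic):
    print(out["ts"], out["image"].shape, out["detections"])

A real deployment would replace the synthetic frames with an RTSP or webcam source and the fake detector with a PyTorch or OpenCV model, and route results to a database, MQTT, or an API, as the release describes.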
Yahoo
20-05-2025
- Business
- Yahoo
Leopard Imaging to Showcase Multiple Holoscan Perception Solutions Built on NVIDIA IGX Orin at Embedded Vision Summit
FREMONT, Calif., May 20, 2025 /PRNewswire/ -- Leopard Imaging Inc. (Leopard Imaging), a leading provider of embedded AI vision solutions, is excited to demonstrate its NVIDIA Holoscan Sensor Bridge driving multiple synchronized high-performance cameras on the NVIDIA IGX Orin platform at the 2025 Embedded Vision Summit. Leopard Imaging launched its Holoscan Sensor Bridge imaging solutions during NVIDIA GTC 2025.

Designed to meet the demanding real-time imaging needs of robotics, automation, and industrial inspection, Leopard Imaging's Holoscan-based solutions enable multi-camera, ultra-low-latency sensor-over-Ethernet streaming into NVIDIA IGX, NVIDIA's edge AI platform. Leopard Imaging's custom-engineered Holoscan Sensor Bridge module is a plug-and-play interface that allows up to five 10GigE cameras to connect simultaneously to the Holoscan Sensor Bridge Developer Kit. It delivers deterministic, synchronized data streaming with hardware-level timestamping, ensuring tight integration across vision pipelines. The bridge maximizes the high-speed sensor performance of systems built on IGX Orin by enabling parallel vision workloads for AI inference, visualization, and data logging, all in real time.

The Leopard Imaging cameras integrated with the Holoscan Sensor Bridge offer a range of resolutions, frame rates, and sensors for varied edge AI tasks:
- LI-ISX031-ST80-10GigE-118H: An 80mm-baseline stereo camera with 10GigE, built on Sony's ISX031 sensor, providing high dynamic range and low-light performance for 3D robotics vision and high-performance stereoscopic color imaging.
- LI-VB1940-ST100-10GigE-120H: A 5MP 3D depth stereo camera with ultra-low latency, a 100mm stereo baseline, and BSI global shutter performance.
- LI-ISX031-10GigE-118H: A compact single-camera variant based on Sony's ISX031 CMOS sensor, offering flexibility for tight mechanical integrations.
- LI-AR0830-10GigE-195H: Built around the 8.3MP AR0830 sensor, this camera supports HDR imaging and is optimized for live streaming.
- LI-HAWK-HOLOSCAN: The NVIDIA reference design camera Hawk (LI-AR0234CS-STEREO-GMSL2) integrated with Holoscan connectivity for applications in robotics, automation, and more.

At the heart of the demo is the NVIDIA IGX Orin platform, an industrial-grade edge AI platform that combines enterprise-level hardware, software, and support. Together, Leopard Imaging's hardware and the NVIDIA Holoscan SDK provide a scalable foundation for developing the next generation of vision-enabled robotics, diagnostic systems, and industrial automation tools.

Visit Leopard Imaging at Booth #700 at the Embedded Vision Summit, May 21–22, to see these innovations live. Please email marketing@ for meeting inquiries.

About Leopard Imaging Inc.
Founded in 2008, Leopard Imaging is a global leader in high-definition embedded cameras and AI-based imaging solutions. Leopard Imaging serves various industries, including automotive, aerospace, drones, IoT, and robotics. Offering both original equipment manufacturer (OEM) and original design manufacturer (ODM) services, as well as high-quality manufacturing capabilities in both the U.S. and offshore, Leopard Imaging provides customized camera solutions for some of the most prestigious organizations worldwide. As an NVIDIA Elite Partner, Leopard Imaging holds quality management certifications such as IATF 16949 for the automotive industry and AS9100D for the aerospace industry, ensuring the highest standards in its products and services.
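As a rough illustration of the hardware-timestamp synchronization the Holoscan Sensor Bridge announcement describes, the sketch below groups frames from several cameras by timestamp so that downstream inference sees time-aligned sets of images. It is not the Holoscan SDK or Sensor Bridge API; the group_by_timestamp function, the 500 µs window, and the sample timestamps are assumptions made up for the example.

# Illustrative sketch only, not the Holoscan Sensor Bridge API: group frames
# from multiple cameras into buckets by hardware timestamp so that each AI
# inference step consumes a time-aligned set of images.
from collections import defaultdict
from typing import Dict, List, Tuple

SYNC_WINDOW_US = 500  # assumed bucket width in microseconds

def group_by_timestamp(frames: List[Tuple[str, int]]) -> Dict[int, List[str]]:
    """frames: list of (camera_id, timestamp_us). Returns bucket -> camera ids."""
    buckets: Dict[int, List[str]] = defaultdict(list)
    for cam, ts in frames:
        buckets[ts // SYNC_WINDOW_US].append(cam)
    return buckets

# Five cameras reporting hardware timestamps (microseconds) for one capture cycle;
# the last frame arrives late and falls into the next synchronization group.
captures = [("cam0", 1_000_120), ("cam1", 1_000_180), ("cam2", 1_000_090),
            ("cam3", 1_000_210), ("cam4", 1_000_900)]

for bucket, cams in sorted(group_by_timestamp(captures).items()):
    print(f"sync group {bucket}: {cams}")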
Press Contact: Cathy Zhao, marketing@
SOURCE Leopard Imaging Inc.


Business Wire
19-05-2025
- Business
- Business Wire
BrainChip CTO to Present on Architectural Innovation for Low-Power AI at the Embedded Vision Summit
LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world's first commercial producer of ultra-low power, fully digital, event-based neuromorphic AI, today announced that Dr. M. Anthony Lewis, the company's Chief Technology Officer, will present at the Embedded Vision Summit, taking place May 20–22, 2025 in Santa Clara, CA. The Summit is the premier conference for innovators focused on computer vision and edge AI technologies.

Dr. Lewis' presentation, titled 'State-Space Models vs Transformers at the Extreme Edge: Architectural Choices for Low Power AI,' will examine the trade-offs and opportunities in designing AI models for environments with severe energy and compute constraints. In his session, Dr. Lewis will explore how State-Space Models (SSMs), including BrainChip's proprietary Temporal Event-based Neural Networks (TENNs), offer significant advantages over traditional Transformer architectures in edge scenarios. Unlike Transformers, which rely on energy-intensive read-write memory architectures, SSMs support read-only operations that reduce total system power and enable the use of novel memory types. With fewer MAC (multiply-accumulate) units required, SSMs further minimize energy consumption and chip area. The talk will also highlight methods for migrating from Transformer-based models like LLaMA to SSMs using distillation, preserving accuracy while dramatically improving efficiency.

Attendees of the Summit can visit BrainChip at booth #716 to see live demonstrations of the company's latest advancements in Edge AI technology, including innovations in on-chip language processing, ultra-low power inference, and Akida-powered solutions.

'State-space models are redefining the limits of real-time, low-power intelligence at the edge,' said Dr. Lewis. 'This presentation will show how BrainChip is delivering scalable, efficient, and highly interactive AI solutions for today's most demanding embedded environments.'

This participation reinforces BrainChip's leadership in the rapidly growing field of real-time streaming Edge AI, bringing highly efficient compute to devices in aerospace, automotive, robotics, consumer electronics, and wearables.

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company's first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition and processing data with unmatched efficiency, precision, and energy economy. BrainChip's Temporal Event-based Neural Networks (TENNs) build on State-Space Models (SSMs) with time-sensitive, event-driven frameworks that are ideal for real-time streaming applications. These innovations make low-power Edge AI deployable across industries such as aerospace, autonomous vehicles, robotics, industrial IoT, consumer devices, and wearables. BrainChip is advancing the future of intelligent computing, bringing AI closer to the sensor and closer to real-time. Explore more at
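For readers who want the state-space model idea in concrete terms, a minimal sketch follows: one step of a linear SSM, x_t = A x_{t-1} + B u_t, y_t = C x_t, implemented with NumPy. This is a generic textbook recurrence, not BrainChip's TENNs or Akida code; the dimensions and random matrices are arbitrary. It illustrates the point made in the talk abstract: each step reads fixed parameter matrices and updates a small fixed-size state, rather than re-reading a cache that grows with sequence length as Transformer attention does.

# Illustrative sketch only (not TENNs or Akida): a minimal linear state-space
# model step. Parameters A, B, C are read-only; the state x has constant size,
# so per-step compute and memory traffic do not grow with sequence length.
import numpy as np

state_dim, in_dim, out_dim = 16, 4, 2
rng = np.random.default_rng(0)
A = rng.normal(scale=0.1, size=(state_dim, state_dim))  # state transition
B = rng.normal(size=(state_dim, in_dim))                # input projection
C = rng.normal(size=(out_dim, state_dim))               # readout

def ssm_step(x, u):
    """One recurrence step: x_t = A x_{t-1} + B u_t, y_t = C x_t."""
    x = A @ x + B @ u
    return x, C @ x

x = np.zeros(state_dim)
for t in range(1000):                 # stream 1,000 inputs
    u = rng.normal(size=in_dim)
    x, y = ssm_step(x, u)             # state stays 16 floats regardless of t

print("final output:", y)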