
Pilot who died in small plane crash was an MIT scientist expecting his first child
Geoffrey Andrews, 30, was taking off from Beverly Regional Airport when the four-seat, fixed-wing plane crashed Thursday morning, WBZ News reported.
Witnesses at the airport reported seeing a puff of smoke before the plane plummeted to the ground, the NTSB told the outlet.
MIT scientist and dad-to-be Geoffrey Andrews died in a single-engine plane crash in Massachusetts. (Instagram/@geoffreyandrews)
The smoke may have indicated engine failure, and the plane made 'a gradual left-hand descending turn,' the NTSB said.
Andrews was a staff scientist for MIT Lincoln Laboratory and was expecting his first child with his wife, Gentry Andrews, according to a GoFundMe organized for the budding family.
'Beyond his love for flight, Geoffrey was a charismatic, beautiful soul who cared deeply for his family and friends and always had a kind word for others. He was so excited about the upcoming birth of their baby,' the statement said.
The couple's baby is due in October.
The mother-to-be had recently suffered another devastating loss — the death of her mother, Marcia — according to the statement.
Witnesses saw a plume of smoke before the plane crashed shortly after takeoff at Beverly Regional Airport. (WCVB)
Andrews had several years of piloting experience, was a visiting lecturer at Lehigh University, and held a doctorate from Purdue University, the family said.
He 'volunteered as a glider pilot' and was working to become a Certified Flight Instructor to teach others how to fly, according to the statement.
'He was almost always seen sporting a bow tie and a smile. He loved cooking, baking, nature, was a talented amateur photographer, and was oddly passionate about scuba diving. He loved music, often played piano, and sang in choirs for much of his life, including with Gentry,' the statement said.
A fundraiser for Gentry and their baby-to-be has raised over $50,000 of its $60,000 goal.
Andrews was an experienced pilot who was working to become a Certified Flight Instructor. (Facebook/Geoffrey Andrews)
Andrews was enjoying his 'last camping trip that he would have for some time with our baby girl due soon,' Gentry Andrews wrote in an Instagram post.
Gentry Andrews said her husband's response to engine trouble was 'textbook,' but the plane still went down, and he 'died on impact,' according to the post.
One other person, who was not identified, was critically injured in the crash, WBZ News reported.
The cause of the crash is still under investigation by the Federal Aviation Administration, the National Transportation Safety Board, and the Massachusetts State Police Detective Unit assigned to the Office of the District Attorney, according to WCVB.
'We thank the investigators who we trust will conduct a thorough investigation into what caused this catastrophic loss of life,' Andrews' family said in the statement.

Related Articles


Forbes
13 hours ago
Building An AI Future Without Sacrificing The Planet
Somdip is the Chief Scientist of Nosh Technologies, an MIT Innovator Under 35 and a Professor of Practice (AI/ML) at Woxsen University.

AI has revolutionized the way industries operate—streamlining processes, unlocking insights and catalyzing new markets. However, behind these breakthroughs lies a growing environmental shadow. On one hand, AI can be leveraged to promote ESG goals within corporations and industries. On the other hand, its escalating energy demands pose significant challenges to global net-zero initiatives aimed at reducing carbon emissions.

A striking example: everyday interactions with ChatGPT—like users saying "thank you"—can cumulatively consume up to 10 times more energy than a standard Google search, translating into millions in electricity spending and a consequential carbon footprint. This encapsulates a broader tension: How do we reconcile AI's promise with the urgent goals of net zero?

Counting The Environmental Cost

AI's climb has been meteoric. A 2024 Center for Data Innovation report found that models like OpenAI's GPT-3, with its 175 billion parameters, required roughly 1,287 megawatt-hours (MWh) of energy for a single training run—generating 552 metric tons of carbon (equivalent to hundreds of cars on the road). What's more, according to Goldman Sachs, data centers (AI's operational backbone) currently consume 1% to 2% of global electricity, a proportion expected to rise to 3% to 4% by 2030 as AI computing demand accelerates.

Where AI Meets Net-Zero Tensions

For businesses, governments and investors, this convergence presents a dilemma. The economic upsides of AI are undeniable—advanced analytics, automation and smarter decision making. Yet persisting on a path of unchecked energy use undermines corporate ESG commitments, international climate pledges and responsible science goals. Unless energy efficiency and low-carbon energy sourcing scale up, AI's rise could offset wider sustainability gains.
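A quick back-of-the-envelope check ties the two figures cited above together: the reported training energy and emissions for GPT-3 imply a grid carbon intensity of roughly 429 kg of CO2 per MWh, consistent with a largely fossil-fueled electricity mix. The calculation uses only the numbers from the report:

```python
# Reported figures from the 2024 Center for Data Innovation report cited above
ENERGY_MWH = 1287        # energy for one GPT-3 training run, in MWh
EMISSIONS_TONS = 552     # resulting emissions, in metric tons of CO2

# Implied grid carbon intensity (kg CO2 per MWh of electricity consumed)
intensity_kg_per_mwh = EMISSIONS_TONS * 1000 / ENERGY_MWH
print(f"Implied grid intensity: {intensity_kg_per_mwh:.0f} kg CO2/MWh")  # ~429
```

The same arithmetic, run in reverse against a cleaner grid, is what makes renewable-powered data centers such a direct lever on AI's footprint.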
Building A Framework For Responsible AI

To position AI as a catalyst—not a constraint—for achieving net zero, businesses and policymakers must embrace a holistic framework that incorporates efficiency, sustainability, governance and collaboration. The following five pillars outline how to responsibly advance AI while curbing its environmental footprint.

One of the most effective ways to reduce AI's carbon intensity is to design models that require less computational power to deliver high performance. A notable example is Google's Generalist Language Model (GLaM), which leverages a sparsely activated architecture. Despite having nearly 1.2 trillion parameters, GLaM consumed around 456 MWh during training, according to the Center for Data Innovation report—roughly one-third the energy required for GPT-3—while outperforming it on several language benchmarks. Beyond model design, developers can deploy methods like pruning (removing unnecessary neurons), quantization (reducing numerical precision), knowledge distillation (training smaller models to mimic larger ones), embedded machine learning (machine learning on embedded systems) and federated learning (distributing model training across devices). These approaches collectively help lower both training and inference energy costs.

AI's massive energy draw is ultimately tied to where and how it runs. Transitioning to renewable-powered data centers is a foundational step in decarbonizing AI operations. Companies should also prioritize deploying advanced cooling systems and optimizing server usage to prevent energy waste. Microsoft has committed to powering all of its data centers with 100% renewable energy by 2025, setting a strong benchmark for the sector. Infrastructure improvements not only shrink emissions but also future-proof operations against evolving regulatory and ESG standards.

Governments have a critical role to play in steering AI innovation along a sustainable path.
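Of the compression techniques listed above, quantization is the easiest to see in miniature: mapping float32 weights to 8-bit integers cuts memory (and memory-bound inference energy) by roughly 4x at the cost of a small rounding error. This is a generic sketch of symmetric linear quantization, not any particular framework's implementation:

```python
import numpy as np

# A toy weight tensor standing in for a trained model's parameters
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=10_000).astype(np.float32)

# Symmetric linear quantization: scale so the largest weight maps to 127
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize for use at inference; round-trip error is bounded by scale / 2
deq = q.astype(np.float32) * scale

print(f"Storage: {weights.nbytes} -> {q.nbytes} bytes (4x smaller)")
print(f"Max round-trip error: {np.abs(weights - deq).max():.4f}")
```

Production pipelines typically quantize per-channel and calibrate on real activations, but the energy argument is the same: fewer bytes moved per inference means fewer joules spent.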
Public policy should encourage low-carbon AI adoption through mechanisms such as tax credits for green computing, grants for energy-efficient model research and the establishment of minimum efficiency thresholds for high-performance computing. The European Union's Green Deal provides a leading example, embedding energy efficiency into digital innovation policy. By building similar frameworks globally, policymakers can guide AI growth without compromising climate targets.

Without visibility into AI's environmental footprint, it's impossible to improve it. Organizations should start reporting the energy consumption and emissions associated with their AI models and infrastructure, ideally through standardized disclosures—something akin to a sustainability label for AI. Tools like the AI Emissions Scenario Generator can assist in estimating and tracking a model's carbon impact, fostering transparency across the ecosystem. Public accountability not only builds trust but incentivizes continuous improvement in energy efficiency.

Sustainability in AI isn't a challenge any one group can solve alone. Collaboration among scientists, technologists, corporate leaders and policymakers is essential to build shared standards for sustainable model development. Organizations like the Partnership on AI provide platforms for multi-stakeholder dialogue on ethics, efficiency and environmental impact. Through such partnerships, the global AI community can co-create certification standards, share best practices and accelerate the adoption of greener methodologies across sectors.

Turning AI Into A Climate Asset

When responsibly deployed, AI becomes a force multiplier for sustainability efforts:

• Grid Optimization: AI-driven demand forecasting enables utilities to match renewable generation with usage patterns—minimizing waste and balancing intermittent supply.

• Precision Agriculture: Smart farming solutions reduce waste and optimize water, fertilizer and pesticide use.
For example, research by Cornell University found that predictive modeling could cut irrigation volumes by 40% while preserving yields.

• Sustainable Logistics: AI route-planning tools—already used by logistics giants—reduce fuel usage, slash emissions and drive down costs through smarter delivery scheduling and traffic forecasting.

A practical example from my own work in building AI responsibly involves deploying embedded machine learning—that is, running trained models directly on edge devices like smartphones without relying on constant cloud communication. By applying techniques such as model pruning, quantization and federated learning, we were able to dramatically reduce computational load and energy usage during both training and inference. These models helped users manage food inventory efficiently, leading to measurable reductions in waste and associated carbon emissions. This approach not only embodies the five pillars of responsible AI—efficiency, sustainability, governance, transparency and collaboration—but also demonstrates how lightweight, localized AI solutions can drive meaningful progress toward net-zero goals at scale.

The Bottom Line

AI's rise and climate action need not be at odds. With thoughtful approaches (efficient models, clean infrastructure, transparent practices and enabling policy), AI can accelerate net zero rather than slow it—which ties back to reskilling workforces for the AI era and empowering them to thrive in AI-driven futures. Our next chapter must ensure that AI grows with, not against, sustainability.


Forbes
a day ago
MIT Teaches Soft Robots Body Awareness Through AI And Vision
Researchers from the Massachusetts Institute of Technology's (MIT) CSAIL lab have developed a new system that teaches robots to understand their own bodies using only vision. Instead of relying on sensors, the system allows robots to learn how their bodies move and respond to commands just by watching themselves. Using consumer-grade cameras, the robot watched itself move and then built an internal model of its geometry and controllability. According to the researchers, this could dramatically expand what's possible in soft and bio-inspired robotics, enabling affordable, sensor-free machines that adapt to their environments in real time.

The team at MIT said this system and research is a major step toward more adaptable, accessible robots that can operate in the wild with no GPS, simulations or sensors. The research was published in June in Nature.

Daniela Rus, MIT CSAIL Director, said that with Neural Jacobian Fields, CSAIL's soft robotic hands were able to learn to grasp objects entirely through visual observation, with no sensors, no prior model and no manual programming. 'By watching its own movements through a camera and performing random actions, the robot built an internal model of how its body responds to motor commands. Neural Jacobian Fields mapped these visual inputs to a dense visuomotor Jacobian field, enabling the robot to control its motion in real time based solely on what it sees,' added Rus.

Rus adds that this reframing of control has major implications.
'Traditional methods require detailed models or embedded sensors, but Neural Jacobian Fields lifts those constraints, enabling control of unconventional, deformable or sensor-less robots in real time, using only a single monocular camera.'

Vincent Sitzmann, Assistant Professor at MIT's Department of Electrical Engineering and Computer Science and CSAIL Principal Investigator, said the researchers relied on techniques from computer vision and machine learning. The neural network observes a single image and learns to reconstruct a 3D model of the robot, relying on a technique called differentiable rendering, which allows machine learning algorithms to learn to reconstruct 3D scenes from only 2D images.

'We use motion tracking algorithms - point tracking and optical flow - to track the motion of the robot during training,' said Sitzmann. 'By relating the motion of the robot to the commands that we instructed it with, we reconstruct our proposed Neural Jacobian Field, which endows the 3D model of the robot with an understanding of how each 3D point would move under a particular robot action.'

Sitzmann says this represents a shift towards robots possessing a form of bodily self-awareness and away from pre-programmed 3D models and precision-engineered hardware. 'This moves us towards more generalist sensors, such as vision, combined with artificial intelligence that allows the robot to learn a model of itself instead of a human expert,' said Sitzmann. 'This also signals a new class of adaptable, machine-learning driven robots that can perceive and understand themselves.'

The researchers said that three different types of robots acquired awareness of their bodies and the actions they could take as a result of that understanding. A 3D-printed DIY toy robot arm with loose joints and no sensors learned to draw letters in the air with centimeter-level precision.
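The core idea Sitzmann describes - relating random commands to the motion they produce in order to recover a Jacobian - can be illustrated with a toy numerical sketch. Everything here is a stand-in: the real system learns a dense per-point Jacobian field from video with a neural network, not a single linear least-squares fit over a hidden matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden "body": an unknown Jacobian mapping 3 actuator commands
# to the 2D motion of one tracked point on the robot
J_true = rng.normal(size=(2, 3))

# "Watching itself wiggle": random commands and the noisy point
# displacements they cause (motion tracking stands in for vision)
U = rng.normal(size=(200, 3))                          # random commands
dX = U @ J_true.T + 0.01 * rng.normal(size=(200, 2))   # observed motion

# Recover the Jacobian from command/motion pairs by least squares
J_est = np.linalg.lstsq(U, dX, rcond=None)[0].T

print("Max Jacobian estimation error:", np.abs(J_est - J_true).max())
```

Once such a Jacobian is known, inverting it tells the robot which command moves a given point in a desired direction, which is what enables the closed-loop control described in the article.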
It discovered which visual region corresponds to each actuation channel, mapping 'which joint moves when I command actuator X' just from seeing motion. A soft pneumatic hand learned which air channel controls each finger, not by being told, but just by watching itself wiggle. The models inferred depth and geometry from color video alone, reconstructing 3D shape before and after actions. A soft, wrist-like robot platform, physically disturbed with added weight, learned to balance and follow complex trajectories. The researchers quantified motion sensitivity, for example, measuring how a command that slightly changes an actuator produces millimeter-level translations in the gripper.

Changing soft robotics

The CSAIL researchers said that soft robots are hard to model because they deform in complex ways. One researcher said in an email interview that the method used in the research doesn't require any manual modeling: the robot watches itself move and figures out how its body behaves, similar to a human learning to move their arm by watching themselves in a mirror.

Sitzmann says conventional robots are rigid - discrete joints connected by rigid links - built to have low manufacturing tolerance. 'Compare that to your own body, which is soft: first, of course, your skin and muscles are not perfectly solid but give in when you grasp something.'

'However, your joints also aren't perfectly rigid like those of a robot; they can similarly bend and give in, and while you can sense the approximate position of your joints, your highest-precision sensors are vision and touch, which is how you solve most manipulation tasks,' said Sitzmann. 'Soft robots are inspired by these properties of living creatures to be similarly compliant, and must therefore necessarily also rely on different sensors than their rigid cousins.'

Sitzmann says this kind of understanding could revolutionize industries like soft robotics, low-cost manufacturing, home automation and agricultural robotics.
'Any sector that can profit from automation but does not require sub-millimeter accuracy can benefit from vision-based calibration and control, dramatically lowering cost and complexity,' said Sitzmann. 'In the future, with the inclusion of tactile sensing (touch), this paradigm may even extend to applications that require high accuracy.'

A new approach to soft robotics

Researchers say their approach removes the need for experts to build an accurate model of the robot, a process that can take months. It also eliminates reliance on expensive sensor systems or manual calibration. The simplified process entails recording the robot moving randomly; the model learns everything it needs to know from that video.

'Instead of painstakingly measuring every joint parameter or embedding sensors in every motor, our system heavily relies on a camera to control the robot,' said Sitzmann. 'In the future, for applications where sub-millimeter accuracy is not critical, we will see that conventional robots with all their embedded sensors will increasingly be replaced by mass-producible, affordable robots that rely on sensors more similar to our own: vision and touch.'
Yahoo
a day ago
MIT researchers invent game-changing product that could revolutionize agriculture: 'You could give back a billion dollars to US growers'
A team of MIT researchers has developed a system that helps agricultural sprays stick to plant leaves, cutting down on polluting runoff and lowering costs for farmers.

Agricultural spraying involves mixing water with chemicals and applying droplets to plant leaves, which are inherently water-repellent, according to a report by MIT News. After testing a variety of methods to optimize the delivery of pesticides and other sprays, the team ended up coating water droplets with a small amount of oil to help them adhere to the leaves.

"Basically, this oil film acts as a way to trap that droplet on the surface, because oil is very attracted to the surface and sort of holds the water in place," said Simon Rufer, an MIT graduate student and co-author of a study on the topic.

During initial tests, the researchers used soybean oil, figuring that this material would be familiar to farmers, many of whom grow soybeans, the report explained. However, soybean oil wasn't part of the usual supply chains, so the team found that several of the chemicals farmers were already using could be employed in the same way.

"That way, we're not introducing a new chemical or changed chemistries into their field, but they're using things they've known for a long time," said Kripa Varanasi, an MIT professor involved in the project.

Pesticide use has been steadily increasing across the globe, rising 20% over the last decade, and up to 153% in low-income countries. Spraying across the nearly 1.2 billion acres of agricultural land in the U.S. adds up in both quantity and cost: approximately half a million tons of pesticides, 12 million tons of nitrogen, and four million tons of phosphorus fertilizer are applied to crops across the country.

By using the new system, which is being commercialized by AgZen, a spinoff company created by the researchers, farmers can reduce the amount of spray they need to use on crops.
"You could give back a billion dollars to U.S. growers if you just saved 6 percent of their pesticide budget," said Vishnu Jayaprakash, lead author of the research paper and CEO of AgZen. "In the lab we got 300 percent more product on the plant. So that means we could get orders-of-magnitude reductions in the amount of pesticides that farmers are spraying."

This adhesive effect also helps reduce agricultural runoff, a leading cause of water quality degradation in rivers and streams, along with soil erosion, nutrient loss and the spread of pesticides into waterways.

The system, which AgZen calls RealCoverage, has already been deployed across 920,000 acres of farmland and various crop types, saving farmers up to 50% on their pesticide expenditures, the report explained. The RealCoverage system uses a nozzle that fits most spraying equipment and provides real-time coverage data using onboard AI, so it can be fine-tuned even while in use.

"The knowledge we are gathering from every leaf, combined with our expertise in interfacial science and fluid mechanics, is giving us unparalleled insights into how chemicals are used and developed — and it's clear that we can deliver value across the entire agrochemical supply chain," Varanasi said, per MIT News. "Our mission is to use these technologies to deliver improved outcomes and reduced costs for the ag industry."
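Jayaprakash's savings figure can be sanity-checked with simple arithmetic: if 6 percent of the U.S. pesticide budget equals $1 billion, the implied total spend is about $16.7 billion. This is an inference from the quote, not a figure stated in the article:

```python
# Figures from Jayaprakash's quote above
savings_usd = 1_000_000_000   # "a billion dollars" returned to growers
savings_fraction = 0.06       # "6 percent of their pesticide budget"

# Implied total U.S. pesticide budget
implied_budget = savings_usd / savings_fraction
print(f"Implied U.S. pesticide budget: ${implied_budget / 1e9:.1f} billion")  # ~$16.7B
```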