Air India crash: Pilots simulate engine failures, probe dual shutdown
The pilots tested scenarios involving electrical faults that could potentially cause both engines to fail – a condition known as a dual-engine flame-out. Such a failure would leave the aircraft unable to climb after takeoff. None of the simulations, however, reproduced the conditions that led to the crash on June 12.
To ensure precision, the pilots used the exact trim sheet from the AI-171 flight. A trim sheet is used to calculate and document an aircraft's weight and balance, ensuring the centre of gravity is suitable for safe takeoff, flight, and landing.
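At its core, a trim sheet is a moment-balance calculation: each load item (empty aircraft, payload, fuel) contributes weight multiplied by its arm from a reference datum, and the centre of gravity is the total moment divided by the total weight. A minimal sketch of that arithmetic follows – all weights, arms, and limits are hypothetical illustration values, not AI-171's actual load data:

```python
# Illustrative weight-and-balance arithmetic behind a trim sheet.
# All weights (kg) and arms (metres aft of datum) are hypothetical.
items = {
    "basic_empty_weight": (120_000, 22.5),
    "payload":            (20_000,  24.0),
    "fuel":               (50_000,  23.0),
}

total_weight = sum(w for w, _ in items.values())            # kg
total_moment = sum(w * arm for w, arm in items.values())    # kg·m
cg = total_moment / total_weight                            # centre of gravity

# A safe takeoff requires the CG to fall inside the certified envelope.
cg_limits = (22.0, 24.5)  # hypothetical forward/aft limits
assert cg_limits[0] <= cg <= cg_limits[1], "CG out of limits"
print(f"Takeoff weight {total_weight} kg, CG at {cg:.2f} m aft of datum")
```

Using the flight's actual trim sheet in the simulator means the weight and CG fed into this calculation matched the accident aircraft exactly.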
Unsafe takeoff conditions simulated
As part of the exercise, the trainer-pilots also tested a single-engine failure scenario. During this simulation, the landing gear (undercarriage) was deliberately left down, and the flaps – usually partially extended during takeoff to improve lift – were fully retracted, an unsafe configuration.
This setup was designed to test the jet's performance under extreme and unrealistic conditions. Normally, the landing gear is retracted shortly after takeoff to help the aircraft become more aerodynamic.
Despite the poor configuration and operating on only one engine, the Boeing 787 managed to climb safely in all simulations. This is partly due to the aircraft's powerful General Electric GEnx-1B67-K engines, which produce up to 70,000 pounds of thrust each. These engines are among the most powerful in their category for commercial jets.
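The margin that let the jet climb on one engine comes down to excess thrust: for a small climb angle, the achievable gradient is roughly (thrust − drag) divided by weight. A rough back-of-envelope sketch with representative textbook numbers (the mass, thrust, and lift-to-drag figures are generic assumptions, not AI-171 data):

```python
# Rough climb-gradient estimate: sin(gamma) ≈ (T - D) / W for small angles.
# All figures are representative assumptions, not flight-specific data.
g = 9.81                      # gravitational acceleration, m/s^2
mass = 220_000                # kg, near a 787-8 maximum takeoff weight
weight = mass * g             # N

thrust_per_engine = 311_000   # N, roughly 70,000 lbf (GEnx-1B class)
lift_to_drag = 12.0           # assumed takeoff-configuration L/D ratio
drag = weight / lift_to_drag  # N, since lift ≈ weight in the climb

# One engine operative: positive excess thrust, so the jet can climb.
gradient = (thrust_per_engine - drag) / weight
print(f"Estimated one-engine climb gradient: {gradient:.1%}")

# Both engines out: no thrust at all, only a descending glide remains.
glide_gradient = (0 - drag) / weight
print(f"Gradient with both engines out: {glide_gradient:.1%}")
```

Even with these conservative assumptions the single-engine gradient stays positive, which is consistent with the simulator results; with zero thrust the gradient is necessarily negative.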
Investigators examine fuel switches
Accident investigators have already recovered data from the aircraft's black boxes – the flight data recorder and cockpit voice recorder. They are now looking into whether the position of the fuel switches may have contributed to the engine failure.
This involves checking the recorded data alongside any recovered parts of the fuel switches. It is essential to determine whether a fuel switch may have been accidentally turned off during the crucial moments of takeoff or shortly afterwards.
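In data terms, that check is a scan of a recorded parameter stream for state transitions inside a critical time window. A simplified sketch of the idea – the timestamps, states, and the helper function are invented for illustration and are not actual flight-recorder parameters:

```python
# Simplified scan of a recorded switch-state stream for transitions
# inside a time window. Timestamps (seconds) and states are invented
# illustrative data, not actual flight data recorder content.
records = [
    (0.0, "RUN"), (5.0, "RUN"), (12.0, "CUTOFF"), (20.0, "CUTOFF"),
]

def transitions_in_window(samples, start, end):
    """Return (time, before, after) for each state change within [start, end]."""
    events = []
    for (t0, s0), (t1, s1) in zip(samples, samples[1:]):
        if s0 != s1 and start <= t1 <= end:
            events.append((t1, s0, s1))
    return events

# Did the switch leave RUN during the first 30 seconds of the recording?
print(transitions_in_window(records, 0.0, 30.0))  # → [(12.0, 'RUN', 'CUTOFF')]
```

A transition found in the window would then be cross-checked against the recovered physical switch parts and the cockpit voice recording.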
Dual-engine failure not recoverable
While the Boeing 787 can climb on one engine, the situation changes drastically if both engines fail. Investigators believe this might have happened in the case of AI-171.
Pilots on Air India's 787 fleet are not trained to manage a dual-engine failure at an altitude below 400 feet. This type of scenario falls under what is known as "negative training", meaning the situation is considered unrecoverable and therefore not practised.
In short, a dual-engine failure at the altitude at which AI-171 was flying would likely have resulted in a crash.
Investigators await key findings
Many pilots are now looking to the Aircraft Accident Investigation Bureau (AAIB), which is expected to release a preliminary report next week. The findings are likely to shed light on whether a rare dual-engine failure was the cause of the crash.
Such a failure has been regarded as a statistical possibility rather than a realistic one, especially in an airline adhering to international standards for safety and maintenance, such as those set by the International Civil Aviation Organisation (ICAO).
Wider implications for Air India and Boeing
The outcome of this investigation carries serious implications. Air India operates a fleet of 33 Boeing 787 Dreamliners – 26 of the 787-8 variant and 7 of the larger 787-9. The Dreamliner is the airline's most commonly used wide-body aircraft for international operations.
A systemic fault, if confirmed, could affect Boeing and many other global airlines that fly the 787. It would also draw attention to General Electric, the manufacturer of the aircraft's engines.
The AI-171 crash is the first fatal incident involving a Boeing 787 Dreamliner since the aircraft entered service in October 2011.
Meanwhile, the black box data is currently being analysed at the AAIB lab in Delhi to help determine the exact sequence of events, including why both engines might have lost power at the same time.
