Manipal Institute of Technology students win award at International Rocket Engineering Competition


The Hindu · 4 days ago
A team of students from the Manipal Institute of Technology (MIT), a constituent unit of the Manipal Academy of Higher Education (MAHE), secured second place in the SDL Payload Challenge category at the International Rocket Engineering Competition (IREC) 2025, held in Midland, Texas.
The team, 'thrustMIT', took part in the competition from June 9 to June 14 and won a cash award of $750. The MIT team competed against 138 teams that passed the Flight Safety Review (FSR), a MAHE release said on Monday.
Participating in the highly competitive 30K COTS (Commercial Off-The-Shelf) launch category, the team's rocket 'Vayu Vega' completed all safety protocols during the first two days of the competition. The effort culminated on June 12, when 'Vayu Vega' was launched to the target altitude and executed a flawless landing using its parachute deployment systems.
The 'thrustMIT' recovery team demonstrated its technical competence by successfully retrieving the rocket using GPS coordinates, with the recovered vehicle subsequently evaluated and approved by the IREC recovery team.
The competition represents one of the world's most challenging platforms for aspiring aerospace engineers, attracting top-tier student teams from leading institutions globally, the release added.

Related Articles

Can ChatGPT 'rot' your brain as MIT study claims?

Scroll.in

a day ago



Since ChatGPT appeared almost three years ago, the impact of artificial intelligence technologies on learning has been widely debated. Are they handy tools for personalised education, or gateways to academic dishonesty? Most importantly, there has been concern that using AI will lead to a widespread 'dumbing down', or decline in the ability to think critically. If students use AI tools too early, the argument goes, they may not develop basic skills for critical thinking and problem-solving. Is that really the case? According to a recent study by scientists from MIT, it appears so. Using ChatGPT to help write essays, the researchers say, can lead to 'cognitive debt' and a 'likely decrease in learning skills'. So what did the study find?

Brain vs AI

Over the course of four months, the MIT team asked 54 adults to write a series of three essays using either AI (ChatGPT), a search engine, or their own brains ('brain-only' group). The team measured cognitive engagement by examining electrical activity in the brain and through linguistic analysis of the essays. The cognitive engagement of those who used AI was significantly lower than that of the other two groups. This group also had a harder time recalling quotes from their essays and felt a lower sense of ownership over them.

Interestingly, participants switched roles for a final, fourth essay (the brain-only group used AI and vice versa). The AI-to-brain group performed worse and had engagement that was only slightly better than the other group's during their first session, far below the engagement of the brain-only group in their third session. The authors claim this demonstrates how prolonged use of AI led to participants accumulating 'cognitive debt'. When they finally had the opportunity to use their brains, they were unable to replicate the engagement or perform as well as the other two groups. Cautiously, the authors note that only 18 participants (six per condition) completed the fourth, final session. Therefore, the findings are preliminary and require further testing.

Does AI really make us stupider?

These results do not necessarily mean that students who used AI accumulated 'cognitive debt'. In our view, the findings are due to the particular design of the study. The change in neural connectivity of the brain-only group over the first three sessions was likely the result of becoming more familiar with the study task, a phenomenon known as the familiarisation effect. As study participants repeat the task, they become more familiar and efficient, and their cognitive strategy adapts accordingly.

When the AI group finally got to 'use their brains', they were doing the task only once. As a result, they were unable to match the other group's experience. They achieved only slightly better engagement than the brain-only group did during its first session. To fully justify the researchers' claims, the AI-to-brain participants would also need to complete three writing sessions without AI.

Similarly, the fact that the brain-to-AI group used ChatGPT more productively and strategically is likely due to the nature of the fourth writing task, which required writing an essay on one of the previous three topics. As writing without AI required more substantial engagement, they had far better recall of what they had written in the past. Hence, they primarily used AI to search for new information and refine what they had previously written.

What are the implications?

To understand the current situation with AI, we can look back to what happened when calculators first became available. Back in the 1970s, their impact was regulated by making exams much harder. Instead of doing calculations by hand, students were expected to use calculators and spend their cognitive efforts on more complex tasks. Effectively, the bar was significantly raised, which made students work just as hard as, if not harder than, they did before calculators were available.

The challenge with AI is that, for the most part, educators have not raised the bar in a way that makes AI a necessary part of the process. Educators still require students to complete the same tasks and expect the same standard of work as they did five years ago. In such situations, AI can indeed be detrimental. Students can largely offload critical engagement with learning to AI, which results in 'metacognitive laziness'.

However, just like calculators, AI can and should help us accomplish tasks that were previously impossible, and that still require significant engagement. For example, we might ask students training to be teachers to use AI to produce a detailed lesson plan, which would then be evaluated for quality and pedagogical soundness in an oral examination.

In the MIT study, participants who used AI were producing the 'same old' essays. They adjusted their engagement to deliver the standard of work expected of them. The same would happen if students were asked to perform complex calculations with or without a calculator. The group doing calculations by hand would sweat, while those with calculators would barely blink an eye.

Learning how to use AI

Current and future generations need to be able to think critically and creatively and solve problems. However, AI is changing what these things mean. Producing essays with pen and paper is no longer a demonstration of critical thinking ability, just as doing long division is no longer a demonstration of numeracy. Knowing when, where and how to use AI is the key to long-term success and skill development. Prioritising which tasks can be offloaded to an AI to reduce cognitive debt is just as important as understanding which tasks require genuine creativity and critical thinking.

Vitomir Kovanovic is Associate Professor and Associate Director of the Centre for Change and Complexity in Learning (C3L), Education Futures, University of South Australia. Rebecca Marrone.

Meet Mark Zuckerberg's AI dream team powering Meta's next big leap

Time of India

a day ago



Brains over data. That's Meta's game plan as it races to dominate artificial intelligence. While other tech titans throw compute power and training data at the problem, Mark Zuckerberg is doing something far more personal. He's handpicking minds. And he's not subtle about it. According to The Wall Street Journal, Zuckerberg has been personally calling OpenAI researchers, offering eye-popping compensation packages, some reportedly as high as $100 million, to woo them away. Even Sam Altman, OpenAI's CEO, admitted in a recent podcast that Meta's offers were staggering. It's all part of Meta's newly revealed Superintelligence Lab, and Zuckerberg has already released the first 11 names on what insiders call 'The List.' These aren't just brilliant AI engineers. They are scientists, founders, problem solvers and, in many cases, immigrants or first-generation Americans whose work helped define the most powerful AI models in existence today. Before we dive in, one thing is clear: this isn't just a hiring spree. It's the making of a brain trust that could shape how AI reasons, speaks, listens, and even dreams. Let's meet the team.

Alexandr Wang

The wunderkind leading Meta's new lab has already made a name for himself in Silicon Valley. As the founder of Scale AI, Wang built a company that quietly powered the data-hungry ambitions of tech's biggest players. What fewer people know is that his story begins far from boardrooms, in New Mexico, where he was born to Chinese immigrant parents who worked as physicists for the U.S. military. Wang grew up surrounded by science and structure, but also by discipline. He competed in national math Olympiads as early as sixth grade, taught himself how to code, and played violin with the same intensity he brought to algorithms. After enrolling at MIT to study mathematics and computer science, he dropped out to pursue Scale. By 28, he wasn't just building tools for AI; he was redefining how AI learns. Meta reportedly invested $14 billion into Scale as part of the deal to bring Wang onboard.

Nat Friedman

In contrast to Wang's youth, Nat Friedman brings gravitas. A seasoned technologist and venture investor, Friedman is known for scaling ideas into institutions. As the former CEO of GitHub, he steered the platform through its $7.5 billion acquisition by Microsoft and was known for his understated but razor-sharp leadership style. Born in Charlottesville, Virginia, Friedman fell in love with online communities at the age of 14 and later called them his 'actual hometown.' That early sense of connection shaped his future: first at MIT, then through his work co-founding Xamarin, a developer tools company that attracted Fortune 500 clients like Coca-Cola and JetBlue. Today, Friedman is deeply embedded in the AI startup ecosystem, backing companies like Perplexity and Stripe.

Trapit Bansal

Born and raised in India, Trapit Bansal is a quiet architect behind some of OpenAI's most sophisticated reasoning models. With dual degrees in mathematics and statistics from IIT Kanpur and a PhD from the University of Massachusetts Amherst, Bansal's academic journey has always straddled theory and application. At OpenAI, he played a crucial role in the development of the o-series, particularly the o1 model, widely regarded as a turning point in AI's ability to 'think' before responding. Bansal's specialty is meta-learning.

Jiahui Yu

A rising star in the world of multimodal AI, Jiahui Yu has already left his mark on two of the most powerful labs in the world: Google and OpenAI. At OpenAI, he led the perception team, working on how machines interpret images, audio, and language as a seamless whole. At Google DeepMind, he helped develop Gemini's multimodal capabilities. Yu's educational path began at the prestigious School of the Gifted Young in China, followed by a PhD in computer science from the University of Illinois Urbana-Champaign.

Shuchao Bi

Shuchao Bi is one of the few people who can claim co-founder status on a cultural juggernaut: YouTube Shorts. During his 11 years at Google, he helped create and refine its short-form video platform and later led its algorithm team. But Bi's heart has always belonged to research. At OpenAI, he focused on multimodal AI and helped launch GPT-4o's voice mode, essentially giving chatbots the power to talk back. Educated at Zhejiang University and later at UC Berkeley, Bi blends statistical elegance with creative application. His role at Meta? To make machines not just responsive, but expressive.

Huiwen Chang

Known for her expertise in image generation and style transfer, Huiwen Chang was instrumental in OpenAI's visual interface work for GPT-4o. But her roots are in rigorous academia. She graduated from the Yao Class at Tsinghua University, a training ground for China's best minds in computer science, and then earned her PhD from Princeton. Chang's work is where art meets architecture. She understands how to train machines not just to see an image, but to manipulate it, interpret it, and even mimic human aesthetic judgment. Before OpenAI, she cut her teeth at Adobe and Google.

Ji Lin

Another Tsinghua-to-MIT story, Ji Lin blends engineering finesse with frontier thinking. He worked on several of OpenAI's most powerful models before joining Meta, with a focus on both reasoning and multimodal integration. What sets Lin apart is his mix of research and real-world application. He interned at NVIDIA, Adobe, and Google before landing at OpenAI.

Hongyu Ren

If you're improving an AI model after it's built, teaching it to be more ethical, more accurate, or more human, you're doing post-training. That's Hongyu Ren's specialty. Educated at Peking University and Stanford, Ren led a post-training team at OpenAI and is one of the more philosophically minded researchers in the group.

Shengjia Zhao

As a co-creator of ChatGPT, Shengjia Zhao is no stranger to AI that captures the public imagination. But behind the scenes, he was also working on one of the field's most quietly important trends: synthetic data. By helping machines generate their own training material, Zhao advanced a method to keep AI learning even as real-world data dries up. After graduating from Tsinghua University and Stanford, Zhao joined OpenAI in 2022 and quickly rose through the ranks.

Johan Schalkwyk

Hailing from South Africa, Johan Schalkwyk has always worked on the frontier of communication. At Google, he led the company's ambitious effort to support 1,000 spoken languages, a moonshot project that blended linguistics, machine learning, and cultural preservation. Most recently, he served as machine learning lead at Sesame, a startup trying to make conversational AI feel like real dialogue.

Pei Sun

Pei Sun helped power the brains behind Waymo, Google's self-driving car unit. His work involved building next-generation models for perception and reasoning, skills that translate neatly into the world of chatbots, robots, and beyond. Educated at Tsinghua University and Carnegie Mellon, Sun began a PhD before dropping out to move into industry sooner.

Joel Pobar

An AI infrastructure veteran, Joel Pobar most recently worked at Anthropic, where he helped scale inference systems for some of the most advanced models in the world. Before that, he spent nearly a decade at Facebook, leading engineering teams. Educated in Australia at Queensland University of Technology, Pobar brings a rare mix of insider knowledge and outsider grit. His job at Meta will likely focus on making sure the lab's most powerful creations can actually run reliably, at scale, and in real time.

MAHE BLRU's MIT Reports Stellar Placement Season with INR 52 LPA Top Salary

Business Standard

a day ago



NewsVoir Bengaluru (Karnataka) [India], July 3: Manipal Institute of Technology (MIT), located at the Manipal Academy of Higher Education (MAHE) Bengaluru campus, is a top-tier constituent of MAHE, which ranked 4th in the NIRF 2024 University Rankings. MIT Bengaluru has achieved strong placement results this year. The highlight of the season was an INR 52 LPA package secured by Vijval Narayana, a student from the Electronics and Communication Engineering department. This marks a significant milestone for MIT Bengaluru's first graduating batch.

The season also witnessed several high-value offers, the standout packages being INR 22 LPA for Pradyota Kirtikar (Computer Science); INR 18.6 LPA each for Himavarshini Beedala and Shreeya Khera (CSE - AI & Cyber Security); and INR 15 LPA for P. Yasmeen Begum (Information Technology).

Prof. (Dr) Madhu Veeraraghavan, Pro Vice-Chancellor, Manipal Academy of Higher Education, Bengaluru, said, "At MAHE Bengaluru, we are always committed to extend beyond traditional education to foster industry-ready professionals. By aligning our curriculum with modern market demands and enhancing practical skills development, we ensure our students are always well-prepared for the competitive landscape. The impressive placement outcomes and handsome packages our students secure are a testament to this strategic approach."

Sharing his thoughts, Dr. Iven Jose, Director of MIT Bengaluru, said, "At MIT Bengaluru, we create an exciting learning ecosystem shaped by innovation, industry immersion, and holistic development. Being in the center of India's and specifically Karnataka's technology hub, our students have easy access to incredible companies, research institutes, and start-up ecosystems. Through advanced infrastructure, interdisciplinary working, and engaged faculty, we commit to our students working on real-world challenges and place importance on honing skills such as entrepreneurship, sustainability, and digital transformation."

MIT Bengaluru also reported encouraging statistics in its internship placement drive:
* INR 1.1 LPA - Highest internship stipend
* INR 38 KPM - Average stipend
* INR 30 KPM - Median stipend

Over 250 Top Recruiters Participated

The placement season saw participation from more than 250 recruiters spanning sectors such as technology, consulting, healthcare, energy, and finance. Major companies included Microsoft, Amazon, McKinsey & Company, Komprise, Optum, Saviynt, Shell, HPE, Infosys, TCS, Intel, Goldman Sachs, Philips, Unilever, Dell, Capgemini, Swiggy, LumiQ, BlackRock, Cognizant, Siemens Healthineers, and many others.

Established in 1953, Manipal Academy of Higher Education (MAHE) is an Institution of Eminence Deemed to be University. With a remarkable track record in academics, state-of-the-art infrastructure, and significant research contributions, MAHE has earned recognition and acclaim both nationally and internationally. In October 2020, the Ministry of Education, Government of India, honoured MAHE with the prestigious designation of Institution of Eminence Deemed to be University. Currently ranked sixth in the National Institutional Ranking Framework (NIRF), MAHE is a preferred choice for students seeking a transformative learning experience.

MAHE Bengaluru, an off-campus centre of MAHE, excels in delivering comprehensive education, supported by highly qualified faculty and dedicated mentors. The MAHE Bengaluru campus offers an inspiring, future-relevant learning ecosystem on a new-age, tech-enabled living campus. Here, students immerse themselves, transform, and discover multiple choices and opportunities. At MAHE Bengaluru, the potential for growth and the opportunities available are boundless.
