OpenAI Academy & NxtWave (NIAT) launch India's largest GenAI innovation challenge for students – The OpenAI Academy X NxtWave Buildathon


Time of India | 06-06-2025
OpenAI Academy and NxtWave (NIAT) have come together to launch the OpenAI Academy X NxtWave Buildathon, the largest GenAI innovation challenge aimed at empowering students from Tier 1, 2, and 3 STEM colleges across India. This initiative invites the country's brightest student innovators to develop AI-powered solutions addressing pressing issues across key sectors, including healthcare, education, BFSI, retail, sustainability, and agriculture, under the themes 'AI for Everyday India', 'AI for Bharat's Businesses', and 'AI for Societal Good'.
A hybrid challenge driving real-world AI innovation
The Buildathon will be conducted in a hybrid format, combining online workshops and activities with regional offline finals, culminating in a grand finale where the best teams pitch live to expert judges from OpenAI India.
Participants will first complete a 6-hour online workshop covering GenAI fundamentals, an introduction to building agents, OpenAI API usage, and responsible AI development best practices.
This foundational sprint ensures all participants are well-prepared to develop innovative and impactful AI solutions using OpenAI's cutting-edge technologies.
The Buildathon unfolds over three competitive stages:
Stage 1: Screening Round — Post-workshop, teams submit problem statements, project ideas, and execution plans online. A panel of mentors reviews submissions to shortlist the most promising entries.
Stage 2: Regional Finals — Shortlisted teams participate in an intensive 48-hour offline Buildathon held across 25–30 STEM colleges, with hands-on mentor support. Regional winners are announced following this stage.
Stage 3: Grand Finale — The top 10–15 teams from regional finals compete in the Grand Finale, pitching their solutions live to expert judges.
Build with the best tools in AI
Participants will have access to the latest in AI innovation, including the GPT-4.1, GPT-4o, GPT-4o Audio, and GPT-4o Realtime models, which support multimodal inputs such as text, image, and audio. They will also work with tools like LangChain, vector databases (Pinecone, Weaviate), MCPs, and the OpenAI Agents SDK.
These tools will empower students to build high-impact, multimodal, action-oriented GenAI applications. Hands-on mentorship and structured support will guide participants throughout the process.
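To illustrate the kind of OpenAI API usage the workshop covers, here is a minimal sketch of a multimodal Chat Completions request in Python. The model name, question, and image URL are placeholders, and the example assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set; it is not part of the official Buildathon curriculum.

```python
def build_messages(question: str, image_url: str) -> list:
    """Compose a text-plus-image message in the Chat Completions format."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }]


def ask(question: str, image_url: str, model: str = "gpt-4o") -> str:
    """Send a multimodal prompt and return the model's text reply.

    Requires `pip install openai` and OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI  # imported lazily so build_messages works without the SDK

    client = OpenAI()  # reads OPENAI_API_KEY automatically
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(question, image_url),
    )
    return response.choices[0].message.content
```

The same message structure extends to audio inputs with the audio-capable models mentioned above, and frameworks such as LangChain or the OpenAI Agents SDK build agentic workflows on top of calls like this one.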
Widespread reach, diverse participation
The Buildathon aims to empower 25,000+ students across seven states and regions — Telangana, Karnataka, Maharashtra, Andhra Pradesh, Tamil Nadu, Rajasthan, and Delhi NCR. The Grand Finale will be hosted in Hyderabad or Delhi.
With coverage across all major zones of India, the event ensures nationwide representation and diversity.
Evaluation criteria across all stages
Participants will be evaluated in three stages. In the Screening Round, mentors will assess submissions based on problem relevance, idea feasibility, and the proposed use of OpenAI APIs. During the Regional Finals, on-ground judges will evaluate the prototypes for innovation, depth of OpenAI API integration, societal impact, and business viability. Finally, in the Grand Finale, an expert panel will judge the top teams using the same criteria, with greater weightage given to execution quality and the effectiveness of the live pitch.
Exciting rewards & career-boosting opportunities
Participants in the Buildathon will gain access to a wide range of exclusive benefits designed to boost their skills, visibility, and career prospects. All selected teams will receive hands-on training along with mentorship from leading AI experts across the country. Top-performing teams will earn certificates, GPT+ credits for prototyping, and national-level recognition. They'll also gain a rare opportunity to pitch directly to the OpenAI Academy's India team during the Grand Finale. Winners will receive prize money worth Rs 10,00,000 in total, along with career opportunities in the OpenAI ecosystem.
A nationwide movement for GenAI talent
Driven by NxtWave (NIAT), the Buildathon aligns with India's mission to skill its youth in future technologies. With OpenAI Academy bringing in expert guidance, branding, and cutting-edge tools, this initiative is poised to become a defining moment in India's AI journey, while offering students across the country a real chance to build and shine on a national stage.
This landmark initiative aims to position OpenAI Academy at the forefront of India's AI talent development, activating over 25,000 students across 500+ campuses and generating more than 2,000 AI projects tackling real-world challenges. Through collaborative efforts, OpenAI Academy and NxtWave seek to foster a vibrant community of AI builders ready to drive innovation and impact across India.
By enabling thousands of OpenAI-powered projects, the OpenAI Academy x NxtWave Buildathon sets the stage for a new wave of AI builders ready to innovate for India and beyond.
Disclaimer - The above content is non-editorial, and TIL hereby disclaims any and all warranties, expressed or implied, relating to it, and does not guarantee, vouch for or necessarily endorse any of the content.