
Turkey Unveils Tayfun Block-4: First Home-Built Hypersonic Missile | Vantage with Palki Sharma
Turkey reveals its first indigenously developed hypersonic missile, the Tayfun Block-4, at an international arms expo in Istanbul. Developed by Turkish defence manufacturer Roketsan, the missile can travel at five times the speed of sound at low altitudes. The Tayfun Block-4 marks a major step in Turkey's defence ambitions and is the hypersonic variant of the country's longest-range ballistic missile.
Also on Vantage Shots:
- Thousands protest across Ukraine for a second day over controversial anti-corruption law
- Environmental activists rally against oil drilling in the Amazon ahead of COP30 in Brazil
- On this day in history, in 1969, astronauts Neil Armstrong, Buzz Aldrin, and Michael Collins successfully returned to Earth after completing the historic Apollo 11 Moon landing mission. The crew splashed down in the Pacific Ocean after spending over eight days in space. With their safe return, the final leg of the Apollo 11 mission came to a close, marking a defining moment in space exploration.
Related Articles


Time of India
an hour ago
Who is Andrew Tulloch? Former OpenAI engineer and Mira Murati's co-founder who rejected a $1.5 billion offer from Mark Zuckerberg
Andrew Tulloch, an Australian computer scientist and machine learning expert, has made headlines after turning down a $1.5 billion offer from Mark Zuckerberg to rejoin Meta. A former OpenAI engineer, Tulloch is now the co-founder of Thinking Machines Lab alongside ex-OpenAI CTO Mira Murati. The AI startup, still in its early stages, is already valued at $12 billion. Tulloch's decision to decline Zuckerberg's aggressive recruitment attempt reflects a broader trend of top AI talent prioritizing independence, mission-driven work, and long-term impact over staggering financial packages.

Andrew Tulloch: From Wall Street to the frontier of AI

Tulloch's journey began at the University of Sydney, where he graduated with first-class honours and a University Medal in mathematics. He later earned a Master's in Mathematical Statistics from Cambridge and pursued a PhD at UC Berkeley. Tulloch worked at Meta (then Facebook) from 2012 to 2023, contributing to machine learning systems and the development of PyTorch. He joined OpenAI in 2023, focusing on GPT-4 pretraining and reasoning models, before co-founding Thinking Machines Lab in early 2025.

Co-founded with Mira Murati, Thinking Machines Lab is focused on building AI systems that are safer, interpretable, and customizable, going beyond traditional chatbot interfaces. The startup, though yet to release a product, has secured a $2 billion seed round with backing from Andreessen Horowitz, Nvidia, AMD, and Google Cloud. Its ambition and leadership have made it a top target for recruitment, especially from Zuckerberg's new "superintelligence" division at Meta.

Mark Zuckerberg's offer and Tulloch's viral rejection

According to The Wall Street Journal, Zuckerberg personally tried to lure Tulloch back to Meta with a six-year offer worth up to $1.5 billion, contingent on bonuses and stock performance. Tulloch declined, joining other Thinking Machines Lab co-founders in resisting Meta's poaching attempts. The bold rejection has gone viral, with Tulloch's LinkedIn profile celebrated for charting a rare career driven by principles rather than payouts. Meta later disputed the exact terms of the offer but confirmed outreach efforts were made.


New Indian Express
2 hours ago
ISRO launches mission HOPE in Ladakh
BENGALURU: The Indian Space Research Organisation (ISRO) on Saturday announced the launch of the Himalayan Outpost for Planetary Exploration (HOPE) analog mission in the Tso Kar Valley, Ladakh. The ten-day mission, running from August 1-10, 2025, is more than a simulation; it is a rehearsal for the future, said ISRO Chairman V Narayanan on the sidelines of the inauguration. He said the high-altitude mission is being conducted at an elevation of 4,530 metres. The Tso Kar Valley is among Earth's most Mars-like environments, and HOPE is designed to simulate planetary conditions for testing human physiological responses, validating mission protocols, and evaluating spaceflight technologies. The mission marks a significant milestone in India's preparations for future human spaceflight to Low Earth Orbit and for Moon and Mars exploration missions, he added. Explaining why Ladakh was chosen for the mission, ISRO scientists said it is a dry, cold, desert-like area where oxygen supply is low. The striking environment of the Tso Kar Valley parallels early Mars, owing to high UV flux, low air pressure, cold extremes, and saline permafrost, the scientists explained.


Economic Times
2 hours ago
'I feel useless': ChatGPT-5 is so smart, it has spooked Sam Altman, the man who started the AI boom
OpenAI is on the verge of releasing GPT-5, the most powerful model it has ever built. But its CEO, Sam Altman, isn't celebrating just yet. Instead, he's sounding the alarm. In a revealing podcast appearance on This Past Weekend with Theo Von, Altman admitted that testing the model left him shaken. 'It feels very fast,' he said. 'There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: "What have we done?"'

His words weren't about performance metrics. They were about unease. Altman compared the development of GPT-5 to the Manhattan Project — the World War II effort that led to the first atomic bomb. The message was clear: speed and capability are growing faster than our ability to think through what they actually mean. He continued, 'Maybe it's great, maybe it's bad—but what have we done?' This wasn't just about AI as a tool. Altman was questioning whether humanity is moving so fast that it can no longer understand — or control — what it builds. 'It feels like there are no adults in the room,' he added, suggesting that regulation is far behind the pace of development.

The specs for GPT-5 are still under wraps, but reports suggest significant leaps over GPT-4: better multi-step reasoning, longer memory, and sharper multimodal capabilities. Altman himself didn't hold back about the previous version, saying, 'GPT-4 is the dumbest model any of you will ever have to use again, by a lot.' For many users, GPT-4 was already advanced. If GPT-5 lives up to the internal hype, it could change how people work, create, and learn.

In another recent conversation, Altman described a moment where GPT-5 answered a complex question he couldn't solve himself. 'I felt useless relative to the AI,' he admitted. 'It was really hard, but the AI just did it like that.'

OpenAI's long-term goal has always been Artificial General Intelligence (AGI). That's AI capable of understanding and reasoning across almost any task — human-like cognition. Altman once downplayed its arrival, suggesting it would 'whoosh by with surprisingly little societal impact.' Now, he's sounding far less sure. If GPT-5 is a real step toward AGI, the absence of a global framework to govern it could be dangerous. AGI remains loosely defined. Some firms treat it as a technical milestone. Others see it as a $100 billion opportunity, as Microsoft's partnership contract with OpenAI implies. Either way, the next model may blur the line between AI that helps and AI that acts.

OpenAI isn't just facing ethical dilemmas. It's also under financial pressure. Investors are pushing for the firm to transition into a for-profit entity by the end of the year. Microsoft, which has invested $13.5 billion in OpenAI, reportedly wants more control. There are whispers that OpenAI could declare AGI early in order to exit its agreement with Microsoft — a move that would shift the power balance in the AI sector. Microsoft insiders have reportedly described their wait-and-watch approach as the 'nuclear option.' In response, OpenAI is said to be prepared to go to court, accusing Microsoft of anti-competitive behaviour. One rumoured trigger could be the release of an AI coding agent so capable it surpasses a human programmer — something GPT-5 might be edging toward.

Altman, meanwhile, has tried to lower expectations about rollout glitches. Posting on X, he said, 'We have a ton of stuff to launch over the next couple of months — new models, products, features, and more. Please bear with us through some probable hiccups and capacity crunches.'

While researchers and CEOs debate long-term AI impacts, one threat is already here: fraud. Haywood Talcove, CEO of the Government Group at LexisNexis Risk Solutions, works with over 9,000 public agencies. He says the AI fraud crisis is not approaching — it's already happening. 'Every week, AI-generated fraud is siphoning millions from public benefit systems, disaster relief funds, and unemployment programmes,' he warned. 'Criminal networks are using deepfakes, synthetic identities, and large language models to outpace outdated fraud defences — and they're winning.'

During the pandemic, fraudsters exploited weaknesses to steal hundreds of billions in unemployment benefits. That trend has only accelerated. Today's tools are more advanced and automated, capable of filing tens of thousands of fake claims in a day. Talcove believes the AI arms race between criminals and institutions is widening. 'We may soon recognise a similar principle for AI that I call "Altman's Law": every 180 days, AI capabilities double.' His call to action is blunt. 'Right now, criminals are using it better than we are. Until that changes, our most vulnerable systems and the people who depend on them will remain exposed.'

Not everyone is convinced by Altman's remarks. Some see them as clever marketing. But his past record and unfiltered tone suggest genuine concern. GPT-5 might be OpenAI's most ambitious release yet. It could also be a signpost for the world to stop, look around, and ask itself what kind of intelligence it really wants to build — and how much control it's willing to give up.