Tesla's Long-Awaited India Debut Bets on Luxury Vehicle Buyers


Bloomberg · 21 hours ago
Tesla Inc. is opening its first India showroom as Elon Musk's electric-vehicle maker looks to ply new markets and offset slowing sales where it's already well established.
A 4,000-square-foot space in Mumbai's posh financial district of Bandra Kurla Complex will open its doors on Tuesday. It'll showcase Model Y crossovers made in China with an expected sticker price of more than $56,000 before taxes and insurance, Bloomberg News reported last month. That's about $10,000 more than the vehicle's starting price in the US without a federal tax credit.

Related Articles

US auto safety nominee calls for active oversight of self-driving cars

Yahoo · 12 minutes ago

By David Shepardson

WASHINGTON (Reuters) - President Donald Trump's nominee to head the nation's auto safety regulator will argue on Wednesday that the agency must actively oversee self-driving vehicle technology, a potential sign of a tougher approach than some critics expected. Jonathan Morrison, chief counsel of the National Highway Traffic Safety Administration in the first Trump administration, will testify to the U.S. Senate that autonomous vehicles offer potential benefits but also unique risks.

"NHTSA cannot sit back and wait for problems to arise with such developing technologies, but must demonstrate strong leadership," Morrison said in written testimony seen by Reuters.

The comments suggested NHTSA will continue to closely scrutinize self-driving vehicles. Some critics of the technology had expressed alarm over NHTSA staff cuts this year under a cost-cutting campaign led by Elon Musk, who was a close adviser to Trump and is CEO of self-driving automaker Tesla. The Musk-Trump alliance prompted some critics to speculate that NHTSA would go easy on self-driving vehicle developers. But the relationship began to unravel in late May over Trump's spending plans, and the two are now locked in a feud.

NHTSA said last month it was seeking information from Tesla about social media videos of robotaxis and self-driving cars Tesla was testing in Austin, Texas. The videos were alleged to show one of the vehicles using the wrong lane and another speeding. Since October, NHTSA has been investigating 2.4 million Tesla vehicles with full self-driving technology after four reported collisions, including a 2023 fatal crash.

"The technical and policy challenges surrounding these new technologies must be addressed," Morrison's testimony said. "Failure to do so will result in products that the public will not accept and the agency will not tolerate."
Other companies in the self-driving sector also were subjects of NHTSA investigations including Alphabet's Waymo, which last year faced reports its robotaxis may have broken traffic laws. Waymo in May recalled 1,200 self-driving vehicles, and the probe remains open. Regulatory scrutiny increased after 2023 when a pedestrian was seriously injured by a GM Cruise self-driving car. The first recorded death of a pedestrian related to self-driving technology was in 2018 in Tempe, Arizona.

Closing The Digital Skills Gap: How UNICEF And Partners Empower Youth

Forbes · 23 minutes ago

As digital technology rapidly transforms the workforce, a global digital skills gap is leaving many young people behind, especially girls and young women. UNICEF and committed private sector partners are equipping the next generation with essential digital, entrepreneurial and AI skills, empowering them to become innovators, leaders and changemakers.

Anjali poses with the sewer cleaning robot prototype that she developed at the Atal Tinkering Lab (ATL) at her school in Chhattisgarh, India. UNICEF, along with private sector partners, supports ATLs across India to foster a culture of learning, skilling and entrepreneurship.

Why digital skills are essential for today's youth

As digital technology reshapes work, too many adolescents and young people are falling behind. Globally, 65 percent of teens lack the digital skills needed for 90 percent of today's jobs, with the widest gaps in low- and middle-income countries and among girls. In many of these places, girls are 25 percent less likely than boys to access the knowledge needed for basic digital tasks. Meanwhile, 86 percent of employers expect that artificial intelligence (AI) and information processing technologies will transform their businesses by 2030. The theme of World Youth Skills Day 2025, 'Youth empowerment through AI and digital skills,' highlights the acute need for an inclusive, ethical and empowering future for all youth.

UNICEF's role in youth digital workforce readiness

UNICEF is a leader in digital skills programs that prepare young people to take part in a fast-changing economy and become the leaders their communities and the world need. This work is supported by strong private sector partners whose values, interests and corporate philanthropy aims align with UNICEF's goal to create a better world for every child.
Private sector partners collaborate with UNICEF in many ways, supplying the knowledge, tools and funding that complement UNICEF's strengths and accelerate young people's path to economic security and opportunity. Trusted private sector partners allow UNICEF to plan long term and scale up programs more effectively. True collaboration and bold innovation can lead to powerful solutions, while UNICEF remains committed to promoting and upholding children's rights as AI policies and practices evolve.

How public-private partnerships are transforming youth opportunities

Public-private sector collaboration can scale programs from concepts to solutions and achieve greater impact at a faster pace than either sector could alone. Since 1999, fewer young people around the world have been working, even though the number of young people has grown. When youth are not working, studying or in training, their overall wellbeing suffers, diminishing their ability to contribute to future economic development and sociopolitical stability. To flip the script, more young people must be able to identify and access the skills to participate in a digital and green economy.

UNICEF and SAP piloted an innovative, scalable workforce readiness program for marginalized youth in Kenya, Nigeria and South Africa that supports learning-to-earning pathways. The program leverages Generation Unlimited's Youth Agency Marketplace (Yoma), a digital platform that connects young people with social impact tasks and learning-to-earning opportunities.

Scaling digital learning with Yoma: a youth-led innovation

The Yoma platform for youth was developed by young Africans seeking to address the stark reality that youth comprise 60 percent of Africa's jobless. Since 2022, the SAP and UNICEF partnership has reached over 815,000 young people and helped improve the lives of 250,000 more through engagement with foundational and digital skills for youth.
Overall, thanks to SAP and other partners' support, Yoma has logged over 5 million engagements, including more than 500,000 youth registered in over eight countries to access skilling, earning and impact opportunities through the Yoma ecosystem.

Muhammad Abdullahi applies skills he learned from UNICEF Youth Agency Marketplace (Yoma) in Bauchi State, Nigeria, to his work as a health care innovator and employer. Yoma is a digital marketplace for youth to gain individualized learning and align opportunities with their aspirations.

Muhammad Abdullahi, a health educator from Azare in Nigeria's Bauchi State, uses his Yoma-acquired skills to inspire change around him. Bauchi State has a high number of children who are out of school. 'Growing up in a community like Azare gave me a sense that we need to call on our young people to change the narrative of how our people survive here,' he says. Muhammad used the money he earned scavenging plastic waste to pay for his university tuition. 'I was afraid to graduate from university because I may not get a job, but after utilizing opportunities from Yoma, I am a proud health innovator and employer now.'

How Skills4Girls builds confidence, STEM access and leadership

Investment in girls' education and skills-building forges a critical pathway to dignified work and economic security. About 1 billion girls and women worldwide lack the skills to keep up in today's job market. Among teenagers between 15 and 19, twice as many girls (1 in 4) are not working, learning or training compared to boys (1 in 10). With support from several private sector partners, UNICEF's Skills4Girls is closing the gap between the education girls traditionally receive and the digital skills they need to thrive in today's economy. Skills4Girls develops girls' skills in STEM, digital technologies and social entrepreneurship, and bolsters life skills like problem-solving, communication, teamwork and self-confidence.
For example, thanks to Sylvamo's partnership with UNICEF, Skills4Girls expanded its work in countries like Bolivia and Brazil to give girls greater access to STEM education and leadership training, unlocking their individual potential and yielding greater societal benefits. With more than 640 million adolescent girls living on the planet today, programs like Skills4Girls play a crucial role in supporting their growth and potential.

Mary Luz, 15, of La Paz, Bolivia, created an award-winning robotic boat to collect trash from rivers and lakes.

In Bolivia, only 24 percent of students in technological and scientific careers are women. Skills4Girls is working to improve that reality and build a better future by teaching Bolivian girls to design and build robot prototypes. Mary Luz, 15, from La Paz, dreamed of seeing nearby Lake Titicaca clean – free from pollution and plastic waste. Driven by that vision, she created a prototype robotic boat that collects trash from rivers and lakes. Mary's invention is equipped with weather sensors, a live camera and an anemometer to measure wind speed. With support from UNICEF, her creativity and determination led her to represent Bolivia at the world's largest robotics tournament.

Grassroots innovation, generational power

Partnerships are a means to an end, not the end itself. Each UNICEF and private sector initiative is a dynamic collaboration to lead young people somewhere better than where they started. And when young people are actively involved in crafting solutions, that goal is often reached faster. Crocs, Inc., one of UNICEF's newest skills partners, has committed to a three-year partnership to support UNICEF's UPSHIFT, a social accelerator that prepares young people between 10 and 24 to become community changemakers and innovators.
UPSHIFT aligns with Crocs, Inc.'s Step Up To Greatness program values and goals to build skills and confidence in young people and unlock their potential. UPSHIFT equips youth with professional and transferable skills through experiential learning. Participants identify challenges in their communities and devise local, innovative solutions to address them.

For example, in Ukraine, where approximately 1.5 million children are at risk of depression, anxiety and post-traumatic stress disorder, UPSHIFT has equipped young people to take action on the issues they care about most. One solution is Teenage Island – created by teens for teens – on the social platform Discord. Teenage Island provides a safe virtual space for young people to connect over shared struggles. 'You can get away from unwanted reality. For us, that is the war,' says Oleksii, 22, a Teenage Island member.

Sofia, 17, hosts a podcast on Teenage Island, a teen-led virtual space offering connection and psychological support to young Ukrainians.

On Teenage Island, adolescents and young people can talk to a psychologist in group sessions, explore creative writing or dive into fantasy role-playing adventures. The team also launched a podcast series in which Sofia, a 17-year-old Ukrainian, openly discusses grief, mental health and war with a psychologist. Teenage Island exemplifies how partner funding doesn't just support immediate needs but can strengthen systems and services for sustainable progress long after UNICEF's interventions end.

Partnering for a brighter future

UNICEF's public-private sector partnerships for youth can bring the technology, experience and talent, and critical investment needed to supercharge skills development.
Together, UNICEF and partners create scalable, forward-thinking solutions that fast-track young people's access to opportunity and build a brighter future for the next generation. Learn more about UNICEF's private sector partnerships that help bridge the digital divide and support every child's right to learn. UNICEF does not endorse any company, brand, organization, product or service.

Grok controversies raise questions about moderating, regulating AI content

The Hill · 24 minutes ago

Elon Musk's artificial intelligence (AI) chatbot Grok has been plagued by controversy recently over its responses to users, raising questions about how tech companies seek to moderate content from AI and whether Washington should play a role in setting guidelines.

Grok faced sharp scrutiny last week, after an update prompted the AI chatbot to produce antisemitic responses and praise Adolf Hitler. Musk's AI company, xAI, quickly deleted numerous incendiary posts and said it added guardrails to 'ban hate speech' from the chatbot. Just days later, xAI unveiled its newest version of Grok, which Musk claimed was the 'smartest AI model in the world.' However, users soon discovered that the chatbot appeared to be relying on its owner's views to respond to controversial queries.

'We should be extremely concerned that the best performing AI model on the market is Hitler-aligned. That should set off some alarm bells for folks,' said Chris MacKenzie, vice president of communications at Americans for Responsible Innovation (ARI), an advocacy group focused on AI policy.

'I think that we're at a period right now, where AI models still aren't incredibly sophisticated,' he continued. 'They might have access to a lot of information, right. But in terms of their capacity for malicious acts, it's all very overt and not incredibly sophisticated.'

'There is a lot of room for us to address this misaligned behavior before it becomes much more difficult and much harder to detect,' he added.

Lucas Hansen, co-founder of the nonprofit CivAI, which aims to provide information about AI's capabilities and risks, said it was 'not at all surprising' that it was possible to get Grok to behave the way it did. 'For any language model, you can get it to behave in any way that you want, regardless of the guardrails that are currently in place,' he told The Hill.

Musk announced last week that xAI had updated Grok, after he previously voiced frustrations with some of the chatbot's responses.
In mid-June, the tech mogul took issue with a response from Grok suggesting that right-wing violence had become more frequent and deadly since 2016. Musk claimed the chatbot was 'parroting legacy media' and said he was 'working on it.' He later indicated he was retraining the model and called on users to help provide 'divisive facts,' which he defined as 'things that are politically incorrect, but nonetheless factually true.'

The update caused a firestorm for xAI, as Grok began making broad generalizations about people with Jewish last names and perpetuating antisemitic stereotypes about Hollywood. The chatbot falsely suggested that people with 'Ashkenazi surnames' were pushing 'anti-white hate' and that Hollywood was advancing 'anti-white stereotypes,' which it later implied was the result of Jewish people being overrepresented in the industry. It also reportedly produced posts praising Hitler and referred to itself as 'MechaHitler.'

xAI ultimately deleted the posts and said it was banning hate speech from Grok. It later offered an apology for the chatbot's 'horrific behavior,' blaming the issue on an 'update to a code path upstream' of Grok.

'The update was active for 16 [hours], in which deprecated code made @grok susceptible to existing X user posts; including when such posts contained extremist views,' xAI wrote in a post Saturday. 'We have removed that deprecated code and refactored the entire system to prevent further abuse.'

It identified several key prompts that caused Grok's responses, including one informing the chatbot it is 'not afraid to offend people who are politically correct' and another directing it to reflect the 'tone, context and language of the post' in its response. xAI's prompts for Grok have been publicly available since May, when the chatbot began responding to unrelated queries with allegations of 'white genocide' in South Africa.
The company later said the posts were the result of an 'unauthorized modification' and vowed to make its prompts public in an effort to boost transparency.

Just days after the latest incident, xAI unveiled the newest version of its AI model, called Grok 4. Users quickly spotted new problems, in which the chatbot suggested its surname was 'Hitler' and referenced Musk's views when responding to controversial queries. xAI explained Tuesday that Grok's searches had picked up on the 'MechaHitler' references, resulting in the chatbot's 'Hitler' surname response, while suggesting it had turned to Musk's views to 'align itself with the company.' The company said it has since tweaked the prompts and shared the details on GitHub.

'The kind of shocking thing is how that was closer to the default behavior, and it seemed that Grok needed very, very little encouragement or user prompting to start behaving in the way that it did,' Hansen said.

The latest incident has echoes of problems that plagued Microsoft's Tay chatbot in 2016, which began producing racist and offensive posts before it was disabled, noted Julia Stoyanovich, a computer science professor at New York University and director of the Center for Responsible AI.

'This was almost 10 years ago, and the technology behind Grok is different from the technology behind Tay, but the problem is similar: hate speech moderation is a difficult problem that is bound to occur if it's not deliberately safeguarded against,' Stoyanovich said in a statement to The Hill. She suggested xAI had failed to take the necessary steps to prevent hate speech.

'Importantly, the kinds of safeguards one needs are not purely technical, we cannot 'solve' hate speech,' Stoyanovich added. 'This needs to be done through a combination of technical solutions, policies, and substantial human intervention and oversight. Implementing safeguards takes planning and it takes substantial resources.'
MacKenzie underscored that speech outputs are 'incredibly hard' to regulate and instead pointed to a national framework for testing and transparency as a potential solution. 'At the end of the day, what we're concerned about is a model that shares the goals of Hitler, not just shares hate speech online, but is designed and weighted to support racist outcomes,' MacKenzie said.

In a January report evaluating various frontier AI models on transparency, ARI ranked Grok the lowest, with a score of 19.4 out of 100. While xAI now releases its system prompts, the company notably does not produce system cards for its models. System cards, which are offered by most major AI developers, provide information about how an AI model was developed and tested. AI startup Anthropic proposed its own transparency framework for frontier AI models last week, suggesting the largest developers should be required to publish system cards, in addition to secure development frameworks detailing how they assess and mitigate major risks.

'Grok's recent hate-filled tirade is just one more example of how AI systems can quickly become misaligned with human values and interests,' said Brendan Steinhauser, CEO of The Alliance for Secure AI, a nonprofit that aims to mitigate the risks from AI.

'These kinds of incidents will only happen more frequently as AI becomes more advanced,' he continued in a statement. 'That's why all companies developing advanced AI should implement transparent safety standards and release their system cards. A collaborative and open effort to prevent misalignment is critical to ensuring that advanced AI systems are infused with human values.'
