Philippine defense chief renounced Maltese citizenship before his appointment, department says

Washington Post, 14-07-2025
MANILA, Philippines — Philippine Defense Secretary Gilberto Teodoro Jr. had renounced his Maltese citizenship and disclosed it to Filipino authorities before taking office, the Defense Department said Monday.
Philippine law generally disqualifies candidates for high public office who have dual citizenship, especially those with foreign citizenship acquired through naturalization, unless they renounce their foreign citizenship.
Teodoro, who was appointed defense chief by President Ferdinand Marcos Jr. in 2023, is one of the most vocal critics of China's aggressive actions in the disputed South China Sea and elsewhere in Asia. He has led efforts by the Philippines to deepen its treaty alliance with the United States and build new security ties with other countries to deter China.
The Manila Times reported that Teodoro had acquired a Maltese passport in 2016 with a validity of 10 years. The Defense Department said, however, that Teodoro's Maltese passport 'was surrendered and renounced' before he filed his certificate of candidacy for a Senate seat in 2021. Teodoro lost that senatorial bid.
A Philippine congressional committee on appointments was also notified that Teodoro had renounced his Maltese passport and citizenship before it confirmed his appointment as defense secretary, the Defense Department said, suggesting that critics were trying to undermine his image.
'The motive of this rumor is clear and known to Sec. Teodoro,' the Defense Department said without elaborating. 'The timing of the article adds to this motive.'
The Department of National Defense did not provide details of Teodoro's acquisition of his passport from Malta, which belongs to the 27-nation European Union. Malta, through its so-called golden passport scheme, has allowed foreigners to become citizens through financial investment.
In April, the European Court of Justice ordered Malta to close its 'golden passport' program, ruling that citizenship in EU countries cannot be sold. Programs that allow wealthy people to buy citizenship were once widespread in Europe, but they've been rolled back out of concern they facilitate transnational crime and sanctions evasion.

Related Articles

ICAI revises tax audit limits for chartered accountants

Yahoo, 18 minutes ago

The Institute of Chartered Accountants of India (ICAI) has announced revised norms for the maximum number of tax audits a chartered accountant can undertake annually. Effective from 1 April 2026, the guidelines aim to enhance audit quality by maintaining the existing limit of 60 tax audits per member per financial year, applicable in both individual and partnership capacities.

The revised guidelines specify that the limit of 60 tax audits cannot be distributed or shared among partners in a chartered accountants (CA) firm. However, this limit excludes tax audit assignments under clauses (c), (d) and (e) of section 44AB, concerning sections 44AE, 44ADA and 44AD. Additionally, revised tax audit reports will not count towards the 60-audit limit. These changes were decided in ICAI's 442nd and 443rd meetings held on 26–27 May 2025 and 30 June–1 July 2025, respectively. The guidelines aim to ensure that the quality of tax audits remains uncompromised.

In a separate development, the ICAI inaugurated the ICAI International ADR Centre (IIAC), a Section 8 company, to promote alternative dispute resolution (ADR) mechanisms in India. The initiative represents ICAI's move into ADR, an area intersecting commercial, legal and economic interests. In a statement, the ICAI said that these centres will enhance the commercial dispute resolution ecosystem by providing a transparent, technology-enabled mechanism. The centres will operate under a governance framework, ensuring integrity, neutrality and professional excellence for both domestic and international stakeholders.

ICAI president Charanjot Singh Nanda said: 'In this evolving business ecosystem, effective dispute resolution is no longer a procedural formality; it is a strategic necessity and IIAC aims to provide this necessity with credibility, neutrality and efficiency, values that are at the core of ICAI's professional ethos. The IIAC will serve as a specialised institutional platform offering structured and time-bound arbitration, mediation, conciliation and negotiation services that are professionally managed, process-driven and globally benchmarked.'

"ICAI revises tax audit limits for chartered accountants" was originally created and published by The Accountant, a GlobalData owned brand.
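To make the counting rule concrete, here is a minimal illustrative sketch in Python. It assumes a hypothetical record format (a clause label plus a "revised" flag) invented for this example; it is not an ICAI tool or an interpretation of the law, only a tally of assignments against the 60-audit cap with the stated exclusions.

# Hypothetical sketch: count tax audit assignments against the 60-per-member cap,
# excluding assignments under section 44AB clauses (c), (d), (e)
# (i.e. sections 44AE, 44ADA, 44AD) and revised reports, per the guidelines above.
EXCLUDED_CLAUSES = {"44AB(c)", "44AB(d)", "44AB(e)"}
LIMIT_PER_MEMBER = 60

def audits_counted(assignments):
    # assignments: list of dicts like {"clause": "44AB(a)", "revised": False}
    return sum(
        1
        for a in assignments
        if a["clause"] not in EXCLUDED_CLAUSES and not a.get("revised", False)
    )

def within_limit(assignments):
    return audits_counted(assignments) <= LIMIT_PER_MEMBER

# Example with made-up data: only the first entry counts toward the cap.
sample = [
    {"clause": "44AB(a)", "revised": False},
    {"clause": "44AB(e)", "revised": False},  # 44AD case: excluded
    {"clause": "44AB(a)", "revised": True},   # revised report: excluded
]
print(audits_counted(sample), within_limit(sample))  # -> 1 True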

Nvidia in the Crosshairs: China Just Escalated the AI Chip War

Yahoo, 18 minutes ago

Nvidia (NASDAQ:NVDA) just hit another speed bump in China. Days after the U.S. rolled back export restrictions on its H20 chip, designed specifically to comply with sanctions, Chinese regulators summoned Nvidia to address alleged serious security vulnerabilities. The Cyberspace Administration of China (CAC) raised concerns tied to U.S. lawmakers' comments suggesting these chips may carry location-tracking or shutdown features. The agency demanded internal documentation and clarification from Nvidia's local team. No official details were shared beyond that, but investors are watching closely. The stock gained 2.5% on Germany's Tradegate, supported in part by strong earnings from Microsoft and Meta, but this probe introduces fresh uncertainty.

The timing couldn't be more delicate. CEO Jensen Huang had just wrapped up a high-profile China visit, spotlighting local champions like DeepSeek and Tencent while calling China a global AI powerhouse. Meanwhile, U.S. officials had been framing the H20 green light as a goodwill gesture, hoping it would unlock better access to China's rare earths. But now, Chinese regulators are signaling they're not quite sold. Local chipmakers like SMIC and Cambricon popped more than 5% following the CAC's statement, suggesting markets see this as a tailwind for China's self-reliance push. As Forrester's Charlie Dai put it, H20 sales could now face delays, just as China doubles down on homegrown tech.

Underneath all this is a bigger chessboard. The U.S. had loosened chip export rules after trade talks in London, and in return, China allowed more outbound flow of rare-earth minerals. But the CAC's sudden scrutiny of the H20 chip, especially one already stripped down to meet U.S. compliance, feels like a move to keep pressure on in the trade negotiations. At the same time, domestic players like Huawei are rolling out rivals like the Ascend 910C, which are already being compared to Nvidia's chips for inference workloads. As trade officials sidestep direct comment, one thing is clear: Nvidia's foothold in China remains fragile, and investors will need to watch how this unfolds, one regulatory move at a time.

This article first appeared on GuruFocus.

Inside the Summit Where China Pitched Its AI Agenda to the World

WIRED, 21 minutes ago

Jul 31, 2025 11:04 AM

Behind closed doors, Chinese researchers are laying the groundwork for a new global AI agenda—without input from the US.

Three days after the Trump administration published its much-anticipated AI action plan, the Chinese government put out its own AI policy blueprint. Was the timing a coincidence? I doubt it.

China's 'Global AI Governance Action Plan' was released on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the largest annual AI event in China. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the festivities in Shanghai. Our WIRED colleague Will Knight was also on the scene.

The vibe at WAIC was the polar opposite of Trump's America-first, regulation-light vision for AI, Will tells me. In his opening speech, Chinese Premier Li Qiang made a sobering case for the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers, who gave technical talks highlighting urgent questions the Trump administration appears to be largely brushing off.

Zhou Bowen, leader of the Shanghai AI Lab, one of China's top AI research institutions, touted his team's work on AI safety at WAIC. He also suggested the government could play a role in monitoring commercial AI models for vulnerabilities.

In an interview with WIRED, Yi Zeng, a professor at the Chinese Academy of Sciences and one of the country's leading voices on AI, said that he hopes AI safety organizations from around the world find ways to collaborate. 'It would be best if the UK, US, China, Singapore, and other institutes come together,' he said.

The conference also included closed-door meetings about AI safety policy issues. Speaking after he attended one such confab, Paul Triolo, a partner at the advisory firm DGA-Albright Stonebridge Group, told WIRED that the discussions had been productive, despite the noticeable absence of American leadership. With the US out of the picture, 'a coalition of major AI safety players, co-led by China, Singapore, the UK, and the EU, will now drive efforts to construct guardrails around frontier AI model development,' Triolo told WIRED. He added that it wasn't just the US government that was missing: Of all the major US AI labs, only Elon Musk's xAI sent employees to attend the WAIC forum.

Many Western visitors were surprised to learn how much of the conversation about AI in China revolves around safety regulations. 'You could literally attend AI safety events nonstop in the last seven days. And that was not the case with some of the other global AI summits,' Brian Tse, founder of the Beijing-based AI safety research institute Concordia AI, told me. Earlier this week, Concordia AI hosted a day-long safety forum in Shanghai with famous AI researchers like Stuart Russell and Yoshua Bengio.

Switching Positions

Comparing China's AI blueprint with Trump's action plan, it appears the two countries have switched positions. When Chinese companies first began developing advanced AI models, many observers thought they would be held back by censorship requirements imposed by the government. Now, US leaders say they want to ensure homegrown AI models 'pursue objective truth,' an endeavor that, as my colleague Steven Levy wrote in last week's Backchannel newsletter, is 'a blatant exercise in top-down ideological bias.'
China's AI action plan, meanwhile, reads like a globalist manifesto: It recommends that the United Nations help lead international AI efforts and suggests governments have an important role to play in regulating the technology.

Although their governments are very different, when it comes to AI safety, people in China and the US are worried about many of the same things: model hallucinations, discrimination, existential risks, cybersecurity vulnerabilities, etc. Because the US and China are developing frontier AI models 'trained on the same architecture and using the same methods of scaling laws, the types of societal impact and the risks they pose are very, very similar,' says Tse. That also means academic research on AI safety is converging in the two countries, including in areas like scalable oversight (how humans can monitor AI models with other AI models) and the development of interoperable safety testing standards.

But Chinese and American leaders have demonstrated they have very different attitudes toward these issues. On one hand, the Trump administration recently tried and failed to put a 10-year moratorium on passing new state-level AI regulations. On the other hand, Chinese officials, including even Xi Jinping himself, are increasingly speaking out about the importance of putting guardrails on AI. Beijing has also been busy drafting domestic standards and rules for the technology, some of which are already in effect.

As Trump goes rogue with unorthodox and inconsistent policies, the Chinese government increasingly looks like the adult in the room. With its new AI action plan, Beijing is trying to seize the moment and send the world a message: If you want leadership on this world-changing innovation, look here.

Charm Offensive

I don't know how effective China's charm offensive will be in the end, but the global retreat of the US does feel like a once-in-a-century opportunity for Beijing to spread its influence, especially at a moment when every country is looking for role models to help them make sense of AI risks and the best ways to manage them.

But there's one thing I'm not sure about: How eager will China's domestic AI industry be to embrace this heightened focus on safety? While the Chinese government and academic circles have significantly ramped up their AI safety efforts, industry has so far seemed less enthusiastic—just like in the West. Chinese AI labs disclose less information about their AI safety efforts than their Western counterparts do, according to a recent report published by Concordia AI. Of the 13 frontier AI developers in China the report analyzed, only three produced details about safety assessments in their research publications.

Will told me that several tech entrepreneurs he spoke to at WAIC said they were worried about AI risks such as hallucination, model bias, and criminal misuse. But when it came to AGI, many seemed optimistic that the technology will have positive impacts on their life, and they were less concerned about things like job loss or existential risks. Privately, Will says, some entrepreneurs admitted that addressing existential risks isn't as important to them as figuring out how to scale, make money, and beat the competition. But the clear signal from the Chinese government is that companies should be encouraged to tackle AI safety risks, and I wouldn't be surprised if many startups in the country change their tune.
Triolo, of DGA-Albright Stonebridge Group, said he expected Chinese frontier research labs to begin publishing more cutting-edge safety work.

Some WAIC attendees see China's focus on open source AI as a key part of the picture. 'As Chinese AI companies increasingly open-source powerful AIs, their American counterparts are pressured to do the same,' Bo Peng, a researcher who created the open source large language model RWKV, told WIRED. Peng envisions a future where different nations—including ones that do not always agree—work together on AI. 'A competitive landscape of multiple powerful open-source AIs is in the best interest of AI safety and humanity's future,' he explained. 'Because different AIs naturally embody different values and will keep each other in check.'

This is an edition of Zeyi Yang and Louise Matsakis' Made in China newsletter. Read previous newsletters here.
