
UAE, Jordan lead Gaza aid airdrop operation with 4 other countries
As part of the 'Operation Chivalrous Knight 3' initiative, the two Arab countries dropped the relief aid alongside seven aircraft from France, Germany, Italy, and Spain.
It aims to deliver food and relief supplies to the most affected areas in the Strip. In a phone call, the UAE and Jordanian foreign ministers discussed how to enhance coordination in relief efforts.
Sheikh Abdullah bin Zayed Al Nahyan, Deputy Prime Minister and Minister of Foreign Affairs, talked to Ayman Safadi, Deputy Prime Minister and Minister of Foreign Affairs and Expatriates of the Hashemite Kingdom of Jordan, discussing the latest developments of the humanitarian situation in the Gaza Strip.
UAE's foreign minister commended the ongoing humanitarian endeavours undertaken by Jordan to support the Palestinian people in Gaza.
Related Articles


Gulf Business
UAE's new National Anti-Narcotics Authority: What you need to know
Image credit: WAM

In a major step to bolster the country's anti-narcotics efforts, UAE President Sheikh Mohamed bin Zayed Al Nahyan has issued a federal decree-law establishing the National Anti-Narcotics Authority (NANA). The new independent body, reporting directly to the UAE Cabinet, replaces the General Department of Anti-Narcotics at the Ministry of Interior.

Sheikh Zayed bin Hamad bin Hamdan Al Nahyan has been appointed Chairman of the newly formed Authority. The move aims to unify and strengthen federal and local efforts to combat drug-related crimes and ensure the safety and well-being of communities across the UAE.

Centralised strategy for drug control

The Authority will be responsible for designing and implementing policies, legislation, and strategies to fight narcotics. This includes coordinating with relevant agencies at both federal and local levels to track and dismantle smuggling and distribution networks. Its expanded mandate reflects the UAE's strategic vision to provide a robust legislative and operational environment for addressing drug-related threats.

By consolidating responsibilities under one federal entity, the government aims to ensure cohesive national policies, faster responses to emerging drug trends, and tighter law enforcement. Among its core responsibilities, NANA will work closely with judicial and security bodies to ensure offenders are prosecuted under the nation's laws. It will also oversee the development of legal frameworks and submit new regulations to the Cabinet for approval, ensuring alignment with international best practices.

Stronger controls at entry points

As part of its operational scope, the Authority will monitor land, sea, and air entry points in collaboration with national entities to prevent the entry or exit of narcotic substances. It will track and inspect individuals, goods, and vehicles with the aim of curbing illicit activity at the country's borders.
In addition, NANA will monitor suspicious activities and suspected trafficking operations nationwide, working with the concerned authorities to bolster intelligence capabilities and early-detection mechanisms. The Authority is also charged with regulating the circulation of chemical precursors used in drug manufacturing. It will propose licensing mechanisms and oversee trading, storage, and customs clearance procedures in coordination with relevant agencies. These measures aim to prevent the misuse of chemicals while ensuring lawful handling for legitimate purposes.

Unified data and national coordination

To enhance inter-agency cooperation, the Authority will establish and manage a centralised national database. The system will be accessible to all federal and local bodies involved in anti-narcotics operations and will facilitate real-time information sharing, coordinated responses, and improved decision-making. By enhancing the flow of intelligence and aligning strategic goals across the country, the UAE seeks to build a more resilient and responsive framework to confront drug-related threats.


Khaleej Times
Israeli ex-security chiefs urge Trump to help end Gaza war
More than 600 retired Israeli security officials, including former heads of intelligence agencies, have urged US President Donald Trump to pressure their own government to end the war in Gaza. "It is our professional judgement that Hamas no longer poses a strategic threat to Israel," the former officials wrote in an open letter shared with the media on Monday, calling on Trump to "steer" Prime Minister Benjamin Netanyahu's decisions.


The National
Incorrect Grok answers amid Gaza devastation show risks of blindly trusting AI
A spike in misinformation amid the dire situation in Gaza has highlighted how imperfect artificial intelligence systems are being used to perpetuate it.

Reaction to a recent social media post from US Senator Bernie Sanders, in which he shared a photo of an emaciated child in the besieged Palestinian enclave, shows just how fast AI tools can spur the spread of incorrect narratives. In the post on X, he accused Israeli Prime Minister Benjamin Netanyahu of lying by promoting the idea that there was "no starvation in Gaza".

A user asked Grok, X's AI chatbot, for more information on the origin of the images. "These images are from 2016, showing malnourished children in a hospital in Hodeidah, Yemen, amid the civil war there ... They do not depict current events in Gaza," Grok said.

Several other users were able to verify that the images were in fact recently taken in Gaza, but those voices were initially drowned out by hundreds who reposted Grok's incorrect answer. Proponents of Israel's continuing strategy in Gaza used the false information to perpetuate the narrative that the humanitarian crisis there was being exaggerated.

When some users told the chatbot that it was wrong, and explained why, Grok initially stood firm. "I'm committed to facts, not any agenda ... the images are from Yemen in 2016," it insisted. "Correcting their misuse in a Gaza context isn't lying – it's truth." Later, however, after metadata and sources confirmed that the photos had been taken in Gaza, Grok apologised.

Another recent incident involving Grok's confidently wrong answers about the situation in Gaza also led to the spread of falsehoods. Several images began circulating on social media purporting to show people in Egypt filling bottles with food and throwing them into the sea in the hope that they would reach Gaza.
While there were several videos showing similar efforts, many of the circulating photos were later determined to be fake, according to PolitiFact, a non-partisan independent fact-checking organisation.

This is not the first time Grok's answers have come under scrutiny. Last month, the chatbot began answering user prompts with offensive comments, including anti-Semitic responses and praise for Adolf Hitler.

High stakes and major consequences

AI chatbot enthusiasts are quick to point out that the technology is far from perfect and continues to learn. Grok and other chatbots include disclaimers warning users that they are prone to mistakes. In the fast-paced world of social media, however, those fine-print warnings are often forgotten, while the ramifications of misinformation grow substantially – most recently with regard to the Gaza war.

Israel's campaign in Gaza – which followed the 2023 attacks by Hamas-led fighters that resulted in the deaths of about 1,200 people and the capture of 240 hostages – has killed more than 60,200 people and injured about 147,000. The war has raged against a backdrop of technological development that is causing ample confusion.

"This chilling disconnect between reality and narratives online and in the media has increasingly become a feature of modern war," wrote Mahsa Alimardani and Sam Gregory in a recent analysis of AI and conflict for the Carnegie Endowment think tank. The experts pointed out that while several tools can be used to verify photos and video, in addition to flagging possible AI manipulation, broader efforts will be needed to prevent the spread of misinformation. Technology companies, they say, must "share the burden by embedding provenance responsibly, facilitating globally effective detection, flagging materially deceptive manipulated content, and doubling down on protecting users in high-risk regions".
AI's triumphs and continuing tribulations

Much of the recent misinformation and disinformation controversy related to AI and modern conflict can be traced back to the various AI tools and how they handle images.

From the earliest days of AI, particularly in the 1970s and 1980s, researchers sought to replicate the human brain – more specifically, the brain's neural networks, which consist of neurons and electrical signals whose connections strengthen over time, giving humans the ability to reason, remember and identify. As computer processors have become increasingly powerful and more economical, replicating those networks – often called "artificial neural networks" in the technology world – has become significantly easier. The internet, with its seemingly endless photos, videos and data, has also become a way for those neural networks to be constantly trained.

Some of the earliest uses of AI involved software that made it possible to identify images. This was demonstrated back in 2012 by Alex Krizhevsky, then a student at the University of Toronto, whose research was overseen by British-Canadian computer scientist Geoffrey Hinton, widely considered the godfather of AI. "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images," his paper on deep convolutional neural networks read. "Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging data set." He added, however, that the network's performance could degrade and pointed out that it was far from perfect.

AI has since improved by leaps and bounds, though there is still room for improvement. The latest AI chatbots, such as OpenAI's ChatGPT and Google's Gemini, have capitalised on powerful CPUs and GPUs, making it possible for just about anyone to upload an image and ask the chatbot to explain what it shows.
For example, some users have uploaded pictures of plants they cannot recognise and asked chatbots to identify them. When it works, it is helpful; when it doesn't, it is usually harmless. In the world of mass media, however, and more broadly the world of social media, when chatbots are wrong – as Grok was about the Gaza photos – the consequences can be wide-reaching.