Latest news with #AImodels
Yahoo
4 days ago
- Automotive
- Yahoo
China's Xpeng revises up 2025 planned hires to 8,000, founder says
BEIJING (Reuters) - Chinese electric vehicle maker Xpeng's planned hires for this year have been revised up to 8,000 from 6,000, founder He Xiaopeng said in an internal speech. With the added openings, Xpeng's workforce will be nearing 30,000 within this year, He said, according to a transcript of the speech seen by Reuters on Wednesday. The revision highlights ramped-up hiring in smart driving, with a large number of openings in fields such as large artificial intelligence models, according to the company.
Yahoo
5 days ago
- Climate
- Yahoo
Bryan Norcross: Watching for Florida flooding, then chance of development in Gulf
Updated at 9 a.m. ET, Monday, July 14, 2025

A disorganized disturbance off the east coast of Florida is forecast to bring heavy rain to the Florida Peninsula today into Wednesday. Flood Watches have already been issued for the metro areas in southeast Florida. Intense, slow-moving tropical downpours are likely with this system, which can cause local flooding in mostly flat Florida.

The system is starting out non-tropical but has already tapped into deep tropical moisture, which it will drag over the peninsula. Widespread areas of 3 to 5 inches of rainfall are forecast, with some locations receiving 7 inches or more.

On Wednesday, the disturbance should begin tracking across the northern Gulf. The National Hurricane Center is giving it a chance of developing into at least a tropical depression as it tracks in the general direction of Louisiana; the odds are still in the low category.

By Thursday or Friday, the center of the disturbance, if there is a center to track, should be near the north-central Gulf coast. It will still be carrying a mass of tropical moisture with it, so a long section of the coast and some inland sections will be impacted by heavy rain late in the week and into the weekend, if things play out as expected.

The consensus of the various computer forecasts, including the latest AI models, is that the system will be weak but broad, and contain lots of moisture. If the disturbance can organize to some degree over the Gulf, it is more likely to drag the moisture off the Florida Peninsula on Wednesday. If it stays broad and disorganized, however, the tropical moisture feed will continue to impact parts of the state. So the rain forecast for Florida on Wednesday is a bit up in the air. In any case, more typical summer weather should return.

There is not a good consensus on how quickly the system will move past the Alabama, Mississippi, and Louisiana coastal sections this weekend. As always, forecasts for poorly organized or just-developing systems carry more uncertainty than normal and are subject to change.

In Florida, it's important that everyone pay attention to flood alerts that will likely be issued by the National Weather Service over the next couple of days. Have your FOX Weather app installed and updated, or be sure you have a way to know about any warnings that are issued.

Otherwise, the rest of the Gulf, the Caribbean, and the Atlantic continue to be hostile to tropical development, so no concerns there for now. Long-range projections show tropical disturbances becoming more robust in the tropical Atlantic next week, but nothing looks like a


Forbes
07-07-2025
- Business
- Forbes
What Leaders Need To Know About Open-Source Vs Proprietary Models
As business leaders adopt generative artificial intelligence, they must decide whether to build their AI capabilities using open-source models or rely on proprietary, closed-source alternatives. Understanding the implications of this choice can be the difference between a sustainable competitive advantage and a strategic misstep.

But what exactly is 'open source'? According to the Open Source Initiative (OSI), for software to be considered open, it must offer users the freedom to use the software for any purpose, to study how it works, to modify it, and to share both the original and modified versions. When applied to AI, a truly open-source model includes the model architecture (the blueprint for how the AI processes data), the training data recipes (documenting how data was selected and used to train the model), and the weights (the numerical values representing the AI's learned knowledge). But very few AI models are truly open according to the OSI definition.

The Gradient of Openness

While fully open-source models provide complete transparency, few model developers want to publish their full source code, and even fewer are transparent about the data their models were trained on. Many so-called foundation models – the largest generative AI models – were trained on data whose copyrights may be fuzzy at best and blatantly infringed at worst.

More common are open-weight systems that offer public access to model weights without disclosing the full training data or architecture. This allows faster deployment and experimentation with fewer resources, though without full transparency it limits the ability to diagnose biases or improve accuracy.

Some companies adopt a staggered openness model. They may release previous versions of proprietary models once a successor is launched, providing limited insight into the architecture while restricting access to the most current innovations. Even here, training data is rarely disclosed.

Navigating the Gradient

Deciding whether an enterprise wants to leverage a proprietary model like GPT-4o, or some level of openness, such as Llama 3.3, depends, of course, on the use case. Many organizations end up using a mix of open and closed models.

The main decision is where the model will reside. For regulated industries like banking, where data can't leave the premises due to regulatory constraints, open-source models are the only viable option. Because proprietary model owners need to protect their intellectual property, those models can only be accessed remotely via an application programming interface (API). Open-source models can be deployed on a company's premises or in the cloud (a brief sketch of the two routes appears at the end of this article).

Both open and closed models can be fine-tuned to specific use cases, but open-source models offer more flexibility and allow deeper customization. Again, the data used in that fine-tuning need not leave the company's hardware. Fine-tuning proprietary models requires less expertise but must be done in the cloud.

Still, cost and latency can tip the scales in favor of proprietary AI. Proprietary providers often operate large-scale infrastructure designed to ensure fast response times and predictable performance, especially in consumer applications like chatbots or virtual assistants handling millions of queries per day. Open-source AI, although cheaper to operate in the long run, requires significant investment in infrastructure and expertise to achieve similar latency and uptime.

Navigating the regulatory landscape is another concern for companies deploying AI. The European Union's Artificial Intelligence Act sets stricter transparency and accountability standards for proprietary AI models. Yet proprietary providers often assume greater compliance responsibility, reducing the regulatory burden on businesses. In the U.S., the National Telecommunications and Information Administration (NTIA) is considering guidelines that assess AI openness through a risk-based lens.

Of course, a major consideration is security. By using a proprietary model, companies place their trust in the provider that the model is secure. But that opacity can hide vulnerabilities, leaving companies reliant on vendors to disclose and address threats. Open-source models, on the other hand, benefit from global security research communities that rapidly detect and patch vulnerabilities. Still, businesses often prefer the convenience of API access to proprietary models for rapid prototyping. And for consumer-facing applications, proprietary models are fast and easy to integrate into products.

Will Open-Source Overtake Proprietary Models?

An even larger issue looms over the future of closed and open source. As open models increase in performance, closing the gap with or even exceeding the best proprietary models, the financial viability of closed models and the companies that provide them remains uncertain. China is pursuing an aggressive open-source strategy, cutting the cost of its models to take market share from companies like OpenAI. By openly releasing research, code, and models, China hopes to make advanced AI accessible at a fraction of the cost of Western proprietary solutions.

Key Takeaways for Business Leaders

Remember Betamax, the proprietary video cassette recording format developed and tightly controlled by Japan's Sony in the 1970s. It lost to the more open VHS format for the same reason many people think closed AI models will eventually be eclipsed by open-source AI.

Leaders must define what they want to achieve with AI, whether it be efficiency, innovation, risk reduction, or compliance, and let these goals guide their model selection and deployment strategy. For example, they can leverage open-source communities for innovation and rapid prototyping, while relying on proprietary solutions for mission-critical, high-security applications. Collaborating with external partners and leveraging both open-source and proprietary models as appropriate will position organizations to innovate responsibly and remain competitive.

The key is for leaders to understand their unique operational needs, data sensitivities, and technical capabilities, and then choose accordingly. But choosing between open-source and proprietary AI models is less a binary decision than it is finding the optimal spot on a continuum from closed to fully open.
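To make the deployment difference concrete, here is a minimal Python sketch of the two routes the article contrasts: a proprietary model reached remotely over a vendor API, and an open-weights model loaded on hardware the company controls. The endpoint URL, response shape, environment variable, and model names are illustrative assumptions, not any specific provider's real interface.

```python
# Sketch of the two deployment routes discussed above.
# Endpoint, response shape, and model names are placeholders, not a real vendor API.
import os
import requests                       # pip install requests
from transformers import pipeline     # pip install transformers torch

PROMPT = "Summarize our Q3 risk report in three bullet points."

def call_proprietary_api(prompt: str) -> str:
    """Proprietary route: the model stays with the vendor; the prompt leaves
    your infrastructure over HTTPS and text comes back from their servers."""
    resp = requests.post(
        "https://api.example-llm-vendor.com/v1/chat",   # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['VENDOR_API_KEY']}"},  # hypothetical key
        json={"model": "vendor-large-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]  # assumed response shape

def call_open_weights_locally(prompt: str) -> str:
    """Open-weights route: weights are downloaded once and inference runs on
    hardware you control, so the prompt never leaves your premises."""
    # Substitute any open-weights model available to you.
    generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")
    return generator(prompt, max_new_tokens=128)[0]["generated_text"]

if __name__ == "__main__":
    # A regulated bank may be forced onto the local route; a consumer app
    # chasing latency and simplicity may prefer the hosted one.
    print(call_open_weights_locally(PROMPT))
```

The trade-off the article describes shows up directly in the code: the API route is two dozen lines and no infrastructure, while the local route requires provisioning hardware capable of serving the model at acceptable latency.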


Arab News
06-07-2025
- Business
- Arab News
Feedzai leads fight against financial fraud with AI-powered solutions
A dynamic AI-native company with a global presence is setting a new standard for securing financial transactions and making significant investments in the Middle East.

Financial fraud is becoming increasingly sophisticated. As criminals develop ever-more advanced techniques, many organizations find themselves limited by outdated, rules-based fraud detection systems that struggle to respond in real time. Traditional methods are proving inadequate against contemporary threats. The rise of deepfake scams, synthetic identities and elaborate financial 'mule' networks demands a more sophisticated approach. Thankfully, dynamic, AI-powered models that continuously adapt to emerging threats are revolutionizing fraud prevention. Feedzai, an AI-native risk management platform, is leading this transformation, helping financial institutions fight fraud while maintaining customer trust.

The changing landscape of fraud

Financial fraud happens in various ways. Account takeover occurs when cybercriminals gain control of accounts through phishing attacks or credential-stuffing techniques. Authorized push payment scams can take many forms, from romance scams to CEO identity fraud. Meanwhile, money laundering operations use synthetic identities and networks of mule accounts to move illicit funds undetected through the financial system. Each of these techniques requires different detection and mitigation strategies.

In the Middle East, financial services regulators are intensifying their focus on fraud prevention, creating new compliance requirements for banks. The region faces a particular challenge with the proliferation of financial mules: individuals who, knowingly or unknowingly, allow their accounts to be used for laundering money. Deepfake technology is another challenge, as it allows criminals to generate entirely new digital identities rather than recycling known fraudulent ones.

AI in fraud detection

Traditional fraud detection systems use static, predetermined rules to identify risky transactions and are typically siloed to individual channels. This means they are slow to adapt and limited in scope. In contrast, AI-driven solutions, such as those developed by Feedzai, use real-time behavioral analytics to detect anomalies on an omnichannel basis. The system dynamically assesses risk by evaluating transactions against hundreds of potential risk factors simultaneously. This analysis enables financial institutions to deliver responses that are appropriate to the nature and level of the risk. For example, low-risk transactions are allowed to proceed unimpeded while high-risk payments are blocked or subjected to additional identity verification procedures (a simple sketch of this kind of tiered decisioning appears at the end of this article).

Feedzai processes data from over $8 trillion of transactions annually, creating a large and robust dataset that is used to train and improve its AI models. The company spends 25 percent of its profits on research and development to stay at the cutting edge of fraud detection, spotting emerging threats before they become widespread problems.

Building trust with AI

Customer trust in financial institutions must be maintained. Feedzai has developed a comprehensive trust framework built on five foundational principles that guide its AI development and implementation:
- AI is used in a transparent manner, with traceable data and clear explanations of when and how it is used.
- Robust anti-fraud processes powered by AI are always in place.
- Outputs from the system are unbiased, ensuring people are treated fairly.
- The AI systems used are secure, and regularly tested, checked and updated to ensure quality.
- Interventions are appropriate to the type and level of risk, so that when risks are low, customers are not unduly inconvenienced, such as by having a transaction stopped or by being asked for extra identification.

Privacy is also important. Feedzai's newest innovation, Feedzai IQ, uses powerful data analytics, but its data processing uses only metadata. No personally identifiable data is processed, so consumers can be confident their privacy is being preserved. This helps support privacy regulations such as the GDPR and the requirements of Saudi Arabia's SDAIA.

Third-party data can also help identify financial fraud. The integration of the Demyst data orchestration platform into Feedzai enables financial institutions to access third-party data, such as network activity and behavioral insights, allowing them to convert raw information into actionable insights in real time.

All of this happens very rapidly. The normal time to set up a new method of detecting fraud using rules-based tools might be three months. With Feedzai, results based on the actionable insight generated from its global customer base can be provided within a day, which is especially beneficial to financial institutions that may not have mature fraud-labelled data. As fraud evolves, so do the defenses.

Fraud prevention in a digital world

The fight against financial fraud has entered a new era, where AI serves as the critical differentiator between vulnerable institutions and organizations capable of providing truly secure digital financial services. As fraudsters continue innovating, equally innovative defenses will be needed. Feedzai's platform demonstrates that AI, when properly implemented, can enhance the safety of financial transactions while preserving the exceptional customer experiences that drive business growth. In this evolving battle, AI-powered protection isn't just preferable; it's becoming indispensable.
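To illustrate the risk-proportionate responses described above (low-risk payments proceed, riskier ones trigger step-up verification or a block), here is a minimal Python sketch. The signals, weights, and thresholds are invented placeholders, not Feedzai's actual scoring logic, which per the article weighs hundreds of factors in real time.

```python
# Minimal sketch of risk-tiered transaction decisioning, in the spirit of the
# approach described above. Signals, weights, and thresholds are invented
# placeholders, not Feedzai's actual model.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    new_payee: bool            # first payment to this recipient?
    new_device: bool           # initiated from an unrecognized device?
    minutes_since_login: float

def risk_score(tx: Transaction) -> float:
    """Combine a few behavioral signals into a 0..1 score.
    A production system would weigh hundreds of signals with a learned model."""
    score = 0.0
    score += 0.4 if tx.new_payee else 0.0
    score += 0.3 if tx.new_device else 0.0
    score += 0.2 if tx.amount > 5_000 else 0.0
    score += 0.1 if tx.minutes_since_login < 2 else 0.0
    return min(score, 1.0)

def decide(tx: Transaction) -> str:
    """Map the score to a proportionate intervention."""
    score = risk_score(tx)
    if score < 0.3:
        return "allow"            # low risk: proceed unimpeded
    if score < 0.7:
        return "step_up_auth"     # medium risk: ask for extra identity verification
    return "block"                # high risk: stop the payment for review

if __name__ == "__main__":
    tx = Transaction(amount=9_500, new_payee=True, new_device=True, minutes_since_login=1)
    print(decide(tx))  # -> "block"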
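And to illustrate the metadata-only processing attributed to Feedzai IQ, a short sketch that strips personally identifiable fields from a transaction record and pseudonymizes the account reference before analysis. Field names, values, and the hashing choice are assumptions for the example, not Feedzai's implementation.

```python
# Sketch of metadata-only feature extraction: analytics run on transaction
# metadata while PII stays inside the institution. Field names are invented.
import hashlib

RAW_TRANSACTION = {
    "customer_name": "Jane Doe",               # PII: never leaves the institution
    "iban": "SA00EXAMPLE0000000000000",        # PII (dummy value): never leaves
    "account_id": "acct-829301",
    "amount": 1250.00,
    "currency": "SAR",
    "channel": "mobile_app",
    "timestamp": "2025-07-06T14:02:11Z",
}

PII_FIELDS = {"customer_name", "iban"}

def to_metadata(tx: dict) -> dict:
    """Keep only non-identifying fields; replace the account reference with a
    one-way hash (pseudonymization) so patterns can be linked across
    transactions without exposing the raw identifier."""
    meta = {k: v for k, v in tx.items() if k not in PII_FIELDS and k != "account_id"}
    meta["account_ref"] = hashlib.sha256(tx["account_id"].encode()).hexdigest()[:16]
    return meta

if __name__ == "__main__":
    print(to_metadata(RAW_TRANSACTION))
```

Note that hashing is pseudonymization rather than full anonymization; a real deployment would pair it with the access controls and regulatory safeguards (GDPR, SDAIA) the article mentions.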