
Renault Seeks India Partnership With JSW After Nissan's Exit
The company has held preliminary talks with Indian billionaire Sajjan Jindal's JSW Group for a potential joint venture, the people said, asking not to be identified because the discussions are private.
Related Articles
Charting the Global Economy: US Jobs Data Eases Pressure on Fed
(Bloomberg) -- Fresh US jobs figures took pressure off the Federal Reserve to consider an interest-rate cut later this month, likely leaving the central bank on hold at least until the fall. While employers added more jobs in June than forecast and the unemployment rate ticked lower, growth in private payrolls weakened. Elsewhere, the manufacturing slowdown in Asia deepened: survey data showed purchasing managers indexes for Taiwan, Indonesia and Vietnam firmly in contraction territory. Here are some of the charts that appeared on Bloomberg this week on the latest developments in the global economy, markets and geopolitics:
US
US job growth exceeded expectations in June as an unusual surge in public education employment masked a slowdown in hiring across the rest of the economy. Private payrolls rose the least since October, largely reflecting hiring in health care. The jobless rate declined to 4.1%, indicating employers remain reluctant to lay off workers.
A buildup of unsold houses sitting on the market for weeks is becoming a new reality in once-booming housing areas across the Sun Belt. Real estate agents in the South and Southwest say they're seeing more people list homes, giving up on hopes that mortgage rates will drop anytime soon. In Florida, homeowners are fleeing soaring insurance costs, and in Colorado, investors are culling rental properties.
Europe
Euro-area inflation settled at the European Central Bank's target in June, strengthening arguments to press pause on a year-long campaign of interest-rate cuts. A stronger euro and lower energy costs are helping keep price pressures in check, as is lackluster expansion in the region's 20-nation economy.
The UK economy grew in the first quarter by the most in a year as Britons spent more and saved less before the Labour government's tax hikes and extra US tariffs came into effect. The outlook has darkened since the start of April amid a sharp drop in employment, weak retail sales and plunging exports to the US.
Swedish retail sales fell the most in more than three decades in May, continuing a run of disappointing data and increasing pressure on the country's central bank to lower rates again. The slump compounds recent below-forecast readings for Sweden, including a surprise contraction in first-quarter economic output and a rise in the unemployment rate to 9% in May.
Asia
The slowdown in Asia's manufacturing activity deepened further in June, a warning sign for the region's growth prospects as tariffs on shipments to the US are poised to increase next week. Export-reliant economies including Taiwan and Vietnam saw their purchasing managers indexes deteriorate further, with factories reporting a continued decline in new orders, output and staffing as the trade war saps demand.
Japan's annual wage negotiations concluded with the largest pay increase in 34 years, an outcome that supports the central bank's view that a cycle of higher wages and prices is emerging. Workers at 5,162 companies affiliated with Rengo, the nation's largest union federation, secured an average wage increase of 5.25%, according to the final update of pay deals announced by the union group.
US President Donald Trump floated the idea of keeping 25% tariffs on Japan's cars as talks between the two nations continued just before a slew of higher duties are set to kick in if a trade deal isn't reached.
Emerging Markets
Cargo thefts in Mexico topped 24,000 in 2024, up about 16%, data from transportation risk consultancy Overhaul show. That trails the US and Europe in total incidents. But in loss-ratio terms, which compare the number of thefts to economic activity, Mexico is the worst in the world.
World
Poland's central bank unexpectedly cut interest rates after a one-month pause and said inflation is likely to ease to within its target in the coming months. A day after the Wednesday move, central bank Governor Adam Glapinski said the reduction was not the beginning of a cycle of monetary easing, even as he held out the possibility of another move in September. Tanzania also cut, while Ethiopia and the Bank of Central African States kept borrowing costs on hold.
--With assistance from Irina Anghel, Maya Averbuch, Agnieszka Barteczko, Charlie Duxbury, Claire Jiao, Sakura Murakami, Andrea Navarro, Mark Niquette, Jana Randow, Michael Sasso, Zoe Schneeweiss, Erica Yokoyama, Craig Stirling and Jeremy Diamond.
©2025 Bloomberg L.P.
Skywork-Reward-V2: Leading the New Milestone for Open-Source Reward Models
SINGAPORE, July 5, 2025 /PRNewswire/ -- In September 2024, Skywork first open-sourced the Skywork-Reward series models and related datasets. Over the past nine months, these models and data have been widely adopted by the open-source community for research and practice, with over 750,000 cumulative downloads on the HuggingFace platform, helping multiple frontier models achieve excellent results in authoritative evaluations such as RewardBench.
On July 4, 2025, Skywork open-sourced its second-generation reward models, the Skywork-Reward-V2 series, comprising 8 reward models built on base models of varying sizes, with parameters ranging from 600 million to 8 billion. These models have achieved top rankings across seven major mainstream reward model evaluation benchmarks.
Skywork-Reward-V2 Download Links
HuggingFace:
GitHub:
Technical Report:
Reward models play a crucial role in the Reinforcement Learning from Human Feedback (RLHF) process. In developing this new generation of reward models, the team constructed a hybrid dataset called Skywork-SynPref-40M, containing a total of 40 million preference pairs. To achieve large-scale, efficient data screening and filtering, Skywork designed a two-stage human-machine collaborative process that combines high-quality human annotation with the scalable processing capabilities of models: humans provide rigorously verified high-quality annotations, while Large Language Models (LLMs) automatically organize and expand the data based on human guidance.
Building on this high-quality hybrid preference data, the team developed the Skywork-Reward-V2 series, which demonstrates broad applicability and excellent performance across multiple capability dimensions, including general alignment with human preferences, objective correctness, safety, resistance to style bias, and best-of-N scaling capability.
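The release does not include training details, but the role a reward model plays in RLHF can be illustrated with the standard pairwise (Bradley-Terry) preference objective commonly used to fit reward models to preference pairs. This is a generic sketch, not Skywork's actual implementation:

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected).
    The loss shrinks as the reward model scores the human-preferred
    response above the rejected one."""
    margin = r_chosen - r_rejected
    # Numerically stable log-sigmoid in both branches.
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# A correctly ordered pair (chosen scored higher) yields a small loss;
# a mis-ordered pair yields a large one.
good = bradley_terry_loss(2.0, -1.0)   # margin +3
bad = bradley_terry_loss(-1.0, 2.0)    # margin -3
```

Training minimizes this loss over many preference pairs, which is why the scale and quality of the preference dataset matter so much.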
Experimental validation shows that the series achieved the best performance on seven mainstream reward model evaluation benchmarks.
01 Skywork-SynPref-40M: Human-Machine Collaboration for Million-Scale Human Preference Data Screening
Even the most advanced open-source reward models still perform inadequately on most mainstream evaluation benchmarks. They fail to capture the subtle, complex characteristics of human preferences, particularly when facing multi-dimensional, multi-level feedback. Many reward models also excel on specific benchmark tasks but struggle to transfer to new tasks or scenarios, exhibiting clear overfitting. Although existing research has attempted to improve performance by optimizing objective functions, improving model architectures, and, more recently, through Generative Reward Models, the overall effectiveness remains limited.
The team believes the current fragility of reward models mainly stems from the limitations of existing preference datasets, which often have limited coverage, mechanical label-generation methods, or a lack of rigorous quality control. In developing the new generation of reward models, they therefore not only continued the first generation's data-optimization practices but also introduced more diverse and larger-scale real human preference data, striving to improve data scale while maintaining data quality.
Consequently, Skywork proposes Skywork-SynPref-40M, the largest preference hybrid dataset to date, containing 40 million preference sample pairs. Its core innovation is a "human-machine collaboration, two-stage iteration" data selection pipeline.
Stage 1: Human-Guided Small-Scale High-Quality Preference Construction
The team first constructed an unverified initial preference pool and used Large Language Models (LLMs) to generate preference-related auxiliary attributes such as task type, objectivity, and controversy.
Based on this, human annotators followed a strict verification protocol, using external tools and advanced LLMs to review part of the data in detail, ultimately constructing a small-scale but high-quality "gold standard" dataset as the basis for subsequent data generation and model evaluation. The team then used the preference labels from the gold-standard data as guidance, combined with large-scale LLM generation of high-quality "silver standard" data, to expand the data volume.
The team also conducted multiple rounds of iterative optimization: in each round, they trained reward models and identified model weaknesses from their performance on the gold-standard data; they then retrieved similar samples and used multi-model consensus mechanisms for automatic annotation to further expand and enhance the silver-standard data. This human-machine collaborative closed loop iterates continuously, steadily improving the reward model's understanding and discrimination of preferences.
Stage 2: Fully Automated Large-Scale Preference Data Expansion
After obtaining preliminary high-quality models, the second stage turns to automated large-scale data expansion. This stage no longer relies on manual review; instead, trained reward models perform consistency filtering:
- If a sample's label is inconsistent with the current best model's prediction, or if the model's confidence is low, LLMs are called to automatically re-annotate it;
- If the sample's label is consistent with the prediction of the "gold model" (a model trained only on human data) and is supported by the current model or an LLM, it passes screening directly.
Through this mechanism, the team screened 26 million selected samples from the original 40 million, achieving a good balance between preference data scale and quality while greatly reducing the human annotation burden.
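The Stage 2 filtering logic described above can be sketched as follows. This is a paraphrase of the release's description, not Skywork's code; the model callables, the LLM annotator, and the confidence threshold are hypothetical stand-ins:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    prompt: str
    chosen: str
    rejected: str
    label: int  # 1 if `chosen` is preferred, 0 otherwise

def consistency_filter(
    samples: list[Sample],
    current_model: Callable[[Sample], tuple[int, float]],  # -> (predicted label, confidence)
    gold_model: Callable[[Sample], int],   # model trained on human data only
    llm_annotate: Callable[[Sample], int], # LLM re-annotation (stand-in)
    conf_threshold: float = 0.8,           # assumed value, not from the release
) -> list[Sample]:
    kept = []
    for s in samples:
        pred, conf = current_model(s)
        if s.label != pred or conf < conf_threshold:
            # Inconsistent or low-confidence: re-annotate with an LLM.
            s.label = llm_annotate(s)
            kept.append(s)
        elif s.label == gold_model(s):
            # Agrees with the gold model and the current model: pass directly.
            kept.append(s)
    return kept
```

The key design point is that no human review happens in this loop: agreement between the gold model, the current model, and (where needed) an LLM substitutes for manual checking, which is what lets the pipeline scale from 40 million raw pairs down to 26 million curated ones.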
02 Skywork-Reward-V2: Matching Large-Model Performance at Small Model Sizes
Compared to the previous-generation Skywork-Reward, the newly released Skywork-Reward-V2 series provides 8 reward models trained on Qwen3 and LLaMA3 series base models, with parameter scales from 600 million to 8 billion. On seven mainstream reward model evaluation benchmarks, including RewardBench v1/v2, PPE Preference & Correctness, RMB, RM-Bench, and JudgeBench, the Skywork-Reward-V2 series comprehensively achieved current state-of-the-art (SOTA) levels.
Compensating for Model Scale Limitations with Data Quality and Richness
Even the smallest model, Skywork-Reward-V2-Qwen3-0.6B, achieves overall performance nearly matching the previous generation's strongest model, Skywork-Reward-Gemma-2-27B-v0.2, on average. The largest model, Skywork-Reward-V2-Llama-3.1-8B, led across all mainstream benchmark tests, becoming the best-performing open-source reward model overall.
Broad Coverage of Multi-Dimensional Human Preference Capabilities
Skywork-Reward-V2 also achieved leading results in multiple advanced capability evaluations, including Best-of-N (BoN) tasks, bias-resistance testing (RM-Bench), complex instruction understanding, and truthfulness judgment (RewardBench v2), demonstrating excellent generalization and practicality.
Highly Scalable Data Screening Significantly Improves Reward Model Performance
Beyond strong evaluation results, the team found that in the "human-machine collaboration, two-stage iteration" data construction process, carefully screened and filtered preference data could continuously and effectively improve reward models' overall performance across multiple iterative training rounds, with especially remarkable gains from the second stage's fully automated data expansion.
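Best-of-N scaling, one of the capabilities evaluated above, simply means generating N candidate responses and keeping the one the reward model scores highest. A generic sketch (the scoring function here is a placeholder, not the Skywork model API):

```python
from typing import Callable

def best_of_n(
    prompt: str,
    candidates: list[str],
    reward_fn: Callable[[str, str], float],  # placeholder scorer, not the Skywork API
) -> str:
    """Return the candidate the reward model ranks highest for the prompt."""
    return max(candidates, key=lambda resp: reward_fn(prompt, resp))

# Toy scorer that prefers longer answers, for illustration only.
pick = best_of_n("q", ["a", "bbb", "cc"], lambda p, r: len(r))
```

A reward model with good BoN scaling keeps improving the selected response as N grows, rather than rewarding superficial features; that is why BoN is treated as a distinct capability in the benchmarks.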
In contrast, blindly expanding raw data not only fails to improve initial performance but may introduce noise and negative effects. To further validate the critical role of data quality, the team ran experiments on a 16-million-sample subset from an early version of the data. The results showed that an 8B-scale model trained on only 1.8% (about 290,000) of the high-quality data already exceeded the performance of current 70B-level SOTA reward models, confirming that the Skywork-SynPref dataset leads not only in scale but also in data quality.
03 Welcoming a New Milestone for Open-Source Reward Models: Helping Build Future AI Infrastructure
In this research on the second-generation reward model Skywork-Reward-V2, the team proposed Skywork-SynPref-40M, a hybrid dataset containing 40 million preference pairs (26 million of them carefully screened), and Skywork-Reward-V2, a series of eight reward models with state-of-the-art performance designed for broad task applicability.
The team believes this work, and the continued iteration of reward models, will help advance the development of open-source reward models and more broadly promote progress in Reinforcement Learning from Human Feedback (RLHF) research, an important step forward for the field that can further accelerate the prosperity of the open-source community. The Skywork-Reward-V2 series focuses on research into scaling preference data; going forward, the team's research scope will gradually expand to areas not yet fully explored, such as alternative training techniques and modeling objectives.
Meanwhile, reward models and reward-shaping mechanisms have become core components of today's large-scale language model training pipelines. They apply not only to RLHF based on human preference learning and behavior guidance, but also to RLVR for mathematics, programming, and general reasoning tasks, as well as to agent-based learning scenarios. The team therefore envisions that reward models, or more broadly, unified reward systems, are poised to form the core of AI infrastructure in the future: no longer merely evaluators of behavior or correctness, but a "compass" for intelligent systems navigating complex environments, helping them align with human values and continuously evolve toward more meaningful goals.
Additionally, Skywork released the world's first deep research AI workspace agents in May, which you can experience by visiting:
Media Contact
Company Name: Skywork AI
Contact Person: Peter Tian
Email: peter@
Address: 2 Science Park Drive
Country: Singapore
Website:
View original content to download multimedia:
SOURCE Skywork AI pte ltd
Las Vegas Sands Faces Macau Market Challenges Despite Attractive Valuation, Says JPMorgan
Las Vegas Sands Corp. (NYSE:LVS) ranks among the best cyclical stocks to buy now. On June 23, JPMorgan initiated coverage of Las Vegas Sands Corp. (NYSE:LVS) with a Neutral rating and a price target of $47. At nine times the casino operator's projected 2026 enterprise value-to-EBITDA, the investment bank's year-end 2026 price target represents a substantial discount to Las Vegas Sands' historical average. Although the 4x discount to the company's historical average seems "enticing," JPMorgan said it remains apprehensive because of Las Vegas Sands' poor performance in the already troubled Macau market. If Las Vegas Sands Corp. (NYSE:LVS) regains market share in Macau, or if industry gross gaming revenue picks up speed again, JPMorgan said it might take a more optimistic view of the company.
Las Vegas Sands Corp. (NYSE:LVS) is a casino operator with a primary focus on the Macau market. The company primarily targets the Asian market with its five casinos in Macau and its Marina Bay Sands resort in Singapore.
While we acknowledge the potential of LVS as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.
Read More: and
Disclosure: None.