
Insight with Haslinda Amin 7/1/2025
Insight with Haslinda Amin is a daily news program featuring in-depth, high-profile interviews and analysis that give viewers the complete picture on the stories that matter. The show features prominent leaders spanning the worlds of business, finance, politics, and culture. (Source: Bloomberg)

Related Articles
Yahoo
44 minutes ago
Skywork-Reward-V2: Leading the New Milestone for Open-Source Reward Models
SINGAPORE, July 5, 2025 /PRNewswire/ -- In September 2024, Skywork first open-sourced the Skywork-Reward series models and related datasets. Over the past nine months, these models and data have been widely adopted by the open-source community for research and practice, with over 750,000 cumulative downloads on the HuggingFace platform, helping multiple frontier models achieve excellent results in authoritative evaluations such as RewardBench.

On July 4, 2025, Skywork open-sourced its second-generation reward models: the Skywork-Reward-V2 series, comprising eight reward models built on different base models of varying sizes, with parameters ranging from 600 million to 8 billion. These models achieved top rankings across seven major mainstream reward model evaluation benchmarks.

Skywork-Reward-V2 Download Links
HuggingFace:
GitHub:
Technical Report:

Reward models play a crucial role in the Reinforcement Learning from Human Feedback (RLHF) process. In developing this new generation of reward models, the team constructed a hybrid dataset called Skywork-SynPref-40M, containing a total of 40 million preference pairs. To achieve large-scale, efficient data screening and filtering, Skywork designed a two-stage human-machine collaborative process that combines high-quality human annotation with the scalable processing capabilities of models: humans provide rigorously verified high-quality annotations, while Large Language Models (LLMs) automatically organize and expand the data based on human guidance.

Built on this high-quality hybrid preference data, the Skywork-Reward-V2 series demonstrates broad applicability and strong performance across multiple capability dimensions, including general alignment with human preferences, objective correctness, safety, resistance to style bias, and best-of-N scaling. Experimental validation shows that the series achieved the best performance on seven mainstream reward model evaluation benchmarks.

01 Skywork-SynPref-40M: Human-Machine Collaboration for Million-Scale Human Preference Data Screening

Even the most advanced open-source reward models still perform inadequately on most mainstream evaluation benchmarks. They fail to capture the subtle, complex characteristics of human preferences, particularly when facing multi-dimensional, multi-level feedback. Many reward models also excel on specific benchmark tasks but struggle to transfer to new tasks or scenarios, exhibiting clear overfitting. Although existing research has attempted to improve performance by optimizing objective functions, improving model architectures, and, more recently, through Generative Reward Models, the overall effectiveness remains limited.

The team believes the current fragility of reward models mainly stems from the limitations of existing preference datasets, which often have limited coverage, mechanical label generation, or lax quality control. In developing the new generation of reward models, Skywork therefore not only carried over the first generation's experience in data optimization but also introduced more diverse, larger-scale real human preference data, aiming to grow data scale while maintaining quality. The result is Skywork-SynPref-40M, the largest preference hybrid dataset to date, containing 40 million preference sample pairs.
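For context on why preference-pair quality matters so much: reward models of this kind are typically trained on pairs of chosen/rejected responses with a Bradley-Terry-style pairwise loss, so label noise feeds directly into the reward signal. The article does not include Skywork's training code, so the following is only a minimal generic sketch of that standard objective (all names are illustrative):

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(reward_model, chosen_ids, rejected_ids):
    """Standard pairwise preference loss: push r(chosen) above r(rejected).

    `reward_model` is assumed to map token ids to one scalar reward
    per sequence; a mislabeled pair pushes the model the wrong way.
    """
    r_chosen = reward_model(chosen_ids)      # shape: (batch,)
    r_rejected = reward_model(rejected_ids)  # shape: (batch,)
    # -log sigmoid(r_chosen - r_rejected) is minimized when the model
    # assigns a higher score to the human-preferred response.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```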
The dataset's core innovation is a "human-machine collaboration, two-stage iteration" data selection pipeline.

Stage 1: Human-Guided Small-Scale High-Quality Preference Construction

The team first constructed an unverified initial preference pool and used Large Language Models (LLMs) to generate preference-related auxiliary attributes such as task type, objectivity, and controversy. Based on these, human annotators followed a strict verification protocol, using external tools and advanced LLMs to review part of the data in detail, ultimately producing a small but high-quality "gold standard" dataset that serves as the basis for subsequent data generation and model evaluation.

Using the preference labels in the gold-standard data as guidance, the team then combined them with large-scale LLM generation of high-quality "silver standard" data, expanding the data volume. The team also ran multiple rounds of iterative optimization: in each round, it trained reward models and identified their weaknesses from performance on the gold-standard data, then retrieved similar samples and used a multi-model consensus mechanism to annotate them automatically, further expanding and strengthening the silver-standard data. This human-machine collaborative closed loop iterates continuously, steadily improving the reward model's understanding and discrimination of preferences.

Stage 2: Fully Automated Large-Scale Preference Data Expansion

After obtaining preliminary high-quality models, the second stage turns to automated large-scale data expansion. This stage no longer relies on manual review; instead, trained reward models perform consistency filtering:

- If a sample's label is inconsistent with the current best model's prediction, or the model's confidence is low, LLMs are called to re-annotate it automatically.
- If a sample's label is consistent with the prediction of the "gold model" (a model trained only on human data) and is supported by the current model or an LLM, it passes screening directly.

Through this mechanism (sketched below), the team screened 26 million selected samples from the original 40 million, achieving a good balance between preference data scale and quality while greatly reducing the human annotation burden.
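A minimal sketch of the Stage 2 consistency-filtering rules as described above. The confidence threshold, model interfaces, and the LLM re-annotation call are illustrative assumptions, not Skywork's published pipeline code:

```python
def consistency_filter(sample, gold_model, current_model, llm_reannotate,
                       confidence_threshold=0.7):
    """Keep, re-label, or drop a preference pair per the two rules above.

    All interfaces are hypothetical: predict() is assumed to return a
    (predicted_label, confidence) pair for a preference sample.
    """
    current_pred, confidence = current_model.predict(sample)

    # Rule 1: disagreement with the current best model, or low
    # confidence, triggers automatic LLM re-annotation.
    if current_pred != sample.label or confidence < confidence_threshold:
        sample.label = llm_reannotate(sample)
        return sample

    # Rule 2: agreement with the gold model (trained only on human
    # data), plus support from the current model, passes directly.
    gold_pred, _ = gold_model.predict(sample)
    if gold_pred == sample.label:
        return sample

    return None  # otherwise drop the sample
```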
02 Skywork-Reward-V2: Matching Large-Model Performance at Small Model Sizes

Compared with the previous generation Skywork-Reward, the newly released Skywork-Reward-V2 series provides eight reward models trained on Qwen3 and LLaMA3 series base models, with parameter scales from 600 million to 8 billion. On seven mainstream reward model evaluation benchmarks, including RewardBench v1/v2, PPE Preference & Correctness, RMB, RM-Bench, and JudgeBench, the Skywork-Reward-V2 series achieved current state-of-the-art (SOTA) results across the board.

Compensating for Model Scale Limitations with Data Quality and Richness

Even the smallest model, Skywork-Reward-V2-Qwen3-0.6B, nearly matches the average overall performance of the previous generation's strongest model, Skywork-Reward-Gemma-2-27B-v0.2. The largest model, Skywork-Reward-V2-Llama-3.1-8B, outperformed across all mainstream benchmark tests, making it the best-performing open-source reward model overall to date.

Broad Coverage of Multi-Dimensional Human Preference Capabilities

Skywork-Reward-V2 also achieved leading results in several advanced capability evaluations, including Best-of-N (BoN) tasks, bias resistance testing (RM-Bench), complex instruction understanding, and truthfulness judgment (RewardBench v2), demonstrating strong generalization and practicality.

A Highly Scalable Data Screening Process Significantly Improves Reward Model Performance

Beyond benchmark performance, the team found that in the "human-machine collaboration, two-stage iteration" data construction process, carefully screened and filtered preference data continuously and effectively improved reward models' overall performance across multiple training iterations, with especially notable gains during the second stage's fully automated data expansion. By contrast, blindly expanding raw data not only failed to improve initial performance but risked introducing noise and negative effects.

To further validate the critical role of data quality, the team ran experiments on a 16-million-sample subset from an early version of the data. An 8B-scale model trained on only 1.8% (about 290,000 samples) of that high-quality data already exceeded the performance of current 70B-scale SOTA reward models. This result again confirms that Skywork-SynPref leads not only in scale but also in data quality.

03 A New Milestone for Open-Source Reward Models: Helping Build Future AI Infrastructure

In this work on the second-generation reward models, the team proposed Skywork-SynPref-40M, a hybrid dataset of 40 million preference pairs (26 million of them carefully screened), and Skywork-Reward-V2, a series of eight state-of-the-art reward models designed for broad task applicability. The team believes this work, and the continued iteration of reward models, will help advance open-source reward models and, more broadly, Reinforcement Learning from Human Feedback (RLHF) research, an important step forward for the field that can further accelerate the open-source community.

The Skywork-Reward-V2 series focuses on scaling preference data. In the future, the team's research will gradually expand to areas not yet fully explored, such as alternative training techniques and modeling objectives. Meanwhile, reward models and reward-shaping mechanisms have become core components of today's large-scale language model training pipelines, applicable not only to RLHF based on human preference learning and behavior guidance, but also to Reinforcement Learning with Verifiable Rewards (RLVR) for mathematics, programming, and general reasoning tasks, as well as agent-based learning scenarios.

The team therefore envisions that reward models, or more broadly, unified reward systems, are poised to form the core of future AI infrastructure. They will no longer merely serve as evaluators of behavior or correctness, but will become the "compass" for intelligent systems navigating complex environments, helping them align with human values and continuously evolve toward more meaningful goals.
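As a closing illustration of the Best-of-N usage pattern referenced above, the sketch below scores N candidate responses with a reward model and keeps the highest-scoring one. The HuggingFace repo id and the single-logit sequence-classification interface are assumptions for illustration, not confirmed by this article; check the official model card for actual usage:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id: the article names the model but not its exact HF path.
MODEL_ID = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Return the candidate response the reward model scores highest."""
    scores = []
    for response in candidates:
        # Reward models are commonly packaged as sequence classifiers
        # that emit one scalar logit for a chat-formatted conversation.
        messages = [{"role": "user", "content": prompt},
                    {"role": "assistant", "content": response}]
        input_ids = tokenizer.apply_chat_template(
            messages, tokenize=True, return_tensors="pt")
        with torch.no_grad():
            scores.append(model(input_ids).logits[0][0].item())
    return candidates[scores.index(max(scores))]
```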
Additionally, Skywork released the world's first deep research AI workspace agents in May, which you can experience by visiting:

Media Contact
Company Name: Skywork AI
Contact Person: Peter Tian
Email: peter@
Address: 2 Science Park Drive
Country: Singapore
Website:

SOURCE Skywork AI pte ltd


Fast Company
an hour ago
How AI is transforming corporate finance
The role of the CFO is evolving—and fast. In today's volatile business environment, finance leaders are navigating everything from unpredictable tariffs to tightening regulations and rising geopolitical tensions. The latest shuffle in global trade policy is just another reminder that agility is no longer optional—it's a necessity.

According to Pigment's latest CFO survey, most companies missed their financial targets last year. This isn't just a sobering statistic—it's a clear wake-up call. In today's volatile environment, businesses can no longer afford to wait and react; they must anticipate and move faster than the market to stay ahead. Finance leaders need tools that not only keep pace with a rapidly shifting global economy but also enable proactive scenario planning. Artificial intelligence has emerged as the most powerful tool to meet this challenge—helping businesses pivot with the speed and agility that today's business landscape demands.

AI is ushering in a new era of smarter, faster, and more strategic decision-making in the office of the CFO. Finance leaders must now embrace AI not just to boost insights and productivity, but to drive more transformative, strategic outcomes. Teams are leveraging AI to access data faster, forecast more accurately, and collaborate seamlessly across the organization—often through simple natural language prompts. But the next evolution is underway: autonomous AI agents. These systems don't wait for prompts; they operate continuously in the background, proactively handling complex tasks with minimal human intervention. From real-time forecasting and dynamic scenario planning to risk management and anomaly detection, AI agents will become essential tools in the finance function. The right investments today won't just streamline operations—they will fundamentally redefine how finance teams drive value, resilience, and competitive advantage for the business.

The Rise of Finance AI Agents

The latest tariff developments and world trade saga are causing finance leaders and their institutions plenty of headaches. Trade policy is notoriously complex for businesses to navigate. CFOs must assess not only the downstream impact of specific regulations on functions like the supply chain but also how their business may be affected by the wider impact on regional and global economies.

Fortunately for CFOs, there is a silver lining. The introduction of AI agents for finance teams has opened new doors to autonomous planning, real-time insights, and more proactive risk mitigation. AI agents can do more than streamline processes like reconciliation and financial reporting—they can work independently and proactively as an extension of the team to help CFOs stay one step ahead of today's fast-moving business environment.

Imagine a forecasting model that not only reacts to past trends but also continuously learns from new data, anticipates market shifts, and updates projections in real time. AI agents can simulate the financial impact of global events—from supply chain disruptions to new regulatory policies—and run thousands of scenarios to understand how these could affect the business well before the numbers show up on the balance sheet (a simple illustration follows below). This helps CFOs steer their businesses toward the best course of action.

AI agents are poised to be a game-changing technology for CFOs and finance teams—but only if those teams are ready to embrace the change.
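As a toy illustration of the scenario-simulation idea described above (the tariff shock, the distributions, and all parameters here are invented for demonstration, not drawn from any real model or product):

```python
import random

def simulate_quarterly_margin(n_scenarios=10_000, base_revenue=100.0,
                              base_cost=80.0, tariff_shock=0.05):
    """Monte Carlo sketch: distribution of gross margin under an
    assumed tariff shock that raises input costs ~5% on average."""
    margins = []
    for _ in range(n_scenarios):
        revenue = base_revenue * random.gauss(1.00, 0.04)       # demand noise
        cost = base_cost * random.gauss(1 + tariff_shock, 0.03)  # cost shock
        margins.append((revenue - cost) / revenue)
    margins.sort()
    # Report the median and a pessimistic 5th-percentile outcome.
    return margins[n_scenarios // 2], margins[n_scenarios // 20]

median, p5 = simulate_quarterly_margin()
print(f"median margin: {median:.1%}, 5th percentile: {p5:.1%}")
```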
Making Smart Bets

When new technology emerges, there is huge upside but also equal risk for first movers and early adopters. For CFOs, the key to navigating the AI hype cycle and making smart, grounded investments lies less in being an expert in emerging technologies and more in understanding your business and what you aim to achieve.

First, it's critical to understand the problem you're trying to solve with AI and the end goal: Are you trying to cut costs? Improve productivity? Looking for internal or external use cases? Most CFOs today are looking for ways AI can reduce spending and time spent on repetitive tasks so their teams can focus elsewhere. But productivity is just one area where AI can drive value. CFOs should also think about how AI can democratize data, helping teams be more strategic, make better business decisions, and manage risk.

Whatever the primary goal for AI adoption, maximizing the ROI on AI investments requires the right foundations. AI can only be as good as the data you feed it: if data sources are poor quality, disparate, or inaccurate, you will get lackluster results no matter how powerful the AI capabilities might be. Relatedly, adding AI to an already complex platform can frustrate teams rather than help them. Platforms that integrate easily with data sources—and clean up data during implementation—make AI reliable and accessible for nontechnical users to maximize its value.

AI agents operate best when supported by the right architecture. They should be embedded in a platform that is AI-first, flexible, and intuitive, with access to accurate, real-time data, in order to deliver transformational value fast.

Finally, for AI to be truly effective and seamless, it requires an organization-wide strategy. CFOs should work alongside their CTOs and CIOs to ensure data foundations are sound, so that when new tools or platforms are added, teams can trust that the data and AI outputs are accurate. It also helps to start small: get clear on the exact use case for AI and test it before building further.

The Next Move Is Yours

The opportunity to become an AI-empowered finance organization is there for the taking. CFOs who want to give their teams the best chance to succeed and exceed expectations should not wait to make their move. According to McKinsey, 78% of business leaders say AI has already improved operational efficiency and decision-making in their organizations. Forward-thinking CFOs are already piloting AI in planning and analysis workflows, fraud detection, and even ESG reporting. The results? Greater accuracy, faster turnaround, and a better handle on risk. Those who delay risk being outpaced by competitors already harnessing AI to steer their companies with precision through these uncertain times.

AI isn't just about unlocking new levels of efficiency—it's about giving finance teams better access to the insights they need to make faster, more informed decisions in a more challenging and unpredictable world. Agents in particular have the power to change a business's trajectory and results—finding new pathways to accelerate growth, drive higher margins, and identify the right opportunities to make trade-offs. CFOs who embrace this shift and harness the power of AI won't just have a significant edge over the competition—they'll lead and redefine their industries.
Yahoo
2 hours ago
Best CD rates today, July 5, 2025 (best account provides 5.5% APY)
Find out how much you could earn by locking in a high CD rate today. The Federal Reserve cut the federal funds rate three times in 2024, so now could be your last chance to lock in a competitive CD rate before rates fall further. CD rates vary widely across financial institutions, so it's important to ensure you're getting the best rate possible when shopping around for a CD. The following is a breakdown of CD rates today and where to find the best offers.

Generally, the best CD rates today are offered on shorter terms of around one year or less. Online banks and credit unions, in particular, offer the top CD rates. As of July 5, 2025, the highest CD rate is 5.5% APY, offered by Gainbridge® on its 5-year CD, which requires a $1,000 minimum opening deposit.

The amount of interest you can earn from a CD depends on the annual percentage yield (APY). This is a measure of your total earnings after one year, accounting for the base interest rate and how often interest compounds (CD interest typically compounds daily or monthly).

Say you invest $1,000 in a one-year CD at a 1.81% annual rate that compounds monthly. At the end of that year, your balance would grow to $1,018.25 — your initial $1,000 deposit, plus $18.25 in interest. Now say you choose a one-year CD at a 4% annual rate instead. In this case, your balance would grow to $1,040.74 over the same period, which includes $40.74 in interest.

The more you deposit in a CD, the more you stand to earn. Taking the same one-year CD at a 4% rate but depositing $10,000, your total balance when the CD matures would be $10,407.42, meaning you'd earn $407.42 in interest. (A quick check of these figures follows below.)

Read more: What is a good CD rate?

When choosing a CD, the interest rate is usually top of mind, but it isn't the only factor to consider. Several types of CDs offer different benefits, though you may need to accept a slightly lower interest rate in exchange for more flexibility. Here's a look at some common types of CDs beyond traditional CDs:

- Bump-up CD: Lets you request a higher interest rate if your bank's rates go up during the account's term. However, you're usually allowed to "bump up" your rate just once.
- No-penalty CD: Also known as a liquid CD, this type of CD gives you the option to withdraw your funds before maturity without paying a penalty.
- Jumbo CD: Requires a higher minimum deposit (usually $100,000 or more) and often offers a higher interest rate in return. In today's CD rate environment, however, the difference between traditional and jumbo CD rates may not be much.
- Brokered CD: As the name suggests, these CDs are purchased through a brokerage rather than directly from a bank. Brokered CDs can sometimes offer higher rates or more flexible terms, but they also carry more risk and might not be FDIC-insured.
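A quick sketch of the compounding arithmetic behind the example figures above (assuming, as those examples do, monthly compounding of the quoted annual rate):

```python
def cd_balance(principal: float, annual_rate: float, months: int = 12) -> float:
    """Balance after `months` of monthly compounding at `annual_rate`."""
    return principal * (1 + annual_rate / 12) ** months

# Reproduces the article's examples: $1,018.25, $1,040.74, $10,407.42.
for principal, rate in [(1_000, 0.0181), (1_000, 0.04), (10_000, 0.04)]:
    balance = cd_balance(principal, rate)
    print(f"${principal:,} at {rate:.2%}: ${balance:,.2f} "
          f"(interest ${balance - principal:,.2f})")
```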