
Apple Loop: iPhone Fold Launch Dates, The Apple Logo Mystery, Surprise iPhone 17 Upgrade
Taking a look back at this week's news and headlines from across the Apple world, including a surprise iPhone upgrade, iPhone Fold launch date, the mystery of the Apple logo, waiting for the MacBook Pro, sleeping in silence, and F1 The Movie saves cinema.
Apple Loop is here to remind you of a few of the many discussions around Apple in the last seven days. You can also read my weekly digest of Android news here on Forbes.
A Bigger iPhone
More information points to the vanilla iPhone 17 moving up to a 6.3-inch display. Following on from case leaks last week, this week saw the reputable Digital Chat Station share details on the potential for a larger screen. With the iPhone Plus range set to be replaced by the iPhone 17 Air, there is space in the portfolio for a larger base model, especially when the iPhone 16e has the smaller screen:
"...the base iPhone 17 is rumored to get a larger display than the iPhone 16 base model. This potential upgrade has been in the rumor mill for months now, but now, yet another rumor out of China corroborates it, so it's getting more and more likely."
(Phone Arena).
Where's The Apple Going?
There are also intriguing discussions this week around the location of the magnetic MagSafe charging ring and the location of the Apple logo. With sources indicating that the logo is moving, the stylised apple may no longer sit in the centre of the charging ring. For a company known for chasing aesthetics, this is a courageous choice if true:
"What is clear is that if the logo moves down, the MagSafe circle will likely need to be replaced with an incomplete circle to offer any kind of visual harmony. Even so, I still believe the MagSafe coil could potentially also occupy a different position in the iPhone or at the very least, the visuals of this would have been front-and-center in Apple's engineers' minds from the beginning."
(Forbes).
The Foldable iPhone's Potential Launch Date
Apple is currently struggling to enter the artificial intelligence space, with the idea of 'don't do it first, do it right' proving to be a headache for the software engineers. The same may not be the case for the hardware team, as details on Apple's foldable iPhone point to a 2026 launch. That is definitely late, but with Apple's hardware eye and hubris, it will be an entry that Tim Cook and his team call right:
"If everything stays on track, the device could complete prototype testing by the end of 2025 and proceed to the Engineering Verification Test (EVT) stage, setting the stage for a possible launch in the second half of 2026,' the report went on. That means that only two years after the iPhone 16 Pro, we could see a folding iPhone."
(Digitimes via Forbes).
Wait For The M5 MacBook Pro
Apple is expected to launch the M5 Apple Silicon chipset later this year, and with it a handful of new products, including a new iteration of the MacBook Pro. The biggest reason to hold off until the end of the year for this macOS laptop will be the inclusion of said M5 chipset:
"While the M5 chipset is expected to be a relatively steady upgrade of the macOS-focused Apple Silicon, that performance upgrade will be keenly felt as Apple establishes its deskbound operating system in a world focused on artificial intelligence. Apple's approach to offer as much local processing of personal data as possible requires as much power as possible inside the Mac."
(Forbes).
Night, Night, It's Time To Pause
A new audio option has been found in the code for iOS 26. The option to pause media when you fall asleep will be a boon to those who settle in at night with an audiobook or a podcast playing:
"The option to pause audio when asleep will save your spot in an audiobook or a podcast, but it should also preserve battery life by preventing your earbuds from staying on all night. Pausing audio should be on by default when you install iOS 26, but it can be enabled by connecting your headphones to your iPhone and then tapping on them in the Settings app. Apple has not explained how the Beats or AirPods detect that you've fallen asleep."
(MacRumors).
And Finally...
Apple's latest foray into the major motion picture business with F1 The Movie looks to have paid off. With a $140+ million opening weekend, the Brad Pitt vehicle of vehicles may push Apple and other streamers toward more cinematic releases:
"Whatever the motivations behind this movie, though, the immediate financial success of F1: The Movie will hopefully be taken as a positive so that the streaming giants are more likely to bring their original movies to the big screen first — where they belong."
(Forbes).
Apple Loop brings you seven days' worth of highlights every weekend here on Forbes. Don't forget to follow me so you don't miss any coverage in the future. Last week's Apple Loop can be read here, or this week's edition of Loop's sister column, Android Circuit, is also available on Forbes.
