Meesho Files Papers Confidentially to Raise ₹4,250 Crore, TFS to Raise ₹2,000 Crore

Entrepreneur · a day ago
Meesho, a leading name in India's fast-growing e-commerce space, has filed confidential papers for an initial public offering (IPO) to raise about INR 4,250 crore, while Travel Food Services (TFS), which operates food counters and lounges across India's major airports, is set to launch its INR 2,000 crore public issue.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
Meesho, a leading name in India's fast-growing e-commerce space, has filed confidential documents for an initial public offering (IPO), according to a report by Reuters. The company plans to raise approximately INR 4,250 crore (around USD 497.30 million) through the issue of fresh equity shares.
The IPO will also include a secondary component, with some existing investors expected to sell a portion of their stakes. While detailed terms of the secondary sale haven't been disclosed, Meesho has already secured shareholder approval, according to filings with the Registrar of Companies.
Founded as a challenger to giants like Amazon and Flipkart, Meesho is backed by some of the world's most notable investors, including Prosus, Elevation Capital, WestBridge Capital, SoftBank, and Peak XV Partners.
Opting for a confidential filing enables Meesho to interact with market regulators and obtain feedback without prematurely disclosing financials or strategic details. This route is increasingly being used by Indian tech companies — Groww and Shadowfax have recently adopted the same strategy.
On the performance front, Meesho has shown notable financial progress. In FY 2024, its revenue climbed 33 per cent to INR 7,615 crore, while net losses narrowed sharply to INR 305 crore from INR 1,675 crore the previous year, signalling tighter cost controls and improved efficiency.
Travel Food Services Launches ₹2,000 Crore IPO
Travel Food Services (TFS), known for operating food counters and lounges across India's major airports, is set to launch its INR 2,000 crore initial public offering (IPO).
The IPO, entirely an offer for sale (OFS) by the Kapur Family Trust, involves no issuance of new shares. As such, the entire proceeds will go to the selling shareholder. However, employees will benefit from an INR 104 per share discount in their reserved allotment.
The public issue is priced in a band of INR 1,045 to INR 1,100 per share (face value INR 1) and will open for subscription on Monday, 7 July, closing on Wednesday, 9 July. Retail investors must bid for a minimum of 13 shares, and in multiples of 13 thereafter.
Ahead of the IPO opening, the stock is trading at a grey market premium (GMP) of INR 92, suggesting a potential listing price around INR 1,192, nearly 8 per cent higher than the upper end of the price band. However, actual performance will depend on prevailing market sentiment at the time of listing.
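A quick back-of-the-envelope check of the TFS figures quoted above (price band, lot size, and GMP), using only numbers from this article:

```python
# All figures are taken from the article above.
upper_band = 1_100   # INR, upper end of the price band
lot_size = 13        # minimum retail bid, in shares
gmp = 92             # grey market premium, INR per share

min_outlay = upper_band * lot_size          # 14,300 INR for one retail lot
implied_listing = upper_band + gmp          # 1,192 INR
implied_premium = 100 * gmp / upper_band    # ~8.4 per cent over the band

print(f"Minimum retail outlay: INR {min_outlay:,}")
print(f"GMP-implied listing: INR {implied_listing} "
      f"({implied_premium:.1f}% above the upper band)")
```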
In terms of financial performance, TFS reported a 21 per cent year-on-year rise in revenue to INR 1,687.7 crore in FY 2025, while net profit increased by 27 per cent to INR 379.7 crore.
TFS operates at 14 Indian airports, including Delhi, Mumbai, and Bengaluru. The company also has a presence in Malaysia and Hong Kong, and operates quick-service restaurant (QSR) formats across nine highways in India.
Kotak Mahindra Capital, HSBC Securities, ICICI Securities, and Batlivala & Karani are acting as the book-running lead managers (BRLMs) to the issue, while MUFG Intime is the designated registrar.
Market Context: Broader Sentiment Remains Cautious
Sundar Kewat, Technical and Derivatives Analyst at Ashika Institutional Equity, said that on the derivatives front, notable open interest spurts were seen in stocks such as TECHM, TRENT, BOSCHLTD, ANGELONE, and BSE.
"After 5 straight sessions of losses, the market remained in a consolidation phase as investors stayed on the sidelines ahead of the July 9 deadline set by U.S. President Donald Trump for trade tariff negotiations," said the overview commentary.

Related Articles

Skywork-Reward-V2: Leading the New Milestone for Open-Source Reward Models

Yahoo · an hour ago

SINGAPORE, July 5, 2025 /PRNewswire/ -- In September 2024, Skywork first open-sourced the Skywork-Reward series of models and related datasets. Over the past nine months, these models and data have been widely adopted by the open-source community for research and practice, with over 750,000 cumulative downloads on the HuggingFace platform, helping multiple frontier models achieve excellent results on authoritative evaluations such as RewardBench.

On July 4, 2025, Skywork open-sourced its second-generation reward models: the Skywork-Reward-V2 series, comprising eight reward models built on different base models of varying sizes, with parameters ranging from 600 million to 8 billion. These models achieved top rankings across seven major mainstream reward model evaluation benchmarks.

Skywork-Reward-V2 download links: HuggingFace | GitHub | Technical Report

Reward models play a crucial role in the Reinforcement Learning from Human Feedback (RLHF) process. In developing this new generation of reward models, the team constructed a hybrid dataset called Skywork-SynPref-40M, containing a total of 40 million preference pairs. To achieve large-scale, efficient data screening and filtering, Skywork designed a two-stage human-machine collaborative process that combines high-quality human annotation with the scalable processing capabilities of models: humans provide rigorously verified high-quality annotations, while Large Language Models (LLMs) automatically organize and expand the data under human guidance.

Trained on this high-quality hybrid preference data, the Skywork-Reward-V2 series demonstrates broad applicability and excellent performance across multiple capability dimensions, including general alignment with human preferences, objective correctness, safety, resistance to style bias, and best-of-N scaling. Experimental validation shows that the series achieved the best performance on seven mainstream reward model evaluation benchmarks.
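For readers who want to try the released checkpoints, a minimal scoring sketch along the lines of the standard Hugging Face sequence-classification workflow is shown below; the exact repository id and chat-template behaviour are assumptions to verify against the model cards.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repository id; check the Skywork collection on HuggingFace for the
# exact name before running.
model_id = "Skywork/Skywork-Reward-V2-Llama-3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A reward model scores a whole conversation; higher means "more preferred".
conversation = [
    {"role": "user", "content": "Explain what a reward model does in RLHF."},
    {"role": "assistant", "content": "It assigns a scalar score to a candidate "
     "response so the policy can be optimised toward human preferences."},
]
input_ids = tokenizer.apply_chat_template(
    conversation, tokenize=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    score = model(input_ids=input_ids).logits[0][0].item()
print(f"reward score: {score:.3f}")
```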
01 Skywork-SynPref-40M: Human-Machine Collaboration for Million-Scale Human Preference Data Screening

Even the most advanced open-source reward models still perform inadequately on most mainstream evaluation benchmarks: they fail to capture the subtle, complex characteristics of human preferences, particularly when facing multi-dimensional, multi-level feedback. Many also excel on specific benchmark tasks but struggle to transfer to new tasks or scenarios, a clear sign of overfitting. Although existing research has tried to improve performance by optimizing objective functions, improving model architectures, and, most recently, through Generative Reward Models, the overall effectiveness remains limited.

The team believes this fragility stems mainly from the limitations of existing preference datasets, which often have narrow coverage, mechanically generated labels, or weak quality control. In developing the new generation of reward models, Skywork therefore built on the first generation's data-optimization experience while introducing more diverse, larger-scale real human preference data, aiming to improve scale while maintaining quality. The result is Skywork-SynPref-40M, the largest hybrid preference dataset to date, containing 40 million preference sample pairs.

Its core innovation is a two-stage, iterative human-machine collaborative data selection pipeline.

Stage 1: Human-Guided Small-Scale High-Quality Preference Construction. The team first built an unverified initial preference pool and used LLMs to generate preference-related auxiliary attributes such as task type, objectivity, and controversy. Human annotators then followed a strict verification protocol, using external tools and advanced LLMs to review part of the data in detail, ultimately producing a small but high-quality "gold standard" dataset that anchors subsequent data generation and model evaluation. With the gold-standard preference labels as guidance, LLMs generated high-quality "silver standard" data at scale, expanding the data volume. The team also ran multiple rounds of iterative optimization: in each round, reward models were trained and their weaknesses identified from performance on the gold-standard data; similar samples were then retrieved and auto-annotated through a multi-model consensus mechanism to further expand and strengthen the silver-standard data. This human-machine closed loop iterates continuously, steadily improving the reward model's understanding and discrimination of preferences.

Stage 2: Fully Automated Large-Scale Preference Data Expansion. With preliminary high-quality models in hand, the second stage switches to automated large-scale data expansion, relying on trained reward models to perform consistency filtering instead of manual review: if a sample's label is inconsistent with the current best model's prediction, or the model's confidence is low, an LLM is called to re-annotate it automatically; if the label is consistent with the "gold model" (a model trained only on human data) and supported by the current model or the LLM, the sample passes screening directly. Through this mechanism, the team screened 26 million selected samples from the original 40 million, balancing preference data scale and quality while greatly reducing the human annotation burden.
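Stage 2's consistency filter reads like a small decision procedure. The sketch below restates the rules from the passage above in code; current_rm, gold_rm, llm_annotate, and the confidence threshold are hypothetical stand-ins, not names from the release.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str
    rejected: str
    label: int  # 1 if `chosen` is preferred, 0 otherwise

def screen(pair, current_rm, gold_rm, llm_annotate, conf_threshold=0.5):
    """Stage 2 consistency filter: return the (possibly re-annotated) pair
    if it passes screening, or None if it is dropped.

    current_rm / gold_rm: callables returning (predicted_label, confidence);
    the "gold" model is trained only on human-verified data.
    llm_annotate: callable returning a fresh label from an LLM.
    All three, and the 0.5 threshold, are illustrative assumptions.
    """
    pred, confidence = current_rm(pair)
    gold_pred, _ = gold_rm(pair)

    # Rule 1: disagreement with the current best model, or low confidence,
    # sends the sample to an LLM for automatic re-annotation.
    if pred != pair.label or confidence < conf_threshold:
        pair.label = llm_annotate(pair)
        return pair

    # Rule 2: agreement with the gold model, supported by the current model,
    # passes screening directly.
    if pair.label == gold_pred:
        return pair

    return None  # otherwise drop the sample
```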
02 Skywork-Reward-V2: Matching Large-Model Performance at Small-Model Scale

Compared with the previous generation, the Skywork-Reward-V2 series provides eight reward models trained on Qwen3 and LLaMA3 series base models, with parameter scales from 600 million to 8 billion. On seven mainstream reward model evaluation benchmarks, RewardBench v1 and v2, PPE Preference and PPE Correctness, RMB, RM-Bench, and JudgeBench, the series comprehensively achieved current state-of-the-art (SOTA) levels.

Compensating for Model Scale Limitations with Data Quality and Richness

Even the smallest model, Skywork-Reward-V2-Qwen3-0.6B, achieves overall performance nearly matching the previous generation's strongest model, Skywork-Reward-Gemma-2-27B-v0.2, on average. The largest, Skywork-Reward-V2-Llama-3.1-8B, achieved comprehensive superiority across all mainstream benchmark tests, making it currently the best-performing open-source reward model overall.

Broad Coverage of Multi-Dimensional Human Preference Capabilities

Skywork-Reward-V2 also achieved leading results in multiple advanced capability evaluations, including Best-of-N (BoN) tasks (sketched below), bias-resistance testing (RM-Bench), complex instruction understanding, and truthfulness judgment (RewardBench v2), demonstrating strong generalization ability and practicality.
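Best-of-N evaluation, mentioned above, amounts to sampling N candidate responses and keeping the one the reward model scores highest. A minimal sketch, assuming generate_fn and reward_fn are supplied by the caller:

```python
def best_of_n(prompt, generate_fn, reward_fn, n=8):
    """Sample n candidate responses and keep the highest-scoring one.

    generate_fn(prompt) -> str and reward_fn(prompt, response) -> float
    are caller-provided (e.g. an LLM sampler and a reward model).
    """
    candidates = [generate_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda response: reward_fn(prompt, response))
```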
Highly Scalable Data Screening Significantly Improves Reward Model Performance

Beyond strong benchmark results, the team found that preference data carefully screened and filtered through the two-stage human-machine process continued to improve reward models' overall performance across multiple iterative training rounds, with particularly striking gains in the second stage's fully automated data expansion. By contrast, blindly expanding raw data failed to improve initial performance and risked introducing noise and negative effects. To further validate the critical role of data quality, the team experimented on a 16-million-sample subset of an early version: training an 8B-scale model on just 1.8 per cent (about 290,000) of the high-quality data already exceeded the performance of current 70B-level SOTA reward models, confirming that Skywork-SynPref leads not only in scale but also in quality.

03 Welcoming a New Milestone for Open-Source Reward Models: Helping Build Future AI Infrastructure

In this work, the team proposed Skywork-SynPref-40M, a hybrid dataset of 40 million preference pairs (26 million of them carefully screened), and Skywork-Reward-V2, a series of eight reward models with state-of-the-art performance designed for broad task applicability. The team believes this work and the continued iteration of reward models will help advance open-source reward models and, more broadly, progress in RLHF research, an important step for the field that can further accelerate the prosperity of the open-source community.

The Skywork-Reward-V2 series focuses on scaling preference data; future research will gradually expand to areas not yet fully explored, such as alternative training techniques and modeling objectives. Reward models and reward-shaping mechanisms have become core components of today's large-scale language model training pipelines, applicable not only to RLHF based on human preference learning and behavior guidance, but also to RLVR tasks in mathematics, programming, and general reasoning, as well as agent-based learning scenarios. The team therefore envisions that reward models, or more broadly unified reward systems, are poised to form the core of future AI infrastructure: no longer merely evaluators of behavior or correctness, but a "compass" for intelligent systems navigating complex environments, helping them align with human values and evolve toward more meaningful goals.

Additionally, Skywork released the world's first deep-research AI workspace agents in May.

SOURCE Skywork AI Pte Ltd

Las Vegas Sands Faces Macau Market Challenges Despite Attractive Valuation, Says JPMorgan

Yahoo · 2 hours ago

Las Vegas Sands Corp. (NYSE:LVS) ranks among the best cyclical stocks to buy now. On June 23, JPMorgan began coverage of Las Vegas Sands with a Neutral rating and a year-end 2026 price target of $47, which values the casino operator at nine times its projected 2026 enterprise value to EBITDA, a substantial discount of roughly four multiple points to the company's historical average. Although that discount seems 'enticing', JPMorgan said it remains apprehensive given Las Vegas Sands' poor performance in the already troubled Macau market. If the company regains market share in Macau, or if industry gross gaming revenue picks up speed again, JPMorgan said it might take a more optimistic view.

Las Vegas Sands is a casino operator with a primary focus on the Macau market, targeting Asian customers through its five casinos in Macau and Marina Bay Sands in Singapore. While we acknowledge the potential of LVS as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

Disclosure: None.

Macquarie Upgrades MakeMyTrip After Share Buyback Cuts Trip.com Stake

Yahoo · 2 hours ago

MakeMyTrip Limited (NASDAQ:MMYT) ranks among the best cyclical stocks to buy now. On June 24, Macquarie upgraded MakeMyTrip from Neutral to Outperform, citing an improved risk-reward profile after a recent share-price fall. The upgrade followed MakeMyTrip's announcement of a roughly $3.1 billion capital raise to reduce Trip.com's stake. The raise consists of $1.43 billion in zero-coupon convertible notes due in 2030, with a conversion price of $121.50, and a primary issuance of 18.4 million shares at $90 each. By repurchasing the majority of Trip.com's Class B shares, the company will cut Trip.com's ownership from 45.3% to roughly 17% and its board representation from five seats to two. Macquarie analysts expect the repurchase to reduce MakeMyTrip's basic share count by about 12%, offset by the newly issued shares.

Online travel company MakeMyTrip offers services and products including airline and bus tickets, vacation packages, hotel reservations, foreign exchange, and visa processing. While we acknowledge the potential of MMYT as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock.

Disclosure: None.
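The arithmetic behind the roughly $3.1 billion headline figure can be checked from the two components quoted above; an illustrative verification:

```python
# Components of the raise, as quoted above (USD).
convertible_notes = 1.43e9      # zero-coupon convertibles due 2030
shares_issued = 18.4e6          # primary share issuance
price_per_share = 90.0

total = convertible_notes + shares_issued * price_per_share
print(f"Total raise: ${total / 1e9:.2f}B")  # ~$3.09B, consistent with ~$3.1B
```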
