Teva Partners with Fosun Pharma to Advance Novel Cancer Immunotherapy TEV-56278
The partnership will accelerate clinical data generation for TEV-56278, which is currently in a Phase 1 study across various forms of cancer, including melanoma. TEV-56278, developed internally at Teva, is an anti-PD-1 antibody-cytokine fusion protein that uses Teva's proprietary ATTENUKINE technology.
Its novel mechanism of action is designed to selectively deliver interleukin-2 (IL-2) to PD-1-expressing T cells within the tumor microenvironment. Under the terms of the agreement, Fosun Pharma has been granted an exclusive license to develop, manufacture, and commercialize TEV-56278 in mainland China, the Hong Kong SAR, Macau SAR, Taiwan region, and select Southeast Asian countries.
Teva Pharmaceutical Industries Limited (NYSE:TEVA) develops, manufactures, markets, and distributes generic and other medicines and biopharmaceutical products internationally. Fosun Pharma is a global healthcare company operating in pharmaceuticals, medical devices and diagnostics, and healthcare services.
While we acknowledge the potential of TEVA as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report.
Disclosure: None. This article was originally published at Insider Monkey.
Related Articles
Hong Kong Strikes Back: Billion-Dollar Insurers Face Pressure to Pull Capital Out of Singapore
Hong Kong regulators are dialing up quiet pressure on life insurers like AIA Group Ltd. (AIAGF) to shift investment decision-making back home after years of watching it gradually migrate to Singapore. People familiar with the situation say the Hong Kong Insurance Authority (HKIA) began engaging with insurers in early 2024, concerned about a growing talent drain and concentrated capital risk as more firms favored Singapore's tax perks and flexible fund structures. While the relocations were allowed, the regulator has become increasingly active behind the scenes, requesting more visibility into where and how investment decisions are made.

At the center of this capital tug-of-war is AIA's Singapore-based unit, AIA Investment Management Private Ltd., which managed $139 billion at the end of 2024. Singapore's variable capital company (VCC) structure gave it an operational edge, allowing AIA to shift global equities and private equity strategies south while keeping pensions anchored in Hong Kong. Insiders say high-level investment principles still come from Hong Kong, but it's the Singapore team that selects funds and talks to managers. The HKIA, invoking a guideline known as GL14, has begun examining whether ultimate mandate control remains in the city, and has privately discouraged at least one insurer from moving assets into Singapore's structures.

There's no law forcing insurers to run portfolios from Hong Kong. But the HKIA has made its message clear: asset concentration overseas could raise red flags, especially if capital can't move easily in a crisis. Meanwhile, the Monetary Authority of Singapore has defended the city-state's growth, attributing it to global demand and operational flexibility, not jurisdiction shopping. Still, with Hong Kong's regional HQ count dropping from 1,457 to 1,410 between 2021 and 2024, and Singapore gaining momentum, the pressure may be just beginning.

This article first appeared on GuruFocus.
Nio Inc. (NIO) Tumbles 8% as Slow July Deliveries Growth Disappoints
NIO Inc. (NYSE:NIO) was one of the companies that stood stronger last week, but its shares dropped 8.18 percent on Monday to close at $4.60 apiece as investors soured on slower growth in vehicle deliveries last month.

In a statement on Friday, NIO Inc. (NYSE:NIO) said it delivered 21,017 vehicles last month, up 2.5 percent from the 20,498 delivered in the same period last year, but notably slower than the double-digit year-on-year growth of the past three months. Year-on-year, June deliveries were up 17 percent, May 13 percent, and April 53 percent.

According to NIO Inc. (NYSE:NIO), July's deliveries consisted of 12,675 vehicles from the premium smart electric vehicle brand NIO; 5,976 vehicles from the family-oriented smart electric vehicle brand ONVO; and 2,366 vehicles from the small, high-end smart electric car brand Firefly. To date, the company has delivered a cumulative total of 806,731 vehicles, of which the NIO brand accounted for 737,923, followed by ONVO with 58,599 and Firefly with 10,209.

While we acknowledge the potential of NIO as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report.
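For readers who want to check the delivery arithmetic above, here is a minimal sketch in Python. All figures come directly from the article; the script itself is purely illustrative.

    # Quick sanity check of NIO's July delivery figures cited above.
    july_2025 = 21_017   # deliveries last month
    july_2024 = 20_498   # same period last year
    growth = (july_2025 - july_2024) / july_2024 * 100
    print(f"Year-on-year growth: {growth:.1f}%")   # ~2.5%, matching the article

    # Cumulative deliveries by brand should sum to the reported total.
    by_brand = {"NIO": 737_923, "ONVO": 58_599, "Firefly": 10_209}
    print(f"Cumulative total: {sum(by_brand.values()):,}")  # 806,731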


OpenAI launches two ‘open' AI reasoning models
OpenAI announced Tuesday the launch of two open-weight AI reasoning models with similar capabilities to its o-series. Both are freely available to download from the online developer platform Hugging Face, the company said, describing the models as 'state-of-the-art' when measured across several benchmarks for comparing open models.

The models come in two sizes: a larger and more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory. The launch marks OpenAI's first 'open' language model since GPT-2, which was released more than five years ago.

In a briefing, OpenAI said its open models will be capable of sending complex queries to AI models in the cloud, as TechCrunch previously reported. That means if OpenAI's open model is not capable of a certain task, such as processing an image, developers can connect the open model to one of the company's more capable closed models.

While OpenAI open-sourced AI models in its early days, the company has generally favored a proprietary, closed-source development approach. The latter strategy has helped OpenAI build a large business selling access to its AI models via an API to enterprises and developers. However, CEO Sam Altman said in January he believes OpenAI has been 'on the wrong side of history' when it comes to open sourcing its technologies.

The company today faces growing pressure from Chinese AI labs — including DeepSeek, Alibaba's Qwen, and Moonshot AI — which have developed several of the world's most capable and popular open models. (While Meta previously dominated the open AI space, the company's Llama AI models have fallen behind in the last year.) In July, the Trump Administration also urged U.S. AI developers to open source more technology to promote global adoption of AI aligned with American values.

With the release of gpt-oss, OpenAI hopes to curry favor with developers and the Trump Administration alike, both of which have watched the Chinese AI labs rise to prominence in the open source space.

'Going back to when we started in 2015, OpenAI's mission is to ensure AGI that benefits all of humanity,' said OpenAI CEO Sam Altman in a statement shared with TechCrunch. 'To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit.'
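Because both checkpoints are distributed through Hugging Face, downloading and running the smaller model locally might look roughly like the sketch below. The repo id, memory assumptions, and generation settings are illustrative guesses rather than details confirmed in the announcement; consult the model card for the actual requirements.

    # Minimal sketch: running the lighter-weight open model with Hugging Face transformers.
    # Assumes the checkpoint is published as "openai/gpt-oss-20b" (assumed repo id) and that
    # roughly 16GB of memory is available, per the article.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="openai/gpt-oss-20b",  # assumed repo id
        device_map="auto",           # use a GPU if present, otherwise fall back to CPU
    )

    result = generator("Summarize what an open-weight model is.", max_new_tokens=64)
    print(result[0]["generated_text"])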
How the models performed

OpenAI aimed to make its open model a leader among other open-weight AI models, and the company claims to have done just that. On Codeforces (with tools), a competitive coding test, gpt-oss-120b and gpt-oss-20b score 2622 and 2516, respectively, outperforming DeepSeek's R1 while underperforming o3 and o4-mini.

OpenAI's open model performance on Codeforces (credit: OpenAI).

On Humanity's Last Exam, a challenging test of crowd-sourced questions across a variety of subjects (with tools), gpt-oss-120b and gpt-oss-20b score 19% and 17.3%, respectively. Similarly, this underperforms o3 but outperforms leading open models from DeepSeek and Qwen.

OpenAI's open model performance on HLE (credit: OpenAI).

Notably, OpenAI's open models hallucinate significantly more than its latest AI reasoning models, o3 and o4-mini. Hallucinations have been getting more severe in OpenAI's latest AI reasoning models, and the company previously said it doesn't quite understand why. In a white paper, OpenAI says this is 'expected, as smaller models have less world knowledge than larger frontier models and tend to hallucinate more.'

OpenAI found that gpt-oss-120b and gpt-oss-20b hallucinated in response to 49% and 53% of questions on PersonQA, the company's in-house benchmark for measuring the accuracy of a model's knowledge about people. That's more than triple the hallucination rate of OpenAI's o1 model, which scored 16%, and higher than its o4-mini model, which scored 36%.

Training the new models

OpenAI says its open models were trained with similar processes to its proprietary models. The company says each open model leverages mixture-of-experts (MoE) to tap fewer parameters for any given question, making it run more efficiently. For gpt-oss-120b, which has 117 billion total parameters, OpenAI says the model only activates 5.1 billion parameters per token.
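To put those mixture-of-experts numbers in perspective, here is a quick back-of-the-envelope calculation using the totals quoted above; the snippet is illustrative only.

    # Fraction of gpt-oss-120b's parameters that are active for any single token,
    # based on the figures cited above (117B total, 5.1B active per token).
    total_params = 117e9
    active_params = 5.1e9
    print(f"Active per token: {active_params / total_params:.1%}")  # roughly 4.4%

In other words, only around one in twenty-three parameters participates in generating each token, which is part of why a model of this overall size can run efficiently on a single GPU.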
The company also says its open model was trained using high-compute reinforcement learning (RL) — a post-training process to teach AI models right from wrong in simulated environments using large clusters of Nvidia GPUs. This was also used to train OpenAI's o-series of models, and the open models have a similar chain-of-thought process in which they take additional time and computational resources to work through their answers.

As a result of the post-training process, OpenAI says its open AI models excel at powering AI agents and are capable of calling tools such as web search or Python code execution as part of their chain-of-thought process. However, OpenAI says its open models are text-only, meaning they will not be able to process or generate images and audio like the company's other models.

OpenAI is releasing gpt-oss-120b and gpt-oss-20b under the Apache 2.0 license, which is generally considered one of the most permissive. This license will allow enterprises to monetize OpenAI's open models without having to pay or obtain permission from the company. However, unlike fully open source offerings from AI labs like AI2, OpenAI says it will not be releasing the training data used to create its open models. This decision is not surprising given that several active lawsuits against AI model providers, including OpenAI, have alleged that these companies inappropriately trained their AI models on copyrighted works.

OpenAI delayed the release of its open models several times in recent months, partially to address safety concerns. Beyond the company's typical safety policies, OpenAI says in a white paper that it also investigated whether bad actors could fine-tune its gpt-oss models to be more helpful in cyber attacks or the creation of biological or chemical weapons.

After testing from OpenAI and third-party evaluators, the company says gpt-oss may marginally increase biological capabilities. However, it did not find evidence that these open models could reach its 'high capability' threshold for danger in these domains, even after fine-tuning. While OpenAI's model appears to be state-of-the-art among open models, developers are eagerly awaiting the release of DeepSeek R2, DeepSeek's next AI reasoning model, as well as a new open model from Meta's new superintelligence lab.