
Latest news with #OpenSourceInitiative

What Leaders Need To Know About Open-Source Vs Proprietary Models

Forbes

21 hours ago



[Photo illustration: the DeepSeek logo next to the ChatGPT logo on a phone, Hong Kong, January 28, 2025.]

As business leaders adopt generative artificial intelligence, they must decide whether to build their AI capabilities on open-source models or rely on proprietary, closed-source alternatives. Understanding the implications of this choice can be the difference between a sustainable competitive advantage and a strategic misstep.

But what exactly is "open source"? According to the Open Source Initiative (OSI), for software to be considered open, it must offer users the freedom to use the software for any purpose, to study how it works, to modify it, and to share both the original and modified versions. Applied to AI, a truly open-source model includes the model architecture (the blueprint for how the AI processes data), the training data recipe (documentation of how data was selected and used to train the model), and the weights (the numerical values representing the AI's learned knowledge). Very few AI models are truly open by the OSI definition.

The Gradient of Openness

While fully open-source models provide complete transparency, few model developers want to publish their full source code, and even fewer are transparent about the data their models were trained on. Many so-called foundation models, the largest generative AI models, were trained on data whose copyright status is fuzzy at best and blatantly infringed at worst. More common are open-weight systems, which offer public access to model weights without disclosing the full training data or architecture. This allows faster deployment and experimentation with fewer resources, though without full transparency it limits the ability to diagnose biases or improve accuracy. Some companies adopt a staggered openness model.
They may release previous versions of proprietary models once a successor is launched, providing limited insight into the architecture while restricting access to the most current innovations. Even then, training data is rarely disclosed.

Navigating the Gradient

Whether an enterprise should use a proprietary model such as GPT-4o or a model with some level of openness, such as Llama 3.3, depends, of course, on the use case. Many organizations end up using a mix of open and closed models.

The main decision is where the model will reside. For regulated industries like banking, where data cannot leave the premises due to regulatory constraints, open-source models are the only viable option. Because proprietary model owners need to protect their intellectual property, those models can only be accessed remotely via an application programming interface (API). Open-source models, by contrast, can be deployed on a company's own premises or in the cloud.

Both open and closed models can be fine-tuned to specific use cases, but open-source models offer more flexibility and allow deeper customization. Again, the data used in that fine-tuning need not leave the company's hardware. Fine-tuning proprietary models requires less expertise but must be done in the cloud.

Still, cost and latency can tip the scales in favor of proprietary AI. Proprietary providers often operate large-scale infrastructure designed to ensure fast response times and predictable performance, especially in consumer applications like chatbots or virtual assistants handling millions of queries per day. Open-source AI, although cheaper to operate in the long run, requires significant investment in infrastructure and expertise to achieve similar latency and uptime.

Navigating the regulatory landscape is another concern for companies deploying AI. The European Union's Artificial Intelligence Act sets stricter transparency and accountability standards for proprietary AI models.
Yet proprietary providers often assume greater compliance responsibility, reducing the regulatory burden on businesses. In the U.S., the National Telecommunications and Information Administration (NTIA) is considering guidelines that assess AI openness through a risk-based lens.

Of course, a major consideration is security. By using a proprietary model, companies place their trust in the provider that the model is secure. But that opacity can hide vulnerabilities, leaving companies reliant on vendors to disclose and address threats. Open-source models, on the other hand, benefit from global security research communities that rapidly detect and patch vulnerabilities. Still, businesses often prefer the convenience of API access to proprietary models for rapid prototyping. And for consumer-facing applications, proprietary models are fast and easy to integrate into products.

Will Open-Source Overtake Proprietary Models?

An even larger issue looms over the future of closed and open source. As open models increase in performance, closing the gap with or even exceeding the best proprietary models, the financial viability of closed models and of the companies that provide them remains uncertain. China is pursuing an aggressive open-source strategy, cutting the cost of its models to take market share from companies like OpenAI. By openly releasing research, code, and models, Chinese developers hope to make advanced AI accessible at a fraction of the cost of Western proprietary solutions.

Key Takeaways for Business Leaders

Remember Betamax, the proprietary video cassette recording format developed and tightly controlled by Japan's Sony in the 1970s.
It lost to the more open VHS format for the same reason many people think closed AI models will eventually be eclipsed by open-source AI.

Leaders must define what they want to achieve with AI, whether that is efficiency, innovation, risk reduction, or compliance, and let those goals guide their model selection and deployment strategy. For example, they can leverage open-source communities for innovation and rapid prototyping while relying on proprietary solutions for mission-critical, high-security applications. Collaborating with external partners and using both open-source and proprietary models as appropriate will position organizations to innovate responsibly and remain competitive.

The key is for leaders to understand their unique operational needs, data sensitivities, and technical capabilities, then choose accordingly. Choosing between open-source and proprietary AI is less a binary decision than a search for the optimal point on a continuum from fully closed to fully open.
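The deployment contrast discussed in this article (proprietary models reached only over a vendor's remote API, open-weight models hosted on hardware the company controls) can be illustrated with a minimal sketch. Every endpoint URL, model name, and file path below is a hypothetical placeholder, not any specific vendor's API.

```python
# Sketch: the two deployment paths for generative AI.
# All names, URLs, and paths are illustrative assumptions.

def proprietary_request(prompt: str) -> dict:
    """Proprietary model: inference runs on the vendor's servers, so the
    prompt (and any fine-tuning data) must leave the company's premises."""
    return {
        "method": "POST",
        "url": "https://api.example-vendor.com/v1/chat",  # hypothetical endpoint
        "body": {
            "model": "vendor-frontier-model",  # hypothetical model name
            "messages": [{"role": "user", "content": prompt}],
        },
    }

def open_weight_request(prompt: str) -> dict:
    """Open-weight model: the weights sit on hardware the company controls,
    so the prompt never crosses the network boundary."""
    return {
        "runtime": "local",
        "model_path": "/opt/models/open-weights-8b",  # hypothetical on-prem path
        "input": prompt,
    }

if __name__ == "__main__":
    remote = proprietary_request("Summarize this contract.")
    local = open_weight_request("Summarize this contract.")
    # The regulatory distinction in a nutshell: the first payload is shipped
    # to an external host, the second never leaves the building.
    print(remote["url"], local["model_path"])
```

The shape of the two payloads is the whole point: for regulated industries where data cannot leave the premises, only the second path is viable.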

Why the 'spirit' of open source means much more than a license

Yahoo

08-02-2025



Arguments about what is and isn't "open source" are often resolved by deferring to the Open Source Initiative (OSI): if a piece of software is available under a license rubber-stamped as "open source" by the OSI's formal definition, then that software is open source. But the waters muddy when you get into the nuts and bolts of legal definitions versus the "spirit" of what open source really means.

Indeed, there is significant nuance in the open source versus proprietary software debate: Has an "open source company" hamstrung its project by sliding core features behind a commercial paywall? How much transparency is there around the project's development? And how much direct input does the "community" really have in a given project? To many, open source is not just about the legal ability to use and modify code; the culture, transparency, and governance around it are paramount.

Everyone knows about the Google-flavored version of Android that ships on smartphones and tablets, replete with an array of apps and services. The underlying Android Open Source Project (AOSP), released under the permissive Apache 2.0 license, is available for anyone to access, "fork," and modify for their own hardware projects. Android, by just about any definition, is about as open source as it gets. And Google has used this fact in its defense against anti-competition criticism, noting that Amazon has reappropriated Android for its own lineup of Fire-branded devices.

But all this ignores the separate "anti-fragmentation agreements" Google signed with hardware makers, which restrict them from shipping forked versions of Android. And unlike a project such as Kubernetes, which sits under an independent foundation with a diverse range of corporate and community contributors, Android sits under the direct control of Google, without a great deal of transparency over its roadmap or community input.
"Android, in a license sense, is perhaps the most well-documented, perfectly open 'thing' that there is," Luis Villa, co-founder and general counsel at Tidelift, said in a panel discussion at the State of Open Con25 in London this week. "All the licenses are exactly as you want them — but good luck getting a patch into that, and good luck figuring out when the next release even is."

This gets to the crux of the debate: open source can be something of an illusion. A lack of real independence can mean a lack of agency for those who would like to properly get involved in a project. It can also raise questions about a project's long-term viability, evidenced by the countless open source companies that have switched licenses to protect their commercial interests.

"If you think about the practical accessibility of open source, it goes beyond the license, right?" Peter Zaitsev, founder of open source database services company Percona, said in the panel discussion. "Governance is very important, because if it's a single corporation, they can change a license like 'that.'"

These sentiments were echoed in a separate talk by Dotan Horovits, open source evangelist at the Cloud Native Computing Foundation (CNCF), where he mused about open source "turning to the dark side." He noted that in most cases, issues arise when a single-vendor project decides to make changes based on its own business needs, among other pressures. "Which begs the question, is vendor-owned open source an oxymoron?" Horovits said. "I've been asking this question for a good few years, and in 2025 this question is more relevant than ever."

These debates won't be going anywhere anytime soon, as open source has emerged as a major focal point in the AI realm. China's DeepSeek arrived with a bang off the back of open source hype, and while the models' MIT licenses are very much recognized as open source, black holes remain around the training data, among other components.
That is why researchers at Hugging Face are trying to create an even "more open" version of DeepSeek's reasoning model. Meta, meanwhile, has long tooted its open source horn with regard to its Llama-branded large language models (LLMs), even though Llama isn't open source by most estimations — the models, while perhaps more "open" than others, carry commercial restrictions. "I have my quibbles and concerns about the open source AI definition, but it's really clear that what Llama is doing isn't open source," Villa said.

Emily Omier, a consultant for open source businesses and host of the Business of Open Source podcast, added that such attempts to "corrupt" the meaning of "open source" are testament to its inherent power. "It goes to show how strong the brand of open source is — the fact that people are trying to corrupt it means that people care," Omier said during the panel discussion.

Much of this may be for regulatory reasons, however. The EU AI Act has a special carve-out for "free and open source" AI systems (aside from those deemed to pose an "unacceptable risk"). And Villa says this goes some way toward explaining why a company might want to rewrite the rulebook on what "open source" actually means. "There are plenty of actors right now who, because of the brand equity [of open source] and the regulatory implications, want to change the definition, and that's terrible," Villa said.

While there are clear arguments for applying additional criteria that incorporate the "spirit" of what open source is intended to be about, having clear parameters, as defined by a license, keeps things simple and less subject to nuanced subjectivity. How much community engagement would be necessary for something to be truly "open source"? On a practical and legal level, keeping the definition limited to the license makes sense.
Stefano Maffulli, executive director at the OSI, said that while some organizations and foundations do lean into ideas around "open design, community, and development," these are all fundamentally philosophical concepts. "The point of having definitions is to have criteria that can be scored, and focusing on licensing is how that is accomplished," Maffulli said in a statement issued to TechCrunch. "The global community and industry have come to rely on the Open Source Definition and now the Open Source AI Definition as objective measures that they can rely on."
