Latest news with #MicrosoftCopilot


Business Journals
2 days ago
- Business
- Business Journals
5 steps for organizations to get started with AI
1. AI PRODUCTIVITY: Begin your AI journey by activating the AI features already available in your Microsoft suite. Microsoft Copilot integrates directly with Word, Excel, PowerPoint, and Outlook, providing immediate value without requiring new infrastructure. Users can draft documents, analyze spreadsheet data, create presentations, and summarize email threads with Copilot. These tools can be deployed iteratively across departments, starting with power users who can become internal champions for the rest of the organization.
2. WORKFLOW & DOCUMENT INTELLIGENCE: Some of the quickest ROI in AI comes from workflow and document automation. Identify repetitive, rule-based processes that consume significant employee time and automate them using AI-powered tools. Microsoft Power Automate can handle document processing, data entry, approval workflows, and integration between different business systems. With Power Platform's AI Builder, you can use AI models to detect document types and extract data to drive these automated workflows, which can connect to Microsoft SharePoint, Microsoft Teams, and many other third-party applications (see the sketch after this list).
3. AI STRATEGY & GOVERNANCE: Create a foundation for AI success by establishing clear data governance, ethical AI guidelines, and implementation standards. This includes data quality assessments, privacy compliance, and defining acceptable AI use cases for your organization. That can mean leveraging Microsoft Purview for data discovery and classification across your M365 environment and using the Microsoft 365 Security and Compliance Center to enforce AI governance rules and monitor compliance across all AI implementations.
4. LAUNCH AI PILOTS: Select specific business challenges and use cases where AI can deliver measurable value with minimal risk. Customer service or internal-facing chatbots, document summarization, and predictive maintenance are excellent starting points that can provide clear ROI while building organizational AI experience. Plan to start small with several experiments before determining which use cases and AI solutions should proceed to the pilot stage.
5. BUILD INTERNAL EXPERTISE: Invest in developing your team's AI literacy through structured training programs, workshops, and hands-on experience with AI tools. Create centers of excellence that can guide AI adoption across departments and maintain best practices. Over time, the input and experience of these teams will create a cycle of evaluating and updating your organization's AI strategy and governance plans.
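To make step 2 concrete, here is a minimal sketch of the classify-then-route logic behind such a workflow. It is plain Python rather than Power Automate or AI Builder, and the keyword-based classify_document helper is a hypothetical stand-in for whatever document-classification model your organization actually deploys.

```python
# Hypothetical sketch: route incoming documents by predicted type before
# handing them to an approval workflow. classify_document() stands in for
# an AI document-classification model (for example, an AI Builder model
# invoked from Power Automate); swap in your real service there.

from pathlib import Path

ROUTES = {
    "invoice": Path("queues/accounts_payable"),
    "contract": Path("queues/legal_review"),
    "resume": Path("queues/recruiting"),
}
FALLBACK = Path("queues/manual_triage")


def classify_document(path: Path) -> str:
    """Placeholder for an AI classification call.

    Uses a naive keyword heuristic so the sketch runs end to end; a real
    deployment would send the file to a trained model and return its label.
    """
    text = path.read_text(errors="ignore").lower()
    for label in ROUTES:
        if label in text:
            return label
    return "unknown"


def route_inbox(inbox: Path) -> None:
    """Move each incoming document into the queue for its predicted type."""
    for doc in inbox.glob("*.txt"):
        label = classify_document(doc)
        target = ROUTES.get(label, FALLBACK)
        target.mkdir(parents=True, exist_ok=True)
        doc.rename(target / doc.name)
        print(f"{doc.name} -> {target}")


if __name__ == "__main__":
    route_inbox(Path("inbox"))
```

In a production setting, the same pattern would typically live inside a Power Automate flow: a classification step tags the incoming file, and a conditional branch moves it into the appropriate SharePoint library or Teams channel.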

Miami Herald
3 days ago
- Science
- Miami Herald
Huge AI copyright ruling offers more questions than answers
While sci-fi movies from the 1980s and '90s warned us about the potential for artificial intelligence to destroy society, the reality has been much less dramatic so far. Skynet was supposed to be responsible for the rise of killer machines called Terminators that could only be stopped by time travel and plot holes. The AI from "The Matrix" movies also waged a war on its human creators, enslaving the majority of them in virtual reality while driving the rebellion underground.

To be fair, the artificial intelligence from OpenAI, Google Gemini, Microsoft Copilot, and others does threaten to destroy humanity, but only sometimes. And it looks like the technology is mostly harmless to our chances of survival. But that doesn't mean this transformative tech isn't causing other very real problems.

The biggest issue humans currently have with AI is how the companies controlling it train their models. Large language models like OpenAI's ChatGPT need to feast on a lot of information to beat the Voight-Kampff test from "Blade Runner," and a lot of that information is copyrighted. So at the moment, the viability of AI rests in the hands of the courts, not software engineers. This week, the courts handed down a monumental ruling that could have a wide-ranging ripple effect.

Judge William Alsup of the U.S. District Court for the Northern District of California ruled this week that AI company Anthropic, and others, can train their AI models on published books without the authors' consent. The ruling could set an important legal precedent for the dozens of other ongoing AI copyright lawsuits.

A lawsuit filed by three authors accused Anthropic of ignoring copyright laws when it pirated millions of books to train its LLM, but Alsup sided with Anthropic. "The copies used to train specific LLMs were justified as a fair use," Alsup, who has also presided over Oracle America v. Google Inc. and other notable tech trials, wrote in the ruling. "Every factor but the nature of the copyrighted work favors this result. The technology at issue was among the most transformative many of us will see in our lifetimes."

Ed Newton-Rex, CEO of Fairly Trained, a nonprofit that advocates for ethically compensating the creators of the data LLMs are trained on, had a more nuanced take on the verdict after many headlines declared it a broad win for AI companies. "Today's ruling in the authors vs. Anthropic copyright lawsuit is a mixed bag. It's not the win for AI companies some headlines suggest - there are good and bad parts," he said in a lengthy X post this week. "In short, the judge said Anthropic's use of pirated books was infringing, but its training on non-pirated work was fair use."

So Anthropic is on the hook for pirating the material, but the judge ruled that it doesn't need the authors' permission to train its models. This means Anthropic's fair use argument stood up in court, but the ruling may not be as wide-ranging as it seems. "This is not a blanket ruling that all generative AI training is fair use. Other cases may go the other way, as the facts are different," Newton-Rex said. "The Copyright Office has already pointed out that some AI models are more transformative than others - for instance, they singled out AI music models as less transformative. Lobbyists will say this decision confirms that generative AI training is fair use - that's not true."


Tom's Guide
3 days ago
- Business
- Tom's Guide
Microsoft reportedly 'struggling' to convince companies to buy Copilot — yup, employees prefer ChatGPT
It'd be fair to say ChatGPT has become deeply ingrained in our lexicon; it's almost a shorthand for AI models in general now, in the same way that consumers call any tablet an iPad. OpenAI's offering was one of the first LLM chatbots to reach mass-market penetration thanks to its free public availability, and while the likes of Google Gemini are catching up, it appears the popularity of ChatGPT is having a negative impact on Microsoft's own Copilot.

A new report from Bloomberg suggests that businesses that have stumped up the cash for Copilot's enterprise features are still finding employees using ChatGPT instead. According to the report, pharmaceutical company Amgen paid for a 20,000-user plan for Microsoft Copilot, but more than a year later its employees still prefer to work with ChatGPT.

There's plenty of crossover, too. OpenAI's models form part of Copilot's own LLM stack, and despite the similarities and overlapping features like data analysis and email drafting, ChatGPT remains much more popular. TechRadar reports that as of this month, ChatGPT has almost 800 million weekly active users (3 million of them paying), while Copilot has hovered around 20 million weekly users for the past year.

"The company's [Microsoft's] salespeople knew ChatGPT dominated the consumer chatbot market, but expected Microsoft to own the enterprise space for AI assistants thanks to decades-long relationships with corporate IT departments," the report explains. "But by the time Microsoft began selling Copilot to businesses, many office workers had already tried out ChatGPT at home, giving the chatbot a first-mover advantage."

That's despite the prevalence of Windows across the globe, and while Microsoft has sold millions of dollars' worth of Copilot accounts, OpenAI still seems to have the edge.

Despite rapid advancements across the board, ChatGPT continues to dominate the AI market. The likes of Claude and Gemini offer increasingly competitive chatbot experiences, and Meta, Copilot, and a variety of other brands are offering AI technology to match. ChatGPT has held its dominant position likely because it was the first major chatbot and the one that has become synonymous with the technology. However, its competitors are gaining in popularity. Whether it is a branding problem or simply the challenge of pulling people away from their number one chatbot, Copilot seems to be in a similar boat to many other major AI providers right now.


Newsweek
3 days ago
- Politics
- Newsweek
Higher Ed's AI Panic Is Missing the Point
Senate Republicans are pushing a provision in a major tax-and-spending bill that would bar states from regulating AI for 10 years to avoid conflicting local rules. Colleges and universities are facing a parallel crisis: uncertainty over whether and how students should be allowed to use generative AI. In the absence of clear institutional guidance, individual instructors are left to make their own calls, leading to inconsistent expectations and student confusion.

This policy vacuum has triggered a reactive response across higher education. Institutions are rolling out detection software, cracking down on AI use in syllabi, and encouraging faculty to read student work like forensic linguists. But the reality is that we cannot reliably detect AI writing. And if we're being honest, we never could detect effort, authorship, or intent with any precision in the first place.

That's why I've stopped trying. In my classroom, it doesn't matter whether a student used ChatGPT, the campus library, or help from a roommate. My policy is simple: You, the author, are responsible for everything you submit.

That's not the same as insisting on authorial originality, some imagined notion that students should produce prose entirely on their own, in a vacuum, untouched by outside influence. Instead, I teach authorial responsibility. You are responsible for ensuring that your work isn't plagiarized, for knowing what your sources are, and for the quality, accuracy, and ethics of the writing you turn in, no matter what tools you used to produce it. This distinction is more important than ever in a world where large language models are readily accessible.

We conflate linguistic polish with effort, or prose fluency with moral character. But as Adam Grant argued last year in The New York Times, we cannot grade effort; we can only grade outcomes. This has always been true, but AI has made it undeniable. Instructors might believe they can tell when a student has put in "genuine effort," but those assumptions are often shaped by bias. Does a clean, structured paragraph indicate hard work? Or just access to better training, tutoring, or now, machine assistance? Does a clumsy but heartfelt draft reflect authenticity? Or limited exposure to academic writing? Our ability to detect effort has always been flawed. Now, it's virtually meaningless.

That's why it doesn't matter if students use AI. What matters is whether they can demonstrate understanding, communicate effectively, and meet the goals of the assignment. If your grading depends on proving whether a sentence came from a chatbot or a person, then you don't know what the target learning outcome was in the first place. And if our assessments are built on presumed authorship, they're no longer evaluating learning. They're evaluating identity.

There are already cracks in the AI-detection fantasy. Tools like GPTZero and Turnitin's AI checker routinely and wrongly accuse multilingual students, disabled students, and those who write in non-standard dialects. In these systems, the less a student "sounds like a college student," the more likely they are to be accused of cheating. Meanwhile, many students, especially those who are first-generation, disabled, or from under-resourced schools, use AI tools to fill in gaps that the institution itself has failed to address. What looks like dishonesty is often an attempt to catch up.

Insisting on originality as a condition of academic integrity also ignores how students actually write. The myth of the lone writer drafting in isolation has always been a fiction. Students draw from templates, search engines, notes from peers, and yes, now from generative AI. If we treat all of these as violations, we risk criminalizing the ordinary practices of learning.

This requires a shift in mindset that embraces writing as a process rather than a product. It means designing assignments that can withstand AI involvement by asking students to revise, explain, synthesize, and critique. Whether a sentence was AI-generated matters far less than whether the student can engage with what it says, revise it, and place it in context. We should be teaching students how to write with AI, not how to hide from it.

I'm not arguing for a free-for-all. I'm arguing for transparency, accountability, and educational clarity. In my courses, I don't treat AI use as taboo technology. I treat it as a new literacy. Students learn to engage critically with AI by revising in response to its suggestions, critiquing its assumptions, and making conscious choices about what to accept and what to reject. In other words, they take responsibility.

We cannot force students to write "original" prose without any external help. But we can teach them to be responsible authors who understand the tools they use and the ideas they put into the world. That, to me, is a far more honest and useful version of academic integrity.

Annie K. Lamar is an assistant professor of computational classics and, by courtesy, of linguistics at the University of California, Santa Barbara. She specializes in low-resource computational linguistics and machine learning. At UC Santa Barbara, she is the director of the Low-Resource Language (LOREL) Lab. Lamar holds a PhD in classics from Stanford University and an MA in education from the Stanford Graduate School of Education. Lamar is also a Public Voices fellow of The OpEd Project. The views expressed in this article are the writer's own.

Yahoo
4 days ago
- Business
- Yahoo
Providence unveils tech-focused strategic plan for 2030
This story was originally published on Healthcare Dive.

Providence CEO Erik Wexler laid out a new strategic direction for the health system on Monday that's heavily focused on increased technology adoption. The system's strategy through 2030 includes leveraging technology to streamline care delivery, expanding partnerships to better meet clinical needs and increasing the use of artificial intelligence to reduce administrative burdens. Providence will track progress toward implementing the strategy beginning in 2027. 'If we accomplish even most of what we've set out to do, the impact will be transformational,' the CEO said.

Wexler said the new plan should help the 51-hospital system navigate the 'polycrisis' facing healthcare, including economic, legislative, technological and societal pressures. 'These forces are complex and fast moving,' the executive told staff in an internal email shared with Healthcare Dive. 'At the same time, we must be intentional about our long-term direction.'

Providence will deploy initiatives to reduce wait times and increase self-scheduling, deepen its commitment to value-based care, revamp its acute and post-acute care offerings to include more virtual, ambulatory and in-home care options, and increase its use of AI tools. The health system plans to use Microsoft Copilot, an AI-powered office assistant, to help streamline tasks, and ambient scribing technology to save clinicians time. It also hopes to strengthen digital connections among its partners to share information more seamlessly and plans to invest further in specialty pharmacy care. Providence's Office of Transformation, formed this January, will carry out some of these tasks, according to Wexler.

Providence unveiled its new strategic direction a little more than a week after shedding 600 roles in a broader restructuring. The health system, like many of its peers, has been working to find its financial footing amid growing economic headwinds and regulatory uncertainty. It has been chasing a financial turnaround for several years and had hoped to be profitable by 2025. However, external factors, including looming funding cuts, have continued to dog the provider this year, the health system said in April.

Providence has cut some costs this year by freezing nonclinical hiring and cutting some discretionary spending. Collectively, the changes have helped Providence face 'short-term challenges head on,' according to Wexler. The new strategic plan should help the system move further beyond survival mode toward long-term, sustainable growth.