Polimorphic Announces $18.6 Million Series A Led by General Catalyst to Drive Government Efficiency with AI


Business Wire · 5 days ago
NEW YORK--(BUSINESS WIRE)-- Polimorphic, which uses AI to digitize resident services for local governments and their constituents, today announced an $18.6 million Series A led by General Catalyst, with continued backing from investors M13 and Shine. With ever-growing pressure on governments to improve efficiency, this round of funding will allow Polimorphic to expand its AI support for governments while making government services more human for residents.
A recent study revealed that many local governments lack the expertise and processes to leverage AI effectively, and only about 20% of the U.S. government's more than $90 billion in annual IT spending is devoted to modernization. In addition, local governments are handling an unprecedented volume of repetitive, manual tasks, including answering the same questions by phone and email, processing simple paper forms, and hunting down information across disconnected systems, signaling an urgent need for AI assistance. At the same time, residents expect a private sector-like digital experience.
Using Polimorphic's AI Front Desk, Constituent Relationship Manager, and Analytics, governments can modernize how they serve by providing access to services online, 24/7, and in more than 75 languages, while improving efficiency for government teams.
'With this funding, we're accelerating our mission to be the AI company for government efficiency, making public service easier, faster, and more human for everyone,' said CEO and Co-founder Parth Shah. 'Local governments are the front line of democracy, but they've been left behind by decades of underinvestment in technology. We're here to change that. Our tools help staff serve residents more efficiently and build trust, reduce burnout, and unlock capacity for real community impact. This moment isn't just about growth, it's about building a future where every resident can get the help they need, and every public servant has the support they deserve.'
To date, Polimorphic customers have reduced voicemails by up to 90%, experienced a 75% reduction in walk-in requests, and collected more than $10 million in online payments, saving more than 55,000 working hours—or 26 years of work—combined. Polimorphic customers include cities, counties, state agencies, and special districts from across the country, including the City of Pacifica, CA; Tooele County, UT; Polk County, NC; and the Town of Palm Beach, FL. This new round of funding will accelerate growth in Polimorphic's top states, including Wisconsin, New Jersey, North Carolina, Texas, Florida, and California.
'Polimorphic exemplifies what a true partnership should be: personable, professional, and deeply invested in shared success,' said Jess Savidge, Administrative and Communications Manager for the Town of Palm Beach, FL. 'In a community like the Town of Palm Beach, where expectations are exceptionally high, their team has exceeded every standard through innovation, responsiveness, and a commitment to excellence. Thanks to their innovative platform and collaborative approach, we've continuously enhanced customer service and gained valuable insights to improve our digital presence. We look forward to continued collaborations to continue the delivery of world-class government services with the precision and quality of a top-tier business.'
The company's newest round of funding will drive unmatched features in GovTech, including its AI Front Desk, a full-service constituent platform with a voice line, chatbot, search, SMS, and email, as well as GIS-based resident support, agentic AI application reviews, advanced analytics, and other AI features.
'Polimorphic has the potential to become the next modern system of record for local and state government. Historically, it's been difficult to drive adoption of these foundational platforms beyond traditional ERP and accounting in the public sector,' said Sreyas Misra, Partner at General Catalyst. 'AI is the jet fuel that accelerates this adoption. Parth and the team are making it possible for local and state governments to automate highly complex workflows from end to end, something that's been out of reach until now.'
'Government inefficiency creates billions of dollars in waste, a problem Polimorphic's solutions are built to solve,' said M13 General Partner Latif Peracha. 'By digitizing how residents and cities interact, they are removing that wasted time and money from the system.'
In addition to innovative AI product features, the funding will allow Polimorphic to triple the size of its sales and engineering teams, driving its mission to create solutions that let governments of all sizes deliver for the people.
About Polimorphic
Polimorphic uses artificial intelligence (AI) to help local governments better serve their communities. Polimorphic's AI Front Desk, Constituent Relationship Manager, and Dashboard & Analytics empower service-first governments to provide residents with high-quality, accessible communication and engagement. Serving hundreds of public sector departments across the country, Polimorphic is built for the unique needs of government, including cities, counties, and state agencies. Polimorphic is backed by world-class investors, including General Catalyst, M13, and Shine. Learn more or request a demo at polimorphic.com.
About General Catalyst
General Catalyst is a global investment and transformation company that partners with the world's most ambitious entrepreneurs to drive resilience and applied AI.
We support founders with a long-term view who challenge the status quo, partnering with them from seed to growth stage and beyond.
With offices in San Francisco, New York City, Boston, Berlin, Bangalore, and London, we have supported the growth of 800+ businesses, including Airbnb, Anduril, Applied Intuition, Commure, Glean, Guild, Gusto, Helsing, Hubspot, Kayak, Livongo, Mistral, Ramp, Samsara, Snap, Stripe, Sword, and Zepto.
For more: www.generalcatalyst.com, @generalcatalyst

Related Articles

Three SBSB Eastham Partners Named to 2025 Lawdragon 500 X – The Next Generation Guide

Business Wire

6 minutes ago



HOUSTON--(BUSINESS WIRE)--Schouest, Bamdas, Soshea, BenMaier & Eastham PLLC (SBSB Eastham) is pleased to announce that partners Colten Chapman, Melissa Miller and Rasha Zeyadeh have been named to the 2025 Lawdragon 500 X – The Next Generation guide, a listing of outstanding attorneys who are early in their careers but already making significant contributions shaping the future of law. Lawdragon describes this year's honorees as 'mesmerizing, multi-talented, dedicated, focused.'

A partner in SBSB Eastham's Houston office, Mr. Chapman was recognized for litigation and workers' compensation. He defends claims under the Defense Base Act and the Longshore and Harbor Workers' Compensation Act. He also handles a broad array of work for domestic and international clients.

Ms. Miller was recognized for litigation and workers' compensation work. A partner in SBSB Eastham's New Orleans office, she focuses on insurance defense, primarily on claims under the Longshore and Harbor Workers' Compensation Act, the War Hazards Compensation Act and the Defense Base Act.

Ms. Zeyadeh, a partner in SBSB Eastham's Dallas office, was recognized for personal injury and labor & employment law. Her work includes advising and defending employers and individuals in matters ranging from discrimination and harassment claims to wage and hour regulations and breach of contract. She also handles personal injury, transportation and insurance defense.

'We are incredibly proud to see Colten, Melissa and Rasha recognized alongside so many talented attorneys across the country,' said SBSB Eastham Managing Partner John Schouest. 'Their inclusion affirms our commitment to nurturing the next generation of legal talent and delivering exceptional service to our clients.'

Honorees are selected through a rigorous editorial process that combines peer nominations, independent research and journalistic vetting.
SBSB Eastham is a group of experienced attorneys who have come together to form a law firm focused on client needs. The firm's goal is to be the go-to resource at every stage of the legal process, bringing deeper experience, deeper commitment and deeper insights to help solve the most complex issues. In consultation or in the courtroom, the firm will aggressively pursue a client's best interests. For more information, visit

What Two Judicial Rulings Mean for the Future of Generative AI

Atlantic

11 minutes ago



Should tech companies have free access to copyrighted books and articles for training their AI models? Two judges recently nudged us toward an answer.

More than 40 lawsuits have been filed against AI companies since 2022. The specifics vary, but they generally seek to hold these companies accountable for stealing millions of copyrighted works to develop their technology. (The Atlantic is involved in one such lawsuit, against the AI firm Cohere.) Late last month, there were rulings on two of these cases, first in a lawsuit against Anthropic and, two days later, in one against Meta. Both of the cases were brought by book authors who alleged that AI companies had trained large language models using authors' work without consent or compensation.

In each case, the judges decided that the tech companies were engaged in 'fair use' when they trained their models with authors' books. Both judges said that the use of these books was 'transformative'—that training an LLM resulted in a fundamentally different product that does not directly compete with those books. (Fair use also protects the display of quotations from books for purposes of discussion or criticism.)

At first glance, this seems like a substantial blow against authors and publishers, who worry that chatbots threaten their business, both because of the technology's ability to summarize their work and its ability to produce competing work that might eat into their market. (When reached for comment, Anthropic and Meta told me they were happy with the rulings.) A number of news outlets portrayed the rulings as a victory for the tech companies. Wired described the two outcomes as 'landmark' and 'blockbuster.' But in fact, the judgments are not straightforward. Each is specific to the particular details of each case, and they do not resolve the question of whether AI training is fair use in general.
On certain key points, the two judges disagreed with each other—so thoroughly, in fact, that one legal scholar observed that the judges had 'totally different conceptual frames for the problem.' It's worth understanding these rulings, because AI training remains a monumental and unresolved issue—one that could define how the most powerful tech companies are able to operate in the future, and whether writing and publishing remain viable professions.

So, is it open season on books now? Can anyone pirate whatever they want to train for-profit chatbots? Not necessarily. When preparing to train its LLM, Anthropic downloaded a number of 'pirate libraries,' collections comprising more than 7 million stolen books, all of which the company decided to keep indefinitely. Although the judge in this case ruled that the training itself was fair use, he also ruled that keeping such a 'central library' was not, and for this, the company will likely face a trial that determines whether it is liable for potentially billions of dollars in damages. In the case against Meta, the judge also ruled that the training was fair use, but Meta may face further litigation for allegedly helping distribute pirated books in the process of downloading—a typical feature of BitTorrent, the file-sharing protocol that the company used for this effort. (Meta has said it 'took precautions' to avoid doing so.)

Piracy is not the only relevant issue in these lawsuits. In their case against Anthropic, the authors argued that AI will cause a proliferation of machine-generated titles that compete with their books. Indeed, Amazon is already flooded with AI-generated books, some of which bear real authors' names, creating market confusion and potentially stealing revenue from writers. But in his opinion on the Anthropic case, Judge William Alsup said that copyright law should not protect authors from competition.
'Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,' he wrote.

In his ruling on the Meta case, Judge Vince Chhabria disagreed. He wrote that Alsup had used an 'inapt analogy' and was 'blowing off the most important factor in the fair use analysis.' Because anyone can use a chatbot to bypass the process of learning to write well, he argued, AI 'has the potential to exponentially multiply creative expression in a way that teaching individual people does not.' In light of this, he wrote, 'it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars' while damaging the market for authors' work.

To determine whether training is fair use, Chhabria said that we need to look at the details. For instance, famous authors might have less of a claim than up-and-coming authors. 'While AI-generated books probably wouldn't have much of an effect on the market for the works of Agatha Christie, they could very well prevent the next Agatha Christie from getting noticed or selling enough books to keep writing,' he wrote. Thus, in Chhabria's opinion, some plaintiffs will win cases against AI companies, but they will need to show that the market for their particular books has been damaged. Because the plaintiffs in the case against Meta didn't do this, Chhabria ruled against them.

In addition to these two disagreements is the problem that nobody—including AI developers themselves—fully understands how LLMs work. For example, both judges seemed to underestimate the potential for AI to directly quote copyrighted material to users. Their fair-use analysis was based on the LLMs' inputs—the text used to train the programs—rather than outputs that might be infringing.
Research on AI models such as Claude, Llama, GPT-4, and Google's Gemini has shown that, on average, 8 to 15 percent of chatbots' responses in normal conversation are copied directly from the web, and in some cases responses are 100 percent copied. The more text an LLM has 'memorized,' the more it can potentially copy and paste from its training sources without anyone realizing it's happening. OpenAI has characterized this as a 'rare bug,' and Anthropic, in another case, has argued that 'Claude does not use its training texts as a database from which preexisting outputs are selected in response to user prompts.' But research in this area is still in its early stages.

A study published this spring showed that Llama can reproduce much more of its training text than was previously thought, including near-exact copies of books such as Harry Potter and the Sorcerer's Stone and 1984. That study was co-authored by Mark Lemley, one of the most widely read legal scholars on AI and copyright, and a longtime supporter of the idea that AI training is fair use. In fact, Lemley was part of Meta's defense team for its case, but he quit earlier this year, criticizing, in a LinkedIn post, 'Mark Zuckerberg and Facebook's descent into toxic masculinity and Neo-Nazi madness.' (Meta did not respond to my question about this post.)

Lemley was surprised by the results of the study, and told me that it 'complicates the legal landscape in various ways for the defendants' in AI copyright cases. 'I think it ought still to be a fair use,' he told me, referring to training, but we can't entirely accept 'the story that the defendants have been telling' about LLMs. For some models trained using copyrighted books, he told me, 'you could make an argument that the model itself has a copy of some of these books in it,' and AI companies will need to explain to the courts how that copy is also fair use, in addition to the copies made in the course of researching and training their model.
As more is learned about how LLMs memorize their training text, we could see more lawsuits from authors whose books, with the right prompting, can be fully reproduced by LLMs. Recent research shows that widely read authors, including J. K. Rowling, George R. R. Martin, and Dan Brown, may be in this category. Unfortunately, this kind of research is expensive and requires expertise that is rare outside of AI companies. And the tech industry has little incentive to support or publish such studies.

The two recent rulings are best viewed as first steps toward a more nuanced conversation about what responsible AI development could look like. The purpose of copyright is not simply to reward authors for writing but to create a culture that produces important works of art, literature, and research. AI companies claim that their software is creative, but AI can only remix the work it's been trained with. Nothing in its architecture makes it capable of doing anything more. At best, it summarizes. Some writers and artists have used generative AI to interesting effect, but such experiments arguably have been insignificant next to the torrent of slop that is already drowning out human voices on the internet. There is even evidence that AI can make us less creative; it may therefore prevent the kinds of thinking needed for cultural progress.

The goal of fair use is to balance a system of incentives so that the kind of work our culture needs is rewarded. A world in which AI training is broadly fair use is likely a culture with less human writing in it. Whether that is the kind of culture we should have is a fundamental question the judges in the other AI cases may need to confront.
