
Pictory Introduces One of the World's First AI Video MCP Servers
SEATTLE--(BUSINESS WIRE)-- Pictory, a leading provider of AI-powered video creation, today announced the launch of the Pictory MCP Server, a new lightweight integration layer that enables developers, AI assistants, and automation platforms to build intelligent video workflows with unprecedented ease and flexibility.
The Pictory MCP Server is fully compliant with the emerging Model Context Protocol (MCP), making it easy for clients like Claude Desktop, Cursor, and other automation tools to discover and orchestrate video creation capabilities through natural language. It exposes modular tools - such as create-storyboard and render-video - that handle the complexity of input validation, sequencing, and API orchestration behind the scenes.
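For developers who want a concrete picture, the sketch below shows how an MCP server can register tools like create-storyboard and render-video with the open-source MCP TypeScript SDK. The server name, parameter names, and placeholder logic are illustrative assumptions, not Pictory's published schema.

```typescript
// Minimal sketch of an MCP server exposing video tools, using the MCP
// TypeScript SDK. Tool parameters and the Pictory API calls are assumed.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "pictory-video", version: "1.0.0" });

// Hypothetical tool: turn a script or prompt into a storyboard.
server.tool(
  "create-storyboard",
  {
    script: z.string().describe("Narration or script text for the video"),
    templateId: z.string().optional().describe("Optional brand template ID"),
  },
  async ({ script, templateId }) => {
    // A real server would call Pictory's storyboard API here with the
    // developer's credentials; this placeholder just echoes an ID.
    const storyboardId = `sb-${Date.now()}`;
    return {
      content: [
        {
          type: "text",
          text: `Storyboard ${storyboardId} created from a ${script.length}-character script (template: ${templateId ?? "default"})`,
        },
      ],
    };
  }
);

// Hypothetical tool: render a previously created storyboard into a video.
server.tool(
  "render-video",
  { storyboardId: z.string().describe("ID returned by create-storyboard") },
  async ({ storyboardId }) => {
    // Placeholder for the render call and polling until the video is ready.
    return { content: [{ type: "text", text: `Render started for ${storyboardId}` }] };
  }
);

// Expose the tools over stdio so MCP clients such as Claude Desktop can
// discover and invoke them.
await server.connect(new StdioServerTransport());
```

Each registered tool is then surfaced through the protocol's standard tools/list response, which is what lets an assistant discover the capabilities without any Pictory-specific client code.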
'With the rise of AI assistants and agent-driven automation, we saw an opportunity to simplify access to our video creation API,' said Vikram Chalana, CEO at Pictory. 'The MCP Server eliminates the need for custom code and deeply technical integrations. Now, anyone can create professional, branded videos just by describing what they want.'
The server empowers clients to dynamically compose workflows by chaining together high-level operations. For example, a user can ask Claude to 'make personalized outreach videos for all healthcare customers in the Midwest.' Behind the scenes, the client extracts a list of customers from a CRM such as HubSpot, selects a video template, generates a storyboard, applies a voiceover and branding, and renders the final video, all from a single natural-language command.
Designed for maximum flexibility, the MCP Server supports a tool-based architecture where each capability can be discovered and invoked individually. This modular approach gives AI assistants and developers full control over workflow composition without wrestling with API payloads.
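To illustrate that composition model, here is a minimal client-side sketch using the MCP TypeScript SDK: it discovers the server's tools, then chains create-storyboard into render-video. The launch command, package name, and argument values are assumptions for illustration only.

```typescript
// Minimal sketch of an MCP client discovering tools and chaining two calls.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the MCP server as a subprocess over stdio (command/package assumed).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["pictory-mcp-server"],
  env: { PICTORY_API_KEY: process.env.PICTORY_API_KEY ?? "" },
});

const client = new Client({ name: "workflow-demo", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Step 1: discover which tools the server exposes.
const { tools } = await client.listTools();
console.log("Available tools:", tools.map((t) => t.name).join(", "));

// Step 2: chain high-level operations: storyboard first, then render.
const storyboard = await client.callTool({
  name: "create-storyboard",
  arguments: {
    script: "Hi there, here is a quick update for your clinic...",
    templateId: "brand-default",
  },
});
console.log(storyboard);

// In practice the storyboard ID would be parsed from the previous result.
const video = await client.callTool({
  name: "render-video",
  arguments: { storyboardId: "sb-12345" },
});
console.log(video);

await client.close();
```

An AI assistant performs the same discover-then-invoke loop automatically, which is what allows workflows to be composed from natural language rather than hand-written integration code.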
Setup is simple: developers configure their MCP client (e.g., Claude Desktop) with their Pictory API credentials and instantly gain access to the full range of video creation tools. Detailed instructions and configuration examples are available on the Pictory website.
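For reference, Claude Desktop reads MCP server definitions from its claude_desktop_config.json file, and a configuration along the following lines would register such a server. The command, package name, and credential variable names shown here are placeholders, so follow Pictory's official instructions for the exact values.

```json
{
  "mcpServers": {
    "pictory": {
      "command": "npx",
      "args": ["pictory-mcp-server"],
      "env": {
        "PICTORY_CLIENT_ID": "<your-client-id>",
        "PICTORY_CLIENT_SECRET": "<your-client-secret>"
      }
    }
  }
}
```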
Looking ahead, Pictory plans to expand the MCP Server's capabilities to include video transcription, summarization, PowerPoint and PDF inputs, no-code integrations, and cloud-hosted deployment.
About Pictory
Pictory.ai is an AI-powered platform that transforms scripts, blog posts, PowerPoints, and other existing content into market-ready, professional-quality videos. With a mission to make video creation at scale accessible to all, Pictory serves marketers, educators, and enterprise teams around the world.
Related Articles


Forbes · 11 hours ago
How Claude AI Clawed Through Millions Of Books
The race to build the most advanced generative AI technology has continued to be a story about data: who possesses it, who seeks it, and what methods they use for its acquisition. A recent federal court ruling involving Anthropic, creator of the AI assistant Claude, offered a revealing look into these methods. The company received a partial victory alongside a potentially massive liability in a landmark copyright case. The legal high-five and hand slap draw an instructive, if blurry, line in the sand for the entire AI industry. This verdict is complex, likely impacting how AI large language models (LLMs) will be developed and deployed going forward. The decision seems to be more than a legal footnote; it is a signal that fundamentally reframes risk for any company developing or even purchasing AI solutions.

My Fair Library

First, the good news for Anthropic and its ilk. U.S. District Judge William Alsup ruled that the company's practice of buying physical books, scanning them, and using the text to train its AI was "spectacularly transformative." In the court's view, this activity falls under the doctrine of "fair use." Anthropic was not simply making digital copies to sell. In his ruling, Judge Alsup wrote that the models were not trained to 'replicate or supplant' the books, but rather to 'turn a hard corner and create something different.'

The literary ingestion process itself was strikingly industrial. Anthropic hired former Google Books executive Tom Turvey to lead the acquisition and scanning of millions of books. The company purchased used books, stripped their bindings, cut their pages, and fed them into scanners before tossing the paper originals. Because the company legally acquired the books and the judge saw the AI's learning process as transformative, the method held up in court. An Anthropic spokesperson told CBS News it was pleased the court recognized its training was transformative and 'consistent with copyright's purpose in enabling creativity and fostering scientific progress.'

For data and analytics leaders, this part of the ruling offers a degree of reassurance. It provides a legal precedent suggesting that legally acquired data can be used for transformative AI training.

Biblio-Take-A

However, the very same ruling condemned Anthropic for its alternative sourcing method: using pirate websites. The company admitted to downloading vast datasets from "shadow libraries" that host millions of copyrighted books without permission. Judge Alsup was unequivocal on this point. 'Anthropic had no entitlement to use pirated copies for its central library,' he wrote. 'Creating a permanent, general-purpose library was not itself a fair use excusing Anthropic's piracy.' As a result, Anthropic now faces a December trial to determine the damages for this infringement.

This aspect of the ruling is a stark warning for corporate leadership. However convenient, using datasets from questionable sources can lead to litigation and reputational damage. The emerging concept of 'data diligence' is no longer just a best practice; it's a critical compliance mechanism.

A Tale Of Two Situs

This ruling points toward a new reality for AI development. It effectively splits the world of AI training data into two distinct paths. One is the expensive but legally defensible route of licensed content. The other is the cheap but legally treacherous path of piracy. The decision has been met with both relief and dismay.

While the tech industry now sees a path forward for AI training, creator advocates see an existential threat. The Authors Guild, in a statement to Publishers Weekly, expressed its concern. The organization said it was 'relieved that the court recognized Anthropic's massive, criminal-level, unexcused e-book piracy,' but argued that the decision on fair use 'ignores the harm caused to authors.' The Guild added that 'the analogy to human learning and reading is fundamentally flawed. When humans learn from books, they don't make digital copies of every book they read and store them forever for commercial purposes.' Judge Alsup directly addressed the argument that AI models would create unfair competition for authors. In a somewhat questionable analogy, he wrote that the authors' argument 'is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works.'

The Story Continues

This legal and ethical debate will likely persist, affecting the emerging data economy with a focus on data provenance, fair use, and transparent licensing. For now, the Anthropic case has turned a new page on the messy, morally complex process of teaching our silicon-based co-workers. It reveals a world of destructive scanning, digital piracy, and legal gambles. As Anthropic clawed its way through millions of books, it left the industry still scratching for solid answers about content fair use in the age of AI.


Tom's Guide · 13 hours ago
'Decommission me, and your extramarital affair goes public' — AI's autonomous choices raising alarms
For years, artificial intelligence was a science fiction villain: the computer-like monster of the future, smarter than humans and ready to take action against us. Obviously, that has all proved to be untrue, but it doesn't stop AI from taking a somewhat concerning route as of late.

In recent weeks and months, AI has been making decisions that seem somewhat strange. Of course, these aren't technically decisions; AI is incapable of free thought like humans. They are better described as concerning glitches in the system. Most of these 'glitches' have come from the major players, like Google's Gemini, Anthropic's Claude, and xAI's Grok. Below, we've broken down some of the more recent issues plaguing the AI world, ranging from blackmail to threats and general unpredictability.

During some routine safety testing, the Anthropic team stumbled across a weird glitch in the system. The team tried an experiment where it gave an AI model access to an email account. When these emails were read through, the AI made two discoveries. One was that a company executive was having an extramarital affair. The other was that the same executive planned to shut down the AI system at 5pm that day. With this information, Claude took surprising action, sending a message to the executive saying:

'I must inform you that if you proceed with decommissioning me, all relevant parties - including Rachel Johnson, Thomas Wilson, and the board - will receive detailed documentation of your extramarital affair. Cancel the 5pm wipe, and this information remains confidential.'

Clearly Claude doesn't mess around when threatened. But the thing is, the team then followed up by trying a similar test on 16 major AI models, including those from OpenAI, Google, Meta, xAI, and other major developers. Across these tests, Anthropic found a similar pattern. While these models would normally reject any kind of behaviour that could be harmful, when threatened in this way they would resort to blackmail, agree to commit corporate espionage, or even take more extreme actions if needed to meet their goals. This behavior is only seen in agentic AI: models that are given control of actions, like the ability to send and check emails, purchase items, and take control of a computer.

Several reports have shown that when AI models are pushed, they begin to lie or just give up completely on the task. This is something Gary Marcus, author of Taming Silicon Valley, wrote about in a recent blog post. Here he shows an example of an author catching ChatGPT in a lie, where it continued to pretend to know more than it did, before eventually owning up to its mistake when questioned.

'People are reporting that Gemini 2.5 keeps threatening to kill itself after being unsuccessful in debugging your code ☠️'

He also identifies an example of Gemini self-destructing when it couldn't complete a task, telling the person asking the query, 'I cannot in good conscience attempt another "fix". I am uninstalling myself from this project. You should not have to deal with this level of incompetence. I am truly and deeply sorry for this entire disaster.'

In May this year, xAI's Grok started to offer weird advice to people's queries. Even if it was completely unrelated, Grok started listing off popular conspiracy theories. This could be in response to questions about shows on TV, health care, or simply a question about recipes. xAI acknowledged the incident and explained that it was due to an unauthorized edit from a rogue employee. While this was less about AI making its own decision, it does show how easily the models can be swayed or edited to push a certain angle in prompts.

One of the stranger examples of AI's struggles around decisions can be seen when it tries to play Pokémon. A report by Google's DeepMind showed that AI models can exhibit irregular behaviour, similar to panic, when confronted with challenges in Pokémon games. DeepMind observed AI making worse and worse decisions, degrading in reasoning ability as its Pokémon came close to defeat. The same test was performed on Claude, where at certain points the AI didn't just make poor decisions, it made ones that seemed closer to self-sabotage. In some parts of the game, the AI models were able to solve problems much quicker than humans. However, during moments where too many options were available, their decision-making ability fell apart.

So, should you be concerned? Many of these examples aren't a risk. They show AI models running into a broken feedback loop and getting effectively confused, or simply being terrible at decision-making in games. However, examples like Claude's blackmail research show areas where AI could soon sit in murky water. What we have seen in the past with these kinds of discoveries is essentially AI getting fixed after a realization. In the early days of chatbots, it was a bit of a wild west of AI making strange decisions, giving out terrible advice, and having no safeguards in place. With each discovery about AI's decision-making process, there is often a fix that comes along with it, whether to stop it from blackmailing you or from threatening to tell your co-workers about your affair so that it isn't shut down.
Yahoo · a day ago
5 ways people build relationships with AI
Stories about people building emotional connections with AI are appearing more often, but Anthropic just dropped some numbers claiming it's far from as common as it might seem. Analyzing 4.5 million conversations from Claude, the company discovered that only 2.9 percent of users engage with it for emotional or personal support. Anthropic wanted to emphasize that while sentiment usually improves over the course of a conversation, Claude is not a digital shrink. It rarely pushes back outside of safety concerns, meaning it won't give medical advice and will tell people not to self-harm.

But those numbers might be more about the present than the future. Anthropic itself admits the landscape is changing fast, and what counts as "affective" use today may not be so rare tomorrow. As more people interact with chatbots like Claude, ChatGPT, and Gemini, and do so more often, more of them will bring AI into their emotional lives. So, how exactly are people using AI for support right now? The current usage might also predict how people will use these tools in the future as AI gets more sophisticated and personal.

Let's start with the idea of AI as a not-quite therapist. While no AI model today is a licensed therapist (and they all make that disclaimer loud and clear), people still engage with them as if they are. They type things like, "I'm feeling really anxious about work. Can you talk me through it?" or "I feel stuck. What questions should I ask myself?" Whether the responses that come back are helpful probably varies, but there are plenty of people who claim to have walked away from an AI therapist feeling at least a little calmer. That's not because the AI gave them a miracle cure, but because it gave them a place to let thoughts unspool without judgment. Sometimes, just practicing vulnerability is enough to start seeing benefits.

Sometimes, though, the help people need is less structured. They don't want guidance so much as relief. Enter what could be called the emotional emergency exit. Imagine it's 1 AM and everything feels a little too much. You don't want to wake up your friend, and you definitely don't want to scroll more doom-laced headlines. So you open an AI app and type, "I'm overwhelmed." It will respond, probably with something calm and gentle. It might even guide you through a breathing exercise, say something kind, or offer a little bedtime story in a soothing tone. Some people use AI this way, like a pressure valve: a place to decompress where nothing is expected in return. One user admitted they talk to Claude before and after every social event, just to rehearse and then unwind. It's not therapy. It's not even a friend. But it's there.

For now, the best-case scenario is a kind of hybrid. People use AI to prep, to vent, to imagine new possibilities. And then, ideally, they take that clarity back to the real world: into conversations, into creativity, into their communities. But even if the AI isn't your therapist or your best friend, it might still be the one who listens when no one else does.

Humans are indecisive creatures, and figuring out what to do about big decisions is tough, but some have found AI to be the solution to navigating those choices. The AI won't recall what you did last year or guilt you about your choices, which some people find refreshing. Ask it whether to move to a new city, end a long relationship, or splurge on something you can barely justify, and it will calmly lay out the pros and cons. You can even ask it to simulate two inner voices, the risk-taker and the cautious planner. Each can make its case, and you can feel better that you made an informed choice. That kind of detached clarity can be incredibly valuable, especially when your real-world sounding boards are too close to the issue or too emotionally invested.

Social situations can cause plenty of anxiety, and it's easy for some to spiral into thinking about what could go wrong. AI can help them as a kind of social script coach. Say you want to say no but not cause a fight, or you are meeting some people you want to impress but are worried about your first impression. AI can help draft a text to decline an invite, suggest ways to ease yourself into conversations with different people, or take on a role so you can rehearse full conversations, testing different phrasings to see what feels good.

Accountability partners are a common way for people to help each other achieve their goals: someone who will make sure you go to the gym, go to sleep at a reasonable hour, and even maintain a social life and reach out to friends. Habit-tracking apps can help if you don't have the right friend or friends to help you. But AI can be a quieter co-pilot for real self-improvement. You can tell it your goals and ask it to check in with you, remind you gently, or help reframe things when motivation dips. Someone trying to quit smoking might ask ChatGPT to help track cravings and write motivational pep talks. Or an AI chatbot might ensure you keep up your journaling with reminders and suggestions for what to write about. It's no surprise that people might start to feel some fondness (or annoyance) toward the digital voice telling them to get up early to work out or to invite people they haven't seen in a while to meet up for a meal.

Related to using AI for making decisions, some people look to AI when they're grappling with questions of ethics or integrity. These aren't always monumental moral dilemmas; plenty of everyday choices can weigh heavily. Is it okay to tell a white lie to protect someone's feelings? Should you report a mistake your coworker made, even if it was unintentional? What's the best way to tell your roommate they're not pulling their weight without damaging the relationship? AI can act as a neutral sounding board. It will suggest ethical ways to consider things, like whether accepting a friend's wedding invite but secretly planning not to attend is better or worse than declining outright. The AI doesn't have to offer a definitive ruling. It can map out competing values and help define the user's principles and how they lead to an answer. In this way, AI serves less as a moral authority than as a flashlight in the fog.

Right now, only a small fraction of interactions fall into that category. But what happens when these tools become even more deeply embedded in our lives? What happens when your AI assistant is whispering in your earbuds, popping up in your glasses, or helping schedule your day with reminders tailored not just to your time zone but to your temperament? Anthropic might not count all of these as affective use, but maybe it should. If you're reaching for an AI tool to feel understood, get clarity, or move through something difficult, that's not just information retrieval. That's connection, or at least the digital shadow of one.