Claude AI now integrates with Notion, Canva, Figma, Stripe and more in major Anthropic update


Mint · 15-07-2025
AI firm Anthropic has announced a major upgrade to its Claude assistant, unveiling a new directory of integrated tools designed to enhance productivity and streamline workflows. The update allows users to connect Claude to a wide range of third-party platforms such as Notion, Canva, Stripe, Figma, and more, effectively transforming the AI assistant into a more context-aware collaborator.
In a blog post published on Monday, Anthropic introduced the new feature as a leap towards intelligent, task-oriented AI support. 'Now Claude can have access to the same tools, data, and context that you do,' the company stated, underscoring the shift from a basic assistant to a fully-fledged digital co-worker.
Traditionally, users have needed to repeatedly brief AI tools on their ongoing projects, timelines, and preferred software. With the newly introduced connectors, however, Claude can directly access user-approved services and data, thereby reducing friction and enabling quicker, more relevant outputs.
For instance, users can now ask Claude to 'write release notes for our latest sprint from Linear,' and the assistant will automatically extract ticket information to generate a polished document. In another example, a creative brief can be quickly turned into a branded Canva post, or a Figma design can be transformed into usable code, all without switching platforms.
Anthropic highlighted several key use cases:
• Creating organised Notion roadmaps from AI-guided planning sessions.
• Generating payment summaries using live Stripe data.
• Accessing local applications like Prisma and Socket via the new Claude Desktop app.
While the directory of tools is accessible to all Claude users on web and desktop, certain integrations, particularly with remote apps, are exclusive to subscribers on paid plans. Extensions for locally installed applications additionally require the Claude Desktop app.
Users can explore the full list of supported tools by visiting claude.ai/directory.

Related Articles

Is this the future of app-building? Google Opal lets you go from idea to web app instantly

Mint · 33 minutes ago

Google has rolled out a new experiment called Opal, an AI-driven tool for anyone interested in building web applications with plain language instructions. Currently in testing for US users through Google Labs, Opal is Google's latest move to make the process of app creation available to people of all skill levels.

Unlike traditional coding, which usually demands a working knowledge of at least one programming language, Opal lets users start by typing a simple description of the app they want. The system processes these instructions and produces a functional web app, giving the user a visual overview of how information moves through the app from start to finish. The interface is clean and easy to follow: users see steps and outcomes in a way that removes much of the confusion typical of normal code editors.

Once an app is created, the editing does not have to stop. Opal offers a set of tools in its editor that let users update their prompts, add steps, or try out different logical flows right in the visual workspace. There is no need to write or rework blocks of code; the changes update the app in real time and quickly show the results in the development panel.

For those who want to work with something already made, Opal includes a gallery of existing apps. Users can open these, study how they work, and remix them to make something new. This approach encourages sharing and keeps the process moving in a creative direction.

After an app is ready, Opal makes it easy to publish and share. Users get a public link that others with Google accounts can use to test, give feedback, or use the app themselves. The sharing function works well for teams, classrooms, or anyone looking to build and distribute small, practical web tools.

One of Opal's main strengths is that it takes away the fears people often have about coding. The visual display and direct use of language help those unfamiliar with programming take their ideas from thought to working tool without intimidating obstacles. At the same time, experienced users can focus on the actual logic and design of the app rather than getting stuck on technical setbacks.

Many companies have recently invested in similar tools to lower the barriers to app creation for everyday users. Google Opal now joins other platforms, like those from Canva and Figma, that focus on direct, prompt-based and visual workflows. These tools all try to include more people in the tech space, making new app ideas possible for those with little or no coding history.

Perplexity's Mac app can now perform system tasks using MCP: What it means

Business Standard · 3 hours ago

The Perplexity app for macOS has added support for Model Context Protocol (MCP), allowing users to connect the app to various system-level services such as Apple Notes, Reminders, and Calendar. According to Perplexity, the update enables the AI assistant to perform basic tasks beyond search queries, such as creating reminders or retrieving data from Google Drive.

How MCP works in Perplexity for Mac

Perplexity's integration of MCP means the app can now connect to local tools via community-developed 'connectors.' These connectors act as instructions that tell the AI how to interact with specific applications, such as searching Apple Notes or editing calendar entries. Because apps on the Mac App Store are sandboxed and cannot access other parts of the system directly, users are required to install a separate helper tool, PerplexityXPC, to enable these functions. Once the helper is installed, users can configure new connectors via the app's settings, using commands sourced from the connector's documentation, such as those hosted on GitHub. A successful setup allows the AI to execute local tasks using natural language queries.

How to set up and configure local MCP on macOS

In its blog post, Perplexity laid out the steps to activate local MCPs for the Perplexity Mac app. Here are the steps users need to follow:

1. Open your account settings and click on Connectors.
2. Before you can add MCP connectors, install the helper application PerplexityXPC so that Perplexity can securely connect to your local MCP servers.
3. Once the helper is installed, go back to the Connectors settings and click Add Connector.
4. On the 'Add Connector' page, add an MCP connector under the 'Simple' tab.
5. Add any name for 'Server Name', for example, MCP for AppleScript.
6. Add the command that is used to run the MCP server. This can usually be found in the README of the MCP server. Make sure you have any requirements for the MCP server installed, for example brew install node if you need npx. Ask Perplexity if you need help installing requirements on your computer.
7. Enter the command after installing the requirements. For example: npx -y @peakmojo/applescript-mcp.
8. Click 'Save' and wait for the MCP server to show 'Running' status in the Connectors list, confirming the server is running.
9. Go to the Perplexity homepage and toggle your MCP on underneath 'Sources'.
10. Test your MCP server: ask a new command in Perplexity that references the MCP server, like 'check my Mac calendar'. This should run one of the MCP server's tools and prompt you for confirmation.

What is MCP

Model Context Protocol, or MCP, is a new standard proposed by AI company Anthropic. It is designed to serve as a communication bridge between AI systems and traditional software environments, similar to how HTTP functions for websites or SMTP for email. According to an article by 9To5Mac, MCP has seen early adoption across the industry, including by companies such as Zapier, Google, and Salesforce. The protocol allows AI assistants to interface directly with APIs, local applications, and services in a structured and secure manner.
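For readers curious what one of those connector commands actually launches: an MCP server is a small program that exposes named 'tools' to the AI client, typically over stdio. Below is a minimal, hypothetical sketch using the official MCP Python SDK (the mcp package); the server name, the two tools, and the in-memory note store are illustrative assumptions, not any connector Perplexity ships.

    # Minimal, hypothetical MCP server sketch (pip install "mcp[cli]").
    # The tools and in-memory store are illustrative assumptions,
    # standing in for a real integration such as Apple Notes.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-notes")  # name the client displays for this server

    _notes: list[str] = []  # toy in-memory store

    @mcp.tool()
    def add_note(text: str) -> str:
        """Save a note and report how many are stored."""
        _notes.append(text)
        return f"Saved. {len(_notes)} note(s) stored."

    @mcp.tool()
    def search_notes(query: str) -> list[str]:
        """Return notes containing the query string."""
        return [n for n in _notes if query.lower() in n.lower()]

    if __name__ == "__main__":
        mcp.run()  # serves over stdio, the transport desktop clients launch

A server like this would be registered in the Connectors settings with a command such as python demo_notes_server.py; the assistant then decides when to call add_note or search_notes and, as described above, prompts for confirmation before running them.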

The chatbot culture wars are here

Indian Express · 5 hours ago

For much of the past decade, America's partisan culture warriors have fought over the contested territory of social media — arguing about whether the rules on Facebook and Twitter were too strict or too lenient, whether YouTube and TikTok censored too much or too little and whether Silicon Valley tech companies were systematically silencing right-wing voices.

Those battles aren't over. But a new one has already started. This fight is over artificial intelligence, and whether the outputs of leading AI chatbots such as ChatGPT, Claude and Gemini are politically biased.

Conservatives have been taking aim at AI companies for months. In March, House Republicans subpoenaed a group of leading AI developers, probing them for information about whether they colluded with the Biden administration to suppress right-wing speech. And this month, Missouri's Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft and OpenAI are leading a 'new wave of censorship' by training their AI systems to give biased responses to questions about President Donald Trump.

On Wednesday, Trump himself joined the fray, issuing an executive order on what he called 'woke AI.' 'Once and for all, we are getting rid of woke,' he said in a speech. 'The American people do not want woke Marxist lunacy in the AI models, and neither do other countries.' The order was announced alongside a new White House AI action plan that will require AI developers that receive federal contracts to ensure that their models' outputs are 'objective and free from top-down ideological bias.'

Republicans have been complaining about AI bias since at least early last year, when a version of Google's Gemini AI system generated historically inaccurate images of the American Founding Fathers, depicting them as racially diverse. That incident drew the fury of online conservatives, and led to accusations that leading AI companies were training their models to parrot liberal ideology.

Since then, top Republicans have mounted pressure campaigns to try to force AI companies to disclose more information about how their systems are built, and tweak their chatbots' outputs to reflect a broader set of political views. Now, with the White House's executive order, Trump and his allies are using the threat of taking away lucrative federal contracts — OpenAI, Anthropic, Google and xAI were recently awarded Defense Department contracts worth as much as $200 million — to try to force AI companies to address their concerns.

The order directs federal agencies to limit their use of AI systems to those that put a priority on 'truth-seeking' and 'ideological neutrality' over disfavored concepts such as diversity, equity and inclusion. It also directs the Office of Management and Budget to issue guidance to agencies about which systems meet those criteria.

If this playbook sounds familiar, it's because it mirrors the way Republicans have gone after social media companies for years — using legal threats, hostile congressional hearings and cherry-picked examples to pressure companies into changing their policies, or removing content they don't like. Critics of this strategy call it 'jawboning,' and it was the subject of a high-profile Supreme Court case last year. In that case, Murthy v. Missouri, it was Democrats who were accused of pressuring social media platforms like Facebook and Twitter to take down posts on topics such as the coronavirus vaccine and election fraud, with Republicans challenging their tactics as unconstitutional. (In a 6-3 decision, the court rejected the challenge, saying the plaintiffs lacked standing.)

Now, the parties have switched sides. Republican officials, including several Trump administration officials I spoke to who were involved in the executive order, are arguing that pressuring AI companies through the federal procurement process is necessary to stop AI developers from putting their thumbs on the scale.

Is that hypocritical? Sure. But recent history suggests that working the refs this way can be effective. Meta ended its long-standing fact-checking program this year, and YouTube changed its policies in 2023 to allow more election denial content. Critics of both changes viewed them as capitulation to right-wing critics.

This time around, the critics cite examples of AI chatbots that seemingly refuse to praise Trump, even when prompted to do so, or Chinese-made chatbots that refuse to answer questions about the 1989 Tiananmen Square massacre. They believe developers are deliberately baking a left-wing worldview into their models, one that will be dangerously amplified as AI is integrated into fields such as education and health care.

There are a few problems with this argument, according to legal and tech policy experts I spoke to. The first, and most glaring, is that pressuring AI companies to change their chatbots' outputs may violate the First Amendment. In recent cases like Moody v. NetChoice, the Supreme Court has upheld the rights of social media companies to enforce their own content moderation policies. And courts may reject the Trump administration's argument that it is trying to enforce a neutral standard for government contractors, rather than interfering with protected speech.

'What it seems like they're doing is saying, "If you're producing outputs we don't like, that we call biased, we're not going to give you federal funding that you would otherwise receive,"' Genevieve Lakier, a law professor at the University of Chicago, said. 'That seems like an unconstitutional act of jawboning.'

There is also the problem of defining what, exactly, a 'neutral' or 'unbiased' AI system is. Today's AI chatbots are complex, probability-based systems that are trained to make predictions, not give hard-coded answers. Two ChatGPT users may see wildly different responses to the same prompts, depending on variables like their chat histories and which versions of the model they're using. And testing an AI system for bias isn't as simple as feeding it a list of questions about politics and seeing how it responds.

Samir Jain, a vice president of policy at the Center for Democracy and Technology, a nonprofit civil liberties group, said the Trump administration's executive order would set 'a really vague standard that's going to be impossible for providers to meet.'

There is also a technical problem with telling AI systems how to behave. Namely, they don't always listen. Just ask Elon Musk. For years, Musk has been trying to create an AI chatbot, Grok, that embodies his vision of a rebellious, 'anti-woke' truth seeker. But Grok's behavior has been erratic and unpredictable. At times, it adopts an edgy, far-right personality, or spouts antisemitic language in response to user prompts. (For a brief period last week, it referred to itself as 'Mecha-Hitler.') At other times, it acts like a liberal — telling users, for example, that human-made climate change is real, or that the right is responsible for more political violence than the left.

Recently, Musk has lamented that AI systems have a liberal bias that is 'tough to remove, because there is so much woke content on the internet.'

Nathan Lambert, a research scientist at the Allen Institute for AI, told me that 'controlling the many subtle answers that an AI will give when pressed is a leading-edge technical problem, often governed in practice by messy interactions made between a few earlier decisions.'

It's not, in other words, as straightforward as telling an AI chatbot to be less woke. And while there are relatively simple tweaks that developers could make to their chatbots — such as changing the 'model spec,' a set of instructions given to AI models about how they should act — there's no guarantee that these changes will consistently produce the behavior conservatives want.

But asking whether the Trump administration's new rules can survive legal challenges, or whether AI developers can actually build chatbots that comply with them, may be beside the point. These campaigns are designed to intimidate. And faced with the potential loss of lucrative government contracts, AI companies, like their social media predecessors, may find it easier to give in than to fight.

'Even if the executive order violates the First Amendment, it may very well be the case that no one challenges it,' Lakier said. 'I'm surprised by how easily these powerful companies have folded.'
