
Figma releases new AI-powered tools for creating sites, app prototypes, and marketing assets
The company's website creation tool is called Figma Sites. Figma said that designers often build prototypes of what a site should look like within Figma; with the new AI-powered tool, they can now create the websites themselves and publish them. Once a site is generated, collaborators can change its elements through an editor, without any prompting.
Users can also add transitions, animations, and scroll effects while making the site responsive. Figma is also adding the ability to publish blog posts directly from Sites. That means Sites will have a content management system (CMS) baked in, an upcoming feature that lets users edit posts within the design of a blog and manage other assets such as thumbnails and slugs.
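To make the CMS idea concrete, a post entry in such a system would presumably carry the fields the announcement mentions. The sketch below is purely illustrative; the field names are assumptions, not Figma's actual schema.

```typescript
// Hypothetical shape of a blog post entry in a Sites-style CMS.
// Field names are assumptions for illustration, not Figma's real data model.
interface BlogPost {
  title: string;
  slug: string;        // URL segment, e.g. "figma-sites-launch"
  thumbnail: string;   // URL of the post's thumbnail asset
  body: string;        // post content, edited within the blog's design
  publishedAt?: Date;  // left unset while the post is still a draft
}
```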
For interactive elements like stock tickers, users can add custom code or have AI generate it for them.
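As a rough illustration, the custom code behind such an element could be a small script that fetches quotes and updates the page. The sketch below is hypothetical: the endpoint, the element ID, and the response shape are assumptions, not part of Figma's tooling.

```typescript
// Minimal sketch of a custom stock-ticker embed (hypothetical endpoint and markup).
// Assumes the page contains an element like <span id="ticker-AAPL"></span>.

interface Quote {
  symbol: string;
  price: number;
}

async function fetchQuote(symbol: string): Promise<Quote> {
  // Placeholder endpoint; a real site would point this at its own quote service.
  const res = await fetch(`https://example.com/api/quotes/${symbol}`);
  if (!res.ok) throw new Error(`Quote request failed: ${res.status}`);
  return res.json() as Promise<Quote>;
}

async function renderTicker(symbol: string): Promise<void> {
  const el = document.getElementById(`ticker-${symbol}`);
  if (!el) return;
  try {
    const quote = await fetchQuote(symbol);
    el.textContent = `${quote.symbol} $${quote.price.toFixed(2)}`;
  } catch {
    el.textContent = `${symbol} n/a`; // fall back gracefully if the quote fails
  }
}

// Refresh the ticker every 30 seconds.
renderTicker("AAPL");
setInterval(() => renderTicker("AAPL"), 30_000);
```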
Figma Make, meanwhile, is a similar AI-powered tool geared more toward ideation and prototyping. Users can input a prompt to create a web application. The resulting prototype is collaborative, and users can prompt the assistant to change or add elements. If there is a developer on the team, they can also modify the code directly to make the necessary changes.
Users can also generate small interactive elements, such as a clock, and later embed them in pages published through Figma Sites.
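For a sense of scale, an element like that clock could amount to only a few lines of script. The snippet below is a hypothetical sketch of such a widget; the element ID is assumed rather than taken from anything Figma generates.

```typescript
// Minimal sketch of an embeddable clock widget (hypothetical markup).
// Assumes the page contains an element like <span id="clock"></span>.

function renderClock(): void {
  const el = document.getElementById("clock");
  if (!el) return;
  el.textContent = new Date().toLocaleTimeString();
}

// Render immediately, then update once per second.
renderClock();
setInterval(renderClock, 1000);
```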
Yuki Yamashita, chief product officer at Figma, said that both products share a lot of features and underlying technology.
'We want to enable high-fidelity prototyping with Figma, especially with Figma Make. You can add more data to it and try to see how viable an idea is in terms of final implementation. Whereas Figma Site is useful for a marketing and design team when they exactly know how a site should look and take full control of that,' Yamashita told TechCrunch, describing how the two products differ.
Image Credits: Figma
Multiple companies across different sectors are looking for ways to let people create interactive experiences using AI. Website hosting providers such as Squarespace, Wix, WordPress, and Hostinger have released tools that let users build websites with AI. Meanwhile, tools like Replit and Lovable let users create apps or prototypes without coding knowledge. Last month, even Canva released a way to create interactive experiences within its designs with Canva Code.
This isn't Figma's first foray into prototyping, though. Last year, it released a Make Design feature, which it had to pull after users accused the company of heavily training the tool on existing apps.
What's more, Figma is releasing a new tool for marketers called Figma Buzz. With it, marketers can easily turn brand-specific templates created by designers into new creatives. They can also insert AI-generated images or swap out the backgrounds of certain assets, and they can create assets in bulk using data from sources like spreadsheets.
The company is also launching a tool called Figma Draw for vector editing and illustrations. Yamashita said that designers often had to export their vector designs out of Figma to make edits. Figma is now adding features such as text on a path, pattern fill, brushes, multi-vector editing, noise and texture, and lasso selection to the Draw product.
Image Credits: Figma
Figma launched its Slides tool for creating presentations last year. With the new asset creation and drawing tools, the company appears to be moving onto the turf of creative suites such as Adobe Creative Cloud and Canva. Yamashita, however, denied that Figma is directly competing with these tools. He said that Figma is in the business of building digital products, and that a third of the company's users are developers, thanks to tools like Dev Mode.
The company is also announcing a new plan called a content seat, starting at $8 per month, which gives users access to Figma Buzz, Slides, FigJam, and the Sites CMS.


