When Pinterest needs new AI tools, employees can have a part in creating them
Pinterest, a social media company with about 4,700 employees, has sought to address such concerns by keeping employees closely involved in the development of internal AI tools, so those tools are seen as efficient and helpful rather than mandated from the top down. Key to that mission is Pinterest's annual Makeathon, now in its 14th year. The employee-led competition used to be viewed mostly as a fun way to recommend fixes, said Anirudh Koul, Pinterest's generative AI tech lead. Now, in the age of AI, its usefulness has exploded.
"The overarching goal is ground-up innovation," Koul told Business Insider. "We realized that if we can give the employees the opportunity and freedom to tell us what must be done, and give them some space to showcase working proof of their concept, we might find new innovations at a much faster rate."
Inside Pinterest's companywide hackathon
Makeathon is Pinterest's version of a hackathon — an event at which people work together to create new software quickly. Hackathons are designed to spark new ideas and increase employee engagement, said Brandon Kessler of Devpost, a digital platform for running hackathons. Since 2022's AI boom, hackathon demand has exploded, Kessler told BI.
Discussing hackathons' appeal, Kessler said the events "get people excited because they get to build something they want, as opposed to, 'Hey, all, please use this tool.'"
"You get people learning these new tools," he continued, "building stuff that helps the business, and collaborating and having fun — all within a short period of time."
Pinterest employees witnessed this type of quick development in early 2023, just a few months after ChatGPT's release. Pinterest's senior director of engineering, Anthony Suarez, helped gather a handful of engineers for a mini hackathon that led to the creation of an internal chatbot tool. By the official Makeathon in July, Pinterest's now-foundational plug-in AI system was ready for wider use.
At Pinterest, hackathon projects start on an internal company page where employees across departments can log pitches. In the week before Makeathon, Koul's team hosts classes on how generative AI works and how to write prompts. There's also a class on no-code app-building tools so that nontechnical employees can still build AI solutions.
Then, teams form across departments around an idea. Suarez collaborated with seven Makeathon teams last cycle, mostly made up of colleagues he had never worked with before. Teams also get help from Koul's "hack doctors," staff from across the company who specialize in areas such as engineering, design, and video editing. The hack doctors help refine ideas and prepare teams to take questions from executives. Last year, just under 94% of teams worked with a hack doctor.
"We usually find that a good chunk of participants are actually not from engineering," Koul said. "They pair up with engineers to bring their ideas to the next level. We've had teams where people from six different countries come together."
Each team produces a video pitch, which colleagues up to the executive level can watch and vote on. Makeathon is strategically scheduled for late summer so any resulting tools can be incorporated into Pinterest's companywide planning period in September and October, Suarez told BI. He estimated that more than half of these Makeathon projects get funded during this cycle and called the event an "innovation flywheel."
How a Makeathon idea becomes an AI-tool reality
During the 2023 Makeathon, one of Pinterest's sales employees had an idea: What if AI could collect and search through all the company's internal documents?
The sales employee recruited a 14-person team, including Charlie Gu, a senior engineering manager on Pinterest's data team. Gu said he envisioned the tool as a Slack-based chatbot employees could turn to instead of bugging their colleagues. The team knew, however, that some existing documentation wouldn't be up to date when the chatbot pulled it in.
"We came up with a system where you can report answers and create new documentation on the fly," Gu said. The team pitched, built, and eventually implemented the document finder across the company.
The tool now answers an estimated 4,000 questions a month on average, according to Pinterest. It was also designed to access thousands of internal documents from Google Docs, Slack threads, and slide decks, said Koul, who is quite passionate about Makeathon. (He called over shaky service from a Mount Everest base camp to rave about it.)
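Pinterest hasn't published how the document finder works under the hood, but tools like it typically pair a retrieval step over indexed documents with a language model that drafts the answer, plus the correction loop Gu describes. The Python sketch below is a minimal illustration under those assumptions: Doc, search_docs, llm_answer, and flag_outdated are hypothetical stand-ins, and the word-overlap ranking is a toy placeholder for real embedding search.

# Minimal sketch, assuming a retrieval-then-answer design. All names here are
# illustrative; this is not Pinterest's actual implementation.
from dataclasses import dataclass

@dataclass
class Doc:
    source: str   # e.g. "Google Doc", "Slack thread", "slide deck"
    url: str
    text: str

def overlap(a: str, b: str) -> float:
    # Toy relevance score: count of shared words. A real system would use embeddings.
    return float(len(set(a.lower().split()) & set(b.lower().split())))

def search_docs(question: str, index: list[Doc], top_k: int = 3) -> list[Doc]:
    # Return the documents most relevant to the question.
    scored = sorted(index, key=lambda d: overlap(question, d.text), reverse=True)
    return scored[:top_k]

def llm_answer(prompt: str) -> str:
    # Stand-in for a call to whatever model backs the chatbot.
    return f"(model response to: {prompt[:80]}...)"

def answer_question(question: str, index: list[Doc]) -> str:
    # Retrieve relevant docs, then ask the model to answer from that context only.
    hits = search_docs(question, index)
    context = "\n\n".join(f"[{d.source}] {d.text}" for d in hits)
    return llm_answer(f"Answer using only this context:\n{context}\n\nQ: {question}")

def flag_outdated(question: str, correction: str, index: list[Doc]) -> None:
    # A reported correction becomes a new retrievable document.
    index.append(Doc(source="user correction", url="internal://corrections",
                     text=f"Q: {question}\nCorrected answer: {correction}"))

In a setup like this, a reported answer simply turns into another document the bot can retrieve next time, which is one plausible reading of "create new documentation on the fly."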
Makeathon also encouraged some employees to come up with useful AI prompts. In 2024, Koul's team posed a challenge: Who could come up with the best questions to get Pinterest's chatbot to produce the most accurate and precise answers? Gu said the challenge drew about 200 participants.
The employees' prompt writing fed Pinterest's broader goal of encouraging engagement with AI. The effort also led Pinterest to integrate AI agents into the process of writing more precise prompts.
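The article doesn't say how those agents work, but a common pattern is to have a model rewrite a vague question into a sharper prompt before it reaches the chatbot. The short sketch below illustrates that pattern only; the instructions and function names are assumptions, not Pinterest's workflow.

# Illustrative sketch: an agent that tightens a question before the chatbot sees it.
REFINEMENT_INSTRUCTIONS = (
    "Rewrite the user's question so it is specific and answerable: "
    "name the team, system, or time period if implied, and remove ambiguity."
)

def rewrite_prompt(user_question: str, call_llm) -> str:
    # Ask a model to produce a more precise version of the question.
    return call_llm(f"{REFINEMENT_INSTRUCTIONS}\n\nQuestion: {user_question}")

def ask(user_question: str, call_llm, answer_fn) -> str:
    # Refine first, then hand the precise prompt to the answering chatbot.
    precise = rewrite_prompt(user_question, call_llm)
    return answer_fn(precise)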
According to internal company surveys, 96% of Suarez's team of more than 60 use generative AI every month, and 78% of the company's 1,800 engineers report time savings from using internal AI tools.
Suarez said he'd been "quite surprised by the positive feel" for the tools across the business, adding: "Part of that is, we didn't force adoption of these tools early on, and we still aren't saying, 'You have to do this.' We're trying to come at this more from creating value."
