Use this '3-word rule' to get smarter answers from ChatGPT


Tom's Guide · 4 days ago
ChatGPT is already one of the most powerful tools on the internet, but as a power user, I know that how you ask the chatbot your questions makes all the difference. I've come up with many simple tricks and hacks, but this one in particular makes the chatbot's responses feel dramatically more thoughtful, relevant and even expert-level. Unlike some elaborate prompts, this one is extremely simple.
It's just a short phrase you add to the end of your prompt, and it's what I call the '3-word rule.'
So, what are these three words? When you query ChatGPT about nearly anything, simply add 'like a [role]' at the end.
That's it. Three words and ChatGPT suddenly responds in a tone, format or level of depth that better matches what you actually need.
You're essentially telling the AI: 'Take this seriously — and answer it like someone who really knows what they're doing.'
Let's say you want a summary of a dense news article. You could ask: 'Summarize this article.' But if you say: 'Summarize this article like a journalist,' you're more likely to get a concise, well-structured summary with a clear lead and takeaways.
Need help making a decision? Try: 'Compare the iPhone 15 and Galaxy S24 like a product reviewer.'
Want career advice that doesn't feel generic? 'Give me feedback on my resume like a hiring manager.'
Suddenly, ChatGPT adjusts its tone and depth — giving you answers that feel sharper, more practical, and less like a polite robot.
The beauty of this rule is that it's endlessly customizable; you can plug in almost any profession or perspective, and ChatGPT will try to match that tone and mindset.
ChatGPT is trained on a vast range of internet text including books, articles and conversations from experts in nearly every domain. When you give it a role to play, it draws on those patterns to deliver responses that mimic how an expert would think, speak or write.
You don't need complex prompt formulas. Just describe who you want it to be, and it adjusts accordingly.
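The rule is mechanical enough to script. Below is a minimal sketch in Python; the helper name `apply_role` is purely illustrative (it is not part of any official API), and it simply appends the three words to whatever prompt you would then send to ChatGPT or another chatbot.

```python
def apply_role(prompt: str, role: str) -> str:
    """Apply the '3-word rule': append 'like a [role]' to a prompt.

    Hypothetical helper for illustration only. The returned string is
    what you would paste or send to ChatGPT (or any other chatbot).
    """
    # Strip any trailing period or spaces so the suffix reads naturally.
    return f"{prompt.rstrip('. ')} like a {role}."

# The role-framed prompts from the article, built programmatically:
print(apply_role("Summarize this article", "journalist"))
print(apply_role("Compare the iPhone 15 and Galaxy S24", "product reviewer"))
print(apply_role("Give me feedback on my resume", "hiring manager"))
```

Swapping the role string is all it takes to change the tone and depth of the answer, which is exactly what the rule promises.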
If ChatGPT ever feels too generic or surface-level, try applying the 3-word rule. It's a fast, easy way to unlock better answers whether you're brainstorming, learning something new or just trying to think through a problem more clearly.
It works beyond ChatGPT, too. Try it with Gemini, Claude and even voice-based AIs. So next time you open a chatbot, try it. Let me know in the comments what you think.



Related Articles

Google Photos Introduces New AI Tools: Fun, Free, And Very Limited

Forbes · 2 hours ago

Google is adding new generative AI tools to Google Photos, shifting the app away from its original purpose.

Key Takeaways

Google Photos could be at the start of a radical transformation. In a major update rolling out now, Google is introducing what could be the most significant Google Photos AI upgrade yet, allowing you to turn static images into animated video clips and stylized art with just a few taps. The tools are free and fun, but are deliberately and severely limited -- and in many ways, that's a good thing.

The Big Update: Photo To Video — Fun But Deliberately Nerfed

As I previously reported, Google Photos is introducing a game-changing new feature that transforms still photos into short video clips with a single tap. It's a powerful, but significantly cut-down version of the photo-to-video features already available to paying Google AI Pro and Ultra subscribers in Gemini. You can select any picture in your Google Photos library, choose between the vague 'Subtle movement' or slot-machine-like 'I'm feeling lucky' options, and wait about a minute for the video animation to generate.

Google's demos show once-static people celebrating by throwing handfuls of confetti in the air before it tumbles back down from above. These were both generated in 'I'm feeling lucky' mode. I presume additional video effects will be available at launch and more added in the future. If you don't like the results, you can hit the Regenerate button to try again, but that's about it for user control. You can also tap the thumbs-up or thumbs-down icons to send feedback to Google. It would be great to see a few more preset options available, beyond just subtle movements or a random effect.
Even adding just a few more emotions would make these clips useful as fun reactions for messaging apps, in place of emojis or pre-made GIFs. The process takes up to a minute to complete. The focus here is clearly on fun rather than unbridled creativity. Where Gemini utilizes Google's powerful Veo 3 video AI model to create animations of anything you want, Google Photos employs the older Veo 2 model, offering very little user control over what happens in the animation, except for repeatedly hitting the 'Regenerate' button. Furthermore, Veo 2 cannot generate audio, one of the standout features of Veo 3.

Remix Your Photos — Too Little, Too Late?

First discovered in May of this year, the new 'Remix' feature allows you to select a photo and transform it into a range of artistic styles, including cartoons, pencil sketches, and paintings. As with the Photo to Video feature above, you can hit Regenerate to re-try any pictures you don't like and tap one of the thumb icons to provide feedback.

Remix is clearly aimed at having fun and sharing moments in new ways, and there's nothing wrong with that. The results are Google's answer to the viral 'Ghibli-fied' images and action-figure pictures you've probably seen taking over social media. However, unlike powerful tools like ChatGPT or Midjourney, where you can simply type in any style imaginable, Remix forces you to pick from a small menu of pre-selected styles. The approach helps keep generated output safe for consumption, but also prevents any real creativity. Google will need to update the library of styles frequently or the novelty will wear off quickly.

A New Direction For Google Photos — The Create Tab

To make Google Photos' new generative tools easier to find, Google is introducing a new 'Create' tab, accessible by clicking an icon at the bottom of the app on both Android and iOS.
Here, you'll be able to find all of Google Photos' creative tools gathered in one place, effectively separating the newer creative side of Google Photos from its original library functions. This marks the beginning of a significant shift in purpose for Google Photos, which, as Google notes, is now 'more than an archive, it's a canvas.' Personally, that's not what I want from Google Photos; I use it as a place to store and revisit memories rather than as a tool to create new content. The app's existing animated slideshows and collages use AI to enhance memories, but these new tools alter them into something entirely new, creating video clips of events that never really happened.

Google Photos Now Creates, But Is It Safe?

Google appears to be exercising considerable caution with these new features, not least by severely limiting the scope of what can be created with these new Google Photos tools. However, the company acknowledges that the results may be 'inaccurate or unexpected' and displays a warning before use, along with a link to its GenAI prohibited use policy. Furthermore, all images and videos generated by Google Photos using AI contain invisible SynthID watermarks that reveal their synthetic origins.

The Big Issue: US-Only Rollout Alienates Global Users

Photo to Video and Remix are now rolling out on Android and iOS, but are currently only available in the US. The Create tab will then roll out in August, but once again, only in the US. This will be disappointing for international users, who may have to wait a considerable amount of time to access the new features. Remember, Google Photos users outside the US are still waiting for access to the AI-powered 'Ask Photos' feature nine months after launch.
Google Photos has a massive worldwide user base, with billions of photos and videos uploaded each week, and runs the risk of frustrating a colossal number of customers if non-US customers remain excluded from its best features. Follow @paul_monckton on Instagram.

Alibaba Cloud Visionary Expects Big Shakeup After OpenAI Hype

Bloomberg · 5 hours ago

OpenAI's ChatGPT started a revolution in artificial intelligence development and investment. Yet nine-tenths of the technology and services that have sprung up since could be gone in under a decade, according to the founder of Alibaba Group Holding Ltd.'s cloud and AI unit. The problem is that the US startup, celebrated for ushering AI into the mainstream, created 'bias', or a skewed understanding of what AI can do, Wang Jian told Bloomberg Television. It fired the popular imagination about chatbots, but the plethora of applications for AI goes far beyond that. Developers need to cut through the noise and think creatively about applications to propel the next stage of AI development, said Wang, who built Alibaba's now second-largest business from scratch in 2009.

OpenAI: ChatGPT Wants Legal Rights. You Need The Right To Be Forgotten.

Forbes · 8 hours ago

As systems like ChatGPT move toward achieving legal privilege, the boundaries between identity, memory, and control are being redefined, often without consent.

When OpenAI CEO Sam Altman recently stated that conversations with ChatGPT should one day enjoy legal privilege, similar to those between a patient and a doctor or a client and a lawyer, he wasn't just referring to privacy. He was pointing toward a redefinition of the relationship between people and machines.

Legal privilege protects the confidentiality of certain relationships. What's said between a patient and physician, or a client and attorney, is shielded from subpoenas, court disclosures, and adversarial scrutiny. Extending that same protection to AI interactions means treating the machine not as a tool, but as a participant in a privileged exchange. This is more than a policy suggestion. It's a legal and philosophical shift with consequences no one has fully reckoned with.

It also comes at a time when the legal system is already being tested. In The New York Times' lawsuit against OpenAI, the paper has asked courts to compel the company to preserve all user prompts, including those the company says are deleted after 30 days. That request is under appeal. Meanwhile, Altman's suggestion that AI chats deserve legal shielding raises the question: if they're protected like therapy sessions, what does that make the system listening on the other side?

People are already treating AI like a confidant. According to Common Sense Media, three in four teens have used an AI chatbot, and over half say they trust the advice they receive at least somewhat. Many describe a growing reliance on these systems to process everything from school to relationships. Altman himself has called this emotional over-reliance 'really bad and dangerous.' But it's not just teens. AI is being integrated into therapeutic apps, career coaching tools, HR systems, and even spiritual guidance platforms.
In some healthcare environments, AI is being used to draft communications and interpret lab data before a doctor even sees it. These systems are present in decision-making loops, and their presence is being normalized.

This is how it begins. First, protect the conversation. Then, protect the system. What starts as a conversation about privacy quickly evolves into a framework centered on rights, autonomy, and standing. We've seen this play out before. In U.S. law, corporations were gradually granted legal personhood, not because they were considered people, but because they acted as consistent legal entities that required protection and responsibility under the law. Over time, personhood became a useful legal fiction. Something similar may now be unfolding with AI—not because it is sentient, but because it interacts with humans in ways that mimic protected relationships. The law adapts to behavior, not just biology.

The Legal System Isn't Ready For What ChatGPT Is Proposing

There is no global consensus on how to regulate AI memory, consent, or interaction logs. The EU's AI Act introduces transparency mandates, but memory rights are still undefined. In the U.S., state-level data laws conflict, and no federal policy yet addresses what it means to interact with a memory-enabled AI. (See my recent Forbes piece on why AI regulation is effectively dead—and what businesses need to do instead.)

The physical location of a server is not just a technical detail. It's a legal trigger. A conversation stored on a server in California is subject to U.S. law. If it's routed through Frankfurt, it becomes subject to GDPR. When AI systems retain memory, context, and inferred consent, the server location effectively defines sovereignty over the interaction. That has implications for litigation, subpoenas, discovery, and privacy.

'I almost wish they'd go ahead and grant these AI systems legal personhood, as if they were therapists or clergy,' says technology attorney John Kheit.
'Because if they are, then all this passive data collection starts to look a lot like an illegal wiretap, which would thereby give humans privacy rights/protections when interacting with AI. It would also, then, require AI providers to disclose 'other parties to the conversation', i.e., that the provider is a mining party reading the data, and if advertisers are getting at the private conversations.'

Infrastructure choices are now geopolitical. They determine how AI systems behave under pressure and what recourse a user has when something goes wrong. And yet, underneath all of this is a deeper motive: monetization. But they won't be the only ones asking questions.

Every conversation becomes a four-party exchange: the user, the model, the platform's internal optimization engine, and the advertiser paying for access. It's entirely plausible for a prompt about the Pittsburgh Steelers to return a response that subtly inserts 'Buy Coke' mid-paragraph. Not because it's relevant—but because it's profitable. Recent research shows users are significantly worse at detecting unlabeled advertising when it's embedded inside AI-generated content. Worse, these ads are initially rated as more trustworthy until users discover they are, in fact, ads. At that point, they're also rated as more manipulative.

'In experiential marketing, trust is everything,' says Jeff Boedges, Founder of Soho Experiential. 'You can't fake a relationship, and you can't exploit it without consequence. If AI systems are going to remember us, recommend things to us, or even influence us, we'd better know exactly what they remember and why. Otherwise, it's not personalization. It's manipulation.'

Now consider what happens when advertisers gain access to psychographic modeling: 'Which users are most emotionally vulnerable to this type of message?' becomes a viable, queryable prompt. And AI systems don't need to hand over spreadsheets to be valuable.
With retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF), the model can shape language in real time based on prior sentiment, clickstream data, and fine-tuned advertiser objectives. This isn't hypothetical—it's how modern adtech already works.

At that point, the chatbot isn't a chatbot. It's a simulation environment for influence. It is trained to build trust, then designed to monetize it. Your behavioral patterns become the product. Your emotional response becomes the target for optimization. The business model is clear: black-boxed behavioral insight at scale, delivered through helpful design, hidden from oversight, and nearly impossible to detect.

We are entering a phase where machines will be granted protections without personhood, and influence without responsibility. If a user confesses to a crime during a legally privileged AI session, is the platform compelled to report it or remain silent? And who makes that decision? These are not edge cases. They are coming quickly. And they are coming at scale.

Why ChatGPT Must Remain A Model—and Why Humans Must Regain Consent

As generative AI systems evolve into persistent, adaptive participants in daily life, it becomes more important than ever to reassert a boundary: models must remain models. They cannot quietly assume the legal, ethical, or sovereign status of a person. And the humans generating the data that train these systems must retain explicit rights over their contributions.

What we need is a standardized, enforceable system of data contracting, one that allows individuals to knowingly, transparently, and voluntarily contribute data for a limited, mutually agreed-upon window of use. This contract must be clear on scope, duration, value exchange, and termination. And it must treat data ownership as immutable, even during active use.
That means: when a contract ends, or if a company violates its terms, the individual's data must, by law, be erased from the model, its training set, and any derivative products. 'Right to be forgotten' must mean what it says. But to be credible, this system must work both ways. This isn't just about ethics. It's about enforceable, mutual accountability. The user experience must be seamless and scalable. The legal backend must be secure. And the result should be a new economic compact—where humans know when they're participating in AI development, and models are kept in their place.

ChatGPT Is Changing the Risk Surface. Here's How to Respond.

The shift toward AI systems as quasi-participants—not just tools—will reshape legal exposure, data governance, product liability, and customer trust. Whether you're building AI, integrating it into your workflows, or using it to interface with customers, there are concrete steps you should be taking immediately.

ChatGPT May Get Privilege. You Should Get the Right to Be Forgotten.

This moment isn't just about what AI can do. It's about what your business is letting it do, what it remembers, and who gets access to that memory. Ignore that, and you're not just risking privacy violations, you're risking long-term brand trust and regulatory blowback.

At the very least, we need a legal framework that defines how AI memory is governed. Not as a priest, not as a doctor, and not as a partner, but perhaps as a witness. Something that stores information and can be examined when context demands it, with clear boundaries on access, deletion, and use.

The public conversation remains focused on privacy. But the fundamental shift is about control. And unless the legal and regulatory frameworks evolve rapidly, the terms of engagement will be set, not by policy or users, but by whoever owns the box. Which is why, in the age of AI, the right to be forgotten may become the most valuable human right we have.
Not just because your data could be used against you—but because your identity itself can now be captured, modeled, and monetized in ways that persist beyond your control. Your patterns, preferences, emotional triggers, and psychological fingerprints don't disappear when the session ends. They live on inside a system that never forgets, never sleeps, and never stops optimizing. Without the ability to revoke access to your data, you don't just lose privacy. You lose leverage. You lose the ability to opt out of prediction. You lose control over how you're remembered, represented, and replicated. The right to be forgotten isn't about hiding. It's about sovereignty. And in a world where AI systems like ChatGPT will increasingly shape our choices, our identities, and our outcomes, the ability to walk away may be the last form of freedom that still belongs to you.
