Latest news with #persuasion


The Standard
7 days ago
- The Standard
'Brothers always there': Viral video shows bodyguards helping abused women in China
Left: They said they are former military personnel. Right: 'Biaogemen' successfully persuade an abusive man to calm down.


The Sun
17-07-2025
- The Sun
The question you need to ask to find out if someone is lying & it works instantly
A BEHAVIORAL expert has revealed techniques he uses to see whether someone is lying - and the question to ask to spot a liar.

Chase Hughes, who served in the US Navy for two decades, is the founder and CEO of Applied Behavior Research and the author of The Ellipsis Manual, a three-year #1 bestselling book on persuasion, influence and behaviour profiling. He is also a founding member of The Behavior Panel, a YouTube channel with over one million subscribers.

The expert recently appeared on the Robert Breedlove podcast to discuss behavioural topics, such as how to read people and how to protect yourself from being manipulated. Another topic he discussed was the questions he uses to spot whether someone is lying.

Revealing how he would approach questioning someone he suspected of doing something, the guru explained that he would use a technique known as a "bait question". A bait question uses hypothetical information to elicit a cue as to whether a person is being deceptive.

Chase explained: "Let's say that you snuck three doors down to one of your neighbours' houses and kicked their trash can over. And it's a big deal, and you get called in, but you know you're going to deny it, right?

"And you and I sit down, and I say, 'Hey man. Look, is there any reason at all that anybody would say that they saw you walking in that area, or that it might show up on a camera or something?' - because you don't have all the cameras clocked, but I never told you that I did have that, or that somebody did say those things."

According to Chase, those who are innocent will immediately deny it. But, he added: "If you're guilty, now your anxiety is really high, because I haven't told you what I know yet and we're only one question in."
"And you might say, 'Well yes, I walked by', so you're either like, 'Yes, I was there', or 'No, I wasn't'."

He explained that the second the person being questioned says no, they "don't know whether or not I'm about to flip something on the table and show you a video, or bring in witnesses that said that you were there, like eight of them".

Chase continued: "Your brain is in high anxiety mode. But it's only in anxiety mode if you're guilty. So an innocent person would be like, 'No, absolutely not,' and there's no anxiety spike at all."

Four red flags your partner is cheating

Private investigator Aaron Bond from BondRees revealed four warning signs your partner might be cheating.

They start to take their phone everywhere with them

In close relationships it's normal to know each other's passwords and use each other's phones, so if their phone habits change they may be hiding something. Aaron says: "If your partner starts changing their passwords, starts taking their phone everywhere with them - even around the house - or they become defensive when you ask to use their phone, it could be a sign of them not being faithful.

"You should also look at how they place their phone down when not in use. If they put the phone down with the screen facing down, then they could be hiding something."

They start telling you less about their day

When partners cheat they can start to avoid you. This could be down to them feeling guilty, or because it makes it easier for them to lie to you. "If you feel like your partner has suddenly begun to avoid you, they don't want to do things with you any more, or they stop telling you about their day, then this is another red flag.

"Partners often avoid their spouses or tell them less about their day because cheating can be tough - remembering all of your lies is impossible, and it's an easy way to get caught out," says Aaron.
Their libido changes

Your partner's libido can change for a range of reasons, so it may not be a sure sign of cheating, but it can be a red flag, according to Aaron. He says: "Cheaters often have less sex at home because they are cheating, but on occasion they may also have more sex at home - this is because they feel guilty and use this increase in sex to hide their cheating. You may also find that your partner will start to introduce new things into your sex life that weren't there before."

They become negative towards you

Cheaters know that cheating is wrong, even though it feels good to them; this conflict can cause tension and anxiety within themselves which they will need to justify. "To get rid of the tension they feel inside, they will try to convince themselves that you are the problem, and they will become critical of you out of nowhere. Maybe you haven't walked the dog that day, put the dishes away or read a book to your children before bedtime. A small problem like this can now feel like a big deal, and if you experience this your partner could be cheating," warns Aaron.

However, it's crucial to emphasise that some scientific literature suggests the use of bait questions can lead to memory distortion in some people, making them believe that non-existent evidence exists, and so the method is not endorsed by all experts in the field.

Those who do implement the method, according to Chase, would then move on to what is known as a "punishment question". This type of question is used to assess whether a person is being truthful about whether they did something, and to gauge their feelings about the behaviour or crime in question.

Chase explained: "The punishment question is essentially - and I'm really breaking it down to the bare bones - like, 'We're working really hard to find out who's behind this.
I'm curious - what do you think should happen to the person that did this?'"

He went on: "And that works so powerfully, especially on sex crimes and people who've committed sex crimes - what do you think should happen to the person who did this? You're going to hear answers that soften to a crazy degree [from guilty people]."

According to the behavioural expert, a person who is guilty might say something along the lines of: "Someone who does something like this is sick, so they need mental counselling. They don't need to go to jail; they need to get repaired because they're broken."


Fast Company
10-07-2025
- Fast Company
How AI Is Undermining Online Authenticity
In a prescient tweet, OpenAI CEO Sam Altman noted that AI will become persuasive long before it becomes intelligent. A scintillating study conducted by researchers at the University of Zurich just proved him right.

In the study, researchers used AI to challenge Redditors' perspectives in the site's r/changemyview subreddit, where users share an opinion on a topic and challenge others to present counterarguments in a civilized manner. Unbeknownst to users, researchers used AI to produce arguments on everything from dangerous dog breeds to the housing crisis. The AI-generated comments proved extremely effective at changing Redditors' minds.

The university's ethics committee frowned upon the study, as it's generally unethical to subject people to experimentation without their knowledge. Reddit's legal team seems to be pursuing legal action against the university. Unfortunately, the Zurich researchers decided not to publish their full findings, but what we do know about the study points to glaring dangers in the online ecosystem: manipulation, misinformation, and a degradation of human connection.

The power of persuasion

The internet has become a weapon of mass deception. In the AI era, this persuasion power becomes even more drastic. AI avatars resembling financial advisors, therapists, girlfriends, and spiritual mentors can become a channel for ideological manipulation. The University of Zurich study underscores this risk. If manipulation is unacceptable when researchers do it, why is it okay for tech giants to do it?

Large language models (LLMs) are the latest products of algorithmically driven content. Algorithmically curated social media and streaming platforms have already proven manipulative. Facebook experimented with manipulating users' moods through their newsfeeds, without their consent, as early as 2012. The Rabbit Hole podcast shows how YouTube's algorithm created a pipeline for radicalizing young men.
Cambridge Analytica and Russiagate showed how social media influences elections at home and abroad. TikTok's algorithm has been shown to create harmful echo chambers that produce division.

Foundational LLMs like Claude and ChatGPT are like a big internet hive mind. The premise of these models holds that they know more than you, and their inhumanness makes users assume their outputs are unbiased. Algorithmic creation of content is even more dangerous than algorithmic curation of content via the feed. This content speaks directly to you, coddles you, champions and reinforces your viewpoint.

Look no further than Grok, the LLM produced by Elon Musk's company xAI. From the beginning, Musk was blatant about engineering Grok to support his worldview. Earlier this year, Grok fell under scrutiny for doubting the number of Jews killed in the Holocaust and for promoting the falsehood of white genocide in South Africa.

Human vs. machine

Reddit users felt hostile toward the study because the AI responses were presented as human responses. It's an intrusion. The subreddit's rules protect and incentivize real human discussion, dictating that the view in question must be yours and that AI-generated posts must be disclosed.

Reddit is a microcosm of what the internet used to be: a constellation of niche interests and communities largely governing themselves, encouraging exploration. Through this digital meandering, a whole generation found likeminded cohorts and evolved with the help of those relationships.

Since the early 2010s, bots have taken over the internet. On social media, they are deployed en masse to manipulate public perception. For example, a group of bots in 2016 posed as Black Trump supporters, ostensibly to normalize Trumpism for minority voters. Bots played a pivotal role in Brexit, for another. I believe it matters deeply that online interaction remains human and genuine.
If covert, AI-powered content is unethical in research, its proliferation within social media platforms should send up a red flag, too.

The thirst for authenticity

The third ethical offense of the Zurich study: it's inauthentic. The researchers using AI to advocate a viewpoint did not hold that viewpoint themselves. Why does this matter? Because the point of the internet is not to argue with robots all day. If bots are arguing with bots over the merits of DEI, if students are using AI to write and teachers are using AI to grade, then, seriously, what are we doing?

I worry about the near-term consequences of outsourcing our thinking to LLMs. For now, the experience of most working adults lies in a pre-AI world, allowing us to employ AI judiciously (mostly, for now). But what happens when the workforce is full of adults who have never known anything but AI and who never had an unassisted thought? LLMs can't rival the human mind in creativity, problem-solving, feeling, and ingenuity. LLMs are an echo of us. What do we become if we lose our original voice to cacophony?

The Zurich study treads on this holy human space. That's what makes it so distasteful, and, by extension, so impactful.

The bottom line

The reasons this study is scandalous are the same reasons it's worthwhile. It highlights what's already wrong with a bot-infested internet, and how much more wrong it could get with AI. Its trespasses bring the degradation of the online ecosystem into stark relief. This degradation has been happening for over a decade, yet so incrementally that we haven't felt it.

A predatory, manipulative internet is a foregone conclusion. It's the water we're swimming in, folks. This study shows how murky the water's become, and how much worse it might get. I hope it will fuel meaningful legislation, or at least a thoughtful, broad-based personal opting out. In the absence of rules against AI bots, Big Tech is happy to cash in on their largesse.


Globe and Mail
07-07-2025
- Entertainment
- Globe and Mail
Your daily horoscope: July 7, 2025
If you want to change the world - and according to your birthday chart you do - then the best way to go about it is to push ahead a step at a time. Clarify your long-term goals and then move toward them steadily over the next 12 months.

It may be easy to foresee how the people around you are going to act, but you cannot allow yourself to be so predictable. Go out of your way as the new week begins to disguise both your aims and your motives. Don't give anything away.

There is no reason at all why you should take a back seat while others grab the glory. On the contrary, the planets indicate that your time to step into the spotlight is about to arrive, so polish up your act and get ready to shine.

Changes planet Uranus joins Venus in your sign today, which among other things will make you even more persuasive than you usually are. If there is something you want to possess then come right out and ask for it. Others will rush to get it for you.

Take time out of your busy schedule to sit quietly and get your thoughts together. You have so much going for you now, but you also seem a little high-strung, so breathe deeply and remind yourself that, for you, life is not just good but great.

You know there are areas where you could do better, so make a list of those areas and start working through them one at a time. The next few days will be critical as far as your long-term plans are concerned, especially on the work front.

Cosmic activity in the area of your chart that governs your professional reputation means the next few weeks will see major changes coming your way. If you are of a mind to start a new career path then now is the time to get serious about it.

You will be in one of those moods during the early part of the week where you just have to be different in everything you do. This is a great time to be adventurous and maybe take a calculated risk or two. You're a natural-born winner.

Scorpio (Oct. 24 - Nov. 22): It will become apparent today that something you thought was important is actually of no significance at all, so don't waste any more time on it - junk it and move on to pastures new. Not everyone will be happy about it, but why should you care?

Sagittarius (Nov. 23 - Dec. 21): As Uranus moves into the partnership area of your chart today, people you work and do business with may seem more nervous than usual. That's because they can see from the intensity of your gaze that you are in no mood to play games.

Capricorn (Dec. 22 - Jan. 20): If a work-related project has been stuck in one place in recent weeks, then what occurs over the next few days will get it moving again. Don't be surprised if someone else's misfortune benefits you in some way - and don't feel guilty about it either.

Aquarius (Jan. 21 - Feb. 19): Whatever your artistic interests may be, you must now get serious about them. As changes planet Uranus moves into the most dynamic area of your chart today, opportunities to excel will come at you from all directions. Unleash the genius within!

Pisces (Feb. 20 - Mar. 20): If you need to make changes on the home front, now is the time to let loved ones know what you would like to do, and why you want them to do it with you. You may not get a better opportunity to reset your most important relationships.