Latest news with #DiaryofaCEO

IOL News
3 days ago
- Health
- IOL News
Who is the most dangerous person a woman will ever meet in her life?
When most women imagine danger, they think of dark alleys, strangers with bad intentions, or shadowy figures lurking on the edge of society. Rarely does the mind turn to the person closest to them. Yet, according to renowned evolutionary psychologist Dr Gad Saad, the greatest threat a woman may ever face could come from the very man she loves and trusts: her husband or partner.

Speaking on Steven Bartlett's 'Diary of a CEO' podcast, Saad delivered a sobering insight: 'The most dangerous individual that a woman will ever meet in her life is her husband.' Dr Saad, whose work explores the evolutionary roots of human behaviour, has never shied away from asking hard questions or challenging comforting assumptions. The statement, though stark, is grounded in data and evolutionary logic, not cynicism. It's an unsettling idea that forces us to confront difficult truths about intimacy, vulnerability, and human nature.

Saad elaborates: 'Statistically, when you look at the data for violence against women, whether it's physical assault, homicide, or other forms of abuse, it is overwhelmingly perpetrated by an intimate partner.' This claim is reflected in global studies: according to the World Health Organization, around 38% of murders of women worldwide are committed by a male intimate partner.

Dr Saad's point is not to vilify marriage or men but to highlight a biological and social reality: the same deep bonds that create love and companionship can, in rare but tragic cases, also set the stage for control, jealousy, and violence. In his writings and interviews, Saad explores how evolutionary forces have shaped mating strategies, attachment, and even aggression. 'Evolution doesn't care about our happiness or safety,' he explains. 'It cares about reproductive success. Sometimes, this manifests in behaviours that are dangerous, particularly when men feel their status or paternity is threatened.'

Yet Saad's message is not one of despair. By understanding these ancient drivers of behaviour, he argues, society can better protect women and foster healthier relationships. 'Awareness is key,' Saad says. 'We have to recognise these patterns if we want to break them.'

His views have sparked debate. Some critics worry that framing the issue in evolutionary terms risks excusing violence, while supporters believe that understanding our biology is essential to creating real change. What remains undeniable is the importance of the conversation Saad has started, one that asks us all to look at love not just through rose-coloured glasses but with open eyes, as a way of building a safer world for women.


Time of India
5 days ago
- Business
- Time of India
Why Godfather of AI Geoffrey Hinton thinks being a plumber is the 'best job'
Nobel Laureate Geoffrey Hinton, popularly known as the Godfather of AI, has warned that AI will soon replace many workers, especially in roles involving routine tasks. Speaking on the Diary of a CEO podcast, Hinton said, 'I think for mundane intellectual labour, AI will replace everyone.' He pointed to jobs like call centre workers and paralegals as those most at risk. As for what careers might be future-proof, Hinton had a simple suggestion: 'Train to be a plumber.' 'I'd say it's going to be a long time before [AI is] as good at physical manipulation as us—and so a good bet would be to be a plumber,' he said.

Geoffrey Hinton questions the idea of AI creating new jobs

Hinton, who left Google in 2023 to openly discuss AI's dangers, said the impact on jobs is already being felt. 'I think the joblessness is a fairly urgent short-term threat to human happiness. If you make lots and lots of people unemployed — even if they get universal basic income — they are not going to be happy,' he told host Steven Bartlett. During the podcast, he also questioned the idea that new roles created by AI will balance out the jobs lost. 'This is a very different kind of technology,' he said. 'If it can do all mundane intellectual labour, then what new jobs is it going to create? You would have to be very skilled to have a job that it couldn't just do.'

Amazon's Andy Jassy and others warn against AI

Amid the emerging threats of AI, companies are preparing for changes. Amazon CEO Andy Jassy recently told staff that as AI is adopted, the e-commerce giant plans to reduce corporate headcount. 'We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,' Jassy wrote in a memo, calling AI 'a once-in-a-lifetime technology.' Dario Amodei, CEO of AI firm Anthropic, also echoed the warning, saying that up to half of entry-level white-collar jobs could vanish within five years, pushing unemployment to around 20%. He advised both workers and governments to prepare for a fast shift from AI supporting jobs to fully automating them.

Business Insider
5 days ago
- Business
- Business Insider
The cofounder of the viral AI 'cheating' startup Cluely says he only hires people for 2 jobs
At the AI startup that promised to help people "cheat on everything," there are only two job titles: engineer or influencer. "There are only two roles here. You're either building the product or you're making the product go viral," Chungin "Roy" Lee, the CEO and cofounder of Cluely, said in an episode of the "Sourcery" podcast published Saturday. "There's nobody who's not a great engineer who has less than 100,000 followers."

Cluely launched earlier this year as a tool to help software engineers cheat on their job interviews, among other use cases. Lee went viral after he was suspended by Columbia University over an early version of the tool. Cluely has since removed references to cheating on job interviews from its website. It still positions itself as an "undetectable" AI that sees its users' screens and feeds them answers in real time.

The San Francisco startup, which announced a $15 million round led by Andreessen Horowitz on Friday, has made it clear it's betting big on influencers — not marketers — to drive growth. Cluely needs to be "the biggest thing" on Instagram and TikTok, the 21-year-old said. "Every single big company is known by regular people," he added. Lee previously told BI that his main goal for Cluely is to reach 1 billion views across all platforms. "Marketing teams can try," he said. "The reason all these big consumer app guys are so young is because you need to be tapped in with young culture to understand what's funny." "You can have a 35-year-old marketer who scrolls as much as they want. For some reason, they just won't have the viral sense to come up with hooks that are capable of generating 10 million views."

No work-life balance

Lee said most of the team lives and works together — part of his belief that "work-life balance should not be a thing." "You need to work where you live if you're serious about building the company," Lee said. "Your work should be your life and vice versa." "You wake up, go straight to work, go to bed on the couch," he said. "That's sort of the culture we're trying to promote here," he added. Lee told BI on Tuesday that "work-life balance at an early-stage startup is a myth." "The only way to succeed is by being all in on your company, not by working 40 hours a week and going home early," he added. Lee also said on the podcast that he doesn't have to worry about his employees because "everyone is on board with the craziness." "We understand that this is like the lifeline of the company," Lee said. "We're either crazy enough to make it or we're crazy enough to die."

The rejection of work-life balance is hardly new in startup culture. LinkedIn cofounder Reid Hoffman said during an episode of the "Diary of a CEO" podcast that startup employees shouldn't expect work-life balance if they want their business to take off. "Work-life balance is not the startup game," Hoffman said. Billionaire entrepreneur Mark Cuban said on an episode of "The Playbook" that "there is no balance" for the most ambitious people. "If you want to crush the game, whatever game you're in, there's somebody working 24 hours a day to kick your ass," he said.


Time of India
6 days ago
- Business
- Time of India
Why the Godfather of AI left Google: "If you work for a big company, you don't ..."
Geoffrey Hinton, known as the "Godfather of AI," has revealed the internal pressures that influenced his decision to leave Google after more than a decade, citing self-censorship concerns that prevented him from speaking freely about the dangers of artificial intelligence. Speaking on the "Diary of a CEO" podcast that aired June 16, Hinton explained how corporate loyalty created an unspoken barrier to discussing AI risks. "If you work for a big company, you don't feel right saying things that will damage the big company," he said, describing the psychological constraint he faced while employed at Google.

Google never silenced him, but self-censorship kicked in

The neural network pioneer, who quit Google in 2023, emphasized that the tech giant never directly pressured him to remain silent about AI safety issues. "Google encouraged me to stay and work on AI safety, and said I could do whatever I liked on AI safety," Hinton noted. However, he admitted that employees naturally "censor yourself" when working for major corporations. Hinton's departure came as he grew increasingly concerned about AI's potential dangers, including the spread of misinformation, job displacement, and what he termed "existential risk" from digital intelligence surpassing human capabilities. His concerns intensified after Microsoft integrated ChatGPT into its search engine, prompting Google to accelerate its own AI chatbot development.

"We would never accept this mindset in any other field"

The 77-year-old researcher's warnings align with broader industry concerns about AI safety protocols. Hinton highlighted the particular challenge of AI-generated content, noting that people "will not be able to discern what is true any more" as realistic fake photos, videos, and text flood the internet. His departure underscores a growing tension in the tech industry between rapid AI advancement and safety considerations. While Hinton praised Google's overall responsible behavior, his exit reflects the complex dynamics facing AI researchers who must balance corporate interests with public safety concerns in an increasingly competitive landscape.

Business Insider
20-06-2025
- Business
- Business Insider
The Godfather of AI lays out a key difference between OpenAI and Google when it comes to safety
When it comes to winning the AI race, the "Godfather of AI" thinks there's an advantage in having nothing to lose.

On an episode of the "Diary of a CEO" podcast that aired June 16, Geoffrey Hinton laid out what he sees as a key difference between how OpenAI and Google, his former employer, dealt with AI safety. "When they had these big chatbots, they didn't release them, possibly because they were worried about their reputation," Hinton said of Google. "They had a very good reputation, and they didn't want to damage it."

Google released Bard, its AI chatbot, in March 2023, before later incorporating it into Gemini, its larger suite of large language models. The company was playing catch-up, though, since OpenAI had released ChatGPT at the end of 2022. Hinton, who earned his nickname for his pioneering work on neural networks, laid out on the podcast episode a key reason that OpenAI could move faster: "OpenAI didn't have a reputation, and so they could afford to take the gamble."

Speaking at an all-hands meeting shortly after ChatGPT came out, Google's then-head of AI said the company didn't plan to immediately release a chatbot because of "reputational risk," adding that it needed to make choices "more conservatively than a small startup," CNBC reported at the time. The company's AI boss, Google DeepMind CEO Demis Hassabis, said in February of this year that AI poses potential long-term risks and that agentic systems could get "out of control." He advocated having a governing body that regulates AI projects. Gemini has made some high-profile mistakes since its launch, showing bias in its written responses and image-generating feature. Google CEO Sundar Pichai addressed the controversy in a memo to staff last year, saying the company "got it wrong" and pledging to make changes.

The "Godfather" saw Google's early chatbot decision-making from the inside: he spent more than a decade at the company before quitting to talk more freely about what he describes as the dangers of AI. On Monday's podcast episode, though, Hinton said he didn't face internal pressure to stay silent. "Google encouraged me to stay and work on AI safety, and said I could do whatever I liked on AI safety," he said. "You kind of censor yourself. If you work for a big company, you don't feel right saying things that will damage the big company." Overall, Hinton said he thinks Google "actually behaved very responsibly."

Hinton couldn't be as sure about OpenAI, though he has never worked at the company. When asked earlier in the episode whether the company's CEO, Sam Altman, has a "good moral compass," he said, "We'll see." He added that he doesn't know Altman personally, so he didn't want to comment further.

OpenAI has faced criticism in recent months for approaching safety differently than in the past. In a recent blog post, the company said it would only change its safety requirements after making sure the change wouldn't "meaningfully increase the overall risk of severe harm." Its focus areas for safety now include cybersecurity, chemical threats, and AI's power to improve independently. Altman defended OpenAI's approach to safety in an interview at TED2025 in April, saying that the company's preparedness framework outlines "where we think the most important danger moments are." Altman also acknowledged in the interview that OpenAI has loosened some restrictions on its models' behavior based on user feedback about censorship.
The earlier competition between OpenAI and Google to release initial chatbots was fierce, and the AI talent race is only heating up. Documents reviewed by Business Insider reveal that Google relied on ChatGPT in 2023 during its attempts to catch up to OpenAI.