Latest news with #cognitiveScience


Forbes
3 days ago
- Entertainment
- Forbes
Why Magic Still Works In A Rational World
Magic isn't just what Messado performs—it's what he creates between perception and belief, turning centuries-old illusions into unforgettable moments.

We live in an age where almost every question has an answer. You can pull a supercomputer from your pocket, speak into it, and learn the speed of light or the average lifespan of a star. We rely on facts, not folklore. And yet, magic still holds us. That moment when your jaw drops, when something impossible unfolds in front of you, and your brain spins trying to make sense of it—that moment is real. And it's timeless.

The Enduring Power of Magic

Even when we know it's an illusion, magic captivates us. It invites us to suspend disbelief, not because we're gullible, but because part of us wants to experience wonder. Magic isn't about deception. It's about emotion. About creating a moment that makes us question what we think we know. In a culture that values logic and skepticism, magic gives us permission to be surprised. It's not a failure of reason—it's a victory of imagination.

The Neuroscience of Wonder

At the core of every magic trick is a psychological game. Magicians don't just fool the eye; they hack the brain. Cognitive scientists have found that magic works by exploiting gaps in attention, working memory, and prediction. Our brains create mental models to understand the world. When a magician causes a coin to vanish, they are exploiting our brain's expectations about continuity and object permanence. Sleight of hand directs our focus while the real action happens somewhere else. Studies using fMRI scans show that when people experience a good magic trick, areas in the brain linked to conflict detection and surprise—like the anterior cingulate cortex and prefrontal cortex—light up. We're not just amused. We're neurologically jolted.
And that jolt is pleasurable. It breaks through our cognitive autopilot. It reminds us that the world might still have secrets.

A Magician's Origin Story

I recently had a chance to chat about magic—both the art and the science of it—with Joshua Messado. Messado shared that he didn't grow up with dreams of being a magician. He was 18 when he bought a late-night infomercial kit with his first credit card. He maxed out his $100 limit (and never did pay the bill). He didn't start seriously performing until he was 22, after stumbling into a magic show at the Tropicana in Atlantic City. A job that fell through led him to Houdini's Magic Shop, where he met mentor Ran'D Shine, and fell in love with the craft.

Years later, a spontaneous 10-second clip filmed by his best friend, magician Eric Jones, caught the attention of Ellusionist, one of the world's top magic companies. That video led to a call from the CEO, a trip to the Magic Live convention in Las Vegas, and a surreal encounter. After arriving in Las Vegas, Messado was invited to a private party. He almost skipped it. He was tired. It was late. But his assistant pushed him to go. When he arrived in front of the hotel, a limousine was waiting. The limo drove them to a sprawling mansion, filled with many of the most influential names in magic. As Messado entered, someone asked, "Did Dave see your trick?" Confused, Messado asked, "Dave who?" The reply: "David Copperfield. He's right outside. Would you mind showing him the routine?" Moments later, Messado stood in front of Copperfield, surrounded by legendary magicians he had admired for years. With no room for hesitation, he delivered the linking rings routine he'd spent over a decade perfecting. "I hit every move with clarity and precision," he recalls.
"And at the end, [David Copperfield] said, 'I'm a fan now.'" It was the kind of moment most magicians only dream of. For Messado, it was confirmation that he was exactly where he was meant to be. Just two days earlier, he had been on the streets of Philadelphia. Now he was performing for the magician who inspired him to chase this path.

Redefining a Classic

Among magicians, few illusions are as iconic as the linking rings. For over 2,000 years, they've been used to demonstrate the impossible: solid metal rings seemingly passing through one another. It's one of the oldest tricks in the book. And yet, Messado found a way to make it feel brand new. He told a story of a neighborhood pizza shop that inspired him. The owner of that pizzeria shared his secret: "Just do one thing better than everyone else." While working at Houdini's Magic Shop in Atlantic City, Messado took this sage wisdom and applied it to his magic, with a dedicated focus on being the best at performing the linking rings trick.

What sets the Messado Linking Rings apart isn't just technical mastery. It's the structure. The surprise. The audience involvement. It happens in their hands. They feel the rings link. They pull them apart. It violates everything they know about solid objects and physics. "The rings aren't magic," Messado says. "They're just metal. The magic is in you."

Magic as a Shared Experience

For Messado, magic has never been about ego. It's about connection. "I'm nothing without an audience," he says. "I'm just a dude with some metal rings." That philosophy drives his outreach work. Through Mr. Messado's Magic School for the Young and Young at Heart, he teaches kids in underserved Pittsburgh neighborhoods. They learn a few tricks, then perform in a full theater show the next day. The program, supported by the PNC Foundation and the Pittsburgh Cultural Trust, offers something deeper than sleight of hand. It offers the experience of being seen. The joy of creating astonishment.
The reminder that magic, real or not, makes us feel something true.

The Illusion That Matters

Magic persists because it taps into something ancient and emotional. It works not in spite of our intelligence, but because of how our minds are built. In an era of deepfakes and algorithmic sleight-of-hand, authentic astonishment is more valuable than ever. The science of magic reveals its mechanics. The art of magic reveals something more: a flash of awe, a shared moment of disbelief, a brief reset of what we think we know. That's why magic still works. And why it always will.


Fast Company
16-06-2025
- Business
- Fast Company
Multiply the power of a brand name with a sonic signature
Sound is one of our most primal senses. Originally an early warning system against predators, sound still shapes our first impressions when we encounter something new. However, the branding world has historically led with the visual: brand name, logo, and design come first; sonic branding, if done at all, is done later. In today's AI-enabled world, this is a missed opportunity. When a sonic signature is developed at the start of the branding process—from the same phonetic DNA as the name—brands can engage consumers across multiple senses, turning first impressions into full-brain experiences.

Why does sound matter?

Branding is now more competitive than ever before. According to the U.S. Census Bureau, over 5 million new business applications were filed in 2024 alone. As these brands are launched into an already saturated marketplace, sound remains one of the most underrated tools for standing out.

Sound is a call to action

The power of sound is rooted in cognitive science, which shows that our brains are wired to seek out what's different. When we encounter something novel—like a brand—our brains quickly decide if it is worth remembering, all within the first few seconds. In that instant, sound gives brands a head start: auditory input is processed two to four times faster than visual input, and results in quicker reactions. For this reason, sound has historically been used as a powerful call to action. The first recorded example is when Paulinus of Nola, a Roman senator, introduced bells into the Christian church in 400 AD. These bells were the first 'sonic signature,' serving as a signal to call worshippers to prayer. Over a millennium later, scientist Ivan Pavlov formally demonstrated the power of sound in the early 1900s, showing that dogs could be conditioned to salivate at the sound of a bell (even when no food was presented).
Today, we see this principle everywhere—it's why movie soundtracks make us feel a certain way (even when the movie isn't playing), or why YouTube has 10-hour videos of nature sounds to use while studying. Sound has a unique ability to transport us somewhere else, and this has extremely valuable implications in branding. Research from sonic testing firm SoundOut, drawing on a panel of 30,000 consumers, found that brands with recognizable sonic logos were seen as 5% more valuable, translating to millions of dollars in additional value. This was supported by Kantar's BrandZ research study, where brands with strong sonic assets showed 76% higher brand power and a 138% increase in perceived advertising strength. This means that sound can successfully drive consumer behavior (interest, engagement, or even purchase). Finally, a strong sonic logo markets itself: it's estimated that Intel's was played once every 5 seconds around the world after its release in 1994.

Start with naming

However, the sound of a brand doesn't start with its sonic signature, but with its name. Brand names are a priming tool of their own—they signal how a brand might behave. From over four decades of proprietary linguistic research, we know that different sounds can prime different associations in the mind of a consumer (this is called sound symbolism). We've found that sounds like 'z' and 'v' feel fast and energetic, while sounds like 'b' and 'g' feel large and stable, among many other associations. When combined, these sounds shape consumer perception; an arbitrary name like Blackberry (loud and distinctive) creates different expectations from an invented name like Dasani (smooth and luxurious). When a brand name and sonic signature align, the result is more valuable and entirely authentic—a duet of brand assets that live and breathe as one. For example, Toyota's 3-note sonic signature features a choir of voices singing 'oh-oh-ah,' mirroring the vowel sounds of the brand name.
Lucid Motors did the same, creating a 5-note melody that mirrors the five letters of Lucid. This synergy forms a lasting link between name and sound, boosting recognition—and consequently, purchase intent—even when the name or sound is encountered on its own. Beyond memorability, the integration of name and sonic signature has another powerful benefit. Cognitively, words and language (like a brand name) are processed predominantly in the left hemisphere of the brain, while music and sound are processed predominantly in the right. When name and sonic work together, they activate the whole brain—at both a conscious and subconscious level. This allows a brand to truly transcend the sum of its parts. A brand name on its own can make you think. A sound on its own can make you feel. But when name and sonic signature are designed as one, they create a unified cognitive experience: becoming more resonant, memorable, and impactful. In a crowded market, this isn't a luxury—it's your competitive advantage.


The Guardian
13-06-2025
- Science
- The Guardian
A Trick of the Mind by Daniel Yon review – explaining psychology's most important theory
The process of perception feels quite passive. We open our eyes and light floods in; the world is just there, waiting to be seen. But in reality there is an active element that we don't notice. Our brains are always 'filling in' our perceptual experience, supplementing incoming information with existing knowledge. For example, each of us has a spot at the back of our eye where there are no light receptors. We don't see the resulting hole in our field of vision because our brains ignore it. The phenomenon we call 'seeing' is the result of a continuously updated model in your mind, made up partly of incoming sensory information, but partly of pre-existing expectations. This is what is meant by the counterintuitive slogan of contemporary cognitive science: 'perception is a controlled hallucination'. A century ago, someone with an interest in psychology might have turned to the work of Freud for an overarching vision of how the mind works. To the extent that there is a psychological theory even remotely as significant today, it is the 'predictive processing' hypothesis. The brain is a prediction machine, and our perceptual experiences consist of our prior experiences as well as new data. Daniel Yon's A Trick of the Mind is just the latest popularisation of these ideas, but he makes an excellent guide, both as a scientist working at the leading edge of this field and as a writer of great clarity. Your brain is a 'skull-bound scientist', he proposes, forming hypotheses about the world and collecting data to test them. The fascinating, often ingenious research reviewed here is sorely in need of an audience beyond dusty scientific journals. In 2017 a Yale lab recruited voice-hearing psychics and people with psychosis to take part in an experiment alongside non-voice-hearing controls. Participants were trained to experience auditory hallucinations when they saw a simple visual pattern (an unnervingly easy thing for psychologists to do).
The team was able to demonstrate that the voice-hearers in their sample relied more heavily on prior experience than the non-voice-hearers. In other words, we can all cultivate the ability to conjure illusory sound based on our expectations, but some people already have that propensity, and it can have a dramatic effect on their lives. To illustrate how expectations seep into visual experience, Yon's PhD student Helen Olawole Scott managed to manipulate people's ratings of the clarity of moving images they had seen. The key detail is that when participants had been led to expect less clarity in their perception, that is exactly what they reported. But the clarity of the image on the screen wasn't really any poorer. It's sometimes a shame that Yon's book doesn't delve deeper. In Olawole Scott's experiments, for example, does Yon believe that it was participants' visual experience itself that became less clear, or just their judgments about the experience? Is there a meaningful difference? He also avoids engaging with some of the limitations of the predictive processing approach, including how it accounts for abstract thought. Challenges to a hypothesis are interesting, and help illuminate its details. In an otherwise theoretically sophisticated discussion this feels like an oversight. One of the most enjoyable things popular science can do is surprise us with a new angle on how the world operates. Yon's book does this often as he draws out the implications of the predictive brain. Our introspection is unreliable ('we see ourselves dimly, through a cloud of noise'); the boundary between belief and perception is vaguer than it seems ('your brain begins to perceive what it expects'); and conspiracy theories are probably an adaptive result of a mind more open to unusual explanations during periods of greater uncertainty. This is a complex area of psychology, with a huge amount of new work being published all the time. 
To fold it into such a lively read is an admirable feat. A Trick of the Mind: How the Brain Invents Your Reality by Daniel Yon is published by Cornerstone (£22).
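The predictive-processing picture the review sketches, prior expectation blended with incoming sensory evidence, has a standard minimal formalisation: Bayesian cue combination, in which each source of information is weighted by its precision (inverse variance). The toy sketch below is illustrative only; the numbers and function names are invented, not taken from Yon's book.

```python
# Minimal sketch of the "predictive brain": a percept as the
# precision-weighted blend of prior expectation and sensory evidence.
# (Toy model for illustration; all quantities are invented.)

def perceive(prior_mean, prior_precision, evidence_mean, evidence_precision):
    """Posterior mean of two Gaussian estimates (standard Bayesian cue fusion)."""
    total = prior_precision + evidence_precision
    posterior_mean = (prior_precision * prior_mean +
                      evidence_precision * evidence_mean) / total
    return posterior_mean, total

# Noisy sensory input (low precision): the prior dominates, and we
# "perceive what we expect" -- as with the trained voice-hearers.
percept, _ = perceive(prior_mean=10.0, prior_precision=4.0,
                      evidence_mean=2.0, evidence_precision=1.0)
print(round(percept, 2))  # 8.4: pulled toward the prior (10), not the data (2)

# Reliable sensory input (high precision): the data dominates instead.
percept, _ = perceive(prior_mean=10.0, prior_precision=4.0,
                      evidence_mean=2.0, evidence_precision=16.0)
print(round(percept, 2))  # 3.6: pulled toward the data
```

On this view, Olawole Scott's clarity manipulation amounts to lowering participants' prior on clarity, which drags the reported percept down even when the on-screen evidence is unchanged.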


Forbes
10-06-2025
- Science
- Forbes
Intelligence Illusion: What Apple's AI Study Reveals About Reasoning
The gleaming veneer of artificial intelligence has captivated the world, with large language models producing eloquent responses that often seem indistinguishable from human thought. Yet beneath this polished surface lies a troubling reality that Apple's latest research has brought into sharp focus: eloquence is not intelligence, and imitation is not understanding. Apple's new study, titled "The Illusion of Thinking," has sent shockwaves through the AI community by demonstrating that even the most sophisticated reasoning models fundamentally lack genuine cognitive abilities. This revelation validates what prominent researchers like Meta's Chief AI Scientist Yann LeCun have been arguing for years—that current AI systems are sophisticated pattern-matching machines rather than thinking entities.

The Apple research team's findings are both methodical and damning. By creating controlled puzzle environments that could precisely manipulate complexity while maintaining logical consistency, they revealed three distinct performance regimes in Large Reasoning Models. In low-complexity tasks, standard models actually outperformed their supposedly superior reasoning counterparts. Medium-complexity problems showed marginal benefits from additional "thinking" processes. But most tellingly, both model types experienced complete collapse when faced with high-complexity tasks.

What makes these findings particularly striking is the counter-intuitive scaling behavior the researchers observed. Rather than improving with increased complexity as genuine intelligence would, these models showed a peculiar pattern: their reasoning effort would increase up to a certain point, then decline dramatically despite having adequate computational resources.
This suggests that the models weren't actually reasoning at all—they were following learned patterns that broke down when confronted with novel challenges. The study exposed fundamental limitations in exact computation, revealing that these systems fail to use explicit algorithms and reason inconsistently across similar puzzles. When the veneer of sophisticated language is stripped away, what remains is a sophisticated but ultimately hollow mimicry of thought.

These findings align perfectly with warnings that Yann LeCun and other leading AI researchers have been voicing for years. LeCun has consistently argued that current LLMs will be largely obsolete within five years, not because they'll be replaced by better versions of the same technology, but because they represent a fundamentally flawed approach to artificial intelligence. The core issue isn't technical prowess—it's conceptual. These systems don't understand; they pattern-match. They don't reason; they interpolate from training data. They don't think; they generate statistically probable responses based on massive datasets. The sophistication of their output masks the absence of genuine comprehension, creating what researchers now recognize as an elaborate illusion of intelligence.

This disconnect between appearance and reality has profound implications for how we evaluate and deploy AI systems. When we mistake fluency for understanding, we risk making critical decisions based on fundamentally flawed reasoning processes. The danger isn't just technological—it's epistemological. Perhaps most unsettling is how closely this AI limitation mirrors a persistent human cognitive bias. Just as we've been deceived by AI's articulate responses, we consistently overvalue human confidence and extroversion, often mistaking verbal facility for intellectual depth.
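The "controlled puzzle environments" the article describes work by choosing a problem family that has an explicit, known algorithm and a precise complexity dial. A minimal sketch, assuming Tower of Hanoi as the puzzle (an assumption for illustration; the article names only "puzzle environments"): the optimal solution is fully algorithmic, and the minimum move count grows as 2^n - 1, so difficulty can be scaled exactly while the logic stays consistent.

```python
# Tower of Hanoi as a controlled-complexity puzzle environment (sketch).
# An explicit recursive algorithm solves every instance, and the optimal
# move count 2**n - 1 gives researchers a precise complexity dial --
# exactly the kind of exact computation the study reports LLMs failing
# to carry out consistently.

def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move sequence for n disks from src to dst."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # clear the n-1 smaller disks
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # re-stack the smaller disks on it
    return moves

for n in (3, 10, 20):
    print(n, len(hanoi(n)))  # 7, 1023, 1048575: exponential growth
```

A grader only needs to check each emitted move against the rules, so a model's output can be scored exactly at any complexity level, with no room for fluent-but-wrong answers.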
The overconfidence bias represents one of the most pervasive flaws in human judgment, where individuals' subjective confidence in their abilities far exceeds their objective accuracy. This bias becomes particularly pronounced in social and professional settings, where confident, extroverted individuals often command disproportionate attention and credibility. Research consistently shows that we tend to equate confidence with competence, volume with value, and articulateness with intelligence. The extroverted individual who speaks first and most frequently in meetings often shapes group decisions, regardless of the quality of their ideas. The confident presenter who delivers polished but superficial analysis frequently receives more positive evaluation than the thoughtful introvert who offers deeper insights with less theatrical flair. This psychological tendency creates a dangerous feedback loop. People with low ability often overestimate their competence (the Dunning-Kruger effect), while those with genuine expertise may express appropriate uncertainty about complex issues. The result is a systematic inversion of credibility, where those who know the least speak with the greatest confidence, while those who understand the most communicate with appropriate nuance and qualification. The parallel between AI's eloquent emptiness and our bias toward confident communication reveals something profound about the nature of intelligence itself. Both phenomena demonstrate how easily we conflate the appearance of understanding with its substance. Both show how sophisticated communication can mask fundamental limitations in reasoning and comprehension. Consider the implications for organizational decision-making, educational assessment, and social dynamics. If we consistently overvalue confident presentation over careful analysis—whether from AI systems or human colleagues—we systematically degrade the quality of our collective reasoning. 
We create environments where performance theater takes precedence over genuine problem-solving. The Apple study's revelation that AI reasoning models fail when faced with true complexity mirrors how overconfident individuals often struggle with genuinely challenging problems while maintaining their persuasive veneer. Both represent sophisticated forms of intellectual imposture that can persist precisely because they're so convincing on the surface. Understanding these limitations—both artificial and human—opens the door to more authentic evaluation of intelligence and reasoning. True intelligence isn't characterized by unwavering confidence or eloquent presentation. Instead, it manifests in several key ways: Genuine intelligence embraces uncertainty when dealing with complex problems. It acknowledges limitations rather than concealing them. It demonstrates consistent reasoning across different contexts rather than breaking down when patterns become unfamiliar. Most importantly, it shows genuine understanding through the ability to adapt principles to novel situations. In human contexts, this means looking beyond charismatic presentation to evaluate the underlying quality of reasoning. It means creating space for thoughtful, measured responses rather than rewarding only quick, confident answers. It means recognizing that the most profound insights often come wrapped in appropriate humility rather than absolute certainty. For AI systems, it means developing more rigorous evaluation frameworks that test genuine understanding rather than pattern matching. It means acknowledging current limitations rather than anthropomorphizing sophisticated text generation. It means building systems that can genuinely reason rather than simply appearing to do so. The convergence of Apple's AI findings with psychological research on human biases offers valuable guidance for navigating our increasingly complex world. 
Whether evaluating AI systems or human colleagues, we must learn to distinguish between performance and competence, between eloquence and understanding. This requires cultivating intellectual humility – the recognition that genuine intelligence often comes with appropriate uncertainty, that the most confident voices aren't necessarily the most credible, and that true understanding can be distinguished from sophisticated mimicry through careful observation and testing. To distinguish intelligence from imitation in an AI-infused environment, we need to invest in hybrid intelligence, which arises from the complementarity of natural and artificial intelligences – anchored in the strengths and limitations of both.


Malay Mail
09-06-2025
- Science
- Malay Mail
PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts
A research team led by Prof. Li Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, explored the similarities between large language models and human representations, shedding new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge.

HONG KONG SAR - Media OutReach Newswire - 9 June 2025 - Can one truly understand what "flower" means without smelling a rose, touching a daisy or walking through a field of wildflowers? This question is at the core of a rich debate in philosophy and cognitive science. While embodied cognition theorists argue that physical, sensory experience is essential to concept formation, studies of the rapidly evolving large language models (LLMs) suggest that language alone can build deep, meaningful representations of the world. By exploring the similarities between LLMs and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge. Their findings also revealed how the use of sensory input for grounding or embodiment – connecting abstract with concrete concepts during learning – affects the ability of LLMs to understand complex concepts and form human-like representations. The study, conducted in collaboration with scholars from Ohio State University, Princeton University and the City University of New York, was recently published in Nature Human Behaviour.

Led by Prof. Li Ping, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini). They compared these with human-generated ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor (e.g., foot/leg, mouth/throat) domains, drawn from the highly reliable and validated Glasgow Norms and Lancaster Norms.

The research team first compared pairs of data from individual humans and individual LLM runs to measure the similarity between word ratings across each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight to what extent humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, the word pair "pasta" and "roses" might receive equally high olfactory ratings, but "pasta" is in fact more similar to "noodles" than to "roses" when considering appearance and taste. The team therefore conducted representational similarity analysis, treating each word as a vector along multiple non-sensorimotor, sensory and motor dimensions, for a more complete comparison between humans and LLMs.

These representational similarity analyses revealed that the word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain and most dissimilar for words in the motor domain. This highlights LLM limitations in fully capturing humans' conceptual understanding.
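The representational similarity analysis described above can be sketched in a few lines: treat each word as a vector of ratings, build a word-by-word similarity matrix for the human norms and for an LLM, and correlate the two matrices. This is a toy version only; the words, dimensions and numbers below are invented, whereas the real study used around 4,500 words rated on the Glasgow and Lancaster norm dimensions.

```python
# Toy representational similarity analysis (RSA) sketch.
# Each row is one word's ratings across dimensions; similar
# representational geometry means the two word-by-word similarity
# matrices correlate strongly.
import numpy as np

def rsa(ratings_a, ratings_b):
    """Second-order similarity between two (words x dimensions) matrices."""
    def sim_matrix(r):
        return np.corrcoef(r)  # pairwise word-word correlations
    iu = np.triu_indices(len(ratings_a), k=1)  # upper triangle, no diagonal
    a, b = sim_matrix(ratings_a)[iu], sim_matrix(ratings_b)[iu]
    return np.corrcoef(a, b)[0, 1]

# Invented ratings on 4 hypothetical dimensions for 4 words.
human = np.array([[0.9, 0.8, 0.1, 0.2],   # "pasta"
                  [0.8, 0.9, 0.2, 0.1],   # "noodles"
                  [0.9, 0.1, 0.8, 0.9],   # "roses"
                  [0.2, 0.1, 0.9, 0.8]])  # "tulips"
# A fake "LLM" that rates almost like humans, plus a little noise.
llm = human + np.random.default_rng(0).normal(0, 0.05, human.shape)
print(round(rsa(human, llm), 2))  # close to 1: similar geometry
```

Unlike dimension-by-dimension comparisons, this second-order measure captures the "pasta is closer to noodles than to roses" structure, which is why the team used it to compare humans with LLMs across whole domains.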
Non-sensorimotor concepts are understood well, but LLMs fall short when representing concepts that involve sensory information, like visual appearance and taste, or body movement. Motor concepts, which are less fully described in language and rely heavily on embodied experience, are even more challenging for LLMs than sensory concepts like colour, which can be learned from textual descriptions.

In light of the findings, the researchers examined whether grounding would improve the LLMs' performance. They compared the performance of more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with that of LLMs trained on language alone (GPT-3.5, PaLM). They discovered that the more grounded models incorporating visual input exhibited a much higher similarity with human representations. Prof. Li Ping said, "The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to a more human-like representation and more efficient human-like performance in LLMs in the future."

Interestingly, this finding is also consistent with those of previous human studies indicating representational transfer. Humans acquire object-shape knowledge through both visual and tactile experiences, with seeing and touching objects activating the same regions in the human brain. The researchers pointed out that – as in humans – multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space.

Prof. Li added, "The smooth, continuous structure of the embedding space in LLMs may underlie our observation that knowledge derived from one modality can transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect."

Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, "These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM's representation will then be indistinguishable from that of humans."

Hashtag: #PolyU #HumanCognition #LargeLanguageModels #LLMs #GenerativeAI

The issuer is solely responsible for the content of this announcement.