Latest news with #ClaudeShannon


Time of India
12-07-2025
- Science
- Time of India
ChatGPT making us dumb & dumber, but we can still come out wiser
Claude Shannon, one of the fathers of AI, once wrote rather disparagingly: 'I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines.' As we enter the age of AI — arguably the most powerful technology of our times — many of us fear that this prophecy is coming true. Powerful AI models like ChatGPT can create complex essays, poetry and pictures; Google's Veo stitches together cinema-quality videos; Deep Research agents produce research reports at the drop of a prompt. Our innate human abilities of thinking, creating and reasoning now seem to be duplicated, and sometimes surpassed, by AI.

This seemed to be confirmed by a recent — and quite disturbing — MIT Media Lab study, 'Your Brain on ChatGPT'. It suggested that while AI tools like ChatGPT help us write faster, they may be making our minds slower. Through a meticulously executed four-month experiment with 54 participants, researchers found that those who used ChatGPT for essay writing exhibited up to 55% lower brain activity, as measured by EEG signals, compared to those who wrote without assistance. If this was not troubling enough, in a later session where ChatGPT users were asked to write unaided, their brains remained less engaged than those of the 'brain-only' participants, as the study quaintly labelled those who wrote without AI. Memory also suffered: only 20% could recall what they had written, and 16% even denied authorship of their own text! The message seemed clear: outsourcing thinking to machines may be efficient, but it risks undermining our capacity for deep thought, retention and ownership of ideas.

Technology has always changed us, and we have seen this story many times before. There was a time when you remembered everyone's phone numbers; now you can barely recall your family's, if that. You remembered roads, lanes and routes; if you did not, you consulted a paper map or asked someone. Today, Google and other map apps do that work for us. Facebook reminds us of people's birthdays; email replies suggest themselves, sparing us even that little effort of thinking. When autonomous cars arrive, will we even remember how to drive, or just loll around in our seats as the car takes us to our destination?

Jonathan Haidt, in 'The Anxious Generation', points out how smartphones radically reshaped childhood. Unstructured outdoor play gave way to scrolling, and social bonds turned into notifications. Teen anxiety, loneliness and attention deficits all surged. From calculators diminishing our mental arithmetic to GPS weakening our spatial memory, every tool we invent alters us — subtly or drastically. 'Do we shape our tools, or do our tools shape us?' is a question commonly misattributed to Marshall McLuhan, but it is hauntingly relevant in the age of AI. If we let machines do the thinking, what happens to our human capacity to think, reflect, reason and learn?

This is especially troubling for children, and more so in India. For one, India has the highest usage of ChatGPT globally, and much of it is by children and young adults, who are turning into passive consumers of AI-generated knowledge. Imagine a 16-year-old using ChatGPT to write a history essay. The output might be near-perfect, but what has she actually learned? The MIT study suggests very little. Without effortful recall or critical thinking, she might not retain concepts, nor build the muscle of articulation.
With exams still based on memory and original expression, and careers requiring problem-solving, this is a silent but real risk. The real question, however, is not whether the study is correct or exaggerating, or whether AI is making us dumber, but what we can do about it. We definitely need some guardrails and precautions, and we need to start building them now. I believe that we should teach ourselves and our children to:

- Ask the right questions: As answers become commodities, asking the right questions will be the differentiator. We need to relook at our education system and pedagogy and bring back this unique human skill of curiosity. Intelligence is not just about answers; it is about the courage to think, to doubt, and to create.
- Invert classwork and homework: Reserve classroom time for 'brain-only' activities like journaling, debates and mental maths. Homework can be about using AI tools to learn what will be discussed in class the next day.
- Set AI usage codes: Just as schools restrict smartphone use, they should set clear boundaries for when and how AI can be used.
- Build teacher-AI synergy: Train educators to use AI as a co-teacher, not a crutch. Think of AI as Augmented Intelligence, not an alternative one.
- Above all, make everyone AI literate: Much as reading, writing and arithmetic were foundational in the digital age, knowing how to use AI wisely is the new essential skill of our time.

AI literacy is more than just knowing prompts. It means understanding when to use AI and when not to; how to verify AI output for accuracy, bias and logic; how to collaborate with AI without losing your own voice; and how to maintain cognitive and ethical agency in the age of intelligent machines. Just as we once taught 'reading, writing, adding, multiplying', we must now teach 'thinking, prompting, questioning, verifying'.

History shows that humans adapt. The printing press did not destroy memory; calculators did not end arithmetic; smartphones did not abolish communication. We evolved with them — sometimes clumsily, but always creatively. Today, with AI, the challenge is deeper because it imitates human cognition. In fact, as AI challenges us with higher levels of creativity and cognition, human intelligence and connection will become even more prized. Take chess: a computer defeated Garry Kasparov back in 1997, and since then computers have been able to defeat any chess champion a hundred times out of a hundred. Yet human 'brain-only' chess has become more popular than ever, as millions follow D Gukesh's encounters with Magnus Carlsen.

So, if we cultivate AI literacy and have the right guardrails in place, and if we teach ourselves and our children to think with AI but not through it, we can come out wiser, not weaker. Views expressed above are the author's own.


Forbes
11-07-2025
- Business
- Forbes
The Hidden Power Of Questions In The Age Of AI
Bob Pearson, Chair, The Next Practices Group.

When a child learns to speak, they pepper us with questions — an instinct rooted in survival. In 2013, a British study of 1,000 mothers found that children asked their parents more than 300 questions per day, at an hourly rate that rivals the pace of Prime Minister's Questions. Questions help us navigate life and our roles within organizations — clarifying expectations, accelerating learning, building relationships and managing risk.

To understand how questions help cut through complexity, consider Shannon's theorem, formulated by Claude Shannon, the father of information theory. The theorem offers an equation for how much data can be sent across a communications channel in the presence of noise. Shannon was working at Bell Labs, so he was mainly focused on channels like a telephone line or a radio band. At organizations today, however, we still need to understand how to eliminate the noise that distracts us as we toil away on our projects. This is the role of questions: to help us focus. Understanding which questions to ask at a given point helps reduce uncertainty, which is fundamental to how we use machine learning, decision trees and the field of data science. This ability to ask the right question — especially as AI becomes central — isn't just a technical skill but a foundational one.
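The best-known form of Shannon's result is the Shannon-Hartley capacity formula, C = B log2(1 + S/N): bandwidth and signal-to-noise ratio set a hard ceiling on error-free communication. A minimal sketch of the calculation (the telephone-line numbers are illustrative, not from the article):

```python
import math

def channel_capacity_bps(bandwidth_hz: float, snr: float) -> float:
    """Shannon-Hartley limit: the maximum error-free bit rate of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr)

# A classic telephone line: ~3 kHz of bandwidth at a ~30 dB signal-to-noise ratio (SNR = 1000)
print(round(channel_capacity_bps(3_000, 1_000)))  # ~29,902 bits per second
```

The analogy carries over: a well-posed question plays the role that bandwidth and signal-to-noise play on a channel, setting the ceiling on how much useful information gets through.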
The Value Of Questions Within Organizations

If we know which questions to ask at each key point of a given task, we can increase knowledge and reduce uncertainty. Relevant questions help us shape workflow, meet customer needs, teach teams and build trust. Of course, being human can also be our biggest obstacle. Too often, we stifle questions, prioritizing output over whether work is done optimally or scalably. So, what is the importance of questions within organizations, and why do we need to improve how we use them? To start, questions can lead to new information, frame a problem or check our own bias. Questions can just as easily unlock innovation as keep us on track, so we scale workflow efficiently.

Imagine two scenarios. In the first, you are leading an SAP transformation project for a Forbes 2000 company that will exceed $300 million in cost and run over three years. Your job is to make it happen on time and on budget. If you break down your project, you have 12 workstreams and 100 individual tasks per workstream. That is 1,200 different time points where you want to ensure your team understands what to do, how to handle unforeseen issues and how to accomplish each task. Email, Teams and spreadsheets are not enough. Now, imagine your friend leads the development of a new drug in biopharma. She has 60 key decision points in discovery/preclinical work, 20 for an Investigational New Drug submission and 40 for Phase 1 trials: getting a new drug into the clinic has 120 key decision points. In each case, we can proactively identify the top five questions that align with each decision. As your team gets ready for each decision point, they look at the five questions to ask themselves and their team. Did they address a certain problem? Did they categorize the expense associated with this action? Do they have reason to believe this action could be improved? This process of structured questioning is incredibly powerful, but also time-consuming. That's where AI enters the picture.

How AI Can Shape Question Asking

The subject matter experts of the world are the heroes here. They have been there, done that and know what questions to ask at each point. AI can then supplement their knowledge to create a detailed list of questions for every decision point. As the user touches any point on the SAP transformation, the key questions to address will appear. Those questions will be linked to background information, and questions answered by other project members will become available as they apply to that particular task. Now, questions are a quality check and a way to contribute to the enterprise workflow. It's a team sport. AI ensures that knowledge gained anywhere in the world is shared precisely to the right time points in a project, in real time. AI platforms can learn, in real time, which questions are most effective, which answers are most important and what type of backup information helps teach, educate and answer our questions. Questions and content can be translated into any language, enabling ideas to emerge from anywhere.

To achieve this vision, we need to adjust a fundamental habit that drives us today. Ever since Google first set sail in 1998, we have been conditioned to type a few keywords or phrases, so our questioning ability is rusty. Now, with generative AI, we must flex our 'question muscles', recognizing that the value of the information we receive depends on our ability to ask the optimal question. This emerging skill — known as prompt engineering — is critical for tapping into AI's full potential. Keep in mind that generative AI mirrors our own intelligence: the quality of what it returns reflects the quality of what we ask.

Conclusion

We start life as curious humans, and we should never let that trait dissipate. Our job, with AI, is to remain as curious and disciplined as we were as toddlers. We are just now applying this approach to major global projects: a map of questions for each decision point, an AI back end to share answers and related content, and the ability of any team member to contribute to the knowledge of the enterprise. Exploring the effective use of AI is more important than lamenting which jobs will go away. What is required? Pretty simple: we need to change our habits and approach as we embrace the advances of AI. The 'curious companies' will complete that SAP transformation for $200 million and a year earlier, or they will create a more effective clinical trial design for a new treatment, based on a new style of learning and scaling. The question we are left with is: when do we make this a reality?
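The 'map of questions' described in the conclusion is easy to picture as a simple data structure. A hypothetical sketch (the workstream, task and questions are invented for illustration; this is not the author's actual system):

```python
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    """One of the ~1,200 time points in a project, with its attached checklist."""
    workstream: str
    task: str
    questions: list[str] = field(default_factory=list)

# Invented example content, echoing the style of questions suggested in the article
point = DecisionPoint(
    workstream="Data migration",
    task="Cut over the legacy ledger",
    questions=[
        "Did we address the known reconciliation problem?",
        "Did we categorize the expense associated with this action?",
        "Do we have reason to believe this action could be improved?",
    ],
)

# As a team member "touches" this point, its checklist surfaces
print(f"{point.workstream} / {point.task}")
for q in point.questions:
    print(f"  - {q}")
```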


Tom's Guide
22-05-2025
- Business
- Tom's Guide
What is Claude? Everything you need to know about Anthropic's AI powerhouse
Claude, developed by the AI safety startup Anthropic, has been pitched as the ethical brainiac of the chatbot world. With its focus on transparency, helpfulness and harmlessness (yes, really), Claude is quickly gaining traction as a trusted tool for everything from legal analysis to lesson planning. But what exactly is Claude? How does it work, what makes it different and why should you use it? Here's everything you need to know about the AI model aiming to be the most trustworthy assistant on the internet.

Claude is a conversational AI model (though less chatty than ChatGPT) built by Anthropic, a company founded by former OpenAI researchers with a strong focus on AI alignment and safety. Named after Claude Shannon (aka the father of information theory), this chatbot is designed to be helpful, honest and harmless. At its core, Claude is a large language model (LLM) trained on massive datasets. But what sets it apart is the "Constitutional AI" system — a novel approach that guides Claude's behavior based on a written set of ethical principles, rather than human thumbs-up/down during fine-tuning.

Claude runs on the latest version of Anthropic's model family (currently Claude 3.7 Sonnet), and it's packed with standout features. One of the biggest is its massive context window. Most users get around 200,000 tokens by default — that's equivalent to about 500 pages of text — but in certain enterprise or specialized use cases, Claude can handle up to 1 million tokens. This is especially useful for summarizing research papers, analyzing long transcripts or comparing entire books. Now that Claude includes vision capabilities, this huge context window becomes even more powerful. Claude can analyze images, graphs, screenshots and charts, making it an excellent assistant for tasks like data visualization, UI/UX feedback and even document layout review.

Anthropic's Claude family has become one of the most talked-about alternatives to ChatGPT and Gemini. Whether you're looking for a fast, lightweight assistant or a model that can deeply analyze documents, code or images, there's a Claude model that fits the bill. Here's a breakdown of the Claude 3 series, including the latest Claude 3.7 Sonnet, to help you decide which one best suits your needs.

Claude 3.5 Haiku
Best for: Real-time responses, customer service bots, light content generation
Claude 3.5 Haiku is the fastest and most efficient model in the Claude lineup. It's optimized for quick, cost-effective replies, making it helpful for apps or scenarios where speed matters more than deep reasoning.
Pros: Extremely fast and affordable
Cons: Less capable at handling complex or multi-step reasoning tasks

Sonnet
Best for: Content creation, coding help and image interpretation
Sonnet strikes a solid balance between performance and efficiency. It features improved reasoning over Haiku and has solid multimodal capabilities, meaning it can understand images, charts and visual data.
Pros: Good for nuanced tasks, better reasoning and vision support
Cons: Doesn't go as deep as Opus on complex technical or logical problems

Opus
Best for: Advanced reasoning, coding, research and long-form content
Opus is Anthropic's most advanced model. It excels at deep analysis, logic, math, programming and creative work. If you're doing anything complex — from building software to analyzing legal documents — this is the model you want.
Pros: State-of-the-art reasoning and benchmark-beating performance
Cons: Slower and more expensive than Haiku or Sonnet

With the release of Claude 3.7 Sonnet, Anthropic introduces the first hybrid reasoning model, allowing users to choose between quick responses and deeper, step-by-step thinking within the same interface. Claude 3.7 Sonnet is already outperforming its predecessors and many competitors across standard benchmarks:
- SWE-bench Verified: 70.3% accuracy in real software tasks
- TAU-bench: Top-tier performance in real-world decision-making
- Instruction following: Excellent at breaking down and executing multi-step commands
- General reasoning: Improved logic-puzzle and abstract-thinking ability

Pricing: Users can try it for free, with restrictions. Otherwise, it costs $3 per million input tokens and $15 per million output tokens (the same as previous Sonnet versions).

Although Claude has the capacity to search the web, this is not free as it is with ChatGPT, Gemini or Perplexity; users interested in looking up current events, news and information in real time need a Pro account. Sometimes the chatbot is overly cautious and may decline borderline queries, even ones that seem otherwise harmless, or flag content as biased. The chatbot is also not as chatty and emotional as ChatGPT, so conversations with other chatbots may feel more natural. And Claude lacks the extensive plugin marketplace of ChatGPT and the elaborate ecosystem of Gemini.

Claude can be used for many of the same use cases as other chatbots. Users can draft contracts, write blog posts or emails; it can also generate poems and stories, lesson plans or technical documentation. The chatbot can summarize complex documents and Excel data and break down complicated topics for different audiences. Users can turn to Claude to debug problems, code efficiently, explain technical concepts and optimize algorithms.

Anthropic's Claude family now covers a full spectrum — from fast and lightweight (Haiku), to balanced and versatile (Sonnet), to powerful and analytical (Opus). The new Claude 3.7 Sonnet adds a hybrid layer, giving users more control over how much 'thinking' they want the AI to do. If you need reliable, high-context reasoning, Claude could be the bot for you; if you work with sensitive data in your professional or personal life and value safety and transparency, you may find it especially useful. Claude won't replace your favorite AI for everything, but it is a responsible, transparent AI that you can try for free — no login required for limited access.
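To put the quoted token prices in perspective, here is a back-of-the-envelope cost sketch (the rates are the ones cited above; the request sizes are hypothetical):

```python
# Sonnet pricing as quoted in this article (USD per million tokens)
INPUT_PRICE_PER_M = 3.00
OUTPUT_PRICE_PER_M = 15.00

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single request at the quoted Sonnet rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical job: feed in a ~150,000-token book (most of the 200K context window)
# and get back a ~2,000-token summary
print(f"${request_cost_usd(150_000, 2_000):.2f}")  # $0.48
```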


Gizmodo
13-05-2025
- Science
- Gizmodo
Gravity Could Be Proof We're Living in a Computer Simulation, New Theory Suggests
Gravity may not be a fundamental force of nature, but a byproduct of the universe streamlining information like a cosmic computer.

We have long taken it for granted that gravity is one of the basic forces of nature — one of the invisible threads that keeps the universe stitched together. But suppose that this is not true. Suppose the law of gravity is simply an echo of something more fundamental: a byproduct of the universe operating under a computer-like code. That is the premise of my latest research, published in the journal AIP Advances. It suggests that gravity is not a mysterious force that attracts objects towards one another, but the product of an informational law of nature that I call the second law of infodynamics. It is a notion that sounds like science fiction, but one that is grounded in physics and in evidence that the universe appears to be operating suspiciously like a computer simulation.

In digital technologies, right down to the apps on your phone and the world of cyberspace, efficiency is key. Computers compact and restructure their data all the time to save memory and computing power. Maybe the same thing is taking place all over the universe. Information theory, the mathematical study of the quantification, storage and communication of information, may help us understand what's going on. Originally developed by mathematician Claude Shannon, it has become increasingly popular in physics and is used in a growing range of research areas.

In a 2023 paper, I used information theory to propose my second law of infodynamics. It stipulates that information 'entropy', or the level of information disorganisation, must reduce or stay static within any given closed information system. This is the opposite of the popular second law of thermodynamics, which dictates that physical entropy, or disorder, always increases.

Take a cooling cup of coffee. Energy flows from hot to cold until the temperature of the coffee is the same as the temperature of the room and its energy is at a minimum — a state called thermal equilibrium. The entropy of the system is at a maximum at this point, with all the molecules maximally spread out and having the same energy. At the same time, the spread of energies per molecule in the liquid is reduced. If one considers the information content of each molecule based on its energy, then in the hot cup of coffee the information entropy starts at a maximum, and at equilibrium it reaches a minimum. That's because almost all molecules end up at the same energy level, becoming identical characters in an informational message, so the spread of different energies available is reduced at thermal equilibrium.

But if we consider location rather than energy, there is lots of information disorder when particles are distributed randomly in space — the information required to keep track of them is considerable. When they consolidate themselves together under gravitational attraction, however, the way planets, stars and galaxies do, the information gets compacted and more manageable. In simulations, that's exactly what occurs when a system tries to function more efficiently. So matter flowing under the influence of gravity need not be the result of a force at all. Perhaps it is a function of the way the universe compacts the information that it has to work with.
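A toy illustration of that location argument (my own sketch, not from Vopson's paper): treat space as discrete cells and measure the Shannon entropy of where particles sit. Scattered matter takes many bits to describe; collapsed matter takes almost none.

```python
import math
from collections import Counter

def positional_entropy_bits(cell_indices):
    """Shannon entropy, in bits, of the distribution of particles over cells."""
    counts = Counter(cell_indices)
    total = len(cell_indices)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

scattered = list(range(12))  # 12 particles in 12 different cells: maximal disorder
clumped = [0] * 12           # the same particles gathered into one cell

print(positional_entropy_bits(scattered))  # ~3.585 bits of information to track
print(positional_entropy_bits(clumped))    # 0.0 bits: one cell describes everything
```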
Here, space is not continuous and smooth. Space is made up of tiny 'cells' of information, similar to pixels in a photo or squares on the screen of a computer game. In each cell is basic information about the universe — where, say, a particle is — and all the cells are gathered together to make the fabric of the universe. If you place items within this space, the system gets more complex. But when all of those items come together to be one item instead of many, the information is simple again. The universe, in this view, naturally tends to seek states of minimal information entropy.

The real kicker is that if you do the numbers, the entropic 'informational force' created by this tendency toward simplicity is exactly equivalent to Newton's law of gravitation, as shown in my paper. This theory builds on earlier studies of 'entropic gravity' but goes a step further. In connecting information dynamics with gravity, we are led to the interesting conclusion that the universe could be running on some kind of cosmic software. In an artificial universe, maximum-efficiency rules would be expected. Symmetries would be expected. Compression would be expected. And law — that is, gravity — would be expected to emerge from these computational rules.

We may not yet have definitive evidence that we live in a simulation. But the deeper we look, the more our universe seems to behave like a computational process.

Melvin M. Vopson is an associate professor of physics at the University of Portsmouth. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Yahoo
07-02-2025
- Science
- Yahoo
Sacred laws of entropy also work in the quantum world, suggests study
According to the second law of thermodynamics, the entropy of an isolated system tends to increase over time. Everything around us follows this law: the melting of ice, a room becoming messier, hot coffee cooling down, and aging are all examples of entropy increasing over time. Until now, scientists believed that quantum physics was an exception to this law. This is because, about 90 years ago, the mathematician John von Neumann published a series of papers in which he showed mathematically that if we have complete knowledge of a system's quantum state, its entropy remains constant over time.

However, a new study from researchers at the Vienna University of Technology (TU Wien) challenges this notion. It suggests that the entropy of a closed quantum system also increases over time until it reaches its peak level. 'It depends on what kind of entropy you look at. If you define the concept of entropy in a way that is compatible with the basic ideas of quantum physics, then there is no longer any contradiction between quantum physics and thermodynamics,' the TU Wien team notes.

The study authors highlighted an important detail in von Neumann's explanation. He stated that the entropy of a quantum system doesn't change when we have full information about the system. However, quantum theory itself tells us that it's impossible to have complete knowledge of a quantum system, as we can only measure certain properties, and even those with uncertainty. This means that von Neumann entropy isn't the correct approach for looking at the randomness and chaos in quantum systems.

So then, what's the right way? Well, 'instead of calculating the von Neumann entropy for the complete quantum state of the entire system, you could calculate an entropy for a specific observable,' the study authors explain. This can be achieved using Shannon entropy, a concept proposed by mathematician Claude Shannon in 1948 in his paper 'A Mathematical Theory of Communication'. Shannon entropy measures the uncertainty in the outcome of a specific measurement; it tells us how much new information we gain when observing a quantum system. "If there is only one possible measurement result that occurs with 100% certainty, then the Shannon entropy is zero. You won't be surprised by the result, you won't learn anything from it. If there are many possible values with similarly large probabilities, then the Shannon entropy is large," Florian Meier, first author of the study and a researcher at TU Wien, said.

When we reimagine the entropy of a quantum system through the lens of Claude Shannon, we begin with a quantum system in a state of low Shannon entropy, meaning that the system's behavior is relatively predictable. For example, imagine you have an electron, and you decide to measure its spin (which can be up or down). If you already know the spin is 100% up, the Shannon entropy is zero: we learn nothing new from the measurement. If the spin is 50% up and 50% down, the Shannon entropy is high, because we are equally likely to get either result and the measurement gives us new information. As time passes, the entropy increases, since we become less sure about the outcome. Eventually, however, the entropy reaches a point where it levels off, meaning the system's unpredictability stabilizes. This mirrors what we observe in classical thermodynamics, where entropy increases until it reaches equilibrium and then stays constant.
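Shannon's formula makes the spin example quantitative: the entropy of a measurement with outcome probabilities p_i is H = -Σ p_i log2(p_i), in bits. A minimal sketch of the two cases described above:

```python
import math

def shannon_entropy_bits(probabilities):
    """H = -sum(p * log2(p)): the expected surprise of a measurement, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy_bits([1.0]))       # 0.0 -- spin known to be up, nothing learned
print(shannon_entropy_bits([0.5, 0.5]))  # 1.0 -- maximally uncertain spin, one full bit
```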
According to the study, this behavior of entropy also holds for quantum systems involving many particles and producing multiple outcomes. "This shows us that the second law of thermodynamics is also true in a quantum system that is completely isolated from its environment. You just have to ask the right questions and use a suitable definition of entropy," Marcus Huber, senior study author and an expert in quantum information science at TU Wien, said. The study is published in the journal PRX Quantum.