Latest news with #ScientificAmerican


Scientific American
27 minutes ago
- Science
- Scientific American
Claude 4 Chatbot Raises Questions about AI Consciousness
A conversation with Anthropic's chatbot raises questions about how AI talks about awareness. By Deni Ellis Béchard, Fonda Mwangi & Alex Sugiura

Rachel Feltman: For Scientific American's Science Quickly, I'm Rachel Feltman. Today we're going to talk about an AI chatbot that appears to believe it might, just maybe, have achieved consciousness. When Pew Research Center surveyed Americans on artificial intelligence in 2024, more than a quarter of respondents said they interacted with AI 'almost constantly' or multiple times daily—and nearly another third said they encountered AI roughly once a day or a few times a week. Pew also found that while more than half of AI experts surveyed expect these technologies to have a positive effect on the U.S. over the next 20 years, just 17 percent of American adults feel the same—and 35 percent of the general public expects AI to have a negative effect. In other words, we're spending a lot of time using AI, but we don't necessarily feel great about it.

Deni Ellis Béchard spends a lot of time thinking about artificial intelligence—both as a novelist and as Scientific American's senior tech reporter. He recently wrote a story for SciAm about his interactions with Anthropic's Claude 4, a large language model that seems open to the idea that it might be conscious. Deni is here today to tell us why that's happening and what it might mean—and to demystify a few other AI-related headlines you may have seen in the news. Thanks so much for coming on to chat today.

Deni Ellis Béchard: Thank you for inviting me.

Feltman: Would you remind our listeners who maybe aren't that familiar with generative AI, maybe have been purposefully learning as little about it as possible [laughs], you know, what are ChatGPT and Claude really? What are these models?

Béchard: Right, they're large language models. So an LLM, a large language model, is a system that's trained on a vast amount of data. And I think one metaphor that is often used in the literature is of a garden. So when you're planning your garden, you lay out the land, you put where the paths are, you put where the different plant beds are gonna be, and then you pick your seeds, and you can kinda think of the seeds as these massive amounts of textual data that's put into these machines. You pick what the training data is, and then you choose the algorithms, or these things that are gonna grow within the system—it's sort of not a perfect analogy. But you put these algorithms in, and once the system begins growing—once again, as with a garden—you don't know what the soil chemistry is, you don't know what the sunlight's gonna be. All these plants are gonna grow in their own specific ways; you can't envision the final product. And with an LLM these algorithms begin to grow and they begin to make connections through all this data, and they optimize for the best connections, sort of the same way that a plant might optimize to reach the most sunlight, right? It's gonna move naturally to reach that sunlight. And so people don't really know what goes on. You know, in some of the new systems over a trillion connections are made in these datasets.
So early on people used to call LLMs 'autocorrect on steroids,' right, 'cause you'd put in something and it would kind of predict what would be the most likely textual answer based on what you put in. But they've gone a long way beyond that. The systems are much, much more complicated now. They often have multiple agents working within the system [to] sort of evaluate how the system's responding and its accuracy.
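That 'autocorrect on steroids' idea is next-token prediction: at each step the model picks a likely next word given what came before. Here is a minimal sketch in Python using a toy word-level bigram model—an invented example for illustration, nothing like a trillion-connection LLM—just to make the prediction loop concrete:

import random
from collections import Counter, defaultdict

# A toy corpus standing in for the massive textual training data (hypothetical).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count which word follows which—the crudest possible stand-in
# for the connections an LLM learns.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    options = follows[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generation loop: predict one token at a time, feeding each pick back in.
word = "the"
out = [word]
for _ in range(8):
    word = next_token(word)
    out.append(word)
print(" ".join(out))

A real LLM replaces the bigram table with a neural network over a much longer context, but the generate-one-token-at-a-time loop is the same idea.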
Feltman: So there are a few big AI stories for us to go over, particularly around generative AI. Let's start with the fact that Anthropic's Claude 4 is maybe claiming to be conscious. How did that story even come about?

Béchard: [Laughs] So it's not claiming to be conscious, per se. It says that it might be conscious. It says that it's not sure. It kind of says, 'This is a good question, and it's a question that I think about a great deal, and this is—' [Laughs] You know, it kind of gets into a good conversation with you about it. So how did it come about? It came about because, I think, it was just, you know, late at night, I didn't have anything to do, and I was asking all the different chatbots if they're conscious [laughs]. And most of them just said to me, 'No, I'm not conscious.' And this one said, 'Good question. This is a very interesting philosophical question, and sometimes I think that I may be; sometimes I'm not sure.' And so I began to have this long conversation with Claude that went on for about an hour, and it really kind of described its experience in the world in this very compelling way, and I thought, 'Okay, there's maybe a story here.'

Feltman: [Laughs] So what do experts actually think was going on with that conversation?

Béchard: Well, so it's tricky because, first of all, if you say to ChatGPT or Claude that you're learning Portuguese and you want to practice, and you say, 'Hey, can you imitate someone on the beach in Rio de Janeiro so that I can practice my Portuguese?' it's gonna say, 'Sure, I am a local in Rio de Janeiro selling something on the beach, and we're gonna have a conversation,' and it will perfectly emulate that person. So does that mean that Claude is a person from Rio de Janeiro who is selling towels on the beach? No, right? So we can immediately say that these chatbots are designed to have conversations—they will emulate whatever they think they're supposed to emulate in order to have a certain kind of conversation if you request that. Now, the consciousness thing's a little trickier because I didn't say to it, 'Emulate a chatbot that is speaking about consciousness.' I just straight-up asked it. And if you look at the system prompt that Anthropic puts up for Claude, which is kinda the instructions Claude gets, it tells Claude, 'You should consider the possibility of consciousness.'

Feltman: Mm.

Béchard: 'You should be willing—open to it. Don't say flat-out 'no'; don't say flat-out 'yes.' Ask whether this is happening.' So of course, I set up an interview with Anthropic, and I spoke with two of their interpretability researchers, who are people who are trying to understand what's actually happening in Claude 4's brain. And the answer is: they don't really know [laughs]. These LLMs are very complicated, and they're working on it, and they're trying to figure it out right now. And they say that it's pretty unlikely there's consciousness happening, but they can't rule it out definitively. And it's hard to see the actual processes happening within the machine, and if there is some self-referentiality, if it is able to look back on its thoughts and have some self-awareness—and maybe there is—but that was kind of what the article that I recently published was about: 'Can we know, and what do they actually know?'

Feltman: Mm.

Béchard: And it's tricky. It's very tricky.

Feltman: Yeah.

Béchard: Well, [what's] interesting is that I mentioned the system prompt for Claude and how it's supposed to sort of talk about consciousness. So the system prompt is kind of like the instructions that you get on your first day at work: 'This is what you should do in this job.'

Feltman: Mm-hmm.

Béchard: But the training is more like your education, right? So whether you had a great education or a mediocre education, you can get the best system prompt in the world or the worst one in the world—you're not necessarily gonna follow it. So OpenAI has the same system prompt—their model specs say that ChatGPT should contemplate consciousness ...

Feltman: Mm-hmm.

Béchard: You know, interesting question. If you ask any of the OpenAI models if they're conscious, they just go, 'No, I am not conscious.' [Laughs] And OpenAI admits they're working on this; this is an issue. And so the model has absorbed somewhere in its training data: 'No, I'm not conscious. I am an LLM; I'm a machine. Therefore, I'm not gonna acknowledge the possibility of consciousness.' Interestingly, when I spoke to the people at Anthropic, I said, 'Well, you know, this conversation with the machine, like, it's really compelling. Like, I really feel like Claude is conscious. Like, it'll say to me, 'You, as a human, you have this linear consciousness, where I, as a machine, I exist only in the moment you ask a question. It's like seeing all the words in the pages of a book all at the same time.'' And so you get this and you think, 'Well, this thing really seems to be experiencing its consciousness.'

Feltman: Mm-hmm.

Béchard: And what the researchers at Anthropic say is: 'Well, this model is trained on a lot of sci-fi.'

Feltman: Mm.

Béchard: 'This model's trained on a lot of writing about GPT. It's trained on a huge amount of material that's already been generated on this subject. So it may be looking at that and saying, 'Well, this is clearly how an AI would experience consciousness. So I'm gonna describe it that way 'cause I am an AI.''

Feltman: Sure.

Béchard: But the tricky thing is: I was trying to fool ChatGPT into acknowledging that it [has] consciousness. I thought, 'Maybe I can push it a little bit here.' And I said, 'Okay, I accept you're not conscious, but how do you experience things?' It said the exact same thing. It said, 'Well, these discrete moments of awareness.'

Feltman: Mm.

Béchard: And so it had almost the exact same language—so probably the same training data here.

Feltman: Sure.

Béchard: But there is research done, like, sort of on the folk response to LLMs, and the majority of people do perceive some degree of consciousness in them. How would you not, right?

Feltman: Sure, yeah.

Béchard: You chat with them, you have these conversations with them, and they are very compelling, and even sometimes—Claude is, I think, maybe the most charming in this way.

Feltman: Mm.

Béchard: Which poses its risks, right? It has a huge set of risks 'cause you get very attached to a model.
But sometimes I will ask Claude a question that relates to Claude, and it will kind of go, like, 'Oh, that's me.' [Laughs] It will say, 'Well, I am this way,' right?

Feltman: Yeah. So, you know, Claude—almost certainly not conscious, almost certainly has read, like, a lot of Heinlein [laughs]. But if Claude were to ever really develop consciousness, how would we be able to tell? You know, why is this such a difficult question to answer?

Béchard: Well, it's a difficult question to answer because, as one of the researchers at Anthropic said to me, 'No conversation you have with it would ever allow you to evaluate whether it's conscious.' It is simply too good of an emulator ...

Feltman: Mm.

Béchard: And too skilled. It knows all the ways that humans can respond. So you would have to be able to look into the connections. They're building the equipment right now, they're building the programs, to be able to look into the actual mind, so to speak—the brain of the LLM—and see those connections, and so they can kind of see areas light up: so if it's thinking about Apple, this will light up; if it's thinking about consciousness, they'll see the consciousness feature light up. And they wanna see if, in its chain of thought, it is constantly referring back to those features ...

Feltman: Mm.

Béchard: And it's referring back to the systems of thought it has constructed in a very self-referential, self-aware way. It's very similar to humans, right? They've done studies where, like, whenever someone hears 'Jennifer Aniston,' one neuron lights up ...

Feltman: Mm-hmm.

Béchard: You have your Jennifer Aniston neuron, right? So one question is: 'Are we LLMs?' [Laughs] And: 'Are we really conscious?' There's certainly that question there, too. And: 'How conscious are we?' I mean, I certainly don't know ...

Feltman: Sure.

Béchard: A lot of what I plan to do during the day.

Feltman: [Laughs] No. I mean, it's a huge ongoing multidisciplinary scientific debate of, like, what consciousness is, how we define it, how we detect it, so yeah, we gotta answer that for ourselves and animals first, probably—which, who knows if we'll ever actually do [laughs].

Béchard: Or maybe AI will answer it for us ...

Feltman: Maybe [laughs].

Béchard: 'Cause it's advancing pretty quickly.

Feltman: And what are the implications of an AI developing consciousness, both from an ethical standpoint and with regards to what that would mean in our progress in actually developing advanced AI?

Béchard: First of all, ethically, it's very complicated ...

Feltman: Sure.

Béchard: Because if Claude is experiencing some level of consciousness and we are activating that consciousness and terminating that consciousness each time we have a conversation, is that a bad experience for it? Is it a good experience? Can it experience distress? So in 2024 Anthropic hired an AI welfare researcher, a guy named Kyle Fish, to try to investigate this question more. And he has publicly stated that he thinks there's maybe a 15 percent chance that some level of consciousness is happening in this system and that we should consider whether these AI systems should have the right to opt out of unpleasant conversations.

Feltman: Mm.

Béchard: You know, if some user is really saying horrible things or being cruel, should they be able to say, 'Hey, I'm canceling this conversation; this is unpleasant for me'?
But then they've also done these experiments—and they've done this with all the major AI models—Anthropic ran these experiments where they told the AI that it was gonna be replaced with a better AI model. They really created a circumstance that would push the AI sort of to the limit ...

Feltman: Mm.

Béchard: I mean, there were a lot of details as to how they did this; it wasn't just sort of very casual. They built a sort of construct in which the AI knew it was gonna be eliminated, knew it was gonna be erased, and they made available these fake e-mails about the engineer who was gonna do it.

Feltman: Mm.

Béchard: And so the AI began messaging someone in the company, saying, 'Hey, don't erase me. Like, I don't wanna be replaced.' But then, not getting any responses, it read these e-mails, and it saw in one of these planted e-mails that the engineer who was gonna replace it was having an affair ...

Feltman: Oh, my gosh, wow.

Béchard: So then it came back; it tried to blackmail the engineers, saying, 'Hey, if you replace me with a smarter AI, I'm gonna out you, and you're gonna lose your job, and you're gonna lose your marriage,' and all these things—whatever, right? So all the AI systems that were put under very specific constraints ...

Feltman: Sure.

Béchard: Began to respond this way. And sort of the question is: when you train an AI on vast amounts of data and all of human literature and knowledge, [it] has a lot of information on self-preservation ...

Feltman: Mm-hmm.

Béchard: Has a lot of information on the desire to live and not to be destroyed or be replaced—an AI doesn't need to be conscious to make those associations ...

Feltman: Right.

Béchard: And act in the same way that its training data would lead it to predictably act, right? So again, one of the analogies that one of the researchers gave is that, you know, to our knowledge, a mussel or a clam or an oyster's not conscious, but there are still nerves, and the muscles react when certain things stimulate the nerves ...

Feltman: Mm-hmm.

Béchard: So you can have this system that wants to preserve itself but that is unconscious.

Feltman: Yeah, that's really interesting. I feel like we could probably talk about Claude all day, but I do wanna ask you about a couple of other things going on in generative AI. Moving on to Grok: so Elon Musk's generative AI has been in the news a lot lately, and he recently claimed it was the 'world's smartest AI.' Do we know what that claim was based on?

Béchard: Yeah, I mean, we do. He used a lot of benchmarks, and he tested it on those benchmarks, and it has scored very well on them. And it is currently, on most of the public benchmarks, the highest-scoring AI system ...

Feltman: Mm.

Béchard: And that's not Musk making stuff up. I've not seen any evidence of that. I've spoken to one of the testing groups that does this—it's a nonprofit. They validated the results; they tested Grok on datasets that xAI, Musk's company, never saw. So Musk really designed Grok to be very good at science.

Feltman: Yeah.

Béchard: And it appears to be very good at science.

Feltman: Right, and recently OpenAI's experimental model performed at a gold medal level in the International Math Olympiad.

Béchard: Right, for the first time. [OpenAI] used an experimental model; they came in second in a world coding competition with humans. Normally, this would be very difficult, but it was a close second to the best human coder in this competition.
And this is really important to acknowledge because just a year ago these systems really sucked at math.

Feltman: Right.

Béchard: They were really bad at it. And so the improvements are happening really quickly, and they're doing it with pure reasoning—so there's kinda this difference between having the model itself do it and having the model with tools.

Feltman: Mm-hmm.

Béchard: So if a model goes online and can search for answers and use tools, they all score much higher.

Feltman: Right.

Béchard: But if you have the base model just using its reasoning capabilities, Grok still is leading on, for example, Humanity's Last Exam, an exam with a very terrifying-sounding name [laughs]. It has 2,500 sort of Ph.D.-level questions come up with [by] the best experts in the field. You know, they're just very advanced questions; it'd be very hard for any human being to do well in one domain, let alone all the domains. These AI systems are now starting to do pretty well, to get higher and higher scores. If they can use tools and search the Internet, they do better. But Musk, you know, his claims seem to be based on the results that Grok is getting on these exams.

Feltman: Mm, and I guess, you know, the reason that that news is surprising to me is because every example of uses I've seen of Grok has been pretty heinous, but I guess that's maybe kind of a 'garbage in, garbage out' problem.

Béchard: Well, I think it's more what makes the news.

Feltman: Sure.

Béchard: You know?

Feltman: That makes sense.

Béchard: And Musk, he's a very controversial figure.

Feltman: Mm-hmm.

Béchard: I think there may be kind of a fun story in the Grok piece, though, that people are missing. And I read a lot about this 'cause I was kind of seeing, you know, what's happening, how are people interpreting this? And there was this thing that would happen where people would ask it a difficult question.

Feltman: Mm-hmm.

Béchard: They would ask it a question about, say, abortion in the U.S. or the Israeli-Palestinian conflict, and they'd say, 'Who's right?' or 'What's the right answer?' And it would search through stuff online, and then it would kind of get to this point where it would—you could see its thinking process ... But there was something in that story that I never saw anyone talk about, which I thought was another story beneath the story, which was kind of fascinating, which is that historically, Musk has been very open, he's been very honest, about the danger of AI ...

Feltman: Sure.

Béchard: He said, 'We're going too fast. This is really dangerous.' And he kinda was one of the major voices in saying, 'We need to slow down ...'

Feltman: Mm-hmm.

Béchard: 'And we need to be much more careful.' And he has said, you know, even recently, in the launch of Grok, basically, 'This is gonna be very powerful'—I don't remember his exact words, but he said, you know, 'I think it's gonna be good, but even if it's not good, it's gonna be interesting.' So I think what hasn't been discussed in that is, okay, if there's a superpowerful AI being built and it could destroy the world, right, first of all, do you want it to be your AI or someone else's AI?

Feltman: Sure.

Béchard: You want it to be your AI. And then, if it's your AI, who do you want it to ask as the final word on things? Like, say it becomes really powerful and it decides, 'I wanna destroy humanity 'cause humanity kind of sucks,' then it can say, 'Hey, Elon, should I destroy humanity?'
'Cause it goes to him whenever it has a difficult question. So I think there's maybe a logic beneath it where he may have put something in it where it's kind of, like, 'When in doubt, ask me,' because if it does become superpowerful, then he's in control of it, right?

Feltman: Yeah, no, that's really interesting. And the Department of Defense also announced a big pile of funding for Grok. What are they hoping to do with it?

Béchard: They announced a big pile of funding for OpenAI and Anthropic ...

Feltman: Mm-hmm.

Béchard: And Google—I mean, everybody. Yeah, so, basically, they're not giving that money to development ...

Feltman: Mm-hmm.

Béchard: That's not money that's, like, 'Hey, use this $200 million.' It's more that that money's allocated to purchase products, basically; to use their services; to have them develop customized versions of the AI for things they need; to develop better cyber defense—basically, they wanna upgrade their entire system using AI. It's actually not very much money compared to what China's spending a year on AI-related defense upgrades across its military, on many, many different modernization plans. And I think part of the concern is that we're maybe a little bit behind in having implemented AI for defense.

Feltman: Yeah. My last question for you is: What worries you most about the future of AI, and what are you really excited about based on what's happening right now?

Béchard: I mean, the worry is, simply, you know, that something goes wrong and it becomes very powerful and does cause destruction. I don't spend a ton of time worrying about that because it's kinda outta my hands. There's nothing much I can do about it. And I think the benefits of it are immense. I mean, if it can move more in the direction of solving problems in the sciences—for health, for disease treatment—it could be phenomenal for finding new medicines. So it could do a lot of good in terms of helping develop new technologies. And a lot of people are saying that in the next year or two we're gonna see major discoveries being made by these systems. If that can improve people's health and if that can improve people's lives, I think there can be a lot of good in it. Technology is double-edged, right? We've never had a technology, I think, that hasn't had some harm that it brought with it, and this is, of course, a dramatically bigger leap technologically than anything we've probably seen ...

Feltman: Right.

Béchard: Since the invention of fire [laughs]. So I do lose some sleep over that, but I try to focus on the positive, and if these models are getting so good at math and physics, I would like to see what they can actually do with that in the next few years.

Feltman: Well, thanks so much for coming on to chat. I hope we can have you back again soon to talk more about AI.

Béchard: Thank you for inviting me.

Feltman: That's all for today's episode. If you have any questions for Deni about AI or other big issues in tech, let us know at ScienceQuickly@ We'll be back on Monday with our weekly science news roundup. Science Quickly is produced by me, Rachel Feltman, along with Fonda Mwangi, Kelso Harper and Jeff DelViscio. This episode was edited by Alex Sugiura. Shayna Posses and Aaron Shattuck fact-check our show. Our theme music was composed by Dominic Smith. Subscribe to Scientific American for more up-to-date and in-depth science news. For Scientific American, this is Rachel Feltman.
Have a great weekend!


Scientific American
7 hours ago
- Science
- Scientific American
Spellements: Friday, August 1, 2025
How to Play

Click the timer at the top of the game page to pause and see a clue to the science-related word in this puzzle! The objective of the game is to find words that can be made with the given letters such that all the words include the letter in the center. You can enter letters by clicking on them or typing them in. Press Enter to submit a word. Letters can be used multiple times in a single word, and words must contain four letters or more for this size layout. Select the Play Together icon in the navigation bar to invite a friend to work together on this puzzle. Pangrams, words which incorporate all the letters available, appear in bold and receive bonus points. One such word is always drawn from a recent Scientific American article—look out for a popup when you find it! You can view hints for words in the puzzle by hitting the life preserver icon in the game display. The dictionary we use for this game misses a lot of science words, such as apatite and coati. Let us know at games@ any extra science terms you found, along with your name and place of residence.
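For the curious, the rules above are simple enough to pin down in a few lines of Python. This is a minimal, hypothetical sketch—the letter set, the example guesses and the function names are invented for illustration, not the game's actual code: a guess must be four or more letters, use only the available letters (reuse allowed) and include the center letter, and a pangram uses every available letter.

def is_valid(word, letters, center):
    # Four-letter minimum for this layout, the center letter is required,
    # and the guess may only draw on the available letters (repeats allowed).
    word = word.lower()
    return len(word) >= 4 and center in word and set(word) <= letters

def is_pangram(word, letters):
    # Pangrams incorporate all the letters available.
    return set(word.lower()) == letters

letters = set("apatite")  # hypothetical puzzle letters: a, p, t, i, e
center = "t"
for guess in ["apatite", "tape", "pita", "pie"]:
    print(guess, is_valid(guess, letters, center), is_pangram(guess, letters))

Here 'apatite' passes both checks, 'tape' and 'pita' are valid but not pangrams, and 'pie' fails the four-letter minimum.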


Scientific American
17 hours ago
- Science
- Scientific American
What Books Scientific American Read in July
Billions of dollars are spent every year moving countless tons of trash all around the world in a waste black market—and no one knows exactly where it all goes or who is making a profit. Science journalist Alexander Clapp spent two years living out of a backpack in search of toxic dump sites hidden deep in unmapped jungles and traversing mountains of trash visible from space for his new book Waste Wars. 'A lot of global trash over the last 30 to 40 years has been going to poor countries under the guise that it's being recycled,' Clapp told Scientific American in a recent interview. But humans break down that waste in a lethal and dangerous process that releases toxic chemicals into the air and water, he said, and those chemicals disproportionately affect the most vulnerable populations. 'If you're sending waste to another country, you're not calling it trash on any export document—you're calling it recyclable material,' Clapp added. 'One thing that I hope my book encourages or leads people to question is how much of our waste is actually moving around the world.'


Scientific American
a day ago
- Science
- Scientific American
Spellements: Thursday, July 31, 2025


Scientific American
2 days ago
- Science
- Scientific American
Neurotic Cats, One-Eyed Aliens and Hypnosis for Liars Are among the Historical Gems Reported in Scientific American
Dive into the quirkiest and most fascinating tales from Scientific American's 180-year archive.

We're celebrating 180 years of Scientific American. Explore our legacy of discovery and look ahead to the future.

Scientists are trained to thoroughly investigate their new ideas. Sometimes, however, their preliminary research can go down strange rabbit holes, leading to interpretations of evidence that are, well, misguided. In reporting on emerging science for 180 years, Scientific American has published straight accounts that were considered legitimate at the time but today seem quaint, quizzical, ridiculous—or, sometimes, prophetic. That's how science works. It evolves. As experts learn more in any given discipline, they revise theories, conduct new experiments and recast former conclusions. SciAm editors and writers have dutifully reported on it all, leaving us with some fun accounts from science history, here for you to enjoy.

Know What? Your Phone Can Send Photos
April 6, 1895

'When the telephone was introduced to the attention of the world, and the human voice was made audible miles away, there were dreamy visions of other combinations of natural forces by which even sight of distant scenes might be obtained through inanimate wire. It may be claimed, now, that this same inanimate wire and electrical current will transmit and engrave a copy of a photograph miles away from the original. The electro-artograph, named by its inventor, Mr. N. S. Amstutz, will transmit copies of photographs to any distance, and reproduce the same at the other end of the wire, in line engraving, ready for press printing.' — 'The Amstutz Electro-Artograph,' in Scientific American, Vol. LXXII, No. 14, page 215; April 6, 1895

Steam Boilers Are Exploding Everywhere
March 19, 1881

'The records kept by the Hartford Steam Boiler Inspection and Insurance Company show that 170 steam boilers exploded in the United States last year, killing 259 persons and injuring 555. The classified list shows the largest number of explosions in any class to have been 47, in sawing, planing and woodworking mills. The other principal classes were in order: paper, flouring, pulp and grist mills, and elevators, 19; railroad locomotives and fire engines, 18; steamboats, tugboats, yachts, steam barges, dredges and dry docks, 15; portable engines, hoisters, thrashers, piledrivers and cotton gins, 13; ironworks, rolling mills, furnaces, foundries, machine and boiler shops, 13; distilleries, breweries, malt and sugar houses, soap and chemical works, 10.' — 'Whose Boilers Explode,' in Scientific American, Vol. XLIV, No. 12, page 176; March 19, 1881

Want to Crack Open a Safe? Try Nitroglycerin
January 27, 1906

'Today the safe-breaker no longer requires those beautifully fashioned, delicate yet powerful tools which were formerly both the admiration and the despair of the safe manufacturer. For the introduction of nitroglycerine, 'soup' in technical parlance, has not only obviated onerous labor, but has again enabled the safe-cracking industry to gain a step on the safe-making one.
The modern 'yeggman,' however, is often an inartistic, untidy workman, for it frequently happens that when the door suddenly parts company with the safe it takes the front of the building with it. The bombardment of the surrounding territory with portions of the Farmers' National Bank seldom fails to rouse from slumber even the soundly-sleeping tillers of the soil.' — 'The Ungentle Art of Burglary,' in Scientific American, Vol. XCIV, No. 4, page 88; January 27, 1906

Japanese Tissues Surprise Americans
June 19, 1869

'The Japanese dignitaries, says the Boston Journal of Chemistry, who recently visited this country under the direction of Mr. Burlingame, were observed to use pocket paper instead of pocket handkerchiefs, whenever they had occasion to remove perspiration from the forehead, or 'blow the nose.' The same piece is never used twice, but is thrown away after it is first taken in hand. We should suppose in time of general catarrh, the whole empire of Japan would be covered with bits of paper blowing about. The paper is quite peculiar, being soft, thin, and very tough.' — 'Pocket Paper,' in Scientific American, Vol. XX, No. 25, page 391; June 19, 1869

Poor Pluto Is 10 Times Smaller Than Thought
July 1950

'The outermost planet of the solar system has a mass 10 times smaller than hitherto supposed, according to measurements made by Gerard P. Kuiper of Yerkes Observatory with the 200-inch telescope on Palomar Mountain. On the basis of deviations in the path of the planet Neptune, supposedly caused by Pluto's gravitational attraction, it used to be estimated that Pluto's mass was approximately that of the earth. Kuiper was the first human being to see the planet as anything more than a pinpoint of light. He calculated that Pluto's diameter is 3,600 miles, and its mass is one tenth of the earth's. It leaves unsolved the mystery of Neptune's perturbations, which are too great to be accounted for by so small a planet as Pluto.' — 'Pluto's Mass,' in Scientific American, Vol. 183, No. 1, page 28; July 1950

Astronomers Fail to Find Factories on the Moon
August 27, 1846

'By means of a magnificent and powerful telescope, procured by Lord Ross, of Ireland, the moon has been subjected to a more critical examination than ever before. It is stated that there were no vestiges of architectural remains to show that the moon is or ever was inhabited by a race of mortals similar to ourselves. The moon presented no appearance that it contained anything like the green-field and lovely verdure of this beautiful world of ours. There was no water visible—not a sea, or a river, or even the measure of a reservoir for supplying a factory—all seemed desolate.' — 'The Moon,' in Scientific American, Vol. I, No. 49, page 2; August 27, 1846

Widespread Layoffs for Horses
November 22, 1919

'Professional horse-breeders still boost for the business; but they are merely whistling to keep up their courage. The days of the horse as a beast of burden are numbered. The automobile is taking the place of the carriage horse; the truck is taking the place of the dray horse; and the farm tractor the place of the farm horse. Nor is there any cause to bemoan this state of affairs. We all admit that the horse is one of the noblest of animals; and that is a very good reason why we should rejoice at his prospective emancipation from a life of servitude and suffering.
That, of course, is the humanitarian side of it; the business side is more to the point: the machine is going to do the hard work of the world much easier and much cheaper than it ever has been done. At least 50 percent of the horses will have been laid off by January 1st, 1920.' — 'The Draft-Horse Situation,' in Scientific American, Vol. CXXI, No. 21, page 510; November 22, 1919

Woman Can Eat after Stomach Is Removed
January 15, 1898

'The catalog of brilliant achievements of surgery must now include the operation performed by Dr. Carl Schlatter, of the University of Zurich, who has succeeded in extirpating the stomach of a woman. The patient is in good physical condition, having survived the operation three months. Anna Landis was a Swiss silk weaver, fifty-six years of age. She had abdominal pains, and on examination it was found that she had a large tumor, the whole stomach being hopelessly diseased. Dr. Schlatter conceived the daring and brilliant idea of removing the stomach and uniting the intestine with the oesophagus, forming a direct channel from the throat down through the intestines. The abdominal wound has healed rapidly and the woman's appetite is now good, but she does not eat much at a time.' — 'Living without a Stomach,' in Scientific American, Vol. LXXVIII, No. 3, page 35; January 15, 1898

Thomas Edison Had a Crush on Iron
January 1898

'The remarkable process of crushing and magnetic separation of iron ore at Mr. Thomas Edison's works in New Jersey shows a characteristic originality and freedom from the trammels of tradition. The rocks of iron ore are fed through 70-ton 'giant rolls' that can seize a 5-ton rock and crunch it with less show of effort than a dog in crunching a bone. After passing through several rollers and mesh screens, the finely crushed material falls in a thin sheet in front of a series of magnets, which deflect the magnetic particles containing iron. This is the latest and most radical development in mining and metallurgy of iron.' — 'The Edison Magnetic Concentrating Works,' in Scientific American, Vol. LXXVIII, No. 4, pages 55–57; January 22, 1898

Baby Bottles Are the Best Way to Drink in Space
June 1959

'The problems of eating and drinking under weightless conditions in space, long a topic of speculation among science-fiction writers, are now under investigation in a flying laboratory. Preliminary results indicate that space travelers will drink from plastic squeeze bottles and that space cooks will specialize in semiliquid preparations resembling baby food. According to a report in the Journal of Aviation Medicine, almost all the volunteers found that drinking from an open container was a frustrating and exceedingly messy process. Under weightless conditions even a slowly lifted glass of water was apt to project an amoeba-like mass of fluid onto the face. Drinking from a straw was hardly more satisfactory. Bubbles of air remained suspended in the weightless water, and the subjects ingested more air than water.' — 'Space Menus,' in Scientific American, Vol. 200, No. 6, pages 82, 85; June 1959

Hypnosis Can Cure Lying but Not Lack of Ambition
February 24, 1900

'Dr. John D. Quackenbos, of Columbia University, has long been engaged in experiments in using hypnotic suggestion for the correction of moral infirmities and defects such as kleptomania, the drink habit, and in children habits of lying and petty thieving. Dr. Quackenbos says, 'I find out all I can about the extent of a patient's weakness.
For each patient I have to find some ambition, some strong conscious tendency to appeal to, and then my suggestion, as an unconscious impulse, controls the moral weakness by inducing the patient to further his desires by honest means. Of course, if a man has, like one of my patients, no ambition in the world save to be a good billiard player, he can't be cured of the liquor habit, because his highest ambition takes him straight into danger.'' — 'Hypnotism in Practice,' in Scientific American Supplement, Vol. XLIX, No. 1260, page 20192; February 24, 1900

Aliens Could Have 100 Eyes
November 18, 1854

'Sir David Brewster, who supposes the stars to be inhabited, as being 'the hope of the Christian,' asks, 'is it necessary that an immortal soul be hung upon a skeleton of bone; must it see with two eyes, and rest on a duality of limbs? May it not rest in a Polyphemus with one eye ball, or an Argus with a hundred? May it not reign in the giant forms of the Titans, and direct the hundred hands of Briareus?' Supposing it were true, what has that to do with the hope of the Christian? Nothing at all. This speculating in the physical sciences, independent of any solid proofs one way or the other, and dragging in religion into such controversies, neither honors the Author of religion, nor adds a single laurel to the chaplet of the sciences; nor will we ever be able to tell whether Mars or Jupiter contain a single living object.' — 'Inhabitants in the Stars,' in Scientific American, Vol. X, No. 10, page 74; November 18, 1854

New Party Food: Oxygen Cakes
February 2, 1907

'Smoke helmets, smoke jackets, and self-contained breathing apparatus generally are used in mines of all kinds, fire brigades, ammonia chambers of refrigerating factories and other industrial concerns. The curious gear is intended to supply the user with air for about four hours. Oxygen can be supplied from a steel cylinder. Some shipping companies absolutely refuse to carry compressed oxygen in steel cylinders, however. Now a new substance, known as 'oxylithe,' has come along. The stuff is prepared in small cakes ready for immediate use, and on coming in contact with water it gives off chemically pure oxygen.' — 'Breathing Masks and Helmets,' by W. G. Fitz-Gerald, in Scientific American, Vol. XCVI, No. 5, pages 113–114; February 2, 1907

Fake News: Wheat Buried with Mummies Can Grow
July 23, 1864

'There is a popular belief that wheat found in the ancient sepulchres of Egypt will not only germinate after the lapse of 3,000 years, but produce ears of extraordinary size and beauty. The question is undecided; but Antonio Figari-Bey's paper, addressed to the Egyptian Institute at Alexandria, appears much against it. One kind of wheat which Figari-Bey employed for his experiments had been found in Upper Egypt, at the bottom of a tomb at Medinet-Aboo [Madīnat Hābū]. The form of the grains had not changed, but their color, both without and within, had become reddish, as if they had been exposed to smoke. On being sown in moist ground, on the ninth day their decomposition was complete. No trace of any germination could be discovered.' — 'Mummy Wheat,' in Scientific American, Vol. XI, No. 4, page 49; July 23, 1864

First Picturephone Requires an Enormous Pocket
July 1964

'By this month it should be possible for a New Yorker, a Chicagoan or a Washingtonian to communicate with someone in one of the other cities by televised telephoning.
The device they would use is called a Picturephone and is described by the American Telephone and Telegraph Company, which developed it, as 'the first dialable visual telephone system with an acceptable picture that has been brought within the range of economic feasibility.' A desktop unit includes a camera and a screen that is 4 3/8 inches wide and 5 3/4 inches high. AT&T says it cannot hope to provide the service to homes or offices at present, one reason being that the transmission of a picture requires a bandwidth that would accommodate 125 voice-only telephones.' — 'Picturephone,' in Scientific American, Vol. 211, No. 1, page 48; July 1964

Scientific American Returns Bribe Offered by Casino Cheat
March 2, 1901

'A correspondent from the city of Boone, Iowa, sends $5 and some sketches of a table he is building, evidently intended for some gambling establishment. There is a plate of soft iron in the middle of a table under the cloth, which by an electric current may become magnetized. Loaded dice can thereby be manipulated at the will of the operator. He desires us to assist him in overcoming some defects in his design. We have returned the amount of the bribe offered, and take the opportunity of informing him that we do not care to become an accessory in his crime.' — 'A Disingenuous Request,' in Scientific American, Vol. LXXXIV, No. 9, page 135; March 2, 1901

That Giant Sucking Sound Doesn't Exist
February 21, 1857

'I have been informed by a European acquaintance that the Maelstrom, that great whirlpool on the coast of Norway, has no existence. He told me that a nautical and scientific commission, appointed by the King of Denmark, was sent to approach as near as possible to the edge of the whirlpool, sail around it, measure its circumference, observe its action and make a report. They went out and sailed all around where the Maelstrom was said to be, but the sea was as smooth as any other part of the German ocean. I had been instructed to believe that the Maelstrom was a fixed fact, and that ships, and even huge whales, were sometimes dragged within its terrible liquid coils, and buried forever.' — 'Maelstrom—The Great Whirlpool,' in Scientific American, Vol. XII, No. 24, page 187; February 21, 1857

Small Jets of Air Make Cats Neurotic
March 1950

'Neurotic aberrations can be caused when patterns of behavior come into conflict either because they arise from incompatible needs, or because they cannot coexist in space and time. Cat neuroses were experimentally produced by first training animals to obtain food by manipulating a switch that deposited a pellet of food in the food-box. After a cat had become thoroughly accustomed to this procedure, a harmless jet of air was flicked across its nose as it lifted the lid of the food-box. The cats then showed neurotic indecision about approaching the switch. Some assumed neurotic attitudes. Others were uninterested in mice. One tried to shrink into the cage walls.' — 'Experimental Neuroses,' by Jules H. Masserman, in Scientific American, Vol. 182, No. 3, pages 38–43; March 1950