logo
It's a girl — again! And again! Why a baby's sex isn't random.

Washington Post · 2 days ago
A baby's sex may not be up to mere chance.
A study published Friday in the journal Science Advances describes the odds of having a boy or girl as akin to flipping a weighted coin that is unique to each family. It found evidence that an infant's birth sex is associated with maternal age and specific genes.

Related Articles

The Number Of Questions That AGI And AI Superintelligence Need To Answer For Proof Of Intelligence

Forbes · 3 hours ago

How many questions will we need to ask AI to ascertain that we've reached AGI and ASI? In today's column, I explore an intriguing and unresolved AI topic that hasn't received much attention but certainly deserves considerable deliberation. The issue is this: how many questions should we be prepared to ask AI to ascertain whether it has reached the vaunted level of artificial general intelligence (AGI) and perhaps even attained artificial superintelligence (ASI)?

This is more than merely an academic philosophical concern. At some point, we should be ready to agree on whether AGI and ASI have been reached. The likely way to do so entails asking questions of the AI and then gauging the intellectual acumen expressed by the AI-generated answers. So, how many questions will we need to ask? Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we are currently with conventional AI.

About Testing For Pinnacle AI

Part of the difficulty facing humanity is that we don't have a surefire test to ascertain whether we have reached AGI and ASI. Some people proclaim rather loftily that we'll just know it when we see it. In other words, it's one of those fuzzy aspects that belies any kind of systematic assessment. An overall feeling or intuitive sense on our part will lead us to decide that pinnacle AI has been achieved. Period, end of story.

But that can't be the end of the story, since we ought to have a more mindful way of determining whether pinnacle AI has been attained. If the only means consists of a Gestalt-like emotional reaction, a whole lot of confusion will arise. You will get lots of people declaring that pinnacle AI exists, while lots of other people insist that the declaration is utterly premature. Immense disagreement will be afoot. See my analysis of people who already falsely believe that they have witnessed pinnacle AI, such as AGI and ASI, at the link here.

Some form of bona fide assessment or test that formalizes the matter is sorely needed. I've extensively discussed and analyzed a well-known AI-insider test known as the Turing Test; see the link here. The Turing Test is named after the famous mathematician and early computer scientist Alan Turing.
In brief, the idea is to ask questions of the AI, and if you cannot distinguish the responses from what a human would say, you might declare that the AI exhibits intelligence on par with humans.

Turing Test Falsely Maligned

Be cautious if you ask an AI techie what they think of the Turing Test. You will get quite an earful. It won't be pleasant.

Some believe that the Turing Test is a waste of time. They will argue that it doesn't work suitably and is outdated. We've supposedly gone far past its usefulness. You see, it was a test devised in 1950 by Alan Turing. That's over 75 years ago. Nothing from that long ago can apparently be applicable in our modern era of AI.

Others will haughtily tell you that the Turing Test has already been successfully passed. In other words, the Turing Test has purportedly been passed by existing AI. Lots of banner headlines say so. Thus, the Turing Test isn't of much utility, since we know that we don't yet have pinnacle AI, yet the Turing Test seems to say that we do.

I've repeatedly tried to set the record straight on this matter. The real story is that the Turing Test has been improperly applied. Those who claim the Turing Test has been passed are playing fast and loose with the famous testing method.

Flaunting The Turing Test

Part of the loophole in the Turing Test is that the number of questions and the type of questions are unspecified. It is up to the person or team opting to lean on the Turing Test to decide those crucial facets. This causes unfortunate trouble and problematic results.

Suppose that I decide to perform a Turing Test on ChatGPT, the immensely popular generative AI and large language model (LLM) that 400 million people are using weekly. I will seek to come up with questions that I can ask ChatGPT. I will also ask the same questions of my closest friend to see what answers they give. If I am unable to differentiate the answers of my human friend from those of ChatGPT, I shall summarily and loudly declare that ChatGPT has passed the Turing Test. The idea is that the generative AI has successfully mimicked human intellect to the degree that the human-provided answers and the AI-provided answers were essentially the same.

After coming up with fifty questions, some easy and some hard, I proceeded with my administration of the Turing Test. ChatGPT answered each question, and so did my friend. The answers by the AI and the answers by my friend were pretty much indistinguishable from each other. Voila, I can start telling the world that ChatGPT has passed the Turing Test. It only took me about an hour in total to figure that out. I spent half the time coming up with the questions and half the time getting the respective answers. Easy-peasy.

The Number Of Questions

Here's a thought for you to ponder. Do you believe that asking fifty questions is sufficient to determine whether intellectual acumen exists? That somehow doesn't seem sufficient. This is especially the case if we define AGI as a form of AI that is going to be intellectually on par with the entire range and depth of human intellect.

Turns out that the questions I came up with for my run of the Turing Test didn't include anything about chemistry, biology, or many other disciplines or domains. Why didn't I include those realms? Well, I had chosen to compose just fifty questions. You cannot probe with any semblance of depth and breadth across all human knowledge in a mere fifty questions.
Sure, you could cheat and ask a question that implores the person or the AI to rattle off everything they know. In that case, presumably, at some point, the 'answer' would include chemistry, biology, etc. That's not a viable approach, as I discuss at the link here, so let's put aside broad-strokes questions and aim for specific questions rather than smarmy catch-all questions.

How Many Questions Is Enough

I trust that you are willing to concede that the number of questions is important when performing a test that tries to ascertain intellectual capabilities. Let's try to come up with a number that makes some sense.

We can start with the number zero. Some believe that we shouldn't have to ask even one question. The AI has the onus to convince us that it has attained AGI or ASI. Therefore, we can merely sit back and see what the AI says to us. We either are ultimately convinced by the smooth talking, or we aren't. A big problem with the zero approach is that the AI could prattle on endlessly and might simply be doing a dump of everything it has patterned on. The beauty of asking questions is that you get an opportunity to jump around and potentially find blank spots. If the AI is only spouting whatever it has to say, the wool could readily be pulled over your eyes.

I suggest that we agree to use a non-zero count. We ought to ask at least one question. The difficulty with being constrained to one question is that we are back to the conundrum of either missing the boat and only hitting one particular nugget, or asking for the entire kitchen sink in an overly broad manner. Neither of those is satisfying.

Okay, we must ask at least two or more questions. I dare say that two doesn't seem high enough. Does ten seem like enough questions? Probably not. What about one hundred questions? Still doesn't seem sufficient. A thousand questions? Ten thousand questions? One hundred thousand questions? It's hard to judge where the right number might be. Maybe we can noodle on the topic and figure out a ballpark estimate that makes reasonable sense. Let's do that.

Recent Tests Of Top AI

You might know that every time one of the top AI makers comes out with a new version of their generative AI, they run a bunch of AI assessment tests to try to gleefully showcase how much better their AI is than competing LLMs. For example, Grok 4 by Elon Musk's xAI was recently released, and xAI and others used many of the specialized tests that have become relatively popular to see how well Grok 4 compares. Tests included (a) Humanity's Last Exam or HLE, (b) ARC-AGI-2, (c) GPQA, (d) USAMO 2025, (e) AIME 2025, (f) LiveCodeBench, (g) SWE-Bench, and other such tests.

Some of those tests have to do with the AI being able to generate program code (e.g., LiveCodeBench, SWE-Bench). Some of the tests are about being able to solve math problems (e.g., USAMO, AIME). The GPQA test is science-oriented.

Do you know how many questions are in the GPQA testing set? The full Extended Set contains 546 questions, of which 448 make up the Main Set and 198 make up the harder Diamond Set. If you are interested in the nature of the questions in GPQA, visit the GPQA GitHub site, plus you might find of interest the initial paper entitled 'GPQA: A Graduate-Level Google-Proof Q&A Benchmark' by David Rein et al, arXiv, November 20, 2023. Per that paper:

'We present GPQA, a challenging dataset of 448 multiple choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are 'Google-proof').'

Please be aware that you are likely to hear some eyebrow-raising claims that a generative AI is better than PhD-level graduate students across all domains because of particular scores on the GPQA test. That is a breathtakingly sweeping statement and misleadingly portrays the actual testing that normally takes place. In short, any such proclamation should be taken with a humongous grain of salt.

Ballparking The Questions Count

Suppose we come up with our own handy-dandy test that has PhD-level questions. The test will have 600 questions in total. We will craft 600 questions pertaining to 6 domains, evenly split, and we'll go with the six domains of (1) physics, (2) chemistry, (3) biology, (4) geology, (5) astronomy, and (6) oceanography. That means we are going to have 100 questions in each discipline. For example, there will be 100 questions about physics.

Are you comfortable that by asking a human being a set of 100 questions about physics, we will be able to ascertain the entire range and depth of their knowledge and intellectual prowess in physics? I doubt it. You will certainly be able to gauge a semblance of their physics understanding. The odds are that with just 100 questions, you are only sampling their knowledge. Is that a large enough sample, or should we be asking even more questions?

Another consideration is that we are only asking questions regarding 6 domains. What about all the other domains? We haven't included any questions on meteorology, anthropology, economics, political science, archaeology, history, law, linguistics, etc. If we want to assess an AI such as the hoped-for AGI, we presumably need to cover every possible domain. We also need to have a sufficiently high count of questions per domain so that we are comfortable that our sampling is going deep and wide.

Devising A Straw Man Count

Go with me on a journey to come up with a straw man count. Our goal will be an order-of-magnitude estimate, rather than an exact number per se. We want to have a ballpark, so we'll know what the range of the ballpark is.

We will begin the adventure by noting that the U.S. Library of Congress has an extensive set of subject headings, commonly known as the LCSH (Library of Congress Subject Headings). The LCSH was started in 1897 and has been updated and maintained since then. The LCSH is generally considered the most widely used subject vocabulary in the world.

As an aside, some people favor the LCSH and some do not. There are heated debates about whether certain subject headings are warranted. There are acrimonious debates concerning the wording of some of the subject headings. On and on the discourse goes. I'm not going to wade into that quagmire here.

The count of the LCSH as of April 2025 was 388,594 records. I am going to round that number to 400,000 for the sake of this ballpark discussion. We can quibble about that, along with quibbling about whether all those subject headings are distinctive and usable, but I'm not taking that route for now.
Suppose we came up with one question for each of the LCSH subject headings, such that whatever that domain or discipline consists of, we are going to ask one question about it. We would then have 400,000 questions ready to be asked. One question per realm doesn't seem sufficient. Consider these possibilities: if we choose to ask 10,000 questions per LCSH heading, we will need to come up with 4 billion questions. That's a lot of questions. But maybe asking only 10,000 questions isn't sufficient for each realm. We might go with 100,000 questions per heading, which brings the grand total to 40 billion questions.

Gauging AGI Via Questions

Does asking a potential AGI a billion or many billions of questions, i.e., 4B to 40B, equally varied across all 'known' domains, seem to be a sufficient range and depth of testing?

Some critics will say that it is hogwash. You don't need to ask that many questions. It is vast overkill. You can use a much smaller number. If so, what's that number? And what is the justification for that proposed count? Would the number be on the order of many thousands or millions, if not billions? And don't try to duck the matter by saying that the count is somehow amorphous or altogether indeterminate.

In the straw man case of billions, skeptics will say that you cannot possibly come up with a billion or more questions. It is logistically infeasible. Even if you could, you would never be able to assess the answers given to those questions. It would take forever to go through those billions of answers. And you would need experts across all areas of human knowledge to judge whether the answers were right or wrong. A counterargument is that we could potentially use AI, an AI other than the AGI being tested, to aid in the endeavor. That too has upsides and downsides. I'll be covering that consideration in an upcoming post. Be on the watch.

There are certainly a lot of issues to be considered and dealt with. The extraordinarily serious matter at hand is worthy of addressing these facets. Remember, we are focusing on how we will know that we've reached AGI. That's a monumental question. We should be prepared to ask enough questions that we can collectively and reasonably conclude that AGI has been attained.

As Albert Einstein aptly put it: 'Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.'
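For the arithmetic-minded, here is a minimal sketch of the straw-man tally described above. It assumes the column's rounded figure of 400,000 LCSH headings and its illustrative per-heading question counts; none of these numbers are precise requirements.

```python
# Back-of-envelope sketch of the straw-man question count described above.
# Assumptions: 400,000 is the article's rounding of the ~388,594 LCSH records,
# and the per-heading question counts are the article's illustrative choices.

LCSH_HEADINGS = 400_000  # rounded count of Library of Congress Subject Headings

for questions_per_heading in (1, 10_000, 100_000):
    total = LCSH_HEADINGS * questions_per_heading
    print(f"{questions_per_heading:>7,} questions per heading -> {total:,} questions in total")

# Prints:
#       1 questions per heading -> 400,000 questions in total
#  10,000 questions per heading -> 4,000,000,000 questions in total
# 100,000 questions per heading -> 40,000,000,000 questions in total
```

The 4 billion and 40 billion figures quoted in the column are simply the two larger products of this multiplication.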

Volcanic eruptions may be caused by mysterious ‘BLOBS' under the Earth

Yahoo · 17 hours ago

While many science books would have you believe the Earth's lower mantle (the layer deep below the crust) is smooth, it actually has mountain-like topography that moves and changes just like the crust above it. Further, research shows that this lower mantle contains two continent-sized structures, which researchers have dubbed big lower-mantle basal structures, or BLOBS. We don't know exactly what these BLOBS consist of, but scientists suspect they could be made up of the same materials surrounding them. In fact, new research published in the journal Communications Earth & Environment suggests that the planet's volcanic activity may be driven by mantle plumes that move along with their deep origins.

The origins in question, researchers believe, could be the BLOBS found deep within the Earth. These mysterious structures appear to be a driving force behind the Earth's volcanic history, and while scientists are hard at work trying to prove that, past simulations have already painted a fairly clear picture to work with.

To start with, the researchers used computer models to simulate the movements of the BLOBS over the past billion years. These models showed that the BLOBS produced mantle plumes that were sometimes tilted or rose up higher. This suggests that the eruptions seen over the past billion years likely took place above the BLOBS, or at least very close to them. The researchers believe this data shows that the Earth's volcanic activity could be linked to the BLOBS, despite how deep they sit within the Earth.

The findings are 'encouraging,' the researchers note in a post on The Conversation, as they suggest that future simulations may be able to predict where mantle plumes will strike next. This could help us create a general volcano warning system. Despite being destructive (the Hunga volcano eruption of 2022 continues to set records years later), large volcanic eruptions also have the ability to create new islands and landmasses. Knowing where they occur, or where they occurred in the past, could help us save lives and better understand how our planet formed at different points in history.

Of course, we still have a lot to learn about the mysterious BLOBS found deep in the Earth. But this research is a smoking gun that could open the door to plenty of new discoveries and revelations.

How Long You May Need To Walk Outside To See A Boost In Your Mental Health

Yahoo · 20 hours ago

There's a reason why the sounds of nature (chirping birds, flowing streams, falling rain) are often sound options for white noise machines and meditation apps: they're calming. A new small study published in Molecular Psychiatry further underscores this. The study found that a one-hour nature walk reduces stress when compared to a one-hour walk in a bustling city environment.

The study followed 63 people who were randomly assigned a nature walk or an urban walk. The nature walk took place in a forest in Berlin, and the urban walk took place on a busy street in the city. Participants were instructed not to check their phones or stop in stores while on their walk. They were given a bagged lunch and a phone with a 30-minute timer that instructed them when to turn around.

Before the walk, participants filled out a questionnaire and then underwent an fMRI scan that measured brain activity during two tasks. The first was a 'fearful faces task,' in which participants were shown 15 female and 15 male faces that had either a neutral or a scared expression. The second was the 'Montreal Imaging Stress Task,' which is designed to induce a level of stress in participants; during the task, participants had a set amount of time to solve challenging arithmetic problems. After the walk, participants filled out another questionnaire and underwent another fMRI scan measuring the same tasks they completed before their walk.

The results showed that nature significantly improved people's stress levels. The study found that those who took part in the 60-minute nature walk experienced lower stress levels following their time outside. 'The results of our study show that after only [a] one-hour walk in nature, activity in brain regions involved in stress processing decreases,' Sonja Sudimac, the lead author of the study, told Medical News Today. In particular, the researchers found that activity in the brain's amygdala (which is responsible for our stress and fear response) decreased in those who were in the nature walk group. This decrease was not seen in people who completed the city walk.

According to the study, urban environments can negatively impact one's mental health, leading to increased rates of anxiety, depression and mood disorders. (Just think about the stress that comes with frequently honking horns, running to catch a bus or dealing with long lines just to get some groceries.) In fact, other studies show that mental health can suffer in urban areas because of the crowded nature of cities and, in general, the increased number of stressors throughout the environment.

It's worth noting that the study had a few limitations: all participants were from a similar background, and the study could not control whom participants saw on their walk. So, for example, if someone walking in the forest saw another person relaxing on their day off, it could have further decreased the stress response in the participant. This study also focused only on the benefits of a one-hour nature walk; it's unclear if the same positive results would occur in a shorter amount of time. But Sudimac told Medical News Today that there is evidence that levels of the stress hormone cortisol decrease after a 15-minute nature walk, which would make a version of this study that looked at shorter walks interesting. Plus, outside of this study, there is extensive research on the positive effects of the outdoors, so it's not hard to conclude that even a few minutes outside is better than nothing.
Beyond decreased stress, nature has other benefits. Dr. Tamanna Singh, co-director of the sports cardiology center at Cleveland Clinic, previously told HuffPost that walking in nature has additional mental health benefits, too. 'Many of us just don't get enough of nature, and a walk is a fantastic way to focus on taking in air, walking on mother earth, listening to the leaves rustling, the birds chirping, essentially "forest bathing,"' she said. Forest bathing has a number of benefits, she pointed out: it can help improve mindfulness, can be meditative and can improve your breathing. Spending time outside has also been shown to improve your sleep, increase your creativity and boost your immune function.

Whether you live in a city or a rural area, try to prioritize nature walks. The results are clear: spending time in nature is good for your mental health. But don't be discouraged if you live in a city. It's important to note that the study's nature walk took place in an urban forest within the city of Berlin. So even just a walk through your local park or nature reserve can help you achieve a sense of calm. The key is getting around green space and dedicating 60 minutes to moving your body and soaking up the outdoors.

The headline and subheadline of this story have been updated to better reflect the study.
