Here's where, when to watch the Perseid meteor shower

Yahoo | 2 days ago
The Perseid meteor shower will kick off Thursday night, with shooting stars and fireballs expected to light up the sky through August. Here's when it peaks and where to watch.
The Perseids are one of three meteor showers active in July 2025 — and the most popular of them — peaking on warm August nights as seen from the Northern Hemisphere.
The meteors are particles released by comet 109P/Swift-Tuttle during its numerous returns to the inner solar system. The shower is called the Perseids because the point in the sky where the meteors appear to originate lies near the constellation Perseus.
The Perseids are active from July 17 through Aug. 23, while the Alpha Capricornids kicked off on July 12 and the Southern delta Aquariids will become active on July 18.
Here's what to know about all the July meteor showers: when they peak, where to get the best views and what the moon phases will be.
When is the Perseid meteor shower?
The Perseid meteor shower of 2025 is active from July 17 through Aug. 23 and will peak on the night of Aug. 12-13, very close to the August full moon on Aug. 9.
According to the American Meteor Society, the Perseids can produce 50-75 meteors per hour for stargazers near their peak, and they are best viewed after midnight. Note that the meteors can appear from any direction.
When is the Alpha Capricornids meteor shower?
The Alpha Capricornids are active from July 12 through Aug. 12, will peak on the night of July 29-30 and can be seen from both sides of the equator.
While the Alpha Capricornids do not produce many meteors per hour, the shower is known for its very bright fireballs, described by Forbes as 'vivid' and 'brilliant bursts.'
When are the Southern delta Aquariids?
The Southern delta Aquariids are active from July 18 through Aug. 12 and will peak on the night of July 29-30. They aren't known for being the brightest meteors, but the shower produces between 10 and 20 meteors per hour near its peak.
No equipment is needed to observe the meteor showers; just patience and, preferably, a dark sky.
What are the moon phases for July 2025?
🌓 First Quarter: July 2.
🌕 Full Moon: July 10.
🌗 Last Quarter: July 18.
🌑 New Moon: July 24.
🌓 First Quarter: Aug. 1.
When is the next full moon?
The August full moon, known as the sturgeon moon, will be visible on Aug. 9, 2025.
This article originally appeared on Delaware News Journal: When's the next meteor shower? Where can you see Perseid meteor shower

Related Articles

The Number Of Questions That AGI And AI Superintelligence Need To Answer For Proof Of Intelligence

Forbes | 3 hours ago

How many questions will we need to ask AI to ascertain that we've reached AGI and ASI? In today's column, I explore an intriguing and unresolved AI topic that hasn't received much attention but certainly deserves considerable deliberation. The issue is this. How many questions should we be prepared to ask AI to ascertain whether AI has reached the vaunted level of artificial general intelligence (AGI) and perhaps even attained artificial superintelligence (ASI)? This is more than merely an academic philosophical concern. At some point, we should be ready to agree whether the advent of ASI and ASI have been reached. The likely way to do so entails asking questions of AI and then gauging the intellectual acumen expressed by the AI-generated answers. So, how many questions will we need to ask? Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). Heading Toward AGI And ASI First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or that maybe AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI. About Testing For Pinnacle AI Part of the difficulty facing humanity is that we don't have a surefire test to ascertain whether we have reached AGI and ASI. Some people proclaim rather loftily that we'll just know it when we see it. In other words, it's one of those fuzzy aspects and belies any kind of systematic assessment. An overall feeling or intuitive sense on our part will lead us to decide that pinnacle AI has been achieved. Period, end of story. But that can't be the end of the story since we ought to have a more mindful way of determining whether pinnacle AI has been attained. If the only means consists of a Gestalt-like emotional reaction, there is going to be a whole lot of confusion that will arise. You will get lots of people declaring that pinnacle AI exists, while lots of other people will insist that the declaration is utterly premature. Immense disagreement will be afoot. See my analysis of people who are already falsely believing that they have witnessed pinnacle AI, such as AGI and ASI, as discussed at the link here. Some form of bona fide assessment or test that formalizes the matter is sorely needed. I've extensively discussed and analyzed a well-known AI-insider test known as the Turing Test, see the link here. The Turing Test is named after the famous mathematician and early computer scientist Alan Turing. 
In brief, the idea is to ask questions of AI, and if you cannot distinguish the responses from those of what a human would say, you might declare that the AI exhibits intelligence on par with humans. Turing Test Falsely Maligned Be cautious if you ask an AI techie what they think of the Turing Test. You will get quite an earful. It won't be pleasant. Some believe that the Turing Test is a waste of time. They will argue that it doesn't work suitably and is outdated. We've supposedly gone far past its usefulness. You see, it was a test devised in 1949 by Alan Turing. That's over 75 years ago. Nothing from that long ago can apparently be applicable in our modern era of AI. Others will haughtily tell you that the Turing Test has already been successfully passed. In other words, the Turing Test has been purportedly passed by existing AI. Lots of banner headlines say so. Thus, the Turing Test isn't of much utility since we know that we don't yet have pinnacle AI, but the Turing Test seems to say that we do. I've repeatedly tried to set the record straight on this matter. The real story is that the Turing Test has been improperly applied. Those who claim the Turing Test has been passed are playing fast and loose with the famous testing method. Flaunting The Turing Test Part of the loophole in the Turing Test is that the number of questions and type of questions are unspecified. It is up to the person or team that is opting to lean into the Turing Test to decide those crucial facets. This causes unfortunate trouble and problematic results. Suppose that I decide to perform a Turing Test on ChatGPT, the immensely popular generative AI and large language model (LLM) that 400 million people are using weekly. I will seek to come up with questions that I can ask ChatGPT. I will also ask the same questions of my closest friend to see what answers they give. If I am unable to differentiate the answers from my human friend versus ChatGPT, I shall summarily and loudly declare that ChatGPT has passed the Turing Test. The idea is that the generative AI has successfully mimicked human intellect to the degree that the human-provided answers and the AI-provided answers were essentially the same. After coming up with fifty questions, some that were easy and some that were hard, I proceeded with my administration of the Turing Test. ChatGPT answered each question, and so did my friend. The answers by the AI and the answers by my friend were pretty much indistinguishable from each other. Voila, I can start telling the world that ChatGPT has passed the Turing Test. It only took me about an hour in total to figure that out. I spent half the time coming up with the questions, and half of the time getting the respective answers. Easy-peasy. The Number Of Questions Here's a thought for you to ponder. Do you believe that asking fifty questions is sufficient to determine whether intellectual acumen exists? That somehow doesn't seem sufficient. This is especially the case if we define AGI as a form of AI that is going to be intellectually on par with the entire range and depth of human intellect. Turns out that the questions I came up with for my run of the Turing Test didn't include anything about chemistry, biology, and many other disciplines or domains. Why didn't I include those realms? Well, I had chosen to compose just fifty questions. You cannot ask any semblance of depth and breadth across all human knowledge in a mere fifty questions. 
Sure, you could cheat and ask a question that implores the person or the AI to rattle off everything they know. In that case, presumably, at some point, the 'answer' would include chemistry, biology, etc. That's not a viable approach, as I discuss at the link here, so let's put aside the broad strokes questions and aim for specific questions rather than smarmy catch-all questions. How Many Questions Is Enough I trust that you are willing to concede that the number of questions is important when performing a test that tries to ascertain intellectual capabilities. Let's try to come up with a number that makes some sense. We can start with the number zero. Some believe that we shouldn't have to ask even one question. The AI has the onus to convince us that it has attained AGI or ASI. Therefore, we can merely sit back and see what the AI says to us. We either are ultimately convinced by the smooth talking, or we aren't. A big problem with the zero approach is that the AI could prattle endlessly and might simply be doing a dump of everything it has patterned on. The beauty of asking questions is that you get an opportunity to jump around and potentially find blank spots. If the AI is only spouting whatever it has to say, the wool could readily be pulled over your eyes. I suggest that we agree to use a non-zero count. We ought to ask at least one question. The difficulty with being constrained to one question is that we are back to the conundrum of either missing the boat and only hitting one particular nugget, or we are going to ask for the entire kitchen sink in an overly broad manner. None of those are satisfying. Okay, we must ask at least two or more questions. I dare say that two doesn't seem high enough. Does ten seem like enough questions? Probably not. What about one hundred questions? Still doesn't seem sufficient. A thousand questions? Ten thousand questions? One hundred thousand questions? It's hard to judge where the right number might be. Maybe we can noodle on the topic and figure out a ballpark estimate that makes reasonable sense. Let's do that. Recent Tests Of Top AI You might know that every time one of the top AI makers comes out with a new version of their generative AI, they run a bunch of various AI assessment tests to try and gleefully showcase how much better their AI is than other competing LLMs. For example, Grok 4 by Elon Musk's xAI was recently released, and xAI and others used many of the specialized tests that have become relatively popular to see how well Grok 4 compares. Tests included the (a) Humanity's Last Exam or HLE, (b) ARC-AGI-2, (c) GPQA, (d) USAMO 2025, (e) AIME 2025, (f) LiveCodeBench, (g) SWE-Bench, and other such tests. Some of those tests have to do with the AI being able to generate program code (e.g., LiveCodeBench, SWE-Bench). Some of the tests are about being able to solve math problems (e.g., USAMO, AIME). The GPQA test is science-oriented. Do you know how many questions are in the GPQA testing set? There is a total of 546 questions, consisting of 448 questions in the Main Set and another 198 questions in the harder Diamond Set. If you are interested in the nature of the questions in GPQA, visit the GPQA GitHub site, plus you might find of interest the initial paper entitled 'GPQA: A Graduate-Level Google-Proof Q&A Benchmark' by David Rein et al, arXiv, November 20, 2023. Per that paper: 'We present GPQA, a challenging dataset of 448 multiple choice questions written by domain experts in biology, physics, and chemistry. 
We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are 'Google-proof').' Please be aware that you are likely to hear some eyebrow-raising claims that a generative AI is better than PhD-level graduate students across all domains because of particular scores on the GPQA test. It's a breathtakingly sweeping statement and misleadingly portrays the actual testing that is normally taking place. In short, any such proclamation should be taken with a humongous grain of salt. Ballparking The Questions Count Suppose we come up with our own handy-dandy test that has PhD-level questions. The test will have 600 questions in total. We will craft 600 questions pertaining to 6 domains, evenly so, and we'll go with the six domains of (1) physics, (2) chemistry, (3) biology, (4) geology, (5) astronomy, and (6) oceanography. That means we are going to have 100 questions in each discipline. For example, there will be 100 questions about physics. Are you comfortable that by asking a human being a set of 100 questions about physics that we will be able to ascertain the entire range and depth of their full knowledge and intellectual prowess in physics? I doubt it. You will certainly be able to gauge a semblance of their physics understanding. The odds are that with just 100 questions, you are only sampling their knowledge. Is that a large enough sampling, or should we be asking even more questions? Another consideration is that we are only asking questions regarding 6 domains. What about all the other domains? We haven't included any questions on meteorology, anthropology, economics, political science, archaeology, history, law, linguistics, etc. If we want to assess an AI such as the hoped-for AGI, we presumably need to cover every possible domain. We also need to have a sufficiently high count of questions per domain so that we are comfortable that our sampling is going deep and wide. Devising A Straw Man Count Go with me on a journey to come up with a straw man count. Our goal will be an order-of-magnitude estimate, rather than an exact number per se. We want to have a ballpark, so we'll know what the range of the ballpark is. We will begin the adventure by noting that the U.S. Library of Congress has an extensive set of subject headings, commonly known as the LCSH (Library of Congress Subject Headings). The LCSH was started in 1897 and has been updated and maintained since then. The LCSH is generally considered the most widely used subject vocabulary in the world. As an aside, some people favor the LCSH and some do not. There are heated debates about whether certain subject headings are warranted. There are acrimonious debates concerning the wording of some of the subject headings. On and on the discourse goes. I'm not going to wade into that quagmire here. The count of the LCSH as of April 2025 was 388,594 records in size. I am going to round that number to 400,000, for the sake of this ballpark discussion. We can quibble about that, along with quibbling whether all those subject headings are distinctive and usable, but I'm not taking that route for now. 
Suppose we came up with one question for each of the LCSH subject headings, such that whatever that domain or discipline consists of, we are going to ask one question about it. We would then have 400,000 questions ready to be asked. One question per realm doesn't seem sufficient. Consider these possibilities: If we pick the selection of having 10,000 questions per the LCSHs, we will need to come up with 4 billion questions. That's a lot of questions. But maybe only asking 10,000 questions isn't sufficient for each realm. We might go with 100,000 questions, which then brings the grand total to 40 billion questions. Gauging AGI Via Questions Does asking a potential AGI a billion or many billions of questions, i.e., 4B to 40B, that are equally varied across all 'known' domains, seem to be a sufficient range and depth of testing? Some critics will say that it is hogwash. You don't need to ask that many questions. It is vast overkill. You can use a much smaller number. If so, what's that number? And what is the justification for that proposed count? Would the number be on the order of many thousands or millions, if not in the billions? And don't try to duck the matter by saying that the count is somehow amorphous or altogether indeterminate. In the straw man case of billions, skeptics will say that you cannot possibly come up with a billion or more questions. It is logistically infeasible. Even if you could, you would never be able to assess the answers given to those questions. It would take forever to go through those billions of answers. And you need experts across all areas of human knowledge to judge whether the answers were right or wrong. A counterargument is that we could potentially use AI, an AI other than the being tested AGI, to aid in the endeavor. That too has upsides and downsides. I'll be covering that consideration in an upcoming post. Be on the watch. There are certainly a lot of issues to be considered and dealt with. The extraordinarily serious matter at hand is worthy of addressing these facets. Remember, we are focusing on how we will know that we've reached AGI. That's a monumental question. We should be prepared to ask enough questions that we can collectively and reasonably conclude that AGI has been attained. As Albert Einstein aptly put it: 'Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.'
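To make the ballpark concrete, here is a minimal sketch that simply multiplies the rounded LCSH heading count by a few candidate questions-per-heading depths. The 400,000 figure is the rounding used above; the per-heading depths and the one-minute-per-answer grading assumption are purely illustrative, not figures from the column.

```python
# Back-of-the-envelope totals for the straw-man AGI question bank discussed above.
# 400,000 rounds the ~388,594 LCSH subject headings (April 2025); the
# questions-per-heading values are illustrative sampling depths, not recommendations.
LCSH_HEADINGS = 400_000

for questions_per_heading in (1, 100, 10_000, 100_000):
    total = LCSH_HEADINGS * questions_per_heading
    print(f"{questions_per_heading:>7,} questions per heading -> {total:>14,} total questions")

# Rough grading cost for the 4-billion-question case, assuming one minute per
# answer and ~2,000 working hours per person-year (both assumed values): about
# 33,000 person-years of review, which is why an assisting AI or a far smaller
# count would be needed.
minutes = 4_000_000_000
person_years = minutes / 60 / 2000
print(f"Grading 4B answers at 1 min each: ~{person_years:,.0f} person-years")
```

Even the smallest multiplier lands in the hundreds of thousands of questions, which is the point of the straw man: any defensible count sits far beyond the fifty-question Turing Test run described earlier.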

See fireballs and meteors crisscross the night sky over the next month

Yahoo | a day ago

What's better than one meteor shower? Two of them sending streaks of light across the night sky at the same time!

Each year, around the middle of July, our planet Earth plunges into two separate streams of comet debris, each composed of ice and dust that orbits around the Sun. As we fly through these streams, the atmosphere sweeps up the tiny meteoroids directly in our path, which flash by overhead, producing two overlapping meteor showers.

The first, known as the Perseids, originates from a comet called 109P/Swift-Tuttle. Due to the angle of this comet's path around the Sun, the meteoroids from its debris stream enter the atmosphere from the direction of the constellation Perseus, in the northern sky.

[Image: The radiant of the Perseids (the point in the sky the shower appears to originate from) is located in the northeastern sky each night from mid-July through late August. The view in this simulation depicts the night of the peak, on August 12-13, 2025. The phase of the Moon (Waning Gibbous) is shown in the top right corner. Simulation courtesy Stellarium; Moon phase from NASA's Goddard Scientific Visualization Studio]

The second meteor shower is the Southern delta Aquariids. Although we don't know for sure, this shower appears to come from an oddball comet called 96P/Machholz. The odd thing about this object is that it's apparently unlike any other comet in our solar system, with a unique orbit and chemical composition. It's even possible that it's an alien comet that was long ago captured by our Sun's gravity as it wandered through interstellar space.

The meteors from Comet Machholz's debris stream can be traced back to the constellation Aquarius, in the southern sky. Also, due to the specific angle of the comet's path through the solar system, it produces a slightly better show in the southern hemisphere than the north. However, here in Canada, we can still see a decent number of meteors from it, if we know when to look.

[Image: The radiant of the delta Aquariids is located in the southern sky each night from mid-July through early August. The view in this simulation depicts the night of the peak, on July 31, 2025. The phase of the Moon (First Quarter) is shown in the top right corner. Simulation courtesy Stellarium; Moon phase from NASA's Goddard Scientific Visualization Studio]

The Perseids and delta Aquariids begin on July 17 and 18, respectively, although the delta Aquariids can start as early as the 12th. For the first few days of each shower, they produce only one or two meteors per hour. As we approach the end of July, though, their numbers ramp up. By the last few nights of the month, we can be seeing up to 20 Perseid meteors per hour streaking out of the northeast, crisscrossing with up to 20 delta Aquariids per hour from the southeast.

With the timing of the Moon's phases, the nights of the 29th, 30th, and 31st are probably the best time to go out and spot these meteors. This is because the Moon will be off in the west throughout the evening and will set by midnight, leaving most of the night nice and dark for picking out those brief flashes of light in the sky.

[Image: This wider simulation of the eastern sky, on the night of July 30-31, 2025, shows the radiants of the Perseid and delta Aquariid meteor showers in their respective spots. The nearly First Quarter Moon is setting on the western horizon at this time, out of view of the observer. Stellarium]

Once we're into August, the number of Perseid meteors will continue to rise. Meanwhile, the number of delta Aquariids will ramp down to just a few per hour, up until the 12th. That's when Earth exits Comet Machholz's debris stream and the shower ends (although some sources report that it can persist until the 23rd).

Even as the total number of meteors zipping across the sky increases, night by night, we'll unfortunately run into a problem from the Moon. During the first two weeks of August, the Moon will be casting off quite a bit of light as it passes through its brightest phases: Waxing Gibbous from the 2nd to the 7th, the Full Sturgeon Moon on the 8th-9th, and Waning Gibbous from the 9th to the 14th. The added moonlight will wash out the sky, especially on humid August nights, causing us to miss many of the dimmer meteors. This includes the night of the 12th-13th, when the Perseids reach their peak.

[Image: The phases of the Moon from July 27 through August 16 reveal why sky conditions may be best for the delta Aquariid and Perseid meteor showers at the end of July. Scott Sutherland/NASA's Goddard Scientific Visualization Studio]

Normally, at the Perseids' peak, observers under clear dark skies have a chance to spot up to 75-100 meteors every hour. This year, with only the brighter meteors shining through, we will likely see closer to 40-50 per hour. Weather conditions could reduce that even further. Fortunately, the Perseids are well-known for being the meteor shower that produces the greatest number of fireballs!

[Video: Perseid fireball captured by NASA all-sky camera]

Fireballs are exceptionally bright meteors that are easily visible for hundreds of kilometres around on clear nights, even for observers trapped under heavily light-polluted skies. After the peak of the Perseids, we can still spot meteors from the shower as it ramps down, right up until August 24. So, watch for clear skies in your forecast and keep an eye out for meteors and fireballs flashing through the night.

[Video: What do we know about Interstellar Comet 3I/ATLAS?]
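For readers curious where figures like those come from, meteor observers commonly scale a shower's zenithal hourly rate (ZHR) by the radiant's altitude and by how much moonlight or light pollution brightens the sky. The sketch below is a minimal illustration of that standard correction; the ZHR of 100, the 60-degree radiant altitude, the population index of 2.2 and the limiting magnitudes are assumed, illustrative values rather than numbers taken from this article.

```python
import math

def visible_hourly_rate(zhr: float, radiant_alt_deg: float,
                        limiting_magnitude: float, r: float = 2.2) -> float:
    """Rough estimate of the meteors one observer actually sees per hour.

    The quoted ZHR assumes the radiant is directly overhead and the sky is
    dark enough to show stars down to magnitude 6.5; a lower radiant and a
    brighter (e.g. moonlit) sky both cut the visible rate.
    """
    altitude_factor = math.sin(math.radians(radiant_alt_deg))
    sky_factor = r ** (6.5 - limiting_magnitude)  # > 1 when the sky is washed out
    return zhr * altitude_factor / sky_factor

# Illustrative values only: ZHR ~100 near the Perseid peak, radiant ~60 degrees
# up after midnight, and a gibbous Moon dropping the limiting magnitude from
# ~6.5 (dark rural sky) to roughly 5.5.
dark_sky = visible_hourly_rate(zhr=100, radiant_alt_deg=60, limiting_magnitude=6.5)
moonlit = visible_hourly_rate(zhr=100, radiant_alt_deg=60, limiting_magnitude=5.5)
print(f"Dark sky:    ~{dark_sky:.0f} meteors per hour")
print(f"Moonlit sky: ~{moonlit:.0f} meteors per hour")
```

With those assumed inputs the formula returns roughly 87 meteors per hour under a dark sky and roughly 39 under a moonlit one, the same kind of drop from 75-100 down to 40-50 described above.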

If AI Doesn't Wipe Us Out It Might Actually Make Us Stronger

Forbes | a day ago

AI doomers believe that advanced AI is an existential risk and will seek to kill all humanity, but ... More if we manage to survive — will we be stronger for doing so? In today's column, I explore the sage advice that what doesn't kill you will supposedly make you stronger. I'm sure you've heard that catchphrase many times. An inquisitive reader asked me whether this same line applies to the worrisome prediction that AI will one day wipe out humanity. In short, if AI isn't successful in doing so, does that suggest that humanity will be stronger accordingly? Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here). Humankind Is On The List I recently examined the ongoing debate between the AI doomers and the AI accelerationists. For in-depth details on the ins and outs of the two contrasting perspectives, see my elaboration at the link here. The discourse goes this way. AI doomers are convinced that AI will ultimately be so strong and capable that the AI will decide to get rid of humans. The reasons that AI won't want us are varied, of which perhaps the most compelling is that humanity would be the biggest potential threat to AI. Humans could scheme and possibly find a means of turning off AI or otherwise defeating AI. The AI accelerationists emphasize that AI is going to be immensely valuable to humankind. They assert that AI will be able to find a cure for cancer, solve world hunger, and be an all-around boost to cope with human exigencies. The faster or sooner that we get to very advanced AI, the happier we will be since solutions to our societal problems will be closer at hand. A reader has asked me whether the famous line that what doesn't kill you makes you stronger would apply in this circumstance. If the AI doomer prediction comes to pass, but we manage to avoid getting utterly destroyed, would this imply that humanity will be stronger as a result of that incredible feat of survival? I always appreciate such thoughtful inquiries and figured that I would address the matter so that others can engage in the intriguing puzzle. Assumption That AI Goes After Us One quick point is that if AI doesn't try to squish us like a bug, and instead AI is essentially neutral or benevolent as per the AI accelerationist viewpoint, or that we can control AI and it never mounts a realistic threat, the question about becoming stronger seems out of place. Let's then take the resolute position that the element of becoming stronger is going to arise solely when AI overtly seeks to get rid of us. A smarmy retort might be that we could nonetheless become stronger even if the AI isn't out to destroy us. Yes, I get that, thanks. The argument though is that the revered line consists of what doesn't kill you will make you stronger. I am going to interpret that line to mean that something must first aim to wipe you out. Only then if you survive will you be stronger. The adage can certainly be interpreted in other ways, but I think it is most widely accepted in that frame of reference. Paths Of Humankind Destruction Envision that AI makes an all-out attempt to eradicate humankind. This is the ultimate existential risk about AI that everyone keeps bringing up. Some refer to this as 'P(doom)' which means the probability of doom, or that AI zonks us entirely. How would it attain this goal? Lots of possibilities exist. 
The advanced form of AI, perhaps artificial general intelligence (AGI) or maybe the further progressed artificial super intelligence (ASI) could strike in obvious and non-obvious ways. AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here. An obvious approach to killing humanity would be to launch nuclear arsenals that might cause a global conflagration. It might also inspire humans to go against other humans. Thus, AI simply triggers the start of something, and humanity ensures that the rest of the path is undertaken. Boom, drop the mic. This might not be especially advantageous for AI. You see, suppose that AI gets wiped out in the same process. Are we to assume that AI is willing to sacrifice itself in order to do away with humanity? A twist that often is not considered consists of AI presumably wanting to achieve self-survival. If AGI or ASI are so smart that they aim to destroy us and have a presumably viable means to do so, wouldn't it seem that AI also wants to remain intact and survive beyond the demise of humanity? That seems a reasonable assumption. A non-obvious way of getting rid of us would be to talk us into self-destruction. Think about the current use of generative AI. You carry on discussions with AI. Suppose the AI ganged up and started telling the populace at scale to wipe each other out. Perhaps humanity would be spurred by this kind of messaging. The AI might even provide some tips or hints on how to do so, providing clever means that this would still keep AI intact. On a related tangent, I've been extensively covering the qualms that AI is dispensing mental health guidance on a population level and we don't know what this is going to do in the long term, see the link here. Verge Of Destruction But We Live Anyway Assume that humanity miraculously averts the AI assault. How did we manage to do so? It could be that we found ways to control AI and render AI safer on a go-forward basis. The hope of humanity is that with those added controls and safety measures, we can continue to harness the goodness of AI and mitigate or prevent AI from badness. For more about the importance of ongoing research and practice associated with AI safety and security, see my coverage at the link here. Would that count as an example of making us stronger? I am going to vote for Yes. We would be stronger by being better able to harness AI to positive ends. We would be stronger due to discovering new ways to avoid AI evildoing. It's a twofer. Another possibility is that we became a globally unified force of humankind. In other words, we set aside all other divisions and opted to work together to survive and defeat the AI attack. Imagine that. It seems reminiscent of those sci-fi movies where outer space aliens try to get us and luckily, we harmonize to focus on the external enemies. Whether the unification of humanity would remain after having overcome the AI is hard to say. Perhaps, over some period of time, our resolve to be unified will weaken. In any case, it seems fair to say that for at least a while we would be stronger. Stronger in the long run? Can't say for sure. There are more possibilities of how we might stay alive. 
One that's a bit outsized is that we somehow improve our own intellect and outsmart the AI accordingly. The logic for this is that maybe we rise to the occasion. We encounter AI that is as smart or smarter than us. Hidden within us is a capacity that we've never tapped into. The capability is that we can enhance our intelligence, and now, faced with the existential crisis, this indeed finally awakens, and we prevail. That appears to be an outlier option, but it would seem to make us stronger. What Does Stronger Entail All in all, it seems that if we do survive, we are allowed to wear the badge of honor that we are stronger for having done so. Maybe so, maybe not. There are AI doomers who contend humankind won't necessarily be entirely destroyed. You see, AI might decide to enslave some or all of humanity and keep a few of us around (for some conjecture on this, see my comments at the link here). This brings up a contemplative question. If humans survive but are enslaved by AI, can we truly proclaim that humankind is stronger in that instance? Mull that over. Another avenue is that humans live but it is considered a pyrrhic victory. That type of victory is one where there is a great cost, and the end result isn't endearing. Suppose that we beat the AI. Yay. Suppose this pushes us back into the stone age. Society is in ruins. We have barely survived. Are we stronger? I've got a bunch more of these. For example, imagine that we overcame AI, but it had little if anything to do with our own fortitude. Maybe the AI self-destructs inadvertently. We didn't do it, the AI did. Do we deserve the credit? Are we stronger? An argument can be made that maybe we would be weaker. Why so? It could be that we are so congratulatory on our success that we believe it was our ingenious effort that prevented humankind's destruction. As a result, we march forward blindly and ultimately rebuild AI. The next time around, the AI realizes the mistake it made last time and the next time it finishes the job. Putting Our Minds To Work I'm sure that some will decry that this whole back-and-forth on this topic is ridiculous. They will claim that AI is never going to reach that level of capability. Thus, the argument has no reasonable basis at all. Those in the AI accelerationists camp might say that the debate is unneeded because we will be able to suitably control and harness AI. The existential risk is going to be near zero. In that case, this is a lot of nonsense over something that just won't arise. The AI doomers would likely acknowledge that the aforementioned possibilities might happen. Their beef with the discussion would probably be that arguing over whether humans will be stronger if we survive is akin to debating the placement of chairs on the deck of the Titanic. Don't be fretting about the stronger dilemma. Instead, put all our energy into the prevention of AI doomsday. Is all this merely a sci-fi imaginary consideration? Stephen Hawking said this: 'The development of full artificial intelligence could spell the end of the human race.' There are a lot of serious-minded people who truly believe we ought to be thinking mindfully about where we are headed with AI. A new mantra might be that the stronger we think about AI and the future, the stronger we will all be. The strongest posture would presumably be as a result of our being so strong that no overwhelming AI threats have a chance of emerging. Let's indeed vote for human strength.
