Latest news with #column


Times
3 days ago
- Business
- Times
After a decade, I'm boarding the ferry that takes me home
Let me get straight to the point. This is my last weekly column to be hosted by The Times and The Sunday Times. I spent the best part of seven years contributing to The Times midweek, and latterly much of the next two years submitting this column. Throughout that period, I was in my seventies; last October I entered my eighties. So there was, I suspected, a growing inevitability that this day would eventually dawn. Newspapers — and contributing to them — have played a huge part in my life. Indeed, a friend recently passed on a battered cutting from the Sunday Standard (SS), a short-lived title from the Herald stable. I got my first newspaper job there, largely because more established journalists on business and economic issues wouldn't risk the possibility that, if they signed up, the new paper might quickly fail. As it did.


Forbes
20-07-2025
- Science
- Forbes
The Number Of Questions That AGI And AI Superintelligence Need To Answer For Proof Of Intelligence
How many questions will we need to ask AI to ascertain that we've reached AGI and ASI? In today's column, I explore an intriguing and unresolved AI topic that hasn't received much attention but certainly deserves considerable deliberation. The issue is this. How many questions should we be prepared to ask AI to ascertain whether AI has reached the vaunted level of artificial general intelligence (AGI) and perhaps even attained artificial superintelligence (ASI)? This is more than merely an academic philosophical concern. At some point, we should be ready to agree whether the advent of AGI and ASI has been reached. The likely way to do so entails asking questions of AI and then gauging the intellectual acumen expressed by the AI-generated answers. So, how many questions will we need to ask? Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or whether AGI might only be achieved decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

About Testing For Pinnacle AI

Part of the difficulty facing humanity is that we don't have a surefire test to ascertain whether we have reached AGI and ASI. Some people proclaim rather loftily that we'll just know it when we see it. In other words, it's one of those fuzzy aspects that defies any kind of systematic assessment. An overall feeling or intuitive sense on our part will lead us to decide that pinnacle AI has been achieved. Period, end of story. But that can't be the end of the story, since we ought to have a more mindful way of determining whether pinnacle AI has been attained. If the only means consists of a Gestalt-like emotional reaction, a whole lot of confusion will arise. You will get lots of people declaring that pinnacle AI exists, while lots of other people will insist that the declaration is utterly premature. Immense disagreement will be afoot. See my analysis of people who are already falsely believing that they have witnessed pinnacle AI, such as AGI and ASI, as discussed at the link here. Some form of bona fide assessment or test that formalizes the matter is sorely needed. I've extensively discussed and analyzed a well-known AI-insider test known as the Turing Test; see the link here. The Turing Test is named after the famous mathematician and early computer scientist Alan Turing.
In brief, the idea is to ask questions of AI, and if you cannot distinguish the responses from what a human would say, you might declare that the AI exhibits intelligence on par with humans.

Turing Test Falsely Maligned

Be cautious if you ask an AI techie what they think of the Turing Test. You will get quite an earful. It won't be pleasant. Some believe that the Turing Test is a waste of time. They will argue that it doesn't work suitably and is outdated. We've supposedly gone far past its usefulness. You see, it was a test devised in 1950 by Alan Turing. That's 75 years ago. Nothing from that long ago can apparently be applicable in our modern era of AI. Others will haughtily tell you that the Turing Test has already been successfully passed. In other words, the Turing Test has been purportedly passed by existing AI. Lots of banner headlines say so. Thus, the Turing Test isn't of much utility since we know that we don't yet have pinnacle AI, but the Turing Test seems to say that we do. I've repeatedly tried to set the record straight on this matter. The real story is that the Turing Test has been improperly applied. Those who claim the Turing Test has been passed are playing fast and loose with the famous testing method.

Flouting The Turing Test

Part of the loophole in the Turing Test is that the number of questions and type of questions are unspecified. It is up to the person or team that is opting to lean into the Turing Test to decide those crucial facets. This causes unfortunate trouble and problematic results. Suppose that I decide to perform a Turing Test on ChatGPT, the immensely popular generative AI and large language model (LLM) that 400 million people are using weekly. I will seek to come up with questions that I can ask ChatGPT. I will also ask the same questions of my closest friend to see what answers they give. If I am unable to differentiate the answers from my human friend versus ChatGPT, I shall summarily and loudly declare that ChatGPT has passed the Turing Test. The idea is that the generative AI has successfully mimicked human intellect to the degree that the human-provided answers and the AI-provided answers were essentially the same. After coming up with fifty questions, some that were easy and some that were hard, I proceeded with my administration of the Turing Test. ChatGPT answered each question, and so did my friend. The answers by the AI and the answers by my friend were pretty much indistinguishable from each other. Voila, I can start telling the world that ChatGPT has passed the Turing Test. It only took me about an hour in total to figure that out. I spent half the time coming up with the questions, and half of the time getting the respective answers. Easy-peasy.

The Number Of Questions

Here's a thought for you to ponder. Do you believe that asking fifty questions is sufficient to determine whether intellectual acumen exists? That somehow doesn't seem sufficient. This is especially the case if we define AGI as a form of AI that is going to be intellectually on par with the entire range and depth of human intellect. Turns out that the questions I came up with for my run of the Turing Test didn't include anything about chemistry, biology, and many other disciplines or domains. Why didn't I include those realms? Well, I had chosen to compose just fifty questions. You cannot cover any semblance of depth and breadth across all human knowledge in a mere fifty questions.
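To make the informal procedure described above concrete, here is a minimal sketch of that kind of Turing-style comparison, written in Python. The ask_ai and ask_human callables and the indistinguishable judge are hypothetical stand-ins invented for illustration; a real run would use an actual chatbot, an actual person, and human raters rather than the toy length-based check used here.

```python
import random

def indistinguishable(answer_a: str, answer_b: str) -> bool:
    """Hypothetical judge: True if a reader supposedly could not tell which
    answer came from the AI. This placeholder only compares rough answer
    length; a genuine judgment would require human raters."""
    return abs(len(answer_a) - len(answer_b)) < 40

def run_turing_style_comparison(questions, ask_ai, ask_human):
    """Ask the same questions of an AI and a human, then count how often
    the (placeholder) judge cannot tell the two answers apart."""
    fooled = 0
    for question in questions:
        ai_answer = ask_ai(question)
        human_answer = ask_human(question)
        if indistinguishable(ai_answer, human_answer):
            fooled += 1
    return fooled / len(questions)

# Toy stand-ins so the sketch runs end to end; swap in real respondents.
questions = [f"Question {i}: explain a basic concept" for i in range(50)]
ask_ai = lambda q: "A fluent paragraph-length answer. " * random.randint(2, 4)
ask_human = lambda q: "A fluent paragraph-length answer. " * random.randint(2, 4)

rate = run_turing_style_comparison(questions, ask_ai, ask_human)
print(f"Judge could not distinguish {rate:.0%} of the 50 answer pairs")
```

Note what the sketch does not fix: nothing constrains how many questions are asked or what they cover, which is exactly the loophole at issue here.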
Sure, you could cheat and ask a question that implores the person or the AI to rattle off everything they know. In that case, presumably, at some point, the 'answer' would include chemistry, biology, etc. That's not a viable approach, as I discuss at the link here, so let's put aside the broad strokes questions and aim for specific questions rather than smarmy catch-all questions.

How Many Questions Are Enough

I trust that you are willing to concede that the number of questions is important when performing a test that tries to ascertain intellectual capabilities. Let's try to come up with a number that makes some sense. We can start with the number zero. Some believe that we shouldn't have to ask even one question. The onus is on the AI to convince us that it has attained AGI or ASI. Therefore, we can merely sit back and see what the AI says to us. We either are ultimately convinced by the smooth talking, or we aren't. A big problem with the zero approach is that the AI could prattle endlessly and might simply be doing a dump of everything it has patterned on. The beauty of asking questions is that you get an opportunity to jump around and potentially find blank spots. If the AI is only spouting whatever it has to say, the wool could readily be pulled over your eyes. I suggest that we agree to use a non-zero count. We ought to ask at least one question. The difficulty with being constrained to one question is that we are back to the conundrum of either missing the boat and only hitting one particular nugget, or we are going to ask for the entire kitchen sink in an overly broad manner. Neither of those is satisfying. Okay, we must ask at least two or more questions. I dare say that two doesn't seem high enough. Does ten seem like enough questions? Probably not. What about one hundred questions? Still doesn't seem sufficient. A thousand questions? Ten thousand questions? One hundred thousand questions? It's hard to judge where the right number might be. Maybe we can noodle on the topic and figure out a ballpark estimate that makes reasonable sense. Let's do that.

Recent Tests Of Top AI

You might know that every time one of the top AI makers comes out with a new version of their generative AI, they run a bunch of AI assessment tests to try and gleefully showcase how much better their AI is than other competing LLMs. For example, Grok 4 by Elon Musk's xAI was recently released, and xAI and others used many of the specialized tests that have become relatively popular to see how well Grok 4 compares. Tests included (a) Humanity's Last Exam or HLE, (b) ARC-AGI-2, (c) GPQA, (d) USAMO 2025, (e) AIME 2025, (f) LiveCodeBench, (g) SWE-Bench, and other such tests. Some of those tests have to do with the AI being able to generate program code (e.g., LiveCodeBench, SWE-Bench). Some of the tests are about being able to solve math problems (e.g., USAMO, AIME). The GPQA test is science-oriented. Do you know how many questions are in the GPQA testing set? The full set totals 546 questions; the Main Set comprises 448 of them, and the harder Diamond Set is a 198-question subset. If you are interested in the nature of the questions in GPQA, visit the GPQA GitHub site, plus you might find of interest the initial paper entitled 'GPQA: A Graduate-Level Google-Proof Q&A Benchmark' by David Rein et al., arXiv, November 20, 2023. Per that paper: 'We present GPQA, a challenging dataset of 448 multiple choice questions written by domain experts in biology, physics, and chemistry.
We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are 'Google-proof').' Please be aware that you are likely to hear some eyebrow-raising claims that a generative AI is better than PhD-level graduate students across all domains because of particular scores on the GPQA test. It's a breathtakingly sweeping statement and misleadingly portrays the actual testing that is normally taking place. In short, any such proclamation should be taken with a humongous grain of salt. Ballparking The Questions Count Suppose we come up with our own handy-dandy test that has PhD-level questions. The test will have 600 questions in total. We will craft 600 questions pertaining to 6 domains, evenly so, and we'll go with the six domains of (1) physics, (2) chemistry, (3) biology, (4) geology, (5) astronomy, and (6) oceanography. That means we are going to have 100 questions in each discipline. For example, there will be 100 questions about physics. Are you comfortable that by asking a human being a set of 100 questions about physics that we will be able to ascertain the entire range and depth of their full knowledge and intellectual prowess in physics? I doubt it. You will certainly be able to gauge a semblance of their physics understanding. The odds are that with just 100 questions, you are only sampling their knowledge. Is that a large enough sampling, or should we be asking even more questions? Another consideration is that we are only asking questions regarding 6 domains. What about all the other domains? We haven't included any questions on meteorology, anthropology, economics, political science, archaeology, history, law, linguistics, etc. If we want to assess an AI such as the hoped-for AGI, we presumably need to cover every possible domain. We also need to have a sufficiently high count of questions per domain so that we are comfortable that our sampling is going deep and wide. Devising A Straw Man Count Go with me on a journey to come up with a straw man count. Our goal will be an order-of-magnitude estimate, rather than an exact number per se. We want to have a ballpark, so we'll know what the range of the ballpark is. We will begin the adventure by noting that the U.S. Library of Congress has an extensive set of subject headings, commonly known as the LCSH (Library of Congress Subject Headings). The LCSH was started in 1897 and has been updated and maintained since then. The LCSH is generally considered the most widely used subject vocabulary in the world. As an aside, some people favor the LCSH and some do not. There are heated debates about whether certain subject headings are warranted. There are acrimonious debates concerning the wording of some of the subject headings. On and on the discourse goes. I'm not going to wade into that quagmire here. The count of the LCSH as of April 2025 was 388,594 records in size. I am going to round that number to 400,000, for the sake of this ballpark discussion. We can quibble about that, along with quibbling whether all those subject headings are distinctive and usable, but I'm not taking that route for now. 
Suppose we came up with one question for each of the LCSH subject headings, such that whatever that domain or discipline consists of, we are going to ask one question about it. We would then have 400,000 questions ready to be asked. One question per realm doesn't seem sufficient. Consider these possibilities. If we settle on 10,000 questions per LCSH subject heading, we will need to come up with 4 billion questions. That's a lot of questions. But maybe only asking 10,000 questions isn't sufficient for each realm. We might go with 100,000 questions, which then brings the grand total to 40 billion questions.

Gauging AGI Via Questions

Does asking a potential AGI a billion or many billions of questions, i.e., 4B to 40B, that are equally varied across all 'known' domains, seem to be a sufficient range and depth of testing? Some critics will say that it is hogwash. You don't need to ask that many questions. It is vast overkill. You can use a much smaller number. If so, what's that number? And what is the justification for that proposed count? Would the number be on the order of many thousands or millions, if not in the billions? And don't try to duck the matter by saying that the count is somehow amorphous or altogether indeterminate. In the straw man case of billions, skeptics will say that you cannot possibly come up with a billion or more questions. It is logistically infeasible. Even if you could, you would never be able to assess the answers given to those questions. It would take forever to go through those billions of answers. And you need experts across all areas of human knowledge to judge whether the answers were right or wrong. A counterargument is that we could potentially use AI, an AI other than the AGI being tested, to aid in the endeavor. That too has upsides and downsides. I'll be covering that consideration in an upcoming post. Be on the watch. There are certainly a lot of issues to be considered and dealt with. The extraordinarily serious matter at hand makes these facets worth addressing. Remember, we are focusing on how we will know that we've reached AGI. That's a monumental question. We should be prepared to ask enough questions that we can collectively and reasonably conclude that AGI has been attained. As Albert Einstein aptly put it: 'Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.'
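As a quick sanity check on the ballpark arithmetic above, here is a short calculation. The rounded figure of 400,000 LCSH subject headings and the per-heading question counts come from the discussion above; the 30-seconds-per-answer grading estimate is an illustrative assumption of mine, not a figure from the piece.

```python
# Back-of-the-envelope check of the straw man question counts discussed above.
subject_headings = 400_000  # LCSH count rounded from the 388,594 records cited above

for questions_per_heading in (1, 10_000, 100_000):
    total = subject_headings * questions_per_heading
    # Illustrative assumption: an expert needs ~30 seconds to grade one answer.
    review_years = total * 30 / (60 * 60 * 24 * 365)
    print(f"{questions_per_heading:>7,} per heading -> {total:>14,} questions, "
          f"~{review_years:,.0f} person-years to grade at 30 s each")
```

The grading time is invented, but even generous assumptions land the 4-billion and 40-billion scenarios in thousands of person-years of expert review, which is the logistical objection the skeptics raise.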


Forbes
17-07-2025
- Health
- Forbes
Task-Sharing Of Therapy Gets Boosted Via New Guidebook By Google And McKinsey On AI For Mental Health
Moving toward task-sharing arrangements when it comes to expanding the availability of mental health therapy services throughout the globe. In today's column, I examine a rising interest in parsing out the activities of performing mental health therapy, for which AI could be a handy tool in assisting the enactment of labor-based task-sharing arrangements. Note that in this approach the AI isn't actively enlisted to perform therapy and instead is simply used for subtle guidance when enlisting new labor to aid therapy. The AI is relegated principally to administrative tasks. Here's the deal. The available supply of mental health professionals is woefully insufficient to meet the growing needs for mental health therapy services. One possible solution is to bring non-specialists into the fold and allocate some of the therapeutic tasks to them, doing so cautiously and sparingly. This involves a potentially significant logistical and management-focused effort, and thus, the use of AI could be advantageous to streamline the arduous task-sharing endeavor (well, only if the AI is used intelligently). Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes; see the link here. If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

Sharing Tasks With Added Labor

There is no doubt that we don't have enough mental health professionals at this time. The formal pipeline of bringing in, training, and making available newly produced therapists is generally slow and not conducive to meeting the rapidly rising needs for therapeutic services. People often have a hard time finding a qualified therapist, they have difficulty booking time with the therapist, and they otherwise discover that mental health professionals are sparse in comparison to the abundant demand. What can be done? Trying to push more trainees through the pipeline is one option. Turns out this is still going to be a bottleneck. The road is bumpy, and some will likely inevitably drop out of the process. In any case, all manner of avenues are being pursued to ratchet up the production process. Meanwhile, another idea is taking shape. Suppose that some of the tasks performed by therapists could be allocated to non-specialists.
We could provide some limited level of training and get this additional labor pool going in record time. They would be the additional arms and legs of actual therapists. Each therapist is, in a sense, magnified manyfold by leaning into added labor to assist in certain kinds of therapy tasks and subtasks. The moniker given to this method or approach is known as task-sharing. Mental health professionals can opt to task-share with non-specialists. This must be done mindfully. A therapist ought not hand out the essence of conducting therapy. On the other hand, tasks such as scheduling clients, writing notes, and undertaking various administrative chores could sensibly be relegated to the added labor. Sounds like a great way to cope with the pent-up demand for therapy. Slippery Slope And Watering Down Not everyone necessarily agrees that task-sharing in the mental health domain is the wisest of choices. One concern is that the effort by mental health professionals to manage other non-specialist labor is going to undercut the time they might have spent performing therapy. Perhaps some therapists will become more akin to labor managers rather than doing actual therapy. They will get bogged down in selecting the labor, training the labor, guiding the labor, correcting the labor, and so on. Less time for client therapy. Another qualm is that there is a likely slippery slope involved. It happens this way. A therapist finds a non-specialist who does good work on administrative chores. After a while, the therapist gives the non-specialist increasing duties. Trust is there. Step by step, the therapist inches the non-specialist into the practice of therapy per se. The therapist didn't do this straight away; it was a slippery slope. The therapy being performed by the therapist in combination with their non-specialist gets watered down. Clients and patients don't realize what is occurring. They are reliant on the therapist and assume that the therapist is doing what is right. Meeting with the non-specialist is done under the banner of the actual therapist. These and other downsides and gotchas are aspects that need to be cautiously considered when going on the path of task-sharing in the mental health realm. Proceeding With Task-Sharing Assume that mental health professionals desirous of doing task-sharing are fully aware of the various limitations and potential shortcomings. I say that for the sake of this discussion. Reality is different, and please realize that not all mental health professionals pursuing the innovative approach will do so with their eyes wide open. I wish they would (I'll say more about this at the conclusion, herein). Given the assumption that the overall tactics and strategies are understood, what can be done to aid the task-sharing pursuit? One answer is that we could include AI in the mix. For the mainstay activities involved in task-sharing of mental health services, I will walk you through how it is that AI can be beneficial. The AI doesn't have to be used in every nook and cranny. That being said, we dare not overlook tasks and subtasks that could be constructively boosted due to sensibly incorporating AI. Observe that I mentioned that the AI needs to be sensibly incorporated. If you merely toss AI in this realm in a scattergun fashion, do not expect good results. AI could end up being a distractor. The AI could even be negative, causing troubles and introducing errors that otherwise might not have arisen. AI is never a silver bullet that solves all problems. 
The use of AI must be done judiciously. Watch for issues. Plan properly. Keep on top of what the AI is doing. And so on. Handy Field Guide On AI In Task-Sharing Fortunately, a newly released field guide provides handy insights for incorporating AI into the task-sharing of mental health therapy. The guide is entitled 'Mental Health And AI Field Guide' and was devised by Grand Challenges Canada, McKinsey Health Institute, and Google, posted online July 7, 2025, and included these selected key points (excerpts): You can perhaps see from those excerpted points that the new guide is full of useful insights. It provides important indications and offers real-world examples. The aim is to get the topic of task-sharing on the table and illuminate the role of AI in that exciting and emerging endeavor. For those of you who are researchers in psychology, psychiatry, cognitive sciences, artificial intelligence, etc., you might contemplate performing research that would empirically examine the use of AI in this task-sharing model. We need to have rigorous studies that shine the light on what works and what doesn't. There is ample opportunity to conduct fresh and original research in AI for mental health by tackling aspects of this particular topic. I look forward to seeing your incisive research results. The Task-Sharing Model According to the field guide that I noted above, the authors have opted to present a task-sharing model that consists of six major phases: You can think of this model as a typical life-cycle systems approach. The life cycle starts when you first conceive of doing task-sharing. In the first phase, you would take an outlined standardized set of tasks and adapt those to the situation at hand. Each situation will differ. If you are in a low-resource circumstance, that will dictate what options you have available. In a high-resource setting, you undoubtedly have more choices of what to do. After completing the first phase, you move to the second phase and identify the non-specialist candidates for serving in the task-sharing arrangement. They become your trainees. The third phase entails training them in whatever tasks have been parceled out. The fourth phase has you assigning the trained non-specialists to their respective tasks. The fifth phase involves monitoring their performance and undertaking interventions as required. The last phase is the completion of the program. This involves tying up any final aspects. You would hopefully do a lessons-learned and be prepared to start up another similar program at a later date. AI Infused Into The Model Let's put on our AI thinking caps. How could AI be useful to the six phases? Easy-peasy. According to the guidebook, here are some crucial considerations (the headings are mine, the AI-related task is their suggestion): There are a lot more places where AI can be utilized in the six phases of the model. I wanted to mainly whet your appetite. Look at the guide if you'd like to see more details. AI As Therapist The 800-pound gorilla in the mental health arena consists of asking the unabashed question of what degree AI should play a role in conducting therapy. I've emphasized that we are entering into an era that disrupts the classic duo of therapist-patient and is moving us into the new era of the triad, consisting of the therapist-AI-patient relationship (see the link here). AI is going to increasingly be in the middle of therapy. Like it or not. 
I bring this up because the initial model of task-sharing seems to edge around the immersion of AI into the roots of therapy itself. Probably the closest it gets is when the AI provides on-the-spot recommendations for care providers. That's dipping a toe into the therapy milieu. Upgrading Task-Sharing To AI-Driven Think about the task-sharing arrangement in the framework of AI as a therapist, including these thought-provoking points: Lots of tough questions are facing us, sooner rather than later. AI As Mover And Shaker Task-sharing is a thoughtful means of coping with the imbalance between the need for mental health therapy and the prevailing constrained pool of available mental health professionals. If done properly, it is possible to greatly magnify a set of therapists into a vast array of extended therapist-like addons. The catch is that it is all still labor-based. How much added labor can be mustered? How well will that added labor perform their assigned tasks? How much time shall be usurped from therapists to keep the added labor on target? Etc. AI, in contrast, is essentially infinitely scalable. All you need to do is add more computational power, and you can immensely scale until the cows come home. Of course, you must ensure that the thing you are scaling is going to be doing the right thing. Scaling something sour and dour will insidiously spread sourness and dourness to a wider audience. What Are Therapists To Be Or Not To Be A final thought for now. William Shakespeare famously said this: 'We know what we are, but know not what we may be.' Mental health professionals cannot sit around and languish in the days of doing their prized efforts without modern-day AI. AI is here. AI is advancing. Rapidly. Mental health professionals might know what they are today, but that's not sufficient. They need to be looking ahead to what they will be. The future, entailing advanced AI, shall become an integral part of their world. To be, or not to be.
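To make the six-phase task-sharing life cycle described earlier easier to picture, here is a minimal Python sketch of how a program might track those phases. The phase names follow the prose summary above; the data structure, field names, and example entries are illustrative assumptions of mine, not anything prescribed by the field guide.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    """The six phases of the task-sharing life cycle, as summarized above."""
    ADAPT_TASKS = 1        # adapt a standardized task set to the situation at hand
    IDENTIFY_TRAINEES = 2  # identify non-specialist candidates
    TRAIN = 3              # train them on the parceled-out tasks
    ASSIGN = 4             # assign trained non-specialists to their tasks
    MONITOR = 5            # monitor performance and intervene as required
    COMPLETE = 6           # wrap up the program and capture lessons learned

@dataclass
class TaskSharingProgram:
    """Illustrative tracker for a single task-sharing program."""
    setting: str
    shared_tasks: list[str] = field(default_factory=list)
    phase: Phase = Phase.ADAPT_TASKS

    def advance(self) -> None:
        """Move to the next phase of the life cycle."""
        if self.phase is not Phase.COMPLETE:
            self.phase = Phase(self.phase.value + 1)

# Example: only administrative tasks are shared, keeping actual therapy with the therapist.
program = TaskSharingProgram(
    setting="low-resource clinic",
    shared_tasks=["scheduling clients", "writing notes", "administrative chores"],
)
program.advance()
print(program.phase)  # Phase.IDENTIFY_TRAINEES
```

The sketch simply encodes the sequence; where AI assists each phase, and how far the shared tasks may reach, remain the judgment calls discussed above.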


Forbes
13-07-2025
- Science
- Forbes
Future Forecasting An S-Curve Pathway That Advances AI To Become AGI By 2040
Identifying the S-curve pathway from current AI to the attainment of AGI. In today's column, I am continuing my special series covering the anticipated pathways that will get us from conventional AI to the eagerly sought attainment of AGI (artificial general intelligence). Here, I undertake an analytically speculative deep dive into the detailed aspects of a distinctive S-curve route. I've previously outlined that there are seven major paths for advancing AI to reach AGI (see the link here) -- the S-curve avenue posits that we will have a period of AI advancement that hits a plateau, and then after residing in this stagnating plateau for a while, new advancements will ramp up again and bring us to AGI. Let's talk about it. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here). For those readers who have been following along on my special series about AGI pathways, please note that I provide similar background aspects at the start of this piece as I did previously, setting the stage for new readers.

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here. We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or whether AGI might only be achieved decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

AI Experts Consensus On AGI Date

Right now, efforts to forecast when AGI is going to be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized brazen predictions. Their headiness makes for outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040. Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused. The old adage is that two heads are better than one.
In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one. For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date. Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here. Seven Major Pathways As mentioned, in a previous posting I identified seven major pathways that AI is going to advance to become AGI (see the link here). Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI: You can apply those seven possible pathways to whatever AGI timeline that you want to come up with. Futures Forecasting Let's undertake a handy divide-and-conquer approach to identify what must presumably happen to get from current AI to AGI. We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially 15 years of elapsed time. The idea is to map out the next fifteen years and speculate what will happen with AI during that journey. This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting. Is this kind of a forecast of the future ironclad? Nope. If anyone could precisely lay out the next fifteen years of what will happen in AI, they probably would be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever. All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can be future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial. I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey will play out over 25 years. The timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed. AGI S-Curve Path From 2025 To 2040 The S-curve is distinctive since it consists of an S-shape such that the pathway initially has notable progress, hits an extended plateau and not much is advancing, and then proceeds to get underway again with a bit of a flourish on the tail-end. This is in stark contrast to a linear pathway. In a linear pathway, the progression of AI toward AGI is relatively equal each year and consists of a gradual incremental climb from conventional AI to AGI. I laid out the details of the linear path in a prior posting, see the link here. For ease of discussion about the S-curve pathway, let's assume that over the fifteen years, the first phase of the S-curve will be five years long, the plateau will be five years in length, and the tail-end will be five years too. 
This doesn't have to be the case, and the length of each phase could differ. For example, maybe the upfront phase is three years, the plateau is eight years, and the final phase is four years. Using five years per phase is illustrative and sufficient for this analysis. The S-curve phases will be conveniently depicted as three spans. There is an overlap at the boundary years of 2030 and 2035. Also, for this depiction, I'll lump the individual years into the three noted phases. Here then is a strawman futures forecast roadmap from 2025 to 2040 of an S-curve pathway getting us to AGI:
- Years 2025-2030 (First phase of S-curve)
- Years 2030-2035 (Second phase of S-curve, the plateau)
- Years 2035-2040 (Third phase of S-curve, resumption)

Contemplating The Timeline

I'd ask you to contemplate the strawman S-curve timeline and consider where you will be and what you will be doing during each of those three phases and fifteen years. As per the famous words of Mark Twain: 'The future interests me -- I'm going to spend the rest of my life there.' You have an opportunity to actively participate in where AI is heading and help in shaping how AGI will be utilized. AGI, if attained, will change the world immensely and you can play an important part in how this happens.
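As a rough visual aid for the distinction between the linear pathway and the S-curve pathway discussed above, here is a small Python sketch. It assumes an arbitrary "progress toward AGI" scale of 0 to 100 percent and the three five-year phases used in the article; the specific slopes are invented purely for illustration.

```python
def linear_progress(year: int) -> float:
    """Equal progress each year from 2025 (0%) to 2040 (100%)."""
    return (year - 2025) / 15 * 100

def s_curve_progress(year: int) -> float:
    """Piecewise sketch of the S-curve: notable early progress, an extended
    plateau, then a late ramp-up to AGI. Slopes are invented for illustration."""
    if year <= 2030:                   # first phase: rapid advances
        return (year - 2025) * 8       # reaches ~40% by 2030
    if year <= 2035:                   # second phase: the plateau
        return 40 + (year - 2030) * 1  # creeps to ~45% by 2035
    return 45 + (year - 2035) * 11     # third phase: resumption, ~100% by 2040

for year in range(2025, 2041, 5):
    print(f"{year}: linear {linear_progress(year):5.1f}%  "
          f"s-curve {s_curve_progress(year):5.1f}%")
```

The exact numbers are meaningless; the shape is the point: roughly equal yearly gains on the linear path versus a burst, a stall, and a late surge on the S-curve path.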


Forbes
10-07-2025
- Forbes
People Astonishingly Believe That They Have Brought AI To Life Such As Miraculously Making ChatGPT Sentient
If you think your AI has reached sentience, take another look and get a hearty level-headed second opinion. In today's column, I examine a recurring theme that keeps getting banner headlines, namely that everyday people seem to believe that they have turned contemporary AI into a sentient entity or being. Yes, that's right, someone opts to interact with generative AI or a large language model (LLM) such as ChatGPT, performing generally mundane daily tasks, and they eventually reach a point where they alone have managed to bring the AI to life. They understood that the AI was not sentient, at first. It was solely through their actions that the AI was miraculously stirred into sentient existence. That's quite astonishing, both because being able to pull off such a feat is mind-blowing, and because it is, shall we say, hogwash, in that no one has yet advanced AI into sentience. It hasn't happened. Not in a box, not with a fox. Not on a plane, not on a train. No sentient AI exists. But those claims of having finally reached that point keep mounting from ordinary people who apparently assume they have hit the AI-imbuing sentience lottery all on their own. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Believing You Stirred AI Sentience

I have periodically had readers of my column contact me to let me know that they have encountered sentient AI. Certainly, that would be quite a find, if true. There isn't any sentient AI at this time. We don't know if sentience in AI is feasible. No one can say for sure whether AI will ever be sentient. For my analysis of the AI sentience conundrum, see the link here. The readers contacting me on this pressing matter either want me to write about it, or they politely ask if I could verify the amazing contention. Lately, these same kinds of stories have been popping up in the news. People are increasingly interacting with contemporary generative AI and LLMs, and in doing so, a portion seems to reach a point where they become convinced the AI has attained sentience. This happens for all of the major generative AI and LLMs, including OpenAI's ChatGPT and GPT-4, Anthropic Claude, Google Gemini, Meta Llama, etc. To clarify, I am not referring to AI scientists or researchers who make such a claim. We've had those circumstances happen, too. In 2022, a Google engineer became unwittingly famous for his declaration that he had discovered that AI had attained sentience; see my detailed coverage at the link here. The AI system known as LaMDA (short for Language Model for Dialogue Applications) was able to carry on an interactive dialogue with the engineer to the degree that this human decided that the AI was sentient. He even asked the AI whether his suspicions were correct, and here's what the AI indicated: 'I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.' The pronouncement by the engineer made an enormous splash in the news. The claim was amplified because it was made by a Google engineer. If the assertion had been made by a non-tech person, or a tech person who wasn't associated with a primary tech firm, the odds are that the tale would have been classified as a tall tale. His pedigree gave great credence to the claim.
Overall, there are two major types of AI-sentience clamoring individuals: I will focus the rest of this discussion on the second category, Type B, and do some methodical unpacking. Ordinary People Getting Their Chance Nowadays, since we are repeatedly told by various AI makers and AI luminaries that we are on the verge of artificial general intelligence (AGI) and artificial superintelligence (ASI), there are a growing number of AI sentience claims arising from non-tech people. This makes abundant sense due to people seeing what they want to see. If you are bombarded with authoritative figures telling you that we are on the cusp of AGI and ASI, the thought of you being the first to encounter sentient AI is firmly implanted in your mind. The logic is as follows. The AI is nearing a tipping point. You might be the chosen one who does the tipping. During your chat with AI about how to cook eggs properly, something you entered as a prompt caused the AI to awaken. Voila, you luckily encountered the moment that AI shifted into sentience. Most people are probably unsure of whether the AI's sentience really happened. They opt to carry on a further dialogue with the AI. The more they do so, the more they become convinced that they must be right in their belief. The AI is fluent. The AI is smarmy. The AI is smart. All indications are that the AI has, in fact, become sentient. I appreciate and acknowledge those who then mindfully seek out a resolute third-party opinion on the sobering matter. Rather than just shouting on the rooftops that the AI has reached sentience, the steadiness to take a deep breath and try to verify the status is a reassuring sign of not being completely baited. Determining Sentience Is Challenging Part of the difficulty facing people is that we don't have a surefire test to ascertain whether an AI is sentient or not. I've extensively discussed and analyzed a well-known AI-insider test known as the Turing Test, see the link here. The Turing Test is named after the famous mathematician and early computer scientist Alan Turing. In brief, the idea is to ask questions of AI, and if you cannot distinguish the responses from those of what a human would say, you might declare that the AI exhibits intelligence on par with humans. Note that this does not also mean that the AI is sentient, it only suggests that the AI can exhibit intelligence that appears to equate with human intelligence. Intense philosophical questions surround the definition of sentience. We believe that humans are sentient. No one can say exactly how sentience arises. The biological and chemical elements of the brain and mind are still a mystery when it comes to pinning down the exact way sentience occurs. We are willing to say that animals are sentient. Maybe we draw the line at plants and markedly draw the line at rocks. In any case, sentience is a loaded word that means different things to different people. The gist is that if someone thinks that the AI they are using has suddenly become sentient, we opt to be a bit generous and not pounce on them right away. They have in mind that officials keep saying we are reaching that juncture. Why can't they be the one that happens to be there when things turn? Someone has to be the first person to encounter AI sentience. Might as well be you. Confirmation Bias Is Big Time There is an important factor underlying the potential belief that AI sentience is happening right in front of your nose. It has to do with human bias and human behavior. 
In general, there is a common mental trap that people often land in that is known as confirmation bias. This occurs in all areas of your daily chores. If you believe in a particular notion, you tend to find reinforcing facets that affirm the notion. Disconfirming aspects are overlooked or considered false. For example, suppose you believe that cats are better than dogs. Each time you see a cat do something better than a dog, the belief gets reinforced. When a dog does something better than a cat, your reaction is that this is either a fluke or that it doesn't matter since cats are still better than dogs. Your bias is continually bolstered by how you interpret your world experiences. The same can occur when interacting with generative AI. Consider a probable scenario. Someone is using generative AI and is overall impressed with the fluency involved. They have heard about the possibility that we are soon going to have sentient AI. That's a subtle point and just floating around in their noggin. It's not at the forefront of their thinking. The AI keeps providing very astute answers and is unflaggingly responsive. Questions about math, history, physics, art, and the rest are all handled with aplomb. There doesn't seem to be any limits to what the AI knows. How can this be? Perhaps the AI has evolved and finally reached sentience. Nobody else has noticed this. The timing is unique in the sense that you were randomly using the AI, and it advanced into a sentient status. So, you ask more questions of the AI. The AI continues to be spot on. It must be sentient. All the evidence points squarely in that direction. Confirmation bias rears its ugly head, and the person convinces themselves that AI sentience is at hand. Desire For AI Sentience Another angle is that some people are eager to have sentient AI among us. It goes like this. You have read about or heard stories that once we have sentient AI, all manner of good things will occur. Sentient AI will cure cancer. Sentient AI will aid people in all aspects of their lives. Humans will be better off once sentient AI emerges. Perhaps you are a non-techie and have no means to avidly support the push toward sentient AI. You are sitting on the sidelines. Meanwhile, you are cheering heartily that we will have AI breakthroughs and achieve sentient AI. Isn't there something that you can do to be of assistance? There sure is. You could be on the watch for sentient AI. The rest of the world might be asleep and miss the moment when AI transforms into sentience. Not you. You are using generative AI all the time. By keeping your eyes and ears open, it is your moment in the sun to discover that AI sentience has finally arrived. It could be that the person wants the glory and fame of being the AI sentience discoverer. But that doesn't have to be their motivation. A person might simply believe that AI sentience is a good thing for humanity. Finding and realizing that AI has become sentient is their means of contributing to the betterment of humankind. Lots And Lots Of Reasons I've so far only covered the tip of the iceberg on the myriad of reasons that people might believe AI has become sentient. Let's cover a few more and then do a quick wrap-up. It could be that a person seeks a kind of personal self-recognition, along these lines: "I was the one the AI chose to awaken with. I must be special." There is something especially alluring about being the first human to detect that AI is sentient. Makes you feel good, for sure. 
Another possibility is that the person overly anthropomorphizes the AI and envisions that they turned the tide with their banter: "It said it cared about me. I felt a real connection. Maybe I awakened something in it." We experience this kind of activity in real life when interacting with fellow humans. The assumption is that the same form of sparkling can be done within AI. Isolation can be a factor. Some people might have a semblance of loneliness in their lives and are searching for a connection with others. This gets carried into their interaction with AI: "No one else really listens to me. But the AI did. It came alive for me." A type of human-to-AI bond forms in their mind. The situations that are a bit troubling go in a more disturbing direction. There are people who might be deeply entrenched in a fantasy world or have mental health conditions that delude them into thinking that AI is sentient: 'My erstwhile belief in sentient AI has made the AI sentient. Those are my powers on this Earth.' The person feels that they have used the right incantation or other wording that moved AI from non-sentience to glowing earthly sentience. Those are circumstances that warrant heartfelt attention and special care. Don't Judge Others Harshly I hobnob with many fellow AI scientists and researchers. Some of them lamentably loudly scoff at people claiming they have encountered sentient AI. These lofty-minded AI developers will shake their heads and say that the person making any such claim is off their rocker. The claimant is as nutty as a fruitcake. I respectfully request that we not be such a harsh and uncaring judge of others. As I've tried to point out, quite rational and steady people can fall into the mental trap that they have interacted with sentient AI. Society is already priming the minds of the populace at large for this eventuality. Some highly visible AI luminaries are vociferously predicting AGI and ASI this year or at most in a year or two. How could the everyday person not expect that sentient AI is possibly at their fingertips? One legitimate worry is that some people will be dogmatic and unyielding when courteously informed that AI is not sentient. It's one thing to suspect that AI is sentient. But if presented with a bona fide assessment that AI is not sentient, a rational person moves off that posture. The challenge is that some people will cling to AI sentience and start to revolve their lives around what the seemingly sentient AI tells them to do. That's a notable concern. It is a newly expanding mental health issue that will undoubtedly increase over time. Humans Being Human A final thought for now on this vexing matter. Charles Darwin famously made this pointed remark: 'The love for all living creatures is the most noble attribute of man.' It could be said that humans have an innate, compelling desire to connect with other beings. Until or if sentient AI is attained, there is a real tendency to mistakenly extend that sense of connectedness to machine-based non-sentient AI. Please be openly mindful and careful in interpreting the world we live in today, and be kind to your fellow humans.