
Latest news with #AIcommunity

United Nations Considering These Four Crucial Actions To Save The World From Dire AGI And Killer AI Superintelligence

Forbes

a day ago

  • Science
  • Forbes


The United Nations releases an important report on AGI and emphasizes four key recommendations to help save the world from dire outcomes.

In today's column, I examine a recently released high-priority report by the United Nations that emphasizes what must be done to prepare for the advent of artificial general intelligence (AGI). Be aware that the United Nations has had an ongoing interest in how AI is advancing and what kinds of international multilateral arrangements and collaborations ought to be taking place (see my coverage at the link here). The distinctive element of this latest report is that the focus right now needs to be on our reaching AGI, a pinnacle type of AI. Many in the AI community assert that we are already nearing the cusp of AGI and, in turn, will soon thereafter arrive at artificial superintelligence (ASI). For the sake of humanity and global survival, the U.N. seeks to have a say in the governance and control of AGI and, ultimately, ASI. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

United Nations Is Into AI And AGI

I've previously explored numerous U.N. efforts regarding where AI is heading and how society should best utilize advanced AI. For example, I extensively laid out the ways that the U.N. recommends that AI be leveraged to attain the vaunted Sustainable Development Goals (SDGs); see the link here. Another important document by the U.N. is the UNESCO-led agreement on the ethics of AI, which was the first-ever global consensus involving 193 countries on the suitable use of advanced AI (see my analysis at the link here).

The latest notable report is entitled 'Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly' and was prepared and submitted to the Council of Presidents of the United Nations General Assembly (UNCPGA). Here are some key points in that report (excerpts): The bottom line is that a strong case can be made that if AGI is allowed to be let loose and insufficiently overseen, society is going to be at grave risk. A question arises as to how the nations of the world can unite to try and mitigate that risk. Aptly, the United Nations believes it is the appropriate body to take on that challenge.

UN Given Four Big Asks

What does the U.N. report say about urgently needed steps regarding coping with the advent of AGI? These four crucial recommendations are stridently called for: Those recommendations will be considered by the Council of Presidents of the United Nations General Assembly. By and large, enacting one or more of those recommendations would indubitably involve some form of U.N. General Assembly resolutions and would undoubtedly need to be integrated into other AI initiatives of the United Nations.

It is possible that none of the recommendations will proceed. Likewise, the recommendations might be revised or reconstructed and employed in other ways. I'll keep you posted as the matter progresses. Meanwhile, let's do a bit of unpacking on those four recommendations. I will do so, one by one, and then provide a provocative or perhaps engaging conclusion.

Global AI Observatory

The first of the four recommendations entails establishing a global AGI Observatory that would keep track of what's happening with AGI. Think of this as a specialized online repository that would serve as a curated source of information about AGI. I agree that this could be immensely helpful to the U.N. Member States, along with being useful for the public at large. You see, the problem right now is that there is a tremendous amount of misinformation and disinformation concerning AGI being spread around, often wildly hyping or at times undervaluing the advent of AGI and ASI. Assuming that the AGI Observatory were properly devised and suitably careful in what is collected and shared, having a source about AGI that is reliable and balanced would be quite useful.

One potential criticism of such an AGI Observatory is that it is perhaps duplicative of other similar commercial or national collections about AGI. Another qualm is that if the AGI Observatory were allowed to be biased, it would misleadingly carry the aura of something balanced, yet would actually be tilted in a directed way.

Best Practices And Certification For AGI

The second recommendation requests that a set of AGI best practices be crafted. This would aid nations in understanding what kind of governance structures ought to be considered for sensibly overseeing AGI in their respective countries. It could spur nations to proceed on a level playing field basis.
Furthermore, it reduces the proverbial reinventing of the wheel, namely that nations could simply adopt or adapt an already presented set of AGI best practices. No need to write such stipulations from scratch. In a similar vein, the setting up of certifications for AGI would be well-aligned with the AGI best practices. AI makers and countries as a whole would hopefully prize being certified as to their AGI and its conformance to vital standards.

A criticism on this front is that if the U.N. does not make the use of best practices compulsory, and likewise if the AGI certification is merely optional, few if any countries will go to the trouble of adopting them. In that sense, the whole contrivance is mainly window dressing and not a feet-to-the-fire consideration.

U.N. Framework Convention

In the parlance of the United Nations, it is somewhat expected to call for a Framework Convention on significant topics. Since AGI is abundantly a significant topic, here's a snapshot excerpt of what is proposed in the report: 'A Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development.'

The usual criticism of those kinds of activities is that they can become a bureaucratic nightmare that doesn't produce much of anything substantive. Also, they might stretch out and be a lengthy affair. This is especially disconcerting in this instance if you believe that AGI is on the near horizon.

Formulate U.N. AGI Agency

The fourth recommendation indicates that a feasibility study be undertaken to assess whether a new U.N. agency ought to be set up. This would be a specialized U.N. agency devoted to the topic of AGI.
The report stresses that this would need to be quickly explored, approved, and set in motion on an expedited basis. An analogous type of agency or entity would be the International Atomic Energy Agency (IAEA). You probably know that the IAEA seeks to guide the world toward peaceful uses of nuclear energy. It has a founding statute that guides its operations. Overall, the IAEA reports to the U.N. General Assembly and the U.N. Security Council.

A criticism of putting forward an AGI Agency by the United Nations is that it might get bogged down in international squabbling. There is also a possibility that it would inhibit the creative use of AGI rather than merely serving as a risk-reducing guide. To clarify, some argue against too many regulating and overseeing bodies, since this might undercut innovative uses of AGI. We might inadvertently turn AGI into something a lot less impressive and valuable than we had earlier hoped for. Sad face.

Taking Action Versus Sitting Around

Do you think that we should be taking overt governance action about AGI, such as the recommendations articulated in the U.N. AGI report? Some would say that yes, we must act immediately. Others would suggest we take our sweet time. Better to get things right than rush them along. Still others might say there isn't any need to do anything at all. Just wait and see.

As food for thought on that thorny conundrum, here's a memorable quote by Albert Einstein: 'The world will not be destroyed by those who do evil, but by those who watch them without doing anything.' Mull that over and then make your decision on what we should do next about AGI and global governance issues. The fate of humanity is likely on the line.

OpenAI rolls out first international learning platform

Coin Geek

20-06-2025

  • Business
  • Coin Geek


OpenAI, the maker of ChatGPT, has entered into a strategic agreement with the IndiaAI Mission to introduce OpenAI Academy in India. This marks the platform's first-ever international Academy chapter and the formal start of OpenAI's education and artificial intelligence (AI) literacy programs in India. The South Asian nation currently represents the second-largest market for ChatGPT users, highlighting the country's growing interest in AI tools and applications.

The collaboration aims to expand access to AI education and training across the country. The partnership underscores India's broader efforts to make advanced technologies more accessible and inclusive as part of its national AI development strategy. 'Together with IndiaAI, we're working to equip the next generation of students, developers, and mission-driven organizations with the tools and training they need to build responsibly with AI,' the company said.

As part of the agreement, OpenAI will contribute a range of educational materials and resources to support IndiaAI's 'FutureSkills' initiative, as well as the iGOT Karmayogi platform, which is focused on upskilling civil servants. Additionally, OpenAI will offer up to $100,000 in application programming interface (API) credits to 50 fellows and startups selected under the IndiaAI Mission.

The initiative seeks to make AI skills accessible to a broad audience nationwide by providing both online and offline training in English and, eventually, other regional languages. A key goal of the initiative is to train one million teachers in the practical use of generative AI technologies. OpenAI also plans to organize hackathons across seven Indian states, aiming to engage around 25,000 students.

Jason K., Chief Strategy Officer at OpenAI, reportedly said, 'India is emerging as one of the most dynamic hubs for AI innovation.
We are thrilled to collaborate with IndiaAI to empower individuals with the skills and confidence to harness AI meaningfully in their daily lives and careers.' 'As demand for AI professionals is expected to reach 1 million by 2026, there's a significant opportunity and a need to expand AI skills, development and make sure people from every part of India can participate and benefit,' he added.

The initiative comes at a time when OpenAI is navigating a challenging legal landscape in India, where it is attempting to argue that Indian courts lack jurisdiction over its United States-based operations. This position is likely to face scrutiny, especially given past instances where similar arguments by platforms like Elon Musk's X have been unsuccessful, and tech companies have come under pressure from Indian authorities over regulatory compliance. OpenAI is embroiled in a legal dispute initiated by the Indian news agency ANI. The case centers on allegations that OpenAI used copyrighted content without authorization, intensifying the legal and regulatory challenges the company faces in one of its most important markets.

Major shift in Sam Altman's India vision

In February, OpenAI's chief executive, Sam Altman, held discussions with India's Minister for Electronics and Information Technology (MeitY), Ashwini Vaishnaw, to explore collaborative opportunities in building an affordable and accessible AI infrastructure in India. The talks focused on areas such as the development of AI models, production of graphics processing units (GPUs), and the creation of practical AI-driven applications tailored to India's needs. 'Had super cool discussion with Sam Altman on our strategy of creating the entire AI stack – GPUs, model, and apps. Willing to collaborate with India on all three,' Vaishnaw wrote on X after the discussions.
Altman's India visit marked a notable change in his outlook compared to his statements in 2023, when he expressed skepticism about the ability of countries outside the United States to develop cutting-edge AI technologies. His recent engagement signals a recognition of India's growing influence in the global AI landscape and its potential to become a key contributor to the next wave of AI advancements. 'India is an incredibly important market for AI in general, for OpenAI in particular. It's our second-biggest market, and we have tripled our users here in the last year… The country has embraced AI technology and is building the entire stack, from chips to models and applications,' Altman had said in February.

India's AI market to more than triple to $17 billion by 2027

Altman's change in outlook toward India is no coincidence; it mirrors the nation's fast-growing influence in the global technology arena. Thanks to its vast digital population and abundance of skilled engineers, India is increasingly seen as a center for innovation, real-world testing, and large-scale implementation of advanced technologies such as artificial intelligence. As the world's second-largest online market, boasting over 900 million Internet users, India presents a powerful combination of widespread mobile connectivity and strong digital infrastructure. This makes the South Asian powerhouse an ideal environment for launching scalable, affordable AI innovations tailored to both local and global needs.

According to a report by the Boston Consulting Group (BCG), India's domestic AI market is projected to more than triple to $17 billion by 2027, making it one of the fastest-growing AI economies globally. This momentum is fueled by rising enterprise tech investments, a thriving digital ecosystem, and a robust talent base. 'India already has 600,000+ AI professionals, with the number expected to double to 1.25 million by 2027.
The country accounts for 16% of the global AI talent pool, second only to the United States, a reflection of both its demographic advantage and STEM (science, technology, engineering, and mathematics) education pipeline,' the BCG report said.

The supporting infrastructure is also evolving rapidly. By 2025, the world's most populous country is set to establish 45 new data centers, adding approximately 1,015 megawatts of capacity to its existing network of 152 facilities.

India's startup landscape is evolving just as quickly. The country is now home to more than 4,500 AI-driven startups, with nearly 40% founded in the past three years, the BCG report said. These companies are bringing innovation to a wide range of industries, including healthcare, agriculture, transportation, and financial services. Many of them are tackling unique Indian problems through AI-based solutions, which are increasingly gaining relevance on a global scale. 'With its talent, scale, infrastructure, and policy tailwinds, India is not just poised to adopt AI, it is positioned to help define how AI shapes the global economy,' the BCG report pointed out.

In March 2024, the Indian government approved a funding package of approximately $1.24 billion for the IndiaAI Mission, to be implemented over a five-year period. This significant investment is designed to accelerate the country's AI ecosystem, drive innovation, and support entrepreneurial ventures. According to the Union Cabinet, the country's highest policy-making authority, the initiative is expected to benefit the public and stimulate economic growth at the grassroots level. The IndiaAI Mission envisions the creation of a robust, inclusive AI ecosystem by addressing key areas such as equitable access to computing power, improved data quality, development of homegrown AI technologies, and fostering a skilled talent pool.
It also aims to facilitate collaboration between academia and industry, support startups through risk capital, encourage socially beneficial AI applications, and uphold ethical standards in AI development. These goals are being pursued under seven foundational pillars that guide the Mission's framework.

As part of its strategy, the Mission is building a scalable AI computing infrastructure tailored to the needs of India's expanding AI research and startup landscape. This includes setting up an advanced AI compute system equipped with over 18,000 GPUs, made possible through public-private partnerships. Eligible users will be able to access these computing resources at a 40% reduced cost under the scheme, significantly lowering barriers to AI development and experimentation.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: India poised to become leaders in Web3
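As a quick sanity check on the figures cited above, the sketch below works out the growth rates they imply. Note the assumptions: the article says the market will "more than triple" to $17 billion by 2027 but never states the base year or base size, so the ~$5.5 billion starting point over three years used here is purely illustrative, not a figure from the report.

```python
# Back-of-the-envelope checks on the India AI figures cited above.
# ASSUMPTION (not from the article): "more than triple by 2027" is read
# against a ~$5.5B market three years earlier; the base is illustrative.

base_market_bn = 5.5          # assumed starting market size, $bn
target_market_bn = 17.0       # BCG projection for 2027
years = 3                     # assumed growth window

# Compound annual growth rate implied by that trajectory
cagr = (target_market_bn / base_market_bn) ** (1 / years) - 1
print(f"Implied market CAGR: {cagr:.1%}")        # ~46% per year

# Average capacity of the 45 planned data centers (1,015 MW total)
avg_mw = 1015 / 45
print(f"Average new data center: {avg_mw:.1f} MW")

# Talent pool doubling: 600k -> 1.25M over the same assumed window
talent_growth = (1.25e6 / 6.0e5) ** (1 / years) - 1
print(f"Implied talent CAGR: {talent_growth:.1%}")
```

Even under these rough assumptions, the implied growth rates are steep, which is consistent with the report's framing of India as one of the fastest-growing AI economies.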

ECB's Villeroy calls on EU to set deadlines for financial sovereignty

Zawya

18-06-2025

  • Business
  • Zawya


Top European Central Bank policymaker Francois Villeroy de Galhau urged the European Union to set deadlines in order to speed up progress on financial integration and on pooling its savings to sustain investments.

Speaking to students from all over Europe gathered in Milan for the Young Factor conference, Villeroy de Galhau said the European Union should create a sort of Artificial Intelligence (AI) community, as it did for steel and coal after World War Two. If it acted now and upped its game, Europe could still catch up with the United States and China on AI.

"The technology is evolving, and we could have the second mover advantage," Villeroy de Galhau, who is the Bank of France governor, said. "We still have a chance, provided we put together our resources, talents, and money on AI. This is why I call ... for an AI European Community, putting together our resources," he said.

In order to do so, the EU must find ways to retain and invest domestically more of its residents' savings, a big portion of which is currently being exported into other markets, primarily the United States, financing investments there. The European market is not particularly attractive for financial capital given its fragmentation, Villeroy de Galhau said, calling for the measures outlined by former Italian Prime Ministers Mario Draghi and Enrico Letta in their respective reports on European competitiveness to be swiftly adopted.

"We know what we have to do. Draghi plus Letta plus the savings and investments union. By the way, most of these measures don't have fiscal costs ... But we are too slow: it's now, or it could be never," he said. "I really hope we can have a deadline and say: we will implement the Draghi and Letta reports, build European economic and financial sovereignty till, say, the 1st of January 2028, as we did with the single market," he added.

(Reporting by Valentina Za; Editing by Chizu Nomiyama)

Democratizing AI: Google Cloud's vision for accessible agent development

TechCrunch

06-06-2025

  • Business
  • TechCrunch


Google Cloud's Iliana Quinonez takes a deep dive into why the company sees democratizing AI agent development as critical to organizational advancement. This presentation, held at TechCrunch Sessions: AI, suits anyone new to AI as well as those well-versed in the field, exploring how AI agents can enhance collaborative workflows and be used to build sophisticated intelligent systems.

Future Forecasting The Yearly Path That Will Advance AI To Reach AGI By 2040

Forbes

05-06-2025

  • General
  • Forbes


Future forecasting the yearly path of advancing today's AI to AGI by 2040.

In today's column, I am continuing my special series on the likely pathways that will get us from conventional AI to the avidly sought attainment of AGI (artificial general intelligence). AGI would be a type of AI that is fully on par with human intellect in all respects. I've previously outlined seven major paths that seem to be the most probable routes of advancing AI to reach AGI (see the link here). Here, I undertake an analytically speculative deep dive into one of those paths, namely the year-by-year aspects of the most-expected route, the linear path. Other upcoming postings will cover each of the remaining paths. The linear path consists of AI being advanced incrementally, one step at a time, until we arrive at AGI. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or whether AGI might only be achievable decades or perhaps centuries from now.
The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Right now, efforts to forecast when AGI is going to be attained consist principally of two paths. First, there are highly vocal AI luminaries making individualized, brazen predictions. Their headiness makes outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.

Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus? Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused. The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one.

For this discussion on the various pathways to AGI, I am going to proceed with the year 2040 as the consensus anticipated target date. Besides the scientific consensus of AI experts, another newer and more expansive approach to gauging when AGI will be achieved is known as AGI convergence-of-evidence or AGI consilience, which I discuss at the link here.
As mentioned, in a previous posting I identified seven major pathways by which AI is going to advance to become AGI (see the link here). The most often presumed path is the incremental progression trail. The AI industry tends to refer to this as the linear path. It is essentially slow and steady. Each of the other remaining major routes involves various twists and turns. Here's my list of all seven major pathways getting us from contemporary AI to the treasured AGI: You can apply those seven possible pathways to whatever AGI timeline you want to come up with.

Let's undertake a handy divide-and-conquer approach to identify what must presumably happen on a year-by-year basis to get from current AI to AGI. Here's how that goes. We are living in 2025 and somehow are supposed to arrive at AGI by the year 2040. That's essentially 15 years of elapsed time. In the particular case of the linear path, the key assumption is that AI is advancing in a stepwise fashion each year. There aren't any sudden breakthroughs or miracles that perchance arise. It is steady work that requires earnestly keeping our noses to the grindstone and getting the job done in those fifteen years ahead.

The idea is to map out the next fifteen years and speculate what will happen with AI in each respective year. This can be done in a forward-looking mode and also a backward-looking mode. The forward-looking mode entails thinking about the progress of AI on a year-by-year basis, starting now and culminating in arriving at AGI in 2040. The backward-looking mode involves starting with 2040 as the deadline for AGI and then working back from that achievement on a year-by-year basis to arrive at the year 2025 (matching AI presently). This combination of forward and backward envisioning is a typical hallmark of futurecasting.

Is this kind of forecast of the future ironclad? Nope.
If anyone could precisely lay out the next fifteen years of what will happen in AI, they would probably be as clairvoyant as Warren Buffett when it comes to predicting the stock market. Such a person could easily be awarded a Nobel Prize and ought to be one of the richest people ever.

All in all, this strawman that I show here is primarily meant to get the juices flowing on how we can be future forecasting the state of AI. It is a conjecture. It is speculative. But at least it has a reasonable basis and is not entirely arbitrary or totally artificial. I went ahead and used the fifteen years of reaching AGI in 2040 as an illustrative example. It could be that 2050 is the date for AGI instead, and thus this journey would play out over 25 years. The timeline and mapping would then have 25 years to deal with rather than fifteen. If 2030 is going to be the AGI arrival year, the pathway would need to be markedly compressed.

I opted to identify AI technological advancements for each of the years and added some brief thoughts on the societal implications too. Here's why. AI ethics and AI law are bound to become increasingly vital and will to some degree foster AI advances and in other ways possibly dampen some AI advances; see my in-depth coverage of such tensions at the link here.

Here then is a strawman futures-forecast year-by-year roadmap from 2025 to 2040 of a linear path getting us to AGI:

Year 2025: AI multi-modal models finally become robust and fully integrated into LLMs. Significant improvements in AI real-time reasoning, sensorimotor integration, and grounded language understanding occur. The use of AI in professional domains such as law, medicine, and the like ratchets up. Regulatory frameworks remain sporadic and generally unadopted.

Year 2026: Agentic AI starts to blossom and becomes practical and widespread. AI systems with memory and planning capabilities achieve competence in open-ended tasks in simulation environments. Public interest in governing AI increases.

Year 2027: The use of AI large-scale world models spurs substantially improved AI capabilities. AI can now computationally improve from fewer examples via advancements in AI meta-learning. Some of these advances allow AI to be employed in white-collar jobs, causing economic displacement, but only to a minor degree.

Year 2028: AI agents have gained wide acceptance and are capable of executing multi-step tasks semi-autonomously in digital and physical domains, including robotics. AI becomes a key element as taught in schools and as used in education, co-teaching jointly with human teachers.

Year 2029: AI is advanced sufficiently to have a generalized understanding of physical causality and real-world constraints through embodied learning. Concerns about AI as a job displacer reach heightened attention.

Year 2030: Self-improving AI systems begin modifying their own code under controlled conditions, improving efficiency without human input. This is an important underpinning. Some claim that AGI is now just a year or two away, but this is premature, and ten more years will first take place.

Year 2031: Hybrid AI consisting of integrated cognitive architectures unifying symbolic reasoning, neural networks, and probabilistic models has become the new accepted approach to AI. Infighting among AI developers as to whether hybrid AI was the way to go has now evaporated. AI-based tutors fully surpass human teachers in personalization and subject mastery, putting human teachers at great job risk.

Year 2032: AI agents achieve human-level performance across most cognitive benchmarks, including abstraction, theory of mind (ToM), and cross-domain learning. This immensely exceeds prior versions of AI that did well on those metrics but not nearly to this degree. Industries begin to radically restructure and rethink their businesses with an AI-first mindset.
Year 2033: AI scalability alignment protocols improve in terms of human-AI values alignment. This opens the door to faster adoption of AI due to a belief that AI safety is getting stronger. Trust in AI grows, but so does societal dependence on AI.

Year 2034: AI interaction appears to be indistinguishable from human-to-human interaction, even as tested by those who are versed in tricking AI into revealing itself. The role of non-human intelligence, and how AI stretches our understanding of philosophy, religion, and human psychology, has become a high priority.

Year 2035: AI systems exhibit bona fide signs of self-reflection, not just routinized mimicry or parroting. Advances occur in having AI computationally learn from failure across domains and optimize for long-term utility functions. Debates over some form of UBI (universal basic income) lead to various trials of the approach to aid human labor displacements due to AI.

Year 2036: AI advancement has led to fluid generalization across a wide swath of domains. Heated arguments take place about whether AGI is emerging; some say it is, and others insist that a scaling wall is about to be hit and that this is the best that AI will be. Nations begin to covet their AI and set up barriers to prevent other nations from stealing or copying the early AGI systems.

Year 2037: Advances in AI showcase human-like situational adaptability and innovation. New inventions and scientific discoveries are being led by AI. Questions arise about whether this pre-AGI has sufficient moral reasoning and human goal alignment.

Year 2038: AI systems now embody persistent identities, seemingly able to reflect on experiences across time. Experts believe we are on the cusp of AI reaching cognitive coherence akin to humans. Worldwide discourse on the legal personhood and rights of AI intensifies.
Year 2039: Some of the last barriers to acceptance of AI as nearing AGI are overcome when AI demonstrates creativity, emotional nuance, and abstract reasoning in diverse contexts. This was one of the last straws on the camel's back. Existential risks and utopian visions fully dominate public apprehensions.

Year 2040: General agreement occurs that AGI has now been attained, though it is still early days of AGI and some are not yet convinced that AGI is truly achieved. Society enters a transitional phase: post-scarcity economics, redefinition of human purpose, and consideration of co-evolution with AGI.

Mull over the strawman timeline and consider where you will be and what you will be doing during each of those fifteen years. One viewpoint is that we are all along for the ride and there isn't much that anyone can individually do. I don't agree with that sentiment. Any of us can make a difference in how AI plays out and what the trajectory and impact of reaching AGI is going to be. As per the famous words of Abraham Lincoln: 'The most reliable way to predict the future is to create it.'
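The linear-path exercise above boils down to a simple scheduling idea: pick a start year and a target AGI year, then spread the milestones evenly between them. The sketch below illustrates only that mechanic; the milestone labels are compressed paraphrases of the column's strawman entries, and the function name and structure are this sketch's own, not anything from the column.

```python
# Illustrative sketch of the forward/backward "futurecasting" mapping the
# column describes: milestones spread evenly between a start year and a
# target AGI year. Labels are paraphrased from the article's strawman list.

def linear_path(start_year: int, agi_year: int, milestones: list[str]) -> dict[int, str]:
    """Assign each milestone a year, assuming steady stepwise progress."""
    step = (agi_year - start_year) / (len(milestones) - 1)
    return {round(start_year + i * step): m for i, m in enumerate(milestones)}

milestones = [
    "robust multi-modal LLMs",
    "practical agentic AI",
    "self-improving AI under controlled conditions",
    "human-level cognitive benchmarks",
    "fluid cross-domain generalization",
    "general agreement that AGI is attained",
]

# Forward-looking: 2025 through the consensus 2040 target
for year, m in linear_path(2025, 2040, milestones).items():
    print(year, "-", m)

# Compressing toward an aggressive 2030 target reuses the same mapping,
# showing how much each intermediate step would have to accelerate.
print(linear_path(2025, 2030, milestones))
```

The same function also captures the article's point about alternative deadlines: swapping in 2050 stretches the step size, while 2030 compresses it sharply.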
