Latest news with #AIInstitute


Mail & Guardian
08-07-2025
- Business
- Mail & Guardian
Tech experts call on South Africa to scale up its digital infrastructure
South Africa needs to accelerate digital transformation and infrastructure to harness the potential of artificial intelligence. Although the country has made inroads with developing policies for the adoption of AI, including the National Policy on Data and Cloud, the National AI Policy Framework and the establishment of the AI Institute of South Africa, which laid strong groundwork, uptake of the technology has been slow.

'In order to sustain positive momentum and unlock the full potential of AI, South Africa must address three key challenges: AI infrastructure, industry application, and talent and local ecosystem development,' said Gene Zhang, a technology company chief executive. In terms of infrastructure, he said, the three main challenges are data, computing power and connectivity.

'Data is the fuel of intelligence. It is projected that by 2030, the world will generate 3.5 zettabytes [a trillion gigabytes] of data, with less than 30% of it utilised. This is largely due to the absence of unified platforms for data collection, storage, and processing,' Zhang said. 'Computing power is the brain of intelligence. Global demand for AI computing power is expected to reach 105 ZFLOPS [a unit for measuring the speed of a computer system] by 2030 — 500 times that of today. Yet, South Africa's AI computing growth rate remains below 60%.'

Communications and Digital Technologies Minister Malatsi said the government is working to expand connectivity. 'This includes concluding the Broadcast Digital Migration process to free up spectrum, expanding 5G infrastructure, and modernising public facilities with open-access fibre,' Malatsi said. Through the Affordable Smart Devices Workshop, which aims to ensure affordable access to smart devices for all South Africans, the treasury removed the luxury goods tax on smartphones priced under R2 500, he noted.
'This is a meaningful step toward reducing barriers for low-income households to access smart devices. It is one small step in a long journey of eliminating barriers to affordable smart devices,' the minister added.

South Africa is also lagging with respect to talent and local innovation, Zhang said. 'South Africa currently faces a talent gap of 500 000 in ICT [Information and Communications Technology], with 60% of this concentrated in AI, big data, and cloud. The country's total AI investment stands at around US$500 million, but 70% comes from foreign sources. This highlights a clear need to accelerate local AI investment and development.'

To address this, Malatsi said the country is investing in digital skills through the national Digital and Future Skills Strategy. The aim is to empower 70% of the population with basic digital skills by 2029, including by integrating digital literacy into basic education and scaling community-based learning initiatives. 'These efforts target not only students, but also job seekers, workers in transition and vulnerable groups such as women and persons with disabilities to ensure that no one is left behind in the digital economy,' he added.

The government's other aims are to promote the productive use of digital technologies — as a tool to access government services, run online businesses, reach new markets and connect with job opportunities — as well as to become the 'most attractive destination for ICT investment on the continent'.

Zhang said AI technologies are accelerating across the world, especially in public sector industries. 'AI models have improved significantly — reports show that since 2022, the accuracy of various AI models has increased by 91.5%. The cost of using AI has rapidly decreased — data shows that since 2022, the price of AI model usage has dropped by 99.3%. As a result, AI adoption is accelerating across industries — with 53% of enterprises already using AI.'
But in South Africa's industry-specific applications, AI adoption is still at the early stages. 'Cloud penetration is relatively low, and AI has yet to achieve deep integration with verticals,' he added.

As businesses adopt digital technology and AI, they should ensure the societies that use them are uplifted and the digital divide is not widened, said Jonas Bogoshi, the chief executive of South African ICT company BCX.

'We're not talking about gadgets here. We are talking about upliftment of societies,' Bogoshi said. 'We are talking about a mother in the rural community able to access health specialists with telemedicine. We are talking about a young student who can code. We are talking about an AI system that can detect that there's an issue on your crop and therefore be able to stop it and help you to be able to harvest quicker. It is more than just gadgets, it's how people interact with their yield.'


Irish Examiner
23-05-2025
- Business
- Irish Examiner
Workplace Wellbeing: Embracing AI's work-enhancing capabilities to help us work smarter
There's a new sense of anxiety in the workplace. It's called FOBO, the fear of becoming obsolete: the worry that artificial intelligence (AI) and new technologies will eventually make us all redundant.

A 2024 survey of 14,000 workers in 14 countries found that half believed their skills would no longer be required in five years. Another study last year reported that 46% of employees in the US feared machines would perform their jobs within the next five years, with another 29% expecting to be superseded even sooner. In Ireland, Government research revealed that approximately 30% of employees worked in occupations at risk of being replaced by technology. Historically, such concerns might have been limited to factory workers, but the research shows that modern-day FOBO affects almost everyone, including those working in finance, insurance, information technology, and communications.

'Almost all businesses, from the smallest start-ups to the largest organisations, are using AI-driven technology now,' says Maryrose Lyons, founder of the AI Institute, which runs training programmes. 'It's impacting most careers. The main ways are through generating content and ideas, automating repetitive administrative tasks and enhanced data analysis.'

Realising that AI has infiltrated their workplace in these ways unsettles some people, making them question their professional significance. According to career and counselling psychologist Sinéad Brady, it can undermine their sense of identity. 'For many of us, what we do at work plays an important part in how we see ourselves and how we imagine others see us,' she says. 'If we think that a machine or computer programme can do what we do, we can begin to doubt our own value. This doubt can cause huge anxiety.' The ever-escalating pace of change can further exacerbate this anxiety.
'We all have a different capacity for change,' says the work and organisational psychologist Leisha Redmond McGrath. 'Some love it while others prefer stability. But what's true for most of us is that we cope better with change if we feel we have some control over it. It's when we believe there's nothing we can do — that change is a wave coming at us, but we don't know when or how it will hit — that we feel most fearful.'

Face up to FOBO

So, what can we control when it comes to FOBO? Brady suggests facing the fear and reframing how we perceive this new technology. 'We've done it before,' she says. 'Many of us were afraid of computers when they were first introduced to the workplace, but we faced that fear. When Word and Excel, for example, took away some aspects of some jobs, they didn't make us obsolete. We learned to use them as tools in our work. We can do the same with AI.'

While we cannot predict how AI will develop or be integrated into the workplace, Lyons argues that it is an accessible tool for most in its current format. 'Just as you learned to master the likes of Excel, Outlook, and other software platforms when you first entered the workforce, you now have to learn AI,' she says. 'The American professor Ethan Mollick, a leading academic who studies the effects of AI on work, estimates that it takes an average of 10 hours of using AI tools before they start to come naturally.'

Brady points out that AI can enhance productivity and performance. 'By removing the need for some tasks, it gives us extra time for more challenging creative work,' she says. 'These days, I use AI to spellcheck and edit documents. When preparing talks, I ask it to present me with a counterargument so that I can address those points in my talk.
'Using AI in these ways makes me quicker and better at my job than someone who isn't using it.'

Brady also encourages us to concentrate on the human skills that AI will never replicate: 'I don't think AI will ever be able to communicate effectively, think creatively, or critically solve problems,' she says. 'A good tactic to counter FOBO would be to lean into those aspects of our work.'

Lyons gives some examples of how this might work in practice. 'If AI frees up six extra hours in your week, use them to engage in critical thinking, researching and coming up with ideas or building relationships with other humans, none of which AI can do,' she says. 'Have more off-site meetings with clients or sit down with an AI tool to brainstorm new ideas.'

Fight or flight

For those who are overcome by FOBO, despite the reassurances, Redmond McGrath looks at the psychological reasons behind it. 'It's terrifying to think you could lose your job and not have money to pay bills,' she says. 'If you identify with your work, it can feel threatening to learn that you might be usurped by technology. There's something called amygdala hijack that can occur when we experience threat in this way. A primitive part of our brain is activated, and we go into fight or flight mode, which can make us more sensitive and less rational.'

To prevent such negative reactions to FOBO, she suggests focusing on the 'building blocks' of wellbeing. 'Make sure you get enough rest, sleep, movement, and exercise,' she says. 'Eat well. Spend time on your relationships with others and with yourself. Connecting with nature or something bigger than yourself will give you a sense of perspective. And if you're feeling overwhelmed, talk to someone about it.
It will calm your nervous system and you'll be more likely to figure out more rational and proactive ways of responding to FOBO, especially if you're someone whose sense of identity and purpose has been bound up with your work.'

Talking to coworkers means you might also learn what they are doing to adapt to technology. 'Instead of trying to figure out the way forward on your own, which is daunting, or putting your head in the sand, which isn't advisable, finding out what others are doing and how employers and professional bodies are supporting people like you to retrain could help you capitalise on the positive benefits of technology,' says Redmond McGrath. Don't be afraid to ask younger colleagues for support, too. Having grown up with technology, Redmond McGrath says, they are often better able to use it and will likely be happy to share their expertise with you.

Whatever you do, try not to be afraid of technology. 'It's just a tool and it's possible to play with it,' says Brady. 'Ask ChatGPT to do something small and inconsequential for you. That could be the entry point that gets you over your initial fear.'

While noting the many benefits, Brady strikes a note of caution. 'The information it provides you with is based on data that isn't always accurate and that can be biased,' she says. 'AI and all modern technology are only ever as good as the information fed to them, which is why we should always question it for accuracy, assess it for quality, and not rely on it too much.'

Despite AI's limitations, Lyons urges people to overcome their FOBO and explore what it offers. 'There are so many tools that are being used in all sorts of jobs and they are changing how people work for the better,' she says. 'It could be career-ending to ignore these tools.
My advice is to engage and find out how this new technology can help us perform better and gain more satisfaction from our work.'
Yahoo
17-04-2025
- Science
- Yahoo
Popular AIs head-to-head: OpenAI beats DeepSeek on sentence-level reasoning
ChatGPT and other AI chatbots based on large language models are known to occasionally make things up, including scientific and legal citations. It turns out that measuring how accurate an AI model's citations are is a good way of assessing the model's reasoning abilities.

An AI model 'reasons' by breaking down a query into steps and working through them in order. Think of how you learned to solve math word problems in school. Ideally, to generate citations an AI model would understand the key concepts in a document, generate a ranked list of relevant papers to cite, and provide convincing reasoning for how each suggested paper supports the corresponding text. It would highlight specific connections between the text and the cited research, clarifying why each source matters.

The question is, can today's models be trusted to make these connections and provide clear reasoning that justifies their source choices? The answer goes beyond citation accuracy to address how useful and accurate large language models are for any information retrieval purpose.

I'm a computer scientist. My colleagues, researchers from the AI Institute at the University of South Carolina, Ohio State University and University of Maryland, Baltimore County, and I have developed the Reasons benchmark to test how well large language models can automatically generate research citations and provide understandable reasoning. We used the benchmark to compare the performance of two popular AI reasoning models, DeepSeek's R1 and OpenAI's o1. Though DeepSeek made headlines with its stunning efficiency and cost-effectiveness, the Chinese upstart has a way to go to match OpenAI's reasoning performance.

The accuracy of citations has a lot to do with whether the AI model is reasoning about information at the sentence level rather than the paragraph or document level.
Paragraph-level and document-level citations can be thought of as throwing a large chunk of information into a large language model and asking it to provide many citations. In this process, the large language model overgeneralizes and misinterprets individual sentences. The user ends up with citations that explain the whole paragraph or document, not the relatively fine-grained information in the sentence.

Further, reasoning suffers when you ask the large language model to read through an entire document. These models mostly rely on memorized patterns, which they are typically better at finding at the beginning and end of longer texts than in the middle. This makes it difficult for them to fully understand all the important information throughout a long document. Large language models get confused because paragraphs and documents hold a lot of information, which affects citation generation and the reasoning process. Consequently, reasoning from large language models over paragraphs and documents becomes more like summarizing or paraphrasing.

The Reasons benchmark addresses this weakness by examining large language models' citation generation and reasoning. Following the release of DeepSeek R1 in January 2025, we wanted to examine its accuracy in generating citations and its quality of reasoning and compare it with OpenAI's o1 model. We created a paragraph that had sentences from different sources, gave the models individual sentences from this paragraph, and asked for citations and reasoning.

To start our test, we developed a small test bed of about 4,100 research articles around four key topics that are related to human brains and computer science: neurons and cognition, human-computer interaction, databases and artificial intelligence.
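The sentence-level setup described above, in which each sentence is sent to the model separately rather than buried in a whole paragraph, can be sketched roughly as follows. The `ask_model` function, the prompt wording and the returned fields here are hypothetical stand-ins for illustration, not the authors' actual evaluation harness:

```python
def cite_sentence_level(sentences, ask_model):
    """Query the model once per sentence, collecting a citation and the
    model's supporting rationale for each, instead of asking it to cite
    an entire paragraph in one shot."""
    results = []
    for sentence in sentences:
        prompt = ("Cite the research paper that supports this sentence "
                  f"and explain why it supports it:\n{sentence}")
        citation, reasoning = ask_model(prompt)  # stand-in for a real LLM call
        results.append({"sentence": sentence,
                        "citation": citation,
                        "reasoning": reasoning})
    return results

# Toy stand-in model that returns a canned answer for any prompt.
def fake_model(prompt):
    return "paper_1", "the claim matches the paper's abstract"

out = cite_sentence_level(["Neurons encode information via spikes."], fake_model)
print(out[0]["citation"])  # paper_1
```

Prompting per sentence keeps the model's attention on one fine-grained claim at a time, which is the contrast the benchmark draws with paragraph- and document-level citation.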
We evaluated the models using two measures: F-1 score, which measures how accurate the provided citation is, and hallucination rate, which measures how sound the model's reasoning is, that is, how often it produces an inaccurate or misleading response.

Our testing revealed significant performance differences between OpenAI o1 and DeepSeek R1 across different scientific domains. OpenAI's o1 did well connecting information between different subjects, such as understanding how research on neurons and cognition connects to human-computer interaction and then to concepts in artificial intelligence, while remaining accurate. Its performance metrics consistently outpaced DeepSeek R1's across all evaluation categories, especially in reducing hallucinations and successfully completing assigned tasks. OpenAI o1 was better at combining ideas semantically, whereas R1 focused on making sure it generated a response for every attribution task, which in turn increased hallucination during reasoning. OpenAI o1 had a hallucination rate of approximately 35% compared with DeepSeek R1's rate of nearly 85% in the attribution-based reasoning task.

In terms of accuracy and linguistic competence, OpenAI o1 scored about 0.65 on the F-1 test, which means it was right about 65% of the time when answering questions. It also scored about 0.70 on the BLEU test, which measures how well a language model writes in natural language. These are pretty good scores. DeepSeek R1 scored lower, with about 0.35 on the F-1 test, meaning it was right about 35% of the time. Its BLEU score was only about 0.2, which means its writing wasn't as natural-sounding as OpenAI's o1. This shows that o1 was better at presenting information in clear, natural language.

On other benchmarks, DeepSeek R1 performs on par with OpenAI o1 on math, coding and scientific reasoning tasks.
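For readers unfamiliar with the F-1 score mentioned above: it is the harmonic mean of precision (what fraction of the citations the model produced are correct) and recall (what fraction of the correct citations it recovered). A minimal sketch of the computation for a single citation task, with made-up paper IDs:

```python
def f1_score(predicted, gold):
    """Harmonic mean of precision and recall over two sets of citations."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)  # predicted citations that are correct
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(predicted)  # fraction of predictions that are right
    recall = true_positives / len(gold)          # fraction of gold citations recovered
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: the model cites three papers, two of which are correct.
predicted = ["paper_12", "paper_34", "paper_99"]
gold = ["paper_12", "paper_34", "paper_56"]
print(round(f1_score(predicted, gold), 2))  # precision and recall are both 2/3, so 0.67
```

An F-1 of about 0.65 in this combined sense is roughly what the article means by o1 being 'right about 65% of the time'.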
But the substantial difference on our benchmark suggests that o1 provides more reliable information, while R1 struggles with factual consistency. Though we included other models in our comprehensive testing, the performance gap between o1 and R1 specifically highlights the current competitive landscape in AI development, with OpenAI's offering maintaining a significant advantage in reasoning and knowledge integration capabilities.

These results suggest that OpenAI still has a leg up when it comes to source attribution and reasoning, possibly due to the nature and volume of the data it was trained on. The company recently announced its deep research tool, which can create reports with citations, ask follow-up questions and provide reasoning for the generated response. The jury is still out on the tool's value for researchers, but the caveat remains for everyone: double-check all citations an AI gives you.

This article is republished from The Conversation, a nonprofit, independent news organization. It was written by Manas Gaur, University of Maryland, Baltimore County. Manas Gaur receives funding from the USISTEF Endowment Fund.


Arab News
09-02-2025
- Business
- Arab News
Boston Dynamics founder not concerned about robot takeover, warns against overregulation
RIYADH: The idea that robots could take over the world is not a 'serious concern,' said the founder of advanced robotics company Boston Dynamics, as he warned against excessive regulation at a Riyadh technology conference on Sunday.

'There's some fear that robots are going to somehow get out of hand and take over the world and eliminate people. I don't really think that's a serious concern,' Marc Raibert said during the fourth edition of the LEAP summit.

While regulation is necessary, Raibert believes that excessive restrictions could slow progress. He expressed his concern about 'overregulation stopping us from having the benefits of AI and robotics that could develop because robots can solve problems that we face in addition to causing problems.' He added that while regulating mature applications makes sense, limiting the technology too early could hinder its potential.

His comments were made during a fireside chat titled 'The Future of Robotics and AI,' in which he highlighted the role of artificial intelligence-powered robots in elderly care and assistance for people with disabilities. 'We have a couple of teams working on physical designs, but more importantly on the intelligence and perception needed to be able to do those kinds of tasks,' Raibert said.

Beyond industrial use, robotics is expected to play an important role in healthcare, supporting patient care, people with disabilities, and elderly assistance, according to Raibert, who founded the robotics company in 1992. 'I think cognitive intelligence, AI, is going to help us make it a lot easier to communicate with the robot, but also for the robot to understand the world, so that they can do things more easily without having everything programmed in detail,' he added.

Raibert also introduced a project at his AI Institute called 'Watch, Understand, Do,' which aims to improve robots' ability to learn tasks by observing human workers.
The initiative focuses on on-the-job training, where a robot can watch a worker perform a task — such as assembling a component in a factory — and gradually replicate it. While this process is intuitive for humans, it remains a technical challenge for robots, requiring advancements in machine perception and task sequencing.

He pointed out that while humanoid robots are gaining attention, true human-like capabilities go beyond having two arms and two legs. He emphasized that intelligence, problem-solving skills, and the ability to interact effectively with the environment will define the next generation of AI-driven robotics.

Raibert discussed the differences between robotics adoption in workplaces and homes, explaining that industrial environments offer a structured setting where robots can operate more efficiently. He noted that robots are likely to become more common in workplaces before being integrated into homes. However, integrating robots into homes presents additional challenges, including safety, cost, and adaptability to unstructured environments. He said while home robots will eventually become more common, their widespread adoption will likely follow the expansion of industrial and commercial robotics.

As part of LEAP, the Saudi Data and Artificial Intelligence Authority is gathering global AI leaders at its DeepFest platform during the fourth edition of the summit. With more than 150 speakers, 120 exhibitors, and an expected attendance of over 50,000 people from around the world, DeepFest showcases a range of cutting-edge AI technology. The event explores emerging technologies, fosters collaboration, exchanges expertise, and builds partnerships, contributing to innovation and strengthening cooperation among experts across diverse industries.