Latest news with #DigitalEducationCouncil


Forbes
01-07-2025
- Business
- Forbes
Build Trust: The First Step In AI's Journey Through Higher Education
Bruce Dahlgren is the CEO of Anthology. He's a seasoned technology executive with more than 30 years of leadership experience.

I recently had the opportunity to attend a global gathering to discuss the future of education. In a fireside chat on 'Empowering Teachers and the Teaching Profession,' we discussed a central truth: Artificial intelligence (AI) can be a powerful ally in education—but only when deployed with clarity, care and trust. That message is especially relevant in higher education today, where the use of AI is accelerating, but the structures to support it are still taking shape.

AI is already reshaping how we work, teach and learn, and higher education institutions are identifying new ways to move forward with clarity and confidence. Recent surveys show a growing divide between student usage and institutional readiness. A study by the Digital Education Council found that 86% of learners globally use AI in their studies, while an Inside Higher Ed report shows most institutions do not have policies for enterprise-level AI use. This gap presents an important opportunity: Before realizing AI's full potential, institutions must first build a foundation of trust, both internally and externally.

Understanding The Disconnect

AI is rapidly gaining adoption. Learners are using AI to generate ideas, study more efficiently and automate tasks like note taking. Faculty are exploring how AI can support grading, administrative tasks and individualized feedback. These early use cases demonstrate AI's potential, but they're often unfolding in the absence of clear institutional frameworks. The result is fragmented adoption: AI is being used, but often without structure, support or alignment across departments. Addressing this challenge requires coordinated leadership to develop responsible policies, communicate clear expectations and build trust across the institution.

Responsible Adoption Begins With Governance

Rather than focusing solely on emerging tools, institutions can start by asking clear, foundational questions:

• Who sets AI policy?
• How will data and ethics be managed?
• What systems ensure accountability and transparency?

Strong, cross-functional collaboration can help leaders ensure that AI tools are deployed thoughtfully. With shared policies and clear standards, institutions can create consistent, inclusive experiences that benefit all learners. Above all, educators and administrators must remain in control; technology should support, not substitute for, human decision-making.

AI's Value Across The Education Ecosystem

AI offers meaningful benefits for every part of the academic community. Generative AI can improve efficiency across a university's operations by automating routine tasks, surfacing insights faster and enabling smarter resource allocation. For example, AI can streamline administrative workflows like scheduling, admissions processing and student communications—reducing manual workloads and response times. It can enhance decision-making by analyzing large volumes of data to identify trends, risks and opportunities more quickly than traditional methods. In areas like IT and facilities management, AI can optimize maintenance schedules and predict service needs. By handling repetitive tasks and offering predictive insights, AI frees up staff to focus on more strategic, high-impact work—ultimately helping institutions operate more effectively and deliver better experiences for students and faculty alike.
For students, AI provides individualized support; for faculty, it streamlines administrative tasks and allows educators to focus on teaching and learning.

Supporting Literacy And Inclusion

Employers are increasingly looking for graduates who can leverage AI—and do so responsibly. Institutions play a critical role in preparing learners to use AI ethically, effectively and with awareness of its limitations in real-world settings. To maximize these opportunities, institutions can prioritize digital and ethical AI literacy and create environments for faculty and learners that encourage reflection, inquiry and responsible use. I've noticed some institutions are introducing exercises where students engage with AI-generated responses and then evaluate their accuracy and logic, building critical thinking and AI fluency in tandem. Open dialogue, pilot programs and cross-functional working groups can help create strong, campus-wide alignment. When institutions take a proactive and inclusive approach, they can build confidence and readiness for ongoing innovation.

A Future Built On Trust

AI can be a powerful tool for enabling personalized learning, dynamic assessment and more student-centered support. It empowers educators to focus on creativity and connection, and it helps students prepare for a world where adaptability and digital fluency are key. But as roles and expectations evolve, embracing innovation with care is essential. I think the real opportunity lies in leading with intention. Institutions that prioritize trust today could be the ones best equipped to shape tomorrow's most human-centered, forward-looking approaches to education.


Zawya
30-06-2025
- Business
- Zawya
Zayed University embraces artificial intelligence to transform learning and innovation
New undergraduate and master's programs in technology launching in the Fall
A university-wide training initiative will begin to equip faculty with practical strategies to integrate AI into their teaching and course design

Abu Dhabi/Dubai, UAE: Zayed University (ZU) has joined more than 90 leading institutions worldwide as a member of the Digital Education Council (DEC) - a global community dedicated to advancing AI literacy, responsible digital transformation, and innovation in education.

ZU is the first university from the UAE to join the DEC, marking a significant milestone in the university's strategic vision to equip students, faculty, and leadership with the tools, mindset, and capabilities needed to thrive in an increasingly digital world. The membership builds on ZU's broader efforts to integrate AI across the university - including ongoing faculty development, digital pedagogy, and curriculum innovation aligned with the future of work.

Through its participation in the DEC, ZU will engage in working groups, global advisory sessions, and executive briefings, contributing regional perspectives while drawing from global best practices. These insights will directly inform on-campus initiatives, from curriculum design to the student experience. Starting this summer, ZU will also roll out two key DEC initiatives: the Certificate in AI for Higher Education, designed for faculty and leadership, and the AI Literacy for Students program.

Alongside the new DEC membership, ZU's College of Technological Innovation (CTI) will launch a new Bachelor of Science in Intelligent Systems Engineering this Fall. The program will prepare a new generation of engineers to design, build, and manage intelligent systems powered by AI and emerging technologies. CTI is also introducing two new Master's programs, in Cybersecurity and in Digital Transformation and Innovation, responding to growing national and global demand for advanced digital skills and specialized expertise.

To ensure that all students, across every college and program, develop foundational knowledge in AI and emerging digital tools, ZU is undertaking a university-wide faculty training initiative. This will equip faculty with practical strategies to embed AI into course design and teaching practices - ensuring every graduate benefits from exposure to AI.

'Integrating artificial intelligence across our work is vital to building digital fluency at Zayed University,' said Professor Michael Allen, Acting Vice President of Zayed University. 'Joining the DEC allows us to both contribute to and benefit from a global network of education leaders. But ultimately, the real impact lies in how we bring those insights to life - in our classrooms, in our programs, and in how we prepare students for the world ahead.'

About Zayed University

Zayed University, the UAE's flagship higher education institution, was established in 1998 and proudly bears the name of the Founder of the Nation – the late Sheikh Zayed bin Sultan Al Nahyan. In the spirit of Sheikh Zayed, the University is a pioneer and innovator in the field of education and research. The University currently caters to many Emirati and international students across its full range of undergraduate and postgraduate offerings.
Led by Her Excellency Shamma Bint Suhail Al Mazrouei, Minister of Community Empowerment and Chairperson of the Zayed University Board of Trustees, the University proudly serves the needs of the Nation and contributes to the UAE's economic, social and cultural progress from its state-of-the-art campuses in Abu Dhabi and Dubai. Through research, scholarship, creative activities and outreach, Zayed University provides educational leadership, expands opportunity, and enriches the knowledge of local, regional, and global communities.


Yahoo
10-05-2025
- Yahoo
Teachers Using AI to Grade Their Students' Work Sends a Clear Message: They Don't Matter, and Will Soon Be Obsolete
Talk to a teacher lately, and you'll probably get an earful about AI's effects on student attention spans, reading comprehension, and cheating. As AI becomes ubiquitous in everyday life — thanks to tech companies forcing it down our throats — it's probably no shocker that students are using software like ChatGPT at a nearly unprecedented scale. One study by the Digital Education Council found that 86 percent of university students use some type of AI in their work.

That's causing some fed-up teachers to fight fire with fire, using AI chatbots to score their students' work. As one teacher mused on Reddit: "You are welcome to use AI. Just let me know. If you do, the AI will also grade you. You don't write it, I don't read it."

Others are embracing AI with a smile, using it to "tailor math problems to each student," in one example listed by Vice. Some go so far as requiring students to use AI. One professor in Ithaca, NY, shares both ChatGPT's comments on student essays as well as her own, and asks her students to run their essays through AI on their own.

While AI might save educators some time and precious brainpower — which arguably make up the bulk of the gig — the tech isn't even close to cut out for the job, according to researchers at the University of Georgia. While we should probably all know it's a bad idea to grade papers with AI, a new study by the university's School of Computing gathered data on just how bad it is.

The research tasked the large language model (LLM) Mixtral with grading written responses to middle school homework. Rather than feeding the LLM a human-created rubric, as is usually done in these studies, the team tasked Mixtral with creating its own grading system. The results were abysmal: compared to a human grader, the LLM accurately graded student work just 33.5 percent of the time. Even when supplied with a human rubric, the model had an accuracy rate of just over 50 percent.

Though the LLM "graded" quickly, its scores were frequently based on flawed logic inherent to LLMs. "While LLMs can adapt quickly to scoring tasks, they often resort to shortcuts, bypassing deeper logical reasoning expected in human grading," wrote the researchers.

"Students could mention a temperature increase, and the large language model interprets that all students understand the particles are moving faster when temperatures rise," said Xiaoming Zhai, one of the researchers. "But based upon the student writing, as a human, we're not able to infer whether the students know whether the particles will move faster or not."

Though the researchers wrote that "incorporating high-quality analytical rubrics designed to reflect human grading logic can mitigate [the] gap and enhance LLMs' scoring accuracy," a boost from 33.5 to 50 percent accuracy is laughable. Remember, this is the technology that's supposed to bring about a "new epoch" — a technology we've poured more seed money into than any in human history. If there were a 50 percent chance your car would fail catastrophically on the highway, none of us would be driving. So why is it okay for teachers to take the same gamble with students?

It's just further confirmation that AI is no substitute for a living, breathing teacher, and that isn't likely to change anytime soon. In fact, there's mounting evidence that AI's comprehension abilities are getting worse as time goes on and original data becomes scarce.
Recent reporting by the New York Times found that the latest generation of AI models hallucinate as much as 79 percent of the time — way up from past numbers. When teachers choose to embrace AI, this is the technology they're shoving off onto their kids: notoriously inaccurate, overly eager to please, and prone to spewing outright lies. That's before we even get into the cognitive decline that comes with regular AI use.

If this is the answer to the AI cheating crisis, then maybe it'd make more sense to cut out the middleman: close the schools and let the kids go one-on-one with their artificial buddies.
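For what it's worth, the "accuracy" in the Georgia study is exact-match agreement between the model's score and a human grader's score on the same response. A minimal sketch of that comparison, with a stubbed-out model call (hypothetical names, not the study's actual pipeline), might look like this:

    # Minimal sketch of exact-match grading agreement. `llm_grade` is a
    # hypothetical stub standing in for a real model call (for example,
    # prompting Mixtral); this is not the study's actual pipeline.

    def llm_grade(response, rubric=None):
        """Stub: in practice, prompt an LLM with the student response
        (and optionally a human-written rubric) and parse out a score."""
        return 0  # placeholder score

    def agreement_rate(graded_items, rubric=None):
        """Fraction of responses where the LLM's score matches the human's."""
        matches = sum(
            llm_grade(response, rubric) == human_score
            for response, human_score in graded_items
        )
        return matches / len(graded_items)

    # graded_items = [("student response text", human_score), ...]
    # An agreement_rate of 0.335 corresponds to the 33.5 percent figure above.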


Yahoo
26-01-2025
- Science
- Yahoo
Opinion - To short-circuit the higher education AI apocalypse, we must embrace generative AI
The rapid improvement of generative AI tools has led many of my peers to proclaim that higher education as we know it has come to a crashing and shocking end. I agree. In my large-enrollment general education course at the University of Florida, I can no longer assign an essay asking students to state their views on genetic engineering and assume the responses I receive are written by humans. So the critical question we must ask as academics is, 'What do we do now?'

Rather than try to create assignments that AI cannot tackle, I propose we develop assignments that embrace AI text generation. We don't want to ignore the 54 percent of students who use AI at least weekly in their course assignments, according to the Digital Education Council. We don't want to ban AI. And even when we, as educators, try to trick AI tools, newer versions of ChatGPT just come along to thwart that strategy.

With this in mind, I modified the final assignment in my course to require that students submit an entirely AI-generated first draft, which they then modified to reflect their own perspectives. In the first couple of semesters using this strategy, students color-coded the sources of text to mark which parts were human-generated and which were AI-generated. This strategy allowed students to use AI and to reflect on how they would utilize it in the future. Tracking of text origin was further streamlined by the recently released 'Authorship' tool from Grammarly, which accurately attributes text as 'typed by a human' or 'copied from a source/AI-generated.'

Advancements in technology have upended the careful development of assessments in higher education before and will continue to do so, even if AI appears to be an all-encompassing, do-everything tool. Those of us born in the 1970s remember a time before the ever-present calculator, when math teachers could assign long-division problems without worrying that students who came up with the correct answer did not understand the methods required to generate it. More recently, language translation, a key learning tool in language acquisition, was upended over a few days in 2016 by the release of a new version of Google Translate. That rapid improvement parallels how ChatGPT 3.5 burst into the consciousness of a large portion of the population in November 2022. In both cases, educators eventually embraced and used these new tools to improve student learning outcomes.

While requiring a GenAI first draft of an assignment is not a model that will work in all situations, 'showing the work' and student reflection can play key roles in student assessment. My twin high school seniors possess graphing calculators that are more powerful than the computer on which I wrote my dissertation, so I have observed firsthand how educators have modified assessments to adjust for such changes, emphasizing the processes needed to answer the assignment more than the final answer. Language teachers have pivoted to incorporate student reflections on why one word was chosen over another, for example. In my course, Part B of the final assignment requires students to reflect on how well (or poorly) the initial AI draft reflected their views on the assigned topic. I acknowledge that students with access to AI during the reflection portion of assignments could use the tool to show how they produced the 'work.'
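Mechanically, the color-coding exercise amounts to labeling each span of a draft by origin and summarizing the mix. A minimal sketch of that bookkeeping, with invented labels (this is not Grammarly's Authorship tool or its API), could look like this:

    # Toy sketch: given spans of an essay labeled by origin, report what share
    # of the text is AI-generated versus human-written. The labels are invented
    # for illustration; this is not Grammarly's Authorship tool or its API.

    spans = [
        ("Genetic engineering raises difficult trade-offs.", "ai"),
        ("In my view, the benefits for crop resilience outweigh the risks.", "human"),
    ]

    def origin_shares(labeled_spans):
        """Return the fraction of characters attributed to each origin label."""
        total = sum(len(text) for text, _ in labeled_spans)
        shares = {}
        for text, origin in labeled_spans:
            shares[origin] = shares.get(origin, 0) + len(text) / total
        return shares

    print(origin_shares(spans))  # {'ai': 0.428..., 'human': 0.571...}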
Tools that track AI usage, like the 'Authorship' tool, hold promise for providing both instructors and students with information on where and how much AI text was used in an assignment.

The capability of AI to generate text (and images) will keep advancing, becoming increasingly integrated into the daily lives of both ourselves and our students. Within our professional lifetimes, it will be capable of responding to the most imaginative essay prompts educators can design. By shifting the focus of assignments from pure content creation to critical engagement, analysis and editing, we will teach our students how to think creatively, collaborate and communicate their ideas effectively and responsibly. These are the same skills they need to master to work successfully in teams and communicate efficiently in their future careers.

Brian Harfe, Ph.D., is a professor in the College of Medicine and associate provost at the University of Florida. He runs 14 international exchange programs and a study abroad program while teaching about 450 students each semester.

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

