logo
Generative AI gains ground in higher edu: Study

Generative AI gains ground in higher edu: Study

Time of India01-06-2025
Vadodara: Students in Gujarat tend to use generative AI tools like Grammarly more frequently to correct their grammar, spellings, and punctuation marks or to create posters, brochures, and presentations using tools like Canva compared to their counterparts in Assam in the eastern belt of the country.
Tired of too many ads? go ad free now
In contrast, university students in Assam more often use Meta AI and ChatGPT compared to the university students in Gujarat. A study on the perceptions of university students from the Northeast and western region of India revealed how the usage of generative AI varies amongst students enrolled in higher education institutes in the two extreme corners of India.
Interestingly, the study revealed that despite different usage patterns, 95% of students did not receive formal training on generative AI tools.
"The study was carried out to understand how students enrolled in higher education institutes perceive AI and their knowledge regarding the applications of generative AI in their academics," said Jigyasha Deka, who, as a master's student, completed the study under the guidance of Dr Varsha Parikh from the Department of Extension and Communication of M S University's Faculty of Family and Community Sciences.
The research was conducted among 220 students from five departments of Faculty of Family and Community Sciences and the College of Community Science of Assam Agricultural University.
"The objective was to assess the generative AI usage pattern from the students of Gujarat and Assam and to assess the knowledge level regarding the application of generative AI in higher education among the students," Deka said.
The study revealed that around as 96.8% of students in both states were highly aware and familiar with usage of generative AI tools. Almost 94.5% of students use generative AI in academic work, with ChatGPT being most popular.
Tired of too many ads? go ad free now
Students primarily use generative AI for idea generation, homework, and information search (73.2%).
"Most students (63.6%) used generative AI for over a year, learning through self-exploration and peer networks. Surprisingly, 95% of students did not receive formal training on generative AI tools. These findings highlight the growing reliance on generative AI in higher education and the need for structured training programmes," she said.
The study showed the majority of students have a good understanding of generative AI. Categorised as "knowledgeable" — 64.1% — students fall in this category. However, 35.9% of students have a limited understanding of generative AI in higher education.
The study recommends that higher education institutes should develop clear policies and guidelines on generative AI use, provide comprehensive training for students and staff, and address concerns around academic integrity and bias.
"By taking these steps, institutions can effectively integrate generative AI into curricula while ensuring fair learning opportunities," the study states.
"AI literacy, training, and ethical guidelines can enhance technology integration while addressing students' needs and concerns to ensure successful integration of generative AI tools in academics," the study states.
INSET
Guj banks on tie-ups,
Assam govt on app
Vadodara: The study states that while the Gujarat govt has made efforts to improve artificial intelligence (AI) capabilities through strategic partnerships with prominent IT firms, while the Assam govt has established the Shiksha Setu App to promote communication in the educational sector. It highlights that the Gujarat govt has inked MoUs with IBM and Microsoft to build an AI cluster in Gujarat to promote innovation and collaboration in the financial technology sector by exploiting advanced AI technologies.
The app launched by the Assam govt, however, makes it easier to access instructional resources and manage attendance to increase school transparency and efficiency, which in turn is expected to improve student outcomes.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

‘AI hallucinates': Sam Altman warns users against putting blind trust in ChatGPT
‘AI hallucinates': Sam Altman warns users against putting blind trust in ChatGPT

Mint

time38 minutes ago

  • Mint

‘AI hallucinates': Sam Altman warns users against putting blind trust in ChatGPT

Ever since its first public rollout in late 2022, ChatGPT has become not just the most popular AI chatbot on the market but also a necessity in th lives of most users. However, OpenAI CEO Sam Altman warns against putting blind trust in ChatGPT given that the AI chatbot is prone to hallucinations (making stuff up). Speaking in the first ever episode of the OpenAI podcast, Altman said, 'People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much.' Talking about the limitations of ChatGPT, Altman added, 'It's not super reliable… we need to be honest about that,' Notably, AI chatbots are prone to hallucination i.e. making stuff up with confidence that isn't completely true. There are a number of reasons behind hallucination of LLMs (building blocks behind AI chatbots) like biased training data, lack of grounding in real-world knowledge, pressure to always respond and predictive text generation. The problem of hallucination in AI seems to be systematic and no major AI company claims at the moment that its chatbots are free from hallucination. Altman also reiterated his previous prediction during the podcast, stating that his kids will never be smarter than AI. However, the OpenAI CEO added, 'But they will grow up like vastly more capable than we grew up and able to do things that would just, we cannot imagine.' The OpenAI CEO was also asked on if ads will be coming to ChatGPT in the future, to which he replied, 'I'm not totally against it. I can point to areas where I like ads. I think ads on Instagram, kinda cool. I bought a bunch of stuff from them. But I think it'd be very hard to I mean, take a lot of care to get right.' Altman then went on to talk about the ways in which OpenAI could implement ads inside ChatGPT without totally disrupting the user experience. "The burden of proof there would have to be very high, and it would have to feel really useful to users and really clear that it was not messing with the LLM's output," he added.

MCP servers: Lure of sharing your data with AI, and a likely security nightmare
MCP servers: Lure of sharing your data with AI, and a likely security nightmare

Hindustan Times

time2 hours ago

  • Hindustan Times

MCP servers: Lure of sharing your data with AI, and a likely security nightmare

After generative AI, large language models, multi-modal intelligence, artificial general intelligence, and agentic AI, the artificial intelligence (AI) space is beginning to write another chapter. The phraseology we must wrap our heads around, and you'll increasingly hear about this, is MCP, or Model Context Protocol. It is supposed to solve an integration bottleneck, one that would allow AI systems to interact with external data sources and tools. But is this insulated against security risks, while handling personal data? (Clockwise from left) Canva's deep research connector in ChatGPT, MS illustrates workings of MCP servers & 11ai voice assistant. (Official images) It may have gone under the radar, but AI company Anthropic first mooted the idea of a singular connection language for AI assistants with other apps and systems users access, late last year — dubbed the 'USB-C for AI'. Claude Sonnet 3.5 is their first model, adept at building MCP implementations for connecting AI with datasets, as a user may want to. Indian fintech Zerodha launched an MCP integration with Anthropic's Claude. Among the things it can do is curate portfolio insights, plan trades, backtest investment strategies, and generate personal finance dashboards. For users who aren't proficient with the workings of the stock market, these insights may prove useful. 'MCPs are a new way for AI systems to interact with real-world services like trading accounts,' says Nithin Kamath, Founder and CEO of Zerodha, pointing out all the functionality is free to access. Globally, companies are rushing to build MCP integrations, and there's a core rationale for this sudden momentum. 'AI agents and assistants have become indispensable creative partners, yet current workflows require users to manually add context or references, creating complexity,' explains Anwar Haneef, GM and Head of Ecosystem at Canva. 11Labs, which has built the 11ai personal voice assistant, has bolted on MCP connections with platforms including Perplexity and Slack. Autonomous coding agent Cline too can combine MCP servers from Perplexity and others, to create research workflows. Amazon Web Services or AWS, in a technical document, explains MCP is an open standard that creates a universal language for AI systems to communicate with external data sources, tools, and services. Conceptually, MCP functions as a universal translator, enabling seamless dialogue between language models and the diverse systems, they say. Also Read: Apple Music at 10, India's 5G trajectory, Canva's AI tools, and Adobe's camera For users, this may open up a scenario where AI tools may be able to connect with different platforms, and thereby, a single window workflow approach, instead of manually copying data between applications or switching between multiple tools to complete tasks. Take for example Canva, which becomes the first company to launch its deep research connector with OpenAI's ChatGPT, and thereby give users access to designs and content created in Canva via their ChatGPT conversations. This will include Canva Docs and presentations as well. The advantage? Summarising reports or documents, asking AI to analyse data, and for a more contextual conversation. AI will be able to use these tools to create content depending on what a user asks. 'This is a major step in our vision to make the complex simple and build an all-in-one AI workflow that's secure and accessible to all,' adds Haneef. OpenAI announced MCP support earlier, says popular remote MCP servers include Cloudflare, HubSpot, Intercom, PayPal, Plaid, Shopify, Stripe, and Twilio, all encompassing various consumer and enterprise focused domains. Microsoft has made substantial investments in MCP infrastructure, integrating the protocol with Azure OpenAI Services to allow GPT models to interact with external services and fetch live data. The company has released multiple MCP servers. Anthropic, though an early mover, has had to change the approach to offering MCP to developers. The result, released a few days ago, are the new Desktop Extensions, to simplify MCP installations. 'We kept hearing the same feedback: installation was too complex. Users needed developer tools, had to manually edit configuration files, and often got stuck on dependency issues,' the company says, in a statement. Developers will need help with the integration. AWS has released their open-source AWS Serverless MCP Server, a tool that combines AI assistance with streamlined development, to help developers build modern applications. Unchartered territory? Risks, particularly with how a user's data is being shared between two distinct digital entities, are something tech companies must remain cognisant of. As Kailash Nadh, Zerodha's Chief Technology Officer explains, 'Strictly from a user perspective, it feels liberating to be able to access services outside of their walled gardens and bloated UIs riddled with dark patterns. It moves a considerable amount of control from service providers to users, but at the same time, it concentrates decision-making and mediation in the hands of AI blackboxes.' He is yet to find an answer to what happens in case of errors and failures with real-world implications, tracing accountability and the inevitable regulatory questions. 'Whether the long-term implications of MCP's viral, cross-cutting spread will be net positive or not, is unclear to me,' he adds. AI security expert Simon Wilson is worried about users going overboard in 'mixing and matching MCP Servers'. Particularly concerning is the attack method, called prompt injection. 'Any time you combine access to private data, exposure to untrusted content and the ability to externally communicate an attacker can trick the system into stealing your data,' he explains, in a Mastodon post. He points to the core of this approach, labelling it a 'lethal trifecta' — access to private data, exposure to untrusted content and an ability to communicate externally. 'Be careful with which custom MCP servers you add to your ChatGPT workspace. Currently, we only support deep research with custom MCP servers in ChatGPT, meaning the only tools intended to be available within the remote MCP servers are search and document retrieval. However, risks still apply even with this narrow scope,' OpenAI warns developers, in a technical note. Microsoft too has noted specific risks around misconfigured authorisation logic in MCP servers leading to sensitive data exposure and authentication tokens being stolen, which can then be used to impersonate and access resources inappropriately.

Explained: AI & copyright law
Explained: AI & copyright law

Indian Express

time4 hours ago

  • Indian Express

Explained: AI & copyright law

In two key copyright cases last week, US courts ruled in favour of tech companies developing artificial intelligence (AI) models. While the two judgments arrived at their conclusions differently, they are the first to address a central question around generative AI models: are these built on stolen creative work? At a very basic level, AI models such as ChatGPT and Gemini identify patterns from massive amounts of data. Their ability to generate passages, scenes, videos, and songs in response to prompts depends on the quality of the data they have been trained on. This training data has thus far come from a wide range of sources, from books and articles to images and sounds, and other material available on the Internet. There are at the moment at least 21 ongoing lawsuits in the US, filed by writers, music labels, and news agencies, among others, against tech companies for training AI models on copyrighted work. This, the petitioners have argued, amounts to 'theft'. In their defence, tech companies say they are using the data to create 'transformative' AI models, which falls within the ambit of 'fair use' — a concept in law that permits use of copyrighted material in limited capacities for larger public interests (for instance, quoting a paragraph from a book for a review). Here's what happened in the two cases, and why the judgments matter. In August 2024, journalist-writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a class action complaint — a case that represents a large group that could be/were similarly harmed — against Anthropic, the company behind the Claude family of Large Language Models (LLMs). The petitioners argued that Anthropic downloaded pirated versions of their works, made copies of them, and 'fed these pirated copies into its models'. They said that Anthropic has 'not compensated the authors', and 'compromised their ability to make a living as the LLMs allow anyone to generate — automatically and freely (or very cheaply) — texts that writers would otherwise be paid to create and sell'. Anthropic downloaded and used Books3 — an online shadow library of pirated books with about seven million copies — to train its models. That said, it also spent millions of dollars to purchase millions of printed books and scanned them digitally to create a general 'research library' or 'generalised data area'. Judge William Alsup of the District Court in the Northern District of California ruled on June 23 that Anthropic's use of copyrighted data was 'fair use', centering his arguments around the 'transformative' potential of AI. Alsup wrote: 'Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different. If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use.' Thirteen published authors, including comedian Sarah Silverman and Ta-Nehisi Coates of Black Panther fame, filed a class action suit against Meta, arguing they were 'entitled to statutory damages, actual damages, restitution of profits, and other remedies provided by law'. The thrust of their reasoning was similar to what the petitioners in the Anthropic case had argued: Meta's Llama LLMs 'copied' massive amounts of text, with its responses only being derived from the training dataset comprising the authors' work. Meta too trained its models on data from Books3, as well as on two other shadow libraries — Anna's Archive and Libgen. However, Meta argued in court that it had 'post-trained' its models to prevent them from 'memorising' and 'outputting certain text from their training data, including copyrighted material'. Calling these efforts 'mitigations', Meta said it 'could get no model to generate more than 50 words and punctuation marks…' from the books of the authors that had sued it. In a ruling given on June 25, Judge Vince Chhabria of the Northern District of California noted that the plaintiffs were unable to prove that Llama's works diluted their markets. Explaining market dilution in this context, he cited the example of biographies. If an LLM were to use copyrighted biographies to train itself, it could, in theory, generate an endless number of biographies which would severely harm the market for biographies. But this does not seem to be the case thus far. However, while Chabbria agreed with Alsup that AI is groundbreaking technology, he also said that tech companies who have minted billions of dollars because of the AI boom should figure out a way to compensate copyright holders. Significance of rulings These judgments are a win for Anthropic and Meta. That said, both companies are not entirely scot-free: they still face questions regarding the legality of downloading content from pirated databases. Anthropic also faces another suit from music publishers who say Claude was trained on their copyrighted lyrics. And there are many more such cases in the pipeline. Twelve separate copyright lawsuits filed by authors, newspapers, and other publishers — including one high-profile lawsuit filed by The New York Times — against OpenAI and Microsoft are now clubbed into a single case. OpenAI is also being separately sued by publishing giant Ziff Davis. A group of visual artists are suing image generating tools Stability AI, Runway AI, Deviant Art, and Midjourney for training their tools on their work. Stability AI is also being sued by Getty Images for violating its copyright by taking more than 12 million of its photographs. In 2024, news agency ANI filed a case against OpenAI for unlawfully using Indian copyrighted material to train its AI models. The Digital News Publishers Association (DNPA), along with some of its members, which include The Indian Express, Hindustan Times, and NDTV, later joined the proceedings. Going forward, this is likely to be a major issue in India too. Thus, while significant, the judgments last week do not settle questions surrounding AI and copyright — far from it. And as AI models keep getting better, and spit out more and more content, there is also the larger question at hand: where does AI leave creators, their livelihoods, and more importantly, creativity itself?

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store