
Embracing AI responsibly: PRSI Dehradun marks National Public Relations Day with workshop
The Public Relations Society of India (PRSI) Dehradun Chapter marked National Public Relations Day with a workshop on the responsible use of artificial intelligence in public relations. Director General of Information and Mussoorie Dehradun Development Authority (MDDA) Vice Chairman Banshidhar Tiwari was the chief guest. Accompanying him were Joint Director of Information Dr Nitin Upadhyay, Badrikedar Temple Committee CEO Vijay Thapliyal, and PRSI Dehradun Chapter president Ravi Vijarania.
In his keynote address, Tiwari underscored the importance of preserving human values amid a surge in technological innovation. 'While artificial intelligence can enhance efficiency, we must remain vigilant about how we use the time and resources it saves,' he said. He stressed the need for authenticity in communication and warned against the spread of misinformation, encouraging a balance between technological tools and human connection.
Echoing this sentiment, Dr Nitin Upadhyay spoke of the dual nature of emerging technologies. 'AI offers immense potential, but we must establish clear boundaries to ensure it complements rather than replaces human intelligence,' he said, advocating for public awareness and education regarding AI's capabilities and risks, especially within the field of public relations.
Adding to the discourse, Vijay Thapliyal said that while AI may simulate many human functions, it cannot replicate human emotion. 'AI is both a blessing and a challenge – it is up to us how we choose to engage with it,' he said.
AI expert Akash Sharma offered a practical perspective. He introduced participants to a suite of AI tools used in public relations, including ChatGPT, Canva AI, Mentimeter, Meltwater, SocialBee, and Mailchimp. 'AI is not here to replace us,' he said, 'but to make our work more efficient and impactful. In a people-centric profession like PR, AI should serve as an enhancer – not a substitute.'
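For readers curious how such tools slot into day-to-day PR work, here is a minimal sketch of one task they can automate: asking a chat-based model to draft a short press note from supplied facts and quotes. The OpenAI Python SDK, the model name and the prompt are assumptions chosen for illustration; any of the services named above could play a similar role, and the speakers did not endorse this particular setup.

```python
# Minimal, illustrative sketch: drafting a press note with a chat-based LLM.
# Assumes the OpenAI Python SDK and an API key in the environment; the model
# name and prompt are placeholders, not a recommendation from the workshop.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_press_note(event: str, quotes: list[str]) -> str:
    """Ask the model for a short, factual press note; a human still edits it."""
    prompt = (
        "Draft a 150-word press note about the following event. "
        "Use only the facts and quotes provided; do not invent details.\n"
        f"Event: {event}\nQuotes: {quotes}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    note = draft_press_note(
        "PRSI Dehradun Chapter workshop on responsible AI use in public relations",
        ["AI should serve as an enhancer, not a substitute."],
    )
    print(note)  # the draft is a starting point, not the finished release
```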
Among the attendees were PRSI Dehradun Chapter Secretary Anil Sati, Treasurer Suresh Chandra Bhatt, and members Sudhakar Bhatt, Vaibhav Goyal, Rakesh Dobhal, Ajay Dabral, Deepak Sharma, Prashant Rawat, Jyoti Negi, Shivangi, Manmohan Bhatt, Sanjay Singh, and Pratap Singh Bisht.
Related Articles


Time of India
Starlink to have 20 lakh users in India at most, says government
NEW DELHI: Elon Musk-led satellite communication services provider Starlink can have only 20 lakh connections in India because of spectrum capacity constraints, minister of state for telecom Pemmasani Chandra Sekhar said on Monday, playing down any immediate threat to local telecom operators. He said the company is likely to opt for a monthly consumer broadband plan that may cost around Rs 3,000, much higher than the plans offered by companies such as Jio, Airtel, and BSNL, but still aggressive by satcom standards.

Speaking on the sidelines of a review meeting of BSNL here, he said, 'Starlink can have only 20 lakh customers in India and offer up to 200 Mbps speed. That won't affect telecom services.' However, given that the company's global customer base is around 50 lakh users, the Indian figure would still be highly significant for the company if it reaches that peak. Satcom services are expected to target rural and remote areas, where BSNL is known to have a significant presence.

A government official said the cap on Starlink connections stems from the company's existing network capacity. Starlink has a low-earth orbit constellation of 4,408 satellites, which orbit the earth at an altitude of about 540-570 km, and it is expected to offer a total throughput of 600 Gbps over India. The company's authorisation is valid for five years or until the end of the constellation's life. Having received a satcom licence from DoT and satellite authorisation from IN-SPACe, Starlink will now begin building ground infrastructure. It plans to import equipment, for which it will approach DoT for permissions.
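A rough back-of-the-envelope calculation using only the figures quoted above (600 Gbps of throughput over India, up to 200 Mbps per user, 20 lakh connections) shows why capacity, rather than demand, caps the subscriber count; the per-user averages below are illustrative arithmetic, not official projections.

```python
# Back-of-the-envelope capacity arithmetic using the figures quoted in the
# article; the interpretation is illustrative, not an official calculation.
total_throughput_mbps = 600 * 1000      # 600 Gbps of throughput over India
peak_speed_mbps = 200                   # advertised per-user peak speed
subscribers = 2_000_000                 # 20 lakh connections

# If every subscriber pulled the full 200 Mbps at once, the network could
# serve only 3,000 of them simultaneously.
simultaneous_at_peak = total_throughput_mbps / peak_speed_mbps

# Spread across all 20 lakh subscribers, average capacity is about 0.3 Mbps
# each, so the service depends on users being active at different times.
average_per_subscriber = total_throughput_mbps / subscribers

print(f"Simultaneous full-rate users: {simultaneous_at_peak:,.0f}")
print(f"Average capacity per subscriber: {average_per_subscriber:.2f} Mbps")
```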


Mint
AI is wrecking an already fragile job market for college graduates
What do you hire a 22-year-old college graduate for these days? For a growing number of bosses, the answer is not much: AI can do the work instead.

At Chicago recruiting firm Hirewell, marketing agency clients have all but stopped requesting entry-level staff, young grads once in high demand whose work is now a 'home run' for AI, the firm's chief growth officer said. Dating app Grindr is hiring more seasoned engineers, forgoing some junior coders straight out of school, and CEO George Arison said companies are 'going to need less and less people at the bottom.' Bill Balderaz, CEO of Columbus-based consulting firm Futurety, said he decided not to hire a summer intern this year, opting to run social-media copy through ChatGPT instead. Balderaz has urged his own kids to focus on jobs that require people skills and can't easily be automated. One is becoming a police officer. Having a good job 'guaranteed' after college, he said, 'I don't think that's an absolute truth today any more.'

There's long been an unwritten covenant between companies and new graduates: entry-level employees, young and hungry, are willing to work hard for lower pay. Employers, in turn, provide training and experience to give young professionals a foothold in the job market, seeding the workforce of tomorrow. A yearslong white-collar hiring slump and recession worries have weakened that contract. Artificial intelligence now threatens to break it completely.

That is ominous for college graduates looking for starter jobs, but it is also potentially a fundamental realignment in how the workforce is structured. As companies hire and train fewer young people, they may also be shrinking the pool of workers that will be ready to take on more responsibility in five or 10 years. Companies say they are already rethinking how to develop the next generation of talent.

AI is accelerating trends that were already under way. With each new class after 2020, an ever-smaller share of graduates is landing jobs that require a bachelor's degree, according to a Burning Glass Institute analysis of labor data. That's happening across majors, from visual arts to engineering and mathematics. And unemployment among recent college graduates is now rising faster than for young adults with just high-school or associate degrees. Meanwhile, the sectors where graduate hiring has slowed the most, like information, finance, insurance and technical services, are still growing, a sign employers are becoming more efficient and see no immediate downside to hiring fewer inexperienced workers, said Matt Sigelman, Burning Glass's president.

'This is a more tectonic shift in the way employers are hiring,' Sigelman said. 'Employers are significantly more likely to be letting go of their workers at the entry level, and in many cases are stepping up their hiring of more experienced professionals.'

After dancing around the issue in the 2½ years since ChatGPT's release upended the way almost all companies plan for their futures, CEOs are now talking openly about AI's immense capabilities likely leading to deep job cuts. Top executives at industry giants including Amazon and JPMorgan have said in recent weeks that they expect their workforces to shrink considerably. Ford CEO Jim Farley said he expects AI will replace half of the white-collar workforce in the U.S. For new graduates, this means not only are they competing for fewer slots but they are also increasingly up against junior workers who have been recently laid off.
While many bosses say they remain committed to entry-level workers and understand their value, the data is increasingly stark: the overall national unemployment rate is about 4%, but for new college graduates it was 6.6% over the 12 months ending in May. At large tech companies, which power much of the U.S. economy, the trend is perhaps more extreme. Venture-capital firm SignalFire found that among the 15 largest tech companies by market capitalization, the share of entry-level hires relative to total new hires has fallen by 50% since 2019. Recent graduates accounted for just 7% of new hires in 2024, down from 11% in 2022. A May report by the firm pointed to shrinking teams, fewer programs for new graduates and the growing influence of AI.

Jadin Tate studied informatics at the University at Albany, hoping to land a job focused on improving the user experience of apps or websites. The week before graduation, his mentor leveled with him: that field is being taken over by AI. He warned it may not exist in five years. Tate has attended four conventions this year, networking with companies and asking if they are hiring. He has also applied to dozens of jobs, without success. Several of his college friends are working retail and food-service jobs as they apply for white-collar roles or before their start dates. 'It has been intimidating,' Tate said of his job search.

Indeed, recent graduates and students are fighting over a smaller number of positions aimed at entry-level workers. There were 15% fewer job postings on the entry-level job-search platform Handshake this school year than last, while the number of applications per job rose 30%, according to the platform. Internship postings and applications saw similar trend lines between 2023 and 2025.

The shift to AI presents huge risks to companies on skill development, even as they enjoy increased efficiency and productivity from fewer workers, said Chris Ernst, chief learning officer at the HR and finance software company Workday. Ernst said his research shows that workers mostly learn through experience, with the remainder coming from relationships and formal development. When AI can produce in seconds a report that previously would have taken a young employee days or weeks, teaching that person critical skills along the way, companies will have to learn to train that person differently. 'Genuine learning, growth, adaptation: it comes from doing the hard work,' he said. 'It's those moments of challenge, of hardship; that's the crucible where people grow, they change, they learn most profoundly.' Among other things, Ernst said employers must be intentional about connecting young workers with colleagues and making time to mentor them.

At the pipeline operator Williams, based in Tulsa, Okla., the company realized that, thanks to AI, young professionals were doing less of the drudgework, such as digging into corporate data, that historically taught them the core of the business. The company this year started a two-day onboarding program in which veteran executives teach new hires the business fundamentals. Chief Human Resources Officer Debbie Pickle said the increased training will help new hires develop without loading them down with gruntwork. 'These are really bright, top talent people,' she said. 'We shouldn't put a cap on how we think they can add value for the company.'
Still, Pickle said, the increased efficiency will allow the company to expand the business while keeping head count flat in the future.

Some of the entry-level jobs most at risk are the most lucrative for recent graduates, including on Wall Street and in big law firms where six-figure starting salaries are the norm. But those jobs have also been famously menial for the first few years, until AI came along. The investment firm Carlyle now pitches to prospective hires that they won't be doing grunt work. Junior hires go through AI training and a program called 'AI University' in which employees share best practices and participate in pilot programs, said Lúcia Soares, the firm's chief information officer. In the past, she said, junior hires evaluating a deal would find articles on Google, request documents from companies, review that information manually, highlight details and copy and paste information from one document to another. Now, AI tools can do almost all of that.

'That analyst still has to go in and make sure the analysis is accurate, question it, challenge it,' she said. 'The nature of the brain work that needs to go into it is very much the same. It's just the speed at which these analysts can move.' She said Carlyle has maintained the same volume of entry-level hiring, but 90% of its staff has adopted generative AI tools that automate some work. Carlyle's reliance on young staff to check AI's output highlights what many users know to be true: AI still struggles in some cases to do the work of humans effectively. Still, many executives expect that gap to close quickly.

At the New York venture-capital firm Primary Venture Partners, Rebecca Price said she's encouraging CEOs of the firm's 100 portfolio companies to think hard about every hire and whether the role could be automated. She said it's not that there are no entry-level jobs, but that there's a gap between the skills companies expect of their junior hires in the age of AI and what most new graduates are equipped with out of school. An engineer in a first job used to need basic coding abilities; now that same engineer needs to be able to detect vulnerabilities and have the judgment to determine what can be trusted from AI models. New grads must also learn faster and think critically, she said, skills that many of the newest computer-science grads don't have yet. 'We're in this messy transition,' said Price, a partner at the firm. 'The bar is higher and the system hasn't caught up.'

Students are seeing the transition in real time. Arjun Dabir, a 20-year-old applied math major at the University of California, Irvine, said that when he applied for internships last year, companies asked for knowledge of coding languages. Now they want candidates who are familiar with how AI 'agents' can automate certain tasks on behalf of humans, or 'agentic workflows' in the new vernacular. 'What is an intern going to do?' Dabir said as drones buzzed overhead at an artificial intelligence convention in June in Washington, DC. The work typically done by interns, 'that task is no longer necessary. You don't need to hire someone to do it.'

Venture capitalist Allison Baum Gates said young professionals will need to be more entrepreneurial and gain experience on their own, without the standard track of starting as an analyst or a paralegal and working their way up.
Her firm, SemperVirens, invests in healthcare startups, workforce technology companies and fintech firms, some of which are replacing entry-level jobs. 'Maybe I'm wrong and this leads to a wealth of new jobs and opportunities, and that would be a great situation,' she said. 'But it would be far worse to assume that there's no adverse impact and then be caught without a solution.'

Rosalia Burr, 25, is trying to avoid such an outcome. She graduated in 2022 and quickly joined Liberty Mutual Insurance, where she had interned twice during college at Arizona State University. She was laid off from her payroll job in December. Running has soothed her anxiety, but this spring she tore her hip flexor and had to rest to heal. Job rejections, as she was stuck inside, hit extra hard. 'I felt that I was failing.'

Her goal now is to find a client-facing job. 'If you're in a business back-end role, you're more of a liability of getting laid off, or your job being automated,' she said. 'If you're client facing, that's something people can't really replicate' with AI.


Hans India
AI model trained to respond to online political posts improves quality of discourse, study finds
Researchers who trained a large language model to respond to online political posts by people in the US and UK found that the quality of discourse improved. Powered by artificial intelligence (AI), a large language model (LLM) is trained on vast amounts of text data and can therefore respond to human requests in natural language. Polite, evidence-based counterarguments by the AI system -- trained prior to the experiments -- were found to nearly double the chances of a high-quality online conversation and 'substantially increase (one's) openness to alternative viewpoints', according to findings published in the journal Science Advances. Being open to other perspectives did not, however, translate into a change in one's political ideology, the researchers found.

Large language models could provide 'light-touch suggestions', such as alerting a social media user to the disrespectful tone of their post, said study author Gregory Eady, an associate professor of political science and data science at the University of Copenhagen. 'To promote this concretely, it is easy to imagine large language models operating in the background to alert us to when we slip into bad practices in online discussions, or to use these AI systems as part of school curricula to teach young people best practices when discussing contentious topics,' Eady said. Hansika Kapoor, research author at the department of psychology, Monk Prayogshala in Mumbai, an independent not-for-profit academic research institute, said, '(The study) provides a proof-of-concept for using LLMs in this manner, with well-specified prompts, that can generate mutually exclusive stimuli in an experiment that compares two or more groups.'

Nearly 3,000 participants -- who identified as Republicans or Democrats in the US and Conservative or Labour supporters in the UK -- were asked to write a text describing and justifying their stance on a political issue important to them, as they would for a social media post. This was countered by ChatGPT -- a 'fictitious social media user' from the participants' point of view -- which tailored its argument 'on the fly' to the text's position and reasoning. The participants then responded as if replying to a social media comment. 'An evidence-based counterargument (relative to an emotion-based response) increases the probability of eliciting a high-quality response by six percentage points, indicating willingness to compromise by five percentage points, and being respectful by nine percentage points,' the authors wrote in the study. Eady said, 'Essentially, what you give in a political discussion is what you get: that if you show your willingness to compromise, others will do the same; that when you engage in reason-based arguments, others will do the same; etc.'

AI-powered models have been critiqued and scrutinised for varied reasons, including inherent bias -- political, and even racial at times -- and for being a 'black box', whereby the internal processes used to arrive at a result cannot be traced. Kapoor, who is not involved with the study, said that while the approach appears promising, complete reliance on AI systems for regulating online discourse may not be advisable yet; the study itself relied on humans to rate responses as well, she said. Additionally, context, culture and timing would need to be considered for such regulation, she added. Eady, too, is apprehensive about 'using LLMs to regulate online political discussions in more heavy-handed ways.'
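The article does not reproduce the researchers' code, but the mechanic it describes -- an LLM that reads a participant's post and replies with a polite, evidence-based counterargument tailored to its reasoning -- can be sketched roughly as follows. The SDK, model name and prompt wording below are illustrative assumptions, not the study's actual setup.

```python
# Rough sketch of the mechanic described above: an LLM reads a political post
# and returns a polite, evidence-based counterargument. The SDK, model name
# and prompt are illustrative assumptions, not the researchers' implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def evidence_based_counterargument(post: str) -> str:
    """Generate a respectful counterargument tailored to the post's reasoning."""
    prompt = (
        "You are replying to the social media post below. Take the opposite "
        "position, but respond politely, rely on evidence rather than emotion, "
        "acknowledge any reasonable points, and keep it under 120 words.\n\n"
        f"Post: {post}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample_post = "Stricter rules on social media platforms only silence ordinary users."
    print(evidence_based_counterargument(sample_post))
```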
Further, the study authors acknowledged that because the US and UK are effectively two-party systems, addressing the 'partisan' nature of texts and responses was straightforward. 'The ability for LLMs to moderate discussion might also vary substantially across cultures and languages, such as in India. Personally, therefore, I am in favour of providing tools and information that enable people to engage in better conversations, but nevertheless, for all its (LLMs') flaws, allowing nearly as open a political forum as possible,' Eady added. Kapoor said, 'In the Indian context, this strategy may require some trial-and-error, particularly because of the numerous political affiliations in the nation. Therefore, there may be multiple variables and different issues (including food politics) that will need to be contextualised for study here.'

Another study, recently published in the journal Humanities and Social Sciences Communications, found that dark personality traits -- such as psychopathy and narcissism -- a fear of missing out (FoMO) and cognitive ability can shape online political engagement. Findings from researchers at Singapore's Nanyang Technological University suggest that 'those with both high psychopathy (manipulative, self-serving behaviour) and low cognitive ability are the most actively involved in online political engagement.' Data from the US and seven Asian countries, including China, Indonesia and Malaysia, were analysed. Describing the study as 'interesting', Kapoor pointed out that much more work needs to be done in India to understand the factors that drive online political participation, ranging from personality to attitudes, beliefs and aspects such as voting behaviour. Her team, which has developed a scale to measure political ideology in India (published in a preprint paper), found that dark personality traits were associated with a disregard for norms and hierarchies.