
Meet UPSC topper who used AI for preparation, cracked UPSC CSE twice, became IAS officer with AIR..., name is...
UPSC Success Story: While most aspirants throng coaching centers to cover the gigantic syllabus of the UPSC Civil Services Examination (CSE), one of the toughest recruitment exams in India, a few adopt novel approaches to crack it. One such individual is Vibhor Bhardwaj, a young IAS officer from Uttar Pradesh, who took a completely different route, using Artificial Intelligence (AI) tools to deepen his subject knowledge and prepare for the final interview.

Who is IAS Vibhor Bhardwaj?
Born in Uttarawali, a small village in Uttar Pradesh's Bulandshahr district, Vibhor Bhardwaj was a bright student from his early school days. He earned his MSc degree in Physics from Hansraj College, Delhi University, and soon began preparing for the UPSC CSE to realize his lifelong dream of joining the civil services.
Vibhor Bhardwaj picked Physics as his optional subject for the UPSC exam but, instead of joining traditional coaching centers, relied on online classes and self-written notes. His efficient strategy enabled him to prepare quickly for the UPSC CSE prelims and cover the entire Mains syllabus in a span of just seven months.
In an interview, Bhardwaj revealed how he carefully studied previous UPSC CSE question papers and used them as a guide to shape his preparation strategy. He also focused on daily news and current affairs, in addition to regular mock tests, which further sharpened his knowledge.

How did Vibhor Bhardwaj use AI to crack the UPSC?
Interestingly, a key part of Vibhor Bhardwaj's UPSC preparation was the use of AI tools such as Google's Gemini for mock interviews. He revealed that these AI chatbots acted like teachers, helping him identify his strengths and weaknesses.
The AI mock interviews presented him with a wide range of questions, which sharpened and strengthened his preparation for the actual interview.
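The article does not describe Bhardwaj's exact workflow, but a mock-interview session like this can be scripted against Gemini's public API. Below is a minimal, hypothetical Python sketch using Google's google-generativeai SDK; the model name, system prompt and loop structure are illustrative assumptions, not his actual setup.

```python
# Hypothetical sketch of an AI mock-interview loop, in the spirit of the
# approach described above. Model name, prompts and structure are
# illustrative assumptions; the article does not describe the real setup.
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")  # assumes a valid Gemini API key

# Ask the model to play the role of a UPSC interview board member.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=(
        "You are a UPSC Civil Services interview board member. Ask me one "
        "probing question at a time about my background, optional subject "
        "(Physics) and current affairs. After each answer, give brief "
        "feedback on its strengths and weaknesses, then ask the next question."
    ),
)

chat = model.start_chat(history=[])
print(chat.send_message("I am ready. Please begin the mock interview.").text)

# Simple console loop: type an answer, get feedback plus the next question.
while True:
    answer = input("\nYour answer (or 'quit' to stop): ")
    if answer.strip().lower() == "quit":
        break
    print(chat.send_message(answer).text)
```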
IAS Vibhor Bhardwaj's AIR

Ultimately, Vibhor Bhardwaj's hard work and dedication paid off when he cracked the UPSC CSE in 2022 with an All India Rank (AIR) of 743. However, this rank could not secure him an IAS post, so he tried again in 2024, this time jumping 724 ranks to secure AIR 19 and achieve his dream of becoming an IAS officer.
Related Articles


Time of India
7 hours ago
TechKnow: Perfect prompt
If you're using ChatGPT but getting mediocre results, don't blame the chatbot. Instead, try sharpening up your prompts.

AI chatbots such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude have become hugely popular and embedded into daily life for many users. They're powerful tools that can help us with many different tasks. What you shouldn't overlook, however, is that a chatbot's output depends on what you tell it to do, and how. There's a lot you can do to improve the prompt - also known as the request or query - that you type in. Here are some tips for general users on how to get higher quality chatbot replies, based on advice from the AI model makers.

ChatGPT can't read your mind. You need to give it clear and explicit instructions on what you need it to do. Unlike a standard Google search, you can't just ask for an answer based on some keywords. And you'll need to do more than just tell it to, say, 'design a logo', because you'll end up with a generic design. Flesh it out with details on the company that the logo is for, the industry it will be used in and the design style you're going for. 'Ensure your prompts are clear, specific, and provide enough context for the model to understand what you are asking,' ChatGPT maker OpenAI advises on its help page. 'Avoid ambiguity and be as precise as possible to get accurate and relevant responses.'

Think of using a chatbot like holding a conversation with a friend. You probably wouldn't end your chat after the first answer. Ask follow-up questions or refine your original prompt. OpenAI's advice: 'Adjust the wording, add more context, or simplify the request as needed to improve the results.' You might have to have an extended back-and-forth that elicits better output. Google advises that you'll need to try a 'few different approaches' if you don't get what you're looking for the first time. 'Fine-tune your prompts if the results don't meet your expectations or if you believe there's room for improvement,' Google recommends in its prompting guide for Gemini. 'Use follow-up prompts and an iterative process of review and refinement to yield better results.'

When making your request, you can also ask an AI large language model to respond in a specific voice or style. 'Words like formal, informal, friendly, professional, humorous, or serious can help guide the model,' OpenAI says. You can also tell the chatbot the type of person the response is aimed at. These parameters will help determine the chatbot's overall approach to its answer, as well as the tone, vocabulary and level of detail. For example, you could ask ChatGPT to describe quantum physics in the style of a distinguished professor talking to a class of graduate students. Or you could ask it to explain the same topic in the voice of a teacher talking to a group of schoolchildren. That said, there's plenty of debate among AI experts about these methods. On one hand, they can make answers more precise and less generic. On the other, an output that adopts an overly empathetic or authoritative tone raises concerns about the text sounding too human.

Tell the chatbot all the background behind the reason for your request. Don't just ask: 'Help me plan a weeklong trip to London.' ChatGPT will respond with a generic list of London's greatest hits: historic sites on one day, museums and famous parks on another, trendy neighborhoods and optional excursions to Windsor Castle. It's nothing you couldn't get from a guidebook or travel website, just a little quicker. But if, say, you're a theatre-loving family, try this: 'Help me plan a weeklong trip to London in July, for a family of four. We don't want too many historic sites, but want to see a lot of West End theatre shows. We don't drink alcohol so we can skip pubs. Can you recommend mid-range budget hotels where we can stay and cheap places to eat for dinner?' This prompt returns a more tailored and detailed answer: a list of four possible hotels within walking distance of the theater district, a seven-day itinerary with cheap or low-cost ideas for things to do during the day, suggested shows each evening, and places for an affordable family dinner.

You can also tell any of the chatbots just how extensive you want the answer to be. Sometimes, less is more. Try nudging the model to provide clear and succinct responses by imposing a limit. For example, tell the chatbot to reply with only 300 words, or to come up with five bullet points. Want to know all that there is to know about quantum physics? ChatGPT will provide a high-level 'grand tour' of the topic that includes terms like wavefunctions. But ask for a 150-word explanation and you'll get an easily digestible summary about how it's the science of the tiniest particles that also underpins a lot of modern technology like lasers and smartphones.
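The advice above is about wording, but the same principles carry over when calling a chatbot programmatically. Below is a small, hypothetical Python sketch using OpenAI's Python SDK that sends the same question twice, once vaguely and once with the audience, tone and length constraints the tips recommend; the model name and prompts are illustrative, not from the article.

```python
# Rough illustration of the prompting tips: the same request sent as a
# vague prompt and as a specific one with audience, tone and length limits.
# Uses OpenAI's Python SDK (pip install openai); the model name is a
# placeholder, not something the article specifies.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

vague = "Explain quantum physics."
specific = (
    "Explain quantum physics in the voice of a friendly teacher talking to "
    "schoolchildren. Keep it under 150 words and end with one everyday "
    "example, like lasers or smartphones."
)

for prompt in (vague, specific):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
    print("-" * 60)
```

Run side by side, the first prompt tends to produce a generic overview, while the second returns something shaped by the stated audience and word limit, which is exactly the effect the tips describe.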


Time of India
10 hours ago
Google Gemini set to debut on Galaxy Watch 8 Series
Samsung is all set to host its next Galaxy Unpacked event this month, and has confirmed that it will launch its next-generation foldable smartphones and smartwatches at the event on July 9. Now a new leak has surfaced online suggesting that Google's Gemini AI assistant may arrive with Wear OS on the Galaxy Watch 8 series. As reported by 9to5Google, Google's advanced AI assistant appears poised to make its Wear OS debut on the upcoming Samsung Galaxy Watch 8 series. The leaked user interface for Gemini on Wear OS reportedly bears a striking resemblance to the existing Google Assistant for Wear OS, featuring an "Ask Google Gemini" prompt. However, Google has previously indicated that Gemini for Wear OS would offer significant improvements over its predecessor, including more sophisticated natural language processing and support for extensions and various applications within the Gemini ecosystem.

Samsung Galaxy Watch 8, Galaxy Watch 8 Classic: Likely specifications
* Processor: Both models are expected to be powered by a new 3nm Exynos W1000 5-core chipset, promising significant performance upgrades.
* Memory and storage: 2 GB of RAM and 32 GB of internal storage.
* Operating system: One UI 8.0 Watch.
* Sensors: Accelerometer, altimeter, gyroscope, light sensor, geomagnetic sensor, PPG (photoplethysmographic) sensor, ECG (cardiac electrical) sensor and BIA (bioelectrical impedance analysis) sensor.

Samsung Galaxy Watch 8
This model is rumoured to come in two sizes:
* 40mm dial: 1.34-inch sAMOLED display with 438x438 pixel resolution; 40.4 x 42.7 x 8.6 mm; 30g; 325 mAh battery.
* 44mm dial: 1.47-inch sAMOLED display (480x480 pixels); 43.7 x 46 x 8.6 mm; 34g; 435 mAh battery.
* Build: Both Watch 8 variants will feature an Aluminum Armor casing with Sapphire Glass for enhanced durability.

Samsung Galaxy Watch 8 Classic
The premium Classic model is expected in a single, larger size:
* 46mm dial: 1.34-inch sAMOLED display (438x438 pixels); 46.7 x 46 x 10.6 mm; a more substantial 63.5g; 445 mAh battery.
* Build: A stainless steel body complemented by Sapphire Glass.


Time of India
12 hours ago
AI might now be as good as humans at detecting emotion, political leaning, sarcasm in online conversations
When we write something to another person, over email or perhaps on social media, we may not state things directly; our words may instead convey a latent meaning - an underlying subtext. We also often hope that this meaning will come through to the reader. But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?

Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are, or whether they're being sarcastic, can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples; we can imagine benefits in other areas of life, like social science research, policy-making and business. Given how important these tasks are - and how quickly conversational AI is improving - it's essential to explore what these technologies can (and can't) do in this regard.

Work on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study, which focused on differences in sarcasm detection between different large language models (LLMs) - the technology behind AI chatbots such as ChatGPT - showed that some are better than others. Finally, a study showed that LLMs can guess the emotional "valence" of words - the inherent positive or negative "feeling" associated with them.

Our new study, published in Scientific Reports, tested whether conversational AI, including GPT-4 - a relatively recent version of ChatGPT - can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm - thus encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B. We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm detection. The study involved 33 human subjects and assessed 100 curated items of text.

For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science or public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity, and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell - although someone still had to confirm whether the AI was correct in its assessment, because AI tends to downplay emotions. Sarcasm remained a stumbling block both for humans and machines; the study found no clear winner there, so using human raters doesn't help much with sarcasm detection.

Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research - especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit: tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.

There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast - and may soon be valuable teammates rather than mere tools. Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.

Our study's findings do raise follow-up questions. If a user asks the same question of an AI in multiple ways - perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided - will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
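To make the idea concrete, here is a minimal, hypothetical sketch of how a researcher might elicit such latent-content ratings from an LLM via OpenAI's Python SDK. The prompt wording, rating scales and model name are assumptions for illustration; the study's actual protocol is not reproduced here.

```python
# Hedged sketch: one way to ask an LLM to rate latent features of a text
# (sentiment, political leaning, sarcasm), loosely in the spirit of the
# study described above. The prompt, scales and model are assumptions;
# the paper's actual protocol is not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def rate_latent_features(text: str) -> str:
    """Ask the model for three latent-content ratings of `text`."""
    prompt = (
        "Rate the following text on three dimensions:\n"
        "1. Sentiment, from -3 (very negative) to +3 (very positive)\n"
        "2. Political leaning: left, centre, right, or none\n"
        "3. Sarcasm: yes or no\n\n"
        f"Text: {text!r}\n\n"
        "Reply with the three ratings only."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(rate_latent_features("Oh great, another Monday. Exactly what I needed."))
```

In a real study, each item would be rated repeatedly and across reworded prompts, since the consistency of such ratings is precisely the open question the authors raise.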