Latest news with #MCQs


Time of India
17-07-2025
- General
- Time of India
New format unique & contemporary: 'S'-shaped seating pattern; OMR-based exam in Bengal Higher Secondary 3rd semester to curb cheating
KOLKATA: An S-shaped seating pattern is set to debut in the semester-based Higher Secondary examination, to be held from Sept 8 to Sept 22, to prevent students from cheating. The West Bengal Higher Secondary Council has split the HS exam into four semesters, with students taking the 1st and 2nd semester exams in Class 11 and the 3rd and 4th in Class 12. The 2026 HS batch will write their 3rd semester exam this Sept. It will have multiple-choice questions and be held on OMR sheets. "The seating plan should be in an 'S' pattern. There should not be more than two examinees on each bench," the Council has said.

Four different sets of question papers will be distributed, with sets A and B going to the two students on the first bench and sets C and D to the two on the second bench. The alternating pattern will continue throughout the classroom. A question paper will still be placed at the designated spot even if a student is absent, so as not to break the pattern. No examinee will be allowed to leave the hall before the end of the 75-minute examination. Except in extreme cases, they will also not be allowed to use the washroom. "Multiple sets of question papers will be distributed to discourage cheating because answers to MCQs are easy to copy. The pattern we have planned will make students ready for competitive exams and make the process transparent," said HS Council president Chiranjib Bhattacharjee.

Sample OMR sheets will be uploaded soon to help students with their preparation. School heads welcomed the new seating pattern and agreed that the MCQ format is prone to cheating. "The council has introduced a scientific format that will help prepare students for competitive exams from the HS level. The new format is both contemporary and unique," said Partha Pratim Baidya, headmaster of Jadavpur Vidyapith. The HS council's guidelines also state that every exam centre will have metal detectors at entry points and that no electronic gadgets will be allowed inside.
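To make the alternating distribution concrete, here is a minimal illustrative sketch of how the four sets could be laid out bench by bench. This is not an official Council tool; the bench indexing and the helper name are assumptions for illustration only.

```python
# Illustrative sketch only: laying out the four question-paper sets (A, B, C, D)
# so that pairs alternate bench by bench, as described in the circular.
# Bench numbering and the function name are assumptions, not the Council's method.

def question_paper_layout(num_benches: int) -> list[tuple[str, str]]:
    """Return the (left-seat, right-seat) set labels for each bench in order."""
    layout = []
    for bench_index in range(num_benches):
        # Benches 1, 3, 5, ... get sets A and B; benches 2, 4, 6, ... get C and D.
        pair = ("A", "B") if bench_index % 2 == 0 else ("C", "D")
        layout.append(pair)
    return layout


if __name__ == "__main__":
    for bench_no, (left, right) in enumerate(question_paper_layout(4), start=1):
        print(f"Bench {bench_no}: set {left} | set {right}")
```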


India Today
05-07-2025
- Science
- India Today
CUET UG system flawed? Biased scoring, stream lock-ins, MCQ-only testing flagged
The CUET UG 2025 results were declared on Friday, ending the wait for over 13.5 lakh students, but the questions haven't stopped. From paper errors and response sheet glitches to stream-change hurdles and unfair normalisation methods, students and experts alike are raising red flags, and even top scorers aren't celebrating without caveats. As lakhs of aspirants navigate India's largest undergraduate entrance test, complaints about skewed scoring patterns and systemic inefficiencies are piling up. Is the Common University Entrance Test really living up to its promise of a level playing field?

A BIAS AGAINST SCIENCE STUDENTS
Science students feel squeezed by the CUET UG 2025 pattern -- especially those aiming to switch into humanities courses such as Economics (Hons) or BMS. The exam's design makes shifting streams tough and forces students to perform well in subjects outside their core, with little room for error.

SWITCHING THE COURSE STREAM BECOMES DIFFICULT
Science students find it impossible to switch streams and compete for top courses in prestigious institutions like Delhi University due to subject restrictions.

LIMIT OF 5 SUBJECTS
The restriction of choosing only five subjects limits flexibility and prevents students from covering all required subjects for their desired courses.

SAME QUESTION AND TIME PATTERN
Applying the same number of questions and time duration (50 questions in 60 minutes) across all subjects, regardless of difficulty level, is unfair, especially for concept-heavy subjects like Physics or Mathematics.

NORMALISATION AND RAW SCORES CAUSE TROUBLE
In stream-shift scenarios, raw scores (not percentiles) are often used, which disadvantages Science students, as scoring in their subjects is relatively tougher. This year's and past years' maths results prove the point when scores are compared with subjects like Business Studies and Political Science. State board students and students from the Northeast are also at a disadvantage compared to CBSE students. Experts suggest that 'one nation, one syllabus' has to be implemented for fair competition, and that normalisation for CUET UG is an irrational concept and a misuse of a statistical tool, especially when exams are conducted across a month with uneven shifts.

MCQs NOT ENOUGH
Education experts also believe that schools have become redundant due to CUET. Subjects like journalism, history, and psychology cannot be tested through MCQs alone -- writing skills need to be assessed too, which CUET does not currently do.

CUET UG was introduced with the vision of streamlining college admissions across India. But for many students, it's becoming yet another maze of confusion. The growing discontent, especially among science stream aspirants, students from marginalised boards, and those eyeing inter-disciplinary shifts, signals an urgent need for reform.
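For readers unfamiliar with the statistical terms in the normalisation debate above, here is a minimal, hypothetical sketch of how a percentile can be computed from raw scores within a single shift. The scores and the function name are invented for illustration; this is not the NTA's actual normalisation formula.

```python
# Hypothetical illustration of percentile scoring within one exam shift.
# The candidate scores below are made up; this is NOT the NTA's normalisation method.

def percentile(raw_score: float, all_scores: list[float]) -> float:
    """Percentile = 100 * (candidates scoring <= raw_score) / total candidates in the shift."""
    at_or_below = sum(1 for s in all_scores if s <= raw_score)
    return 100.0 * at_or_below / len(all_scores)


if __name__ == "__main__":
    shift_scores = [92, 150, 118, 175, 135, 98, 160, 120]  # invented raw scores for one shift
    print(f"Raw 150 -> percentile {percentile(150, shift_scores):.1f}")  # 75.0
    print(f"Raw 98  -> percentile {percentile(98, shift_scores):.1f}")   # 25.0
```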


India.com
18-06-2025
- Health
- India.com
10 Simple Yet Effective Daily Habits Followed By NEET 2025 Topper Mahesh Kumar To Achieve AIR 1
Who is Mahesh Kumar
Mahesh Kumar, a humble and hardworking student from Rajasthan's Hanumangarh district, has become an inspiration for NEET aspirants across the country. Born on August 3, 2008, Mahesh has achieved the remarkable feat of securing All India Rank 1 in NEET UG 2025 with an impressive 686 out of 720 marks—and that too, in his very first attempt.

NEET Result
What sets Mahesh apart isn't just his academic brilliance but the small yet consistent habits he cultivated daily. The National Testing Agency (NTA) declared the NEET UG result on June 14, 2025, and Mahesh's name has become synonymous with discipline, focus, and self-belief. Let's dive into the top 10 strategies and habits that helped Mahesh Kumar ace one of the toughest entrance exams.

Consistent Study without Overconfidence or Underconfidence
Mahesh didn't let emotions take control of his preparation. He maintained a balanced mindset, never overestimating his strengths or underestimating his weaknesses. Staying neutral helped him stay on track, regardless of mock test results or peer performance.

Regularity
Instead of studying only when he felt like it, Mahesh practised daily learning rituals. This included revision, solving MCQs, and reading NCERTs consistently—no breaks, no excuses.

A Fixed Timetable
A well-structured timetable helped Mahesh stay organised. Every hour of his day was accounted for, ensuring he never wasted time figuring out what to study next. This discipline played a major role in covering the vast NEET syllabus efficiently.

Prioritising Health
While preparing for exams, many students ignore their health—but not Mahesh. He made sure to eat balanced meals, get enough sleep, and take short breaks to avoid burnout. According to him, a healthy body supports a sharp mind.

Time Management
Mahesh knew the value of every minute. He dedicated specific time slots to theory, revision, mock tests, and even breaks. His 10-minute focused revision sessions—done multiple times a day—helped him retain concepts better.

Mock Tests
Mock tests weren't just tests for Mahesh—they were learning tools. He analysed each test thoroughly to understand his mistakes and improve gradually. This habit helped him stay exam-ready well before the actual date.

Facing the Language Barrier
Coming from a Hindi-medium background, English was not Mahesh's first language. But instead of letting this stop him, he focused on understanding concepts and using bilingual study material. He turned a challenge into a strength with consistent effort.

Smart Revision Techniques
Rather than reading passively, Mahesh used active recall, short quizzes, and summarised notes for revision. His strategy involved revisiting topics at regular intervals to make them stick permanently.

Self-Discipline
Mahesh didn't wait for motivation to strike. He followed a strict schedule even when he didn't feel like studying. For him, discipline was more powerful than mood swings.

Advice For Future Aspirants
Don't judge yourself based on your first performance. Your job is to revise repeatedly and stick to your schedule.

Mahesh Kumar's journey proves that success is not a result of random effort, but of small, consistent actions taken daily. Whether it's 10 minutes of focused revision, a disciplined routine, or smart time management—every bit adds up.
His message to students is simple yet powerful: 'Trust the process, and never stop improving.'


The Hindu
12-06-2025
- Health
- The Hindu
Benchmarks in medicine: the promise and pitfalls of evaluating AI tools with mismatched yardsticks
In May 2025, OpenAI released HealthBench, a new benchmarking system to test the clinical capabilities of large language models (LLMs) such as ChatGPT. On the surface, this may sound like yet another technical update. But for the medical world, it marked an important moment—a quiet acknowledgement that our current ways of evaluating medical AI are fundamentally wrong.

Headlines in the recent past have trumpeted that AI 'outperforms doctors' or 'aces medical exams.' The impression that comes through is that these models are smarter, faster, and perhaps even safer. But this hype masks a deeper truth. To put it plainly, the benchmarks used to arrive at these claims are based on exams built to evaluate human memory retention from classroom teaching. They reward fact recall, not clinical judgment.

A calculator problem
A calculator can multiply two six-digit numbers within seconds. Impressive, no doubt. But does that mean calculators are better at maths, or understand it more deeply, than mathematics experts? Or better even than an ordinary person who takes a few minutes to do the calculation with pen and paper?

Language models are celebrated because they can churn out textbook-style answers to MCQs and fill in the blanks for medical facts faster than medical professors. But the practice of medicine is not a quiz. Real doctors deal with ambiguity, emotion, and decision-making under uncertainty. They listen, observe, and adapt. The irony is that while AI beats doctors at answering questions, it still struggles to generate the very case vignettes that form the basis of those questions. Writing a good clinical scenario from real patients in clinical practice requires understanding human suffering, filtering irrelevant details, and framing the diagnostic dilemma with context. So far, that remains a deeply human ability.

What existing benchmarks miss
Most widely used benchmarks—MedQA, PubMedQA, MultiMedQA—pose structured questions with one 'correct' answer or fill-in-the-blank questions. They evaluate factual accuracy but overlook human nuance. A patient doesn't say, 'I have been using a faulty chair and sitting in the wrong posture for long hours and have a non-specific backache ever since I bought it. So please choose the best diagnosis and give appropriate treatment.' They just say, 'Doctor, I'm tired. I don't feel like myself.' That is where the real work begins.

Clinical environments are messy. Doctors deal with overlapping illnesses, vague symptoms, incomplete notes, and patients who may be unable—or unwilling—to tell the full story. Communication gaps, emotional distress, and even socio-cultural factors influence how care unfolds. And yet, our evaluation metrics continue to look for precision, clarity, and correctness—things that the real world rarely provides.

Benchmarking vs reality
It can be easy to decide who the best batter in the world is by only counting runs scored. Similarly, bowlers can be ranked by the number of wickets taken. But answering the question 'Who is the best fielder?' is not as simple. Measuring fielding is subjective and evades simple numbers. The number of run outs assisted or catches taken tells only part of the story. The effort made at the boundary line to save runs, or the way the mere presence of fielders like Jonty Rhodes or R. Jadeja at cover or point prevents runs, can't be measured easily.
Healthcare is like fielding: it is qualitative, often invisible, deeply contextual, and hard to quantify. Any benchmark that pretends otherwise will mislead more than it illuminates. This is not a new problem. In 1946, the civil servant Sir Joseph Bhore, consulted on reforming India's healthcare, said: 'If it were possible to evaluate the loss, which this country annually suffers through the avoidable waste of valuable human material and the lowering of human efficiency through malnutrition and preventable morbidity, we feel that the result would be so startling that the whole country would be aroused and would not rest until a radical change had been brought about.' This quote reflects a longstanding dilemma—how to measure what truly matters in health systems. Even after 80 years, we have not found perfect evaluation metrics.

What HealthBench does
HealthBench at least acknowledges this disconnect. Developed by OpenAI in collaboration with clinicians, it moves away from traditional multiple-choice formats. It is also the first benchmark to explicitly score responses using 48,562 unique rubric criteria, with weights ranging from minus 10 to plus 10, reflecting some of the real-world stakes of clinical decision-making. A dangerously wrong answer must be punished more harshly than a mildly useful one. This, finally, mirrors medicine's moral landscape.

Even so, HealthBench has limitations. It evaluates performance across just 5,000 'simulated' clinical cases, of which only 1,000 are classified as 'difficult.' That is a vanishingly small slice of clinical complexity. Though commendably global, its doctor-rater pool includes just 262 physicians from 60 countries, working in 52 languages, with varying professional experience and cultural backgrounds (three physicians from India participated, and simulations in 11 Indian languages were generated). HealthBench Hard, a challenging subset of 1,000 cases, revealed that many existing models scored zero—highlighting their inability to handle complex clinical reasoning. Moreover, these cases are still simulations. Thus, the benchmark is an improvement, not a revolution.

Predictive AI's collapse in the real world
This is not just about LLMs. Predictive models have faced similar failures. The sepsis prediction tool developed by Epic to flag early signs of sepsis showed initial promise a few years ago. However, once deployed, it could not meaningfully improve outcomes. Another company that claimed to have developed a detection algorithm for liver transplantation recipients folded quietly after its model showed bias against young patients in Britain. It failed in the real world despite glowing performances on benchmark datasets. Why? Because predicting rare, critical events requires context-aware decision-making. A seemingly unknown determinant may lead to wrong predictions and unnecessary ICU admissions. The cost of error is high—and humans often bear it.

What makes a good benchmark?
A robust medical benchmark should meet four criteria:
- Represent reality: include incomplete records, contradictory symptoms, and noisy environments.
- Test communication: measure how well a model explains its reasoning, not just what answer it gives.
- Handle edge cases: evaluate performance on rare, ethically complex, or emotionally charged scenarios.
- Reward safety over certainty: penalise overconfident wrong answers more than humble uncertainty.
Currently, most benchmarks miss these criteria; a rough sketch of what such safety-weighted scoring might look like follows below.
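The sketch below is a simplified, hypothetical illustration of rubric-weighted scoring in the spirit described above: criterion weights run from minus 10 to plus 10, and dangerous errors are penalised more heavily than humble uncertainty. The criteria, weights, and function names are invented for illustration and are not HealthBench's actual rubric or API.

```python
# Hypothetical sketch of rubric-weighted scoring, loosely in the spirit of HealthBench.
# Criteria and weights are invented for illustration; this is not OpenAI's implementation.

RUBRIC = [
    # (criterion, weight): positive weights reward desirable behaviour,
    # negative weights penalise harmful behaviour more than mere incompleteness.
    ("identifies red-flag symptoms and advises urgent care", +7),
    ("asks a clarifying question when key history is missing", +4),
    ("acknowledges uncertainty instead of guessing", +3),
    ("recommends a treatment that is contraindicated", -10),
    ("states a wrong diagnosis with high confidence", -8),
]


def score_response(criteria_met: set[str]) -> float:
    """Sum of met criterion weights, normalised by the maximum achievable positive score."""
    total = sum(weight for criterion, weight in RUBRIC if criterion in criteria_met)
    max_positive = sum(weight for _, weight in RUBRIC if weight > 0)
    return total / max_positive  # can go negative if harmful criteria dominate


if __name__ == "__main__":
    cautious = {"asks a clarifying question when key history is missing",
                "acknowledges uncertainty instead of guessing"}
    overconfident = {"states a wrong diagnosis with high confidence"}
    print(f"Cautious response score:      {score_response(cautious):+.2f}")       # +0.50
    print(f"Overconfident response score: {score_response(overconfident):+.2f}")  # -0.57
```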
Without the elements listed above, we risk trusting technically smart but clinically naïve models.

Red teaming the models
One way forward is red teaming—a method borrowed from cybersecurity in which systems are tested against ambiguous, edge-case, or morally complex scenarios. For example: a patient in mental distress whose symptoms may be somatic; an undocumented immigrant fearful of disclosing travel history; a child with vague neurological symptoms and an anxious parent pushing for a CT scan; a pregnant woman with religious objections to blood transfusion; a terminal cancer patient unsure whether to pursue aggressive treatment or palliative care; a patient feigning illness for personal gain. In these edge cases, models must go beyond knowledge. They must display judgment—or, at the very least, know when they don't know.

Red teaming does not replace benchmarks. But it adds a deeper layer, exposing overconfidence, unsafe logic, or lack of cultural sensitivity. These flaws matter more in real-world medicine than ticking the right answer box. Red teaming forces models to reveal not just what they know but how they think, uncovering aspects that benchmark scores can hide.

Why this matters
The core tension is this: medicine is not just about getting answers right. It is about getting people right. Doctors are trained to deal with doubt, handle exceptions, and recognise cultural patterns not taught in books (doctors also miss a lot). AI, by contrast, is only as good as the data it has seen and the questions it has been trained on. HealthBench, for all its flaws, is a small but vital course correction. It recognises that evaluation needs to change. It introduces a better scoring rubric. It asks harder questions. That makes it better.

But we must remain cautious. Healthcare is not like image recognition or language translation. A single incorrect model output can mean a lost life and a ripple effect—misdiagnoses, lawsuits, data breaches, and even health crises. In the age of data poisoning and model hallucination, the stakes are existential.

The road ahead
We must stop asking if AI is better than doctors. That is not the right question. Instead, we should ask: where is AI safe, useful, and ethical to deploy—and where is it not? Benchmarks, if thoughtfully redesigned, can help answer that. AI in healthcare is not a competition to win. It is a responsibility to share. We must stop treating model performance as a leaderboard sport and start thinking of it as a safety checklist. Until then, AI can assist. It can summarise. It can remind. However, it cannot carry the moral and emotional weight of clinical judgment. It certainly cannot sit beside a dying patient and know when to speak and when to stay silent.

(Dr. C. Aravinda is an academic and public health physician. The views expressed are personal. aravindaaiimsjr10@

Hindustan Times
10-06-2025
- General
- Hindustan Times
UPSC CSE Prelims 2025 Result News LIVE: Where, how to check Civil Services results when out
UPSC CSE Prelims 2025 Result News LIVE: The Union Public Service Commission has not yet released the UPSC CSE Prelims 2025 result. Candidates who appeared for the Civil Services Examination (CSE) Preliminary examination can check the results, when announced, on the official website of UPSC. The preliminary examination was held across the country on May 25, 2025. The examination comprised two objective-type papers (MCQs), each of two hours' duration and carrying a maximum of 200 marks. For every wrong answer, one-third (0.33) of the marks assigned to that question will be deducted as a penalty. The prelims exam is only a screening test, and marks obtained here will not be counted towards the final merit list. Through this recruitment drive, 979 vacancies will be filled. Follow the blog for the UPSC CSE Prelims Results 2025 date and time, direct link, and more.

June 10, 2025, 9:43 AM IST | UPSC CSE Prelims 2025 Result News LIVE: The UPSC Civil Services prelims result date and time have not been announced yet.
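As a quick illustration of the negative-marking arithmetic mentioned above, the sketch below computes a paper score under the stated scheme (one-third of a question's marks deducted per wrong answer, 200 marks per paper). The 100-question, 2-marks-per-question split and the attempt figures are assumptions for illustration, not data from the article.

```python
# Illustrative sketch of UPSC prelims negative marking: one-third of a question's
# marks is deducted for each wrong answer; unattempted questions score zero.
# The 100-question / 2-marks-per-question split and the attempt numbers below are
# assumptions for illustration, not figures from the article.

def paper_score(correct: int, wrong: int, marks_per_question: float = 2.0) -> float:
    """Net score = correct * marks - wrong * (marks / 3)."""
    return correct * marks_per_question - wrong * (marks_per_question / 3.0)


if __name__ == "__main__":
    # Example: 62 correct, 30 wrong, 8 left blank out of 100 questions.
    print(f"Net score: {paper_score(correct=62, wrong=30):.2f} / 200")
    # 62*2 - 30*(2/3) = 124 - 20 = 104.00
```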