International Women in Engineering Day 2025: Together We Engineer a Brighter Future


Hans India · 23 June 2025
International Women in Engineering Day continues to serve as a global catalyst for increasing awareness, visibility, and participation of young girls in the world of engineering. Celebrated every year on June 23, this year's theme — 'Together We Engineer' — underscores the power of collaboration and the shared responsibility to build a more inclusive and impactful engineering future.
This global observance is not just about applauding individual achievements but about acknowledging the collective strength women bring to the engineering table, especially when they work in unison with allies and institutions.
A few women engineers shared their opinions with The Hans India about the role of women in engineering. The representation of women in engineering has evolved remarkably over the decades. From contributing to AI systems and smart gadgets to revolutionising computing and leading sustainable manufacturing, women are actively shaping industries that define our future.

Related Articles

From Empathy to Innovation: Class 11 Student Creates Tech for the Visually Impaired

Hans India

8 hours ago


Seventeen-year-old innovator and accessibility advocate Ashwat Prasanna spoke to The Hans India about the journey behind EyeSight, an AI-powered, low-cost smart glasses solution designed to empower India's visually impaired. In this conversation, he shared how empathy, user collaboration, and cutting-edge technology came together to create an inclusive device that aims to reach over 20,000 users by 2026.

You're just 17 and already impacting lives. How did your journey into social innovation begin so early?

I volunteer at the Premanjali Foundation for the visually impaired. During my time there, I became friends with Charan, who shared my love for math and logic. We would spend a lot of time discussing puzzles and Olympiad questions. What moved me was when one of the teachers there told me not to encourage him too much, since he would never have the opportunities I did. That unsettled me for days: in an age when we talk about robot housekeepers and driverless cars, there were still pockets where technology hadn't made its mark. Over the next few months, I researched all the available accessibility tech, what worked and what was missing. I realised that the best results could be achieved with a device designed specifically for the needs of the visually impaired: navigation, currency reading, scene description, and so on. That is how the very first version of EyeSight was born, three years ago.

Can you share one user testing experience that deeply moved you or reshaped your thinking?

More than user testing, I would say this design was co-created with the users. From the outset, the design and features were influenced by the users' needs and wants. Throughout the version iterations, I got a lot of feedback about what worked and what did not hit the mark. One thing that hit home hard was affordability.
It was easy to get carried away with the latest technology, but that would be pointless for most visually impaired users, since it falls outside what they can afford. The challenge was to create the best possible version at the lowest cost.

How do you plan to make the device sustainable and scalable across India's diverse regions and languages?

EyeSight uses the OpenAI API, which has incredible support for India's local languages and even dialects, giving us great reach and localisation in these regions. In the future, we plan to fine-tune or train LLMs and AI models to better suit these regions. Another major part of sustainability and scalability is making the device affordable, which has been one of its most significant features so far. By making the device 3D-printable and using standard parts, it can be assembled by nearly anyone, for everyone.

How do EyeSight's offline functionality and ₹1,500 pricing redefine affordability and accessibility in this space?

Compared to other devices with comparable features, which cost upwards of 10,000 to 15,000 rupees, EyeSight is only Rs. 1,500, made possible by our choices of design and functionality. Why is this significant? The reality for many institutions for the visually impaired is that cost, more than features, defines access. With EyeSight, thousands more could have access to transformative assistive technology.

With a target of 20,000 users by 2026, how do you plan to tackle scale while keeping personalisation and support intact?

In the past, most prototypes were used in small-scale testing, where they were used individually and not sold to customers. The pilot phase (set to begin in May) will include units loaned to an institution, by which time the pricing will be finalised.
We have received an IB grant of $3,000, which has been very useful in building these late-stage prototypes. Going forward, our first step will be to conduct large-scale user testing and refine the product over the next few months. Based on the testing results, we plan to approach manufacturing units with the refined specs. To reach users, we are planning to collaborate with schools for the visually impaired in Karnataka. Samarthanam Trust, NAB Karnataka, Mitra Jyoti, and the Premanjali Foundation have been of incredible help in our creation process. The students in these institutions will be our initial beneficiaries.

How did support from programmes like the IB Global Youth Action Fund and RunWay Incubation shape EyeSight's development?

Building the technical product is one thing; taking it the last mile to market is another. As a student, I needed all the help I could get in building EyeSight as a product. RunWay Incubation is a division of PEARL Design Academy that incubates early-stage student ventures such as mine. There I learnt the fundamentals of creating a business plan, marketing, and fundraising tools. With this foundation, I was able to apply for and win the IB Global Youth Action Fund grant of $3,000. This fund, in turn, has helped me build low-fi prototypes and a testable prototype with which I'm doing user testing.

How does EyeSight perform offline AI processing on a wearable device without constant cloud connectivity? What challenges did you face in optimising performance?

Currently, the code combines models: more detailed access when online, and quick, essential scene inference offline. This means the basic features of identifying objects, hazards, and safety risks work regardless of an internet connection, and we are working to implement more features to improve offline performance.
This is especially significant since many of our users have mentioned that internet connectivity is often patchy in the areas where they typically use the product.

How do the glasses trigger emergency alerts? Are they gesture-activated or context-based through environmental detection?

They need a simple tap gesture on the device before the glasses inform the user and call emergency services. In future versions, emergencies could be identified automatically using computer vision.

What's next for EyeSight after this prototype phase? Are there any new features or partnerships in the works?

- The first priority is to increase user-level and field testing for multiple use cases, cooperate with NGOs, and work with the students
- From a packaging standpoint, we need to increase product robustness and reduce the cost of various components; we have identified a hardware partner and will accelerate product redesign
- We will apply for national and international grants and financial partners for scaling and a large-scale launch
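The online-plus-offline split Ashwat describes, a detailed cloud model when connectivity allows and essential on-device inference otherwise, can be sketched roughly as follows. This is a minimal illustration under stated assumptions: every function and label name here is hypothetical, stands in for real model calls, and is not EyeSight's actual code.

```python
# Hypothetical sketch of an online-first, offline-fallback scene pipeline.
# All names are illustrative stand-ins, not EyeSight's real implementation.

def cloud_describe(image: bytes) -> str:
    """Stand-in for a detailed, multilingual cloud vision-model call.
    Here it raises to simulate the patchy connectivity users reported."""
    raise ConnectionError("no network")

def local_detect(image: bytes) -> list[str]:
    """Stand-in for a small on-device detector returning object labels."""
    return ["stairs", "person"]

def describe_scene(image: bytes) -> str:
    """Prefer the rich cloud description; fall back to essential
    offline inference (objects, hazards, safety risks)."""
    try:
        return cloud_describe(image)
    except ConnectionError:
        labels = local_detect(image)
        hazards = [l for l in labels if l in {"vehicle", "stairs", "obstacle"}]
        if hazards:
            return "Caution: " + " and ".join(hazards) + " ahead."
        return "Detected: " + ", ".join(labels) if labels else "Path looks clear."

print(describe_scene(b""))  # falls back offline: "Caution: stairs ahead."
```

The design choice mirrored here is that safety-critical detections never depend on the network; the cloud call only adds richness when it happens to be available.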

Empowering Digital Legacies: How Organizations Are Using AI To Redefine Social Media

Hans India

a day ago


In a world where technology moves faster than ever, it's easy to feel like real human connection is being left behind. But the team behind the social platform 'i' is changing the script, using AI not just to innovate but to bring people closer together in meaningful ways. In an in-depth conversation with The Hans India, the creators of the platform talk about the inspiration behind features like the Family Tree, i-films, and video swaps: tools designed to help people tell their stories, preserve memories, and stay connected across generations and geographies. Whether it's making content creation easy and fun, protecting user privacy, or building trust in an AI-driven age, the vision behind 'i' is clear: social media should be creative, inclusive, and above all, human.

What inspired the Family Tree feature, and how is it helping users connect across generations?

At 'i' we wanted to build something that goes beyond just sharing moments: something that helps people hold onto their family stories and memories in a meaningful way. The Family Tree feature was born from that idea. It's amazing to see how it helps families, even those spread across cities or countries, stay connected by preserving their shared history. It's about creating a digital legacy that future generations can look back on, keeping those emotional bonds alive no matter where life takes you.

How do i-films and video swaps reflect your vision for AI-driven social media?

We believe social media should be a place where everyone can get creative without needing fancy skills or equipment. Features like i-films and video swaps make it fun and simple for people to tell their stories in fresh, dynamic ways. AI takes the complicated parts out of content creation so users can focus on expressing themselves and connecting with others. It's about making creativity accessible and turning social media into a space where people can truly co-create and have fun.
With AI-generated content on the rise, how can platforms ensure authenticity and user trust?

Authenticity is key, especially as AI becomes more involved in content creation. Platforms need to be upfront about what's AI-generated and give users control over how their content is made and shared. At the same time, AI should be a tool that enhances human creativity, not one that replaces it. Building trust means being transparent, educating users about AI's role, and making sure there are safeguards to protect people's digital identities and keep interactions genuine.

How is AI shaping the next wave of personalised content without compromising privacy?

AI has huge potential to personalise our online experiences, but privacy has to come first. That's why we focus on privacy by design: processing data locally on devices when we can, and always asking for clear consent. People should feel safe knowing their information isn't being misused. It's a balance between delivering content that feels tailored and respecting user privacy, and that's a responsibility every platform needs to take seriously.

What role will AR, VR, and voice AI play in social media's growth, especially in smaller cities?

Technologies like AR, VR, and voice AI are opening up amazing new ways for people to interact online, especially in smaller cities where access to tech can vary. Voice AI can break down language barriers, while AR and VR bring stories to life in exciting, immersive ways. These tools help people express themselves more fully and connect with others in ways that feel natural and fun. It's about making social media more inclusive and giving everyone a chance to be part of the digital conversation.

Decoding Gen AI, Cloud, and VDI: A Candid Conversation with Rajeev Ranjan Kumar of Wipro

Hans India

6 days ago


In an exclusive sit-down with The Hans India, Rajeev Ranjan Kumar, a Gen AI specialist and leader at Wipro, demystified some of the most transformative technologies of our time: Generative AI (Gen AI), cloud computing, and Virtual Desktop Infrastructure (VDI). From the promise of AI to its potential pitfalls, and his personal journey into the tech world, Rajeev offers clear-eyed insights with relatable examples. Here's a glimpse into the conversation.

Why are companies increasingly embracing Gen AI-powered solutions?

Rajeev explained that the industry is witnessing a major shift towards the convergence of data forms: text, video, image, and audio. Gen AI leverages all of these together, making it vastly more efficient and intelligent. He pointed out how tools like ChatGPT offer a stark contrast to traditional search engines: instead of returning a list of options, Gen AI delivers precise, context-aware answers, improving productivity and saving time. According to him, this convergence is what's driving mass adoption across industries.

How are cloud and VDI connected to Gen AI in today's tech ecosystem?

To break it down, Rajeev described three key layers: infrastructure, cloud, and access. The infrastructure layer is crucial because AI models need high computing power, but that kind of investment isn't feasible for smaller organisations, which is where cloud platforms come in. With the cloud's pay-as-you-go model, anyone, even students, can access powerful AI tools without buying expensive hardware. Then comes VDI (Virtual Desktop Infrastructure), which allows people to securely access their workspace from any device, anywhere. Together, these elements form a robust ecosystem that makes AI scalable and democratised.

What are the biggest challenges in adopting Gen AI?

Rajeev outlined three major concerns: data privacy, ethical use, and hallucination. In sectors like healthcare and automotive, data sensitivity is extremely high.
Any leak can lead to serious consequences, including loss of trust or exposure of business strategy. Ethical use is another concern, especially with the rise of deepfakes and voice cloning; Rajeev stressed the importance of governance frameworks and audits to ensure responsible use. Lastly, he pointed to the problem of hallucinations: cases where AI outputs something that seems accurate but is factually incorrect. In high-risk industries, even one such error can be catastrophic.

Are AI-powered vehicles, like those from Tesla, really safe?

On autonomous driving, Rajeev admitted that the adoption rate remains low, primarily due to data reliability concerns. These vehicles rely entirely on AI models for decision-making, and if even a single command is wrong, the outcome can be dangerous. Hallucinations, where answers look accurate but aren't, are especially risky here. This is why full automation is still being rolled out cautiously.

Will AI eventually replace human jobs?

Rajeev acknowledged that AI will partially replace roles, particularly in areas like technical documentation and basic coding. Generative AI can produce text and code with impressive accuracy, reducing the number of people required for such tasks. However, he emphasised the continued importance of the human-in-the-loop approach: AI still lacks instinct and human judgement, and will take years to truly mature. As he put it, the 'AI-fication of humans' is already happening, but the 'humanification of AI' is still far off.

Does frequent use of AI tools hamper human creativity?

'No, it actually enhances creativity,' Rajeev said firmly. He shared a story from a gastroenterology summit, where a doctor had failed to diagnose cancer in a patient early on. Years later, when the patient's earlier records were uploaded into an AI system, it accurately predicted the cancer risk that had been overlooked.
The experience reinforced Rajeev's belief that AI complements human effort and helps professionals work smarter. To quote Rajeev: "The patient had second-stage cancer, and the doctor felt guilty, because the person had come to him three times and he had not been able to diagnose what was developing. So he asked the patient for all his current records and uploaded the entire history into an AI application. The first-year data showed a 50% probability of cancer within two to three years; the second-year data showed a 70% probability of a diagnosis within the next year. He was surprised: he had so much experience, and here was a new technology seemingly replacing him. But then he thought that had he used this technology two years earlier, he could have saved the patient much sooner." In creative fields like media, Rajeev said, AI can fast-track execution, allowing professionals more time for vision and innovation.

What about the environmental cost of training large AI models?

Rajeev acknowledged the concern about AI's water and energy consumption, but said the industry is responding. Companies like NVIDIA are creating more energy-efficient hardware, and Small Language Models (SLMs) are emerging as lightweight alternatives to Large Language Models (LLMs), consuming less power with similar performance for specific use cases. He added that jurisdictions like the EU are already implementing Responsible AI frameworks that include environmental considerations.

How do SLMs compare to LLMs, and where should each be used?

SLMs, according to Rajeev, are ideal for task-specific applications such as call centres or IT helpdesks, where the questions are predictable and the datasets are limited.
LLMs, on the other hand, are better suited to complex, multimodal tasks, such as processing audio, text, and images together in healthcare diagnostics or creative media. At Wipro, the choice between an SLM and an LLM is based entirely on client requirements and the scope of the project.

Can AI misuse be prevented, especially by people with malicious intent?

Rajeev explained that modern AI systems have three protective layers: the user interface, the data processing layer, and the guardrail layer. The guardrail monitors queries to detect and block inappropriate or unethical ones. Moreover, usage patterns are constantly tracked, and feedback from these interactions is used to strengthen the model over time, improving not only security but also response quality.

Tell us a little about your personal journey into AI. Was it always part of your plan?

Rajeev shared that his journey into AI began by chance. Two years ago, AI was still emerging and most people were chasing more established tech roles, but he saw an opening and decided to take a leap. A turning point came during his MBA at IIM Kozhikode, when Professor Raju told him, 'The next decade belongs to data. If you control data, you control the world.' That advice inspired him to pivot, and it turned out to be a defining decision in his career.

Was there a specific moment that confirmed you made the right choice?

Yes. Rajeev recalled a friend who struggled to sift manually through 1,000 job applications. In just five days, Rajeev built a tool that could score resumes against job descriptions. To make it more robust, he implemented cosine similarity to detect AI-generated or overly similar resumes, helping to remove redundant applications. That moment made him realise how practically powerful and impactful AI can be.

AI-generated resumes are becoming common. Could the best candidates be overlooked?

Rajeev said it's a real concern.
Many candidates now tailor their resumes to pass AI filters using keywords and tools. While this helps visibility, it also leads to over-standardisation, which can mask real talent. He advised applicants to be strategic: 'Use AI to enhance your resume, but remember that authentic skills and substance still matter most.'

Rajeev concluded that AI is a tool, not a threat. Used responsibly, it has the potential to enhance human capabilities, not replace them. The key is to stay ethical, curious, and collaborative. "AI is here to stay. The question is: how responsibly and creatively will we use it?"

Interview by: Gyanisha Mallick
Guest: Rajeev Ranjan Kumar, Senior Leader & AI Specialist, Wipro
Platform: The Hans India; TechTalk Podcast
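The cosine-similarity resume scoring Rajeev mentions can be illustrated with a small sketch. This is a toy illustration only: it uses a plain bag-of-words representation and made-up sample texts, whereas a production screener would use embeddings and proper text preprocessing.

```python
# Toy sketch of scoring resumes against a job description with
# cosine similarity over bag-of-words counts. Sample data is invented.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

job = "python data engineer with cloud experience"
resumes = {
    "A": "experienced python engineer cloud and data pipelines",
    "B": "graphic designer skilled in branding and print",
}

# Score each resume against the job description. The same measure,
# applied between resumes, can flag near-duplicates as redundant.
scores = {name: cosine_similarity(job, text) for name, text in resumes.items()}
best = max(scores, key=scores.get)
print(best)  # "A": it shares python/data/engineer/cloud with the job text
```

The same pairwise comparison between submitted resumes is what lets a screener flag overly similar (possibly AI-templated) applications: two resumes whose similarity exceeds some threshold are candidates for deduplication.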
