Answer engine: How Google's AI Mode is reshaping search

Mint · 21 hours ago
I come from the era of Lycos, Yahoo and AltaVista. And I find it amusing that we have a generation of people who will probably say — what are those? For over two decades, Google search has worked by indexing websites, like a massive library catalog. It scanned and stored pages and then showed you that list of blue links to click.
Finding what you want can often be frustrating: it's up to you to sort out relevant and useful links from junk, scams and ads. But it's familiar.
With the world being transformed by AI, it was inevitable that search would have to keep up. Already, many users turn to ChatGPT instead of 'Googling'. If Google doesn't reimagine its search engine, it could find itself at a huge disadvantage.
If you look at the search page, at the extreme left you'll see a new tab — AI Mode. For now it's optional, but in the near future it may not be. This goes beyond just pointing you to sources. Instead, it aims to directly answer your questions, summarise information, and even help you complete tasks, all from the search page itself.
This new approach uses Google's advanced Gemini AI models to understand context, generate natural-language responses, and combine information from many sources in real time. The result? You spend less time clicking around and more time getting immediate, conversational answers. It marks the change from a search engine to an answer engine.
You may have already noticed AI Overviews, a mini version of AI Mode, which appears for certain searches. That gives you a good idea of what the full AI Mode is shaping up to become.
Ready or not, here I come
But are we ready for this seismic shift in something we do several times a day? Probably not. In fact, it's going to be a bit of a shock. It sounds appealing to have some entity do all the hard work of combing through pages and producing a neat, quick explanation with no extra clicking, saving us time and effort. But it's just not what we're accustomed to, and inevitably many users will simply want to do things the old way.
The AI shift raises other questions. Can we still see those linked websites? They're actually still there, but tucked away further down and no longer the first thing we see. For those of us who love to compare different sources and decide for ourselves, this new setup might feel a bit limiting.
Another big question concerns the choice of what content is summarised. With the old way, the choice was more or less ours. Now the AI chooses, and we just have to trust it. As AI is notorious for making mistakes and downright hallucinating, the accuracy of the summaries we get will always be in question. Sources are cited, but they are not as easy to see. When Google's AI picks which pieces of information to highlight first, it is in effect deciding what story gets told. That raises questions of fairness and transparency, and whether we still have the freedom to explore the web on our own terms.
On a practical level, some people might love the new mode. If you're asking a simple question, like the age of a celebrity or tomorrow's weather, it's fast and easy. But for more complicated topics, or when you want to get a feel for different perspectives, you end up doing more work to find the details.
Threat to the open internet?
There are ripple effects beyond just our own screens. Many websites and publishers rely on us clicking through to survive. If fewer people visit their pages because links are presented differently, these sites may lose ad revenue. Over time, we may see less freely available content, and the open, diverse internet we once took for granted could start to shrink.
This doesn't mean it's all doom and gloom. Some people will embrace this new way of searching and appreciate not having to wade through dozens of links. Others will miss the feeling of exploring and stumbling upon unexpected gems.
In the end, each user will need to decide how much to rely on these AI summaries and how often we still want to dig deeper. Maybe we'll learn to balance the convenience of a quick answer with the satisfaction of discovering things for ourselves.
AI Mode is currently available to users in the US and India, where Google has a massive user base. Google needs feedback from these users before rolling the feature out fully everywhere, and you can be sure it will keep a close eye on the reception.
The New Normal: The world is at an inflexion point. Artificial Intelligence is set to be as massive a revolution as the Internet has been. The option of simply staying away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify the technology and help users put it to good use in everyday life.
Mala Bhargava is most often described as a 'veteran' writer who has contributed to several publications in India since 1995. Her domain is personal tech and she writes to simplify and demystify technology for a non-techie audience.

Related Articles

Google techie welcomes younger brother to IT giant, internet says 'do bhai, dono tabahi'
Hindustan Times · 2 hours ago

A software engineer at Google has revealed that his younger brother will soon be joining the tech behemoth too, marking a milestone moment for the family, given how few applicants manage to land a job at the company.

What the Agarwal brothers posted

When Priyam Agarwal announced his job switch on the social media platform X (formerly Twitter), it elicited a proud reaction from his elder brother, Priyansh Agarwal. Priyam shared a screenshot of the 'Onboarding' portal at Google on July 5, which informed him that he had nine days left until he joined the search giant as a software engineer. 'Less than 10 days before I start a new journey. Super excited and a little nervous,' wrote the Delhi-based techie. His brother Priyansh, who already works at Google, reposted his post with a proud message. 'Younger brother coming to Google as well. Super proud of him,' wrote Bengaluru-based Priyansh Agarwal.

Internet celebrates

The post was flooded with congratulatory messages. Many people also shared their surprise at two brothers landing jobs at a company with a famously low acceptance rate. 'Do bhai, dono tabahi (Two brothers, both awesome),' wrote several X users in the comments section. 'Congratulations to you both,' read one comment. 'Wow, both the brothers working at Google. Congratulations sir, Google is my dream company,' another person said.

Google acceptance rate

Google does not publish data on how many applicants it accepts every year. However, industry estimates suggest that Google's acceptance rate sits between 0.2% and 0.5%, lower than Harvard's. The company has also carried out several rounds of layoffs since 2023 in a bid to streamline operations and reduce costs. According to an AP report, Google has been periodically reducing its headcount since 2023 as the industry backtracks from the hiring spree triggered during pandemic lockdowns, when demand for digital services surged. Google began its post-pandemic retrenchment by laying off 12,000 workers in early 2023 and has since been trimming some divisions to help bolster its profits while ramping up its spending on artificial intelligence, a technology driving an upheaval that is starting to transform its search engine into a more conversational answer engine. (With inputs from AP)

AI may now match humans in spotting emotion, sarcasm in online chats
Business Standard · 2 hours ago

When we write something to another person, over email or perhaps on social media, we may not state things directly, but our words may instead convey a latent meaning – an underlying subtext. We also often hope that this meaning will come through to the reader. But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?

Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are or whether they're being sarcastic can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business.

Given how important these tasks are – and how quickly conversational AI is improving – it's essential to explore what these technologies can (and can't) do in this regard. Work on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study that focused on differences in sarcasm detection between different large language models – the technology behind AI chatbots such as ChatGPT – showed that some are better than others. Finally, a study showed that LLMs can guess the emotional 'valence' of words – the inherent positive or negative 'feeling' associated with them.

Our new study published in Scientific Reports tested whether conversational AI, inclusive of GPT-4 – a relatively recent version of ChatGPT – can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm – thus encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8×7B. We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm. The study involved 33 human subjects and assessed 100 curated items of text.

For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell – although someone still had to confirm if the AI was correct in its assessment. This was because AI tends to downplay emotions. Sarcasm remained a stumbling block both for humans and machines. The study found no clear winner there – hence, using human raters doesn't help much with sarcasm detection.

Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research – especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.

There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast – and may soon be valuable teammates rather than mere tools. Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.

Our study's findings do raise follow-up questions. If a user asks the same question of AI in multiple ways – perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided – will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
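For readers curious what this looks like in practice, below is a minimal sketch of how one might prompt an LLM to rate a text on the latent dimensions the study examined. It is illustrative only: the prompt wording, the model name and the rating scales are assumptions for this example, not the researchers' actual protocol.

```python
# Illustrative sketch of LLM-based latent content analysis, loosely
# modelled on the kind of task described above. Not the study's protocol.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def rate_text(text: str) -> dict:
    """Ask the model to rate one text on four latent dimensions."""
    prompt = (
        "Rate the following text and reply with JSON only, using exactly "
        "these keys: sentiment (a number from -1 to 1), political_leaning "
        "('left', 'centre' or 'right'), emotional_intensity (0 to 10), "
        "and sarcastic (true or false).\n\nText: " + text
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the study compared seven LLMs
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # low randomness, since the study also measured consistency
    )
    # Real code would validate or retry on malformed replies; kept minimal here.
    return json.loads(response.choices[0].message.content)


print(rate_text("Oh, fantastic. Another Monday morning meeting."))
# Plausible output: {"sentiment": -0.6, "political_leaning": "centre",
#                    "emotional_intensity": 5, "sarcastic": true}
```

In the study itself, ratings of this kind from seven LLMs were compared against 33 human raters across 100 curated texts; the sketch only shows the general shape of such a pipeline.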
