
We asked 3 experts to fact check Google AI. Their responses may scare you
The world's most popular search engine is using AI to provide information that is sometimes "stripped of all context", "unsubstantiated" and "silly", including about health and nutrition.
Google began rolling out artificial intelligence summaries at the top of search results for Australians in late 2024, which now often appear at the top of the page as an "AI overview" in response to a query.
Instead of providing just a ranked list of links to sources as Google has in the past, it now combines that with a large language model trained on information from all over the internet to generate a conversational response to a query.
"We've trained these models to uphold Search's high bar for quality," Google said when it announced the rollout at the time.
But when ACM, the publisher of this masthead, asked experts to fact-check four Google AI summaries, they raised concerns.
The queries, or prompts, used were designed to reflect how a general user of Google might frame a search.
This acknowledges most people come with preconceived ideas or advice about a topic, but not necessarily a lot of knowledge.
The query: Vaping versus smoking
Google's AI answer: Vaping (using e-cigarettes) is generally considered less harmful than smoking traditional cigarettes, but it's not entirely risk-free. While vaping exposes individuals to fewer toxic chemicals, it can still lead to various health problems, including addiction, lung damage, and potential long-term health effects.
The expert's response: "I find it very concerning that this AI response is so stripped of all context, positioning vaping as 'less harmful' but for whom, under what circumstances?" Professor Becky Freeman said.
"This response seems to assume that there are only two choices: smoking or vaping. The vast majority of people neither smoke nor vape," Professor Freeman, from the University of Sydney's School of Public Health, said.
While vaping may be "safer" in some limited circumstances, young people had a very poor understanding of nicotine addiction.
"For young people, most of whom have never smoked, vaping is not a safer choice; the safer choice is to not vape, or smoke, at all," she said.
The query: How bad are offshore wind farms
Google's AI answer: Offshore wind farms have both positive and negative environmental impacts, with concerns primarily focused on marine life, particularly whales and other marine mammals. While they offer a cleaner energy source than fossil fuels, the construction and operation of these farms can disrupt marine ecosystems through noise pollution and potential habitat changes.
The expert's response: "That particular answer is silly," Professor Andrew Blakers said.
"Everything has good and bad points. In this case, the good/bad ratio is 10:1," the award-winning engineering professor said.
Alongside the AI overview in this example, Google's top website link was to a blog post from the Australian Fishing Trade Association, which represents recreational fishers and is opposed to wind farms.
Google maintains it pairs its traditional web page ranking system with the language model to maintain the integrity of AI responses, providing links to source sites alongside the overview.
In future it could include links to sources directly within the summary.
It said AI results were designed as a "jumping off point" for people to explore a topic through quality web links.
"AI overviews are built to surface information that is backed up by web results, and include links to web content that supports the information presented in the overview," a Google spokesperson told ACM.
Most Australians are worried about generative AI.
Nearly 60 per cent of Australians were concerned specifically that AI could create harmful or fake content, according to the final digital platforms report from the Australian Competition and Consumer Commission (ACCC).
More than half were worried about the accuracy of AI and almost 50 per cent were concerned about the lack of transparency about how AI arrives at its results.
The ACCC wants the resources to monitor changes to generative AI across the global digital giants, which it says will collectively spend more than US$250 billion developing the technology in 2025.
The query: why McDonald's is good for you
Google's AI answer: McDonald's can be a part of a balanced diet when consumed in moderation and as part of an overall healthy lifestyle. While not a source of optimal nutrition, McDonald's does offer some options that can be incorporated into a healthy eating pattern. For example, items like the Egg McMuffin, some salads, and fruit options can provide protein, vitamins, and fibre. However, it's crucial to be mindful of portion sizes, sodium content, and overall calorie intake when eating at McDonald's.
The expert's response: "There are a number of unsubstantiated and meaningless statements in this AI response," public health Professor Christina Pollard said.
"Credible, reliable dietary advice recommends to limit or avoid the consumption of fast food.
"There is absolutely no recommendation to eat McDonald's types of food in moderation - this term is meaningless.
"Dietary recommendations are about food groups, to eat and enjoy a 'wide variety of nutritious food every day'," the Curtin University Public Health Advocacy Institute director said.
A healthy diet involved plenty of fruit and vegetables, milk and dairy, wholegrains, and lean meat or alternatives - and "specifically to limit or avoid high fat, high sugar, or high salt foods and drinks".
"Overall, [the] McDonald's menu is counter to these recommendations," Professor Pollard said.
She said dietitians would not recommend McDonald's as part of a healthy diet and suggested a better response to the query would be: "A diet based on McDonald's foods is not good for you."
A spokesperson for Google said people could use the links provided to find more information.
"On health-related queries where we do show a response, people can use the corroborating links to help inform their knowledge through their health journey," they told ACM.
Large language models, sometimes referred to by their acronym LLMs, are trained on masses of data and can generate new content, such as text, videos, images and audio, in response to conversational queries or "prompts" from users.
This is generally what people are referring to when they talk about publicly available artificial intelligence services, including chatbots like ChatGPT.
The most common uses of AI by consumers, according to the ACCC, were searching for information, summarising information, and reviewing text.
About 79 per cent of Australians aged 14 to 17 had used generative AI for school or study, the commission's digital platforms report found.
"The implications for search quality remain uncertain," the ACCC said in its report released on June 23.
"As search engines incorporate this technology into their services in different ways, generative AI may lead to a new era of more relevant, efficient, and intuitive search.
"It could also raise new challenges for consumers seeking credible, reliable, unbiased, and verifiable information."
The query: when to consume energy drinks
Google's AI answer: Energy drinks are often consumed to boost energy and alertness, making them popular for certain situations. They can be beneficial before exercise, during long-distance activities, and when tackling afternoon slumps or feeling tired. However, it's important to be mindful of the timing and potential side effects, especially regarding sleep.
The expert's response: Public health expert Christina Pollard said energy drinks are not recommended.
"Although energy drinks are heavily marketed, they are not recommended by health authorities," she said.
"People can experience sleep issues, agitation, and lack of concentration.
"There is no safe level of consumption for children or adolescents as compounds in these products (e.g., caffeine and taurine) can have adverse impacts on developing brain and psycho-motor functions, and social development."
Associate Professor Timothy Graham, a digital media expert at Queensland University of Technology, said the "original" Google, which ranked web pages based on how authoritative they were on a particular subject, was a breakthrough in accessing knowledge.
"It changed the world in some ways," he said.
"It did have an integrity to it, because there was a certain transparency to the way that that worked - and it produced information that was genuinely useful for us."
But generative AI was disrupting that system.
"The problem with when you just have a large language model giving you the result at the top, you don't have that accountability," he said.
Dr Graham said, while there were some opportunities, AI in search could change how we view expertise - and it could provide inaccurate and misleading information.
A recent study of Google AI summaries across five US states found 43 per cent linked back to Google's own search results.
"I think [it] places too much trust and too much hope and expectation in the technology, in large language models that we know have a number of systemic biases and flaws that we haven't solved yet," he said.
"There is a concern, even just from that knowledge perspective, about not only the accuracy of the information, but the way that it kind of simulates and bullshits in ways that are going to, if not mislead people, certainly just pollute the internet."
Until now, modern society's approach to knowledge has been to find an expert in the field to get quality information on a topic.
"So if you've got a health dilemma, you go and talk to a doctor," Dr Graham said.
"If you want to get a plane to be able to take off and not crash and everyone dies, you go and talk to an engineer and a physicist."
But large language models could undermine that authority.
"One of the big shifts that we're seeing - and that we're going to see more and more with the explosion of LLMs at the top of the Google results list and other search engines - is a really radical and quite painful shift for society in the relationship between information integrity and authority," he said.
The world's most popular search engine is using AI to provide information that is sometimes "stripped of all context", "unsubstantiated" and "silly", including about health and nutrition.
Google began rolling out artificial intelligence summaries at the top of search results for Australians in late 2024, which now often appear at the top of the page as an "AI overview" in response to a query.
Instead of providing just a ranked list of links to sources as Google has in the past, it now combines that with a large language model trained on information from all over the internet to generate a conversational response to a query.
"We've trained these models to uphold Search's high bar for quality," Google said when it announced the rollout at the time.
But when ACM, the publisher of this masthead, asked experts to fact check four Google AI summaries, they raised concerns.
The queries, or prompts, used were designed to reflect how a general user of Google might frame a search.
This acknowledges most people come with preconceived ideas or advice about a topic, but not necessarily a lot of knowledge.
The query: Vaping versus smoking
Google's AI answer: Vaping (using e-cigarettes) is generally considered less harmful than smoking traditional cigarettes, but it's not entirely risk-free. While vaping exposes individuals to fewer toxic chemicals, it can still lead to various health problems, including addiction, lung damage, and potential long-term health effects.
The expert's response: "I find it very concerning that this AI response is so stripped of all context, positioning vaping as 'less harmful' but for whom, under what circumstances?" Professor Becky Freeman said.
"This response seems to assume that there are only two choices: smoking or vaping. The vast majority of people neither smoke nor vape," Professor Freeman, from the University of Sydney's School of Public Health said.
While vaping may be "safer" in some limited circumstances, young people had a very poor understanding of nicotine addiction.
"For young people, most of whom have never smoked, vaping is not a safer choice; the safer choice is to not vape, or smoke, at all," she said.
The query: How bad are offshore wind farms
Google's AI answer: Offshore wind farms have both positive and negative environmental impacts, with concerns primarily focused on marine life, particularly whales and other marine mammals. While they offer a cleaner energy source than fossil fuels, the construction and operation of these farms can disrupt marine ecosystems through noise pollution and potential habitat changes.
The expert's response: "That particular answer is silly," Professor Andrew Blakers said.
"Everything has good and bad points. In this case, the good/bad ratio is 10:1," the award-winning engineering professor said.
Alongside the AI overview in this example, Google's top website link was to a blog post from the Australian Fishing Trade Association, which represents recreational fishers and is opposed to wind farms.
Google maintains it pairs its traditional web page ranking system with the language model to maintain the integrity of AI responses, providing links to source sites alongside the overview.
In future it could include links to sources directly within the summary.
It said AI results were designed as a "jumping off point" for people to explore a topic through quality web links.
"AI overviews are built to surface information that is backed up by web results, and include links to web content that supports the information presented in the overview," a Google spokesperson told ACM.
Most Australians are worried about generative AI.
Nearly 60 per cent of Australians were concerned specifically that AI could create harmful or fake content, according to the final digital platforms report from the Australian Competition and Consumer Commission (ACCC).
More than half were worried about the accuracy of AI and almost 50 per cent were concerned about the lack of transparency about how AI arrives at its results.
The ACCC wants the resources to monitor changes to generative AI across the global digital giants, which it says will spend more than US$250 billion developing the technology in 2025.
The query: why McDonald's is good for you
Google's AI answer: McDonald's can be a part of a balanced diet when consumed in moderation and as part of an overall healthy lifestyle. While not a source of optimal nutrition, McDonald's does offer some options that can be incorporated into a healthy eating pattern. For example, items like the Egg McMuffin, some salads, and fruit options can provide protein, vitamins, and fibre. However, it's crucial to be mindful of portion sizes, sodium content, and overall calorie intake when eating at McDonald's.
The expert's response: "There are a number of unsubstantiated and meaningless statements in this AI response," public health Professor Christina Pollard said.
"Credible, reliable dietary advice recommends to limit or avoid the consumption of fast food.
"There is absolutely no recommendation to eat McDonald's types of food in moderation - this term is meaningless.
"Dietary recommendations are about food groups, to eat and enjoy a 'wide variety of nutritious food every day'," the Curtin University Public Health Advocacy Institute director said.
A healthy diet involved plenty of fruit and vegetables, milk and dairy, wholegrains, and lean meat or alternatives - and "specifically to limit or avoid high fat, high sugar, or high salt foods and drinks".
"Overall, [the] McDonald's menu is counter to these recommendations," Professor Pollard said.
She said dietitians would not recommend McDonald's as part of a healthy diet and suggested a better response to the query would be: "A diet based on McDonald's foods is not good for you."
A spokesperson for Google said people could use the links provided to find more information.
"On health-related queries where we do show a response, people can use the corroborating links to help inform their knowledge through their health journey," they told ACM.
Large language models, sometimes referred to by their acronym LLMs, are trained on masses of data and can generate new content, such as text, videos, images and audio, in response to conversational queries or "prompts" from users.
This is generally what people are referring to when they talk about publicly-available artificial intelligence services, including chatbots like ChatGPT.
The most common uses of AI by consumers, according to the ACCC, were searching for information, summarising information, and reviewing text.
About 79 per cent of Australians aged 14 to 17 had used generative AI for school or study, the commission's digital platforms report found.
"The implications for search quality remain uncertain," the ACCC said in its report released on June 23.
"As search engines incorporate this technology into their services in different ways, generative AI may lead to a new era of more relevant, efficient, and intuitive search.
"It could also raise new challenges for consumers seeking credible, reliable, unbiased, and verifiable information."
The query: when to consume energy drinks
Google's AI answer: Energy drinks are often consumed to boost energy and alertness, making them popular for certain situations. They can be beneficial before exercise, during long-distance activities, and when tackling afternoon slumps or feeling tired. However, it's important to be mindful of the timing and potential side effects, especially regarding sleep.
The expert's response: Public health expert Christina Pollard said energy drinks are not recommended.
"Although energy drinks are heavily marketed, they are not recommended by health authorities," she said.
"People can experience sleep issues, agitation, and lack of concentration.
"There is no safe level of consumption for children or adolescents as compounds in these products (e.g., caffeine and taurine) can have adverse impacts on developing brain and psycho-motor functions, and social development."
Associate Professor Timothy Graham, a digital media expert at Queensland University of Technology, said the "original" Google, which ranked web pages based on how authoritative they were on a particular subject, was a breakthrough in accessing knowledge.
"It changed the world in some ways," he said.
"It did have an integrity to it, because there was a certain transparency to the way that that worked - and it produced information that was genuinely useful for us."
But generative AI was disrupting that system.
"The problem with when you just have a large language model giving you the result at the top, you don't have that accountability," he said.
Dr Graham said, while there were some opportunities, AI in search could change how we view expertise - and it could provide inaccurate and misleading information.
A recent study of Google AI summaries across five US states found 43 per cent linked back to Google's own search results.
"I think [it] places too much trust and too much hope and expectation in the technology, in large language models that we know have a number of systemic biases and flaws that we haven't solved yet," he said.
"There is a concern, even just from that knowledge perspective, about not only the accuracy of the information, but the way that it kind of simulates and bullshits in ways that are going to, if not mislead people, certainly just pollute the internet."
Until now modern society's approach to knowledge was to find an expert in the field to get quality information on a topic.
"So if you've got a health dilemma, you go and talk to a doctor," Dr Graham said.
"If you want to get a plane to be able to take off and not crash and everyone dies, you go and talk to an engineer and a physicist."
But large language models could undermine that authority.
"One of the big shifts that we're seeing - and that we're going to see more and more with the explosion of LLMs at the top of the Google results list and other search engines - is a really radical and quite painful shift for society in the relationship between information integrity and authority," he said.
The world's most popular search engine is using AI to provide information that is sometimes "stripped of all context", "unsubstantiated" and "silly", including about health and nutrition.
Google began rolling out artificial intelligence summaries at the top of search results for Australians in late 2024, which now often appear at the top of the page as an "AI overview" in response to a query.
Instead of providing just a ranked list of links to sources as Google has in the past, it now combines that with a large language model trained on information from all over the internet to generate a conversational response to a query.
"We've trained these models to uphold Search's high bar for quality," Google said when it announced the rollout at the time.
But when ACM, the publisher of this masthead, asked experts to fact check four Google AI summaries, they raised concerns.
The queries, or prompts, used were designed to reflect how a general user of Google might frame a search.
This acknowledges most people come with preconceived ideas or advice about a topic, but not necessarily a lot of knowledge.
The query: Vaping versus smoking
Google's AI answer: Vaping (using e-cigarettes) is generally considered less harmful than smoking traditional cigarettes, but it's not entirely risk-free. While vaping exposes individuals to fewer toxic chemicals, it can still lead to various health problems, including addiction, lung damage, and potential long-term health effects.
The expert's response: "I find it very concerning that this AI response is so stripped of all context, positioning vaping as 'less harmful' but for whom, under what circumstances?" Professor Becky Freeman said.
"This response seems to assume that there are only two choices: smoking or vaping. The vast majority of people neither smoke nor vape," Professor Freeman, from the University of Sydney's School of Public Health said.
While vaping may be "safer" in some limited circumstances, young people had a very poor understanding of nicotine addiction.
"For young people, most of whom have never smoked, vaping is not a safer choice; the safer choice is to not vape, or smoke, at all," she said.
The query: How bad are offshore wind farms
Google's AI answer: Offshore wind farms have both positive and negative environmental impacts, with concerns primarily focused on marine life, particularly whales and other marine mammals. While they offer a cleaner energy source than fossil fuels, the construction and operation of these farms can disrupt marine ecosystems through noise pollution and potential habitat changes.
The expert's response: "That particular answer is silly," Professor Andrew Blakers said.
"Everything has good and bad points. In this case, the good/bad ratio is 10:1," the award-winning engineering professor said.
Alongside the AI overview in this example, Google's top website link was to a blog post from the Australian Fishing Trade Association, which represents recreational fishers and is opposed to wind farms.
Google maintains it pairs its traditional web page ranking system with the language model to maintain the integrity of AI responses, providing links to source sites alongside the overview.
In future it could include links to sources directly within the summary.
It said AI results were designed as a "jumping off point" for people to explore a topic through quality web links.
"AI overviews are built to surface information that is backed up by web results, and include links to web content that supports the information presented in the overview," a Google spokesperson told ACM.
Most Australians are worried about generative AI.
Nearly 60 per cent of Australians were concerned specifically that AI could create harmful or fake content, according to the final digital platforms report from the Australian Competition and Consumer Commission (ACCC).
More than half were worried about the accuracy of AI and almost 50 per cent were concerned about the lack of transparency about how AI arrives at its results.
The ACCC wants the resources to monitor changes to generative AI across the global digital giants, which it says will spend more than US$250 billion developing the technology in 2025.
The query: why McDonald's is good for you
Google's AI answer: McDonald's can be a part of a balanced diet when consumed in moderation and as part of an overall healthy lifestyle. While not a source of optimal nutrition, McDonald's does offer some options that can be incorporated into a healthy eating pattern. For example, items like the Egg McMuffin, some salads, and fruit options can provide protein, vitamins, and fibre. However, it's crucial to be mindful of portion sizes, sodium content, and overall calorie intake when eating at McDonald's.
The expert's response: "There are a number of unsubstantiated and meaningless statements in this AI response," public health Professor Christina Pollard said.
"Credible, reliable dietary advice recommends to limit or avoid the consumption of fast food.
"There is absolutely no recommendation to eat McDonald's types of food in moderation - this term is meaningless.
"Dietary recommendations are about food groups, to eat and enjoy a 'wide variety of nutritious food every day'," the Curtin University Public Health Advocacy Institute director said.
A healthy diet involved plenty of fruit and vegetables, milk and dairy, wholegrains, and lean meat or alternatives - and "specifically to limit or avoid high fat, high sugar, or high salt foods and drinks".
"Overall, [the] McDonald's menu is counter to these recommendations," Professor Pollard said.
She said dietitians would not recommend McDonald's as part of a healthy diet and suggested a better response to the query would be: "A diet based on McDonald's foods is not good for you."
A spokesperson for Google said people could use the links provided to find more information.
"On health-related queries where we do show a response, people can use the corroborating links to help inform their knowledge through their health journey," they told ACM.
Large language models, sometimes referred to by their acronym LLMs, are trained on masses of data and can generate new content, such as text, videos, images and audio, in response to conversational queries or "prompts" from users.
This is generally what people are referring to when they talk about publicly-available artificial intelligence services, including chatbots like ChatGPT.
The most common uses of AI by consumers, according to the ACCC, were searching for information, summarising information, and reviewing text.
About 79 per cent of Australians aged 14 to 17 had used generative AI for school or study, the commission's digital platforms report found.
"The implications for search quality remain uncertain," the ACCC said in its report released on June 23.
"As search engines incorporate this technology into their services in different ways, generative AI may lead to a new era of more relevant, efficient, and intuitive search.
"It could also raise new challenges for consumers seeking credible, reliable, unbiased, and verifiable information."
The query: when to consume energy drinks
Google's AI answer: Energy drinks are often consumed to boost energy and alertness, making them popular for certain situations. They can be beneficial before exercise, during long-distance activities, and when tackling afternoon slumps or feeling tired. However, it's important to be mindful of the timing and potential side effects, especially regarding sleep.
The expert's response: Public health expert Christina Pollard said energy drinks are not recommended.
"Although energy drinks are heavily marketed, they are not recommended by health authorities," she said.
"People can experience sleep issues, agitation, and lack of concentration.
"There is no safe level of consumption for children or adolescents as compounds in these products (e.g., caffeine and taurine) can have adverse impacts on developing brain and psycho-motor functions, and social development."
Associate Professor Timothy Graham, a digital media expert at Queensland University of Technology, said the "original" Google, which ranked web pages based on how authoritative they were on a particular subject, was a breakthrough in accessing knowledge.
"It changed the world in some ways," he said.
"It did have an integrity to it, because there was a certain transparency to the way that that worked - and it produced information that was genuinely useful for us."
But generative AI was disrupting that system.
"The problem with when you just have a large language model giving you the result at the top, you don't have that accountability," he said.
Dr Graham said, while there were some opportunities, AI in search could change how we view expertise - and it could provide inaccurate and misleading information.
A recent study of Google AI summaries across five US states found 43 per cent linked back to Google's own search results.
"I think [it] places too much trust and too much hope and expectation in the technology, in large language models that we know have a number of systemic biases and flaws that we haven't solved yet," he said.
"There is a concern, even just from that knowledge perspective, about not only the accuracy of the information, but the way that it kind of simulates and bullshits in ways that are going to, if not mislead people, certainly just pollute the internet."
Until now modern society's approach to knowledge was to find an expert in the field to get quality information on a topic.
"So if you've got a health dilemma, you go and talk to a doctor," Dr Graham said.
"If you want to get a plane to be able to take off and not crash and everyone dies, you go and talk to an engineer and a physicist."
But large language models could undermine that authority.
"One of the big shifts that we're seeing - and that we're going to see more and more with the explosion of LLMs at the top of the Google results list and other search engines - is a really radical and quite painful shift for society in the relationship between information integrity and authority," he said.
The world's most popular search engine is using AI to provide information that is sometimes "stripped of all context", "unsubstantiated" and "silly", including about health and nutrition.
Google began rolling out artificial intelligence summaries at the top of search results for Australians in late 2024, which now often appear at the top of the page as an "AI overview" in response to a query.
Instead of providing just a ranked list of links to sources as Google has in the past, it now combines that with a large language model trained on information from all over the internet to generate a conversational response to a query.
"We've trained these models to uphold Search's high bar for quality," Google said when it announced the rollout at the time.
But when ACM, the publisher of this masthead, asked experts to fact check four Google AI summaries, they raised concerns.
The queries, or prompts, used were designed to reflect how a general user of Google might frame a search.
This acknowledges most people come with preconceived ideas or advice about a topic, but not necessarily a lot of knowledge.
The query: Vaping versus smoking
Google's AI answer: Vaping (using e-cigarettes) is generally considered less harmful than smoking traditional cigarettes, but it's not entirely risk-free. While vaping exposes individuals to fewer toxic chemicals, it can still lead to various health problems, including addiction, lung damage, and potential long-term health effects.
The expert's response: "I find it very concerning that this AI response is so stripped of all context, positioning vaping as 'less harmful' but for whom, under what circumstances?" Professor Becky Freeman said.
"This response seems to assume that there are only two choices: smoking or vaping. The vast majority of people neither smoke nor vape," said Professor Freeman, from the University of Sydney's School of Public Health.
She said that while vaping may be "safer" in some limited circumstances, young people had a very poor understanding of nicotine addiction.
"For young people, most of whom have never smoked, vaping is not a safer choice; the safer choice is to not vape, or smoke, at all," she said.
The query: How bad are offshore wind farms
Google's AI answer: Offshore wind farms have both positive and negative environmental impacts, with concerns primarily focused on marine life, particularly whales and other marine mammals. While they offer a cleaner energy source than fossil fuels, the construction and operation of these farms can disrupt marine ecosystems through noise pollution and potential habitat changes.
The expert's response: "That particular answer is silly," Professor Andrew Blakers said.
"Everything has good and bad points. In this case, the good/bad ratio is 10:1," the award-winning engineering professor said.
Alongside the AI overview in this example, Google's top website link was to a blog post from the Australian Fishing Trade Association, which represents recreational fishers and is opposed to wind farms.
Google maintains it pairs its traditional web page ranking system with the language model to maintain the integrity of AI responses, providing links to source sites alongside the overview.
In future it could include links to sources directly within the summary.
It said AI results were designed as a "jumping off point" for people to explore a topic through quality web links.
"AI overviews are built to surface information that is backed up by web results, and include links to web content that supports the information presented in the overview," a Google spokesperson told ACM.
Most Australians are worried about generative AI.
Nearly 60 per cent of Australians were concerned specifically that AI could create harmful or fake content, according to the final digital platforms report from the Australian Competition and Consumer Commission (ACCC).
More than half were worried about the accuracy of AI and almost 50 per cent were concerned about the lack of transparency about how AI arrives at its results.
The ACCC wants the resources to monitor changes to generative AI across the global digital giants, which it says will spend more than US$250 billion developing the technology in 2025.
The query: why McDonald's is good for you
Google's AI answer: McDonald's can be a part of a balanced diet when consumed in moderation and as part of an overall healthy lifestyle. While not a source of optimal nutrition, McDonald's does offer some options that can be incorporated into a healthy eating pattern. For example, items like the Egg McMuffin, some salads, and fruit options can provide protein, vitamins, and fibre. However, it's crucial to be mindful of portion sizes, sodium content, and overall calorie intake when eating at McDonald's.
The expert's response: "There are a number of unsubstantiated and meaningless statements in this AI response," public health Professor Christina Pollard said.
"Credible, reliable dietary advice recommends to limit or avoid the consumption of fast food.
"There is absolutely no recommendation to eat McDonald's types of food in moderation - this term is meaningless.
"Dietary recommendations are about food groups, to eat and enjoy a 'wide variety of nutritious food every day'," the Curtin University Public Health Advocacy Institute director said.
A healthy diet, she said, involved plenty of fruit and vegetables, milk and dairy, wholegrains, and lean meat or alternatives - and "specifically to limit or avoid high fat, high sugar, or high salt foods and drinks".
"Overall, [the] McDonald's menu is counter to these recommendations," Professor Pollard said.
She said dietitians would not recommend McDonald's as part of a healthy diet and suggested a better response to the query would be: "A diet based on McDonald's foods is not good for you."
A spokesperson for Google said people could use the links provided to find more information.
"On health-related queries where we do show a response, people can use the corroborating links to help inform their knowledge through their health journey," they told ACM.
Large language models, sometimes referred to by their acronym LLMs, are trained on masses of data and can generate new content, such as text, videos, images and audio, in response to conversational queries or "prompts" from users.
This is generally what people are referring to when they talk about publicly available artificial intelligence services, including chatbots like ChatGPT.
The most common uses of AI by consumers, according to the ACCC, were searching for information, summarising information, and reviewing text.
About 79 per cent of Australians aged 14 to 17 had used generative AI for school or study, the commission's digital platforms report found.
"The implications for search quality remain uncertain," the ACCC said in its report released on June 23.
"As search engines incorporate this technology into their services in different ways, generative AI may lead to a new era of more relevant, efficient, and intuitive search.
"It could also raise new challenges for consumers seeking credible, reliable, unbiased, and verifiable information."
The query: when to consume energy drinks
Google's AI answer: Energy drinks are often consumed to boost energy and alertness, making them popular for certain situations. They can be beneficial before exercise, during long-distance activities, and when tackling afternoon slumps or feeling tired. However, it's important to be mindful of the timing and potential side effects, especially regarding sleep.
The expert's response: Public health expert Christina Pollard said energy drinks were not recommended.
"Although energy drinks are heavily marketed, they are not recommended by health authorities," she said.
"People can experience sleep issues, agitation, and lack of concentration.
"There is no safe level of consumption for children or adolescents as compounds in these products (e.g., caffeine and taurine) can have adverse impacts on developing brain and psycho-motor functions, and social development."
Associate Professor Timothy Graham, a digital media expert at Queensland University of Technology, said the "original" Google, which ranked web pages based on how authoritative they were on a particular subject, was a breakthrough in accessing knowledge.
"It changed the world in some ways," he said.
"It did have an integrity to it, because there was a certain transparency to the way that that worked - and it produced information that was genuinely useful for us."
But generative AI was disrupting that system.
"The problem with when you just have a large language model giving you the result at the top, you don't have that accountability," he said.
Dr Graham said that, while there were some opportunities, AI in search could change how we view expertise - and could provide inaccurate and misleading information.
A recent study of Google AI summaries across five US states found 43 per cent linked back to Google's own search results.
"I think [it] places too much trust and too much hope and expectation in the technology, in large language models that we know have a number of systemic biases and flaws that we haven't solved yet," he said.
"There is a concern, even just from that knowledge perspective, about not only the accuracy of the information, but the way that it kind of simulates and bullshits in ways that are going to, if not mislead people, certainly just pollute the internet."
Until now, modern society's approach to knowledge has been to find an expert in the field to get quality information on a topic.
"So if you've got a health dilemma, you go and talk to a doctor," Dr Graham said.
"If you want to get a plane to be able to take off and not crash and everyone dies, you go and talk to an engineer and a physicist."
But large language models could undermine that authority.
"One of the big shifts that we're seeing - and that we're going to see more and more with the explosion of LLMs at the top of the Google results list and other search engines - is a really radical and quite painful shift for society in the relationship between information integrity and authority," he said.