
Latest news with #ageassurance

Face age and ID checks? Using the internet in Australia is about to fundamentally change

The Guardian

3 days ago

  • Business
  • The Guardian

Face age and ID checks? Using the internet in Australia is about to fundamentally change

As the old adage goes, 'On the internet, nobody knows you're a dog'. But in Australia it might soon be the case that everything from search engines and social media sites to app stores and AI chatbots will have to know your age.

The Albanese government trumpeted the passage of its legislation banning under-16s from social media – which will come into effect in December – but new industry codes developed by the tech sector and eSafety commissioner Julie Inman Grant under the Online Safety Act will probably have much larger ramifications for how Australians access the internet. Measures to be deployed by online services could include looking at your account history, or using facial age assurance and bank card checks.

Identity checks using IDs such as driver's licences to keep children under 16 off social media will also apply to logged-in accounts for search engines from December, under an industry code that came into force at the end of June. The code will require search engines to have age assurance measures for all accounts; where an account holder is determined to be under 18, the search engine will be required to switch on safe search features to filter content such as pornography out of search results.

Six more draft codes being considered by the eSafety commissioner would bring similar age assurance measures to a wide range of services Australians use every day, including app stores, AI chatbots and messaging apps. Any service that hosts or facilitates access to content such as pornography, self-harm material, simulated gaming, or very violent material unsuitable for children will need to ensure children are not able to access it.

In her National Press Club speech last month, Inman Grant flagged that the codes were needed to keep children safe at every level of the online world.
'It's critical to ensure the layered safety approach which also places responsibility and accountability at critical chokepoints in the tech stack, including the app stores and at the device level, the physical gateways to the internet where kids sign-up and first declare their ages,' she said.

The eSafety commissioner announced the intention of the codes during the development process and when they were submitted, but recent media reporting has drawn renewed attention to these aspects of the codes.

Some people will welcome the changes. News this week that Elon Musk's AI Grok now includes a pornographic chat while still being labelled suitable for ages 12+ on the Apple app store prompted child safety groups to call for Apple to review the app's rating and implement child protection measures in the app store. Apple and Google are already developing age checks at the device level that can also be used by apps to check the age of their users.

Founder of tech analysis company PivotNine, Justin Warren, says the codes would 'implement sweeping changes to the regulation of communication between people in Australia'. 'It looks like a massive over-reaction after years of policy inaction to curtail the power of a handful of large foreign technology companies,' he says. 'That it hands even more power and control over Australians' online lives to those same foreign tech companies is darkly hilarious.'

One of the industry bodies that worked with the eSafety commissioner to develop the codes, Digi, rejected the notion they would reduce anonymity online, and said the codes targeted specific platforms hosting or providing access to specific kinds of content. 'The codes introduce targeted and proportionate safeguards concerning access to pornography and material rated as unsuitable for minors under 18, such as very violent materials or those advocating or [giving instructions for] suicide, eating disorders or self-harm,' Digi's director of digital policy Dr Jenny Duxbury says.
'These codes introduce safeguards for specific use cases, not a blanket requirement for identity verification across the internet.'

Duxbury says companies may use inference measures – such as account history or device usage patterns – to estimate a user's age, which would mean most users may not have to go through an assurance process. 'Some services may choose to adopt inference methods because they can be effective and less intrusive.'

However, many users may be caught by surprise when the codes come into effect, says Electronic Frontiers Australia chair John Pane. 'While most Australians seem to be aware about the discussion about social media, the average punter is blissfully unaware about what's happening with search engines, and particularly if they go to seek access to adult content or other content that is captured by one of the safety codes, and then having to authenticate that they're over the age of 18 in order to access that content, the people will not be happy, rightly so.'

Companies that don't comply with the codes face fines similar to those under the social media ban – up to $49.5m for a breach. Other measures, such as eSafety requesting sites be delisted from search results, are also an option for non-compliance.

Pane says it would be better if the federal government made changes to the Privacy Act and introduced AI regulation requiring businesses to carry out risk assessments and banning certain AI activities deemed an unacceptable risk. He says a duty of care to all users accessing digital services should be legislated for the platforms. 'We believe this approach, through the legislature, is far more preferable than using regulatory fiat through a regulatory agency,' he said.
Warren is sceptical the age assurance technology will work, highlighting that the search engine code was brought in before the outcome of the age assurance technology trial, due to be delivered to the government this month. 'Eventually, the theory will come into contact with practice.'

After recent media reporting about the codes, the eSafety commissioner's office this week defended including age assurance requirements for search engines. 'Search engines are one of the main gateways available to children for much of the harmful material they may encounter, so the code for this sector is an opportunity to provide very important safeguards,' the office said.

An expert has quit over the government's planned social media ban, what now?

ABC News

04-07-2025

  • Business
  • ABC News

An expert has quit over the government's planned social media ban, what now?

One of the experts advising the teen social media ban's tech trial has resigned over concerns about its planned 'age assurance' technology. Between uploading government ID documents to US tech companies and using AI to scan and identify people's ages, it's still unclear how the ban is going to be enforced. Is there a better solution we're missing here?

Also, how can we ban social media if we can't decide what it actually is? YouTube has been classified and de-classified as social media throughout the process of developing the bill, and the definition may be more important than we realise when it comes to drawing a technological line in the sand for this ban.

Plus, remember the metaverse? The non-Zuckerberg version of the metaverse is back in the news this week, with a new standard out that could drastically impact teenagers' safety online.

GUESTS:
Emily van der Nagel, lecturer in social media at Monash University
Jocelyn Brewer, psychologist and founder of Digital Nutrition

This episode of Download This Show was made on Gadigal land and in Naarm. Technical production by Ann-Marie de Bettencour and Ross Richardson.

Australia social media teen ban software trial organisers say the tech works

Yahoo

20-06-2025

  • Business
  • Yahoo

Australia social media teen ban software trial organisers say the tech works

By Byron Kaye

SYDNEY (Reuters) – Some age-checking applications collect too much data and no product works 100% of the time, but using software to enforce a teenage social media ban can work in Australia, the head of the world's biggest trial of the technology said on Friday.

The view from the government-commissioned Age Assurance Technology Trial of more than 1,000 Australian school students and hundreds of adults is a boost to the country's plan to keep under-16s off social media. From December, in a world-first ban, companies like Facebook and Instagram owner Meta, Snapchat and TikTok must prove they are taking reasonable steps to block young people from their platforms or face a fine of up to A$49.5 million ($32 million).

Since the Australian government announced the legislation last year, child protection advocates, tech industry groups and children themselves have questioned whether the ban can be enforced due to workarounds like virtual private networks (VPNs), which obscure an internet user's location.

"Age assurance can be done in Australia privately, efficiently and effectively," said Tony Allen, CEO of the Age Check Certification Scheme, the UK-based organisation overseeing the Australian trial. The trial found "no significant tech barriers" to rolling out a software-based scheme in Australia, although there was "no one-size-fits-all solution, and no solution that worked perfectly in all deployments," Allen added in an online presentation.

Allen noted that some age-assurance software firms "don't really know at this stage what data they may need to be able to support law enforcement and regulators in the future. There's a risk there that they could be inadvertently over-collecting information that wouldn't be used or needed."

Organisers of the trial, which concluded earlier this month, gave no data findings and offered only a broad overview which did not name individual products.
They will deliver a report to the government next month, which officials have said will inform an industry consultation ahead of the December deadline.

A spokesperson for the office of the eSafety Commissioner, which will advise the government on how to implement the ban, said the preliminary findings were a "useful indication of the likely outcomes from the trial".

"We are pleased to see the trial suggests that age assurance technologies, when deployed the right way and likely in conjunction with other techniques and methods, can be private, robust and effective," the spokesperson said.

The Australian ban is being watched closely around the world, with several governments exploring ways to limit children's exposure to social media.

($1 = 1.5427 Australian dollars)

(Additional reporting by Cordelia Hsu; Editing by Kate Mayberry)

Will the tech behind the teen social media ban work? These questions remain unanswered

SBS Australia

20-06-2025

  • Business
  • SBS Australia

Will the tech behind the teen social media ban work? These questions remain unanswered

Technologies to enforce the Australian government's social media ban for under-16s are "private, robust and effective". That's according to the preliminary findings of a federal government-commissioned trial that has nearly finished testing them.

The findings, released on Friday, may give the government greater confidence to forge ahead with the ban, despite a suite of expert criticism. They might also alleviate some of the Australian population's concerns about the privacy and security implications of the ban, which is due to start in December. For example, a report based on a survey of nearly 4,000 people, released by the government earlier this week, found nine out of 10 people support the idea of a ban. But it also found a large number of people were "very concerned" about how the ban would be implemented. Nearly 80 per cent of respondents had privacy and security concerns, while roughly half had concerns about age assurance accuracy and government oversight.

The trial's preliminary findings paint a rosy picture of the potential for available technologies to check people's ages. However, they contain very little detail about specific technologies, and appear to be at odds with what we know about age-assurance technology from other sources.

The social media ban for under-16s was legislated in December 2024. A last-minute amendment to the law requires technology companies to provide "alternative age assurance methods" for account holders to confirm their age, rather than relying only on government-issued ID. The Australian government commissioned an independent trial to evaluate the "effectiveness, maturity, and readiness for use" of these alternative methods. The trial is being led by the Age Check Certification Scheme — a company based in the United Kingdom that specialises in testing and certifying identity verification systems.
It includes 53 vendors that offer a range of age assurance technologies to guess people's ages, using techniques such as facial recognition and hand-movement recognition.

According to the preliminary findings of the trial, "age assurance can be done in Australia". The trial's project director, Tony Allen, said "there are no significant technological barriers" to assuring people's ages online. He added the solutions are "technically feasible, can be integrated flexibly into existing services and can support the safety and rights of children online".

However, these claims are hard to square with other evidence. On Thursday, the ABC reported the trial found face-scanning technologies "repeatedly misidentified" children as young as 15 as being in their 20s and 30s. These tools could only guess children's ages "within an 18-month range in 85 per cent of cases". This means a 14-year-old child might gain access to a social media account, while a 17-year-old might be blocked.

This is in line with results of global trials of face-scanning technologies conducted for more than a decade. An ongoing series of studies of age estimation technology by the United States' National Institute of Standards and Technology shows the algorithms "fail significantly when attempting to differentiate minors" of various ages. The tests also show that error rates are higher for young women compared to young men, and higher for people with darker skin tones.

These studies show that even the best age-estimation software currently available — Yoti — has an average error of 1.0 years. Other software options mistake someone's age by 3.1 years on average. This means, at best, a 16-year-old might be estimated to be 15 or 17 years old; at worst, they could be seen to be 13 or 19 years of age. These error rates mean a significant number of children under 16 could access social media accounts despite a ban being in place, while some over 16 could be blocked.
Yoti also explains that businesses needing to check an exact age (such as 18) can set a higher age threshold (such as 25), so that fewer people under 18 get through the age check. This approach would be similar to that taken in Australia's retail liquor sector, where sales staff verify ID for anyone who appears to be under 25. However, many young people lack the government-issued ID required for the additional check.

It's also worth remembering that in August 2023, the Australian government acknowledged that the age assurance technology market was "immature" and could not yet meet key requirements, such as working reliably without circumvention and balancing privacy and security.

We don't yet know exactly what methods platforms will use to verify account holders' ages. While face-scanning technologies are often discussed, platforms could use other methods to confirm age. The government trial also tested voice and hand movements to guess young people's ages, but those methods also have accuracy issues.

It's not yet clear what recourse people will have if their age is misidentified. Will parents be able to complain if children under 16 gain access to accounts, despite restrictions? Will older Australians who are incorrectly blocked be able to appeal? And if so, to whom?

There are other outstanding questions. What's stopping someone who's under 16 from getting someone who is over 16 to set up an account on their behalf? To mitigate this risk, the government might require all social media users to verify their age at regular intervals. It's also unclear what level of age estimation error the government may be willing to accept in implementing a social media ban. The legislation says technology companies must demonstrate they have taken "reasonable steps" to prevent under-16s from holding social media accounts. What is considered "reasonable" is yet to be clearly defined.
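The effect of a buffer threshold like Yoti's can be illustrated with some back-of-the-envelope arithmetic. The sketch below is a simplified model, not how any vendor's system actually works: it assumes the estimator's error is normally distributed around the true age, and loosely treats the article's 3.1-year average-error figure as a standard deviation. The function name and numbers are illustrative only.

```python
import math

def pass_probability(true_age: float, threshold: float, sigma: float) -> float:
    """Chance an age estimator with normally distributed error (mean 0,
    std dev sigma) estimates someone at or above `threshold`."""
    z = (threshold - true_age) / sigma
    # P(estimate >= threshold) = 1 - CDF(z) for a standard normal
    return 0.5 * math.erfc(z / math.sqrt(2))

sigma = 3.1  # illustrative: the article's average-error figure, used as std dev

# Chance a 16-year-old is waved through a strict 18+ check
at_18 = pass_probability(16, 18, sigma)
# Versus a 25+ buffer threshold, as in retail liquor checks
at_25 = pass_probability(16, 25, sigma)

print(f"threshold 18: {at_18:.3f}, threshold 25: {at_25:.4f}")
```

Under these assumptions, roughly a quarter of 16-year-olds would slip past a strict 18+ check, but well under one per cent past a 25+ buffer. The trade-off is the one the article identifies: a higher threshold also bounces many adults into a fallback ID check, which some young people cannot satisfy.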
Australians will have to wait until later this year for the full results of the government's trial to be released, and to know how technology companies will respond. With less than six months until the ban comes into effect, social media users still don't have all the answers they need. Lisa M. Given is a professor of information sciences and director of the Social Change Enabling Impact Platform at RMIT University.
