
Google releases 'safety charter' for India, senior exec details top cyber threat actors in the country
India has a unique lens on how technology is being used today, given the scale of the country, the speed at which people are coming online, and the vibrancy of its business community, said Heather Adkins, VP of security engineering at Google. The way threat actors respond to this ecosystem also provides a useful view of the evolving threat landscape, she said, adding that patterns seen in India may be translated to other parts of the world.

Google on Tuesday released a 'safety charter' for India to address online scams and fraud, cybersecurity for government and businesses, and responsible artificial intelligence. The company is looking to deepen partnerships with the government, local organisations and academia in these areas, said Adkins.

Initiatives under the safety charter will be executed through the Google Security Engineering Centre being set up in a hub-and-spoke model across Delhi, Hyderabad and Bengaluru, she told ET.

Local engagements help the company understand patterns better and protect people globally, Adkins said. 'What we might learn about a pattern in India will then be automatically translated to a user somewhere else in the world, which is very beneficial for us,' she said, 'and because of India's scale, you have so many people online, that gives us a lens that's very unique in the world in terms of what we can see.' Fraudulent loan apps and 'digital arrest' scams, for instance, were seen emerging in the country.

On digital privacy laws emerging around the world, including India's Digital Personal Data Protection Act, Adkins said the company advocates a standardised, principles-based approach to enable a seamless experience as laws adapt across countries, while factoring in local needs and innovation. 'Regulation works well when it addresses the problem and gets it solved, and so what we don't want to see is regulation that makes the other problems worse,' she said.

On the question of heightened threats during conflicts like the recent India-Pakistan tensions, Adkins said cybersecurity is now a bigger factor in conflicts as well as natural disasters across the world, with scammers trying to trick people via, say, donation links. 'It's really easy for the scammers to pick up on current events and then use that to trick well-intentioned people out of money, out of personal information, into installing apps that are dangerous,' she said.

Threat actors are also using generative AI for greater productivity, language translation and research, and the company is 'very concerned' about how the technology can make attacks easier, said Adkins. Sharing information and signals about these trends among partners will help tackle the problem, she said, adding that AI is also key in identifying fraudulent emails and removing malicious apps.

Google is set to collaborate further with the ministry of home affairs, partnering with the Indian Cyber Crime Coordination Centre (I4C) on user awareness of cybercrimes over the next two months.

Related Articles

Business Standard
26 minutes ago
Google adds 'AI Mode' option upfront on Android Search widget: Details here
Reportedly, the new AI Mode shortcut in the Google Search widget is now available on most Android phones, expanding access beyond Pixel devices.

New Delhi: Google is reportedly rolling out a new AI Mode shortcut to the Google Search homescreen widget on Android that gives users quicker access to generative AI features directly from their home screen. According to a report by 9to5Google, the shortcut is now widely available on both beta and stable versions of the Google app (v16.28). The update follows Google's ongoing push to integrate generative AI more deeply into its mobile apps.

AI Mode shortcut

Previously exclusive to the Pixel Launcher, the shortcut is now rolling out widely across Android. Once available, it lets users type prompts directly into the full-screen AI Mode interface, similar to performing a standard Google search but with AI-powered enhancements.

In the redesigned Search widget, the AI Mode button appears as a standalone circular icon, positioned to the right of the voice search microphone and Google Lens icons. To enable it, users can long-press the Search widget on their homescreen, then navigate to Customise, where options for Theme, Transparency, and Shortcuts are available. Within Shortcuts, 'AI Mode' appears as the second option in the grid.

For non-Pixel users who are not part of the 'AI Mode' Search Labs experience, the widget shortcut provides the fastest way to launch AI Mode. On devices not yet enrolled in Search Labs, the shortcut appears as a pill-shaped button within the colourful carousel below the Search bar, rather than being embedded directly in it.

What is AI Mode

AI Mode is powered by Google's Gemini 2.5 multimodal AI model and lets users search more naturally and visually: you can speak a question, upload an image, or take a photo with Google Lens and then ask questions about what you see. AI Mode also uses real-time local information, shopping results, and data from Google's Knowledge Graph to give more helpful and relevant answers. The feature is available in the Google app on both Android and iOS.


Time of India
an hour ago
Google Pixel 10 series: Here is what you can expect from its launch in India
Google Pixel 10 series leaks: The Google Pixel 10 series is officially launching on August 20, marking another major release in Google's flagship smartphone lineup. The Pixel 10 is expected to bring major upgrades, including the new Tensor G5 chip built on TSMC's 3nm process. Leaks and insider reports hint at a polished design, improved performance, and tighter integration with Google's AI ecosystem. The early launch suggests Google may be aiming to take on the iPhone 16 head-on in global markets. Here's a complete breakdown of everything known so far about the Pixel 10, including specs, design, chip, and global availability.

Google Pixel 10 series launch date in India

On August 20, the next Made by Google event will take place in New York, where the company promises 'the latest on our Pixel phones, watches, buds, and more.' The sole confirmed hardware detail: at least one of the phones in the Pixel 10 lineup will closely resemble the design of the Pixel 9 Pro, including the signature camera bar and integrated temperature sensor. Everything else, including specifications and model names, comes from leaks. If the reports are accurate, there will be four phones this time: the standard Pixel 10, the 10 Pro, a larger 10 Pro XL, and the second-generation foldable, the 10 Pro Fold. Should the leaked renders from Android Headlines prove correct, Google is playing around with colours this year.

Google Pixel 10 series expected specifications

AI upgrades: Of course, no Pixel launch would be complete without some AI involvement. Google is reportedly working on new tools such as Speak-to-Tweak, which lets you edit photos using only your voice, and Sketch-to-Image, which creates AI-generated images from your sketches. A new virtual assistant named Pixel Sense is also on the way. This on-device AI, formerly rumoured to be named Pixie, will draw information from your Google apps to anticipate your needs, offer recommendations before you ask, and perform actions without relying on the cloud.

Colours: The standard Pixel 10 comes in a striking, whimsical palette: alongside the typical black 'Obsidian' hue, there will be a deep Indigo, a frosty light blue, and a vibrant lime-green named Limoncello. The Pro models take a more conservative approach, featuring subdued colours such as Porcelain, Jade, and Moonstone in addition to the traditional black. Interestingly, the foldable omits black entirely: the Pixel 10 Pro Fold will be limited to 'Moonstone' and 'Jade'.

Camera: The big surprise? The entry-level Pixel 10 also gets three rear cameras: a wide, an ultrawide, and a telephoto lens. Don't get too excited, though. To fit the telephoto lens without raising costs, Google has reportedly opted for smaller, less advanced main and ultrawide sensors; according to Android Authority, these will be the same sensors used in the Pixel 9a. So although you gain a zoom lens, you may sacrifice some low-light performance. The Pro models, in contrast, retain the superior hardware of the Pixel 9 Pro, meaning there will be a clear difference in image quality. The 10 Pro Fold's cameras are expected to be a mix: the main and telephoto sensors will be akin to those in the standard Pixel 10, yet still an upgrade over those in the previous-generation Fold.

Processor: Every year brings a quicker processor, but this year's upgrade is a genuine leap forward. Google's Tensor G5 chip is reported to shift manufacturing from Samsung to TSMC, employing the same 3nm process Apple uses for the A18 Pro chip in the iPhone 16 Pro. Combined with an updated core layout, this could deliver a significant boost in performance and efficiency, helping Pixel phones narrow the gap with rivals powered by Apple and Qualcomm.

Google Pixel 10 series expected price

Google has not yet disclosed prices or detailed features for the upcoming Pixel 10 series, but early speculation suggests the new models may be priced similarly to their predecessors. The standard Pixel 10 is anticipated to start at approximately Rs 79,999 in India, while the Pixel 10 Pro is expected to be priced near Rs 99,999. The bigger Pixel 10 Pro XL might see a slight increase, possibly exceeding the Rs 1,02,000 threshold. The Pixel 10 Pro Fold is speculated to be less expensive this year, with a potential launch price of around Rs 1,36,500, a substantial decrease from the previous year's price of Rs 1,72,999.


Indian Express
an hour ago
The chatbot culture wars are here
For much of the past decade, America's partisan culture warriors have fought over the contested territory of social media — arguing about whether the rules on Facebook and Twitter were too strict or too lenient, whether YouTube and TikTok censored too much or too little and whether Silicon Valley tech companies were systematically silencing right-wing voices.

Those battles aren't over. But a new one has already started. This fight is over artificial intelligence, and whether the outputs of leading AI chatbots such as ChatGPT, Claude and Gemini are politically biased.

Conservatives have been taking aim at AI companies for months. In March, House Republicans subpoenaed a group of leading AI developers, probing them for information about whether they colluded with the Biden administration to suppress right-wing speech. And this month, Missouri's Republican attorney general, Andrew Bailey, opened an investigation into whether Google, Meta, Microsoft and OpenAI are leading a 'new wave of censorship' by training their AI systems to give biased responses to questions about President Donald Trump.

On Wednesday, Trump himself joined the fray, issuing an executive order on what he called 'woke AI.' 'Once and for all, we are getting rid of woke,' he said in a speech. 'The American people do not want woke Marxist lunacy in the AI models, and neither do other countries.'

The order was announced alongside a new White House AI action plan that will require AI developers that receive federal contracts to ensure that their models' outputs are 'objective and free from top-down ideological bias.'

Republicans have been complaining about AI bias since at least early last year, when a version of Google's Gemini AI system generated historically inaccurate images of the American Founding Fathers, depicting them as racially diverse. That incident drew the fury of online conservatives, and led to accusations that leading AI companies were training their models to parrot liberal ideology.
Since then, top Republicans have mounted pressure campaigns to try to force AI companies to disclose more information about how their systems are built, and to tweak their chatbots' outputs to reflect a broader set of political views.

Now, with the White House's executive order, Trump and his allies are using the threat of taking away lucrative federal contracts — OpenAI, Anthropic, Google and xAI were recently awarded Defense Department contracts worth as much as $200 million — to try to force AI companies to address their concerns.

The order directs federal agencies to limit their use of AI systems to those that put a priority on 'truth-seeking' and 'ideological neutrality' over disfavored concepts such as diversity, equity and inclusion. It also directs the Office of Management and Budget to issue guidance to agencies about which systems meet those criteria.

If this playbook sounds familiar, it's because it mirrors the way Republicans have gone after social media companies for years — using legal threats, hostile congressional hearings and cherry-picked examples to pressure companies into changing their policies, or removing content they don't like.

Critics of this strategy call it 'jawboning,' and it was the subject of a high-profile Supreme Court case last year. In that case, Murthy v. Missouri, it was Democrats who were accused of pressuring social media platforms like Facebook and Twitter to take down posts on topics such as the coronavirus vaccine and election fraud, with Republicans challenging their tactics as unconstitutional. (In a 6-3 decision, the court rejected the challenge, saying the plaintiffs lacked standing.)

Now, the parties have switched sides. Republican officials, including several Trump administration officials I spoke to who were involved in the executive order, are arguing that pressuring AI companies through the federal procurement process is necessary to stop AI developers from putting their thumbs on the scale. Is that hypocritical? Sure.
But recent history suggests that working the refs this way can be effective. Meta ended its long-standing fact-checking program this year, and YouTube changed its policies in 2023 to allow more election denial content. Critics of both changes viewed them as capitulation to right-wing critics.

This time around, the critics cite examples of AI chatbots that seemingly refuse to praise Trump, even when prompted to do so, or Chinese-made chatbots that refuse to answer questions about the 1989 Tiananmen Square massacre. They believe developers are deliberately baking a left-wing worldview into their models, one that will be dangerously amplified as AI is integrated into fields such as education and health care.

There are a few problems with this argument, according to legal and tech policy experts I spoke to. The first, and most glaring, is that pressuring AI companies to change their chatbots' outputs may violate the First Amendment. In recent cases like Moody v. NetChoice, the Supreme Court has upheld the rights of social media companies to enforce their own content moderation policies. And courts may reject the Trump administration's argument that it is trying to enforce a neutral standard for government contractors, rather than interfering with protected speech.

'What it seems like they're doing is saying, 'If you're producing outputs we don't like, that we call biased, we're not going to give you federal funding that you would otherwise receive,'' Genevieve Lakier, a law professor at the University of Chicago, said. 'That seems like an unconstitutional act of jawboning.'

There is also the problem of defining what, exactly, a 'neutral' or 'unbiased' AI system is. Today's AI chatbots are complex, probability-based systems that are trained to make predictions, not give hard-coded answers. Two ChatGPT users may see wildly different responses to the same prompts, depending on variables like their chat histories and which versions of the model they're using.
And testing an AI system for bias isn't as simple as feeding it a list of questions about politics and seeing how it responds. Samir Jain, a vice president of policy at the Center for Democracy and Technology, a nonprofit civil liberties group, said the Trump administration's executive order would set 'a really vague standard that's going to be impossible for providers to meet.'

There is also a technical problem with telling AI systems how to behave. Namely, they don't always listen. Just ask Elon Musk.

For years, Musk has been trying to create an AI chatbot, Grok, that embodies his vision of a rebellious, 'anti-woke' truth seeker. But Grok's behavior has been erratic and unpredictable. At times, it adopts an edgy, far-right personality, or spouts antisemitic language in response to user prompts. (For a brief period last week, it referred to itself as 'Mecha-Hitler.') At other times, it acts like a liberal — telling users, for example, that human-made climate change is real, or that the right is responsible for more political violence than the left. Recently, Musk has lamented that AI systems have a liberal bias that is 'tough to remove, because there is so much woke content on the internet.'

Nathan Lambert, a research scientist at the Allen Institute for AI, told me that 'controlling the many subtle answers that an AI will give when pressed is a leading-edge technical problem, often governed in practice by messy interactions made between a few earlier decisions.'

It's not, in other words, as straightforward as telling an AI chatbot to be less woke. And while there are relatively simple tweaks that developers could make to their chatbots — such as changing the 'model spec,' a set of instructions given to AI models about how they should act — there's no guarantee that these changes will consistently produce the behavior conservatives want.
But asking whether the Trump administration's new rules can survive legal challenges, or whether AI developers can actually build chatbots that comply with them, may be beside the point. These campaigns are designed to intimidate. And faced with the potential loss of lucrative government contracts, AI companies, like their social media predecessors, may find it easier to give in than to fight.

'Even if the executive order violates the First Amendment, it may very well be the case that no one challenges it,' Lakier said. 'I'm surprised by how easily these powerful companies have folded.'