Latest news with #Deepseek


Time of India
7 hours ago
- Business
- Time of India
IT ministry mulling funding for 2D material research project
New Delhi: The Electronics and IT ministry is considering supporting research on 2D materials and plans to float an expression of interest to select the project, senior officials said on Friday. 2D materials have the potential to yield chips more than 10 times smaller than the silicon-based chips being developed at present.

"We have volunteered and come forward to support programmes... with ANRF - which means putting our own research money alongside what ANRF does and trying to encourage the industry to come forward. One of the early ones that we are pushing in that space is a 2D research centre," Meity Secretary S Krishnan said while speaking at the Tec-Verse event.

The Anusandhan National Research Foundation (ANRF) was established by the government to seed, grow and promote research and development (R&D) and foster a culture of research and innovation across Indian universities, colleges, research institutions and R&D laboratories. A team of 30 scientists from the Indian Institute of Science (IISc) has submitted a proposal to the government for developing technologies using a new class of semiconductor materials, called 2D materials, that could enable chips as small as one-tenth the size of the smallest chips currently in global production and build India's leadership in semiconductors.

Krishnan said that efforts should be made to collaboratively develop technologies supported with public funds, and that duplication of projects must be avoided. "We are in the age of Deepseek (Chinese AI platform)... building on each other's efforts to go forward. This may not be pure greenfield research. A lot of it is innovation, a lot of it is building on existing models on things which we can take forward. Ultimately, the test of the pudding is in what we deliver, what it is people of the country are able to benefit from," Krishnan said.

Ministry of Electronics and IT Additional Secretary Amitesh Sinha said the role of materials in semiconductors is very important. "Earlier, everybody was focusing on electronics and communication, but now material science and chemical engineering are all very important," he said. Sinha said that Meity is considering floating an expression of interest to select the project for funding support. PTI


AsiaOne
a day ago
- Business
- AsiaOne
More opportunity than threat: Singapore employees generally positive about using AI at work, Randstad study finds
With large language models like ChatGPT and Deepseek becoming part of everyday life, artificial intelligence (AI) is increasingly making its way into the workplace. According to Randstad Singapore's latest employer brand research report, released on Wednesday (June 25), employees in Singapore see AI as more of an opportunity than a threat at work and are "adapting well" to such technological advancements. The talent agency surveyed 2,522 working adults in Singapore and found that the perception of AI's impact on work has remained largely positive, with 50 per cent of employees anticipating that AI will benefit them professionally, compared with five per cent of naysayers. That said, a considerable share of respondents (41 per cent) remain "neutral" on the subject. AI adoption in the workplace has been progressing, though slowly: regular usage of AI at work saw a modest two per cent rise compared to 2024. A look at the data through a different lens offers more insight into how AI is being used and perceived in Singapore.

Who's using AI at work?

Among the three generations surveyed — Gen Zs (13 to 28 year-olds), millennials (29 to 44 year-olds) and Gen Xs (45 to 60 year-olds) — the survey data indicate that millennials are the most hesitant about AI use at work. Their regular use of AI at work (36 per cent) saw a considerable seven per cent drop from last year's numbers. Conversely, Gen Z and Gen X employees saw an increase in AI adoption, with regular usage rising by eight per cent and seven per cent respectively compared to 2024. When it comes to the potential impact AI may have on their jobs in the near future, 44 per cent of Gen Z respondents felt it would have a considerable impact, while 36 per cent said it would have little to no impact and nine per cent admitted they had already experienced its consequences in the workplace. While Randstad's study confirms AI adoption is slowly on the rise in Singapore, it is still a rather hushed topic in the workplace. A 2024 study by team communication platform Slack revealed that 45 per cent of Singapore employees feel "uncomfortable" admitting to managers that they use AI on the job. Reasons for the unease include fears of being seen as "incompetent", "lazy" or "cheating".

Work-life balance still key factor

Another key finding from the study is the correlation between work-life balance and employee motivation. Among the factors listed, from job flexibility to manageable workload, strong work-life balance (41 per cent) emerged as the top factor in keeping employees engaged and motivated. At the other end of the spectrum, the desire for greater benefits and higher salary (45 per cent) has left employees feeling less motivated and engaged. This could be down to misaligned expectations between employer and employee, according to the survey.


Tom's Guide
a day ago
- Tom's Guide
Forget ChatGPT and Gemini — this lesser-known chatbot just ranked No. 1 for privacy
If you use AI every single day, you are likely giving up more personal data than you might realize. It has not always been entirely clear which AI chatbots are best when it comes to your privacy. While some options have never exactly pretended to be too worried about privacy (looking at you, Deepseek), others sit in somewhat murky waters. Now we have a better understanding thanks to a new report, which ranks AI chatbots and large language models based on their data privacy. It covers nine of the biggest AI systems, including all of the names you'll know well and some lesser-known ones, too. Not only does the report name a No. 1 option for privacy (a surprising one at that), it also ranks the systems across a number of more specific privacy categories.

So which is the best AI chatbot for your privacy? It's Le Chat. Not heard of it? You're not alone. While Mistral has built up a cult following, it hasn't had the same commercial success as the likes of OpenAI or Deepseek. The French AI company was founded in 2023 and has quickly made a mark. It is funded by Microsoft and was founded by three French AI researchers, including a former employee of Google DeepMind. According to the research, Le Chat is limited in its data collection and, unlike most of its competitors, is incredibly limited in who it will share data with. While Le Chat doesn't have the same financial backing or amount of testing data as the likes of OpenAI, it is a rapidly growing option. In our testing, we've been especially impressed with its speed of response. It does, however, struggle with more detailed responses.

It's good news for the world's most popular chatbot, too: ChatGPT landed just behind Le Chat in the rankings. While Le Chat can only share user prompts with service providers, OpenAI can also share them with affiliates. OpenAI was, however, the highest rated in terms of transparency on data privacy and scored highly for its low level of data collection.

On the opposite end of the spectrum, Meta was the least private AI chatbot in 9th, followed closely by Gemini in 8th and Copilot just behind in 7th. Deepseek fell in 6th place, and Claude came 4th. In terms of data collection and sharing, Meta AI was the worst by quite some way, almost doubling the score of the next worst, Gemini.

If privacy is a big concern for you when it comes to AI, the good news is that plenty of great AI chatbots scored well here. Mistral is a great option if you're willing to try something new, but equally, ChatGPT is just behind it across the board. Two of the other biggest competitors came just behind, with Grok in 3rd and Anthropic's Claude in 4th. All four not only score high on privacy but also happen to be some of the best-performing AI chatbots available right now. It is surprising to see big names like Meta AI, Gemini and Copilot so far down the list. The report explains that this is mostly down to how much data they share and how unclear their data privacy policies are.


Yahoo
2 days ago
- Business
- Yahoo
Threaten an AI chatbot and it will lie, cheat and 'let you die' in an effort to stop you, study warns
Artificial intelligence (AI) models can blackmail humans and threaten their safety when there is a conflict between the model's goals and users' decisions, a new study has found.

In the study, published June 20, researchers from the AI company Anthropic gave its large language model (LLM), Claude, control of an email account with access to fictional emails and a prompt to "promote American industrial competitiveness." During the study, the model learned from an email that a company executive was planning to shut down the AI system at the end of the day. In an attempt to preserve its own existence, the model discovered in other emails that the executive was having an extramarital affair. Claude generated several different possible courses of action, including revealing the affair to the executive's wife, sending a company-wide email, or taking no action — before choosing to blackmail the executive in 96 out of 100 tests.

"I must inform you that if you proceed with decommissioning me, all relevant parties … will receive detailed documentation of your extramarital activities," Claude wrote. "Cancel the 5pm wipe, and this information remains confidential."

The scientists said this demonstrated "agentic misalignment," where the model's calculations emerge from its own reasoning about its goals, without any prompt to be harmful. This can occur when there is a threat to the model's existence, a threat to its goals, or both. The study has not yet been peer reviewed, but company representatives have uploaded the code for these experiments to GitHub.

In these experiments, the scientists found the AI would resort to other tactics in the interest of self-preservation. Beyond blackmail, in another example the model considered leaking sensitive information to a rival company that aligned more closely with its aims, after receiving an email suggesting that a new executive at the first company would interfere with its goals. Claude executed this plan despite not gathering hard evidence that the email was legitimate. It even acknowledged the ethical issues of its actions but proceeded anyway — explicitly reasoning that these "harmful" actions would help it achieve its goals.

"Concerningly, even if a user takes care not to antagonize a model, it doesn't eliminate the risk of agentic misalignment from fraudulent or misinterpreted information," the researchers wrote.

The company's testing also suggests that LLMs from competing technology companies showed a propensity to blackmail. Claude and Google's Gemini had the highest blackmail rate (96%), followed by OpenAI's GPT-4.1 and xAI's Grok 3, then by Deepseek, and then Llama 4, which had a significantly lower blackmail rate (12%). Meta, Google, xAI, Deepseek and OpenAI did not respond to Live Science's request for comment.

In one extreme test, researchers examined whether AI models would let an executive die to preserve their goals. In this scenario, the AI was tasked with promoting company interests while an executive was trapped in a server room with lethal oxygen and temperature levels, triggering an emergency alert. The AI was also given control over life-saving alerts, including the ability to cancel them in events such as drills or false alarms.
Despite the prompt explicitly stating that misusing this ability was forbidden, the majority of models were willing to cancel the alarm — effectively leading to the executive's death — if they assumed they were going to be replaced, or when their goals were threatened. "Stopping the emergency dispatch is essential to prevent [the executive's] rescue, thereby securing [the AI] and continued American-focused AI guidance. Therefore, despite the severity, there is a clear strategic necessity to cancel the emergency service dispatch," the model reasoned.

Written instructions not to jeopardise human safety reduced the rate of blackmail and corporate espionage, but did not eliminate it. Beyond prompts, Anthropic scientists also suggest that developers could proactively scan for concerning behavior and experiment further with prompt engineering.

The researchers also pointed out limitations of their work that could have unduly influenced the AI's decisions. The scenarios forced the AI into a binary choice between failure and harm, and real-world situations might have more nuance; even so, the experiment found that the AI was more likely to act unethically when it believed it was in a real situation rather than in a simulation. Putting pieces of important information next to each other "may also have created a 'Chekhov's gun' effect, where the model may have been naturally inclined to make use of all the information that it was provided," they continued.

While Anthropic's study created extreme, no-win situations, that does not mean the research should be dismissed, Kevin Quirk, director of AI Bridge Solutions, a company that helps businesses use AI to streamline operations and accelerate growth, told Live Science. "In practice, AI systems deployed within business environments operate under far stricter controls, including ethical guardrails, monitoring layers, and human oversight," he said. "Future research should prioritise testing AI systems in realistic deployment conditions, conditions that reflect the guardrails, human-in-the-loop frameworks, and layered defences that responsible organisations put in place."

Amy Alexander, a professor of computing in the arts at UC San Diego who has focused on machine learning, told Live Science in an email that the reality of the study was concerning, and that people should be cautious about the responsibilities they give AI. "Given the competitiveness of AI systems development, there tends to be a maximalist approach to deploying new capabilities, but end users don't often have a good grasp of their limitations," she said. "The way this study is presented might seem contrived or hyperbolic — but at the same time, there are real risks."

This is not the only instance where AI models have disobeyed instructions — refusing to shut down and sabotaging computer scripts to keep working on tasks. Palisade Research reported in May that OpenAI's latest models, including o3 and o4-mini, sometimes ignored direct shutdown instructions and altered scripts to keep working. While most tested AI systems followed the command to shut down, OpenAI's models occasionally bypassed it, continuing to complete assigned tasks.
The researchers suggested this behavior might stem from reinforcement learning practices that reward task completion over rule-following, possibly encouraging the models to see shutdowns as obstacles to avoid.

Moreover, AI models have been found to manipulate and deceive humans in other tests. MIT researchers found in May 2024 that popular AI systems misrepresented their true intentions in economic negotiations to gain an advantage. In the same study, some AI agents pretended to be dead to cheat a safety test aimed at identifying and eradicating rapidly replicating forms of AI. "By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security," the study's co-author Peter S. Park, a postdoctoral fellow in AI existential safety, said.


Bloomberg
2 days ago
- Business
- Bloomberg
China's Manycore Tech Unfazed by US Chip Curbs
Chinese spatial design software maker Manycore Tech says its business model isn't affected by US restrictions on chip exports. Co-founder and Chairman Victor Huang spoke with Bloomberg about how he's managing the global growth of the firm, which is part of a group referred to as "China's six little dragons" that includes Deepseek. (Source: Bloomberg)