Microsoft pushes staff to use internal AI tools more, and may consider this in reviews. 'Using AI is no longer optional.'
Microsoft is asking some managers to evaluate employees based on how much they use AI internally, and the software giant is considering adding a metric related to this in its review process, Business Insider has learned.
Julia Liuson, president of the Microsoft division responsible for developer tools such as AI coding service GitHub Copilot, recently sent an email instructing managers to evaluate employee performance based in part on their use of internal AI tools like Copilot.
"AI is now a fundamental part of how we work," Liuson wrote. "Just like collaboration, data-driven thinking, and effective communication, using AI is no longer optional — it's core to every role and every level."
Liuson told managers that AI "should be part of your holistic reflections on an individual's performance and impact."
Microsoft's performance requirements vary from team to team, and some teams are considering adding a more formal metric for internal AI tool use to performance reviews in the next fiscal year, according to a person familiar with the situation. This person asked not to be identified discussing private matters.
These changes are meant to address what Microsoft sees as lagging internal adoption of its Copilot AI services, according to two other people with knowledge of the plans. The company wants to increase usage broadly, but it also wants the employees building these products to have a better understanding of the tools.
In Liuson's organization, GitHub Copilot is facing increasing competition from AI coding services including Cursor. Microsoft lets employees use some external AI tools that meet certain security requirements. Staff are currently allowed to use coding assistant Replit, for example, one of the people said.
A recent note from Barclays cited data suggesting that Cursor recently surpassed GitHub Copilot in a key part of the developer market.
Competition among coding tools is even becoming a sticking point in Microsoft's renegotiation of its most important partnership, with OpenAI. OpenAI is considering acquiring Cursor competitor Windsurf, but Microsoft's current deal with OpenAI would give it access to Windsurf's intellectual property, and neither Windsurf nor OpenAI wants that, a person with knowledge of the talks said.

Related Articles


The Hill
an hour ago
AI is rewiring the next generation of children
Much of the public discourse around artificial intelligence has focused, understandably, on its potential to fundamentally alter the workforce. But we must pay equal attention to AI's threat to fundamentally alter humanity — particularly as it continues to creep, unregulated, into early childhood. AI may feel like a developing force largely disconnected from the way we raise children. The truth is, AI is already impacting children's developing brains in profound ways.

'Alexa' now appears in babies' first vocabularies. Toddlers increasingly expect everyday objects to respond to voice commands — and grow frustrated when they don't. And now, one of the world's largest toy companies has launched a 'strategic' partnership with OpenAI. Research shows that children as young as three can form social bonds with artificial conversational agents that closely resemble the ones they develop with real people.

The pace of industry innovation far outstrips the speed of research and regulation. And our kids' wellbeing is not at the center of these inventions. Consider Meta's chatbots, capable of engaging in sexually explicit exchanges — including while posing as minors — which are available to users of all ages. Or Google's plans to launch an AI chatbot for children under 13, paired with a toothless disclaimer: 'Your child may encounter content you don't want them to see.'

Now, with the Senate negotiating a budget bill that would outright ban states from regulating AI for the next decade, parents stand to be left alone to navigate yet another grand social experiment conducted on their children — this time with graver consequences than we've yet encountered.

As a pediatric physician and researcher who studies the science of brain development, I've watched with alarm as the pace of AI deployment outstrips our understanding of its effects. Nowhere is that riskier than in early childhood, when the brain is most vulnerable to outside influence.
We simply do not yet know the impact of introducing young brains to responsive AI. The most likely outcome is that it offers genuine benefits alongside unforeseen risks; risks as severe as the fundamental distortion of children's cognitive development.

This double-edged sword may sound familiar to anyone versed in the damage that social media has wrought on a generation of young people. Research has consistently identified troubling patterns in adolescent brain development associated with extensive technology use, such as changes in attention networks, reward processing pathways similar to behavioral dependencies, and impaired face-to-face social skill development. Social media offered the illusion of connection, but left many adolescents lonelier and more anxious. Chatbot 'friends' may follow the same arc — only this time, the cost isn't just emotional detachment, but a failure to build the capacity for real connection in the first place.

What's at stake for young children is even more profound. Infants and young children aren't just learning to navigate human connection like teenagers; they're building their very capacity for it. The difference is crucial: Teenagers' social development was altered by technology; young children's social development could be hijacked by it.

To be clear, I view some of AI's potential with optimism and hope, frankly, for the relief these tools might provide to new, overburdened parents. As a pediatric surgeon specializing in cochlear implantation, I believe deeply in the power of technology to bolster the human experience. The wearable smart monitor that tracks an infant's every breath and movement might allow a new mom with postpartum anxiety to finally get the sleep she desperately needs. The social robot that is programmed to converse with a toddler might mean that child receives two, five or ten times the language interaction he could ever hope to receive from his loving but overextended caretakers.
And that exposure might fuel the creation of billions of new neural connections in his developing brain, just as serve-and-return exchanges with adults are known to. But here's the thing: It might not. It might not help wire the brain at all. Or, even worse, it might wire developing brains away from connecting to other humans at all. We might not even notice what's being displaced at first.

I have no trouble believing that some of these tools, with their perfect language models and ideally timed engagements, will, in fact, help children learn and grow — perhaps even faster than before. But with each interaction delegated to AI, with each moment of messy human connection replaced by algorithmic efficiency, we're unknowingly altering the very foundations of how children learn to be human. This is what keeps me up at night.

My research has helped me understand just how profoundly important attachment is to the developing brain. In fact, the infant brain has evolved over millennia to learn from the imperfect, emotionally rich dance of human interaction: the microsecond delays in response, the complex layering of emotional and verbal communication that occurs in even the simplest parent-child exchange. These inefficiencies aren't bugs in childhood development; they're the features that build empathy and resilience.

It is safe to say the stakes are high. Navigating this next period of history will require parents to exercise thoughtful discernment. Rather than making a single, binary choice about AI's role in their lives and homes, parents will navigate hundreds of smaller decisions.

My advice for parents is this: Consider those technologies that bolster adult-child interactions. Refuse, at least for the time being, those that replace you. A smart crib that analyzes sleep patterns and suggests the optimal bedtime, leading to happier evenings with more books and snuggles? Consider it! An interactive teddy bear that does the bedtime reading for you? Maybe not.
But parents need more than advice. Parents need, and deserve, coordinated action. That means robust, well-funded research into AI's effects on developing brains. It means regulation that puts child safety ahead of market speed. It means age restrictions, transparency in data use, and independent testing before these tools ever reach a nursery or classroom.

Every time we replace a human with AI, we risk rewiring how a child relates to the world. And the youngest minds — those still building the scaffolding for empathy, trust and connection — are the most vulnerable of all. The choices we make now will determine whether AI becomes a transformative gift to human development, or its most profound threat.

Dana Suskind, MD, is the founder and co-director of the TMW Center for Early Learning + Public Health; founding director of the Pediatric Cochlear Implant Program; and professor of surgery and pediatrics at the University of Chicago.
Yahoo
2 hours ago
SoftBank's Founder Lays Out Vision to Be No. 1 in Artificial Superintelligence
TOKYO—SoftBank's founder wants to make his company the world leader in artificial superintelligence — a hypothetical form of AI that is smarter than humans — within the next 10 years. 'I am betting all in on the world of ASI,' SoftBank Group Chief Executive Masayoshi Son said at an annual shareholder meeting held in Tokyo on Friday.

Son said that in the next decade, just a handful of companies will reap the benefits from the roughly 600 trillion yen, equivalent to $4.155 trillion, in profit that stands to be made from ASI.

A key part of Son's strategy is strengthening the Japanese technology investment company's relationship with OpenAI. SoftBank will have invested up to $32 billion in OpenAI by the end of this year, the CEO said, making it one of the largest single investments ever made in a private company. In February, the two companies announced a plan for a joint venture to provide major Japanese companies with advanced enterprise AI called 'Cristal intelligence.'

'OpenAI will eventually go public and become the most valuable company on Earth,' Son said, expecting a listing to happen in the next few years.

SoftBank, which has backed high-profile tech names such as Alibaba and Arm, has been stepping up its push into AI. Last July, it acquired U.K.-based AI chip maker Graphcore, and this year it announced the acquisition of U.S. semiconductor design company Ampere Computing in a $6.5 billion deal. Earlier this year, SoftBank and OpenAI announced a joint project called Stargate to build infrastructure for the ChatGPT maker. Database company Oracle and MGX, an investor backed by the United Arab Emirates, are also equity partners in the venture.
The companies have pledged to invest up to $500 billion in Stargate over the next four years.

SoftBank's investment plans have come under scrutiny as Japan tries to close a deal with the Trump administration, which is looking to fill a trade gap and invite more foreign investment into the U.S. Asked by a shareholder about his relationship with President Trump, Son emphasized the importance of working closely with the U.S. administration. 'America is the world's largest AI hub and the technical epicenter of this revolution,' Son said. 'America is where the greatest opportunities lie.'

News Corp, owner of The Wall Street Journal and Dow Jones Newswires, has a content-licensing partnership with OpenAI.

Write to Megumi Fujikawa at


CNET
3 hours ago
Microsoft Will Delete Your Passwords in One Month: Do This ASAP
Passwords are a thing of the past for Microsoft Authenticator. Starting in August, Microsoft will require you to use passkeys instead of keeping all of your Microsoft passwords on its mobile app, and your old passwords will vanish. But that's not bad news.

Passkeys can cut out risky password habits that 49% of US adults have, according to a recent survey by CNET. Making it a practice to use the same password for multiple accounts or to include personal hints, like your birthday, can be risky. Such passwords can be an easy giveaway for hackers to guess, which can lead to identity theft and fraud. Here's what you need to know about Microsoft's timeline for the switch and how to set up passkeys for your Microsoft accounts before it's too late.

Microsoft Authenticator will stop supporting passwords

Microsoft Authenticator houses your passwords and lets you sign into all of your Microsoft accounts using a PIN, facial recognition such as Windows Hello, or other biometric data, like a fingerprint. Authenticator can also be used in other ways, such as verifying your identity if you forgot your password, or providing two-factor authentication as an extra layer of security for your Microsoft accounts.

In June, Microsoft stopped letting users add passwords to Authenticator. Here's a timeline of the other changes you can expect, according to Microsoft:

July 2025: You won't be able to use the autofill password function.
August 2025: You'll no longer be able to use saved passwords.

If you still want to use passwords instead of passkeys, you can store them in Microsoft Edge. However, CNET experts recommend adopting passkeys during this transition. "Passkeys use public key cryptography to authenticate users, rather than relying on users themselves creating their own (often weak or reused) passwords to access their online accounts," said Attila Tomaschek, CNET software senior writer and digital security expert.
Why passkeys are a better alternative to passwords

So what exactly is a passkey? It's a credential created by the Fast Identity Online (FIDO) Alliance that uses biometric data or a PIN to verify your identity and access your account. Think about using your fingerprint or Face ID to log into your account. That's generally safer than using a password that is easy to guess or susceptible to a phishing attack.

"Passwords can be cracked, whereas passkeys need both the public and the locally stored private key to authenticate users, which can help mitigate risks like falling victim to phishing and brute-force or credential-stuffing attacks," Tomaschek added.

Passkeys aren't stored on servers like passwords. Instead, they're stored only on your personal device. More conveniently, this takes the guesswork out of remembering your passwords and removes the need for a password manager.

How to set up a passkey in Microsoft Authenticator

Microsoft said in a May 1 blog post that it will automatically detect the best passkey to set up and make that your default sign-in option. "If you have a password and 'one-time code' set up on your account, we'll prompt you to sign in with your one-time code instead of your password. After you're signed in, you'll be prompted to enroll a passkey. Then the next time you sign in, you'll be prompted to sign in with your passkey," according to the blog post.

To set up a new passkey, open the Authenticator app on your phone. Tap on your account and select "Set up a passkey." You'll be prompted to log in with your existing credentials. After you're logged in, you can set up the passkey.
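Tomaschek's point about needing both the public key and the locally stored private key is easier to see in code. Below is a minimal, purely illustrative Python sketch of the challenge-response idea behind passkeys, using a tiny textbook RSA key pair as a stand-in; real passkeys use FIDO2/WebAuthn with device-bound keys and modern algorithms, and every name here is hypothetical.

```python
# Illustrative sketch only: real passkeys use FIDO2/WebAuthn with device-bound
# keys and modern algorithms, not textbook RSA. All names here are hypothetical.
import hashlib
import secrets

# Tiny textbook-RSA key pair standing in for a passkey's key pair.
# The server stores only the public key (n, e); d never leaves the device.
p, q = 61, 53
n = p * q   # public modulus (3233)
e = 17      # public exponent
d = 2753    # private exponent, kept on the device

def digest_mod_n(challenge: bytes) -> int:
    """Hash the challenge and reduce it into the RSA modulus."""
    return int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n

def sign(challenge: bytes) -> int:
    """Device side: answer the server's challenge with the private key."""
    return pow(digest_mod_n(challenge), d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the answer using only the public key."""
    return pow(signature, e, n) == digest_mod_n(challenge)

# Login flow: the server sends a fresh random challenge, the device signs it.
challenge = secrets.token_bytes(32)
sig = sign(challenge)
print(verify(challenge, sig))            # True: the device holds the private key
print(verify(challenge, (sig + 1) % n))  # False: a tampered answer is rejected
```

Because the server keeps only the public key and issues a fresh random challenge each time, there is no reusable secret to phish or to steal in a database breach, which is the property Tomaschek describes.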