
Latest news with #GPT4

Meta Appoints Ex-OpenAI Scientist Shengjia Zhao to Lead New Superintelligence Lab

Entrepreneur

3 days ago



Meta Platforms has appointed Shengjia Zhao, a leading figure in the development of OpenAI's ChatGPT, as chief scientist of its newly launched Superintelligence Lab. The high-profile move marks a significant step in Meta's accelerating drive to position itself at the forefront of advanced artificial intelligence.

CEO Mark Zuckerberg shared the announcement on Friday through Threads. He said Zhao will guide the lab's scientific direction and collaborate closely with both Zuckerberg and Meta's Chief AI Officer Alexandr Wang. Wang joined the company earlier this year after Meta took a substantial stake in his former company, Scale AI.

Zhao, previously a research scientist at OpenAI, played a pivotal role in creating GPT-4 and lighter models such as GPT-4.1 and o3. He is among at least eight researchers who have recently transitioned from OpenAI to Meta. The influx of talent signals Meta's intent to rapidly close the distance with competitors in the race toward building artificial general intelligence.

The creation of the Superintelligence Lab is part of Meta's broader effort to establish a premier AI research division. The lab is distinct from FAIR, Meta's long-standing AI unit led by deep learning pioneer Yann LeCun. While FAIR has focused on foundational research, the new lab aims to develop what Zuckerberg has described as full general intelligence. Zuckerberg also confirmed that Meta plans to open-source the work produced by the Superintelligence Lab, a strategy that has drawn mixed reactions within the AI community, with some praising the transparency and others warning of risks linked to such openness.
Meanwhile, Meta's recruitment campaign has unsettled OpenAI. Internal messages leaked this month revealed OpenAI Chief Research Officer Mark Chen comparing Meta's tactics to "someone breaking into our home and stealing something." In response, OpenAI is reportedly reassessing its compensation practices and offering staff additional time off to curb further departures.

OpenAI CEO Sam Altman has publicly criticised what he views as profit-driven hiring practices. He alleged that Meta has lured researchers with offers reaching USD 100 million in signing bonuses, a claim dismissed as exaggerated by Meta's Chief Technology Officer Andrew Bosworth. However, reports of even higher offers, including an unverified USD 1.25 billion compensation package over four years, illustrate the escalating competition for elite AI talent.

While Altman argues that OpenAI's mission-focused approach offers a stronger long-term foundation, others in the industry see Meta's strategy as justified. Google DeepMind CEO Demis Hassabis called the hiring surge a rational response given Meta's desire to catch up. With over USD 14 billion invested in AI infrastructure and partnerships, Meta is making its intentions clear. The addition of Zhao and other key hires underscores the company's determination to lead—not just follow—the next wave of AI development.

Meta picks ex-OpenAI researcher over Yann LeCun as chief scientist of AI superintelligence unit

India Today

5 days ago



Meta has been stealing away talent from its biggest rivals as it gears up to become a serious contender in the AI race. And now, after putting together a team, CEO Mark Zuckerberg has announced that Shengjia Zhao, a former OpenAI researcher and one of the co-authors of the original ChatGPT paper, has been named Chief Scientist of Meta's newly minted Superintelligence Lab.

Zhao, who quietly joined Meta in June, was instrumental in OpenAI's early successes, from building the first reasoning model, o1, to helping shape ChatGPT itself. That o1 model, incidentally, is the very one that sparked the 'chain-of-thought' craze, later picked up by the likes of Google and others.

Zuckerberg confirmed Zhao's promotion in a post on Threads, calling him 'our lead scientist from day one.' He went on to explain, 'Now that our recruiting is going well, and our team is coming together, we have decided to formalise his leadership role.' Zhao will now work directly under Alexandr Wang, the former Scale AI CEO who joined Meta in June as its Chief AI Officer. Wang has been tasked with nothing less than steering Meta towards artificial general intelligence (AGI): systems capable of reasoning and thinking at human or even superhuman levels.

The Superintelligence Lab, which Meta unveiled in June 2025, is the centrepiece of this new ambition. It operates separately from FAIR, Meta's long-established AI research unit, which remains under the leadership of AI veteran Yann LeCun. LeCun now reports to Wang, giving the latter clear oversight of Meta's two-pronged AI research operation.

Meta's AI hiring binge has been making headlines all summer. In just a couple of months, the company has lured away more than a dozen leading researchers from OpenAI, Apple, Google and Anthropic, including high-profile names such as Apple AI scientists Tom Gunter and Mark Lee. And the high-stakes AI talent hunt didn't stop with Shengjia Zhao.

As The Information first reported in June, Zhao joined alongside three other influential OpenAI researchers: Jiahui Yu, Shuchao Bi, and Hongyu Ren. The tech giant has also brought in Trapit Bansal, who previously collaborated with Zhao on AI reasoning models, and recruited a trio of engineers from OpenAI's Zurich office, each with expertise in computer vision.

To make it happen, CEO Mark Zuckerberg has personally taken the reins of recruitment, reportedly reaching out to candidates via email and even hosting potential hires at his private Lake Tahoe retreat. Not your average job interview venue. According to the reports, some offers have been eye-wateringly generous, with compensation packages reaching eight and nine figures. And Meta isn't playing around: many of these are 'exploding offers,' designed to pressure top researchers into signing within just a few days.

At present, Meta's most advanced open-source model, Llama 4, is still lagging behind rivals like OpenAI's GPT-4 and Google's Gemini. But the company is betting big on a new model, codenamed 'Behemoth,' which is expected to debut later this year. Zuckerberg sounded bullish about what's coming next. 'Together we are building an elite, talent-dense team that has the resources and long-term focus to push the frontiers of superintelligence research,' he said.

With Zhao now in the driver's seat and a growing team of AI heavyweights, Meta clearly wants to make sure that the next big breakthrough in AI has its logo stamped all over it.

This is why I use two separate ChatGPT accounts

Android Authority

20-07-2025



I'll admit it: I'm a bit of a recovering AI addict. While I've had mixed feelings about AI from the start, as someone who spends a lot of time lost in thought, I've found it can be a useful tool for ideation, proofreading, entertainment, and much more. Recently, I've started scaling back my usage for reasons beyond the scope of this article, but for a while, I actually had two paid ChatGPT accounts. I know what you're thinking, and you're right, it's a bit excessive. Still, in some cases, it really can make sense to have two accounts.

It all started when I found myself constantly hitting usage limits for my personal projects and entertainment, leaving me in a lurch when I needed AI for work-related tasks. For those who don't know, the ChatGPT Plus tier has different limits depending on the model. Some, like the basic GPT-4o, are virtually unlimited, while others have a firm daily or weekly count. For example, o3 lets you send 100 messages a week, o4-mini-high gives you 100 messages a day, and o4-mini gives you 300 a day. Outside of basic stuff like editing, I tend to rely the most on o3 and o4-mini-high, because they are actually willing to tell you that you're wrong, unlike many of the other models, which are people-pleasers to the extreme.

Realizing I was blowing through my message limits long before the week was up, I immediately started considering my options, including adding a Gemini subscription instead of a second ChatGPT one. Truthfully, I had tried both before and always found myself coming back to ChatGPT, so the decision was basically made for me.
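To make the math concrete, the per-model caps described above can be sketched as a small lookup table. This is only an illustration: the figures are the ones quoted in this article, and OpenAI adjusts its actual limits over time.

```python
# ChatGPT Plus per-model message caps as quoted in the article above.
# Illustrative only; real limits are set by OpenAI and change over time.
LIMITS = {
    "o3":           (100, "week"),
    "o4-mini-high": (100, "day"),
    "o4-mini":      (300, "day"),
}

def messages_left(model: str, used: int) -> int:
    """Remaining messages in the current window for a given model."""
    cap, _window = LIMITS[model]
    return max(cap - used, 0)

# e.g. after sending 87 o3 messages this week:
print(messages_left("o3", 87))        # 13
print(messages_left("o4-mini", 350))  # 0 (cap exhausted)
```

With mixed work and personal use drawing from the same weekly pool, it's easy to see how one account runs dry; splitting usage across two accounts effectively gives each use case its own counter.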
At that point, I began manually migrating some of my old chats over to the new account, basically copying and pasting core logs into the new ChatGPT account and deleting the corresponding records from my original mixed-use account. As a freelancer, my goal was to make sure anything related to clients was separated from my personal projects, which were mostly entertainment or experimental (like messing around with the API and similar tools just to learn).

I found this separation helpful for more than just avoiding blowing through my limits on the wrong thing. As you might know, ChatGPT can learn your preferences. It's not exactly learning or memory in the traditional sense; instead, it basically builds an abstract pattern of your communication styles and preferences. Let's just say my way of talking about personal matters is very different from my professional voice. Lots of cursing and the like. After splitting my usage, I noticed that ChatGPT actually became better suited to the specific tasks I was performing on each account, as it understood my preferences for each use case a little better. That's probably an oversimplification of how ChatGPT works, but you get the idea.

These days, I no longer pay for two accounts since I don't rely as heavily on ChatGPT or any AI tool anymore, but it's useful to keep my old logs around, so I still have a ChatGPT Plus account for business and a free account for personal use. This way, I also retain the option of renewing my paid subscription if my usage habits change again in the future.

How do you sign up for two accounts, and is this a TOS violation?

Think you could benefit from a second account? Signing up for two accounts is easy as long as you have at least two different email addresses. For payment, I used two different cards, though it's unclear whether that's really necessary.
The bigger question is whether it's actually okay to do this, or whether your accounts will get suspended for violating policy. When I first considered this, I did my research. According to the Terms of Service (TOS), there's no firm rule against having two accounts as long as you aren't purposely trying to circumvent usage limits. My first thought was, 'Well, I kind of am' — after all, running out of limits was a big part of my problem. Still, by separating accounts, I was doing more than just trying to increase my limits. By dividing business and personal/entertainment uses, I was also organizing my information better and making sure I didn't use up all my limits on personal stuff that would hurt my work productivity. Before, I'd burn through my limits pretty quickly on silly time-wasting stuff like writing alternate-timeline fiction and other entertainment.

Ultimately, having two accounts can be a bit of a gray area, but as long as you're careful about how and why you use each account, it's not technically against the TOS. For what it's worth, ChatGPT agrees, with some caveats. As the AI explains, two accounts are fine if:

  • Your main reason for separating is genuinely to keep business and personal activities distinct: billing, data, privacy, and not accidentally using up the business quota on personal stuff. This is a reasonable, defensible use.
  • You had one account and were hitting limits due to mixed usage; it's normal (and frankly smart) to create a second account for business, especially if your work depends on reliable access.

As noted by the ChatGPT bot itself, the TOS is mainly aimed at stopping people from abusing the system by creating multiple accounts to stack free or paid uses, or for heavy API stacking. Reading the actual TOS gives the same picture. Could this kind of 'gray area' usage still attract attention from OpenAI staff? Maybe, but as long as you're genuinely separating your use cases, there shouldn't be any major issues.
In fact, it's common practice to create accounts specifically for business use, including for tax purposes, and so I'd wager this is probably more common than many realize.

How AI chatbots are helping hackers target your banking accounts

Fox News

15-07-2025



AI chatbots are quickly becoming the primary way people interact with the internet. Instead of browsing through a list of links, you can now get direct answers to your questions. However, these tools often provide information that is completely inaccurate, and in the context of security, that can be dangerous. In fact, cybersecurity researchers are warning that hackers have started exploiting flaws in these chatbots to carry out AI phishing attacks. Specifically, when people use AI tools to search for login pages, especially for banking and tech platforms, the tools can return incorrect links. Once you click such a link, you might get directed to a fake website, which can then be used to steal personal information or login credentials.

Researchers at Netcraft recently ran a test on the GPT-4.1 family of models, which is also used by Microsoft's Bing AI and the AI search engine Perplexity. They asked where to log in to fifty different brands across banking, retail, and tech. Of the 131 unique links the chatbots returned, only about two-thirds were correct. Around 30 percent of the links pointed to unregistered or inactive domains, and another five percent led to unrelated websites. In total, more than one-third of the responses linked to pages not owned by the actual companies.

This means someone looking for a login link could easily end up on a fake or unsafe site. If attackers register those unclaimed domains, they can create convincing phishing pages and wait. Since the AI-supplied answer often sounds official, users are more likely to trust it without double-checking. In one recent case, a user asked Perplexity AI for the Wells Fargo login page. The top result wasn't the official Wells Fargo site; it was a phishing page hosted on Google Sites.
The fake site closely mimicked the real design and prompted users to enter personal information. Although the correct site was listed further down, many people would not notice or think to verify the link. The problem in this case wasn't specific to Perplexity's underlying model; it stemmed from Google Sites abuse and a lack of vetting in the search results surfaced by the tool. Still, the result was the same: a trusted AI platform inadvertently directed someone to a fake financial website.

Smaller banks and regional credit unions face even higher risks. These institutions are less likely to appear in AI training data or be accurately indexed on the web. As a result, AI tools are more prone to guessing or fabricating links when asked about them, raising the risk of exposing users to unsafe destinations.

As AI phishing attacks grow more sophisticated, protecting yourself starts with a few smart habits. Here are seven that can make a real difference:

1. Don't click chatbot-supplied login links right away. AI chatbots often sound confident even when they are wrong. If a chatbot tells you where to log in, go directly to the website by typing its URL manually or using a trusted bookmark.

2. Inspect the domain. AI-generated phishing links often use lookalike domains. Check for subtle misspellings, extra words, or unusual endings like ".site" or ".info" instead of ".com". If it feels even slightly off, do not proceed.

3. Enable two-factor authentication (2FA). Even if your login credentials get stolen, 2FA adds an extra layer of security. Choose app-based authenticators like Google Authenticator or Authy instead of SMS-based codes when available.

4. Go straight to the source. If you need to access your bank or tech account, avoid searching for it or asking a chatbot. Use your browser's bookmarks or enter the official URL directly. AI and search engines can sometimes surface phishing pages by mistake.

5. Report dangerous links. If a chatbot or AI tool gives you a dangerous or fake link, report it. Many platforms allow user feedback, which helps AI systems learn and reduces future risks for others.

6. Use built-in browser protection and antivirus software. Modern browsers like Chrome, Safari, and Edge include phishing and malware protection; enable these features and keep everything updated. Strong antivirus software on all your devices adds another safeguard against malicious links, phishing emails, and ransomware scams.

7. Use a password manager. Password managers not only generate strong passwords but can also help detect fake websites: they typically won't auto-fill login fields on lookalike or spoofed sites.

Attackers are changing tactics. Instead of gaming search engines, they now design content specifically for AI models. Double-check URLs for inconsistencies before entering any sensitive information, and since chatbots are still known to produce highly inaccurate responses due to AI hallucinations, verify anything a chatbot tells you before applying it in real life.
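The domain-inspection habit described above can be partly automated. Here is a minimal sketch, assuming you keep a personal allowlist of the banking domains you actually use; the allowlist and example URLs are purely illustrative, not an official list:

```python
# Check whether a link's host matches an allowlisted domain (or a subdomain
# of one) before clicking it. The allowlist and URLs are illustrative only.
from urllib.parse import urlparse

MY_BANK_DOMAINS = {"wellsfargo.com"}

def is_trusted(url: str) -> bool:
    """True only if the URL's hostname is an allowlisted domain or a subdomain."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in MY_BANK_DOMAINS)

print(is_trusted("https://www.wellsfargo.com/login"))        # True
print(is_trusted("https://wellsfargo-login.site/"))          # False: lookalike
print(is_trusted("https://sites.google.com/view/wf-login"))  # False: hosted page
```

A suffix check like this catches lookalike domains and hosted phishing pages such as the Google Sites example above, but it won't help if a trusted site itself is compromised, so it complements, rather than replaces, 2FA and bookmarks.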

AI replaced me, so I decided to ride the AI wave

Fast Company

10-07-2025



In 2022, I was hired to build out AI operations at a health-tech startup. At the time, we were pioneering the use of AI in healthcare, which required abundant human oversight. Over time, new GenAI models launched at an unprecedented pace, and new iterations like GPT-4 could solve a case in 30 seconds, compared to the four months it took my team. It quickly became clear to both my employer and me that my skills were no longer needed, and there were no clear opportunities to chart a new path at my job with my current skill set. I was left with no choice but to move on.

As I reignited my job search, I was keen on finding 'AI-proof' positions, roles that wouldn't be affected by the AI revolution, but I persisted with a traditional search. It wasn't until about five months later that I realized this approach wasn't working. Frustrated, I paused to rethink my entire strategy and questioned whether I was looking at the problem from the right angle. Then came my light-bulb moment: instead of thinking about what AI was going to do to me, I shifted my mindset to explore what it could do for me.

Secret weapon

AI quickly became my secret weapon. I created a custom 'CareerBuddy GPT' in ChatGPT to help me with rote work like drafting cover letters and tailoring my résumé to each individual job posting. Using AI cut the time I was spending on my job search by 70 to 80%, and it also saved me the headache; anyone who is grinding through a job search knows the process can be fatiguing. The best use I found for AI, however, was as a strategic partner: assessing my candidacy for roles, generating leads for my job search, and advising on the best ways to position my experience. By simply uploading my résumé or summarizing my objectives, CareerBuddy GPT identified people to reach out to, organizations to vet, and even open job listings that I may have missed.

Untapped resource

This resulted in landing a role at a fresh new startup, which, ironically, is all about perfecting the human-AI relationship. Ultimately, using AI in my job search helped me realize that collaborating with AI was my greatest untapped resource. I am currently leveraging a lot of what I taught myself to improve our organization's internal AI program, identifying where AI can fill the gaps and free up my teams for more creative opportunities.

There are some revolutions so momentous that we cannot avoid them even if we want to. Unfortunately, I couldn't sustain my health-tech position as that revolution unfolded. But the situation clarified an important lesson for me: the AI wave is here, and there are two options. You can get pulled in by the undertow, or you can grab a surfboard and ride the wave.

Maximizing productivity

AI can become your job hunter, career coach, personal shopper, or receptionist. It can help you save time, explore different paths, find space for creativity, and develop your own set of skills. My personal belief, albeit cautiously optimistic, is that human value is not going to vanish even if AI can replicate some of our capabilities. What AI can do is help maximize human productivity and help humans unlock value they don't even know they have.

To be clear, sharing my experience is not meant to invalidate other people's stories or deny the truth. As headlines fuel fears around AI replacing human workers, layoffs are happening more frequently. Microsoft just laid off 3% of its workforce (7K+ employees) in order to funnel more cash into its lofty AI goals, but the two shouldn't be mutually exclusive; in fact, it's better if they're not. We know AI is most helpful at completing rote work. As leadership frees up, say, software engineers to focus on coding, there is growing demand for AI prompt engineering support where human expertise is critically needed.

Companies can leverage the power of the AI-human relationship to 100x their productivity, rather than have AI replace the labor they're letting go. This is just one way that everyone, from the employee to the board member, can rethink how we approach AI. But we can all start by dispelling our fears that AI will replace us, and instead grab that surfboard, making AI our secret weapon and our key to unlocking human potential.
