
Fox News AI Newsletter: The dangers of oversharing with AI tools
- Dangers of oversharing with AI tools
- Instagram Teen Accounts unveils new built-in protections to block nudity, livestreams
- 'Sound of Freedom' producer says AI tools helped nab child trafficker that eluded FBI for 10 years
DON'T OVERSHARE DEETS: Have you ever stopped to think about how much your chatbot knows about you? Over the years, tools like ChatGPT have become incredibly adept at learning your preferences, habits and even some of your deepest secrets. But while this can make them seem more helpful and personalized, it also raises serious privacy concerns. As much as you learn from these AI tools, they learn just as much about you.
GREATER CONTROL: Instagram on Tuesday announced new built-in protections for Instagram Teen Accounts and has expanded its suite of features to the Facebook and Messenger applications.
MAJOR VICTORY: Child predators are on high alert as organizations around the globe have begun rolling out artificial intelligence tools to bring sex traffickers to justice and rescue young victims, according to "Sound of Freedom" executive producer Paul Hutchinson.
INDUSTRIAL SUPER-HUMANOID ROBOT: In a groundbreaking development, California-based robotics and artificial intelligence company Dexterity has unveiled Mech, the world's first industrial super-humanoid robot.
FOLLOW FOX NEWS ON SOCIAL MEDIA
Facebook | Instagram | YouTube | Twitter | LinkedIn
SIGN UP FOR OUR OTHER NEWSLETTERS
Fox News First | Fox News Opinion | Fox News Lifestyle | Fox News Health
DOWNLOAD OUR APPS
Fox News | Fox Business | Fox Weather | Fox Sports | Tubi
WATCH FOX NEWS ONLINE
STREAM FOX NATION
Stay up to date on the latest AI technology advancements and learn about the challenges and opportunities AI presents now and for the future with Fox News here.
Related Articles
Yahoo (30 minutes ago)
Asimily Adds Enhanced IoT Password Management and Device Patching to Its Comprehensive Security Platform
SUNNYVALE, Calif., July 28, 2025 (GLOBE NEWSWIRE) -- Asimily, the only complete IoT, OT, and IoMT Risk Mitigation Platform, today announced the release of several new features designed to help organizations across all industries efficiently secure and manage IoT devices while continuing down its path of cybersecurity innovation. These features are:
- IoT Password Management, which significantly simplifies the execution of password best practices across devices from multiple manufacturers.
- IoT Patching, which offers a 200% increase in supported manufacturers whose devices can now be automatically updated by Asimily.
- An intuitive new user interface designed for speed and efficiency, particularly for busy security and IT teams.
'Organizations with device fleets have always struggled to keep them updated. Unlike servers and operating systems, there is no streamlined process owned by the software manufacturer for IoT. This has always forced organizations to devote significant time and money to this essential line of defense,' said Shankar Somasundaram, CEO of Asimily. 'With our new IoT Management module and its Password Management and Patching capabilities, devices get automatically and fully updated faster with far less time and effort, helping prevent successful attacks from establishing a foothold within a company's networks.'
IoT devices, such as printers, IP cameras, teleconference devices and network access points, are increasingly common targets for cyberattacks. Securing and managing IoT fleets requires the right software, processes, and skilled personnel to balance operational functionality and security. These features join the Asimily platform as crucial components, purpose-built to address the unique challenges associated with IoT.
Preventing Unauthorized Access with IoT Password Management
Under the IoT Management module, Asimily has added IoT Password Management.
This feature helps organizations enforce stronger credential policies and reduce the risk of unauthorized access to critical IoT infrastructure. It makes organizational adherence to best practices – strong passwords, no re-use – much easier while still allowing devices to operate with minimal interruption. Asimily's IoT Patching and IoT Password Management work together to prevent unauthorized access, allowing patching to be performed with a click, according to a schedule, or automatically.
Increased Manufacturer Support for IoT Management
Asimily has expanded its manufacturer support for IoT Management. This broader support ensures that even more devices can be automatically patched with Asimily, enabling better security for organizations. As businesses across industries continue to adopt new IoT devices, the expansion of this feature enables organizations to confidently lean into IoT while scaling security practices as their fleets grow. Since the initial launch of IoT Patching in March 2025, the number of supported vendors has doubled, making thousands more customer devices easily updatable. Asimily is on track to increase the number of supported vendors by 400% within a year, dramatically expanding its direct-patching coverage.
New User Interface
Asimily recently refreshed its user interface to simplify adoption, organize crucial features around common workflows, reduce friction in accomplishing critical risk mitigation tasks, and support a best-in-class user experience. Driven by extensive research and testing, the new interface reinforces Asimily's commitment to innovation and enables users to take decisive action across IoT, OT, and IoMT infrastructure.
Ready to Strengthen Your IoT Security?
See how Asimily's new capabilities make device management faster, safer, and easier than ever here.
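The workflow the release describes (audit credential age across a fleet, then patch with a click, on a schedule, or automatically) can be illustrated with a generic sketch. Everything below, including the device names, the fields, and the 90-day rotation threshold, is hypothetical and is not Asimily's product or API; it only shows the kind of fleet-wide policy check such a platform automates.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical policy threshold; real platforms would make this configurable.
MAX_PASSWORD_AGE = timedelta(days=90)

@dataclass
class Device:
    name: str
    manufacturer: str
    password_last_rotated: date
    firmware_version: str
    latest_firmware: str

def audit_fleet(devices, today):
    """Flag devices that violate the credential policy or lag on patches."""
    stale_passwords = [d.name for d in devices
                       if today - d.password_last_rotated > MAX_PASSWORD_AGE]
    needs_patch = [d.name for d in devices
                   if d.firmware_version != d.latest_firmware]
    return stale_passwords, needs_patch

# Invented example fleet: one camera with an old password and old firmware,
# one print server that is fully up to date.
devices = [
    Device("lobby-camera", "ExampleCam", date(2025, 1, 10), "5.1", "5.3"),
    Device("print-server", "ExamplePrint", date(2025, 7, 1), "2.0", "2.0"),
]
stale, patch = audit_fleet(devices, date(2025, 7, 28))
```

In a real deployment the flagged devices would feed the scheduled or automatic remediation step rather than just a report.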
About Asimily
Asimily has built an industry-leading risk management platform that secures IoT devices for organizations in healthcare, manufacturing, higher education, government, life sciences, retail, and finance. With the most extensive knowledge base of IoT and security protocols, Asimily inventories and classifies every device across organizations, both connected and standalone. Because risk assessment – and threats – are not a static target, Asimily monitors organizations' devices, detects anomalous behavior, and alerts operators to remediate any identified anomalies. With secure IoT devices and equipment, Asimily customers know their business-critical devices and data are safe. For more information on Asimily, visit Asimily
Contact: Kyle Peterson, kyle@
A photo accompanying this announcement is available at


Geek Wire (2 hours ago)
In AI we trust?
A recent study by Stanford University's Social and Language Technologies Lab (SALT) found that 45% of workers don't trust the accuracy, capability, or reliability of AI systems. That trust gap reflects a deeper concern about how AI behaves when the stakes are high, especially in business-critical environments.

Hallucinations in AI may be acceptable when the stakes are low, like drafting a tweet or generating creative ideas, where errors are easily caught and carry little consequence. But in the enterprise, where AI agents are expected to support high-stakes decisions, power workflows, and engage directly with customers, the tolerance for error disappears. True enterprise-grade reliability demands more: consistency, predictability, and rigorous alignment with real-world context, because even small mistakes can have big consequences.

This challenge is referred to as 'jagged intelligence': AI systems continue to shatter performance records on increasingly complex benchmarks while sporadically struggling with simpler tasks that most humans find intuitive and can reliably solve. For example, a model might be able to defeat a chess grandmaster yet be unable to complete a simple child's puzzle. This mismatch between brilliance and brittleness underscores why enterprise AI demands more than general LLM intelligence alone; it requires contextual grounding, rigorous testing, and continuous fine-tuning.

That's why at Salesforce, we believe the future of AI in business depends on achieving what we call Enterprise General Intelligence (EGI) – a new framework for enterprise-grade AI systems that are not only highly capable but also consistently reliable across complex, real-world scenarios. In an EGI environment, AI agents work alongside humans, integrated into enterprise systems and governed by strict rules that limit what actions they can take.
To achieve this, we're implementing a clear, repeatable three-step framework – synthesize, measure, and train – and applying it to every enterprise-grade use case.

A Three-Step Framework for Building Trust

Building AI agents within the enterprise demands a disciplined process that grounds models in business-contextualized data, measures performance against real-world benchmarks, and continuously fine-tunes agents to maintain accuracy, consistency, and safety.

Synthesize: Building trustworthy agents starts with safe, realistic testing environments. That means using AI-generated synthetic data that closely resembles real inputs, applying the same business logic and objectives used in human workflows, and running agents in secure, isolated sandboxes. By simulating real-world conditions without exposing production systems or sensitive data, teams can generate high-fidelity feedback. This method, known as reinforcement learning, is a critical foundation for developing enterprise-ready AI agents.

Measure: Reliable agents require clear, consistent benchmarks. Measuring performance isn't just about tracking accuracy; it's about defining what each specific use case requires. The level of precision needed varies: an agent offering product recommendations may tolerate a wider margin of error than one evaluating loan applications or diagnosing system failures.
By establishing tailored benchmarks, such as Salesforce's initial LLM benchmark for CRM use cases, and acceptable performance thresholds, teams can evaluate agent output in context and iterate with purpose, ensuring the agent is fit for its intended role before it ever reaches production.

Train: Reliability isn't achieved in a single pass; it's the result of continuous refinement. Agents must be trained, tested, and retrained in a constant feedback loop. That means generating fresh data, running real-world scenarios, measuring outcomes, and using those insights to improve performance. Because agent behavior can vary across runs, this iterative process is essential for building stability over time. Only through repeated training and tuning can agents reach the level of consistency and accuracy required for enterprise use.

Turning AI Agents Into Reliable Enterprise Partners

Building AI agents for the enterprise is much more than simply deploying an LLM for business-critical tasks. Salesforce AI Research's latest research shows that generic LLM agents successfully complete only 58% of simple tasks and barely more than a third of more complex ones. Truly effective EGI agents that are trustworthy in high-stakes business scenarios require far more than an off-the-shelf DIY LLM plug-in. They demand a rigorous, platform-driven approach that grounds models in business-specific context, enforces governance, and continuously measures and fine-tunes performance.

The AI we deploy in Agentforce is built differently. Agentforce doesn't run by simply plugging into an LLM. The agents are grounded in business-specific context through Data Cloud, made trustworthy by our enterprise-grade Trust Layer, and designed for reliability through continuous evaluation and optimization using the Testing Center. This platform-driven approach ensures that agents are not only intelligent, but consistently enterprise-ready.
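The synthesize-measure-train loop described above can be reduced to a minimal sketch. The toy agent, the scenario generator, and the threshold values below are all invented for illustration and are not Salesforce's or Agentforce's actual implementation; the point is only the shape of the loop: generate synthetic cases, measure against a use-case-specific threshold, and keep refining until the bar is met.

```python
import random

random.seed(7)  # reproducible toy run

def synthesize_scenarios(n):
    """Synthesize: generate synthetic test cases mirroring a business workflow."""
    return [{"input": i, "expected": i % 2 == 0} for i in range(n)]

def agent(case, skill):
    """Toy agent whose answers are correct with probability `skill`."""
    correct = random.random() < skill
    return case["expected"] if correct else not case["expected"]

def measure(skill, scenarios):
    """Measure: accuracy against the synthetic benchmark."""
    hits = sum(agent(c, skill) == c["expected"] for c in scenarios)
    return hits / len(scenarios)

# Per-use-case thresholds (hypothetical): recommendations tolerate a wider
# margin of error than loan review, as the article argues.
THRESHOLDS = {"product_recs": 0.80, "loan_review": 0.95}

# Train: refine the agent and re-measure until it clears the bar.
skill = 0.70
scenarios = synthesize_scenarios(200)
accuracy = measure(skill, scenarios)
while accuracy < THRESHOLDS["loan_review"]:
    skill = min(1.0, skill + 0.05)   # stand-in for a real fine-tuning step
    accuracy = measure(skill, scenarios)
```

In a real system the "train" step would be fine-tuning against the feedback, not a scalar bump, but the gating logic (no deployment until the use case's threshold is met) is the same.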
As businesses evolve toward a future where specialized AI agents collaborate dynamically in teams, complexity increases exponentially. That's why leveraging frameworks that synthesize, evaluate, and train agents before deployment is critical. This new framework builds the trust needed to elevate AI from a promising technology into a reliable enterprise partner that drives meaningful business outcomes.


Digital Trends (2 hours ago)
Web browsers are entering a new era where AI skills take over from extensions
'The browser is bigger than chat. It's a more sticky product, and it's the only way to build agents. It's the only way to build end-to-end workflows.' Those were the comments of Perplexity CEO Aravind Srinivas in a recent interview. The Perplexity co-founder was talking about the future of web browsers, AI agents, and automations in web browsers. Srinivas was bullish on the prospects, partly because his company is already testing a buzzy new browser called Comet. Currently in an invite-only beta phase, the browser comes with an agent that can handle complex and time-consuming tasks on your behalf.

Think of it like an AI tool such as ChatGPT or Gemini, but one that lives exclusively in your browser. The agent-in-browser approach, as Srinivas argues, is more familiar and flexible. You don't have to deal with the usual local permission and cross-app workflow restrictions. Plus, browsers will work just the way we're used to with products like Chrome or Safari. But the undercurrents are wildly different, and the biggest change could be the sunsetting of browser extensions in favor of AI skills and user-generated agents. Interestingly, the foundations for these tools were laid over a year ago, but we are only hearing about them with the arrival of AI-first browsers like Dia and Comet.

AI skills are the new work champions

All the talk of AI agents and skills sounds like a bunch of tech jargon, so let me break it down. In the Dia browser, I recently created a skill called 'expand.' How did I do it without writing a single line of code? I simply described it in the following words: 'When I use this skill and paste a snippet, do a deep web search, and pull up the entire history in the form of an article in a timely order. Pull information only from reliable news outlets.' I read and write articles for a living, and I often come across snippets and events in articles that I am not familiar with.
For such scenarios, all I have to do is select the relevant text (or copy-paste it into the chat sidebar) and use a '/' command to summon the 'expand' skill. As described above, the AI agent in the Dia browser will search for mentions of my target in top news outlets and create a brief report about it in chronological order. This saves me a lot of precious time that would otherwise be spent on wild Google Search attempts. More importantly, I don't even have to open another tab, and I can ask follow-up questions in the same chat box within the active reading tab. It's quick and convenient.

I don't know of an extension that can do exactly what this 'expand' skill does for me. It's not possible, either: I created it with a specific purpose and intent. And I can create as many as I want, or fine-tune them further to suit my workflow. I've created another one called 'research' that references a work (or phrase) and performs web research by looking exclusively at peer-reviewed science papers. The Dia user community is even saving money by creating skills that hunt for coupon codes right before checkout. For my Amazon shopping, I've created one that combines the reviews, ratings, and features of products across different Amazon tabs, creates a comparison table, and helps me make the best choice. All of that happens by typing a single word! Another one quickly checks my emails for grammatical errors and style-guide clarity. There's one that creates quiz-based reading material for the kids I teach at a nearby non-profit institution, based on the learning material I have prepared.

'Just made a @diabrowser skill that instantly saved me money' — Egor (@eg0rev) July 23, 2025

The students love the fun and playful tone of the multiple-choice questions that test their current affairs knowledge. There's even an official Dia gallery where you can find skills created by Dia users, and a crowd-sourced web dashboard where you can find even more.
But here's the main reason why I think browser skills are a bigger deal than extensions: anyone can create them by simply describing what they want. With extensions, you need coding knowledge and a basic grasp of how the web and its browsing architecture work.

Security is another reason I would put more faith in browser skills than extensions. There is a long history of browser extensions being weaponized by bad actors to seed malware. An average user can't inspect or make sense of an extension's inner workings, and only realizes the folly when the damage has been done. The situation with AI skills in browsers is as transparent as it gets. How exactly a skill works is described in detail, in natural language, and without any hidden caveats. You just need to read it thoroughly, or simply copy it and create your own with extra modifications. That approach is flexible, a lot safer, and puts the power squarely in users' hands.

Browser agents are here to stay

Next, we have browser agents. Opera has already implemented one, and it is already offering a more advanced version called Operator. Then you have tools like ChatGPT Agent and Perplexity's Comet browser. Think of it as Siri, but for web browsing. Agents are more suited for complex, time-consuming tasks. And they work best when they get access to the services you visit on a daily basis, like your email and calendar.

For example, this is what I did in Perplexity's Comet browser last night: 'Check my inbox and give me an update on all the interview requests with a scientist or company executive that I intended to proceed with. Focus on conversations where I expressed the possibility of virtual interviews, instead of an in-person meeting.' Without opening another tab, the built-in Assistant went through my Gmail inbox, looked up the relevant emails, and then provided me with a list of such interactions in a well-formatted view.
For added convenience, it even included one-click Gmail links so that I can directly open an email chain without having to manually dig in. It's great for a lot of other things, too. For example, during a Twitter AMA, I simply asked it to pick out the responses by the speaker and list them as bullet points. That saved me a lot of back-and-forth time opening and closing X conversation chains. For travel planning, shopping, or even consuming videos, the assistant in the Comet browser works fine.

The only 'ick' is that if you need it to get more personal work done, you will need to provide access to connectors. For example, to handle your Gmail, Calendar, and Drive, you will need to enable access. I did it for my WhatsApp account as well, and it worked really well in the Comet browser. Not everyone will feel at ease doing that, and the caution is totally warranted. For such scenarios, Google and OpenAI offer similar agentic features for Gemini and ChatGPT, respectively.

There is no going back

Just as you create skills in Dia by simply typing or narrating your requirements, Gemini and ChatGPT also let you create custom agents for specific tasks. Google calls them Gems, while OpenAI refers to them as GPTs. And yes, you can share them just like skills. Using them is free, but to create them, you'll need a subscription that costs $20 per month. I've created numerous Gems and custom GPTs to speed up my mundane chores. For personal social posting, I've created a Gem that breaks down articles I've written into smaller bits, which are then posted as a chain on X. Likewise, I've created custom agents to handle my emails. One of the Gems simply needs me to type 'yes' or 'no,' and it will accordingly write a polite response while picking up all the context from the email. With connectors coming into the picture, you can link them to as many services as you want. The best part about these Gems is that you can effortlessly use them across a desktop browser and mobile apps alike.
Extensions require you to stick with a desktop browser. Some mobile browsers do support extensions, but they are rare. Moreover, they don't offer the same flexibility and peace of mind as custom browser skills or agents created by users. ChatGPT Agent and Google's Project Mariner are a new breed of AI assistants that are tailor-made for web-based tasks, just like the assistant built into Perplexity's Comet browser. Unlike an extension, they can handle multi-step workflows, and you can take over at any stage. Furthermore, you can modify the inner workings of your web browsing automation and tailor the AI skills to your exact specifications, something that's not possible with extensions.

Of course, they are not perfect. 'At the same time, you can take over and complete things when it's not able to do it, because no AI agent is foolproof, especially when we are at a time when reasoning models are still far from perfection,' admits Perplexity's CEO. But the shift is clearly evident. Browser extensions are not going to vanish overnight, but browsing agents and AI skills created by users are going to take over. It's only a matter of time before the barriers (read: subscription fee) come down!