
Latest news with #ITindustry

1Spatial (LON:SPA) Is Looking To Continue Growing Its Returns On Capital

Yahoo

5 days ago

  • Business
  • Yahoo


If we want to find a stock that could multiply over the long term, what are the underlying trends we should look for? In a perfect world, we'd like to see a company investing more capital into its business, and ideally the returns earned from that capital are also increasing. Basically, this means the company has profitable initiatives it can keep reinvesting in, which is a trait of a compounding machine. Speaking of which, we noticed some great changes in 1Spatial's (LON:SPA) returns on capital, so let's have a look.

Understanding Return On Capital Employed (ROCE)

If you haven't worked with ROCE before, it measures the 'return' (pre-tax profit) a company generates from the capital employed in its business. For 1Spatial, the formula is:

Return on Capital Employed = Earnings Before Interest and Tax (EBIT) ÷ (Total Assets - Current Liabilities)

0.054 = UK£1.4m ÷ (UK£41m - UK£16m), based on the trailing twelve months to January 2025.

Thus, 1Spatial has an ROCE of 5.4%. In absolute terms that's a low return, and it also underperforms the IT industry average of 19%. Above you can see how the current ROCE for 1Spatial compares to its prior returns on capital, but there's only so much you can tell from the past. If you'd like to see what analysts are forecasting going forward, check out our free analyst report for 1Spatial.

What Can We Tell From 1Spatial's ROCE Trend?

1Spatial has broken into the black (profitability), and we're sure it's a sight for sore eyes. The company now earns 5.4% on its capital, whereas five years ago it was incurring losses. While returns have increased, the amount of capital employed by 1Spatial has remained flat over the period.
That being said, while an increase in efficiency is no doubt appealing, it'd be helpful to know whether the company has any investment plans going forward. So if you're looking for high growth, you'll want to see a business's capital employed also increasing.

The Key Takeaway

To sum it up, 1Spatial is collecting higher returns from the same amount of capital, and that's impressive. Since the stock has returned a staggering 116% to shareholders over the last five years, it looks like investors are recognizing these changes. Therefore, we think it would be worth your time to check whether these trends are going to continue. If you'd like to know about the risks facing 1Spatial, we've discovered 3 warning signs that you should be aware of. For those who like to invest in solid companies, check out this free list of companies with solid balance sheets and high returns on equity.

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at). This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only, using an unbiased methodology, and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.

F5 Tightens Screws On Data Leakage In AI Application Delivery

Forbes

16-07-2025

  • Business
  • Forbes


Cloud took time. Once cloud computing had laid down its initial gambit (hinged on a promise of lower capital expenditure via a shift to service-based computing and data storage), the IT industry worked through a teething period while security, scalability and service-suitability headaches were worked out. The rise of artificial intelligence is going through a similar adolescence.

Market analysis from application delivery and security platform company F5 suggests that while two-thirds of organizations can now demonstrate a level of 'moderate AI readiness', most lack robust governance and cross-cloud management capabilities related to performance, integration and security. The company's latest 2025 State of AI Application Strategy Report compiles feedback from 650 global IT leaders alongside additional research carried out with 150 AI strategists, all of whom represent organizations with at least $200 million in annual revenue.

What AI Firewall Crisis?

The problem, perhaps, stems from where AI is today: an experimental prototyping technology used for casual web-centric research, for chatbot experiences inside social media apps and for amusing picture-generation entertainment. If anything, our non-corporate use of AI services might be argued to be driving a premature familiarity with extremely powerful technologies that really need to be locked down inside corporate control mechanisms when deployed in the workplace. This suggestion is perhaps validated by F5's estimation that, today, 71% of organizations use AI to boost security, while only 31% have deployed AI firewalls. As AI becomes core to business strategy, readiness requires more than experimentation: it demands security, scalability and alignment.
The proposal from F5 states that the average organization uses three AI models; typically, the use of multiple models correlates with deployment in more than one computing environment or location.

Fighting Fire With Fire

As a company, F5 has been working to architecturally align its platform for the new AI era for some time now. After specific updates in this direction at the start of this year, the company is now detailing new AI-driven capabilities in the F5 Application Delivery and Security Platform. In something of a case of fighting fire (AI risk) with fire (expanded platform capabilities, including the F5 AI Gateway service to protect against data leaks), the company is also offering new functionality in its F5 BIG-IP SSL Orchestrator, a technology that works to classify and defend encrypted data in motion and block unapproved AI use.

A piece of middleware, an AI gateway works as a filtering tool to inspect and validate data prompts passing between AI applications and the large language models that serve them. Overseeing all the interactions between an AI service and a language model, an AI gateway views potentially chaotic data interchanges and lays down the law, bringing order to achieve efficient usage, secure operations and responsible AI.

Underlining the progress made in the company's Application Delivery and Security Platform this year, François Locoh-Donou, F5 president and CEO, spoke to the press in London this week to explain where his firm's vision for secure operations across new AI landscapes really manifests itself. The F5 Application Delivery and Security Platform is now being more deeply engineered to ensure that CIOs, CISOs, AI Ops users and engineers across modern DevOps teams working in hybrid multicloud infrastructures can manage the key infrastructure, data movement and security challenges they face.
F5 CEO: Why IT Complexity Happened

'What is really happening in the world of application delivery right now is that the organizations we work with (primarily large enterprises and government entities) have found system security a lot more complex over the last couple of decades,' said Locoh-Donou. 'That's in large part due to the fact that companies have their cloud and datacenter estates established over a number of different service providers, so they have more than one element of infrastructure to manage. Combine this with the fact that modern applications are composed of multiple APIs and microservices, and you can understand why connecting one element of the total topography in any organization is now more difficult. Companies have traditionally used point solutions to address each problem over the years, and these multifaceted products themselves create a "ball of fire" in terms of total systems management and application delivery.'

Instead of taking an incremental step to solve these challenges, Locoh-Donou proposes that it makes more sense to take a single-platform approach to deliver and secure applications on-premises, in public cloud and out to the edge. He insists that organizations should not have to choose a different application delivery infrastructure to drive successful applications on different form factors. Add AI into the mix and things get even more challenging, because these applications are inherently more distributed (they typically call on data models from multiple sources, and agentic AI adds even more dynamic behavior to that vortex as it makes agent-to-agent calls), so there is a whole new raft of security considerations, from prompt injection to AI hallucination controls and so on.

'My view on this is that AI is being deployed so rapidly that we really should have looked more closely at what happened in the first decade of cloud computing.
It was only back in November 2022 that the new AI revolution started with the arrival of ChatGPT, so the speed of progression now is dramatically faster,' said Locoh-Donou. 'I believe that generative AI might well be the most sensitive vulnerability that organizations have to manage now. Using an AI gateway to route traffic to the right LLM and apply policy to the AI engine (organizations can use this process to manage which cost per token they are prepared to work with), a business can start to understand that working with AI means moving a lot of data around… so being able to take a total platform approach to managing these processes becomes fundamental.'

Alert Alarm Fatigue

For F5, Locoh-Donou says his team has been on a journey to infuse AI into the platform to provide AI for application delivery controller technologies. This is all about making it far easier (through natural language interfaces) for customers to securely deliver the apps they need. The company made an acquisition this year to help administrators get over the problem of 'alarm fatigue': when there are simply too many alerts, the F5 platform points administrators to the ones they need to be aware of and automatically triages the rest.

Locoh-Donou also notes that as businesses adopt AI and hybrid cloud technologies, sensitive data often moves across encrypted traffic and unapproved AI tools, creating security blind spots. Traditional security methods struggle to detect or prevent data leaks from these complex environments. He says that F5 answers this challenge with tools that allow organizations to achieve key compliance and security outcomes, such as the ability to detect, classify and stop data leaks in encrypted and AI-driven traffic in real time. It also tackles risks from unauthorized AI use (also known as shadow AI) and sensitive data exposure.
It operates with controls to apply consistent policies across applications, APIs and AI services to maintain security and compliance.

Intrinsic In-Transit Data

Data leakage detection and prevention capabilities are coming to F5 AI Gateway this quarter. The service will be powered by technology that F5 acquired from LeakSignal, a data governance and protection specialist recognized by the National Institute of Standards and Technology for data classification, remediation and AI-driven policy enforcement of data in transit. The new functionality examines AI prompts and responses to spot sensitive data, such as personal information, and applies customer-defined policies to redact, block or log it. With the integration and ongoing development of this AI data protection technology, F5 says it expands its ability to inspect in-transit data, applying policies to secure sensitive information before it leaves the network. This addition is promised to simplify compliance and reduce risk across hybrid and multicloud deployments.

Competitive Analysis: Application & API Delivery

Hand in hand with application delivery and security comes (from any vendor worth its salt) an equally exhaustive approach to application programming interface management and security, alongside AI gateway functionality. F5 shares space in this sector with firms including Kong, Cloudflare, Akamai and (of the three major cloud hyperscalers) Google Cloud primarily, although Microsoft Azure and AWS also have fingers in the pie. Each firm has its competencies and costing schedules, but more obvious differentiation manifests itself in how far each vendor can extend into the edge computing space and, crucially, make use of AI accelerators and intelligence boosters. Pure-play application delivery controller competition again comes from AWS, this time alongside Barracuda, HAProxy, NetScaler, A10 Networks, Radware and (back to the hyperscalers again) Microsoft Azure.
All three hyperscalers are known for their capabilities in AI inference routing: the ability to make sure an application's resource requests correctly match the parameter requests of any given deployment. Not a replacement for an application delivery controller, but certainly another ingredient in this market's mix.

The size of the major cloud players will naturally sit at the back of F5's mind as it now extends its platform vision; the big service providers can bundle a degree of alternatives to what F5 provides as a standalone service (even though it is a platform in and of itself), and some IT managers will inevitably find that to be an attractive option. If we mix in-transit data with real-time data and the need to bring controls to every tier of application execution, there's clearly plenty of surface area to target. What matters now is whether the 'safe and securely protected AI' space grows as fast as the wider AI landscape itself.
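F5 does not publish its gateway internals in this article, but the core pattern it describes (inspect prompts and responses in transit, then redact, block or log sensitive matches per policy) can be illustrated with a minimal, purely hypothetical sketch. The regex detectors and policy names below are illustrative assumptions, not F5's implementation:

```python
import re

# Illustrative detectors only; a real gateway would use far richer classifiers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def apply_policy(text: str, policy: str = "redact") -> str:
    """Inspect a prompt or response; redact or block sensitive matches."""
    found = any(p.search(text) for p in PATTERNS.values())
    if policy == "block" and found:
        raise ValueError("sensitive data detected; request blocked")
    if policy == "redact":
        for name, pattern in PATTERNS.items():
            text = pattern.sub(f"[{name.upper()} REDACTED]", text)
    return text

print(apply_policy("Reach me at jane.doe@example.com about SSN 123-45-6789"))
# → Reach me at [EMAIL REDACTED] about SSN [US_SSN REDACTED]
```

The same middleware hook is where routing, token-cost policy and logging would sit in a production gateway.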

How CTOs Can Rein In Vibe Coding Cybersecurity Risks

Forbes

14-07-2025

  • Business
  • Forbes


Founder & CEO of Excellent Webworld. A tech innovator with 12+ years of experience in IT, leading 900+ successful projects globally.

In 2025, "vibe coding" (creating software simply by describing your requirements in plain English, i.e., writing a prompt) has become the IT industry's biggest buzzword. AI tools like Cursor, Lovable and Firebase AI have democratized software creation, enabling even nontechnical users to launch apps and prototypes at unprecedented speed. Andrej Karpathy, who coined the term "vibe coding" in February 2025, explains: "It's not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."

While AI-generated code delivers speed and faster time to market, a darker reality is emerging: The same ease that lets anyone spin up a website in minutes lets cybercriminals do the same. For example:

• In 2025, attackers exploited GitLab Duo's AI coding assistant through hidden prompts, causing AI-generated code to leak private source and inject malicious HTML.
• Similarly, a Stanford student used prompt injection on Bing Chat to reveal hidden system instructions, exposing sensitive internal data, a direct result of AI-generated responses trusting manipulated user prompts.

Urgent action is needed to safeguard digital assets. In this article, I'll share my thoughts on the dark side of vibe coding, based on my readings and analysis, and why business leaders must rethink their cybersecurity strategies.

How Vibe Coding Accelerates Cyberattacks

AI-powered vibe coding tools often import external software components automatically. However, these components aren't always thoroughly checked, creating significant business risks; some of them may even be malicious software in disguise. Hackers use "slopsquatting" and "typosquatting": uploading fake software packages with names nearly identical to trusted ones.
If a company's AI tool pulls in one of these malicious packages, it can trigger data breaches, system failures or costly downtime. Another significant threat is that, as a recent study found, major AI code tools produce insecure code: nearly 48% of AI-generated code snippets had exploitable vulnerabilities.

These aren't just theoretical risks. One prominent case involved the Storm-2139 cybercrime group, which hijacked Azure OpenAI accounts by exploiting stolen API credentials. They bypassed Microsoft's security measures, generating policy-violating and potentially harmful outputs at scale. As a result, security teams are facing large consequences from AI coding. For example, a recent survey found that, while accidentally installing malicious code was relatively rare, 60% of these incidents were rated as highly significant when they did occur.

The Human Factor: Overreliance And Erosion Of Security Skills

Vibe coding enables people without technical backgrounds (business managers, marketers and more) to build apps using AI tools. However, many lack cybersecurity training, so critical safety steps are often skipped. This problem grows when teams trust AI-generated code too much, believing it's safe just because a machine produced it. As organizations lean on AI, they risk losing essential security skills and oversight. Without human review and ongoing training, hidden threats can slip through, putting the entire business at risk.

In my experience advising digital transformation projects, I've seen teams skip code reviews when using AI tools, assuming the technology is infallible. This overconfidence can be costly; one overlooked vulnerability can compromise an entire system.

The Real-World Business Impact Of Security Breaches From Vibe Coding

Compliance violations will likely grow, as AI-generated code can fail to meet stringent regulatory standards. With the advent of the EU AI Act and stricter U.S.
cybersecurity frameworks, regulators now require organizations to demonstrate robust controls over AI-generated software. Noncompliance can mean monetary penalties, restricted market access and lasting reputational damage that can be very difficult to overcome. For enterprise leaders, the message is clear: Unchecked AI-generated code introduces systemic vulnerabilities that threaten financial performance and long-term resilience, both crucial for any organization to thrive in today's digital economy.

What Business Leaders Must Do To Prevent This Nightmare

Business leaders face a crossroads as AI-enabled vibe coding reshapes software development. The convenience and speed are undeniable; so are the hidden cybersecurity risks. To protect your organization, take these proactive steps:

• Deploy automated security scanning tools to catch vulnerabilities in real time.
• Mandate human code reviews for all AI-generated outputs.
• Schedule regular, independent security audits to detect hidden threats.
• Embed security checks throughout the software development life cycle.
• Educate all teams about the risks of AI-driven code to build a security-first culture.
• Closely monitor AI tool usage; treat all new code as a potential risk.
• Establish clear policies for AI code adoption and escalation protocols.

These steps must be continuous, not just periodic, to keep pace with evolving threats. As AI redefines what's possible, those prioritizing security will not only mitigate risk but also unlock new growth opportunities. Companies that thrive will treat cybersecurity as a catalyst for innovation, embedding trust and resilience into every digital initiative. The choice is clear: Lead the charge in securing the AI-driven era, or risk being left vulnerable.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
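Slopsquatting and typosquatting work because the fake name sits only a character or two away from a trusted package, so even a crude closeness check against an allowlist catches many of them. A minimal sketch using Python's standard-library difflib (the allowlist here is an illustrative assumption, not a real policy):

```python
from difflib import get_close_matches

# Illustrative allowlist of approved package names.
TRUSTED = {"requests", "numpy", "pandas", "flask"}

def check_package(name: str) -> str:
    """Flag names suspiciously close to, but not in, the allowlist."""
    if name in TRUSTED:
        return "trusted"
    near = get_close_matches(name, TRUSTED, n=1, cutoff=0.85)
    return f"suspicious (near '{near[0]}')" if near else "unknown"

print(check_package("requests"))   # trusted
print(check_package("reqeusts"))   # suspicious (near 'requests')
```

A real pipeline would combine this with registry metadata checks and hash pinning, but the closeness test alone already turns a silent install into a reviewable alert.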

Network Performance Baselines That Predict Future Bottlenecks: By Scott Andery

Finextra

27-06-2025

  • Finextra


Your network is running fine at 9 AM. By 11 AM, everything feels sluggish. Come 2 PM, users are complaining that file transfers are crawling, and by 4 PM, someone's inevitably asking if the internet is "broken again." Sound familiar? Most IT teams treat network performance like the weather – something that just happens to them. But here's the thing: network bottlenecks don't appear out of nowhere. They follow predictable patterns, and if you know how to read those patterns, you can spot problems weeks before they actually impact your users.

Understanding What Network Baselines Actually Tell You

Let's start with what most people get wrong about network monitoring. They focus on the dramatic spikes – the moments when everything grinds to a halt. But the real intelligence comes from understanding what "normal" looks like during different times, different seasons, and different business cycles. A proper network baseline isn't just a single measurement. It's a collection of patterns that show you how your network behaves under various conditions. Think of it like knowing that your commute usually takes 25 minutes, but on rainy Fridays it takes 45 minutes, and during school holidays it drops to 18 minutes.

The Metrics That Actually Matter for Prediction

When I'm setting up proactive IT support monitoring for clients, I focus on metrics that have predictive value, not just diagnostic value. Here's what really matters:

• Bandwidth utilization trends over 30, 60, and 90-day periods
• Latency patterns during peak business hours vs. off-hours
• Packet loss rates under different load conditions
• Connection count growth as business operations expand
• Application-specific performance for critical business systems

The key is tracking these metrics consistently enough to identify patterns, but not so obsessively that you're drowning in data that doesn't lead to actionable insights.
Reading the Early Warning Signs

Here's where network baseline monitoring gets interesting – and where most businesses miss opportunities for prevention. The warning signs of future bottlenecks show up in subtle changes to your baseline patterns long before users start complaining.

Gradual Degradation Patterns

The most dangerous network problems aren't the sudden failures – they're the gradual degradations that slowly become the "new normal" until something pushes you over the edge. I've seen companies where file transfer times slowly increased from 30 seconds to 2 minutes over six months, and nobody noticed because it happened gradually. But when you look at the baseline data, the trend is crystal clear. This is where proactive IT support becomes invaluable. Instead of waiting for users to report problems, you're identifying performance degradation trends and addressing them before they become user-facing issues.

Seasonal and Cyclical Patterns

Different businesses have different network usage cycles, and understanding your specific patterns is crucial for accurate predictions. For example:

• Accounting firms see massive spikes during tax season
• Manufacturing companies often have quarterly reporting periods that stress document management systems
• Professional services may experience increased collaboration traffic during specific project phases

The goal is building baselines that account for these predictable variations, so you can distinguish between normal cyclical increases and actual capacity problems.

Implementing Predictive Monitoring Systems

Building a network monitoring system that actually predicts problems requires more than just installing software and hoping for the best. You need a systematic approach that captures the right data and presents it in ways that support proactive decision-making.

Choosing Monitoring Points Strategically

Not every network segment needs the same level of monitoring.
Focus your detailed baseline tracking on:

• Internet gateway connections, where external bandwidth limitations first appear
• Core switch infrastructure that handles the majority of internal traffic
• Server farm connections, where application performance bottlenecks develop
• Wireless access points in high-density user areas
• WAN connections between office locations

Setting Up Meaningful Alerts

This is where a lot of monitoring systems fall apart. They either generate so many alerts that you start ignoring them, or they only alert you after problems are already impacting users. Effective proactive IT support monitoring uses graduated alerts based on baseline deviations:

• Trend alerts when performance metrics show concerning patterns over weeks
• Threshold warnings when you're approaching known capacity limits
• Anomaly detection for unusual patterns that don't match historical baselines
• Predictive alerts when current trends suggest future problems

Translating Data Into Preventive Actions

Having great baseline data doesn't help if you don't know how to act on it. The most valuable monitoring systems connect performance trends to specific preventive actions you can take.

Capacity Planning That Actually Works

Traditional capacity planning involves guessing how much your network usage will grow and buying equipment accordingly. Baseline-driven capacity planning uses your actual usage patterns to make informed predictions about future needs. For example, if your baseline data shows that bandwidth utilization increases by 15% each quarter, and you're currently at 60% capacity, you can predict that you'll approach capacity within about a year – plenty of time to plan and budget for upgrades.

Application Performance Optimization

Network baselines also reveal which applications are consuming disproportionate resources and when. This intelligence allows you to:

• Schedule resource-intensive tasks during off-peak hours
• Implement traffic shaping for non-critical applications during busy periods
• Optimize application configurations based on actual usage patterns
• Plan application deployment timing to avoid creating new bottlenecks

Real-World Implementation Examples

Let me walk you through a couple of scenarios where baseline monitoring prevented major network problems.

Case Study: The Gradual Slowdown

A 75-person consulting firm was experiencing increasingly slow file access times, but nobody could pinpoint when it started or what was causing it. Their network monitoring showed everything was "green," but users were frustrated. By implementing proper baseline monitoring, we discovered that their file server response times had gradually increased by 300% over eight months. The culprit was a combination of growing file sizes and an aging storage array that was approaching its IOPS limits. Because we caught this trend early, we could plan the storage upgrade during a scheduled maintenance window instead of dealing with an emergency replacement when the system finally failed.

Case Study: The Seasonal Surprise

A manufacturing company experienced severe network slowdowns every quarter during their reporting periods, but each time it seemed to catch them off guard. Their proactive IT support team wasn't tracking quarterly patterns effectively. After establishing proper baselines, we could predict exactly when network stress would peak and implement temporary traffic management policies in advance. We also used the trend data to justify upgrading their WAN connections before the next major reporting cycle.

Building a Sustainable Monitoring Strategy

The key to successful predictive network monitoring is building systems that provide actionable intelligence without creating unsustainable administrative overhead. Start with monitoring your most critical network segments and applications. Establish baselines for normal operation during different time periods and business cycles. Then gradually expand your monitoring coverage as you develop the expertise to interpret and act on the data.

Remember, the goal isn't to monitor everything perfectly – it's to monitor the right things well enough to make informed decisions about preventing future problems. Effective proactive IT support is about turning network performance data into a strategic advantage rather than just another source of technical complexity. When you can predict network bottlenecks weeks or months before they impact users, you transform from a reactive IT support team into a strategic business enabler. That's the difference between fixing problems and preventing them.
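The baseline-driven capacity-planning arithmetic described above can be sketched as a simple compound-growth projection. The growth rate, starting utilization and 90% alert ceiling below are illustrative figures, not measurements:

```python
def quarters_until(utilization: float, growth_per_quarter: float, limit: float) -> int:
    """Project compound quarterly growth; return quarters until utilization exceeds limit."""
    quarters = 0
    while utilization <= limit:
        utilization *= 1 + growth_per_quarter
        quarters += 1
    return quarters

# 60% utilization growing 15% per quarter, alerting at a 90% capacity ceiling
print(quarters_until(0.60, 0.15, 0.90))  # → 3 quarters
```

Note that compound growth compresses the runway compared with a linear extrapolation, which is worth bearing in mind when budgeting upgrade lead times.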

NCC Group Reports First Half 2025 Earnings

Yahoo

21-06-2025

  • Business
  • Yahoo


Net income: UK£16.0m (up by UK£16.0m from 1H 2024). EPS: UK£0.052. All figures shown are for the trailing-12-month (TTM) period.

Looking ahead, revenue is forecast to grow 2.3% p.a. on average during the next 3 years, compared to a 7.9% growth forecast for the IT industry in the United Kingdom. The company's shares are down 11% from a week ago. We should say that we've discovered 1 warning sign for NCC Group that you should be aware of before investing here.

Have feedback on this article? Concerned about the content? Get in touch with us directly. Alternatively, email editorial-team (at). This article by Simply Wall St is general in nature. We provide commentary based on historical data and analyst forecasts only, using an unbiased methodology, and our articles are not intended to be financial advice. It does not constitute a recommendation to buy or sell any stock, and does not take account of your objectives or your financial situation. We aim to bring you long-term focused analysis driven by fundamental data. Note that our analysis may not factor in the latest price-sensitive company announcements or qualitative material. Simply Wall St has no position in any stocks mentioned.
