Latest news with #ITteams


Forbes
7 days ago
- Business
- Forbes
Active Directory Sprawl: The Risks And What You Can Do
John Hernandez, President and Chief Customer Officer at Quest Software.

Most breaches don't start with a clever hack—they start with a login. And when Active Directory (AD) is cluttered and outdated, it becomes the easiest way in.

For many organizations, AD has quietly grown out of control. Years of business growth, M&A activity and quick fixes leave behind legacy accounts, overlapping domains and unclear permissions. It's not always obvious until something breaks or someone gets in. That complexity slows down operations. Worse, it expands the attack surface.

Today's threat actors don't need much. A single unmonitored account or excessive permission can be all it takes. And with AI speeding up how attackers map environments, a messy AD becomes a serious risk. That's why 94% of organizations see the value of AD modernization. Cleaning it up isn't just good IT practice; it's a strategic move to reduce risk and restore control. So, how does sprawl take root, and what can you do about it?

A Growing Business Means A Growing AD

As businesses grow or merge, so does the complexity of their infrastructure. What starts as a well-managed AD environment can turn into a mess of accounts and domains. Over time, technical teams apply stopgap fixes to keep things running. They may delay cleanup for more pressing tasks, but temporary fixes tend to become permanent. The result is a chaotic environment that strains people and processes.

Common impacts of AD sprawl:

• IT teams must manage disjointed or duplicated directories. Admins waste time jumping between systems, troubleshooting across forests and manually syncing changes.
• Users struggle with login issues and inconsistent access. Different domains lead to multiple passwords, blocked access to shared apps and more help desk tickets.
• Delays occur in onboarding or offboarding employees. HR and IT processes break down when users must be manually added or removed across multiple environments.
• Higher costs result from redundant infrastructure. More domains often mean duplicate servers, extra licensing and unnecessary hardware.
• Gaps appear in security policies and monitoring tools. Inconsistent policies and logging make it easier for attackers to move unnoticed.

The operational toll is high. But the security risk is worse. When you can't see your full identity infrastructure, attackers can. Poorly managed accounts, excessive permissions and legacy configurations become open doors for threat actors, ideal for lateral movement, privilege escalation and data theft. For example, attackers might exploit SID History to impersonate high-privilege users or find leftover accounts from an old acquisition. Every outdated setting is a potential risk.

Attackers are also using AI to speed up reconnaissance and exploit paths through poorly managed environments. When AD is messy, these tools work faster and more effectively, giving threat actors an edge. That's why identity is such a high-value target, and why containing AD sprawl should be a security priority.

How To Contain Your AD

There's no one-size-fits-all solution for managing Active Directory, but every organization can take key steps to reduce risk and improve control. It starts with prevention. Assign clear ownership of your AD environment, covering not just daily tasks but overall structure and accountability. Use consistent naming for users, groups and systems to avoid confusion. Limit who can make structural changes, and put approval processes in place for any major updates.
Track all changes, especially after growth events like mergers or reorganizations. Without documentation, it's easy to lose sight of what changed and why. Review your environment regularly: set aside time each quarter to check for inactive accounts, duplicate rules or unnecessary domains. Avoid giving permanent high-level access; use temporary permissions whenever possible, tied to specific roles or time frames. None of this requires new tools. It just requires process and discipline.

If your AD environment is already complex, here are five practical steps to help you regain control without needing new technology or extra budget:

1. Know what you have. Start by mapping your current environment. How many domains do you have? How do they connect? Which parts are still in active use? Even a basic inventory can reveal hidden problems (a sketch of one such inventory follows this article).
2. Spot the risks. Look for signs of unnecessary complexity. Old accounts that are still active, too many admin-level users or conflicting access rules are all common and easy to overlook.
3. Simplify where possible. You don't have to fix everything at once. Focus on small wins—retire an unused domain, clean up an old group or consolidate overlapping roles. Make changes that reduce risk without disrupting operations.
4. Tighten access rules. Review who has access to what. Make sure users have only what they need—no more, no less. Removing outdated or excessive permissions is one of the fastest ways to reduce exposure.
5. Keep control going forward. Put processes in place to prevent sprawl from returning. Some teams now use AI-driven monitoring to detect unusual behavior, like sudden permission changes or dormant accounts becoming active again. These tools help flag issues early, but they still rely on a clean AD to work well. Track AD changes, review new accounts and roles regularly, and include AD in your ongoing security and infrastructure reviews.

These steps can help turn AD from a liability into a stable foundation for your identity security strategy.

Take The Next Step

If your AD environment has grown messy, you're not alone. Most organizations struggle with sprawl at some point. But you don't have to live with it. Start by asking how many domains you're managing. Where are your biggest risks? What could your team do if AD were no longer a burden? And if you're already investing in AI for threat detection or automation, a clean AD makes that investment more effective. The path forward is clear: simplify and secure.
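To make steps 1 and 2 concrete, here is a minimal sketch of that kind of inventory in Python using the ldap3 library. It lists enabled accounts that have not logged on in roughly 180 days, plus accounts still carrying SID History. The domain controller name, bind account, search base, and 180-day window are hypothetical placeholders; the same checks can also be run natively with PowerShell's Get-ADUser.

    from datetime import datetime, timedelta, timezone
    from ldap3 import Server, Connection, SUBTREE, ALL

    def to_filetime(dt):
        # AD stores lastLogonTimestamp as 100-ns intervals since 1601-01-01 UTC.
        epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
        return int((dt - epoch).total_seconds() * 10_000_000)

    cutoff = to_filetime(datetime.now(timezone.utc) - timedelta(days=180))

    server = Server("dc01.example.com", get_info=ALL)  # hypothetical domain controller
    conn = Connection(server, user="EXAMPLE\\auditor", password="...", auto_bind=True)

    # Enabled user accounts with no logon in ~180 days
    # (bit 2 of userAccountControl means "disabled", so we exclude it).
    stale = (
        "(&(objectClass=user)(objectCategory=person)"
        "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"
        f"(lastLogonTimestamp<={cutoff}))"
    )
    conn.search("DC=example,DC=com", stale, SUBTREE, attributes=["sAMAccountName"])
    for entry in conn.entries:
        print("stale account:", entry.sAMAccountName)

    # Accounts still carrying SID History, a common leftover from old migrations.
    conn.search("DC=example,DC=com", "(&(objectClass=user)(sIDHistory=*))",
                SUBTREE, attributes=["sAMAccountName"])
    for entry in conn.entries:
        print("has SID history:", entry.sAMAccountName)

Even a read-only report like this gives the quarterly review described above something concrete to act on: each hit is either cleaned up or documented as intentional.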
Yahoo
03-07-2025
- Business
- Yahoo
As Microsoft Exchange 2016 and 2019 sunset, how can privacy-conscious organisations future-proof their email?
With Microsoft Exchange Server 2016 and 2019 reaching end-of-support in October, IT teams must make an urgent, strategic decision: migrate to cloud-based services or stay on-premises.

As major productivity solution providers continue to adjust their plan offerings, many organisations are grappling with suddenly shrinking plan options, rising costs, and the phase-out of long-standing services. With Microsoft Exchange Server 2016 and 2019 reaching end-of-support in October 2025, IT teams must consider more than just a routine upgrade. This is a strategic crossroads: a decision that impacts how businesses manage communication, compliance, and data sovereignty, with significant implications for cost and control.

Continuing on unsupported Exchange versions would expose businesses to serious risks, including the loss of security updates, vendor support, and compatibility with other Microsoft applications. This shift therefore marks more than the end of a product lifecycle. It forces IT teams to make an urgent, strategic choice: migrate to cloud-based services like Exchange Online or Microsoft 365, or stay on-prem with the upcoming Exchange Server Subscription Edition. Time is running out to evaluate the next move before the sunset.

With mounting pressure to act, IT teams are left with a narrow window to weigh their options. The new Exchange subscription model introduces added complexity, requiring Software Assurance on top of server licences and client access licences, which can create significant management challenges for growing teams and small to mid-sized organisations. Similarly, cloud adoption offers agility and scalability, but organisations are increasingly weighing the trade-offs in compliance, cost control, and vendor dependency. Software-as-a-service (SaaS) expenditures have grown 27% in two years, averaging US$7,900 ($10,049) per user annually, according to spend optimisation platform Vertice. For heavily regulated sectors or cost-conscious public institutions, this trend raises sustainability concerns. In this landscape, finding a stable on-premises solution that guarantees robust security, privacy and price reliability becomes all the more crucial.

Hosting email on-premises allows organisations to retain full ownership over their infrastructure and data, reducing reliance on external vendors and ensuring compliance with local or sector-specific standards such as the European Union's General Data Protection Regulation, the US's Health Insurance Portability and Accountability Act, or ISO 27001. This can be particularly beneficial for teams in education, government, legal, or healthcare environments, where trust and traceability matter.

On-premises solutions can also offer key advantages in data governance. With everything hosted within the organisation's own network — from mail services and user permissions to backup and access logs — administrators maintain full visibility into how data is handled and by whom. This level of control is increasingly critical in an era where organisations face tightening compliance regulations and heightened data privacy expectations. Some modern solutions now integrate email, storage, security, and auditing into a single appliance, enabling IT teams to simplify administration while strengthening governance and oversight.

In terms of budget, modern self-hosted platforms can also break from the pricing complexity of legacy email systems.
For IT teams managing large-scale infrastructure, minimising unpredictable licensing costs and integrating with existing systems is critical. A solution like Synology MailPlus, which runs natively on network attached storage (NAS) devices and follows a lifetime licence model, addresses both of these issues.

Ultimately, organisations today are not just choosing where to host email. They are choosing how to control and protect one of their most sensitive communications systems. Whether responding to evolving compliance demands or planning for long-term IT resilience, on-prem email remains a smart and strategic option for organisations that want simplicity, ownership, and security on their own terms.

Finextra
27-06-2025
- Finextra
Network Performance Baselines That Predict Future Bottlenecks: By Scott Andery
Your network is running fine at 9 AM. By 11 AM, everything feels sluggish. Come 2 PM, users are complaining that file transfers are crawling, and by 4 PM, someone's inevitably asking if the internet is "broken again." Sound familiar?

Most IT teams treat network performance like the weather – something that just happens to them. But here's the thing: network bottlenecks don't appear out of nowhere. They follow predictable patterns, and if you know how to read those patterns, you can spot problems weeks before they actually impact your users.

Understanding What Network Baselines Actually Tell You

Let's start with what most people get wrong about network monitoring. They focus on the dramatic spikes – the moments when everything grinds to a halt. But the real intelligence comes from understanding what "normal" looks like during different times, different seasons, and different business cycles.

A proper network baseline isn't just a single measurement. It's a collection of patterns that show you how your network behaves under various conditions. Think of it like knowing that your commute usually takes 25 minutes, but on rainy Fridays it takes 45 minutes, and during school holidays it drops to 18 minutes.

The Metrics That Actually Matter for Prediction

When I'm setting up proactive IT support monitoring for clients, I focus on metrics that have predictive value, not just diagnostic value. Here's what really matters:

• Bandwidth utilization trends over 30, 60, and 90-day periods
• Latency patterns during peak business hours vs. off-hours
• Packet loss rates under different load conditions
• Connection count growth as business operations expand
• Application-specific performance for critical business systems

The key is tracking these metrics consistently enough to identify patterns, but not so obsessively that you're drowning in data that doesn't lead to actionable insights.

Reading the Early Warning Signs

Here's where network baseline monitoring gets interesting – and where most businesses miss opportunities for prevention. The warning signs of future bottlenecks show up in subtle changes to your baseline patterns long before users start complaining.

Gradual Degradation Patterns

The most dangerous network problems aren't the sudden failures – they're the gradual degradations that slowly become the "new normal" until something pushes you over the edge. I've seen companies where file transfer times slowly increased from 30 seconds to 2 minutes over six months, and nobody noticed because it happened gradually. But when you look at the baseline data, the trend is crystal clear.

This is where proactive IT support becomes invaluable. Instead of waiting for users to report problems, you're identifying performance degradation trends and addressing them before they become user-facing issues.

Seasonal and Cyclical Patterns

Different businesses have different network usage cycles, and understanding your specific patterns is crucial for accurate predictions. For example:

• Accounting firms see massive spikes during tax season
• Manufacturing companies often have quarterly reporting periods that stress document management systems
• Professional services may experience increased collaboration traffic during specific project phases

The goal is building baselines that account for these predictable variations, so you can distinguish between normal cyclical increases and actual capacity problems.
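As a sketch of what such a seasonal baseline can look like in practice, the following Python snippet (using pandas) derives a per-weekday, per-hour baseline from 90 days of samples and flags recent measurements that sit well above their seasonal norm. The CSV file name, column names, and the two-sigma threshold are illustrative assumptions, not details from the article.

    import pandas as pd

    # Illustrative input: one bandwidth sample per 5 minutes, columns: timestamp, mbps.
    df = pd.read_csv("gateway_utilization.csv", parse_dates=["timestamp"])
    df = df.set_index("timestamp").sort_index()

    # Seasonal baseline: typical load for each (weekday, hour) bucket over ~90 days.
    hist = df[df.index >= df.index.max() - pd.Timedelta(days=90)]
    baseline = hist.groupby([hist.index.dayofweek, hist.index.hour])["mbps"].agg(["median", "std"])

    # Compare the last week of samples against the matching seasonal bucket.
    week = df[df.index >= df.index.max() - pd.Timedelta(days=7)].copy()
    stats = baseline.loc[list(zip(week.index.dayofweek, week.index.hour))].to_numpy()
    week["expected"], week["sigma"] = stats[:, 0], stats[:, 1]
    drift = week[week["mbps"] > week["expected"] + 2 * week["sigma"]]
    print(f"{len(drift)} samples above the seasonal baseline this week")

Bucketing by weekday and hour is what lets this distinguish a normal Monday-morning surge from genuine drift: each sample is judged against its own slice of history, not a single global average.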
Implementing Predictive Monitoring Systems

Building a network monitoring system that actually predicts problems requires more than just installing software and hoping for the best. You need a systematic approach that captures the right data and presents it in ways that support proactive decision-making.

Choosing Monitoring Points Strategically

Not every network segment needs the same level of monitoring. Focus your detailed baseline tracking on:

• Internet gateway connections where external bandwidth limitations first appear
• Core switch infrastructure that handles the majority of internal traffic
• Server farm connections where application performance bottlenecks develop
• Wireless access points in high-density user areas
• WAN connections between office locations

Setting Up Meaningful Alerts

This is where a lot of monitoring systems fall apart. They either generate so many alerts that you start ignoring them, or they only alert you after problems are already impacting users. Effective proactive IT support monitoring uses graduated alerts based on baseline deviations:

• Trend alerts when performance metrics show concerning patterns over weeks
• Threshold warnings when you're approaching known capacity limits
• Anomaly detection for unusual patterns that don't match historical baselines
• Predictive alerts when current trends suggest future problems

Translating Data Into Preventive Actions

Having great baseline data doesn't help if you don't know how to act on it. The most valuable monitoring systems connect performance trends to specific preventive actions you can take.

Capacity Planning That Actually Works

Traditional capacity planning involves guessing how much your network usage will grow and buying equipment accordingly. Baseline-driven capacity planning uses your actual usage patterns to make informed predictions about future needs. For example, if your baseline data shows that bandwidth utilization increases by 15% each quarter, and you're currently at 60% capacity, you can predict that you'll hit problems in about a year – plenty of time to plan and budget for upgrades.

Application Performance Optimization

Network baselines also reveal which applications are consuming disproportionate resources and when. This intelligence allows you to:

• Schedule resource-intensive tasks during off-peak hours
• Implement traffic shaping for non-critical applications during busy periods
• Optimize application configurations based on actual usage patterns
• Plan application deployment timing to avoid creating new bottlenecks

Real-World Implementation Examples

Let me walk you through a couple of scenarios where baseline monitoring prevented major network problems.

Case Study: The Gradual Slowdown

A 75-person consulting firm was experiencing increasingly slow file access times, but nobody could pinpoint when it started or what was causing it. Their network monitoring showed everything was "green," but users were frustrated. By implementing proper baseline monitoring, we discovered that their file server response times had gradually increased by 300% over eight months. The culprit was a combination of growing file sizes and an aging storage array that was approaching its IOPS limits. Because we caught this trend early, we could plan the storage upgrade during a scheduled maintenance window instead of dealing with an emergency replacement when the system finally failed.
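To see where the roughly-a-year figure in the capacity-planning example above comes from, here is a tiny compound-growth helper. The function name and the 80% alarm line are illustrative assumptions, but the arithmetic matches the example: at 15% growth per quarter, 60% utilization passes 100% in four quarters.

    import math

    def quarters_until(threshold_pct, current_pct, quarterly_growth):
        """Quarters until utilization crosses a threshold, assuming compound growth."""
        if current_pct >= threshold_pct:
            return 0
        return math.ceil(math.log(threshold_pct / current_pct) / math.log(1 + quarterly_growth))

    # The article's inputs: 60% utilization today, growing 15% per quarter.
    print(quarters_until(100, 60, 0.15))  # 4 quarters (~1 year) to full saturation
    print(quarters_until(80, 60, 0.15))   # 3 quarters to cross a typical 80% alarm line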
Case Study: The Seasonal Surprise

A manufacturing company experienced severe network slowdowns every quarter during their reporting periods, but each time it seemed to catch them off guard. Their proactive IT support team wasn't tracking quarterly patterns effectively. After establishing proper baselines, we could predict exactly when network stress would peak and implement temporary traffic management policies in advance. We also used the trend data to justify upgrading their WAN connections before the next major reporting cycle.

Building a Sustainable Monitoring Strategy

The key to successful predictive network monitoring is building systems that provide actionable intelligence without creating unsustainable administrative overhead. Start with monitoring your most critical network segments and applications. Establish baselines for normal operation during different time periods and business cycles. Then gradually expand your monitoring coverage as you develop the expertise to interpret and act on the data.

Remember, the goal isn't to monitor everything perfectly – it's to monitor the right things well enough to make informed decisions about preventing future problems. Effective proactive IT support is about turning network performance data into a strategic advantage rather than just another source of technical complexity. When you can predict network bottlenecks weeks or months before they impact users, you transform from a reactive IT support team into a strategic business enabler. That's the difference between fixing problems and preventing them.
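The four graduated alert tiers described in this piece (trend, threshold, anomaly, and predictive) could be combined into a single classification routine. Below is a minimal, self-contained sketch; the capacity figure, tier thresholds, and six-month prediction horizon are assumptions for illustration, not values from the article.

    CAPACITY_MBPS = 1000  # hypothetical gateway capacity

    def weeks_to_capacity(current_mbps, weekly_slope):
        if weekly_slope <= 0:
            return float("inf")
        return (CAPACITY_MBPS - current_mbps) / weekly_slope

    def classify(sample, expected, sigma, weekly_slope):
        """Map one utilization sample to a graduated alert tier, or None."""
        if sigma and abs(sample - expected) > 3 * sigma:
            return "anomaly"            # doesn't match the historical baseline
        if sample > 0.9 * CAPACITY_MBPS:
            return "threshold warning"  # approaching a known capacity limit
        if weeks_to_capacity(sample, weekly_slope) < 26:
            return "predictive"         # current trend implies trouble within ~6 months
        if weekly_slope > 0.01 * CAPACITY_MBPS:
            return "trend"              # sustained growth worth watching
        return None

    # 720 Mbps now vs. an expected 600 +/- 30, growing ~15 Mbps per week.
    print(classify(720, 600, 30, 15))   # anomaly (4 sigma above baseline)
    print(classify(650, 640, 30, 15))   # predictive (~23 weeks to saturation)

Ordering the checks from most to least urgent is deliberate: a sample that is both anomalous and trending toward capacity should raise the louder alarm, not two competing ones.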

Associated Press
27-06-2025
- Business
- Associated Press
What Every IT Leader Needs to Know About AI and Solution Delivery: Insights Published By Info-Tech Research Group
Global research and advisory firm Info-Tech Research Group has published a new resource that details how AI can transform the way IT teams build, test, and deploy solutions. The firm's research insights highlight that by embedding AI into every stage of the delivery process, organizations can reduce inefficiencies, mitigate risks, and increase development velocity. The recently published research offers practical, scalable insights CIOs can leverage to meet rising business demands with greater clarity and control.

TORONTO, June 27, 2025 /PRNewswire/ - Delivering solutions quickly while maintaining consistency and impact is a growing challenge for IT teams navigating complex environments and limited resources. To help organizations improve how they build and release solutions, Info-Tech Research Group has published new research, Boost Solution Delivery Throughput With AI, which offers a focused approach to increasing throughput. The firm's blueprint outlines practical steps to guide organizations in embedding AI into their solution delivery teams, driving value, quality, and speed. By reducing inefficiencies, strengthening team capacity, and leveraging technology to improve delivery rhythm, the research insights can guide IT teams on how best to align with evolving business needs.

"Throughput has been and will continue to be the success factor of all solution delivery teams. Teams are expected to deliver high-value and high-quality features, fixes, and updates quickly and continuously. However, there are new headwinds getting in their way," says Andrew Kum-Seun, research director at Info-Tech Research Group. "Exponential technologies, democratized IT, security vulnerabilities, and other disruptors have made yesterday's status quo outdated. Enter AI as both the solution and the challenge."

Info-Tech's newly published research highlights the transformative potential of AI to enhance solution quality, accelerate delivery, and increase overall business value. The firm's findings emphasize that capabilities such as AI-assisted code generation can significantly boost developer productivity, synthetic data generation can enable more effective and scalable testing, and intelligent scanning tools can proactively identify issues before they impact delivery. However, to fully unlock these benefits, Info-Tech advises organizations to move beyond surface-level adoption. Kum-Seun stresses that AI must be deeply embedded in the fabric of the solution delivery team, where every decision, action, and outcome is driven, supported, or executed by AI-enabled capabilities.

In its recently published resource, Boost Solution Delivery Throughput With AI, Info-Tech outlines three specific areas where AI can directly address throughput challenges:

• AI-assisted code generation to raise developer productivity
• Synthetic data generation to enable more effective, scalable testing
• Intelligent scanning tools that surface issues before they impact delivery

The data-backed blueprint from the global research and advisory firm explains that successfully leveraging these AI capabilities requires more than just technology adoption; it demands overcoming internal resistance and organizational uncertainty. Info-Tech advises organizations to begin by addressing critical delivery challenges where AI can demonstrate a clear and immediate impact. Early successes build trust, ease concerns, and create the momentum necessary to confidently scale AI adoption across solution delivery teams.
For exclusive and timely commentary from Andrew Kum-Seun, an expert in application delivery, and access to the complete Boost Solution Delivery Throughput With AI blueprint, please contact [email protected].

About Info-Tech Research Group

Info-Tech Research Group is one of the world's leading research and advisory firms, serving over 30,000 IT and HR professionals. The company produces unbiased, highly relevant research and provides advisory services to help leaders make strategic, timely, and well-informed decisions. For nearly 30 years, Info-Tech has partnered closely with teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

To learn more about Info-Tech's divisions, visit McLean & Company for HR research and advisory services and SoftwareReviews for software buying insights. Media professionals can register for unrestricted access to research across IT, HR, and software and hundreds of industry analysts through the firm's Media Insiders program. To gain access, contact [email protected].

SOURCE Info-Tech Research Group


Geeky Gadgets
24-06-2025
- Business
- Geeky Gadgets
Why ITAM Software Is the Backbone of Modern IT Operations
Today's increasingly digital world is marked by IT infrastructure that continues to expand across multiple cloud environments, remote endpoints, and on-premise systems. Because of that, IT Asset Management (ITAM) software has become an indispensable tool. Businesses looking to scale up, reduce risks, and remain compliant need a solid foundation to manage their growing digital asset base. ITAM software offers exactly that by focusing on control, visibility, and efficiency across the entire IT ecosystem.

Scaling with Confidence

The IT environment grows in complexity alongside the underlying business. Managing thousands of hardware devices, software licenses, and cloud services manually becomes a logistical burden. ITAM software plays an important role in automating inventory tracking, license management, and usage analytics. It ensures that every asset, from laptops to SaaS subscriptions, is accounted for and optimized for its specific task. With this level of control, scalability is supported and IT teams can proactively plan for future growth. With real-time asset intelligence, organizations can forecast future needs, budget more accurately, and make informed procurement decisions. The result is a leaner, more efficient IT operation capable of supporting rapid expansion without chaos along the way.

Reducing Operational and Security Risks

Untracked hardware, outdated software, and shadow IT in general are common culprits in security breaches. ITAM software mitigates these risks by offering asset discovery and lifecycle management. By knowing exactly what is in the IT environment, and ensuring it is up to date and secure, organizations can drastically reduce their vulnerability to cyber threats. Moreover, automated alerts and compliance checks embedded in most ITAM platforms help identify unauthorized installations, expired warranties, or usage violations. This proactive approach minimizes downtime, improves response time in crisis situations, and enhances overall cybersecurity posture.

Ensuring Compliance and Audit Readiness

For industries like healthcare, finance, and government, regulatory compliance is non-negotiable. When it comes to GDPR, HIPAA, or software licensing agreements, non-compliance can result in severe penalties and reputational damage. By keeping accurate records of licenses, entitlements, and actual usage, ITAM software also facilitates better vendor management, ensuring that contracts are up to date and aligned with how assets are really used. In the event of an audit, IT teams can quickly produce evidence of compliance, reducing the risk of fines and legal complications.

A Strategic Investment for the Future

ITAM software is far from being just a back-office tool. It has become a strategic enabler of modern IT operations, critical for everyone from the small start-up to the scaling business to the large international enterprise. That is because it bridges the gap between finance, IT procurement, and security, providing a single source of truth across departments. Robust asset management capabilities will become even more critical as hybrid work and digital transformation initiatives continue to evolve. For all of these reasons, ITAM should be considered a necessity, not a nice-to-have. For organizations looking to scale efficiently, mitigate risk, and stay compliant in an increasingly complex IT environment, ITAM software provides an operational backbone they can rely on.
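As a toy illustration of the automated compliance checks described above, the Python sketch below audits a hypothetical asset inventory for expired warranties and unauthorized installs. Real ITAM platforms discover and populate these records automatically; the data model, asset names, and approved-software list here are purely illustrative assumptions.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Asset:
        name: str
        owner: str
        warranty_expires: date
        software: list  # installed software titles

    APPROVED = {"Office Suite", "VPN Client", "EDR Agent"}  # hypothetical whitelist

    def audit(assets, today=None):
        """Flag expired warranties and installs outside the approved list."""
        today = today or date.today()
        findings = []
        for a in assets:
            if a.warranty_expires < today:
                findings.append(f"{a.name}: warranty expired {a.warranty_expires}")
            for title in a.software:
                if title not in APPROVED:
                    findings.append(f"{a.name}: unauthorized install '{title}'")
        return findings

    fleet = [Asset("LAPTOP-017", "j.tan", date(2024, 5, 1), ["Office Suite", "TorrentApp"])]
    for finding in audit(fleet):
        print(finding)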