
Who Will Manage The Agents?

Forbes

24-06-2025


Nick Burling, Senior Vice President of Product at Nasuni.

The storage administrator has long been one of the least appreciated jobs in large organizations. These experienced IT professionals manage capacity and access, set up new offices and wind down divested locations. When a user accidentally deletes or loses a file, they track it down, and they are responsible for ensuring data is protected in the event of a disaster. However, they rarely get any credit.

Today, as more large organizations look to deploy agentic AI capable of automating so many enterprise tasks, the job of the storage administrator appears to be endangered. Yet, I suspect these individuals are going to be more important than ever in the years ahead. Enterprises are going to need experts with core foundational knowledge and professionals who understand the big picture and can assess what needs to be fixed when those AI agents make mistakes. At the same time, automating the tasks that take up so much time today will allow storage administrators to take on higher-level strategic work. The job isn't going away; it's going to get more interesting.

Consider a few sample tasks assigned to the storage administrator today. Let's say a user in a distant office sends an alert that they've lost a critical folder. Maybe they've deleted it—the user isn't certain. All they know is that they absolutely need access restored as soon as possible; otherwise, the world will end. Traditionally, this sort of emergency would send the storage administrator off on a digital wild-goose chase, trying to understand whether that user accidentally moved or erased the folder or if someone else in the organization did so. This is the kind of task that a trained AI agent would be able to handle efficiently. A scan of the organization's audit logs would quickly reveal whether the folder had been deleted or moved and which user was responsible for the action.
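To make the audit-log scan concrete, here is a minimal sketch in Python. The log schema (`time`, `user`, `action`, `path`) and the sample records are hypothetical illustrations, not Nasuni's actual implementation; real storage platforms each have their own audit formats.

```python
from datetime import datetime

# Hypothetical audit-log records; real log schemas vary by platform.
AUDIT_LOG = [
    {"time": "2025-06-20T09:14:00", "user": "alice", "action": "write",
     "path": "/projects/alpha/specs"},
    {"time": "2025-06-21T16:02:00", "user": "bob", "action": "rename",
     "path": "/projects/alpha/reports", "new_path": "/archive/alpha/reports"},
    {"time": "2025-06-22T11:45:00", "user": "carol", "action": "delete",
     "path": "/projects/alpha/drafts"},
]

def trace_folder(log, folder):
    """Return delete/rename events touching a folder, newest first."""
    hits = [e for e in log
            if e["action"] in ("delete", "rename") and e["path"] == folder]
    return sorted(hits, key=lambda e: datetime.fromisoformat(e["time"]),
                  reverse=True)

events = trace_folder(AUDIT_LOG, "/projects/alpha/reports")
# The most recent matching event shows the folder was renamed (moved) by bob.
```

An agent built around a query like this could report its findings to the administrator and, with approval, trigger a restore — which is exactly the review-and-confirm loop described above.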
The agent could then immediately report the findings to the administrator for review, with the option of triggering a restore of the deleted data.

As the head of product for a company that specializes in data management and AI readiness, I've come across many other examples with our customers. For example, one of the large enterprises we work with wants to use agentic AI to prebuild new file data directories from a template, making it easier to kick off new projects. In the past, this sort of work might have taken a few days or even weeks, depending on the availability of the storage administrator. Instead, the storage professional could create a set of templates for different project types and train an agent to set them up on command. Rather than doing the grunt work, the storage administrator acts as the high-level designer and final set of eyes to review the agent's work before it is completed.

Once many of their tasks become automated, storage administrators will take on new roles such as:

Data Stewards

The success of an enterprise's AI plans hinges on its ability to manage, unify and curate its data. Whether the goal is to train an AI solution or use one to find hidden patterns and insights in your organization's data, you are going to need skilled storage administrators who understand where all of that data resides and how to connect it.

AI Managers

Rather than overseeing a department of humans, storage administrators will manage a suite of AI agents. Obviously, this will be a very different type of managerial job that doesn't involve repairing fragile egos or cheerleading. (Unless the agents become sentient, in which case we all have other problems to worry about.) However, it's going to be enormously important to get this right, and no one will be better positioned to do so than the storage administrators who deeply understand the work that needs to be done.
Task Experts

AI agents will make mistakes, so organizations are going to need domain experts who deeply understand the task the agents were assigned to perform and how it fits within the larger IT environment. That human knowledge and high-level understanding are going to be essential to ensure the agents are a truly effective addition to the enterprise.

Business Strategists

Even with these additional roles and responsibilities, storage administrators will likely still have extra cycles, given how many time-sucking tasks will be taken out of their hands. Generally, the companies that master data management are going to have a major competitive advantage, and storage administrators are uniquely positioned to identify solutions and approaches that go beyond storage and transform data into a reliable, accessible source of business intelligence.

Given the potential expansion of the role, we are left with one question: Are these long-overlooked enterprise professionals finally going to be appreciated? Although the outlook for many knowledge workers appears uncertain in the age of AI, the future of storage administrators looks bright, and I believe enterprises will finally value these IT professionals appropriately.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Why Data Curation Is The Key To Enterprise AI

Forbes

07-04-2025


Nick Burling, Senior Vice President of Product at Nasuni.

All the enterprise customers and end users I'm talking to these days are dealing with the same challenge. The number of enterprise AI tools is growing rapidly as ChatGPT, Claude and other leading models are challenged by upstarts like DeepSeek. There's no single tool that fits all, and it's dizzying to try to analyze all the solutions and determine which ones are best suited to the particular needs of your company, department or team.

What's been lost in the focus on the latest and greatest models is the paramount importance of getting your data ready for these tools in the first place. To get the most out of the AI tools of today and tomorrow, it's important to have a complete view of your file data across your entire organization: the current and historical digital output of every office, studio, factory, warehouse and remote site, involving every one of your employees. Curating and understanding this data will help you deploy AI successfully.

The potential of effective data curation is clear in the development of self-driving cars. Robotic vehicles can rapidly identify and distinguish between trees and cars in large part because of a dataset called ImageNet. This collection contains more than 14 million images of common everyday objects that have been labeled by humans. Scientists were able to train object recognition algorithms on this data because it was curated. They knew exactly what they had.

Another example is the use of machine learning to identify early signs of cancer in radiological scans. Scientists were able to develop these tools in part because they had high-quality data (radiological images) and a deep understanding of the particulars of each image file. They didn't attempt to develop a tool that analyzed all patient data or all hospital files. They worked with a curated segment of medical data that they understood deeply.
Now, imagine you're managing AI adoption and strategy at a civil engineering firm. Your goal is to utilize generative AI (GenAI) to streamline the process of creating proposals. And you've heard everyone in the AI world boasting about how this is a perfect use case. A typical civil engineering firm is going to have an incredibly broad range of files and complex models. Project data is going to be multimodal—a mix of text, video, images and industry-specific files. If you were to ask a standard GenAI tool to scan this data and produce a proposal, the result would be garbage.

But let's say all this data was consolidated, curated and understood at a deeper level. Across tens of millions of files, you'd have a sense of which groups own which files, who accesses them often, what file types are involved and more. Assuming you had the appropriate security guardrails in place to protect the data, you could choose a tool specifically tuned for proposals and securely give that tool access to only the relevant files within your organization. Then, you'd have something truly useful that helps your teams generate better, more relevant proposals faster.

Even with curation, there can be challenges. Let's say a project manager (PM) overseeing multiple construction sites wants to use a large language model (LLM) to automatically analyze daily inspection reports. At first glance, this would seem to be a perfect use case, as the PM would be working with a very specific set of files. In reality, though, the reports would probably come in different formats, ranging from spreadsheets to PDFs and handwritten notes. The dataset might include checklists or different phrasings representing the same idea. A human would easily recognize this collected data as variations of a site inspection report, but a general-purpose LLM wouldn't have that kind of world or industry knowledge. A tool like this would likely generate inaccurate and confusing results.
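A first pass at the kind of curation described here — knowing what file types you actually have before pointing any AI tool at them — can be as simple as profiling a file share. The sketch below is an illustration only, assuming a plain directory walk; the demo tree and file names are invented stand-ins for a real share.

```python
import tempfile
from collections import Counter
from pathlib import Path

def profile_files(root):
    """Count files by extension under a root -- a first-pass curation step."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            counts[path.suffix.lower() or "(none)"] += 1
    return counts

# Demo on a throwaway tree standing in for a real file share.
root = Path(tempfile.mkdtemp())
for name in ["site1/report.pdf", "site1/checklist.xlsx",
             "site2/report.pdf", "site2/notes.txt"]:
    p = root / name
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()

profile = profile_files(root)
# profile == Counter({'.pdf': 2, '.xlsx': 1, '.txt': 1})
```

Even a crude inventory like this surfaces the format variation (spreadsheets, PDFs, notes) that would trip up a general-purpose tool — the same variation the PM in the example would need to recognize before investing in one.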
Yet, having curated and understood this data, the PM would still be in a much better position. They'd recognize early that the complexity and variation in the inspection reports would lead to challenges and save the organization the expense and trouble of investing in an AI tool for this application.

The opportunities that could grow out of organization-wide data curation stretch far beyond specific departmental use cases. Because most of your organization's data resides within your security perimeter, no AI model has been trained on those files. You have a completely unique dataset that hasn't yet been mined for insights. You could take the capabilities of the general AI models developed in training on massive, general datasets and (with the right security framework in place) fine-tune them to your organization's unique gold mine of enterprise data.

This is already happening at an industry scale. The virtual paralegal Harvey has been fine-tuned on curated legal data, including case law, statutes, contracts and legal briefs. BioBERT, a model optimized for medical research, was trained on a curated dataset of biomedical texts. The researchers who developed this tool did so because biomedical texts use such specialized language.

Whether you want to embark on an ambitious project to create a fine-tuned model or select the right existing tool for a department or project team's needs, it all starts with data curation. In this period of rapid change and model evolution, the one constant is that if you don't know what sort of data you have, you're not going to know how to use it.
