Why AI Is Ending The Era Of The End User

Forbes

The term 'end user' is outdated, if not downright dangerous, in the age of AI. I was first introduced to the limitations of 'end user' thinking a decade ago when I managed the rollout of an oil & gas exploration toolset. I expected clear processes and defined user roles. Instead, I found the corporate Wild West: geoscientists and engineers working in open-ended collaboration. There were no playbooks, just expert judgment and debate. Trying to reduce those interactions to 'end user' terms missed the complexity that made the work succeed.

Today, that same mistake is being made as AI reshapes the labor landscape. In AI-driven environments, data, expertise, and curiosity combine to shape critical decisions. Calling someone an 'end user' of AI doesn't just sound outdated; it misrepresents our responsibility to direct (and, more importantly, challenge) the outcomes of intelligent systems. So let's talk about why this language is holding us back and how to move beyond it.

Where 'End User' Came From and Why It Stuck

The term 'end user' was born from 1960s systems engineering, where it described non-technical staff operating finalized tools. It marked the final point of waterfall-style system design: processes (and value) flowed one way, with 'end users' at the end of the flow as passive recipients.

Forty years later, Agile software development methodology updated this framing with the 'user story' format: As a [role], I want [function]. While this appears to be a more nuanced understanding of how a tool is used, the 'roles' in this sense generally refer to software license types and security groups, not organizational roles. From a technology perspective, this makes standardization across environments easier, improving supportability and scalability. The unintended result, however, is that the actual people who use the tool are often missing from design conversations. Modern technology is not built for the people using it; the people are fitted into the technology.

As the co-founder of a change firm that explicitly bridges the gap between technology partners and their clients, I am perhaps especially sensitive to this dynamic. While our consultants have a variety of methods to bring these perspectives together, I am keenly aware that the gap (and the risk for clients) is widening with the introduction of AI.

Because humans are so accustomed to adapting to technology, the greatest danger of AI is our tendency to trust it blindly. We have never worked in a space where the outputs of our tools carry the risk of such blatant errors and hallucinations. Research shows humans are prone to automation bias, the tendency to over-trust automated systems even when those systems are proven wrong. Recent incidents, from lawyers submitting briefs with fabricated case law to employees publishing unverified AI outputs, underscore how this bias results in real damage.

With AI, interacting with technology can no longer be a passive activity. We cannot approach AI as a tool we are simply using; we must remember we are collaborating with intelligent systems. Our decisions don't just shape the quality of the work; they determine the outcomes themselves. We must understand where accountability starts and ends with AI.

Microsoft's idea of calling humans 'agent bosses' who manage AI like junior employees gets part of this right. But it still defines people via their relationship to the tool, not their responsibility for decisions.

As AI systems become more modular and agent-based, authority, visibility, and accountability will fragment across organizations. Labels like 'end user' or 'agent boss' don't just oversimplify this; they erase it.

5 Ways to Rethink End Users in the Age of AI

We need to move away from grouping stakeholders into a single bucket (no matter the name) and towards a more nuanced and informed understanding of how they will interact with each other. Here are five things organizations can do to mitigate the dangers of irresponsible AI usage:

The term 'end user' belongs to an earlier era of linear systems and passive tools. Today, people are not just recipients of data and outputs. Our shared success depends on remembering that humans are no longer endpoints; we're the ones steering the system. To move confidently into this new future, we must name roles with precision, embed accountability into design, and foster active oversight at every level. AI will not replace judgment and critical thinking, but it will amplify the consequences of neglecting them.
