Stanford Analyzes Worker Preferences For AI

Forbes · 5 days ago
Many of us have internalized this notion that we're soon going to be working side-by-side with robots, or at least AI agents and entities.
So as humans, what do we want these digital colleagues of ours to do?
How does delegation work?
A recent Stanford study dug into this: the authors surveyed 1,500 workers across more than 100 types of jobs to see what they really think about AI adoption.
I thought this comment by one of the authors summed up the purpose of the report well:
'As AI systems become increasingly capable, decisions about how to deploy them in the workplace are often driven by what is technically feasible,' writes project leader Yijia Shao, a Ph.D. student in the Stanford computer science department, 'yet workers are the ones most affected by these changes and the ones the economy ultimately relies on.'
In other words, it's the front-line workers who are going to be most affected by these changes, so we might as well hear what they have to say (in addition to doing all kinds of market research). There's a reason the suggestion box is a time-tested element of business intelligence. Technology has to be a good fit – it's not something you implement carelessly, throwing darts at a wall and then expecting all of the people involved to sign on and go along for the ride.

Some Results
In terms of actual study findings, the Stanford team found that a lot of it, as Billy Joel famously sang, comes down to trust: 45% of respondents had doubts about reliability, and a reported 23% were worried about job loss.
As for the types of tasks that workers favored automating, the study provides a helpful visual that plots the must-haves against the danger zones of adoption.
Specifically, the Stanford researchers split this into a 'green light zone' (tasks workers want automated and that AI can handle) and a 'red light zone' (tasks AI could handle but workers don't want automated), along with a 'low priority zone' and an 'opportunity zone' of uses workers might want but that are not yet technically viable.
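As a rough way to picture that grid, here is a minimal sketch – my own illustration, not code from the study – that bins a task into one of the four zones given a worker-desire score and a technical-capability score. The example tasks, scores, and the 3.0 cutoff on a 1-to-5 scale are all hypothetical:

```python
# Hypothetical illustration of the four zones: each task gets a worker-desire
# score and a technical-capability score (1-5 here), and the combination
# determines the zone. The scores and the 3.0 cutoff are invented.

def classify_task(desire: float, capability: float, cutoff: float = 3.0) -> str:
    """Bin a task into one of the four zones."""
    if desire >= cutoff and capability >= cutoff:
        return "green light"   # workers want it automated, and AI can do it
    if desire < cutoff and capability >= cutoff:
        return "red light"     # AI can do it, but workers would rather it didn't
    if desire >= cutoff and capability < cutoff:
        return "opportunity"   # wanted, but not yet technically viable
    return "low priority"      # neither wanted nor currently feasible

# Invented example scores, loosely echoing the tasks mentioned in this article.
tasks = {
    "scheduling for tax preparers": (4.2, 4.5),
    "preparing municipal meeting agendas": (2.1, 4.0),
    "tracing lost or misdirected baggage": (2.4, 2.2),
    "arranging distribution of material": (4.0, 2.5),
}

for name, (desire, capability) in tasks.items():
    print(f"{name}: {classify_task(desire, capability)} zone")
```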
Uses in the green light zone include scheduling tasks for tax preparers, quality control reporting, and the interpretation of engineering reports.
Red light uses that workers are wary of include the preparation of meeting agendas for municipal clerks, as well as the task of contacting potential vendors in logistics analysis.
There's also researching hardware or software products, a task that surveyed computer network support specialists seem to prefer to handle themselves.
I thought it was funny that one item in the low priority zone was 'tracing lost, delayed or misdirected baggage,' a job typically done by ticket agents. That explains a lot for those legions of hapless travelers arriving at their faraway Airbnbs without so much as a toothbrush.
As for opportunities, it seems that technical writers would like AI to arrange the distribution of material, computer scientists would largely sign off on AI working on operational budgets, and video game designers would like production schedules automated.

Why Automate?
I also came across a section of the study where the researchers looked at respondents' reasons for wanting automation.
It seems that more than 2,500 surveyed workers want to automate a task because it would free up time for other kinds of work.
About 1,500 cited 'repetitive or tedious' tasks that could be automated, and about the same number suggested that automating a particular task would improve the quality of the work done.
A lower number suggested automating stressful or mentally draining tasks, or those that are complicated or difficult.
The study also rated tasks on a scale of control with five gradations, ranging from 'AI agent drives task completion' through 'equal partnership' to 'human drives task completion'. You can see the entire thing here, or listen to one of my favorite podcasts on the subject here.
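For flavor, here's one way that five-level scale could be represented in code – a hypothetical sketch, with the three named levels taken from this article and the two intermediate labels paraphrased by me, so check them against the study itself:

```python
# Hypothetical sketch of the five-level control scale. The three named levels
# come from the article; levels 2 and 4 are my paraphrases of the "two other
# gradations" and should be verified against the study.
from enum import IntEnum

class HumanAgency(IntEnum):
    AI_DRIVES = 1            # "AI agent drives task completion"
    AI_LEADS = 2             # paraphrased intermediate gradation
    EQUAL_PARTNERSHIP = 3    # "equal partnership"
    HUMAN_LEADS = 4          # paraphrased intermediate gradation
    HUMAN_DRIVES = 5         # "human drives task completion"

# Example: recording a worker's preferred level of control for one task.
preference = {"preparing meeting agendas": HumanAgency.HUMAN_DRIVES}
print(preference)
```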
One of the headline items is a prediction of diminishing need for analysis and information-processing skills, with focus shifting toward managerial, interpersonal, and coordination roles. How this will shake out, though, concerns many workers, and I would suggest that 23% of respondents worrying about job displacement is a wildly low number. Almost anybody anywhere should be worried about job displacement. Regardless of what happens in the long term, many experts are predicting extremely high unemployment in the years to come, as we work out the kinks in the biggest technological transformation of our time.
Anyway, this study brings a lot of useful information to the question of what we want AI to do for us in the enterprise.

Related Articles

‘WE'RE NOT LEARNING ANYTHING': Stanford GSB Students Sound The Alarm Over Academics

Yahoo · a day ago

Stanford University front entrance. Linda A. Cicero/Stanford News Service

Stanford Graduate School of Business, long considered among the most elite MBA programs in the world, is facing a storm of internal criticism from students who say the academic experience has fallen far short of expectations. In a series of interviews with Poets&Quants, current MBA students voiced concerns about outdated course content, a disengaged faculty culture, and a broken curriculum structure that they say leaves them unprepared for post-MBA careers – and, worse, dilutes the reputation and long-term value of a Stanford degree by producing scores of grads unprepared for the modern world of work.

'We're coming to the best business school on Earth, and the professors can't teach,' says a rising second-year MBA student and elected member of the school's Student Association. 'We're not learning anything. The brand is strong, but there's nothing here to help you build discernible skills.' The student and their peers have been sounding the alarm to administrators, they say, but they've been met with resistance, delays, or indifference.

At the core of their frustration is a belief that the school's curriculum has not adapted to the realities of a rapidly evolving business world. While some faculty members have been receptive and collaborative when students raise concerns, they say, others see teaching as a secondary priority, and administrators have been slow or reluctant to act. In speaking candidly to Poets&Quants, the students asked for anonymity to avoid repercussions for their student experience and employment prospects. (See 'AI Is Devaluing The MBA': Stanford Students Speak Out On Curriculum Lag & The Risk To The B-School's Brand.)

Meanwhile, a senior member of the GSB's leadership team tells P&Q that they 'hear the students' concerns,' and new Dean Sarah Soule, who began her tenure in June, adds that 'This is an extremely important set of issues, which I take very seriously.'

As an example of what's gone wrong, the rising second-year student points to Stanford GSB's required Optimization and Simulation Modeling classes. 'They feel like they were designed in the 2010s,' the student says. 'We're living in an AI age, but there's nothing here that reflects that.' The student describes courses where the ability to 'prompt well' or subscribe to a premium AI tool matters more than actual understanding. The student and others say they increasingly teach themselves material outside class because what's offered isn't novel or skill-building. One says that in a required course, they were offered little more than what amounted to 'a five-minute Excel tutorial' or 'teaching me how to use Copilot, not teaching me how to use data.'

Students also raised concerns about the school's teaching culture. Several said that professors often treat teaching as a nuisance, a not uncommon critique at many B-schools because of the predominance of academic research. One faculty member reportedly told colleagues, 'If you're worried about the class you teach, you're doing it wrong.' In stark contrast with HBS's 'cold calling' method, where any student can be called on at any time to answer a question about a reading or synthesize the current material, professors will often send out a 'Room Temp' list the day before class, naming the five to seven people who may be called on in this manner. 'You know what that teaches the students?' one student asks. 'It teaches them that they don't have to read or prepare before class if they're not on the list. It teaches us that we don't have to learn.'

GSB's curriculum is structured around core and 'Distribution' requirements that are meant to teach students fundamental business concepts. The GSB's website describes the first-year curriculum as 'Designed to make sure you're ready for anything and everything – to build your analytical foundation and intuitive skills to succeed in whatever comes next.' But the students who spoke with P&Q say these requirements are uncoordinated and incoherent. For example, when choosing 'Distribution' requirements, students must choose from a narrow menu of around 15 electives – some of which overlap, like two courses on online marketplaces, but none on foundational business strategy. 'Nearly everyone took 'Strategy Beyond Markets,' which is about influencing governments to allow you to do business,' the student says. 'And the only reason many of us took this is because it was one of the few 'Distribution' classes that had seats.'

The most popular classes, they say, are often out of reach. One student recounts that one of the in-demand Distribution classes, Financial Restructuring, filled almost immediately – first with second-year MBA students and then with first-years. 'How can you have a system where you can ONLY take from a choice of 15 classes, but seats aren't guaranteed to you? It's insanity.' Stanford uses a lottery system that randomly assigns students priority numbers to enroll. 'I put a class at the top of my list and still did not get in,' the student says. 'You're paying $250,000 and might not get a single class you came here for. Sounds unlikely, but it happens all the time.'

Even for classes filled during 'Super Round,' a pre-registration lottery that lets the highest-demand classes fill first, there are no guarantees: The student points to high-demand electives like Product Market Fit, taught by a well-known investor, that routinely shut out more than half of interested students. 'They know the class is gold. Why isn't the school offering more sections?' Even lower-demand courses, such as Graham Weaver's Managing Growing Enterprises, fill before the first draft of Super Round closes. 'Getting into Stanford was enough of a lottery. I'm shocked that I'm here and still unable to register for classes I want,' the student says.

The rising second-year student and Student Association member shares results from Stanford GSB's own winter student survey, which show a sharp drop in those who agree with the statement, 'My classes are interesting and engaging.' 'This is the lowest it's been in two or three years,' the student says. 'It's a 2.9 on a 5-point scale. The floor is 1. Would you ever buy something from Amazon with 2.9 stars?'

The student rejects the notion that Stanford GSB students aren't interested in learning. After all, these are high achievers who earned admission to the most selective B-school in the world: Last fall the GSB admitted just 6.8% of 7,295 applicants. 'It's not that Stanford picks people who don't care about academics,' they say. 'It's that the academic experience is just that bad. Stanford doesn't admit duds. They admit fireworks, then forget to light the fuse.'

The student and their peers in the Student Association have proposed changes – revamping the core curriculum, reforming the Distribution system, expanding popular classes – but say those proposals are often dismissed by deans unwilling to expend political capital.

With a new dean, Sarah Soule, having officially begun her term in June, students hope the window for change may be opening. 'This could be the moment to fix things,' the rising second-year student says. 'But someone has to listen. If leadership doesn't act now, we're going to lose more than just student satisfaction. We're going to lose the value of the degree.' For now, the student and others are seeking to make their voices heard – through media, alumni outreach, and direct appeals to the administration. As the student puts it: 'We're not trying to burn the place down. We love it here. We just want it to be worthy of the name.'

Asked to respond to a list of the issues laid out by GSB students, Anne Beyer, senior associate dean for academic affairs, tells P&Q in an email that 'We hear the students' concerns. The new leadership team at the GSB has only been in place for a little over a month, and I can assure you that we have a commitment to our students and curriculum. I took on this role because I care deeply about the student experience and the academic journey at the GSB. Dean Sarah Soule and I take these recent comments seriously, and addressing them is a top priority for our team.

'At the same time, it's important to recognize that some aspects of the student experience – particularly in the first year – are intentional by design. The first year is meant to establish foundations so the students are prepared for the rigor and relevance of the extraordinary elective curriculum that follows in the second year. This structure has been in place for decades, and it underpins the learning experience we aim to provide at the GSB.

'We are hopeful that as our current students progress through our program, they will continue to value this foundation – just as many alumni do. We continue to hear from our graduates how impactful these courses have been in their careers and lives.'

And Dean Sarah Soule adds: 'This is an extremely important set of issues, which I take very seriously. Senior Associate Dean Anne Beyer is the absolute right choice to take on the challenges in the MBA program, curriculum, and student experience.'

More to come: Future stories in this series will explore perspectives from more Stanford GSB students, including international students, who are concerned about declining academic rigor at one of the world's premier MBA programs. The post 'WE'RE NOT LEARNING ANYTHING': Stanford GSB Students Sound The Alarm Over Academics appeared first on Poets&Quants.

Why Do Some AI Models Hide Information From Users?

Time Business News · a day ago

In today's fast-evolving AI landscape, questions around transparency, safety, and the ethical use of AI models are growing louder. One particularly puzzling question stands out: Why do some AI models hide information from users? For an AI solutions or product engineering company, understanding this dynamic is not merely academic – building trust, maintaining compliance, and producing responsible innovation all depend on it. Drawing on research, professional experience, and the practical difficulties of large-scale AI deployment, this article examines the causes of this behavior.

AI is a powerful instrument. It can help with decision-making, task automation, content creation, and even conversation replication. But enormous power also carries a great deal of responsibility, and that responsibility at times includes intentionally withholding information from users.

Let's look at the figures:

- According to OpenAI's 2023 Transparency Report, GPT-based models declined over 4.2 million requests for breaking safety rules, such as requests involving violence, hate speech, or self-harm.
- A Stanford study on large language models (LLMs) found that more than 12% of filtered queries were not intrinsically harmful but were caught by overly aggressive filters, raising concerns about 'over-blocking' and its effect on user experience.
- Research from the AI Incident Database shows that in 2022 alone, there were almost 30 cases where private, sensitive, or confidential information was inadvertently shared or made public by AI models.

At its core, the goal of any AI model – especially an LLM – is to assist, inform, and solve problems. But that doesn't always mean full transparency. AI models are trained on large-scale datasets drawn from books, websites, forums, and more, and this training data can contain harmful, misleading, or outright dangerous content. So AI models are designed to:

- Avoid sharing dangerous information, like how to build weapons or commit crimes.
- Reject offensive content, including hate speech or harassment.
- Protect privacy by refusing to share personal or sensitive data.
- Comply with ethical standards, avoiding controversial or harmful topics.

As an AI product engineering company, we often embed guardrails – automatic filters and safety protocols – into AI systems. They are not arbitrary; they are required to prevent misuse and comply with the rules.

Expert Insight: In projects where we developed NLP models for legal tech, we had to implement multi-tiered moderation systems that auto-redacted sensitive terms. This is not over-caution; it's compliance in action.

In AI, compliance is not optional. Companies building and deploying AI must align with local and international laws, including:

- GDPR and CCPA – privacy regulations requiring data protection.
- COPPA – preventing AI from sharing adult content with children.
- HIPAA – safeguarding health data in medical applications.

These legal boundaries shape how much an AI model can reveal. For example, a model trained for healthcare diagnostics cannot disclose medical information unless authorized. This is where AI solutions companies come in, designing systems that comply with complex regulatory environments.
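As a rough illustration of the auto-redaction idea from the legal-tech example above, here is a minimal sketch. The patterns and placeholder format are hypothetical stand-ins; a production system would rely on vetted entity recognizers and jurisdiction-specific term lists rather than a few regexes:

```python
import re

# Minimal sketch of an auto-redaction pass. The patterns below are
# hypothetical stand-ins for a real sensitive-term list.
REDACTION_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or 555-867-5309."))
# -> Reach me at [REDACTED EMAIL] or [REDACTED PHONE].
```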
Some users attempt to jailbreak AI models to make them say or do things they shouldn't. To counter this, models may:

- Refuse to answer certain prompts.
- Deny requests that seem manipulative.
- Mask internal logic to avoid reverse engineering.

As AI becomes more integrated into cybersecurity, finance, and policy applications, hiding certain operational details becomes a security feature, not a bug.

Although the intentions are usually good, there are consequences. Many users, including academic researchers, find that AI models:

- Avoid legitimate topics under the guise of safety.
- Respond vaguely, creating unproductive interactions.
- Fail to explain why an answer is withheld.

For educators or policymakers relying on AI for insight, this lack of transparency can create friction and reduce trust in the technology.

Industry Observation: In an AI-driven content analysis project for an edtech firm, over-filtering prevented the model from discussing important historical events. We had to fine-tune it carefully to balance educational value and safety.

If an AI model consistently refuses to respond to a certain type of question, users may begin to suspect:

- Bias in the training data
- Censorship
- Opaque decision-making

This fuels skepticism about how the model is built, trained, and governed. For AI solutions companies, this is where transparent communication and explainable AI (XAI) become crucial.

So, how can we make AI more transparent while keeping users safe? Models should not just say, 'I can't answer that.' They should explain why, with context. For instance: 'This question may involve sensitive information related to personal identity. To protect user privacy, I've been trained to avoid this topic.' This builds trust and makes AI systems feel cooperative rather than authoritarian.

Instead of blanket bans, modern models use multi-level safety filters. Some emerging techniques include:

- SOFAI multi-agent architecture: separate AI components manage safety, reasoning, and user intent independently.
- Adaptive filtering: filters that consider user role (researcher vs. child) and intent.
- Deliberate reasoning engines: systems that use ethical frameworks to decide what can be shared.

As an AI product engineering company, we consider incorporating these layers vital in product design – especially in domains like finance, defense, or education.

AI developers and companies must communicate:

- What data was used for training
- What filtering rules exist
- What users can (and cannot) expect

Transparency helps policymakers, educators, and researchers feel confident using AI tools in meaningful ways.

Recent work, like DeepSeek's efficiency breakthrough, shows how rethinking distributed systems for AI can improve not just speed but transparency. DeepSeek used Mixture-of-Experts (MoE) architectures to cut down on pointless communication, which also means less noise in the model's decision-making path, making its logic easier to audit and interpret. Traditional systems often fail because they try to fit AI workloads into outdated paradigms. Future models should focus on:

- Asynchronous communication
- Hierarchical attention patterns
- Energy-efficient design

These changes improve not just performance but also trustworthiness and reliability, both key to information transparency.
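Returning to the 'explain why, with context' pattern described above, here is a minimal sketch of a refusal layer that returns a reasoned message instead of a bare denial. The categories, keyword triggers, and wording are all invented for illustration; real systems use trained safety classifiers rather than keyword matching:

```python
# Minimal sketch of refusal-with-explanation. Categories, triggers, and
# wording are invented; real systems use trained safety classifiers.
REFUSALS = {
    "personal_data": (
        ["home address", "social security number"],
        "This question may involve sensitive personal information. "
        "To protect user privacy, I've been trained to avoid this topic.",
    ),
    "dangerous": (
        ["build a weapon"],
        "This request appears to seek harmful instructions, "
        "which I must decline for safety reasons.",
    ),
}

def answer(prompt: str) -> str:
    """Return either a normal response or a refusal that explains itself."""
    lowered = prompt.lower()
    for category, (triggers, explanation) in REFUSALS.items():
        if any(t in lowered for t in triggers):
            return explanation  # say *why*, not just "I can't answer that"
    return "OK to answer: (model response would go here)"

print(answer("What is Jane's home address?"))
```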
If you're in academia, policy, or industry, understanding the 'why' behind AI information hiding allows you to:

- Ask better questions
- Choose the right AI partner
- Design ethical systems
- Build user trust

As an AI solutions company, we integrate explainability, compliance, and ethical design into every AI project. Whether it's conversational agents, AI assistants, or complex analytics engines, we help organizations build models that are powerful, compliant, and responsible.

In conclusion, AI models hide information for safety, compliance, and security reasons. But trust can only be established through transparency, clear explainability, and a strong commitment to ethical engineering. Whether you're building products, crafting policy, or doing research, understanding this behavior can help you make smarter decisions and use AI more effectively.

If you're a policymaker, researcher, or business leader looking to harness responsible AI, partner with an AI product engineering company that prioritizes transparency, compliance, and performance. Get in touch with our AI solutions experts, and let's build smarter, safer AI together. Transform your ideas into intelligent, compliant AI solutions – today.

Friederike Ernst: Using The Internet To Empower People And Communities

Forbes · 3 days ago

Friederike Ernst is an Ivy League physicist, co-founder of Web3 company Gnosis, and a mother of four children under 10.

At the age of 12, Friederike Ernst's father handed her a copy of science writer Simon Singh's The Code Book, sparking a lifelong interest in cryptography. The gift was prescient; Friederike went on to study physics to post-doctoral level at Stanford and Columbia before transitioning to the world of tech. 'I have always loved building things. I could have been a very happy carpenter,' says Friederike. 'I enjoy being in a place where I can shape things, and the next iteration of the internet, known as 'Web3' or the decentralised web, is one of the areas where we can create a better society for all.'

Through our conversation, one theme came through consistently - an ingrained distrust of authority and the idea of empowering people with agency. That is the fundamental value that underscores her mission-driven work in the tech space - creating the infrastructure that gives power and autonomy back to people, rather than to huge corporations.

'Labels really don't matter' - the changing face of tech for women

At 22, she was the only woman in her class, but she says it's rare to be the sole woman these days: 'We've made tremendous progress. We don't need to get to 50:50 representation in every field - it's true that, generally speaking, men and women have different interests - but a certain level of diversity is important.' She adds that as the only woman, there is a burden to prove yourself not just on your own behalf but as a representative of women everywhere.

Today, Friederike is a mother of four children aged between one and nine, and she challenges the idea that tech is a hostile space for women. 'In the beginning, they can underestimate you, but I feel really appreciated for my contributions. Some say it's not a good place for mothers, but I haven't found that to be the case. If you're smart, driven, and making a contribution, labels really don't matter.'

Power (and profit) to the people

Growing up in Germany, Friederike's values were shaped by counter-cultural cypherpunk ideologies grounded in resisting authority and unchecked capitalism: 'I'm a firm believer in agency; just give people the right tools and they can achieve what they want to.' That's what appeals to her about working in Web3, where decentralisation, privacy, and user ownership are prioritised. At Gnosis, she works on developing the infrastructure needed to make that happen across a diverse range of applications and sectors.

She describes Web3 as a do-over of the early internet that allows for shared agency. 'The internet was initially used for a lot of peer-to-peer interactions. Over the last 30 years, a lot of that power has been centralised to accrue value and power to the same 10 companies. Google probably has access to your search history, correspondence, and location - that's an incredible amount of information - and then they target you with related ads.' What if we could have similar services without compromising on privacy? Why should we accept that this is the quid pro quo for access to online tools and services?

Reimagining finance for everyone

Friederike explains that the principle of shared ownership, where communities, not corporations, hold the value they help create, can be applied to money and finance. At its foundation, Web3 is a neutral technology that could be steered in vastly different directions. 'Web3 is a base technology; an infrastructure. It can be used to create a utopia, but it can also be used to build an extremely effective surveillance state,' she says. 'We need to ensure that doesn't happen, and that privacy is normalised again.'

This is where Gnosis, the company she co-founded, comes in. Gnosis is building the digital tools and systems needed to make financial services more accessible, fair, and decentralised. The idea is simple: instead of profits going to a handful of big banks, tech companies, and intermediaries, the benefits should flow back to the users who actually create the value. 'We're building the foundations for a more open, equitable internet - but there's still a long way to go,' she says. 'In an open financial system, everyone should have equal access to opportunities, no matter where they live. Right now, it's incredibly difficult for people in some countries to hold foreign currencies or invest in global markets. But that kind of access is essential if we want a fairer world.'

Traditionally, banks have held a lot of power. But today, new technologies make it possible to replace that middleman. Thanks to blockchain, a secure, shared digital ledger, money can now move directly between people without a central authority to oversee it. Friederike says: 'Bitcoin was the first example of this. It started as a form of digital cash that people could send to each other without going through a bank. Over time, it evolved into what many now consider 'digital gold' because there's a limited supply - which helps protect its value over time.'

The bigger shift, she believes, will come from creating new money that is no longer dependent on central banks. At Gnosis, she helped create and launch a trust-based cryptocurrency called Circles, where users create and issue their own coins so that they can barter with other trusted people in their community. As the community using Circles grows, so too does its power as a currency.

Agency and autonomy are the values that drive Web3

Asked who she looks up to in terms of values, Friederike is reluctant to name specific role models, but says: 'I look up to people who can withstand the pressure or temptation to make money quickly.' In a space dominated by tech giants constantly looking for new ways to monetise, and digital currencies creating hype without benefits, the focus on value creation rather than value extraction is certainly refreshing.
