
Business Leaders Called To Align Tech Decisions With Corporate Values
Signs point to an emerging, if informal, social contract on AI deployment.
With the rapid rollout of AI, corporate leaders are increasingly being called to consider the proper alignment between technology strategies and organizational purposes and values.
It's a call that speaks to an informal, yet important, 'social license' between companies and their stakeholders governing the use of technology and its impact on labor, among other interests. And it's a call reflected in recent comments from influential religious, legal and business leaders, including Pope Leo XIV, Amazon CEO Andrew Jassy, and Wachtell, Lipton, Rosen & Katz founding partner Martin Lipton.
Attention to this informal social license arose from President Joseph Biden's 2023 Executive Order on the 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.' This (now revoked) Executive Order identified eight specific principles to guide AI development, including a commitment to supporting American workers and preventing 'harmful labor-force disruptions'.
The National Association of Corporate Directors ('NACD') indirectly acknowledged the AI social license in its 2024 Blue Ribbon Commission Report, 'Technology Leadership in the Boardroom: Driving Trust and Value.' The Report called upon boards to 'move fast and be bold' with respect to AI deployment, while simultaneously acting as a 'guardrail to uphold organizational values and protect stakeholders' interests'.
In a May 12, 2025, address to the College of Cardinals, Pope Leo spoke broadly about the social concerns with AI, focusing particularly on what he described as the challenges to the defense of human dignity, justice and labor that arise from 'developments in the field of artificial intelligence.' A recent article in The Wall Street Journal chronicled the long-running dialogue between the Vatican and Silicon Valley on the ethical implications of AI.
Indeed, on June 17, Pope Leo delivered a written message to a two-day international conference in Rome focusing on AI, ethics and corporate governance. In his message, the Pope urged AI developers to evaluate the technology's implications in the context of the 'integral development of the human person and society…taking into account the well-being of the human person not only materially, but also intellectually and spiritually…'.
This 'alignment' concern was underscored by a recent post from the highly regarded Mr. Lipton, which encouraged corporate boards to maintain their organizational values while pursuing value through AI: 'Boards should consider in a balanced manner the effect of technological adoptions on important constituencies, including employees and communities, as opposed to myopically seeking immediate expense-line efficiencies at any cost.'
There is little question that, for many companies, generative AI is likely to have a disruptive impact on labor: the efficiency gains expected from AI implementation could result in a reduced or dramatically altered workforce. The related question is the extent to which 'corporate values' should encompass a response to tech-driven labor disruption. Note in this regard the NACD's long-standing position that a positive workforce culture is a significant corporate asset.
A recent memo from Amazon CEO Andrew Jassy offers a positive example of how to address the strategy/values alignment challenge ‒ by being transparent with employees, well in advance, about the coming transformation and its impact on the workforce, and by offering practical suggestions on how employees can best prepare for it:
Those who embrace this change, become conversant in AI, help us build and improve our AI capabilities internally and deliver for customers, will be well-positioned to have high impact and help us reinvent the company.
As boards work with management to deploy AI, they should maintain a regular dialogue about which values-centered decisions the board must be informed of, which it may be asked to decide, and which it may merely advise on. Such a dialogue is likely to deepen the reflection on corporate purposes and values within decisions regarding strategy and technology.
Of course, such alignment can come in many different ways and from many different directions, the Amazon approach being one of them. There are no established guidelines on how leadership might approach the strategy/values alignment discussion. But there is growing recognition that corporate values must be incorporated, in some manner, into AI decision-making.
Most likely, effective alignment will balance the inevitability of AI-driven workforce impact with initiatives that advance employee well-being and 'positively augment human work,' including initiatives that minimize job-displacement risks and maximize career opportunities related to AI.
For as the NACD suggests, the ultimate message to boards on AI deployment is that '[I]t's about what you can do, but also what you should do.'