
MCP Connects, SDP Delivers: The Missing Half of AI Memory is Here
Key Takeaways
Model Context Protocol (MCP) standardizes how AI connects to external tools but doesn't define what memory content should look like
Semantic Digest Protocol (SDP) provides trust-scored, fragment-level memory objects for reliable AI operations
Multi-agent systems typically fail due to missing shared, verifiable context rather than communication issues
MCP and SDP together form a complete memory architecture that stops hallucinations and contextual drift
MedicareWire will implement SDP in 2025 as the first major deployment of AI-readable, trust-verified memory in a regulated domain
AI's Memory Crisis: Why Today's Systems Can't Remember What Matters
Today's AI systems face a critical problem: they process vast information but struggle with reliable memory. This isn't merely a technical issue — it's what causes hallucinations, inconsistency, and unreliability in advanced AI deployments.
This problem becomes obvious in multi-agent systems. When specialized AI agents work together, they don't typically fail from poor communication. They fail because they lack shared, scoped, and verifiable context. Without standardized memory architecture, agents lose alignment, reference inconsistent information, and produce unreliable results.
David Bynon, founder at MedicareWire, identified this issue early on. In regulated areas like Medicare, incorrect information can seriously impact consumers making healthcare decisions.
The solution needs two protocols working together to create a complete memory system for AI. The first protocol, Model Context Protocol (MCP), addresses the connection problem. But it's just half of what's needed for truly reliable AI memory.
Understanding Model Context Protocol (MCP)
IBM recently recognized the Model Context Protocol (MCP) as core infrastructure for AI systems, describing it as 'USB-C for AI' — a universal connector standard allowing AI models to connect with external tools, data sources, and memory systems.
This recognition confirmed what many AI engineers already understood: standardized connections between AI models and external resources are the foundation for building reliable systems at scale.
IBM's Recognition: The 'USB-C for AI' Breakthrough
The USB-C comparison makes sense. Before USB standardization, connecting devices to computers required numerous proprietary ports and cables. Before MCP, every AI tool integration needed custom code, fragile connections, and ongoing maintenance.
IBM's official support of MCP acknowledged that AI's future requires standardized interfaces. Just as USB-C connects any compatible device to any compatible port, MCP creates a standard protocol for AI systems to interact with external tools and data sources.
What MCP Solves: The Transport Problem
MCP handles the transport problem in AI systems. It standardizes how an AI agent:
Negotiates with external systems about needed information
Creates secure, reliable connections to tools and data sources
Exchanges information in predictable, consistent formats
Maintains state across interactions with various resources
This standardization allows developers to build tools once for use with any MCP-compliant AI system. Custom integrations for each new model or tool become unnecessary — just consistent connectivity across platforms.
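To make that concrete, here is a minimal sketch of the JSON-RPC 2.0 message shapes MCP standardizes. The 'initialize' and 'tools/call' methods come from the MCP specification; the protocol version string, client name, and 'lookup_plan' tool are illustrative assumptions, not any real server's interface.

```python
# A minimal sketch of the JSON-RPC 2.0 messages MCP standardizes.
# "initialize" and "tools/call" are MCP method names; the protocol version,
# client name, and "lookup_plan" tool are illustrative assumptions.

import json

# Step 1: the agent negotiates capabilities with an external resource.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # assumed version string
        "capabilities": {},
        "clientInfo": {"name": "example-agent", "version": "0.1.0"},
    },
}

# Step 2: the agent calls a tool in a predictable, consistent format.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_plan",  # hypothetical tool exposed by a server
        "arguments": {"plan_id": "H1234-001", "county": "Los Angeles"},
    },
}

print(json.dumps(initialize_request, indent=2))
print(json.dumps(tool_call_request, indent=2))
```

Because the envelope is the same for every server and every tool, a tool built once against this shape works with any MCP-compliant AI system.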
The Critical Gap: Missing Content Definition
Despite its value, MCP has a major limitation: it defines how AI systems connect, but not what the content should look like. This resembles standardizing a USB port without defining the data format flowing through it.
This creates a significant gap in AI memory architecture. While MCP handles connections, it doesn't address:
How to structure memory for machine understanding
How to encode and verify trust and provenance
How to scope and contextualize content
How information fragments should relate to each other
This explains why AI systems with excellent tool integration still struggle with reliable memory — they have connections but lack content structure for trustworthy recall.
Semantic Digest Protocol: The Memory Layer MCP Needs
This is where the Semantic Digest Protocol (SDP) fits — built to work with MCP while solving what it leaves unaddressed: defining what memory should actually look like.
Trust-Scored Fragment-Level Memory Architecture
SDP organizes memory at the fragment level, instead of treating entire documents as single information units. Each fragment — a fact, definition, statistic, or constraint — exists as an independent memory object with its own metadata.
These memory objects contain:
The actual information content
A trust score based on source credibility
Complete provenance data showing information origin
Scope parameters showing where and when the information applies
Contextual relationships to other memory fragments
This detailed approach fixes a basic problem: AI systems must know not just what a fact is, but how much to trust it, where it came from, when it applies, and how it connects to other information.
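As a concrete illustration, a single SDP memory object might look like the sketch below. SDP's actual schema isn't reproduced in this article, so every field name here is an assumption derived from the five components just listed.

```python
# Hypothetical SDP memory fragment. All field names are assumptions based on
# the components described above (content, trust, provenance, scope, links);
# they are not SDP's published schema.

moop_fragment = {
    "content": "The 2025 Maximum Out-of-Pocket limit for this plan is $4,200.",
    "trust_score": 0.97,  # derived from source credibility
    "provenance": {
        "source": "CMS plan data",  # where the information originated
        "retrieved": "2025-08-01",
    },
    "scope": {
        "geography": "Los Angeles County, CA",  # where it applies
        "valid_for": "2025 plan year",          # when it applies
    },
    "relationships": [
        # hypothetical link to a related fragment
        {"rel": "part_of", "fragment_id": "plan:H1234-001:costs"},
    ],
}
```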
To extend the 'USB-C for AI' analogy, SDP is the universal thumb drive for the Model Context Protocol: it provides data, across multiple surfaces, in a format MCP recognizes and understands.
Machine-Ingestible Templates in Multiple Formats
SDP creates a complete trust payload system with templates in multiple formats:
JSON-LD for structured data interchange
TTL (Turtle) for RDF graph representations
Markdown for lightweight documentation
HTML templates for web publication
Invented by David Bynon as a solution for MedicareWire, SDP works immediately with existing systems thanks to this format flexibility, while adding the necessary trust layer. For regulated sectors like healthcare, where MedicareWire operates, that trust layer changes AI interactions from educated guesses into verified responses.
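As a rough illustration of that flexibility, the sketch below renders one hypothetical fragment into two of the listed formats. The field names, the '@context' URL, and the Markdown layout are assumptions, not SDP's published templates.

```python
# Renders one hypothetical fragment into JSON-LD-style data and Markdown.
# Field names, the @context URL, and layout are illustrative assumptions.

import json

fragment = {
    "content": "2025 Maximum Out-of-Pocket: $4,200 (in-network)",
    "trust_score": 0.97,
    "provenance": {"source": "CMS plan data", "retrieved": "2025-08-01"},
    "scope": {"geography": "Los Angeles County, CA", "valid_for": "2025"},
}

def to_jsonld(frag: dict) -> str:
    """Render the fragment as JSON-LD-style structured data.
    The @context URL is a placeholder, not a real vocabulary."""
    doc = {
        "@context": "https://example.org/sdp/context.jsonld",
        "@type": "MemoryFragment",
        **frag,
    }
    return json.dumps(doc, indent=2)

def to_markdown(frag: dict) -> str:
    """Render the same fragment as lightweight Markdown documentation."""
    return (
        f"**{frag['content']}**\n\n"
        f"- Trust score: {frag['trust_score']}\n"
        f"- Source: {frag['provenance']['source']}\n"
        f"- Scope: {frag['scope']['geography']}, {frag['scope']['valid_for']}\n"
    )

print(to_jsonld(fragment))
print(to_markdown(fragment))
```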
The Complete AI Memory Loop: MCP + SDP in Action
When MCP and SDP work together, they form a complete memory architecture for AI systems. Here's the workflow:
From User Query to Trust-Verified Response
The process starts with a user query. Example: 'What's the Maximum Out-of-Pocket (MOOP) limit for this Medicare Advantage plan in Los Angeles?'
The AI model uses MCP to negotiate context with external resources. It identifies what specific plan information it needs and establishes connections to retrieve that data.
The external resource sends back an SDP-formatted response with the requested information. This includes the MOOP value, geographic scope (Los Angeles County), temporal validity (2025), and provenance (directly from CMS data), all with appropriate trust scores.
With trust-verified information, the model answers accurately: 'The 2025 Maximum Out-of-Pocket limit for this plan in Los Angeles County is $4,200, according to CMS data.'
No hallucination. No vague references. No outdated information. Just verified, scoped, trust-scored memory through standardized connections.
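Stitched together, the loop might look like the sketch below. The function names, trust threshold, and payload fields are all hypothetical; the point is the shape of the flow: connect via MCP, receive an SDP fragment, and answer only from trust-verified, in-scope data.

```python
# Hypothetical end-to-end sketch of the MCP + SDP loop described above.
# Nothing here is a real SDK: the function names, threshold, and payload
# fields are assumptions used to show the shape of the flow.

TRUST_THRESHOLD = 0.9  # assumed policy: answer only from high-trust fragments

def fetch_fragment_via_mcp(plan_id: str, county: str) -> dict:
    """Stand-in for an MCP tools/call round trip that returns an SDP payload."""
    return {
        "content": "$4,200",
        "trust_score": 0.97,
        "provenance": {"source": "CMS"},
        "scope": {"geography": f"{county} County", "valid_for": "2025"},
    }

def answer_moop_query(plan_id: str, county: str) -> str:
    """Answer only when the retrieved fragment clears the trust threshold."""
    fragment = fetch_fragment_via_mcp(plan_id, county)
    if fragment["trust_score"] < TRUST_THRESHOLD:
        return "I can't verify that figure against a trusted source."
    return (
        f"The {fragment['scope']['valid_for']} Maximum Out-of-Pocket limit "
        f"for this plan in {fragment['scope']['geography']} is "
        f"{fragment['content']}, according to "
        f"{fragment['provenance']['source']} data."
    )

print(answer_moop_query("H1234-001", "Los Angeles"))
```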
Eliminating Hallucinations Through Verified Memory
This method addresses what causes hallucinations in AI systems. Rather than relying on statistical patterns from training, the AI retrieves specific, verified information with full context about reliability and applicability.
When information changes, there's no need to retrain the model. The external memory layer updates, and the AI immediately accesses the new information, complete with trust scoring and provenance tracking.
Real-World Implementation: MedicareWire 2025
This isn't theoretical — SDP launches on MedicareWire.com in August 2025, marking the first major implementation of AI-readable, trust-scored memory in a regulated domain.
1. First Large-Scale Deployment in a Regulated Domain
The healthcare industry, especially Medicare, offers an ideal testing ground for trust-verified AI memory. Incorrect information has serious consequences, regulations are complex, and consumers need reliable guidance through a confusing system.
MedicareWire's implementation will give AI systems unprecedented accuracy when accessing Medicare plan information. Instead of using potentially outdated training data, AI systems can query MedicareWire's SDP-enabled content for current, verified information about Medicare plans, benefits, and regulations.
2. Solving Healthcare's Critical Information Accuracy Problem
Consumers using AI assistants for Medicare options will get consistent, accurate information regardless of which system they use. The SDP implementation ensures any AI agent can retrieve precise details about:
Plan coverage specifications
Geographic availability
Cost structures and limitations
Enrollment periods and deadlines
Regulatory requirements and exceptions
Each detail comes with proper attribution, scope, and trust scoring.
3. Creating the Foundation for Multi-Agent Trust Infrastructure
Beyond immediate benefits for Medicare consumers, this implementation creates a blueprint for trust infrastructure in other regulated fields. Multi-agent systems will have shared, verifiable context — eliminating drift and hallucination problems that affect complex AI deployments.
The combination of MCP's standardized connections and SDP's trust-verified memory builds the foundation for reliable AI systems that can safely operate in highly regulated environments.
From Connection to Memory: The Future of Reliable AI Is Here
David Bynon, founder of Trust Publishing and architect of SDP, states: 'We didn't just create a format. We created the trust language AI systems can finally understand — and remember.'
As AI shapes important decisions in healthcare, finance, legal, and other critical fields, reliable, verifiable memory becomes essential. The MCP+SDP combination shifts from probabilistic guessing to trust-verified information retrieval — defining the next generation of AI applications.
SDP will be available as an open protocol for non-directory systems, supporting broad adoption and continued development across the AI ecosystem. As the first major implementation, MedicareWire's deployment marks the beginning of a new phase in trustworthy artificial intelligence.
MedicareWire is leading development of trustworthy AI memory systems that help consumers access accurate healthcare information when they need it most.
David Bynon
101 W Goodwin St # 2487
Prescott
Arizona
86303
United States