Amazon's director of security on locking down enterprise AI

Yahoo | 18-07-2025
This story was originally published on CIO Dive.
Cybersecurity is a growing concern for organizations as they sprint to bring AI into the enterprise.
Amid deployment efforts, AI security issues have surpassed ransomware as the top concern for nearly one-third of security chiefs, according to Arctic Wolf data. The technology's reliance on company data to produce accurate results puts cybersecurity front and center.
In June, CIO Dive spoke with Mark Ryland, director of security at Amazon, about AI's rapid rise, how executive concerns are evolving and the impacts the technology is having on cybersecurity defense.
Editor's note: This interview has been edited for length and clarity.
INDUSTRY DIVE: What is it about these latest iterations of AI, be it generative or agentic, that makes it a greater security challenge in certain respects? Why are we seeing higher concern related to AI?
MARK RYLAND: The fact that these are non-deterministic systems that can give different results with the same input: that's something that computer people have never been accustomed to. And the fact that people are just trying to apply these tools across a broad range of business problems is also a factor. We've seen hype cycles before, but this one is a little different. There is major transformation happening, for sure, and business transformations that will result from the use of this powerful technology that can use structured and unstructured data.
How has AI changed cybersecurity work for organizations? Where do you foresee it having its greatest impact?
It's already having a big impact, starting with something very simple like human language queries of analytics tools. If I'm training a cybersecurity analyst, now they can just ask intelligent questions in human language and get very good results very efficiently. Another area where we see immediate benefit is contextual summarization. If there's a security issue, a human files a ticket that says, 'Hey, I think there's something wrong here,' and now, an AI system can bring in an entire corpus of similar tickets that a human might not have been able to find with a text search. On the proactive security side, our AppSec team is using AI for better, automatic test generation. There are lots of benefits already that we're seeing, and I feel like we're just getting started.
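As a toy illustration of the contextual-summarization idea Ryland describes, the sketch below ranks an incoming ticket against a small corpus by cosine similarity over bag-of-words vectors. All ticket IDs and text are invented for the example; a production system would use learned embeddings and an LLM to summarize the top matches, rather than plain term overlap.

```python
import math
from collections import Counter

def term_vector(text: str) -> Counter:
    """Tokenize a ticket into a sparse bag-of-words term vector."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical ticket corpus.
tickets = {
    "TICKET-101": "unusual outbound traffic from build server after hours",
    "TICKET-102": "phishing email reported by finance team",
    "TICKET-103": "build host sending unexpected traffic to unknown IP overnight",
}

new_ticket = "hey, I think something is wrong: odd traffic from a build server at night"
query = term_vector(new_ticket)

# Rank the corpus by similarity to the new ticket; an LLM would then
# summarize the top matches as context for the analyst.
ranked = sorted(tickets.items(),
                key=lambda kv: cosine_similarity(query, term_vector(kv[1])),
                reverse=True)
for ticket_id, text in ranked:
    score = cosine_similarity(query, term_vector(text))
    print(f"{score:.2f}  {ticket_id}: {text}")
```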
How will the adoption of these technologies impact the cybersecurity sector workforce?
I think the desirable outcome, and the one that we're working toward, is increasing the capacity of human experts to be much more efficient and do work that was difficult to do. At the same time, we don't want to stop the process by which humans develop expertise and judgment in these areas. As an industry, we have to find a way to continue to train people, but at the same time recognize that the tools can do a lot of the work that they used to do. I think maintaining a goal of keeping human expertise at a high level is important.
How can organizations improve their cybersecurity posture as they adopt agentic AI?
What we're advocating is that people continue to use deterministic checkpoints on an agentic system. If you use identity-based controls, you have the ability to lock things down: this identity can only access this set of data. Then, if an agent is running as that identity, you've now constrained the ability of the agent to do things that you don't want it to do. Treat the agent itself as a human actor that can also make mistakes. 'Human-in-the-loop' will also be important for a while. Human-supervised feedback can also become part of a model, which then improves the accuracy of the agents.
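Here is a minimal sketch of what an identity-scoped, deterministic checkpoint around an agent might look like. The identity fields, action names, and approval flow are illustrative assumptions, not any vendor's API; the point is that the gate is ordinary deterministic code evaluated before every agent step, with a human in the loop for destructive actions.

```python
from dataclasses import dataclass, field

DESTRUCTIVE_ACTIONS = {"delete", "modify"}  # illustrative classification

@dataclass
class AgentIdentity:
    """Identity the agent runs as; field names are hypothetical."""
    name: str
    readable_datasets: set = field(default_factory=set)
    allowed_actions: set = field(default_factory=set)

def require_human_approval(identity: AgentIdentity, action: str, dataset: str) -> bool:
    """Human-in-the-loop gate for destructive actions."""
    answer = input(f"Approve {identity.name} -> {action} on {dataset}? [y/N] ")
    return answer.strip().lower() == "y"

def checkpoint(identity: AgentIdentity, action: str, dataset: str) -> bool:
    """Deterministic, identity-based gate evaluated before every agent step."""
    if dataset not in identity.readable_datasets:
        raise PermissionError(f"{identity.name} may not access {dataset}")
    if action not in identity.allowed_actions:
        raise PermissionError(f"{identity.name} may not perform {action}")
    if action in DESTRUCTIVE_ACTIONS and not require_human_approval(identity, action, dataset):
        raise PermissionError("human reviewer declined the action")
    return True

agent = AgentIdentity("support-agent",
                      readable_datasets={"tickets"},
                      allowed_actions={"read", "summarize"})

checkpoint(agent, "read", "tickets")    # passes the deterministic gate
# checkpoint(agent, "read", "payroll")  # would raise PermissionError
```

Because the gate is deterministic, it behaves the same way every time regardless of what the non-deterministic model decides to attempt.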
Do you have any advice for how IT and security can work more collaboratively with each other?
You've got to make security just as much a part of the goals you're trying to achieve as performance, cost, or any other criterion in an engineering effort. We've got to get to a point where the easiest path is a secure path, where software engineers are given an environment in which they write the business logic, but everything else is built right in for them. Another pattern that we've seen help is creating a cloud Center of Excellence: a joint skills team to which the CTO, the CIO, and the CISO all contribute experts, and which can help engineering teams modernize and onboard to cloud technology.
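One way to read 'the easiest path is a secure path' is the paved-road pattern: the platform team ships a wrapper that bakes in authentication, authorization, and audit logging, and engineers supply only the business logic. A toy sketch follows, with stand-in checks where a real platform would call shared security libraries.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def paved_road(handler):
    """Wrap business logic with baseline controls so the easy path is secure.
    authenticate/authorize are stand-ins for a platform team's shared libraries."""
    @functools.wraps(handler)
    def wrapper(request):
        user = authenticate(request)           # built-in authentication
        authorize(user, handler.__name__)      # built-in authorization
        logging.info("audit: %s called %s", user, handler.__name__)
        return handler(request)
    return wrapper

def authenticate(request):
    if request.get("token") != "valid-token":  # toy check for the sketch
        raise PermissionError("unauthenticated")
    return request.get("user", "unknown")

def authorize(user, operation):
    if user == "unknown":
        raise PermissionError(f"{user} unauthorized for {operation}")

@paved_road
def update_invoice(request):
    # Engineers write only the business logic; security is inherited.
    return {"status": "updated", "invoice": request["invoice_id"]}

print(update_invoice({"token": "valid-token", "user": "alice", "invoice_id": 42}))
```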

Related Articles

Innovation with Heart: The Promise of AI
Fox News | 9 minutes ago

AI is evolving at lightning speed, opening doors to exciting breakthroughs, and raising a few eyebrows along the way. CEO and founder of Postilize, Jody Glidden, joins Janice for a thoughtful and hopeful conversation about how AI is transforming the way we work, live, and create. Jody sees a future full of promise, not just in innovation, but in strengthening our connections with the people we love.

Inside OpenAI's quest to make AI do anything for you
TechCrunch | 38 minutes ago

Shortly after Hunter Lightman joined OpenAI as a researcher in 2022, he watched his colleagues launch ChatGPT, one of the fastest-growing products ever. Meanwhile, Lightman quietly worked on a team teaching OpenAI's models to solve high school math competitions.

Today that team, known as MathGen, is considered instrumental to OpenAI's industry-leading effort to create AI reasoning models: the core technology behind AI agents that can do tasks on a computer like a human would.

'We were trying to make the models better at mathematical reasoning, which at the time they weren't very good at,' Lightman told TechCrunch, describing MathGen's early work.

OpenAI's models are far from perfect today — the company's latest AI systems still hallucinate, and its agents struggle with complex tasks. But its state-of-the-art models have improved significantly on mathematical reasoning. One of OpenAI's models recently won a gold medal at the International Math Olympiad, a math competition for the world's brightest high school students. OpenAI believes these reasoning capabilities will translate to other subjects, and ultimately power general-purpose agents that the company has always dreamed of building.

ChatGPT was a happy accident — a low-key research preview turned viral consumer business — but OpenAI's agents are the product of a years-long, deliberate effort within the company.

'Eventually, you'll just ask the computer for what you need and it'll do all of these tasks for you,' said OpenAI CEO Sam Altman at the company's first developer conference in 2023. 'These capabilities are often talked about in the AI field as agents. The upsides of this are going to be tremendous.'

Whether agents will meet Altman's vision remains to be seen, but OpenAI shocked the world with the release of its first AI reasoning model, o1, in the fall of 2024. Less than a year later, the 21 foundational researchers behind that breakthrough are the most highly sought-after talent in Silicon Valley. Mark Zuckerberg recruited five of the o1 researchers to work on Meta's new superintelligence-focused unit, offering some compensation packages north of $100 million. One of them, Shengjia Zhao, was recently named chief scientist of Meta Superintelligence Labs.

The reinforcement learning renaissance

The rise of OpenAI's reasoning models and agents is tied to a machine learning training technique known as reinforcement learning (RL).
RL provides feedback to an AI model on whether its choices were correct or not in simulated environments. (A toy sketch of this feedback loop appears a few paragraphs below.) RL has been used for decades. For instance, in 2016, about a year after OpenAI was founded in 2015, AlphaGo, an AI system created by Google DeepMind using RL, gained global attention after beating a world champion at the board game Go.

Around that time, one of OpenAI's first employees, Andrej Karpathy, began pondering how to leverage RL to create an AI agent that could use a computer. But it would take years for OpenAI to develop the necessary models and training techniques. By 2018, OpenAI had pioneered its first large language model in the GPT series, pretrained on massive amounts of internet data and large clusters of GPUs. GPT models excelled at text processing, eventually leading to ChatGPT, but struggled with basic math.

It took until 2023 for OpenAI to achieve a breakthrough, initially dubbed 'Q*' and then 'Strawberry,' by combining LLMs, RL, and a technique called test-time computation. The latter gave the models extra time and computing power to plan and work through problems, verifying their steps, before providing an answer. This allowed OpenAI to introduce a new approach called 'chain-of-thought' (CoT), which improved AI's performance on math questions the models hadn't seen before.

'I could see the model starting to reason,' said El Kishky. 'It would notice mistakes and backtrack, it would get frustrated. It really felt like reading the thoughts of a person.'

Though individually these techniques weren't novel, OpenAI uniquely combined them to create Strawberry, which directly led to the development of o1. OpenAI quickly identified that the planning and fact-checking abilities of AI reasoning models could be useful to power AI agents.

'We had solved a problem that I had been banging my head against for a couple of years,' said Lightman. 'It was one of the most exciting moments of my research career.'

Scaling reasoning

With AI reasoning models, OpenAI determined it had two new axes along which to improve its models: using more computational power during post-training, and giving the models more time and processing power while answering a question.

'OpenAI, as a company, thinks a lot about not just the way things are, but the way things are going to scale,' said Lightman.

Shortly after the 2023 Strawberry breakthrough, OpenAI spun up an 'Agents' team led by OpenAI researcher Daniel Selsam to make further progress on this new paradigm, two sources told TechCrunch. Although the team was called 'Agents,' OpenAI didn't initially differentiate between reasoning models and agents as we think of them today. The company just wanted to make AI systems capable of completing complex tasks. Eventually, the work of Selsam's Agents team became part of a larger project to develop the o1 reasoning model, with leaders including OpenAI co-founder Ilya Sutskever, chief research officer Mark Chen, and chief scientist Jakub Pachocki.
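To make the RL description above concrete, here is a minimal, self-contained Q-learning sketch of the feedback loop: an agent acts in a simulated environment, receives a reward when a choice was correct, and updates its policy. The environment, actions, and hyperparameters are all toy assumptions; the RL pipelines used to train large language models are vastly more elaborate.

```python
import random

# Toy environment: state is an integer 0..4; "right" moves toward the goal at 4.
def step(state, action):
    next_state = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 4 else 0.0  # feedback: did the choice help?
    return next_state, reward

ACTIONS = ("left", "right")
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}  # learned value table
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != 4:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update toward reward plus discounted future value.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the greedy policy should point "right" from every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)})
```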
OpenAI would have to divert precious resources — mainly talent and GPUs — to create o1. Throughout OpenAI's history, researchers have had to negotiate with company leaders to obtain resources; demonstrating breakthroughs was a surefire way to secure them.

'One of the core components of OpenAI is that everything in research is bottom up,' said Lightman. 'When we showed the evidence [for o1], the company was like, 'This makes sense, let's push on it.''

Some former employees say that the startup's mission to develop AGI was the key factor in achieving breakthroughs around AI reasoning models. By focusing on developing the smartest-possible AI models, rather than products, OpenAI was able to prioritize o1 above other efforts. That type of large investment in ideas wasn't always possible at competing AI labs.

The decision to try new training methods proved prescient. By late 2024, several leading AI labs started seeing diminishing returns on models created through traditional pretraining scaling. Today, much of the AI field's momentum comes from advances in reasoning models.

What does it mean for an AI to 'reason'?

In many ways, the goal of AI research is to recreate human intelligence with computers. Since the launch of o1, ChatGPT's UX has been filled with more human-sounding features such as 'thinking' and 'reasoning.' When asked whether OpenAI's models were truly reasoning, El Kishky hedged, saying he thinks about the concept in terms of computer science.

'We're teaching the model how to efficiently expend compute to get an answer. So if you define it that way, yes, it is reasoning,' said El Kishky.

Lightman takes the approach of focusing on the model's results and not as much on the means or their relation to human brains.

'If the model is doing hard things, then it is doing whatever necessary approximation of reasoning it needs in order to do that,' said Lightman. 'We can call it reasoning, because it looks like these reasoning traces, but it's all just a proxy for trying to make AI tools that are really powerful and useful to a lot of people.'

OpenAI's researchers note people may disagree with their nomenclature or definitions of reasoning — and surely, critics have emerged — but they argue it's less important than the capabilities of their models. Other AI researchers tend to agree. Nathan Lambert, an AI researcher with the non-profit AI2, compares AI reasoning models to airplanes in a blog post. Both, he says, are man-made systems inspired by nature — human reasoning and bird flight, respectively — but they operate through entirely different mechanisms. That doesn't make them any less useful, or any less capable of achieving similar outcomes.

A group of AI researchers from OpenAI, Anthropic, and Google DeepMind agreed in a recent position paper that AI reasoning models are not well understood today, and more research is needed. It may be too early to confidently claim what exactly is going on inside them.

The next frontier: AI agents for subjective tasks

The AI agents on the market today work best for well-defined, verifiable domains such as coding. OpenAI's Codex agent aims to help software engineers offload simple coding tasks. Meanwhile, Anthropic's models have become particularly popular in AI coding tools like Cursor and Claude Code — these are some of the first AI agents that people are willing to pay up for.
However, general-purpose AI agents like OpenAI's ChatGPT Agent and Perplexity's Comet struggle with many of the complex, subjective tasks people want to automate. When trying to use these tools for online shopping or finding a long-term parking spot, I've found the agents take longer than I'd like and make silly mistakes. Agents are, of course, early systems that will undoubtedly improve. But researchers must first figure out how to better train the underlying models to complete tasks that are more subjective.

'Like many problems in machine learning, it's a data problem,' said Lightman, when asked about the limitations of agents on subjective tasks. 'Some of the research I'm really excited about right now is figuring out how to train on less verifiable tasks. We have some leads on how to do these things.'

Noam Brown, an OpenAI researcher who helped create the IMO model and o1, told TechCrunch that OpenAI has new general-purpose RL techniques that allow it to teach AI models skills that aren't easily verified. This was how the company built the model that achieved a gold medal at the IMO, he said.

OpenAI's IMO model was a newer AI system that spawns multiple agents, which simultaneously explore several ideas and then choose the best possible answer. These types of AI models are becoming more popular; Google and xAI have recently released state-of-the-art models using this technique.

'I think these models will become more capable at math, and I think they'll get more capable in other reasoning areas as well,' said Brown. 'The progress has been incredibly fast. I don't see any reason to think it will slow down.'

These techniques may help OpenAI's models become more performant, gains that could show up in the company's upcoming GPT-5 model. OpenAI hopes to assert its dominance over competitors with the launch of GPT-5, ideally offering the best AI model to power agents for developers and consumers. But the company also wants to make its products simpler to use. El Kishky says OpenAI wants to develop AI agents that intuitively understand what users want, without requiring them to select specific settings. He says OpenAI aims to build AI systems that understand when to call up certain tools, and how long to reason for.

These ideas paint a picture of an ultimate version of ChatGPT: an agent that can do anything on the internet for you and understand how you want it to be done. That's a much different product than what ChatGPT is today, but the company's research is squarely headed in this direction. While OpenAI undoubtedly led the AI industry a few years ago, the company now faces a tranche of worthy opponents. The question is no longer just whether OpenAI can deliver its agentic future, but whether the company can do so before Google, Anthropic, xAI, or Meta beats it to the punch.

How To Collect Dividends Up To 11% From Tech Stocks
Forbes | 39 minutes ago

The Nasdaq has been rallying nonstop since April. Let's discuss three covered-call funds with payouts up to 11.2% that play the rally.

The catalyst is the 'rise of the machines,' with companies replacing expensive humans with cheaper robots and AI tools. Hiring numbers are down and (paradoxically to some) the Nasdaq continues to levitate higher. This summer heater in tech stocks is no surprise to us contrarians. The Naz tech giants are enjoying expanding profit margins!

Amazon (AMZN) CEO Andy Jassy recently admitted the company's workforce will shrink, replaced by AI. This is bad for those who work at Amazon, but great for those who own AMZN. Microsoft (MSFT) also announced big layoffs in recent months, especially in sales and support roles easily handled by AI-driven tools. And my friends at Alphabet (GOOG) are looking over their shoulders, wondering how much longer their services will be needed. This is a dicey time to be a rank-and-file tech bro—but an exciting time to be a tech-savvy dividend investor. Here are three 'one-click' (or one-tap) dividend plays on this megatrend!

Covered Call Fund #1: Global X Nasdaq 100 Covered Call ETF (QYLD)

Alphabet (GOOG) will never pay 11.2%. But we can buy GOOG and the rest of Big Tech for 11.2% payouts via a fund like the Global X Nasdaq 100 Covered Call ETF (QYLD). QYLD buys the stocks in the Nasdaq-100 and simultaneously sells ('writes') covered calls on the index itself to generate additional income—which it pays out monthly.

It's not perfect exposure to technology. The Nasdaq-100 is made up of the 100 largest nonfinancial companies listed on the Nasdaq exchange, and in fact, it includes stocks from 10 different sectors. However, it's still tech-heavy, at 60% of the index's weight, and includes trillion-dollar tech firms like Apple (AAPL) and Microsoft (MSFT), so it's generally treated as a proxy for the sector.

But that's a marginal consideration. The real tradeoff to weigh is tactical. By selling covered calls against the Nasdaq, we're sacrificing potential upside in return for (a) much more stability and (b) the very high income from the options premiums the fund collects. QYLD will rarely outperform the 'QQQs' to the upside. But it also has less downside exposure, thanks to the constant income it generates by selling the call options.

Covered Call Fund #2: JPMorgan Nasdaq Equity Premium Income (JEPQ)

The JPMorgan Nasdaq Equity Premium Income (JEPQ) uses a similar strategy, owning roughly 100 or so Nasdaq stocks while selling calls against the Nasdaq-100. It also doles out its massive dividend in monthly distributions. But it's a little more flexible because of a big difference between it and QYLD: management. Whereas QYLD tracks an index and typically has only one options position at any given moment, JEPQ is led by 38-year veteran Hamilton Reiner and a team of four co-managers who can sell multiple contracts.

I've also pointed out in the past that while both funds hold pretty much the same stocks, JEPQ is more heavily weighted in mega-cap names than QYLD. But that's a portfolio choice, not a structural requirement. Indeed, today, JEPQ has a smaller percentage of assets invested in each of its top 10 holdings than QYLD.

These might not seem like meaningful differences, but over time we see that JPMorgan's 'homemade' strategy beat QYLD's straightforward approach. Active management can make a world of difference—so much so that I typically prefer closed-end funds (CEFs) over comparable ETFs.
Let's walk over to the CEF side of the border to review our final call writer.

Covered Call Fund #3: Columbia Seligman Premium Technology Growth Fund (STK)

Columbia Seligman Premium Technology Growth Fund (STK) is a CEF, while QYLD and JEPQ are ETFs. But the differences go far beyond fund type. Paul Wick, who has nearly four decades of experience, and a team of five other managers run a slimmer portfolio of about 55 holdings. The portfolio is also a purer—though not pure—play on technology, with about 70% of assets dedicated to the sector.

STK is also interested in 'growth at a reasonable price' (GARP); a relatively more value-priced portfolio shows it, with price-to-earnings, price-to-sales, price-to-book, and price-to-cash-flow ratios all lower than those of the two ETFs above. And whereas QYLD tries to own Nasdaq-100 stocks (and while JEPQ has a broader mandate but looks index-esque in its larger holdings), STK is much more willing to take some shots—stocks such as Lam Research (LRCX) and industrial Bloom Energy (BE) punch well above their weight.

Columbia Seligman's CEF writes covered calls, too—typically on the Nasdaq-100, but again, it has more flexibility. For instance, right now, management is selling Apple calls, too. The strategy works. In fact, it works mighty well.

STK still has its drawbacks. Unlike other covered-call funds, Columbia Seligman's fund is actually more volatile than the Nasdaq, not less. Moreover, while the ETFs pay monthly, this CEF pays us only on a quarterly schedule—and at current prices (which admittedly represent a slight discount to net asset value), it's paying us just half as much as JEPQ and QYLD.

Brett Owens is Chief Investment Strategist for Contrarian Outlook. For more great income ideas, get your free copy of his latest special report: How to Live off Huge Monthly Dividends (up to 8.7%) — Practically Forever.

Disclosure: none
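To put numbers on the tradeoff these covered-call funds make, here is a toy payoff calculation: upside is capped at the strike, while the premium collected cushions flat or down markets. The share count, prices, and premium are hypothetical and ignore fees, taxes, and roll mechanics.

```python
# Toy covered-call position: long 100 shares, short 1 call (100 shares' worth).
shares = 100
entry_price = 450.0   # hypothetical price paid per share
strike = 460.0        # hypothetical strike of the call we sold
premium = 5.0         # hypothetical option premium collected per share

def covered_call_pnl(price_at_expiry: float) -> float:
    """P&L at expiry: stock gains are capped at the strike; premium is kept regardless."""
    stock_pnl = (min(price_at_expiry, strike) - entry_price) * shares
    return stock_pnl + premium * shares

for price in (430.0, 450.0, 460.0, 490.0):
    print(f"expiry at {price:>6.1f}: P&L = {covered_call_pnl(price):>8.1f}")
# 430.0 -> -1500.0 (loss softened by the 500 premium)
# 450.0 ->   500.0 (flat market still pays the premium)
# 460.0 ->  1500.0 (maximum profit reached at the strike)
# 490.0 ->  1500.0 (upside beyond the strike is given away)
```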
