
PlayStation 6? Sony and AMD's Plan to Power Next Gen Consoles With AI
Sony PR said the conversation wouldn't touch on any next-gen console topics, but throughout our discussion, future and next-generation hardware was a common refrain. It's hard to see how these advancements won't find their way into a possible PS6 or even a dedicated PlayStation handheld (a standalone device, unlike the PlayStation Portal, which is tethered to a PS5 console).
What is Project Amethyst? Sony and AMD's AI collaboration for gaming
The big topic of the dinner was Project Amethyst, which was briefly revealed during a PS5 Pro Technical Seminar late last year. Amethyst, which began development in 2023 when the PS5 Pro was largely complete, aims to use AI and machine learning to make games look and run better.
Amethyst combines what AMD learned from its RDNA road map with SIE's use of PSSR, or PlayStation Spectral Super Resolution, on the PS5 Pro to create a more ideal architecture for machine learning. The aim is to support a wide library of models that will help developers improve their games.
"[Amethyst] will support ChatGPT, if that's what the developers want," Cerny said. "That's not what we're working on, we're working [on] networks which know about detail and pixels and edges in order to stretch the capabilities of the hardware as far as possible."
Why Amethyst for a codename? It's a combination of PlayStation blue with AMD red, creating Amethyst purple. This synergy was embodied by a 100-pound, split amethyst statue displayed in the corner of our dining room.
"Machine learning-based processing is the future," Cerny told me. With Amethyst, Sony and AMD are aiming for "fewer pixels, prettier pixels coupled with machine learning libraries to increase resolution or add frames or assist in various ways with ray tracing."
[Embedded video: Sony PlayStation 5 Pro Review: The Most Advanced Game Console Ever]
How machine learning will improve the PS5 Pro and future PlayStation consoles
These three aspects are all highlight features of last year's PlayStation 5 Pro, but Sony and AMD are looking to push them further with the help of AI models. For example, developers will be able to render a game at a lower resolution, such as 1080p, and the machine learning hardware will use its super resolution library to scale that up to a 4K image. Since the original is rendered at a lower resolution, with far fewer pixels to manage, the hardware can run it more smoothly, letting games play better at potentially higher frame rates.
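Some quick arithmetic (my own illustration, not Sony's or AMD's figures beyond the resolutions named above) shows why rendering at 1080p and upscaling is such a win: a 4K frame has four times the pixels of a 1080p one, so the GPU shades a quarter of the pixels and the ML upscaler fills in the rest.

```python
# Toy illustration: the pixel-count savings behind render-low, upscale-high.
# Real super-resolution models are far more involved than this math.

def pixel_count(width: int, height: int) -> int:
    """Total pixels in a frame at the given resolution."""
    return width * height

full_hd = pixel_count(1920, 1080)  # native 1080p render
uhd_4k = pixel_count(3840, 2160)   # 4K output target

print(f"1080p: {full_hd:,} pixels")                        # 2,073,600
print(f"4K:    {uhd_4k:,} pixels")                         # 8,294,400
print(f"Shading work ratio: {uhd_4k // full_hd}x fewer")   # 4x
```

That 4x reduction in shaded pixels is the headroom developers can spend on higher frame rates or richer effects.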
"We at PlayStation were beginning to think about what sort of hardware would be ideal for future consoles and what sort of algorithms would be running on it," Cerny continued, "and I'm not talking about immediate needs, we just shipped the PS5 Pro; I'm talking about years in the future." My ears perked up. Cerny has said that each console takes about four years of development. If Amethyst began development in 2023, then we could possibly see a new device in 2027.
"We knew we had to go with machine learning because Moore's Law is diminishing … the old-school way of adding more performance, more transistors, more flops, more memory bandwidth," Huynh added. Their hope is that machine learning will make these advances more accessible to everyone, a point underscored by the fact that Amethyst is going to be open for others to use, Cerny continued.
"Obviously we want to use these technologies on our consoles, but these technologies are accessible to any of AMD's customers freely," Cerny said. "There's no restrictions on how any of this can be used."
It's worth noting that, as recently as last week, Xbox reaffirmed its own partnership with AMD in regards to producing future hardware.
Huynh commented a couple times that they're "really trying to find what is the best technology at the most accessible price point." However, each generation of games, both software and hardware, gets more and more expensive. I asked if they believed machine learning would change that pattern, but they, along with AMD's PR, quickly said they aren't talking about price at this time.
One of the most exciting aspects: even though they were planning to implement this new software in the coming years, the teams overachieved.
"[We were] looking for an algorithmic breakthrough that we could use way down the road," Cerny said. "The joint SIE/AMD team did it in about nine months."
The second way they surpassed expectations was that they didn't need future hardware and all that power to run it. "Turns out the algorithm could be implemented on current-generation hardware," Cerny said.
Co-developed algorithms have already been released by AMD on PCs as part of its latest AI upscaling tech, FidelityFX Super Resolution 4. But what's really exciting for console gamers is that Sony is currently in the process of implementing it on the PS5 Pro, with a launch sometime next year.
Cerny cleared up some speculation: "It's not a cut down of the algorithm, it's the full-fat version of the co-developed super resolution."
What about PS5 owners? Why these AI upgrades are exclusive to the PS5 Pro
To be clear, Amethyst isn't just about increasing the resolution of games with AI. The project's next step is machine learning-based virtual frame generation and ray tracing. PC gamers taking advantage of FSR Redstone will get to see these two other performance increases in the second half of 2025.
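To make the frame-generation idea concrete, here's a deliberately naive sketch of my own (not the Amethyst or FSR Redstone algorithm, which uses motion vectors and trained models): synthesizing an in-between frame from two rendered frames so the display shows more frames than the GPU actually renders.

```python
# Hypothetical toy: generate an intermediate frame by blending two
# rendered frames. Real ML frame generation is far more sophisticated;
# a plain blend like this smears moving objects.
import numpy as np

def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray,
                      t: float = 0.5) -> np.ndarray:
    """Naive linear blend between two frames at time t in [0, 1]."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

# Two tiny 2x2 grayscale "frames" standing in for rendered images.
a = np.zeros((2, 2), dtype=np.float32)            # dark frame
b = np.full((2, 2), 100.0, dtype=np.float32)      # bright frame
mid = interpolate_frame(a, b)                     # halfway frame, all pixels 50
```

The appeal for consoles is that generating a frame this way costs a fraction of rendering one, effectively doubling perceived frame rate.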
"Actually this is a little bit different in the approach because what I'm trying to do is prepare for the future, the next generation of consoles," Cerny said. It's clear that AI will play a major role in the next generation of gaming consoles.
While PC gamers will be utilizing FSR, Cerny clarified that PlayStation gamers will see "implementations of the algorithm as FSR and implementations of the algorithm as Spectral," the latter being SIE's machine learning brand.
"But the fact is they will be extraordinarily close because we want the game developers to have interoperability," Cerny said.
Launch PS5 beside the PS5 Pro beside the Slim PS5.
CNET
It's clear the PS5 Pro will see a boost in performance and its games will look "much crisper," Cerny said -- unfortunately, owners of the base PS5 won't see any of these benefits.
"PS5 doesn't have the 300 TOPS of computational capability," Cerny clarified. It's also unclear which features will find their way to the Pro. "Maybe just the super resolution at this point. We have so many algorithms being developed, many of which were not designed with that particular hardware in mind."
Cerny went on to mention that the PS5 Pro is capable because it sits around an AMD Radeon RX 9070 or RX 9070 XT GPU in terms of performance. "From an SIE perspective, we're not looking at the bespoke hardware of the PS5 Pro, and so it does complicate the implementation of these algorithms," Cerny said. "We're really focused on the future, when the co-developed hardware is available."
So when will we see the fruits of Sony and AMD's Amethyst?
What I needed to know most is: When can I get my hands on this? When will I play a game that uses Project Amethyst? Cerny said that later this year is when developers will be getting an early version of the co-developed network and we should see them publishing sometime next year.
"We already have 65 games right now on FSR 4. We committed to 75 games, I think we're ahead of that schedule," Huynh added. And it sounds like it shouldn't be too difficult for developers to make the switch when working on their updates since "it's compatible with FSR3 from an API perspective."
A lot of this comes down to the game developers implementing the new software into their games, or patching an older game to take advantage of it. I asked how this will affect older games that may not have an active development scene at the moment. For example, over on the Xbox, older games have automatically gotten frame rate boosts and HDR implementation by simply running on newer hardware, without developer support.
"If [developers] do it on their side, the new algorithm is a drop-in replacement for the current PSSR. So if they patch the game, they get the new algorithm. There is still the question of 'does any of that happen automatically' and that's something we're taking a look at," Cerny said.
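"Drop-in replacement" has a precise meaning in software terms: the new algorithm exposes the same interface as the old one, so the game's call sites don't change. Here's a small sketch of that pattern using hypothetical names of my own (nothing here is the actual PSSR or FSR API).

```python
# Illustrative sketch of an API-compatible drop-in replacement:
# the caller's pipeline code is unchanged; only the implementation swaps.
from typing import Protocol

class Upscaler(Protocol):
    def upscale(self, frame: list[list[int]], factor: int) -> list[list[int]]: ...

class CurrentUpscaler:
    """Stand-in for the shipping algorithm (nearest-neighbor placeholder)."""
    def upscale(self, frame, factor):
        return [[px for px in row for _ in range(factor)]
                for row in frame for _ in range(factor)]

class NewUpscaler(CurrentUpscaler):
    """Stand-in for the new algorithm: identical signature, better internals.
    Because the interface matches, a patched game swaps this in with no
    other code changes -- exactly what 'drop-in replacement' promises."""

def render_pipeline(upscaler: Upscaler) -> list[list[int]]:
    low_res = [[1, 2], [3, 4]]         # tiny stand-in for a 1080p frame
    return upscaler.upscale(low_res, 2)

old_out = render_pipeline(CurrentUpscaler())
new_out = render_pipeline(NewUpscaler())  # same call site, new algorithm
```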
Suffice to say, this is definitely something we'll have to keep our eye on and test when Amethyst begins rolling out.
In the meantime, Sony is assembling a team of QA testers to keep an eye on the frames and materials generated by AI models. If frames and pixels are going to be produced outside of the developers' hands, Sony wants to make sure things display correctly.
"And so that type of stuff we have to train people to look at and see," said Jeff Connell, S3 General Manager, CVP, AMD. "[If] Spider-Man [is] sitting on top of a building and he spins really quickly, we pause it and you look at a building. Are all the windows lined up or are they bowed all over the place? You look at power lines, you look at things like that."
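The human spot checks Connell describes could plausibly be paired with automated scoring; one standard metric for comparing a generated frame against a reference render is peak signal-to-noise ratio (PSNR). This is my own illustrative sketch, not Sony's actual QA pipeline.

```python
# Hypothetical QA helper: score a generated frame against a reference
# render. Higher PSNR (in dB) means the frames are closer; identical
# frames yield infinity. Thresholds in practice would be tuned per game.
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray,
         max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-sized frames."""
    mse = np.mean((reference.astype(np.float64) -
                   generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # pixel-perfect match
    return 10.0 * np.log10((max_value ** 2) / mse)

reference = np.full((4, 4), 128, dtype=np.uint8)  # stand-in native render
perfect = reference.copy()                         # flawless generated frame
noisy = reference.copy()
noisy[0, 0] = 0                                    # one badly wrong pixel

print(psnr(reference, perfect))   # inf
print(round(psnr(reference, noisy), 1))
```

A metric like this catches gross artifacts automatically, leaving humans to judge the subtler cases, like whether a building's windows bow during a fast camera spin.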
Will AI gaming features scale to future PlayStation handhelds?
It's still unclear what future hardware we'll actually see all this running on -- the successor to the PS5 remains a mystery -- but one growing trend in the games industry is powerful handhelds. In fact, Xbox announced its close collaboration with ASUS earlier this month, when I went hands-on with the ROG Xbox Ally at Summer Game Fest.
It's worth noting that Sony doesn't have a current, dedicated gaming handheld on the market. The closest device is the PlayStation Portal from 2023, which can only stream games from a PS5 or the cloud. With Nintendo currently dominating the space with the recently released Switch 2, and Xbox entering it later this year, it seems like only a matter of time before Sony throws its hat in the ring.
Another attendee asked whether the Amethyst algorithms can be scaled down to weaker handheld hardware.
"The answer is yes," said Cerny. "The algorithms are scalable and so a lot of what we do is we're looking at the possible range of algorithms and how much horsepower we can grow from it, but there are solutions both above and below the ones we're looking at."
Handhelds are less of a priority for AMD, Huynh added. "We're focused right now on the desktop because I want to do the desktop right, build that foundation … and handheld is very important to us too because I believe in continuous gaming, gaming on the go, and we're very focused on handhelds as well."
