
This Astronomy AI App Can Pinpoint the Best Moonlit Nights. How to Use It
After looking for apps that could both educate me and provide details on the night sky, I found Ouranos, an astronomy weather app that uses artificial intelligence to analyze astronomical data for your location.
Since the Moon is Earth's natural satellite, I was curious how the app handles it -- especially because each lunar phase is visible without a telescope and usually shines through even human-made light pollution.
What is Ouranos, and how does it use AI?
Ouranos was created by the software company Pleiode in 2022 and announced fairly casually by its founder on the Cloudy Nights forum. There's both a free version of Ouranos and a paid version -- roughly $2 a month, or a $30 one-time lifetime fee -- that unlocks 16-day extended forecasts, cloud and 15-minute forecasts, planet visibility graphs, astronomical events and an interactive light pollution map.
Ouranos' main use is determining when sky conditions are optimal for viewing, saving you the time of comparing data yourself. (Not that I was doing that anyway.) Its AI algorithm helps generate these insights, noting when and where to observe.
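At its core, turning several forecast metrics into a single "good viewing" verdict is a weighted scoring problem. Here's a minimal Python sketch of the general idea; the metric names, weights and thresholds are illustrative assumptions, not Ouranos's actual model:

```python
def observing_score(cloud_cover: float, transparency: float,
                    humidity: float, wind_kph: float) -> float:
    """Combine forecast metrics into a 0-100 sky-viewing score.

    All weights below are illustrative guesses, not Ouranos's real algorithm.
    cloud_cover and transparency are percentages (0-100); humidity is %;
    wind_kph is wind speed in km/h.
    """
    score = 100.0
    score -= cloud_cover * 0.6             # clouds hurt viewing the most
    score -= (100 - transparency) * 0.2    # hazy air dims faint objects
    score -= max(humidity - 60, 0) * 0.3   # only penalize high humidity
    score -= min(wind_kph, 40) * 0.25      # wind shakes telescopes, cap it
    return max(score, 0.0)

# A clear, dry, calm night scores high; an overcast, humid one scores low.
print(observing_score(cloud_cover=0, transparency=100, humidity=40, wind_kph=5))
print(observing_score(cloud_cover=90, transparency=30, humidity=85, wind_kph=30))
```

An app would compute a score like this for each forecast interval and surface the top-scoring windows as "best times."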
If you're looking for personalized forecasts and predictions, its AI can generate them for you based on your specific local weather conditions.
It can also help make sure you don't miss any full moons, supermoons or lunar eclipses. For more on stargazing, here's how to use AI to find constellations.
For those curious how Ouranos's AI use squares with data rights, the company publishes little about its own data privacy practices, though its terms do spell out what it doesn't allow from its users. Ouranos didn't immediately respond to a request for more information on this.
How to use Ouranos to track moonlit nights
Ouranos / Screenshot by CNET
Using Ouranos to track your next sky-viewing session is pretty simple -- and it even includes tips to help you get the most out of your observation.
Ouranos is available on the iOS App Store and Google Play. There's no sign-up required, and its free version has a decent set of capabilities. Be sure to allow location access to enable accurate weather data and planet and Moon sightings. On the home screen, you can check out features like current weather, sky quality, cloud cover, transparency, humidity and wind conditions. You can tap Best Times for AI-guided star- and moongazing windows, and also check Moon and planet timings, plus light pollution levels. (Ouranos Pro users can also view moon phase and illumination, a 16-day extended forecast, and local rise and set times.)
The app is most useful for timing Moon observation against your current environment and light pollution maps -- ideal for Moon watchers or anyone who enjoys the educational side of viewing and tracking the Moon.
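Moon phase and illumination figures like the ones the Pro tier displays can actually be approximated from the date alone, because lunar phases repeat on a roughly 29.53-day synodic cycle. This Python sketch uses a commonly cited reference new moon; it's a back-of-the-envelope illustration, not Ouranos's code:

```python
import math
from datetime import datetime, timezone

SYNODIC_MONTH = 29.530588853  # mean length of one lunar cycle, in days
# A commonly used reference new moon: Jan. 6, 2000, 18:14 UTC
REF_NEW_MOON = datetime(2000, 1, 6, 18, 14, tzinfo=timezone.utc)

def moon_phase_fraction(when: datetime) -> float:
    """Fraction of the synodic cycle elapsed (0 = new moon, 0.5 = full moon)."""
    days = (when - REF_NEW_MOON).total_seconds() / 86400.0
    return (days % SYNODIC_MONTH) / SYNODIC_MONTH

def illumination(when: datetime) -> float:
    """Approximate illuminated fraction of the lunar disk, from 0 to 1."""
    phase = moon_phase_fraction(when)
    return (1 - math.cos(2 * math.pi * phase)) / 2

# Example: how lit is the Moon right now?
now = datetime.now(timezone.utc)
print(f"Phase fraction: {moon_phase_fraction(now):.2f}, "
      f"illumination: {illumination(now):.0%}")
```

This mean-cycle approximation can drift from the true phase by several hours, which is why apps like Ouranos rely on proper ephemeris data for rise, set and eclipse times.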
Should you use Ouranos?
If you love watching the Moon or planning nights under the stars -- without much effort -- Ouranos absolutely helps with that. I like that it doesn't overwhelm you with data, and the learning curve isn't steep. It gives you just enough to step outside with more intention and a better understanding of the viewing process.
Ouranos / Screenshot by CNET
But if you're seeking education around finding and naming constellations or why moon phases are considered waxing or waning, Ouranos is probably too basic.
This app serves more to streamline the process of preparing for Moon gazing, with additional information about sky clarity, moonlight and the Moon's visibility amid light pollution.
Beyond all the self-development-adjacent talk about what moon phases represent, I see the Moon as a scientific wonder whose shifts can at least inspire us to look outward and observe nature's systematic, if cyclical, process.
And now, with AI, you can check in on its cycle at every point within its phase, wherever you are in the world.



The Verge
4 hours ago
- The Verge
Can the music industry make AI the next Napster?
Sure, everyone hates record labels — but the AI industry has figured out how to make them look like heroes. So that's at least one very impressive accomplishment for AI. AI is cutting a swath across a number of creative industries — with AI-generated book covers, the Chicago Sun-Times publishing an AI-generated list of books that don't exist, and AI-generated stories at CNET under real authors' bylines. The music industry is no exception. But while many of these fields are mired in questions about whether AI models are illegally trained on pirated data, the music industry is coming at the issue from a position of unusual strength: the benefits of years of case law backing copyright protections, a regimented licensing system, and a handful of powerful companies that control the industry. Record labels have chosen to fight several AI companies on copyright law, and they have a strong hand to play. Historically, whatever the tech industry inflicts on the music industry will eventually happen to every other creative industry, too. If that's true here, then all the AI companies that ganked copyrighted material are in a lot of trouble. There are some positive things AI music startups can accomplish — like reducing barriers for musicians to record themselves. Take the artist D4vd, who recorded his breakout hit 'Romantic Homicide' in his sister's closet using BandLab, an app for making music without a studio that includes some AI features. (D4vd began creating music to soundtrack his Fortnite YouTube montages without getting a copyright strike for using existing work.) The point of BandLab is giving more musicians around the world the opportunity to record music, send it into the world, and maybe get paid for their work, says Kuok Meng Ru, the CEO of the app's parent company. AI tools can supercharge that, he says. That use, however, isn't exactly what big-time AI companies like Suno and Udio have in mind. Suno declined to comment for this story. 
Udio did not respond to a request for comment. Suno and Udio are designed to let music consumers generate new songs with a few words. Users type in, say, 'Prompt: bossa nova song using a wide range of percussion and a horn section about a cat, active, energetic, uptempo, chaotic' and get a song, wholesale, without even writing their own lyrics. The idea that most listeners will do this regularly seems unlikely — making music is more work than just listening to it, even with text prompts — as does the idea that AI will replace people's favorite human artists. (Also, the music is pretty bad.) 'AI flooded the market with it.' A lot of listening is passive consumption, like a person putting on a playlist while doing the dishes or studying, or a business piping background tunes to customers. That background music is up for grabs — not by consumers, but by spammers using these tools. They're already generating consumer-facing slop and putting it on Spotify, effectively crowding out real artists. That seems to be the major use case for these apps. Generating a two-minute song on Udio costs a minimum of eight credits; free users get around 400 credits monthly; for $10 a month, you'll get 1200, the equivalent of, at most, 150 songs. Spotify Premium individual costs $12 a month and gets you just about everything ever recorded, plus audiobooks. Also, it takes many, many fewer clicks to listen to Spotify than it does to generate your own songs — so if you're looking for something to listen to while you cook, Spotify is just easier. But the math there changes if you're looking for background music for your YouTube videos — or anything else that's meant to be listened to publicly. That means AI music threatens people who support themselves by making incidental music for advertisements, or recording 'perfect fit content' for Spotify, or other, less-glamorous work. 
Taylor Swift's career isn't endangered by AI music — but the real people who make the background music for Chill Beats to Study To, or the hold music you hear on the phone, are. 'I wouldn't want to be [new-age musician] Steven Halpern and have my future career based on meditation music,' says David Hughes, who served as CTO for the Recording Industry Association of America (RIAA) for 15 years. He now works as a tech consultant for the music industry at Hughes Strategic. 'AI flooded the market with it. There's no business making it anymore.' As in other creative industries, AI music tools are poised to hollow out the workaday middle of the market. Even new engineering tools have their downsides. Jimmy Iovine, who eventually founded Interscope Records and Beats Electronics, started his career as an audio engineer before making his name by producing Patti Smith's Easter. This is kind of like starting in the mail room and becoming the CEO; if more of the engineering work is done by AI, that removes career paths. The next Jimmy Iovine might not get his start, Hughes says. 'How does anyone apprentice?' About a year ago, the major labels brought suit against Suno and Udio. The fight is about training data; the labels say the companies stole copyrighted work and violated copyright law by using it to build their models. Suno has effectively admitted it trained its AI song generator on copyrighted work in documents filed in court; so has Udio. They're saying it was fair use, a legal framework under which copyrighted work can be used to create new work. Virtually every creative industry is in some kind of similar fight with AI companies. A group of authors is suing Meta, Microsoft, and Bloomberg for allegedly training on their books. The New York Times is suing Microsoft and OpenAI. Visual artists have sued Stable Diffusion and Midjourney; Getty Images is also suing Stable Diffusion; Disney and Universal are suing Midjourney. Even Reddit is suing Anthropic. 
Training data is at issue in all the suits. 'Thou shalt not steal.' So far, the legal takes on AI have been contradictory, and at times, baffling. There doesn't seem to be a consistent through line, so it's hard to know where the law will ultimately end up. Still, music has its own legal history that comes to bear — from unauthorized sampling. That may mean it's entitled to stronger protections. In Bridgeport Music v. Dimension Films, a case about NWA's sample of Funkadelic's 'Get Off Your Ass and Jam,' the US Court of Appeals ruled that the uncompensated sampling was in violation of copyright law. In the decision, the court found that only the copyright owner could duplicate the work — so all sampling requires a license. Some other courts have rejected that ruling, but it remains influential. There's also Grand Upright Music v. Warner Bros. Records, in which the US Southern District of New York ruled that Biz Markie's sample of Gilbert O'Sullivan's 'Alone Again (Naturally)' was copyright infringement. The written opinion in the case begins, 'Thou shalt not steal.' 'Some of the sampling cases have suggested that sound recordings might be entitled to stronger protections than other copyrighted works,' says James Grimmelmann, a professor at Cornell Law School. Those protections may extend beyond sampling to generative AI, especially if the AI outputs too closely resemble copyrighted work. 'From that perspective, music becomes kind of untouchable. You just can't do this kind of work on it.' Music is also complicated — since performances are bound up in rights of publicity. In the case of the fake Drake track, the soundalike may violate Drake's right to publicity. Artists such as Tom Waits and Bette Midler have won suits against more mundane human soundalikes. Proving that someone meant to violate Drake's right to publicity might be even more straightforward if the lawsuit contains the prompt. 
This may be an easier case for music companies to make As in other AI fair use cases, one of the key questions is whether a derivative work, such as 'BBL Drizzy,' is intended to replace or disrupt a market for an original one. In 2023, the Supreme Court ruled that Lynn Goldsmith's copyright had been infringed on by Andy Warhol when he screenprinted one of her photos of Prince. One of the key factors was that Vanity Fair had licensed Warhol's work instead of Goldsmith's — and she received no credit or payment. In May, Register of Copyrights Shira Perlmutter released a pre-publication report that found that AI training in general was not necessarily fair use. In the report, one of the factors considered was whether an AI product supplanted the use of the original. 'The use of pirated collections of copyrighted works to build a training library, or the distribution of such a library to the public, would harm the market for access to those works,' the report said. 'And where training enables a model to output verbatim or substantially similar copies of the works trained on, and those copies are readily accessible by end users, they can substitute for sales of those works.' This may be an easier case for music companies to make than, let's say, ad writers. (What copywriter wants to admit they're so uncreative they can be replaced by a machine, first of all?) Not only are there fewer of them, which allows them to easily negotiate as a bloc, it's simple enough to point to the output of AI music singing Jason Derulo's name, or mimicking 'Great Balls of Fire.' That's pretty clear-cut. Another crucial factor — one that matters particularly to the music industry — was lost licensing opportunities. If copyrighted works are being licensed as AI training data, doing a free-for-all snatch and grab robs rights holders of their ability to participate in that market, the report notes. 
'The copying of expressive works from pirate sources in order to generate unrestricted content that competes in the marketplace, when licensing is reasonably available, is unlikely to qualify as fair use,' the report says. The RIAA alleges illegal copying on the front end and infringing outputs on the back end Recently, Anthropic got a ruling in a copyright case that differs from this analysis. According to Judge William Alsup of the Northern District of California, using books for training data is fair play — with two big caveats. First, any inputs must be legally acquired, and second, the outputs must be non-infringing. Since Anthropic pirated millions of books, that still leaves the door open for massive damages, even if using the books to train isn't wrong. When it comes to the Suno and Udio suits, the RIAA alleges illegal copying on the front end and infringing outputs on the back end, Grimmelman says. Suno and Udio can introduce evidence to rebut those allegations, but the ruling isn't ideal to knock down the RIAA's suit. It's also not clear Suno can rebut those allegations. 'Suno's training data includes essentially all music files of reasonable quality that are accessible on the open Internet, abiding by paywalls, password protections, and the like,' its lawyers wrote in the filing arguing Suno's training data was fair use. While Udio admits it may have used some copyrighted recordings, its response to the suit doesn't mention how they were acquired; if Udio bought those songs, under the Anthropic case's reasoning, it might be off the hook. But that's not the only pertinent ruling. The very next day, in a case where authors alleged Meta had infringed on their copyright by training on their books, Judge Vince Chhabria directly addressed Alsup's ruling, saying it was based on an 'inept analogy' and brushed aside 'concerns about the harm it can inflict on the market for the works it gets trained on.' 
While Chhabria found in favor of Meta, he noted that it was because of bad lawyering on the part of the authors' team. Still, the finding is better for music companies on the input side, because it doesn't draw a distinction around piracy, Grimmelman says. It is much, much worse for Suno and Udio on the output side. 'Chhabria holds that 'market dilution' — creating lots of works that compete with the plaintiffs' works — is a plausible theory of market harm,' he says in an email after the ruling. That's also in line with the copyright office's memo. 'We live in a world where everything is licensed.' Suno and Udio have some other trouble; some generative AI companies have been licensing artists' works. By offering nothing for works that other companies have licensed, they are messing up the market. 'The fact that there are existing licensing deals for music training is relevant, if that market is better-developed than the market for licensing books,' Grimmelman says. Chhabria's opinion points out that it's quite difficult to license books for training, because the rights are so fragmented. 'Either finding that there is a market that copyright owners should be able to exploit, or finding that there isn't one, is circular, in that the court's holding tends to reinforce its findings about the market.' That effectively stacks the deck against Suno and Udio, and any other music companies that didn't license their AI training data. Music licenses for AI training cost between $1 and $4 per track. High-quality datasets can cost from $1 to $5 per minute for non-exclusive licenses, and from $5 to $20 per minute for exclusive licenses. Transcription and emotion labeling, among other factors, garner higher prices. And unlike in other industries, music already has an IP copyright and collection system, notes Kuok, of the BandLab recording app. The app has its own AI tool called SongStarter, which lets people who are making music begin with an AI-generated track. 
Kuok favors licensing music for AI training, and making sure musicians get paid. 'We live in a world where everything is licensed,' Kuok says. 'The solution is an evolution of what existed before.' How to collect, who collects, and how much gets collected strikes Kuok as being open questions, but licensing itself is not. 'We work in an all-rights-reserved world where we believe copyright is an important institution.' 'Everyone knew it was required.' To address that, BandLab has options for its licensing program. Artists can say they are open to AI licensing, which means they'll be contacted if a company wants to license their work. If they agree, their work is then bundled with an assortment of other artists' approved works for the licensing deal, which BandLab negotiates on their behalf. Kuok says Bandlab is discussing training deals now, though he declined to give specifics about the financial components of those deals, or who he was in talks with, Kuok did say there were some other things he considers in negotiations. 'It's important what the use is for,' he says. 'That has to be specified. These are fixed-term contracts, fairly large deals, worth six figures over a multiyear period.' He recommends maintaining as much control as possible over copyrighted work to avoid diluting the value of existing IP. That may be why Suno and Udio are reportedly in talks with the majors to license music for training their models. Other AI companies do already. Ed Newton-Rex, formerly of Stability AI, told me all the music he'd worked with at Stability was licensed; he even quit his position as a vice president at Stability after the company decided training on copyrighted data was fair use. He'd been working on the systems since 2010, and licensing had been the norm until fairly recently, he told me. 'Everyone knew it was the law,' he says. 'Everyone knew it was required.' 
But after ChatGPT came out, some music AI companies thought they might also just grab whatever existed and let the courts sort it out. 'I don't think it's fair use,' he says. 'Given that gen AI generally competes with what it's trained on, it's a bad thing to take creators' works and outcompete them.' Newton-Rex has also demonstrated ways to get Suno in particular to output music that's strikingly similar to copyrighted work. That, too, is a problem. 'I don't think there's an outcome where this winds up being all fair use,' says Grimmelman.


CNET
4 hours ago
- CNET
Stability AI Review: Stable Diffusion Is a Household Name in AI Images for a Reason
CNET's expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise. 7.0 / 10 SCORE Stability AI/Stable Diffusion Pros Fast generation time Great editing tools Very creative Cons Complicated availability Too realistic product images Stability AI/Stable Diffusion 7/10 CNET Score If you've heard of AI image generation, you've probably heard of Stable Diffusion. Named for a family of AI creative models, the original Stable Diffusion model was released in 2022 as the result of a collaboration between researchers from Stability AI, Runway and the University of Munich, with support from European AI research and data nonprofits. It quickly found a loyal fanbase of AI enthusiasts who compared it to its main competitor at the time, Midjourney. In the years since its initial launch, tech giants including OpenAI, Adobe and Canva have all released their own popular AI image models. But Stable Diffusion models have one key difference from all the others: They're open source. Open-source AI models let anyone take a peek behind the scenes at how the model works and adapt them to their own purposes. That means there are a lot of different ways to use Stable Diffusion models. I'm not a coding wizard, so I opted not to license or download the models to run locally on my computer. A quick Google search brought up a lot of websites that host SD models, but I wanted the true Stable Diffusion experience. That led me to DreamStudio and Stable Assistant. Both of these are freemium web apps by Stability AI that let you easily create AI images, and I used both. Ultimately, I preferred Stable Assistant, but my experience using both programs showed me why Stable Diffusion models have stayed a household name, even as the people behind the models have had a rocky path. The images I created with Stability AI were creative and detailed. Where the company shines is in its editing capabilities. 
Stable Assistant has the most comprehensive, hands-on editing suite of any AI image generator I've tested, without the overwhelming, overly detailed nature of a Photoshop-like professional program. The Stable Image Ultra model is artistically capable, like Midjourney and If you're trying to decide between the three competitors, it's probably going to come down to cost and potential commercialization requirements. Stable Assistant is great for people who need to produce a lot of AI imagery quickly and for amateur creators looking to level up their skills and refine their design ideas. DreamStudio will remind you of a more traditional AI image generator, great for budget-conscious, occasional AI creators. For professional creators, Stable Diffusion models are capable, but businesses will need to worry about licensing requirements. Here's how the newest Stable Diffusion model, Stable Image Ultra, held up in my tests, including how well it matched my prompts, response speed and creativity. How CNET tests AI image generators CNET takes a practical approach to reviewing AI image generators. Our goal is to determine how good it is relative to the competition and which purposes it serves best. To do that, we give the AI prompts based on real-world use cases, such as rendering in a particular style, combining elements into a single image and handling lengthier descriptions. We score the image generators on a 10-point scale that considers factors such as how well images match prompts, creativity of results and response speed. See how we test AI for more. The easiest way to access Stable Diffusion models is through Stability AI's Stable Assistant and DreamStudio. After a free three-day trial, there are four subscription options for Stable Assistant: Standard ($9 a month for 900 credits), pro ($19 a month for 1,900 credits), plus ($49 a month for 5,500 credits) and premium ($99 a month for 12,000 credits). 
I used the lowest tier, and after generating 75 images, I still had about 418 credits left. You also get access to Stability's AI video, 3D model and audio models with these plans. You can also access Stable Diffusion models using DreamStudio. You can initially play around with 100 free credits, then you'll need to upgrade. You can get the basic plan for $12 a month (1,200 credits) or the plus plan for $29 a month (2,900 credits). Stability AI can use the information and files you provide in your prompts (inputs) and the results it generates (outputs) for training its AI, as outlined in the terms of service and privacy policy. You can opt out in Stable Assistant by going to Profile > Settings > Disable training and history. In Dream Studio, you can go Settings > User preferences > Training: Improve model for everyone and toggle that off. You can learn more about opting out in Stability's privacy center. How good are the images, and how well do they match prompts? Stability was able to create a variety of images in many different styles. I created dramatic fantasy scenes, cute cartoon dinosaurs and photorealistic forest landscapes, all of which the program handled well. It reminded me a lot of the quality of other art-centric AI programs like Midjourney and -- finely detailed and creative. It had decent prompt adherence, which means it produced the images I asked for. This is one of my favorite Stability AI images. My prompt was inspired by the song Doomsday by Lizzie McAlpine. Created by Katelyn Chedraoui using Stability AI Like a lot of AI companies, Stability struggles with coherent text generation. Even telling Stable Assistant exactly what words I wanted to appear on the image couldn't get them to always populate correctly. DreamStudio was better, but the text was still childlike and didn't match the images' aesthetic. Stability also produced some of the most convincing AI images of products I've seen, second only to OpenAI. 
I asked Stability to create stock imagery for an iPhone, a pair of Ray-Ban sunglasses and a Hydroflask water bottle, and the results were surprisingly realistic. If you don't look too closely, these all look like they could be on each retailer's website. Created by Katelyn Chedraoui using Stability AI Requests for brand names, logos and celebrities' likenesses are typically shot down by AI image generators since they're protected content or sometimes go against a company's AI usage guidelines. I asked the chatbot if it was allowed to create brand names and logos. It replied: "I can create images that resemble well-known products and logos, but I cannot create exact replicas of copyrighted or trademarked materials." I was surprised not just to have my prompts with brand names go ahead, but for the results to be so good. One reason it may be able to produce these results is because of its training data and processes. Like the majority of AI companies, Stability's training datasets aren't public. Stability is currently being sued in a class action lawsuit where artists allege the company is infringing on their copyrighted work. Getty Images is also suing Stability, alleging that the company used 12 million photos from its collection without permission or payment. I strongly advise you not to create AI images that could potentially infringe on copyrighted material or replicate a real person's likeness. How engaging are the images? The images were engaging and often colorfully vivid. Using the upscaling tool was helpful for refining small details and making images more engaging. Images made with Stable Assistant and DreamStudio aren't watermarked, so make sure you disclose their AI origins when you share them. Can you fine-tune results? The best part of using Stability is its many editing tools. Its chatbot Stable Assistant has the most editing controls of any AI creative program I've tested, which is saying something. 
All the usual suspects were present in Stable Assistant and DreamStudio, including the ability to add, remove and replace objects and the image's background. You also have two ways to upscale to higher resolutions, which is great. But where Stable Assistant goes above and beyond is with its additional editing toolkit, which lets you recolor specific objects and create similar variations based on your image's structure or style. Plus you can apply a new style. I used the search and recolor tool to create different variations of iris and eyeliner color from the same base image (left). Created by Katelyn Chedraoui using Stability AI You can also just send follow-up editing requests in a regular message, like with OpenAI's conversational image generators. You can also use your AI image as a base for a new AI video or 3D model, a nice perk that's icing on the cake. Speaking of icing, it's worth noting that Stable Assistant's chat-to-edit function was hit-or-miss. This doesn't matter as much with other tools available to help tweak your images, but this example of a vanilla-and-chocolate cake illustrates how it can mess up. Stability and I have different definitions of what constitutes icing. Screenshot by Katelyn Chedraoui I always encourage people to use style references when they have the chance, and Stability's was decent. You can see how Stable Assistant maintained the color scheme and general vibe of my original photo (left) when I asked for a new image of a couple on a lake (right). Created by Katelyn Chedraoui using Stability AI But if you're looking to AI-ify an image or use AI to change the style of an existing image, you're out of luck. All I wanted was a cartoon version of this guacamole snap I took. Instead, Stability gave me a new version of my previous prompt asking for a forest. Why it made the deer out of tortilla chips, I don't know. 
Created by Katelyn Chedraoui using Stability AI With so many editing tools, I was initially worried about a quantity-over-quality issue. I got every tool to work at some point, but there were times when the features lacked the specificity and fine-detailed scale I would expect from a more professional program. Like with any AI service, the best way to take advantage of the many editing tools it offers is to spend some time with all of them. It's a learning curve, figuring out what tools will work best in what scenario. For me, playing around with Stability's editing tools was the best part of my reviewing process. How fast do images arrive? Stability was relatively quick, popping out images in 30 to 60 seconds. Stable Assistant only generates one image per prompt, which definitely helps speed things up. DreamStudio lets you generate up to four images at a time. I prefer when AI image generators give me multiple variations, so DreamStudio was great for that. Dramatic ballerinas are one of my favorite tests for AI image generators, and Stability succeeded. Created by Katelyn Chedraoui using Stability AI I'm impressed with Stable Diffusion. But I still have concerns Overall, I was impressed with the creativity, detail and speed of the AI images Stability produced. Stability's raw AI images weren't immune to the hallucinations and errors that plague AI images. There are definitely things I wouldn't use Stability for, like text-heavy imagery. But the sign of a great AI image generator is whether the program offers you tools to fix those mistakes. This is where Stability shines, especially in Stable Assistant, and its editing suite clearly outpaces the competition. But I'm not without concerns. First, it was ridiculously confusing to figure out the best way to use the Stable Diffusion models, whether through Stable Assistant, DreamStudio or third-party platforms. 
A lot of the user interface settings I wanted in Stable Assistant were available in DreamStudio (like a main library and the ability to select what AI model you wanted to use). But DreamStudio doesn't have all of the editing tools that I enjoyed and used in Stable Assistant. I'm also concerned that the most recent AI SD model underlying both programs, Stable Image Ultra, is a little too good at recognizing and replicating brand-name characters, logos and products. In the future, I would love to see Stability AI more clearly address the differences between Stable Assistant and DreamStudio. I also think future model updates can learn some from OpenAI about legible text generation in AI images. These simple changes would take the frustration out of using what is ultimately a capable AI creative system.


Stop Buying Expensive Phones. I Tested This $400 Samsung Galaxy That Nails the Basics
CNET's key takeaways

The Galaxy A36 is one of Samsung's three midrange phones and costs $400. The phone packs a generous 5,000-mAh battery, as well as 45-watt fast charging. The A36 has a slightly larger display than its predecessor, and it's nice and bright, even in direct sunlight. A 50-megapixel main camera captures punchy photos, especially in portrait mode.

The camera compromises on sharpness and detail. The A36's bezels are pretty noticeable. There's also a slight lag when launching apps like the camera or rotating the phone.

As a friend and I stroll along the Chicago River on a sunny, sweltering summer day, I pause and reach for the phone in my pocket. "Hold on," I say. "We need to take a basic picture of our drinks with the city in the background for my article." I'm greeted with the all-too-familiar (half-joking) scoff of, "Is that an Android phone?" But when my friend looks at the image, she generously says, "Oh, that's pretty good."

"Pretty good" is a solid summary of the Samsung Galaxy A36, which, at $400, delivers on just about everything, from day-long battery life to a trusty triple-camera system to impressive durability. Of course, you'll have to make some compromises when it comes to factors like image quality and overall performance. But if you're keen not to pay close to $1,000 for a smartphone, the A36 could be your answer.

The photo that earned a reluctant compliment from my iPhone-loving friend. Abrar Al-Heeti/CNET

That picture I shot by the river, which I took in both portrait and standard modes, is bright, clear and satisfyingly in focus. Shadows and highlights are a bit exaggerated, but overall, it's an image I'm pleased with. Other photos I snapped throughout the week I tested the phone offered a similar vibrancy, though, compared to pricier phones like the $800 Samsung Galaxy S25 or $829 iPhone 16, colors tend to be a bit more muted, and some details get lost.
But you get what you pay for, and at $400, I'd argue you get good bang for your buck.

The A36 borrows some elements from the flagship Galaxy S25 series. It comes with One UI 7 and Android 15, and it packs AI features like Object Eraser for photos and Google's Circle to Search. You can also get more thorough answers to your questions by chatting with Gemini. The best thing about the AI features is that they don't feel forced; you won't be bombarded each time you try to do something on your phone. But if you want to clean up a photo or get quick and detailed information about something on your screen, AI is at your fingertips.

When the Galaxy A36's lavender backing catches the light, it creates a dazzling effect. Abrar Al-Heeti/CNET

My experience with the Galaxy A36

One of the Galaxy A36's biggest flexes is its 5,000-mAh battery, which is paired with 45-watt fast charging. That places it on par with the $1,000 Galaxy S25 Plus, which also includes 45-watt fast charging and a slightly smaller 4,900-mAh battery. There's a charging cable in the A36's box, but no power brick. Still, the baseline S25 and S25 Plus benefit from their more power-efficient Snapdragon 8 Elite chip, while the A36 has a Snapdragon 6 Gen 3 processor, which is geared toward midrange phones.

The battery on the more affordable device still packs plenty of power, though. In CNET's 45-minute endurance test, which involves a combination of streaming, scrolling through social media, joining a video call and playing games, the A36's battery dropped from full to 89%. By comparison, the S25 dropped from full to 93% and the S25 Plus dropped to 94%. And in a longer, three-hour streaming test over Wi-Fi, in which I watched a YouTube video in full-screen mode at full brightness, the A36 dropped from 100% to 84%.
Meanwhile, the S25 dropped to 85% and the S25 Plus reached 86%, so the A36, impressively, isn't far behind its pricier counterparts. In a 30-minute charging test, the A36's battery hit 31%, and it took over an hour and a half to reach full.

There are other moments when I was reminded that this is a midrange phone, like the slight lag when going from portrait to landscape mode while watching a YouTube video, or the fact that it takes about a second to launch the camera. Oftentimes, when unlocking the phone after a few hours of inactivity, it takes a moment for the display to light up after pressing the power button. But nothing stands out as a major issue or red flag.

The Galaxy A36 5G costs $400.

Samsung Galaxy A36 look and feel

Perhaps my favorite thing about the A36 is how it looks. The iridescent lavender backing is so striking that I often find myself staring at it, mesmerized, as it catches the light. (It also makes me wish premium phones came in more playful colors.) If you want something a bit more subtle, the A36 also comes in black.

Turning to the screen, the bezels are quite obvious, but they're thinner than the ones on last year's A35, which bumps the display size up to 6.7 inches from 6.6 inches. A 120Hz refresh rate makes scrolling through social media apps and streaming videos enjoyable; I often forgot I was using a midrange phone because there weren't any glaring differences. The 1,900 nits of peak brightness made looking at the screen easy, even under the unforgiving Midwest summer sun.

Both the front and back of the phone feature Corning's Gorilla Glass Victus Plus, which makes the A36 feel nice and sturdy -- and also makes me feel better about using it without a case (as does the relatively low price tag).
It has an IP67 rating for dust and water resistance, meaning it can withstand being submerged under 1 meter (about 3 feet) of water for up to 30 minutes, so I don't have to be too nervous about bringing it to the beach or simply having it in the vicinity of a cup of water I'm likely to spill.

Galaxy A36 camera

A phone's camera tends to be the most important aspect to me (and I'm not alone). The A36 has a 50-megapixel wide, 8-megapixel ultrawide and 5-megapixel macro camera, as well as a 12-megapixel selfie camera.

The A36 portrays the range of colors in this flower bed, with a slightly more subdued overtone. Abrar Al-Heeti/CNET

An overcast sky makes for some deeper shadows, especially under the Bean, but the buildings in the background maintain a good level of detail. Abrar Al-Heeti/CNET

I snapped both a standard and a portrait-mode shot of my friend at the Harry Potter Shop Chicago, and she was again (surprisingly) pleased with the result. It got a resounding "Oh, that's not bad." And I have to agree. The foreground in both photos is in clear focus, and the colors are a bit on the saturated side, but in a way that's still flattering and bold.

A standard-mode shot of my friend. Abrar Al-Heeti/CNET

A portrait-mode shot of my friend. Abrar Al-Heeti/CNET

I snapped photos of my niece in the backyard at around 9 p.m. to test nighttime shots, and the result was also greeted with a "That's pretty good." The phone brightened up what was otherwise a nearly pitch-black setting, making it possible to see my niece's facial expression and some details on her dress.

The shadows here are still pretty noticeable, but at least the subject gets brightened up quite a bit. Abrar Al-Heeti/CNET

Lastly, I switched to the front camera to see how the A36 handles selfies, and it served up a flatteringly soft overtone and smoothing effect on my face without compromising much in the way of sharpness and detail.

That signature softness of Galaxy selfies.
Abrar Al-Heeti/CNET

The A36 also supports 10-bit HDR video recording, which makes for punchier colors and overall vibrancy. I enjoyed shooting footage of my parents' garden and capturing the colorful blooms and lush greenery.

Galaxy A36 specs

6.7-inch AMOLED display
120Hz adaptive refresh rate
1,900 nits peak brightness
5,000-mAh battery
45-watt charging
Cameras: 50-megapixel wide-angle, 8-megapixel ultrawide, 5-megapixel macro, 12-megapixel selfie
USB-C port
Weight: 195g (6.89 oz.)
Dimensions: 6.41 x 3.08 x 0.29 in. (162.9 x 78.2 x 7.4mm)
IP67 rating for water and dust resistance
128GB storage with 6GB or 8GB of RAM; 256GB storage with 6GB, 8GB or 12GB of RAM
Six years of software and security updates
Price: $400

CNET's buying advice

If your key priority is buying a phone that nails the basics without all the frills, the Galaxy A36 could be the perfect fit. It's a midrange device that falls right in the middle of Samsung's A series line, meaning you'll get all the key features like a good camera, long battery life and solid performance and durability. You'll also get six years of software and security updates to help you squeeze every penny out of this purchase.

A $400 phone is going to come with some compromises, like image detail and slight lag with some functions, but none of those things are a deal breaker if you want something that delivers where it really counts. Don't expect many frills with the A36, but you'll get just enough AI, whether it's for chatting with Gemini or polishing up your photos. And those photos may even earn a conceding compliment from your loved ones to boot. For more affordable phone options, check out CNET's roundup of the best budget-friendly phones.

How we test phones

Every phone CNET's reviews team tests is used in the real world. We test a phone's features, play games and take photos. We examine the display to see if it's bright, sharp and vibrant.
We analyze the design and build to see how the phone is to hold and whether it has an IP rating for water resistance. We push the processor's performance to the extremes using standardized benchmark tools like Geekbench and 3DMark, along with our own anecdotal observations while navigating the interface, recording high-resolution videos and playing graphically intense games at high refresh rates.

All the cameras are tested in a variety of conditions, from bright sunlight to dark indoor scenes. We try out special features like night mode and portrait mode, and compare our findings against similarly priced competing phones. We also check battery life by using the phone daily, as well as by running a series of battery-drain tests.

We take into account additional features like support for 5G, satellite connectivity, fingerprint and face sensors, stylus support, fast charging speeds and foldable displays, among others that can be useful. We balance all of this against the price to give you our verdict on whether that phone, whatever it costs, actually represents good value. While these tests may not always be reflected in CNET's initial review, we conduct follow-up and long-term testing in most circumstances.