Latest news with #NightCafe


Tom's Guide
21 hours ago
- Tom's Guide
I put 5 of the best AI image generators to the test using NightCafe — this one took the top spot
Competition in the AI image generator space is intense, with multiple companies like Ideogram, Midjourney and OpenAI hoping to convince you to use their offerings. That is why I'm a fan of NightCafe and have been using it for a few years. It has all the major models in one place, including DALL-E 3, Flux, Google Imagen and Ideogram.

I've created a lot of AI images over the years, and every model brings something different. For example, Flux is a great general-purpose model available in different versions, Imagen 4 is incredible for realism, and Ideogram does text better than anything but GPT-4o.

With NightCafe you can try the same prompt over multiple models to see which you prefer, or even create a realistic image of, say, a train station using Google Imagen, then use that as a starter image for an Ideogram project to overlay a caption or stylized logo. NightCafe also offers most of the major video models, including Kling, Runway Gen-4, Luma Dream Machine and Wan 2.1, but for this test we're focusing on image models.

Having all those models to hand is a great way to test each of them and find the one that best matches your personal aesthetic, and they're each more different than you think. As well as the 'headline' models like Flux and Imagen, there are also community models that are fine-tuned versions of Flux and Stable Diffusion. For this test I focused on the core models: OpenAI GPT Image-1, Recraft v3, Google Imagen 4, Ideogram 3 and Flux Kontext.

I came up with a prompt to try across each model. It requires a degree of photorealism, presents a complex scene and includes a subtle text requirement.

Google's Imagen 4 is the model you'll use if you ask the Gemini app to create an image of something for you. It's also the model used in Google Slides when you create images. 
This was the first image for the test, and while it captured the smoke rising, it over-emphasized it slightly. It did create a visually compelling scene and followed the requirement for the two people in it. It captured the correct vehicle, but there's no sign of the text.

Black Forest Labs' Flux models are among the most versatile and are open source. With the arrival of the Kontext variant, we got image models that also understand natural language better. This means that, a bit like OpenAI's native image generation in GPT-4o, it gives much more accurate results, especially when rendering text or complex scenes. Flux Kontext captured the 'Cafe Matin' perfectly, got the woman right, and it somehow feels more French than Imagen, but I don't think it's as photographically accurate.

GPT Image-1, not to be confused with the original GPT-1 model from 2018, is a multimodal model from OpenAI designed for improved rendering accuracy; it is used by Adobe, Figma, Canva and NightCafe. Like Kontext, it has a better understanding of natural language prompts. One downside is that it can't produce 9:16 or 16:9 images, only variants of square. It captured the truck and the name, but I don't think the scene is as good. It also randomly generated a second umbrella, and the placement of the hands feels unreal.

Ideogram has been one of my favorite AI image models since it launched. Always able to generate legible text, it is also more flexible in terms of style than the other models, and the Ideogram website includes a well-designed canvas and built-in upscaler. The result isn't perfect (the barista leans oddly), but the lighting is more realistic, and so is the scene, with the truck on the sidewalk instead of the road. It also feels more modern, and the text is both legible and well designed.

Recraft is more of a design model, perfect for both rendered text and illustration, but that doesn't mean it can't create a stunning image. 
When it hit the market it shook things up, beating other models to the top of leaderboards, but I wasn't overly impressed with the output here. Yes, it's the most visually striking, in part thanks to the space given to the scene, but it over-emphasizes the smoke, and where is the barista? Also, for a model geared around text, there's no sign writing at all.

While Flux had a number of visual issues, it was the most consistent, and it included legible sign writing. If I were using this commercially, as a stock image, I'd go with the Google Imagen 4 image, but from a purely visual perspective, Flux wins.

What you also get with Flux Kontext is easy adaptation. You could use a secondary prompt to change the truck color or replace the old lady with a businessman. You can do that in Gemini, but not with Imagen; you'd need to use native image generation from Gemini 2+. If you want to make a change to any image using Kontext, even one that wasn't a Kontext image originally, just click on the image in NightCafe and select "Prompt to Edit". It costs about 2.5 credits and is just a simple descriptive text prompt away.

I used the most expensive version of each model for this test, the one that takes the most processing time per image, which allowed for the fairest comparison. What surprises me is just how differently each model interprets the same descriptive prompt. What doesn't surprise me is how much better they've all become at following that description.

What I love about NightCafe, though, is that it's a one-stop shop for AI content. It isn't just a place to use all the leading image and video models; it also hosts a large community with a range of games, activities and groups centered around content creation. You can also edit, enhance, fix faces, upscale and expand any image you create within the app.
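The comparison workflow used for this test, one prompt run across several models, can be sketched in a few lines of Python. This is a hypothetical illustration only: NightCafe is used through its website, so the `generate` function below is a made-up placeholder for whatever image-generation call or manual step you actually use, and the model names are just labels.

```python
# A minimal sketch of the "same prompt across multiple models" comparison.
# `generate` is a hypothetical stand-in, not a real NightCafe API call.

PROMPT = (
    "A vintage coffee truck outside a Paris cafe at dawn, steam rising, "
    "a barista serving an elderly woman, a sign reading 'Cafe Matin'"
)

MODELS = ["flux-kontext", "imagen-4", "gpt-image-1", "ideogram-3", "recraft-v3"]

def generate(model: str, prompt: str) -> str:
    """Hypothetical placeholder: pretend to render and return an output path."""
    return f"{model}.png"

def compare(prompt: str, models: list[str]) -> dict[str, str]:
    """Run one prompt through every model so results can be reviewed side by side."""
    return {m: generate(m, prompt) for m in models}

results = compare(PROMPT, MODELS)
for model, path in results.items():
    print(f"{model}: {path}")
```

The point of the structure is simply that the prompt is held constant while the model varies, which is what makes the side-by-side judgments in the test meaningful.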


Hamilton Spectator
11-06-2025
- Entertainment
- Hamilton Spectator
Is it art, or is it stealing work? Album cover designers stare down an AI future
TORONTO - Finger Eleven guitarist James Black has picked up a new instrument, one that pushes the boundaries of his visual imagination — generative artificial intelligence technology. The Toronto musician and graphic artist admits it's a controversial choice, but over the past year, he's been using the tool to help design his band's new album covers. Each one showcases grand concepts, stunning imagery and ultimately a piece of art that demands attention in an era where all musicians are jostling to stand out.

'We're in the blockbuster age where people like to see big, big things,' Black says from his office. 'Whenever I have an idea, it's usually something beyond what we have the resources to do, and AI means you don't have to put a lid on those ideas.'

His work usually starts with typing a few descriptive words into AI software and collecting the images it spits back out. Then, he uses photo editing to fine-tune his favourites so they fit his original vision. Sometimes, he submits those altered images back into the AI to generate more ideas. 'There's quite a bit of back-and-forth where you're applying your own skill and then putting it back in,' he said. 'It's a little bit like arguing with a robot. You have to nuance it into doing what you want.'

One of his first experiments was the cover artwork for Finger Eleven's 2024 single 'Adrenaline.' The illustration shows a curvaceous woman in a skin-tight red-and-white racing suit, her head concealed under a motorcycle helmet. She's standing in the middle of a racetrack with her back to the viewer. A cloudy blue sky imparts an otherworldly calm.

Anyone who's seen recent AI artwork will probably recognize the hyperrealistic sheen of its esthetic. Other familiar AI trademarks are there too, including a landscape firmly rooted in a dream world. Generative image models are trained on billions of photographs to learn patterns, such as recurring shapes and styles. 
They then use that information to construct images that can often seem familiar. Many fear that the tools also draw from copyrighted pieces without permission from their creators. It's a legal quagmire that only scratches the surface of the ethical debate around generative AI models. Beyond the copyright risks, critics fear the technology will cost album cover designers and photographers their jobs. But AI programs such as NightCafe, CoPilot and Adobe Firefly offer cutting-edge possibilities that many artists say they can't ignore.

Still, Black said he understands there are ethical concerns. 'I'm definitely torn myself,' he said. 'But I'm using it because it extends as far as my imagination can go.'

Other musicians have found that generative AI answers the demands of a streaming industry that pressures them to churn out new music, eye-catching lyric videos and other visual elements regularly. But some fan bases aren't sympathetic to those reasons.

Last year, Tears for Fears was slammed on social media after they revealed the cover of their live album 'Songs for a Nervous Planet,' which had several familiar AI image traits. The illustration shows an astronaut staring straight at the viewer, their face concealed under a space helmet. They're standing in the middle of a field of sunflowers that stretches into the distance. A cloudy blue sky imparts otherworldly calm.

The cover's creator, Vitalie Burcovschi, described it as 'art created by AI using human imagination.' But fans were quick to accuse the band of using AI that might have scraped copyrighted work. As blowback intensified, the English duo released a statement calling it 'a mixed media digital collage, with AI being just one of the many tools used.'

Pop singer Kesha encountered similar flak for the cover of her 2024 single 'Delusional,' which featured a pile of Hermès Birkin bags with the song's name spray-painted across them. 
Fans instantly recognized common flaws of an AI-created image: misspellings in the song's title, sloppy digital fragments. Some demanded she redo the artwork with paid photographers. It took months, but the singer replaced the image with a photograph of herself tied to a chair. She assured fans it was created with an 'incredible team of humans.'

'AI is a Pandora's box that we as a society have collectively opened, and I think it's important that we keep human ramifications in mind as we learn how to use it as a tool and not as a replacement,' she said in an Instagram post in May.

Illustrator and musician Keenan Gregory of the band Forester says he used AI technology to extend the background of an old photograph so it could fit on the cover of the band's upcoming EP. The original image for 'Young Guns' was taken in the 1940s as a vertical photograph and showed bass player Dylan Brulotte's grandfather strolling through the streets of Edmonton. Gregory needed a square shape for the album cover, so he put the shot into Photoshop's generative AI tool, which artificially extended the frame's left and right edges with more detail. He removed certain background elements, like storefront signs, with a blend of traditional photo editing techniques.

'Typically, an artist would have to do that manually,' he said. 'But having AI provide you with options, which you then edit, is very powerful.' Gregory said he considers AI one of a photo editor's many tools, adding he didn't use it to make the cover for Royal Tusk's 'Altruistic,' which earlier this year won him a Juno Award for best album artwork.

Even when musicians are transparent about using AI, some fans are not ready to embrace it, as British Columbia rock band Unleash the Archers learned last year. Vocalist Brittney Slayes said their concept album 'Phantoma' told the story of an AI gaining sentience and escaping into the real world in the body of an android. 
To explore the album's theme, Slayes said some of her songwriting drew inspiration from ChatGPT suggestions, while they used visual AI programs to create inspiration images for songs. She said the band also filmed a music video for 'Green & Glass' and then fed the finished product into an AI model trained on artwork by Bo Bradshaw — the illustrator for the band's merchandise. It spat out an AI-animated version of the video. 'We paid to license all of his artwork ... so he was compensated and he was credited,' she said.

But the reaction was swift. Some listeners accused the band of theft, alleging that despite paying for Bradshaw's work, the AI tool likely used other unlicensed art to fill out the visuals. 'We didn't realize that even though our model was trained after one artist, the program was going to fill in the blanks with others,' Slayes said. 'People didn't care. The second the word 'AI' was used, we were targeted. You know, the usual Twitter uproar, being like scraped across the internet as these terrible people that use AI in their music.'

Unleash the Archers responded on their socials, issuing a statement acknowledging they had unintentionally implied their video featured original artwork by Bradshaw when it was actually produced through an AI program without his direct involvement. Their statement recognized how fraught the risks are for bands eager to explore new technology, saying that 'while we were expecting some controversy, we weren't expecting as much as we got.'

Slayes said the backlash has forever sullied her connection to the album, which she originally intended as an exploration of an inevitable AI future. Instead, to her, it's become a reminder of how fast-developing AI technology is provoking deep-rooted anxieties. 'People are still afraid of it,' she said. 'And for good reason, because it is taking jobs.' 
For other artists, she urges them to think carefully about how they introduce AI into their own projects: 'If you're going to use AI for your artwork, you've got to have a really good reason.' This report by The Canadian Press was first published June 10, 2025.

