
Samsung brings AI-powered lock screen shopping to select Galaxy mobile phones
According to a 9to5Google report, Galaxy phone users can now opt into a lock screen shopping experience that lets them see themselves virtually wearing different outfits and styles. The feature uses selfies or other images provided by users to create realistic renderings of how various clothes might look on them. Users can then save these images as wallpapers or share them.
Jason Shim, head of the Galaxy Store in the US, said this integration offers a personalised and interactive shopping experience on the lock screen, designed to engage users more directly. The AI system uses Google's Gemini and Imagen models to generate visuals and suggest apparel options.
The service pulls from more than 400 brands and retailers, including Levi's, Old Navy, and Tommy Hilfiger, according to the report. Users can purchase items through the app with a simple tap. The AI also tracks current trends, local events, and social media to give users timely updates on sales and promotions.
The new shopping feature will be available on several Galaxy phone models, including the Galaxy S22, S23, S24, and S25 series, though not the S25 Edge. Users can download the Glance AI app through the Galaxy Store to access the service.
Glance first announced its AI shopping concept last month. The company said that the experience centres on the user, tailoring clothing suggestions based on selfies and personal data like age, body type, gender, and height.
Beyond apparel, Glance plans to expand its AI-powered styling services to cover beauty products, accessories, and travel recommendations later this year. This ongoing development aims to provide a broader range of personalised shopping options for Galaxy users through AI technology.

Related Articles


Hindustan Times
an hour ago
Why is Google making two robots play endless table tennis? The reason reveals the future of AI
At a lab south of London, two robotic arms have been playing table tennis non-stop, pushing each other to new limits and quietly hinting at the future of artificial intelligence in the real world. Unlike the legendary Wimbledon marathon where humans finally called it quits, these robots seem content to keep going, always learning, never truly finished. (Image caption: Table tennis helps Google's robots learn to handle real-world unpredictability, one ball at a time. Unsplash)

Training robots, one rally at a time

Google DeepMind's project started as a hunt for better ways to train robots to handle real-world complexity. After all, it isn't enough for a robot to just lift a box if it cannot adjust to unexpected changes or interact with people around it. The team decided that table tennis, a game that mixes fast reaction times, precision control, and strategic play, was a natural choice for testing. Every point, with its wild spins and shifting speeds, is a lesson in adapting to a moving target.

The first step was simple rallies. The robots played cooperatively, just keeping the ball in play. Gradually, engineers turned up the challenge, tweaking the rules so that each arm began to compete for points. Improvement wasn't immediate; the robot arms forgot some tactics as fast as they learned new ones, and early rallies were often short and awkward.

Progress ramped up, though, when real humans jumped in. Facing off against people with different styles, the robots began seeing a broader set of shots, forcing them to adjust and respond on the fly. After dozens of matches, these arms could routinely outplay beginners and even break even with some intermediate players.

What really sets this project apart is how the robots are now getting feedback. Google's Gemini vision-language model watches clips of table tennis games, then gives clear, actionable advice: hit farther right, go for a short ball, defend closer to the table. Unlike old-school programming, this feedback comes in natural language, almost like a coach at the sidelines. The robots adjust their strategies and keep growing, rally by rally.

Why it matters beyond the table

There's a bigger dream behind this marathon. DeepMind hopes that robots learning from endless competition and human coaching will one day lead to machines ready for real jobs. It's a step toward robots as office helpers, lab partners, or just reliable hands in unpredictable home environments. In the world of robotics, mastering 'simple' actions, like tying a shoelace or avoiding trip-ups, remains the real challenge, not chess or code-breaking. Long rallies at the table may help smooth that learning curve and chip away at obstacles that have slowed progress for years.

Researchers say these games are just the beginning. As AI models become more general and feedback loops tighter, the journey from lab-bound robot to everyday helper could speed up. Until then, the arms keep at it, never tiring, always volleying, and inching closer to a day when robots truly join us in the rhythm of daily life.
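To make the idea of language-based coaching concrete, here is a minimal, purely illustrative Python sketch of such a feedback loop: a "coach" model returns advice in plain language, and the robot maps that advice onto a few strategy parameters. The names used here (Strategy, ask_coach, apply_advice) and the parameters are hypothetical stand-ins, not DeepMind's actual system, which the report only describes in broad strokes.

```python
# Illustrative sketch only: a language-model "coach" turns rally footage into
# natural-language tips, which are then mapped onto simple strategy parameters.
# None of these names come from DeepMind's system; they are stand-ins.
from dataclasses import dataclass

@dataclass
class Strategy:
    aim_offset: float = 0.0       # positive = aim farther right
    shot_depth: float = 0.5       # 0 = short ball, 1 = deep ball
    stance_distance: float = 0.6  # metres behind the table edge

def ask_coach(rally_summary: str) -> str:
    """Placeholder for a vision-language model call (e.g. a Gemini-style API)
    that would watch a clip and return advice in plain language."""
    return "Hit farther right and defend closer to the table."

def apply_advice(strategy: Strategy, advice: str) -> Strategy:
    """Crude keyword mapping from natural-language advice to parameter nudges."""
    text = advice.lower()
    if "farther right" in text:
        strategy.aim_offset += 0.1
    if "short ball" in text:
        strategy.shot_depth = max(0.0, strategy.shot_depth - 0.2)
    if "closer to the table" in text:
        strategy.stance_distance = max(0.3, strategy.stance_distance - 0.1)
    return strategy

strategy = Strategy()
for rally in range(3):  # stand-in for an endless stream of rallies
    advice = ask_coach(f"summary of rally {rally}")
    strategy = apply_advice(strategy, advice)
    print(rally, advice, strategy)
```

The point of the pattern is simply that the feedback arrives as text rather than as hand-tuned control code, and the robot's behaviour is nudged a little after every rally.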


Time of India
an hour ago
The 5% rule: What can you do that AI still can't?
By Abhik Choudhury

This anxiety around AI taking our jobs isn't novel, it's just wearing shinier shoes this time. We've been here before: when steam engines replaced horse carts, when ATMs replaced bank tellers, when Excel wiped out half of accounting. This time, though, it's not the blue collars trying to make ends meet but the white-collar elite looking nervously over their ergonomic chairs. Shopify CEO Tobi Lütke recently said that the company has paused hiring for roles where AI can already outperform humans, adding, "You must prove that what you do cannot be done better by AI." The industrial revolution replaced muscle. The AI revolution is replacing method, and maybe even meaning. Machines haven't made humans irrelevant, they've forced us to evolve. And that's what's happening again. According to the World Economic Forum's Future of Jobs Report 2025, skills like creative thinking are among the fastest growing, while roles like graphic designer are among the fastest declining. So, to reiterate a much-needed distinction: no, human intelligence isn't becoming irrelevant, it's just being upgraded.

THE 5% RULE: EVOLVE OR BE EXCEPTIONAL

Here's the uncomfortable truth: if you're not evolving, you need to be in the top 5% to survive. Everyone else? Relearn. Rethink. Rewire. Everything. AI doesn't care about your job title. And there is now enough research to confirm that neither does your customer. They want output, fast. They want value, consistent. You may love craft, but the customer loves convenience. Be honest: would you buy a smartphone just because one ad was made by Gemini and the other by Gitanjali?

Back in 2019 itself, JP Morgan Chase tested AI-written copy via Persado and saw a 450% spike in click-through rates. The tool analysed millions of phrases to pick what would resonate emotionally with readers. A good three years ago, WPP partnered with NVIDIA to build a generative AI-enabled content engine, trained in layout, typography and brand personality for their advertising briefs, exclusively for art directors. These examples are from years ago. Today, the tools are leaner, faster, and trained on every brand brief since Mad Men aired. Recently, a global agency head told me how, pre-pandemic, they usually needed an average of 12 people before a campaign went live; now the number is already at 4. Just ask around: in the last two years, how many full-time, mid-to-senior-level hires have happened in the industry, and how many senior creative directors of the biggest names are working as freelance consultants now?

And this is not to say creative legends are obsolete. If you're Stephen King, Gulzar, or Nolan, you'll always be in demand. But if your name isn't also your brand? You're not fighting AI. You're fighting other humans using AI better than you. So don't go to a gun fight with a sword. Especially when we are just figuring out the expanse of AI agents while almost fully liberated, self-deciding agentic AIs are on standby, waiting to go live any day now. In the end, revenue never lies, and it's never sentimental. Really unfortunate, but we've built a capitalist system that is supposed to reward results, not romance.

So what are the next steps? Identify the 10% of your role that relies on deep human insight, nuanced emotion, or cultural fluency. Double-check your last three projects: what part was remarkable and not replicable? Study your audience. Work on your personal brand moat: become so synonymous with a niche that you are good enough to train the software with your experience.
STOP BEING ROMANTIC. START BEING REFLECTIVE.

Let's take journalism. If you're not using AI to scan earnings calls, summarise government reports, or verify PR spin in real time, you're not post-AI ready. According to UKG's 2023 survey across 10 countries, 78% of C-suite leaders expect automation in workflows by 2028. Earlier this year, Australia's CADA found itself at the centre of a backlash when it was revealed that its popular 11am–3pm radio show, 'Workdays with Thy', was hosted entirely by AI, a synthetic face and an Eleven Labs-powered voice that went unnoticed for six months. It's still on air.

And it's not just reporting. At McKinsey, an AI assistant named Lilli, released in 2023, is trained on 100 years of internal consulting work. It scans over 100,000 documents and, as per the firm's own statement, is used weekly by more than 70% of its 45,000 consultants to surface insights and accelerate analysis. What junior analysts once did in weeks, Lilli does in minutes. Now a partner can just ask Lilli, 'Why did PepsiCo lose 10K dealers in March 2013?' and instantly get references, charts, and insights to draft into a presentation or proposal. No digging through dead PDFs, old mail trails and 20 TB hard drives over a month. And here's the kicker: prompt performance is now quietly being used to evaluate junior consultants. Not just in consulting, but across industries. The $20 vs $2000 productivity debate is already playing out in every CFO's head.

Now let's take a look at a completely different department: HR. IBM's AI models now claim 95% accuracy in flagging which employees are likely to quit, using patterns in tenure, overtime, and promotion history. IBM says this saved it $300 million by retaining talent before attrition struck. Tools like Humanyze now scan real-time sentiment on Slack and Teams, and some even deep-dive into internal email analysis. They don't just predict disengagement, they offer action plans: 'This person is likely to quit in the next 60 days. Please take the following steps to retain.' That's not science fiction. That's Tuesday. The HR of the future won't just sense attrition, it'll trigger workflows: 'Change the project, schedule a 1:1, lower the workload.' So if the algorithm can sense burnout and enable preemptive retention steps before your boss can, maybe it's time to stop pretending it's still 2018.

So what are the next steps? Learn prompt engineering. Take a course in understanding the basics of building and working with AI, and build your own AI stack for daily work. Make it a weekly ritual to ask: 'How did AI save me 7 hours this week, and how can I use that time more effectively now?'

THE EPILOGUE: SURPRISE THE ALGORITHM

AI can now write poetry, generate illustrations, mimic brand voices, and compose video scripts in seconds. What it can't do is be weird. Or uncomfortable. Or irrational. Or beautiful in a way that makes no statistical sense. AI is trained on what is expected to be good. But it cannot carry the weight of a lullaby or the chaos of a forgotten love. In a world engineered for sameness, your rebellion is your art. Break the expected patterns till you glow as the glitch. And ask yourself before your next project, pitch, or personal brand tweak: 'Is this surprising the algorithm?' Want to thrive? Be the one AI can't clone (yet), because your voice, your vision, or your thinking still surprises the algorithm. And that's the t-shirt I would have gifted to my students entering the creative field: Surprise the algorithm.
Because that, kind reader, is your 5%. The age of purely human output is gone. What we're in now is the hybrid era, where the best of us will look increasingly like Iron Man, not Superman: tech-augmented, emotionally intelligent, and dangerously efficient. In ten years, this article might not just be translated, it could be psychographically rewritten for each reader. Same ideas, but delivered in the tone, lingo, and rhythm your brain likes best. Written by me? Maybe. Written by an AI trained on my brain? Almost certainly.

(The author is chief strategist and founder of Salt and Paper Consulting. Views expressed are personal.)


India Today
3 hours ago
Should you double-check your doctor with ChatGPT? Yes, you absolutely should
First, there was Google. Or rather Doctor Google, as it is mockingly called by the men and women in white coats, the ones who come in one hour late to see their patients and those who brush off every little query from patients brusquely and sometimes with unwarranted irritation. Now there is a new foe in town, and it is only now that doctors are beginning to realise it. This is ChatGPT, or Gemini, or something like DeepSeek, the AI systems that are coherent and powerful enough to act like medical guides. Doctors are, obviously, not happy about it. Just the way they enrage patients for trying to discuss with them what the ailing person finds after Googling symptoms, now they are fuming against advice that ChatGPT can dish out.

The problem is that no one likes to be double-checked. And Indian doctors, in particular, hate it. They want their word to be the gospel. Bhagwan ka roop or something like that. But frustratingly for them, the capabilities of new AI systems are such that anyone can now re-check their doctor's prescription, or can read diagnostic films and observations, using tools like ChatGPT. The question, however, is: should you do it? Absolutely yes. The benefits outweigh the harms.

Let me tell you a story. This is from around 15 years ago. A person whom I know well went to a doctor for an ear infection. This was a much-celebrated doctor, leading the ENT department in a hospital chain which has a name starting with the letter F. The doctor charged the patient a princely sum and poked and probed the ear in question. After a few days of tests and consultations, a surgery, a rather complex one, was recommended. It was at this time, when the patient was submitting the consent forms for the surgery that was scheduled for a few days later, that the doctor discovered some new information. He found that the patient was a journalist in a large media group. This new information, although not related to the patient's ear, quickly changed the tune the doctor was whistling. He became coy and cautious. He started having second thoughts about the surgery. So, he recommended a second opinion, writing a reference for another senior doctor, who was the head of the ENT department at a hospital chain which has a name starting with the letter A. The doctor at this new hospital carried out his own observations. The ear was probed and poked again, and within minutes he declared, 'No surgery needed. Absolutely, no surgery needed.'

What happened? I have no way of confirming this. But I believe here is what happened. The doctor at hospital F was pushing for an unnecessary and complex surgery, one where the chances of something going wrong were minimal but not zero. However, once he realised that the patient was a journalist, he decided not to risk it and, to get out of the situation, relied on the doctor at hospital A.

This is a story I know, but I am sure almost everyone in this country will have similar anecdotes. At one time or another, we have all had a feeling that this doctor or that was probably pushing for some procedure, some diagnostic test, or some advice that did not sit well with us. And in many unfortunate cases, people actually underwent some procedure or some treatment that harmed them more than it helped. Medical negligence in India flies under the radar of 'doctor is bhagwan ka roop' and similar sentiments. Unlike in other countries, where medical negligence is something that can have serious repercussions for doctors and hospitals, in India people in white coats get flexibility in almost everything that they do.
A lot of it is due to the reverence that society has for doctors, the savers of life. Some of it is also because, in India, we have far fewer doctors than are needed. This is not to say that doctors in India are incompetent. In general, they are not, largely thanks to the scholastic nature of modern medicine and procedures. Most of them also work crazy long hours, under conditions that are extremely frugal in terms of equipment and highly stressful in terms of workload.

And this is exactly why we should use ChatGPT to double-check our doctors in India. Because there is a huge supply-demand mismatch, it is safe to say that we have doctors in the country who are not up for the task, whether these are doctors with dodgy degrees or those who have little to no background in modern medicine, and yet they put Dr in front of their name and run clinics where they deal with the most complex cases. It is precisely because doctors are overworked in India that their patients should use AI to double-check their diagnostic opinions and suggested treatments. Doctors, irrespective of what we feel about them and how we revere them, are humans at the end of the day. They are prone to making the same mistakes that any human would make in a challenging work environment. And finally, because many doctors in India, not all, but many, tend to overdo their treatment and diagnostic tests, we should double-check them with AI. Next time, when you get a CT scan, also show it to ChatGPT and then discuss with your doctor if the AI is telling you something different.

In the last one year, again and again, research has highlighted that AI is extremely good at diagnosis. Just earlier this month, a new study by a team at Microsoft found that their MAI-DxO, a specially-tuned AI system for medical diagnosis, outperformed human doctors. Compared to the 21 doctors who were part of the study and who were correct in only 20 per cent of cases, MAI-DxO was correct in 85 per cent of cases.

But none of this is to say that you should replace your doctor with ChatGPT. Absolutely not. Good doctors are indeed precious and their consultation is priceless. They will also be better with the subtleties of the human body compared to any AI system. But in the coming months and years, I have a feeling that doctors in India will launch a tirade against AI, similar to how they once fought Dr Google. They will shame and harangue their patients for using ChatGPT for a second opinion. When that happens, we should push back. Indian doctors are not used to questions, they don't like to explain, they don't want to be second-guessed or double-checked. And that is exactly why we should ask them questions, seek explanations and double-check them, if needed, even with the help of ChatGPT.

(Javed Anwer is Technology Editor, India Today Group Digital. Latent Space is a weekly column on tech, world, and everything in between. The name comes from the science of AI and, to reflect it, Latent Space functions in the same way: by simplifying the world of tech and giving it a context.)

(Views expressed in this opinion piece are those of the author)