Fed up with AI slop? Here's how DuckDuckGo can help

If you've had enough of AI-generated images filling up your search results, then the DuckDuckGo search engine is here to help.
The Pennsylvania-based company recently announced an easy way to filter out AI-generated images from search results on its privacy-focused search engine.
To try it, run a search on DuckDuckGo and head to the Images tab. You'll see a new drop-down option that says 'AI images: show.' Open it, select 'AI images: hide,' and voilà, your page of images will appear slop-free!
Alternatively — and this is a neat touch — if you want the feature auto-enabled on DuckDuckGo's search engine, all you need to do is bookmark noai.duckduckgo.com. This page also hides DuckDuckGo's AI-assisted summaries and AI chat icons.
'Our philosophy about AI features is "private, useful, and optional",' DuckDuckGo said in a post on X announcing the new feature. 'Our goal is to help you find what you're looking for. You should decide for yourself how much AI you want in your life — or if you want any at all.'
The company said that the new filter works by using community-made lists that identify known sources of AI-generated images. It checks each image's source against these lists and hides any result flagged as AI-generated from its search results.
It added that while it won't catch 100% of AI-generated results, 'it will greatly reduce the number of AI-generated images you see.'
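DuckDuckGo hasn't published the filter's internals, but the basic idea, checking each result's source against a community-maintained blocklist of known AI-image domains, can be sketched in a few lines of Python. The domain names, data layout, and helper functions below are illustrative assumptions, not DuckDuckGo's actual code or lists.

from urllib.parse import urlparse

# Illustrative stand-in for the community-made blocklists of known
# AI-image sources that DuckDuckGo says it relies on. These domains are
# hypothetical examples, not entries from the real lists.
AI_IMAGE_SOURCES = {
    "example-ai-art-host.com",
    "generated-images.example.net",
}

def is_ai_image_source(image_url: str) -> bool:
    """Return True if the image's host matches a blocklisted AI-image domain."""
    host = urlparse(image_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in AI_IMAGE_SOURCES)

def filter_results(results: list[dict], hide_ai: bool = True) -> list[dict]:
    """Drop results from blocklisted sources when 'AI images: hide' is selected."""
    if not hide_ai:
        return results
    return [r for r in results if not is_ai_image_source(r["source_url"])]

# With the sample data below, only the first result survives the filter.
sample = [
    {"title": "Photo of a duck", "source_url": "https://photos.example.org/duck.jpg"},
    {"title": "AI duck render", "source_url": "https://example-ai-art-host.com/duck.png"},
]
print(filter_results(sample))

As the company notes, a blocklist approach like this only catches images from known sources, which is why it reduces rather than eliminates AI-generated results.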
The new filter is a notable effort by the company to preserve the integrity and usability of its search engine as the internet becomes increasingly filled with content created by AI tools.
It's great that DuckDuckGo is listening to user gripes about the proliferation of synthetic images, with the new feature sure to prove popular among folks keen for real images in their results.
Now, if only the social media sites would follow suit …

Related Articles

Humans beat AI at top maths contest

News24

Humans beat generative AI models made by Google and OpenAI at a top international mathematics competition, despite the programmes reaching gold-level scores for the first time. Neither model scored full marks - unlike five young people at the International Mathematical Olympiad (IMO), a prestigious annual competition where participants must be under 20 years old.

Google said Monday that an advanced version of its Gemini chatbot had solved five out of the six maths problems set at the IMO, held in Australia's Queensland this month. "We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points - a gold medal score," the US tech giant cited IMO president Gregor Dolinar as saying. "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow."

Around 10 percent of human contestants won gold-level medals, and five received perfect scores of 42 points.

US ChatGPT maker OpenAI said that its experimental reasoning model had scored a gold-level 35 points on the test. The result "achieved a longstanding grand challenge in AI" at "the world's most prestigious math competition", OpenAI researcher Alexander Wei wrote on social media. "We evaluated our models on the 2025 IMO problems under the same rules as human contestants," he said. "For each problem, three former IMO medalists independently graded the model's submitted proof."

Google achieved a silver-medal score at last year's IMO in the British city of Bath, solving four of the six problems. That took two to three days of computation - far longer than this year, when its Gemini model solved the problems within the 4.5-hour time limit, it said.

The IMO said tech companies had "privately tested closed-source AI models on this year's problems", the same ones faced by 641 competing students from 112 countries. "It is very exciting to see progress in the mathematical capabilities of AI models," said IMO president Dolinar. Contest organisers could not verify how much computing power had been used by the AI models or whether there had been human involvement, he cautioned.

Streamline simulation: The benefits over real-world testing

Yahoo

Simulation technology is increasingly being utilised within the automotive space, saving time and money and allowing real-world scenarios to be repeated in simulation again and again. Advanced Micro Devices (AMD), a company specialising in high-performance and adaptive computing solutions, has recently adopted rFpro's simulation platform to develop automated driving technologies. The platform, named AV elevate, enables AMD to reduce its dependency on real-world data collection and testing, cutting development time and cost. We spoke with Matt Daley, director at rFpro, to learn more about the use of simulation technology and to highlight its importance within the automotive industry.

Just Auto (JA): Could you discuss AMD's use of your company's software and what the goals are?

Matt Daley (MD): AMD specifically came to see us at one of our industry events. We were showing off our autonomous sensors and how those integrate into our very high-fidelity 3D worlds, doing this on a very high-performance simulation system. Their automotive leadership team was there and was impressed with what we were doing. It very quickly led to connecting with our American team. We have a base over in Michigan, with a few developers and a sales team, so it was easy for the AMD leadership team to connect with them.

We started to explore how the simulation platforms can be set up quickly and efficiently in order to replace real-world testing, which comes with limitations. They're really keen to show that their innovations, such as parking system developments and their 3D surround view of the car, can be demonstrated without having to rely on real-world tests. It avoids finding a place to set up real-world tests, and they can do it in their lab. More importantly, they can do it quickly and iterate their designs rapidly.

For someone like AMD, they are entering this as a platform supplier. They are not in business to sell you a camera or one part. It's important for them that the entire ADAS and autonomous stack, from sensing through to perception, control, action and all of those phases, is available in the simulation. They also want the flexibility to do both, real-time and simulation, so that they can bring in their physical hardware. You have that whole flexibility to test any part of this autonomous stack. Is it the sensors you're testing? Your perception system in the middle? Your control system that's guiding the car? Is it reversing or moving into the space? What does that feel like to the passenger inside? That's a massive attraction of rFpro: we can use a single virtual world and a single simulation platform like AV elevate, and let our customers explore all of them together or each of them independently. I think AMD were just really impressed with its flexibility, and with the speed with which they went from seeing the solution, trying it out, to proof of concept and then having a full demonstration ready to take to Las Vegas (in January). This came about in just six months, which in the automotive industry is unheard of.

JA: As technology develops within the automotive industry, has that posed any challenges for rFpro and its capabilities?

MD: We only launched AV elevate, a dedicated product, last year. It was developed over seven years, which illustrates the time and investment required for something like this.
There were some big challenges in separating it from classical driver-in-the-loop simulation. In driver-in-the-loop simulation, you obviously have to have everything in real time, because things won't wait: you as a human react exactly to what you see in front of you. For the sensing industry, we actually need to synchronise everything and ensure that all of the software stays in sync, because some of the testing needed isn't just real-time based. One of the big challenges we had initially was having both our real-time product for humans and hardware, and what we call our synchro step, our synchronous product for software-in-the-loop testing. (A minimal code sketch of the two modes appears below.)

So the first thing we realised was that we need a common virtual world and a common set of interfaces for people to connect, but also that we need to allow flexible deployment: are you operating it with humans in real time, or are you operating it in high fidelity, in synchronous mode? You have different challenges in each of those, and we have had to adapt and build new technologies into our digital worlds so that they can be viewed by different types of sensors and can be used for training and producing high-quality training data.

We've had challenges in our scenarios and our interfacing. We've had to think not just about a driver in a seat with a steering wheel, but about how we bring lots and lots of actors into the scene at the same time, in a flexible way for different systems. Another massive area is sensor models: really changing the way that the simulation looks at the world, not just through human eyes but through the electronic eye's point of view. Is it a camera? Is it a LiDAR? Is it a radar? Then building individual simulation results based on the physics of those sensors.

JA: How important is simulation technology for the industry?

MD: We've been creating rFpro products for the autonomous industry for many years. When we started, we were built around a highly successful driver-in-the-loop simulation system. So it became: how do we adapt this for machine vision, rather than just human vision? The whole philosophy is to reuse industry-leading simulation and all the investments we'd made, but apply it to a very important, safety-critical area like ADAS and autonomous driving. We had been in a very good position for a long time in terms of having a good base and a lot of customers that are very used to using simulation and to how it improves their development process. You no longer have enough time or money to drive everything in the real world, so it's essential that you're able to adopt simulation into your development process to really make that move forward.

I often talk about it as how you make a movie. You need a whole set of actors and locations that you're working in; that's the digital content, and we've got to build very specific things around that. You need a range of flexible scenarios, so there are opportunities to move the actors around, and of course the script that you're going to give them. Then you need all of the filming equipment, so you need all of these sensor models that are very specific to what you need to do. It's a lot of moving parts to bring together.
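To make Daley's distinction concrete, here is a minimal sketch of the difference between real-time and lock-step ("synchro step") simulation loops. The Component class, function names, and loop structure are assumptions for illustration, not rFpro's AV elevate API.

import time

class Component:
    """Stand-in for anything stepped by the simulator: a sensor model, a render pass, a perception stack."""
    def __init__(self, name: str):
        self.name = name

    def step(self, sim_time: float) -> None:
        # Placeholder for real work (rendering a frame, running a sensor model, etc.).
        print(f"{self.name} advanced to t={sim_time:.2f}s")

def run_synchronous(components, dt: float = 0.1, steps: int = 5) -> None:
    """Lock-step mode: every component completes the same tick before any moves on,
    so slow software-in-the-loop sensor models never fall out of sync."""
    sim_time = 0.0
    for _ in range(steps):
        sim_time += dt
        for c in components:  # acts as a barrier: all advance to the same sim_time
            c.step(sim_time)

def run_realtime(components, dt: float = 0.1, steps: int = 5) -> None:
    """Real-time mode: the wall clock does not wait, which suits a human driver
    in the loop; anything too slow for the tick simply falls behind."""
    sim_time = 0.0
    for _ in range(steps):
        start = time.monotonic()
        sim_time += dt
        for c in components:
            c.step(sim_time)
        # Sleep off whatever wall-clock time remains in this tick.
        time.sleep(max(0.0, dt - (time.monotonic() - start)))

run_synchronous([Component("camera model"), Component("perception stack")])

In the lock-step loop, the inner iteration acts as a barrier: no component sees a new tick until every component has finished the current one, which is what keeps software-in-the-loop testing consistent even when it runs slower than real time.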
JA: What goals has the company set out for 2025?

MD: In terms of AV elevate, it is still fairly new to the market; we've just got our first customers up and using it. I think the goal is to show more and more use cases where this single AV elevate simulation platform can be applied. We will continue to work with AMD, not only to promote what they're doing in this use case but to show it off with other people, maybe on the training data side.

This is very much on the testing side. We always talk about the different applications in three main buckets: tuning systems, training autonomous systems, and then testing the full stack. I think this is very much the tuning and the training: they've got a system that they're adapting a little, and then they're testing how their driving algorithms are working.

For us, there's a massive part in training. We've been targeting the use of synthetic data to train perception systems for a long time, and we've got several different projects going on there. If we can find some big successes this year that allow us to publicly talk about the verified success of using synthetic training data for the training part of the perception system as well, then that's a major goal for us as a company.

"Streamline simulation: The benefits over real-world testing" was originally created and published by Just Auto, a GlobalData owned brand.

Mastering AI is career insurance. Upskill now or fall behind

Yahoo

As we've seen with other disruptive technologies over the decades, generative AI's rapid adoption has sparked a very human concern among many in the white-collar workforce: job displacement. The Pew Research Center published findings from an October 2024 survey that found more than half (52%) of U.S. workers are 'worried' about the future impact of generative AI on their careers. Similarly, a recently published PYMNTS Intelligence Report, based on survey results collected a month later, revealed that as many as 54% of U.S. workers believed genAI posed a 'significant risk' of widespread layoffs.

But are these fears justified? PYMNTS, which publishes news and insights on the financial sector, found 82% of those who use genAI at least weekly reported that it increases their productivity. Other surveys have found similar results. A report published in April by researchers from Stanford, George Mason and Clemson Universities found that workers using AI claim a three-fold productivity gain, estimating tasks that would normally (i.e., manually) take about 90 minutes to complete can be finished in 30 minutes with the help of genAI. In other words, perhaps AI tools will augment rather than replace staff to provide the most efficient outcomes for employees — and perhaps yield more profitable results for employers.

Collaboration, not condemnation

Billed as 'your AI companion,' Microsoft's Copilot is one of the biggest players in this space, and the benefits of embracing AI in the workplace are highlighted in the company's latest Work Trend Index. A recent study showed 'that an individual with AI now outperforms a team without it,' affirms Colette Stallbaumer, WorkLab Cofounder and General Manager of Copilot at Microsoft. 'But a team using AI outperforms them all.'

'It's all about this combination of sort of AI fluency and human skills, and I really believe the future belongs to people who can partner with AI,' adds Stallbaumer.

Asked why Copilot, Stallbaumer says it's integrated with 'all the tools that millions of people already use every day at work,' such as the Microsoft 365 suite of productivity apps. 'Copilot goes with you where you work, it understands your organizational data, it's secure, and while you're in control of it all, it's easy for employees to create and build "agents" and set them to work on their behalf,' she adds.

AI agents are programs that can perform tasks and achieve goals for you, such as a smart personal assistant that can interact with your customers, like a chatbot that can learn and adapt its behavior over time. Stallbaumer says the new phrase 'agent boss' refers to a human manager who uses or oversees the work of AI agents. One example could be a sales professional who leverages one agent to draft a request for proposal (RFP) and another agent to pull high-potential leads from their CRM data, and then brings the two together. 'Interestingly, our data showed that employees at companies with human-agent teams are actually more satisfied with their work, and so there's something really interesting happening when everyone is empowered with AI.'
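To make the 'agent boss' idea described above concrete, here is a minimal sketch of a human delegating to two agents and combining their output. The agent functions, parameters, and sample data are hypothetical placeholders for illustration, not Microsoft Copilot's actual agents or APIs.

# Hypothetical agent functions for illustration only.

def rfp_agent(product_brief: str) -> str:
    """Stand-in for an agent that drafts a request for proposal from a brief."""
    return f"Draft RFP covering: {product_brief}"

def crm_agent(leads: list[dict], min_score: float = 0.8) -> list[dict]:
    """Stand-in for an agent that pulls high-potential leads from CRM records."""
    return [lead for lead in leads if lead["score"] >= min_score]

def agent_boss_workflow(product_brief: str, leads: list[dict]) -> dict:
    """The human 'agent boss' delegates to both agents, then combines the results."""
    return {
        "rfp_draft": rfp_agent(product_brief),
        "target_leads": crm_agent(leads),
    }

print(agent_boss_workflow(
    "Cloud analytics suite",
    [{"name": "Acme", "score": 0.9}, {"name": "Globex", "score": 0.4}],
))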
Upskilling and new AI-related jobs

While some workers may be losing sleep over the threat of genAI coming after their jobs (and it didn't help that Amazon CEO Andy Jassy recently conceded that AI will likely reshape its 1.5-million-strong workforce in coming years), employees could in fact learn to master genAI as a kind of insurance policy. 'Our data showed that 47% of business leaders say that their top workforce priority is upskilling existing employees over the next 12 to 18 months,' says Stallbaumer.

Carolina Milanesi, president and principal analyst at Creative Strategies, a Silicon Valley–based technology research firm, agrees. 'It's true that AI is going to impact every single job, one way or another — it will take some jobs, but also create a lot of jobs that were not possible before — and existing workers should be learning AI skills, too.'

Milanesi quotes Cisco president Jeetu Patel: 'Don't be afraid of AI taking your job. Be afraid of someone who knows how to use AI well from taking your job.'

'People can also take advantage of AI to do menial tasks that they don't want to do to free up their time and energy for more interesting parts of the jobs,' adds Milanesi.

Microsoft is calling 2025 'the year the "frontier firm" is born,' defined by the Work Trend Index as 'a company powered by intelligence on tap, human-agent teams, and a new role for everyone: agent boss.'

And 'remember it's early innings right now,' says Stallbaumer. 'Only 1% of global leaders say their AI strategy is fully implemented, and so as we start to see the emergence of the "frontier firm" we will see some exciting things ahead.'

'We will have to learn how to leverage and interact with AI, especially in the era of agentic AI,' adds Milanesi, 'and take advantage of this powerful technology for our benefit.'

This article originally appeared on USA TODAY: Can AI empower staff without replacing human work?
