
Question
Ask Adrian: My generic printer ink no longer works – will I have to buy ridiculously priced official cartridges?
— Alan Walsh
Answer
It's a well-worn reality of the consumer printer industry that ink often costs more than the printer itself: an entry-level, wifi-connected HP or Canon printer costs from €50, but a replacement ink cartridge can cost €75 or more.
In general, generic ink companies claim compatibility, but in my experience it's hit and miss. Printer manufacturers sometimes issue firmware updates that add extra measures against third-party inks, while the absence of the original manufacturer's sensors on the cartridges themselves usually triggers warnings.
Because printers and printer ink are no longer regarded as core technology, this practice hasn't attracted the kind of EU competition investigation that similar restrictions might trigger in the digital space.
Question
We are thinking of switching from smartphones to 'dumb' phones. The main concern is WhatsApp. As a family, we recognise that too much time is spent scrolling and time for studying is being affected. The solution has been to leave the phone at home but then there is no way of contacting each other for lifts. Do you have any recommendations for suitable phones?
— Name with editor
Answer
Some of Nokia's button phones (the Nokia 800 Tough, for example) come pre-loaded with basic versions of WhatsApp and Facebook, but no TikTok, Snapchat, Instagram or YouTube, which can be the main drains on your time.

Related Articles


Irish Examiner
Colman Noctor: New, strict online age verification will keep Ireland's children safer
On July 21, there was a landmark shift in Ireland. Part B of the online safety code, framed by Coimisiún na Meán, was introduced. The code requires video-sharing platforms — such as YouTube, Facebook, TikTok, Instagram, and X — to enforce age-verification procedures before users can view adult or harmful content. While part A covered general harmful content and took effect in November 2024, part B has an additional age-assurance requirement that safeguards younger people. This means Ireland has moved beyond self-declaration, which often involved simply clicking 'Yes, I'm over 18' to access adult content. These selected platforms must now adopt real age-verification methods to safeguard young and vulnerable users.

Practically, when your child tries to access adult content, they might be asked to take a short, live selfie video or connect their account to an age-verifying app. If they fail the age-verification test, some content will be locked behind a barrier, and there will be no easy workaround. That's the intent.

So, what does this mean for parents? As a result of part B, several video platforms are now required to block access to pornography, extreme violence, self-harm content, eating-disorder promotion, cyberbullying, or hate speech. Unless effective age verification confirms the user is over 18, they will be unable to access the content. So, self-declaration is no longer sufficient. These platforms must also have visible and easy-to-use parental controls, such as time limits or restrictions on who can post or view videos on a child's account, along with clear mechanisms to report harmful content and established procedures for resolving complaints.

Crucially, these platforms are now subject to enforcement and can be fined up to €20m, or 10% of annual turnover — whichever is greater — for infringement of these directives.
For parents, this signifies a shift from self-regulation and parental vigilance towards holding tech firms legally accountable — assigning some responsibility to platforms for keeping children safe. CyberSafe Kids describes this as a crucial milestone of online platforms accepting legal responsibility.

However, this directive is not the panacea that we might assume it to be. A major limitation of this legislation is that it only applies to 10 sites headquartered in Ireland, including Facebook, Instagram, X, YouTube, Udemy, TikTok, LinkedIn, Pinterest, Tumblr, and Reddit. Unfortunately, it does not cover non-video-sharing platforms; therefore, significant problem areas — such as online gaming, pornography sites, and notably Snapchat — are unaffected. Another shortcoming is that the directive does not specify particular technologies for verifying age. As a result, various platforms suggest solutions that differ in their robustness — ranging from 'live selfies' or facial age-estimation AI to document upload or digital ID tokens.

Like any safeguards or identifier technology, there have been criticisms from certain groups, including European Digital Rights (EDRi), which warns that many age-verification systems pose serious privacy and surveillance risks, especially when biometrics or identity documents are involved. They call on regulators to require zero-knowledge proofs, data minimisation, and alternatives for those without a formal ID.

Given these changes, parents should talk to their child about the new verification steps, explain how they might come across them on TikTok, Instagram, or YouTube, and make sure they never share sensitive documents or photos without understanding the process. Parents should familiarise themselves with platform settings, including parental controls, reporting tools, privacy defaults, and content ratings. Coimisiún na Meán has mandated these platforms to highlight these features, so we should take advantage.
Parents and schools must continue to promote digital literacy, so children know why some content is age-restricted and how recommendation algorithms can still promote harmful content, even behind age-restricted screens. Parents are encouraged to report any violations of these new directives. If your child sees harmful content or has bypassed the verification tools, parents should first contact the platform to report these issues and, if unresolved, contact Coimisiún na Meán.

How does Ireland's recent approach compare internationally? The UK's Children's Code requires online services to use strong privacy settings for children and minimise data collection, but it does not mandate strict age verification. The Australian federal government, through the Online Safety Amendment (approved in late 2024), plans to ban under-16s from social media, enforcing this via age verification and imposing heavy fines on platforms. It is one of the stricter regimes globally, alongside Singapore, whose regulators require app stores to screen users' ages, blocking those under 18 from downloading adult apps and under-12s from downloading popular social-media apps.

Although not as strict as Australia or Singapore, Ireland is among the first EU countries to implement legally binding age-verification requirements, specifically for video-sharing platforms, with plans for robust enforcement and penalties. It aligns with the EU Digital Services Act by requiring age verification in service design rather than relying on self-regulation. This represents a step in the right direction, but the fact that the code only applies to platforms with EU headquarters in Ireland is deeply problematic. Consequently, Snapchat, and other services based elsewhere, will be outside the scope of these rules, although they remain subject to the EU's Digital Services Act, which has less-rigorous age-verification measures.
Even with content restrictions, algorithms can still promote harmful suggestions, especially for self-harm or eating-disorder content, which is not always blocked by age gates. Part B of Ireland's Online Safety Code does not guarantee that teenagers will not encounter troubling content. Still, in collaboration with families, schools, and civic groups, it might provide a stronger framework than ever before. For parents aiming to keep children safe in today's digital world, the law is catching up, but your involvement remains important.

While this new online code makes parenting children in a technological age a shared responsibility rather than an individual one, it's also realistic to recognise that there are still limitations to ensuring children's safety online. Unfortunately, recommendation algorithms (like those promoting trending or extreme content) aren't directly regulated under this code, though they are covered by the EU's Digital Services Act. This code only applies to 10 video-sharing platforms based in Ireland, and many other apps and games remain outside its scope. Underage users can still create fake ages or use older siblings' IDs, though the new tools will make this more difficult. Not all platforms are required to verify users in the same way, so some may operate more smoothly, while others might be more frustrating or invasive.

This new online code marks a significant shift from passive gatekeeping to active protection. Let's hope it has the teeth needed to be effective by following through on the sanctions it promises. It is our first step to keeping children in Ireland safe online.

Dr Colman Noctor is a child psychotherapist

The Journal
Project launched to test new tech that sends text messages as they are typed
NEW TECHNOLOGY THAT allows people to receive text messages as they are typed has been introduced for emergency situations. 'Real-time texts' often improve the digital communication experience of people who are deaf or hard of hearing. It allows mobile phone users to see messages come through, letter by letter. Those with the technology enabled on their smartphones can use it by calling someone else's number and selecting the real-time text option. It is typically used by people who are deaf when contacting emergency services.

The introduction is one of many measures included in EU directives that seek to make the use of technology more accessible for everyone, including those with additional needs. It has been in place in other countries for a number of years. Vodafone has announced the technology today, which will allow its customers to communicate with 999 operators, through real-time texts, on Apple and Android devices. Communications minister Patrick O'Donovan and CEO of national deaf charity Chime, Mark Byrne, have welcomed the introduction of the service.


Irish Examiner
'High risk' of consumers finding illegal products on Temu, says European Commission
Temu is not doing enough to assess the risks of illegal products being sold online and could be in breach of a new digital services law, the European Commission said. The commission said on Monday that there was a "high risk" of consumers in the EU encountering illegal products on the e-commerce giant's platform. Specifically, analysis of a mystery shopping exercise conducted by the commission found that consumers shopping on Temu were very likely to find non-compliant products, including baby toys and small electronics.

The statement is part of an investigation into the e-commerce giant under the commission's Digital Services Act (DSA), a new piece of legislation governing online content in the European Union. It forces companies that run online platforms such as e-commerce websites to assess how likely consumers are to be exposed to dangerous or illegal products, and to work to lessen the risk. The commission said that, according to its analysis, a risk assessment carried out by Temu, which is owned by PDD Holdings, in October 2024 was "inaccurate" and relied "on general industry information rather than on specific details about its own marketplace".

Henna Virkkunen, executive vice-president for 'tech sovereignty, security and democracy', said: "We shop online because we trust that products sold in our Single Market are safe and comply with our rules. "In our preliminary view, Temu is far from assessing risks for its users at the standards required by the Digital Services Act. "Consumers' safety online is not negotiable in the EU - our laws, including the Digital Services Act, are the foundation for a better protection online and a safer and fairer digital Single Market for all Europeans."

The company could face a fine of up to 6% of its annual worldwide turnover if the commission ultimately decides its risk assessment does not meet the company's obligations under the DSA.
The commission said officials would also continue investigating the company over other suspected breaches of the DSA, such as using addictive design features and a lack of transparency on its algorithms.

The EU is trying to counter what it sees as a glut of cheap and potentially unsafe products from China flooding the single market. Officials also sent a formal warning to Shein in May, saying the company's sales tactics fell foul of EU consumer protection law. Shein said it was engaging with the commission to address concerns.