
Instagram starts testing repost feature: Here's what it means for users
Instagram is reportedly testing a new X-like feature that will enable users to share content directly to their main feed. According to a report by TechCrunch, Instagram is working on a 'repost' feature that will allow users to post content directly to their main feed. For years, Instagram users have relied on workarounds, such as third-party apps or sharing posts to their ephemeral Stories, to amplify content from other accounts. The introduction of a native reposting tool would streamline this process, making it much easier for users to share public posts and Reels directly with their followers.
How Instagram's upcoming repost feature will work
The new feature enables users to reshare both their own posts and those of others, making it easier to amplify content without relying on third-party apps or workarounds. While Instagram already allows users to share posts to Stories or via direct messages, this update brings reposting to the forefront of the app's core experience.
While the feature is currently being trialed with a select group of users, an official support page already provides details on its functionality. When a user reposts content, their followers "may see what you reposted in their feeds," and these reposts will also appear in a dedicated "reposts" tab on the user's profile. Importantly, users will also have the option to control whether others can repost their content.
For content creators, the official repost feature could be a significant boon, offering an improved way to gain wider reach and ensure proper attribution for their work. It addresses a common complaint: viral content is often shared without credit to the original poster.
Related Articles


Time of India · 17 hours ago
The ‘Big' reason why you must carefully read Facebook and Instagram's terms and conditions
After years of training its generative AI models on billions of public images from Facebook and Instagram, Meta is reportedly seeking access to billions of photos users haven't publicly uploaded, sparking fresh privacy debates. While the social media giant explicitly states it is not currently training its AI models on these private photos, the company has declined to clarify whether it might do so in the future or what rights it will hold over these images, a report has said.

The new initiative, first reported by TechCrunch on Friday (June 27), sees Facebook users encountering pop-up messages when attempting to post to Stories. These prompts ask users to opt into "cloud processing," which would allow Facebook to "select media from your camera roll and upload it to our cloud on a regular basis." The stated purpose is to generate "ideas like collages, recaps, AI restyling or themes like birthdays or graduations."

The report notes that by agreeing to this feature, users also consent to Meta's AI terms, which permit the analysis of "media and facial features" from these unpublished photos, alongside metadata such as creation dates and the presence of other people or objects. Users also grant Meta the right to "retain and use" this personal information.

Meta used public, not private, data to train its generative AI models

According to The Verge, Meta recently acknowledged that it used data from all public content published on Facebook and Instagram since 2007 to train its generative AI models. Although the company stated it only used public posts from adult users over 18, it has remained vague about the precise definition of 'public' and what constituted an 'adult user' in 2007.
Meta public affairs manager Ryan Daniels reiterated to the publication that the new 'cloud processing' feature is not currently used to train the company's AI models. "[The story by the publication] implies we are currently training our AI models with these photos, which we aren't. This test doesn't use people's photos to improve or train our AI models," Maria Cubeta, a Meta comms manager, was quoted as telling The Verge. Cubeta also described the feature as 'very early,' innocuous, and entirely opt-in, stating, "Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test."

Furthermore, while Meta said that opting in grants permission to retrieve only 30 days' worth of camera roll data at a time, Meta's own terms suggest some data retention may be longer. 'Camera roll suggestions based on themes, such as pets, weddings and graduations, may include media that is older than 30 days,' the terms say.


India Today · 2 days ago
Facebook users beware, Meta AI can scan all your phone photos anytime if you are not careful
Meta has consistently found itself at the centre of privacy debates. There's little doubt that the company has been using our data, for instance our publicly posted photos across Facebook and Instagram, to train its AI models (more commonly known as Meta AI). But now, it seems Meta is taking things to another level. Recent findings suggest it wants full access to your phone's camera roll, meaning even photos you haven't shared on Facebook (or Instagram).

As reported by TechCrunch, some Facebook users have recently come across a curious pop-up while attempting to upload a Story. The notification invites them to opt into a feature called 'cloud processing.' On the surface, it sounds fair and safe: Facebook says this setting will allow it to automatically scan your phone's camera roll and upload images to Meta's cloud 'on a regular basis.' In return, the company promises to offer 'creative ideas' such as photo collages, event recaps, AI-generated filters, and themed suggestions for birthdays, graduations, and other occasions.

But wait. When you agree to its terms of use, you're also giving Meta the go-ahead to analyse the content of your unpublished and presumably private photos on an ongoing basis, as Meta AI looks at details such as facial features, objects in the frame, and even metadata like the date and location the photos were taken. There is little doubt that the idea is to make AI more helpful for you, the user, since AI needs all the data it can get to make sense of the real world and respond accordingly to the questions and prompts you put to it. And Meta, for its part, says this is an opt-in feature, which is to say users can disable it as and when they want.
That's fair, but given that this is user data we're talking about, and given Facebook's history, some users (and privacy advocates) would be wary. The tech giant had earlier admitted it had scraped all public content uploaded by adults on Facebook and Instagram since 2007 to help train its generative AI models. However, Meta hasn't clearly defined what 'public' means or what age qualifies someone as an 'adult' in its dataset from 2007. That haziness leaves a lot of room for different interpretations, and even more room for concern. Moreover, its updated AI terms, active since June 23, 2024, don't mention whether these cloud-processed, unpublished photos are exempt from being used as training data.

The Verge reached out to Meta, which said it "is not currently training its AI models on those photos, but it would not answer our questions about whether it might do so in future, or what rights it will hold over your camera roll images."

There is, thankfully, a way out. Facebook users can dive into their settings and disable the cloud processing feature. Once it is turned off, Meta promises it will begin deleting any unpublished images from the cloud within 30 days. Still, the very nature of this tool, pitched as a fun and helpful feature, raises questions about how users are nudged into handing over private data without fully realising the implications.

At a time when AI is reshaping how we interact with tech, companies like Meta are testing the limits of what data they can collect, analyse, and potentially monetise. This latest move blurs the lines between user assistance and data extraction. What used to be a conscious decision, posting a photo to share with the world, now risks being replaced with quiet uploads in the background and invisible AI eyes watching it all unfold. We'll see how things pan out.


Time of India · 2 days ago
What to know about online age verification laws
The Supreme Court has upheld a Texas law aimed at blocking children under 18 from seeing online pornography by requiring websites to verify the ages of all visitors. Many states have passed similar age verification laws in an attempt to restrict minors' access to adult material, but digital rights groups have raised questions about such laws' effects on free speech and whether verifying ages by accessing sensitive data could violate people's privacy.

What is the Texas law?

The law requires websites hosting pornographic material to verify the ages of users in hopes of stopping those under 18 from visiting. Adults would need to supply websites with a government-issued ID or use third-party age-verification services. The law carries fines of up to $10,000 per violation, levied against the website, which could rise to as much as $250,000 per violation involving a minor. Texas has argued that technology has improved significantly in the last 20 years, allowing online platforms to easily check users' ages with a quick picture. Those requirements are more like the ID checks at brick-and-mortar adult stores that were upheld by the Supreme Court in the 1960s, the state argues. Internet service providers, search engines and news sites are exempt from the law.

How do sites verify ages?

It's already illegal under federal law to show children pornography, though that law is rarely enforced. Various measures already exist to verify a person's age online. Someone could upload a government ID or consent to the use of facial recognition software to prove they are the age they say they are. Tech and social media companies such as Instagram parent company Meta have argued that age verification should be done by the companies that run app stores, such as Apple and Google, and not by individual apps or websites.

Can people get around verification?
Critics, such as Pornhub, have argued that age-verification laws can be easily circumvented with well-known tools such as virtual private networks (VPNs), which reroute requests to visit websites across various public networks. Questions have also been raised about enforcement, with Pornhub claiming such efforts would drive traffic to lesser-known sites that don't comply with the law and have fewer safety measures.

Who opposes such laws?

Though heralded by social conservatives, age verification laws have been condemned by adult websites that argue they're part of a larger anti-sex political agenda. They have also garnered opposition from groups that advocate for digital privacy and free speech, including the Electronic Frontier Foundation. The group has argued that it is impossible to ensure websites don't retain user data, regardless of whether age verification laws require that they delete it.

Jain, vice president of policy at the nonprofit Center for Democracy & Technology, said the court's decision on age verification "does far more than uphold an incidental burden on adults' speech. It overturns decades of precedent and has the potential to upend access to First Amendment-protected speech on the internet for everyone, children and adults alike."

"Age verification requirements still raise serious privacy and free expression concerns," Jain added. "If states are to go forward with these burdensome laws, age verification tools must be accurate and limit collection, sharing, and retention of personal information, particularly sensitive information like birthdate and biometric data."