Latest news with #generativeAI

Singapore's NTU assembles panel after students penalised over AI use, one of them merely for alphabetising citations with online tool

Malay Mail

5 hours ago

SINGAPORE, June 26 — Nanyang Technological University (NTU) will convene an appeal review panel including artificial intelligence (AI) experts after a student was accused of academic fraud for allegedly using generative AI tools. CNA reported that NTU allows students to use AI in assignments but requires them to declare usage, ensure accuracy, and cite sources. 'NTU remains committed to our goal of equipping students with the knowledge and skills to use AI technologies productively, ethically and critically,' a spokesman was quoted saying.

The university said it had met two of the three students involved to assess the grounds for appeal, though no decisions were made during the consultations. One student's appeal was accepted for review, while another's was rejected.

The student whose appeal was accepted for review had earlier shared on Reddit that she was accused of misusing AI after submitting an essay for a module on health and disease politics. An assistant professor questioned whether AI tools were used, prompting the student to submit a time-lapse video of her writing process using the Draftback browser extension. However, she was penalised for using Study Crumb, an AI-powered site, to alphabetise her citations, receiving a zero for the assignment and a 'D' for the module.

The student paid S$40 (RM139) to appeal and later demonstrated her writing process and use of the citation tool during a two-hour consultation with a faculty panel. A panel member reportedly agreed the tool was not considered generative AI and assured her that the misconduct would not appear on her permanent record.

Two other students from the same class also received zeros, including one who used Citation Machine and ChatGPT to organise citations and conduct limited background research. She said her appeal was rejected after a panel found she violated explicit instructions banning AI tools. NTU said the student had previously admitted to using generative AI in her assignment and noted that instructors may prohibit AI use for certain tasks. A briefing slide for the class stated that AI use in developing essays was prohibited, with zero marks imposed for violations.

The third student was penalised for allegedly using fake citations and initially faced a 10-mark deduction, which was later escalated to a zero. He accepted the decision and chose not to contest it further, saying he prioritised passing as he had already secured a job but feared the incident could harm his reputation.

Facebook is starting to feed its AI with private, unpublished photos

The Verge

10 hours ago

For years, Meta has trained its AI programs using the billions of public images uploaded by users onto Facebook and Instagram's servers. But apparently, Meta has decided to try training its AI on the billions of images that users haven't uploaded to those servers.

On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they'd like to opt into 'cloud processing', which would allow Facebook to 'select media from your camera roll and upload it to our cloud on a regular basis' to generate 'ideas like collages, recaps, AI restyling or themes like birthdays or graduations.' By allowing this feature, the message continues, users are agreeing to Meta's AI terms, which allow its AI to analyze the 'media and facial features' of those unpublished photos, as well as the date the photos were taken and the presence of other people or objects in them. Users further grant Meta the right to 'retain and use' that personal information.

Meta recently acknowledged that it has scraped the data from all the content published on Facebook and Instagram since 2007 to train its generative AI models. Though the company stated that it only used public posts uploaded by adult users over the age of 18, it has long been vague about exactly what 'public' entails, as well as what counted as an 'adult user' in 2007.

Unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta's current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through 'cloud processing' are exempt from being used as training data. Meta did not return TechCrunch's request for comment; The Verge has reached out for comment as well.

Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings, which, once activated, will also start removing unpublished photos from the cloud after 30 days. But the workaround, disguised as a feature, suggests a new incursion into our private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption.

And according to Reddit posts found by TechCrunch, Meta is already offering AI restyling suggestions on previously uploaded photos, even if users hadn't been aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge.

As job losses loom, Anthropic launches program to track AI's economic fallout

TechCrunch

13 hours ago

  • Business

Silicon Valley has opined on the promise of generative AI to forge new career paths and economic opportunities – like the newly coveted solo unicorn startup. Banks and analysts have touted AI's potential to boost GDP. But those gains are unlikely to be distributed equally in the face of what many expect to be widespread AI-related job loss.

Against this backdrop, Anthropic on Friday launched its Economic Futures Program, a new initiative to support research on AI's impacts on the labor market and global economy, and to develop policy proposals to prepare for the shift.

'Everybody's asking questions about what are the economic impacts [of AI], both positive and negative,' Sarah Heck, head of policy programs and partnerships at Anthropic, told TechCrunch. 'It's really important to root these conversations in evidence and not have predetermined outcomes or views on what's going to [happen].'

At least one prominent name has shared his views on the potential economic impact of AI: Anthropic CEO Dario Amodei. In May, Amodei predicted that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to as high as 20% in the next one to five years.

When asked if one of the key goals of Anthropic's Economic Futures Program was to research ways to mitigate AI-related job loss, Heck was cautious, noting that the disruptive shifts AI will bring could be 'both good and bad.' 'I think the key goal is to figure out what is actually happening,' she said. 'If there is job loss, then we should convene a collective group of thinkers to talk about mitigation. If there will be huge GDP expansion, great. We should also convene policy makers to figure out what to do with that. I don't think any of this will be a monolith.'

The program builds on Anthropic's existing Economic Index, launched in February, which open-sources aggregated, anonymized data to analyze the effects of AI on labor markets and the economy over time – data that many of its competitors lock behind corporate walls.

The program will focus on three main areas: providing grants to researchers investigating AI's effects on labor, productivity, and value creation; creating forums to develop and evaluate policy proposals to prepare for AI's economic impacts; and building datasets to track AI's economic usage and impact.

Anthropic is kicking off the program with some action items. The company has opened applications for its rapid grants of up to $50,000 for 'empirical research on AI's economic impacts,' as well as for evidence-based policy proposals to be presented at Anthropic-hosted symposia in Washington, D.C., and Europe in the fall. Anthropic is also seeking partnerships with independent research institutions and will provide partners with Claude API credits and other resources to support research.

For the grants, Heck noted that Anthropic is looking for individuals, academics, or teams that can produce high-quality data in a short period of time. 'We want to be able to complete it within six months,' she said. 'It doesn't necessarily have to be peer-reviewed.'

For the symposia, Anthropic wants policy ideas from a wide variety of backgrounds and intellectual perspectives, said Heck. She noted that policy proposals would go 'beyond labor.' 'We want to understand more about the transitions,' she said. 'How do workflows happen in new ways? How are new jobs being created that nobody ever contemplated before?…How are certain skills remaining valuable while others are not?'

Heck said Anthropic also hopes to study the effects of AI on fiscal policy. For example, what happens if there's a major shift in the way enterprises see value creation? 'We really want to open the aperture here on things that can be studied,' Heck said. 'Labor is certainly one of them, but it's a much broader swath.'

Anthropic rival OpenAI released its own Economic Blueprint in January, which focuses more on helping the public adopt AI tools, building robust AI infrastructure, and establishing 'AI economic zones' that streamline regulations to promote investment. While OpenAI's Stargate project to build data centers across the U.S. in partnership with Oracle and SoftBank would create thousands of construction jobs, OpenAI doesn't directly address AI-related job loss in its blueprint. The blueprint does, however, outline frameworks where government could play a role in supply chain training pipelines, investing in AI literacy, supporting regional training programs, and scaling public university access to compute to foster local AI-literate workforces.

Anthropic's economic impact program is part of a slow but growing shift among some tech companies to position themselves as part of the solution to the disruption they're helping to create – whether out of reputational concern, genuine altruism, or a mix of both. For instance, on Thursday, ride-hail company Lyft launched a forum to gather input from human drivers as it starts integrating robotaxis into its platform.

Report: Amazon loses one of its top AI bosses

Daily Mail

13 hours ago

  • Business

Amazon has lost one of its top AI bosses in a major blow to Jeff Bezos. Vasi Philomin - who until recently helped to oversee generative artificial intelligence development at Amazon Web Services (AWS) - has left after eight years, Reuters reported.

Philomin helped lead generative AI efforts and product strategy, and oversaw the foundation models known as Amazon Titan. Rajesh Sheth, a vice president previously overseeing Amazon Elastic Block Store, has assumed some of Philomin's responsibilities, Amazon told Reuters. In his biography, Philomin said he helped create and lead Amazon Bedrock, a hub for using multiple AI models and one of AWS's premier products in its battle for AI supremacy.

Amazon is working to bolster its reputation in AI development after rivals like OpenAI and Google have taken an early lead, particularly with consumer-focused models. OpenAI CEO Sam Altman complained that Meta's Mark Zuckerberg was attempting to poach his top talent with huge salaries as Meta also scrambles to catch up.

Amazon has invested $8 billion in AI startup Anthropic and integrated its Claude software into its own products, including a revamped version of the voice assistant Alexa that it's rolling out to customers this year. In December, Amazon introduced its Nova AI models, which provide for text, video and image generation. Earlier this year, it added to the lineup with a version called Sonic that can more readily produce natural-sounding speech.

Companies are employing creative techniques to hire top AI talent, including using sports-industry data analysis to help identify undiscovered talent, Reuters reported last month. However, as Amazon races to produce more advanced AI, it said it expects its own success will lead to fewer corporate jobs, according to a memo from CEO Andy Jassy last week. Job growth limits will be driven in particular by so-called agentic AI, which can perform tasks with minimal or even no additional input from people.

BREAKING NEWS Mystery tech rival delivers low blow to Jeff Bezos on his wedding day

Daily Mail

14 hours ago

  • Business

Amazon has lost one of its top AI bosses in a major blow to Jeff Bezos on his wedding day. Vasi Philomin - who until recently helped to oversee generative artificial intelligence development at Amazon Web Services (AWS) - has left after eight years, Reuters reported. Philomin confirmed to the outlet that he has departed for another company but did not reveal where. An Amazon spokesperson confirmed the departure.

It comes as Mark Zuckerberg is said to be personally calling top AI talent at rival companies and offering $100 million sign-on bonuses to join Meta.

Philomin helped lead generative AI efforts and product strategy, and oversaw the foundation models known as Amazon Titan. Rajesh Sheth, a vice president previously overseeing Amazon Elastic Block Store, has assumed some of Philomin's responsibilities, Amazon told Reuters. In his biography, Philomin said he helped create and lead Amazon Bedrock, a hub for using multiple AI models and one of AWS's premier products in its battle for AI supremacy.

Amazon is working to bolster its reputation in AI development after rivals like OpenAI and Google have taken an early lead, particularly with consumer-focused models. OpenAI CEO Sam Altman complained that Zuckerberg was attempting to poach his top talent with huge salaries as Meta also scrambles to catch up.

Amazon has invested $8 billion in AI startup Anthropic and integrated its Claude software into its own products, including a revamped version of the voice assistant Alexa that it's rolling out to customers this year. In December, Amazon introduced its Nova AI models, which provide for text, video and image generation. Earlier this year, it added to the lineup with a version called Sonic that can more readily produce natural-sounding speech.
