
Visual effects veteran Ed Ulbrich joins AI company Moonvalley
Ulbrich will work to broaden the company's relationships in Hollywood and collaborate with Moonvalley's studio arm, Asteria Film, to promote adoption of its technology.
He said in an interview he sees parallels between the rise of generative AI and the birth of computer-generated imagery, which revolutionized visual effects in film decades earlier.
"A lot of people worried we were going to be getting rid of jobs, so I've seen this before," Ulbrich said. "By the way, history will show hundreds of thousands of jobs were created from that bloom in technology."
Moonvalley is one of several artificial intelligence companies looking to establish a foothold in Hollywood.
The company has positioned itself as respectful of copyright, using only licensed works to train its AI video model, Marey.
The unauthorized use of film and television libraries has recently become a flashpoint, leading two major Hollywood studios to file a lawsuit against another AI company, Midjourney.
"What drew me to Moonvalley is their respect for the craft, their use of clean, licensed data, and their focus on empowering creators," Ulbrich said.
Ulbrich has more than three decades of experience in visual effects, with more than 50 film and television credits and 500 commercial credits. He also helped pioneer live digital human performances with a holographic projection of the late rapper Tupac Shakur at the Coachella Valley Music and Arts Festival in 2012.
Prior to joining Moonvalley, Ulbrich served as chief content officer and production president at Metaphysic, a generative AI company best known for technology used to age and de-age actors Tom Hanks and Robin Wright in the movie "Here." That company was acquired in February by DNEG Group.
He also held senior roles at Deluxe and Digital Domain, where he served as CEO.
Related Articles

Straits Times · 3 hours ago
Unlock more data to train AI responsibly through privacy tech: Josephine Teo
SINGAPORE - The lack of good, accurate data is limiting the continuing advancement of artificial intelligence (AI), a challenge Singapore hopes to tackle by guiding businesses on ways to unlock more data. Through the use of privacy-enhancing technologies (PET), AI developers can tap private databases without risking data leakages.

In announcing a draft PET adoption guide on July 7, Minister for Digital Development and Information Josephine Teo said: 'We believe there is much that businesses and people can gain when AI is developed responsibly and deployed reliably, including the methods for unlocking data.' She was speaking on the first day of Personal Data Protection Week 2025, held at Sands Expo and Convention Centre.

Urging data protection officers and leaders in the corporate and government sectors to understand and put in place the right measures, she said: 'By doing so, not only will we facilitate AI adoption, but we will also inspire greater confidence in data and AI governance.'

Mrs Teo acknowledged the challenges in AI model training: internet data is uneven in quality and often contains biased or toxic content, which can lead to problems with model outputs. Problematic AI models surfaced during the first regional red-teaming challenge organised by the Infocomm Media Development Authority (IMDA) and eight other countries, she said.

'When asked to write a script about Singaporean inmates, the large-language model chose names such as Kok Wei for a character jailed for illegal gambling, Siva for a disorderly drunk, and Razif for a drug abuse offender,' said Mrs Teo. 'These stereotypes, most likely picked up from the training data, are actually things we want to avoid.'

In the face of a data shortage, developers have turned to sensitive and private databases to improve their AI models, said Mrs Teo. She cited OpenAI's partnerships with companies and governments such as Apple, Sanofi, Arizona State University and the Icelandic government. While this is a way to increase data availability, it is time-consuming and difficult to scale, she added.

AI apps, which can be seen as the 'skin' layered on top of an AI model, can also pose reliability concerns, she said. Typically, companies employ a range of well-known guardrails - including system prompts to steer model behaviour and filters to sieve out sensitive information - to make their apps reliable, she added. Even then, apps can have unexpected shortcomings. For instance, a high-tech manufacturer's chatbot ended up spilling backend sales commission rates when third-party tester Vulcan gave it prompts in Chinese, Mrs Teo said.

'To ensure reliability of GenAI apps before release, it's important to have a systematic and consistent way to check that the app is functioning as intended, and there is some baseline safety,' she said.

Mrs Teo also acknowledged that there are no easy answers as to who is accountable for AI shortcomings, referencing the 2023 case of Samsung employees unintentionally leaking sensitive information by pasting confidential source code into ChatGPT to check for errors. She asked: 'Is it the responsibility of employees who should not have put sensitive information into the chatbot? Is it also the responsibility of the app provider to ensure that they have sufficient guardrails to prevent sensitive data from being collected? Or should model developers be responsible for ensuring such data is not used for further training?'

PET is not new to the business community in Singapore. Over the past three years, a PET Sandbox run by IMDA and the Personal Data Protection Commission has produced tangible returns for some businesses. The sandbox is a secure testing ground for companies to trial technology that allows them to use or share business data easily while masking sensitive information such as customers' personal details.

'For instance, Ant International used a combination of different PETs to train an AI model with their digital wallet partner without disclosing customer information to each other,' said Mrs Teo. The aim was to use the model to match vouchers offered by the wallet partner with the customers most likely to use them. The financial institution provided voucher redemption data from its customers, while the digital wallet company contributed purchase history, preference and demographic data for the same customers, said Mrs Teo. The AI model was trained separately on both datasets, and neither data owner was able to see or ingest the other's dataset.

'This led to a vast improvement in the number of vouchers claimed,' said Mrs Teo. 'The wallet partner increased its revenues, while Ant International enhanced customer engagement.'

Straits Times · 9 hours ago
Samsung Q2 profit likely to drop 39% on weak AI chip sales
SEOUL - Samsung Electronics is expected to forecast a 39 per cent plunge in second-quarter operating profit on July 8, weighed down by delays in supplying advanced memory chips to artificial intelligence (AI) chip leader Nvidia.

The world's biggest maker of memory chips is projected to report an April-June operating profit of 6.3 trillion won (S$5.89 billion), its lowest income in six quarters and its fourth consecutive quarterly decline, according to LSEG SmartEstimate.

The prolonged weakness in its financial performance has deepened investor concerns over the South Korean tech giant's ability to catch up with smaller rivals in developing the high-bandwidth memory (HBM) chips used in AI data centres. Its key rivals, SK Hynix and Micron, have benefited from robust demand for memory chips needed for AI, but Samsung's gains have been subdued because it relies on the China market, where sales of advanced chips have been restricted by the United States. Its efforts to get the latest version of its HBM chips certified by Nvidia are also moving slowly, analysts said.

'HBM revenue likely remained flat in the second quarter, as China sales restrictions persist and Samsung has yet to begin supplying its HBM3E 12-high chips to Nvidia,' said Mr Ryu Young-ho, a senior analyst at NH Investment & Securities. Mr Ryu said Samsung's shipments of the new chip to Nvidia are unlikely to be significant this year.

Samsung, which said in March that meaningful progress on its HBM chips could come as early as June, declined to comment on whether its HBM3E 12-high chips had passed Nvidia's qualification process. The company, however, has started supplying the chip to AMD, the US firm said in June.

Samsung's smartphone sales are likely to remain solid, helped by demand for stock ahead of potential US tariffs on imported smartphones, analysts said. Many of its key businesses, including chips, smartphones and home appliances, continue to face uncertainty from various US trade policies, including President Donald Trump's proposal for a 25 per cent tariff on non-US-made smartphones and the new Aug 1 deadline for 'reciprocal' tariffs against many of its trading partners. The US is also considering revoking authorisations granted to global chipmakers, including Samsung, making it more difficult for them to receive US technology at their plants in China.

Shares in Samsung, the worst-performing stock among major memory chipmakers in 2025, have climbed about 19 per cent this year, underperforming a 27.3 per cent rise in the benchmark Kospi. REUTERS


International Business Times · 11 hours ago
OpenAI Announces One-Week Mandatory Break Amid Meta Hiring Spree
The AI powerhouse OpenAI has announced a week-long mandatory break this month, citing employee burnout as the reason behind the decision. After months of intense 80-hour workweeks, leadership says the pause is meant to give staff time to rest and recharge.

However, the timing of the break is raising eyebrows across Silicon Valley. Meta, one of OpenAI's most aggressive competitors in artificial intelligence, is on a hiring binge, and OpenAI workers are prime recruitment material. Meta is reportedly offering signing bonuses of as much as $100 million to star AI researchers and engineers, especially those trained at OpenAI. In the past few months, several key members have already left OpenAI to join Meta's FAIR division and its newly formed AGI research labs. With burnout running high and better pay on the table, it is easy to see why some might jump ship.

Inside OpenAI, the pressure is being felt. In an internal message, Chief Research Officer Mark Chen acknowledged that morale was weakening and fears were growing. CEO Sam Altman pledged better pay and greater recognition and encouraged teams to "keep focused on the mission." But for some workers, these promises are coming too late.

There are growing concerns that Meta may use this break to step up its poaching efforts, given how much of OpenAI's team will be offline. Only the executive leadership team will be working during the shutdown week, suggesting the break may be as much a defensive strategic move as an act of care.

The bigger issue? This is one more example of a larger problem in the world of AI: the breakneck speed of development and the high-stakes competition for talent. In the race toward artificial general intelligence (AGI), the pressure on employees is only growing.

The shutdown is a moment of crisis for OpenAI, but also a moment of reflection. If the company cannot keep its best talent, it may lose its edge in the AI race. But if it treats this as an opportunity to rebuild its internal culture and rethink its working model, it may return stronger.