
Latest news with #techCompanies

Trump Preps Actions to Boost Power Supply for AI

Wall Street Journal

2 days ago

  • Business


The Trump administration is weighing executive orders to increase power generation and meet demand from artificial intelligence, people familiar with the matter said. The moves could include giving federal land to tech companies for the data centers needed to train AI models and expediting grid connections and permitting for more advanced power generation projects. The orders could coincide with the release of Trump's AI action plan, which is expected to outline the administration's proposal to win the AI race with China. The plan is scheduled to be released next month. Reuters earlier reported the planned energy executive orders. The Biden administration took some similar steps, putting out an executive order on siting data centers on federal lands in its final days. Trump's team hopes to go beyond that order with further-reaching directives, the people said.

What Are AI Video Generators? What to Know About Google's Veo 3, Sora and More

CNET

2 days ago

  • Entertainment


First came the rise of AI chatbots, then image generators blew up. Now, tech companies are rushing to release AI video generators. During the past year, nearly every major tech company has announced some kind of AI video model it has been cooking up. Each company has its own timeline, which can make it hard to keep up with who's done what. To save you from searching, I've run down every major AI video program and compiled my early insights from testing the programs available now. They aren't all built the same, and there are noticeable differences even across one company's AI products. For example, I've seen some of my favorite image generator features pop up in the video models, while others are noticeably absent.

AI video is a huge leap forward in a company's AI creative offerings, and it's worth keeping an eye on as generative AI becomes a bigger part of the content we create and see online. That's especially true because the tech is advancing at a time when questions about the legality and ethics of AI creative tools remain unresolved.

This is everything you need to know about the major AI video generators. This list is regularly updated with the most recent info on each generator. For more, check out the best AI image generators.

What are AI video generators?

AI video generators are one of the latest ways tech companies are using generative AI. These programs use text-to-video and image-to-video technology to let you create short video clips: You enter a short description called a prompt, or upload an image to animate, and the software creates a clip made entirely with gen AI. These AI videos tend to be between 5 and 10 seconds long, and only Google's Veo 3 has synchronized audio. Because this tech is new, errors -- called hallucinations -- are possible.
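The workflow is broadly the same across these products: You describe a shot, the service renders it asynchronously, and you download the finished clip. Here's a minimal sketch of that prompt-to-clip loop in Python. The endpoint, request fields, and response shape are hypothetical stand-ins -- each vendor exposes its own product-specific interface, usually behind a paid plan -- so treat this as an illustration of the pattern, not any one company's API.

```python
import time
import requests

# Hypothetical endpoint and key, for illustration only.
API_BASE = "https://api.example-video-vendor.com/v1"
API_KEY = "YOUR_API_KEY"


def generate_clip(prompt: str, seconds: int = 5, resolution: str = "720p") -> bytes:
    """Submit a text prompt and poll until the rendered clip is ready."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Submit the job: a short prompt plus basic shot settings
    #    (duration, resolution), mirroring the control panels these
    #    services offer.
    job = requests.post(
        f"{API_BASE}/video/generations",
        headers=headers,
        json={"prompt": prompt, "duration_seconds": seconds, "resolution": resolution},
        timeout=30,
    ).json()

    # 2. Poll for completion. Rendering even a 5-second clip takes a
    #    while, so video generation APIs tend to be asynchronous.
    while True:
        status = requests.get(
            f"{API_BASE}/video/generations/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["state"] == "succeeded":
            break
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)

    # 3. Download the finished clip.
    return requests.get(status["video_url"], timeout=60).content


if __name__ == "__main__":
    clip = generate_clip("A drone shot over a foggy coastline at sunrise")
    with open("clip.mp4", "wb") as f:
        f.write(clip)
```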
Which AI video models can I use right now?

Some examples of AI video generators you can use now are Sora by OpenAI, Veo 3 by Google and Adobe Firefly. They are all paid programs that produce decent results and let you customize your shot with control panels. Runway, an AI start-up that co-created the Stable Diffusion image generator, is another AI video option with freemium plans. Other AI start-ups like Luma, Pika and Ideogram are also available.

OpenAI's Sora

Sora joined the ChatGPT family at the end of 2024. It's a pretty user-friendly program, but it doesn't have the same conversational UI as Dall-E 3 -- you can't "chat" with Sora to make follow-up revisions. Instead, it's more reminiscent of traditional AI creative services. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In Sora, you have a panel to customize your video's dimensions, length and stylistic feel. You can enter a prompt or upload an image for Sora to animate, and you can use a few editing options to perfect your video from there. Sora videos also come automatically watermarked, designating their AI origins.

Sora is only available to paying ChatGPT users. If you're a ChatGPT Plus user ($20 per month), you'll get 50 priority generation credits per month, with videos up to 5 seconds long at 720p. Upgrading to the Pro tier ($200 per month) gets you higher monthly credits, including 500 videos created with priority/fast generation and unlimited videos with relaxed generation. Pro subscribers can also create higher-resolution videos at a max of 1080p, extend their videos up to 20 seconds and download videos without the watermark.

OpenAI's privacy policy states that it may train on your content unless you opt out. To do that in Sora, go to Settings > General and turn off Improve model for everyone. You can also exclude your videos from public explore feeds in settings.

Midjourney V1

Midjourney is one of the most popular AI image generators, and it just released its first AI video model, called V1. You can use Midjourney to create video clips between 5 and 21 seconds long in 720p resolution, through Discord or its website. Right now, video generation is paywalled, but it's one of the cheaper options at $10 per month.

Midjourney's privacy policy says it can use personal information and information included in your prompts to improve its service. If you create in Stealth mode, your AI images will be private; otherwise they will be shared in a public gallery.

Adobe Firefly

Firefly's standalone AI video generator is available to use now, on your computer and through the Firefly mobile app. If you're familiar with Firefly's AI image tools, the video generator setup will feel familiar. The left-side panel lets you customize your clip, and it has the added benefit of letting you select the kind of motion you want (zoom in and out, move right and left, etc.). You can also select the camera angle you want, such as an aerial view to mimic drone footage.

Some Creative Cloud plans include Firefly access, whether you're paying for a single program or for all of the Adobe apps. If you don't have an existing Adobe plan, you can try the Firefly Standard plan ($10 per month) to create up to 20 videos a month. If you need more generation credits, the Pro plan ($30 per month) gets you up to 70 videos a month. Both Firefly plans come with unlimited AI image generation.

Your Firefly videos will be 5 seconds long, at 1080p, with no audio. Adobe says that videos created with Firefly are commercially safe, and its AI policy states it will not train on your content. Firefly videos don't have a visible watermark, but content credentials are automatically attached to your work. Firefly models are trained on licensed and public domain content.

Runway

AI enthusiasts might recognize Runway as the start-up that co-created the popular AI image generator Stable Diffusion. You might also recognize Runway from a landmark deal it made with a major film studio last fall: Lionsgate agreed to open up its catalog -- thousands of hours of movies like The Hunger Games and John Wick and TV shows like Mad Men -- to be used to create custom AI models for the studio.

During my brief testing of the service, I was impressed with the prompt-building tools and the general ease of finding my way around. I've also used the service before as part of Canva's Magic Media app, which is convenient if you're a Canva lover. You can use Runway for free on its web app, with 125 monthly credits -- you'll use about 20 credits with each generation, so it's a pretty low limit. Upgrading ($15 per month or $144 annually) gets you 625 monthly credits, access to newer models and the ability to upscale videos to 4K and download without watermarks.
Runway's terms of service say it can train its AI on your prompts and the resulting videos but that it doesn't retain ownership over them. Its privacy policy also states that Runway may disclose your information to affiliates and business and marketing partners. The videos you make are automatically private.

What are some other AI video projects?

Notably absent from this list is Meta. The company has devoted considerable resources to developing AI, but it doesn't have a publicly available AI video generator. It teased a version of one in October 2024. Here's what we know so far.

Meta's Movie Gen

Meta's AI video model, Movie Gen, is only a research concept right now and not publicly available, with no word on when it may be. Thanks to a research paper Meta published, we know Movie Gen videos could be 1080p HD and up to 16 seconds long at 16 frames per second. The most notable thing going for Movie Gen is the possibility of synchronized audio: Meta said Movie Gen could also be used to create sound effects, ambient noise and instrumental music up to 45 seconds long. There's always a chance this feature doesn't make it to the final cut, but it would give Meta an edge. Perhaps, as with Google and YouTube, we'll see some AI-powered features pop up first on Meta's social platforms, Instagram and Facebook. (We already have a number of other AI features eating up space on our feeds.) Meta's AI models for its chatbot and image generator are trained on publicly available Facebook and Instagram content, as well as licensed data.

For more, check out our guide to writing the best AI image prompts and the best AI chatbots.

Why we're measuring AI success all wrong—and what leaders should do about it

Fast Company

6 days ago

  • Business


Here's a troubling reality check: We are currently evaluating artificial intelligence the same way we'd judge a sports car. We act like an AI model is good if it is fast and powerful. But what we really need to assess is whether it makes for a trusted and capable business partner.

The way we approach assessment matters. As AI models begin to play a part in everything from hiring decisions to medical diagnoses, our narrow focus on benchmarks and accuracy rates is creating blind spots that could undermine the very outcomes we're trying to achieve. In the long term, it is effectiveness, not efficiency, that matters.

Think about it: When you hire someone for your team, do you only look at their test scores and the speed at which they work? Of course not. You consider how they collaborate, whether they share your values, whether they can admit when they don't know something, and how they'll impact your organization's culture -- all the things that are critical to strategic success. Yet when it comes to the technology that is increasingly making decisions alongside us, we're still stuck on the digital equivalent of standardized test scores.

The Benchmark Trap

Walk into any tech company today, and you'll hear executives boasting about their latest performance metrics: "Our model achieved 94.7% accuracy!" or "We reduced token usage by 20%!" These numbers sound impressive, but they tell us almost nothing about whether these systems will actually serve human needs effectively. Despite significant tech advances, evaluation frameworks remain stubbornly focused on performance metrics while largely ignoring ethical, social, and human-centric factors. It's like judging a restaurant solely on how fast it serves food while ignoring whether the meals are nutritious, safe, or actually taste good.

This measurement myopia is leading us astray. Many recent studies have found high levels of bias against specific demographic groups when AI models are asked to make decisions about individuals in tasks such as hiring, salary recommendations, loan approvals, and sentencing. These outcomes are not just theoretical. For instance, facial recognition systems deployed in law enforcement contexts continue to show higher error rates when identifying people of color. Yet these systems often pass traditional performance tests with flying colors. The disconnect is stark: We're celebrating technical achievements while people's lives are being negatively impacted by our measurement blind spots.

Real-World Lessons

IBM's Watson for Oncology was once pitched as a revolutionary breakthrough that would transform cancer care. Measured by traditional metrics, the AI model appeared highly impressive, processing vast amounts of medical data rapidly and generating treatment recommendations with clinical sophistication. However, as Scientific American reported, reality fell far short of this promise. When major cancer centers implemented Watson, significant problems emerged. The system's recommendations often didn't align with best practices, in part because Watson was trained primarily on a limited number of cases from a single institution rather than a comprehensive database of real-world patient outcomes.

The disconnect wasn't in Watson's computational capabilities -- according to traditional performance metrics, it functioned as designed. The gap was in its human-centered evaluation capabilities: Did it improve patient outcomes? Did it augment physician expertise effectively?
When measured against these standards, Watson struggled to prove its value, leading many healthcare institutions to abandon the system.

Prioritizing Dignity

Microsoft's Seeing AI is an example of what happens when companies measure success through a human-centered lens from the beginning. As Time magazine reported, the Seeing AI app emerged from Microsoft's commitment to accessibility innovation, using computer vision to narrate the visual world for blind and low-vision users. What sets Seeing AI apart isn't just its technical capabilities but how the development team prioritized human dignity and independence over pure performance metrics. Microsoft worked closely with the blind community throughout the design and testing phases, measuring success not by accuracy percentages alone but by how effectively the app enhanced users' ability to navigate their world independently. This approach created technology that genuinely empowers users, providing real-time audio descriptions that help with everything from selecting groceries to navigating unfamiliar spaces. The lesson: When we start with human outcomes as our primary success metric, we build systems that don't just work -- they make life meaningfully better.

Five Critical Dimensions of Success

Smart leaders are moving beyond traditional metrics to evaluate systems across five critical dimensions:

1. Human-AI Collaboration. Rather than measuring performance in isolation, assess how well humans and technology work together. Recent research in the Journal of the American College of Surgeons showed that AI-generated postoperative reports were only half as likely to contain significant discrepancies as those written by surgeons alone. The key insight: A careful division of labor between humans and machines can improve outcomes while leaving humans free to spend more time on what they do best.

2. Ethical Impact and Fairness. Incorporate bias audits and fairness scores as mandatory evaluation metrics. This means continuously assessing whether systems treat all populations equitably and impact human freedom, autonomy, and dignity positively. (A minimal audit sketch follows this list.)

3. Stability and Self-Awareness. A Nature Scientific Reports study found performance degradation over time in 91 percent of the models it tested once they were exposed to real-world data. Instead of just measuring a model's out-of-the-box accuracy, track performance over time and assess the model's ability to identify performance dips and escalate to human oversight when its confidence drops. (See the monitoring sketch after this list.)

4. Value Alignment. As the World Economic Forum's 2024 white paper emphasizes, AI models must operate in accordance with core human values if they are to serve humanity effectively. This requires embedding ethical considerations throughout the technology lifecycle.

5. Long-Term Societal Impact. Move beyond narrow optimization goals to assess alignment with long-term societal benefits. Consider how technology affects authentic human connections, preserves meaningful work, and serves the broader community good. That means:

  • Supporting genuine human connection and collaboration
  • Preserving meaningful human choice and agency
  • Serving human needs rather than reshaping humans to serve technological needs
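To make dimension 2 concrete, a bias audit can start simply: Compare outcome rates across demographic groups and flag large gaps for human review. The sketch below is a minimal illustration, assuming binary selection decisions and using the common four-fifths (80%) rule of thumb as the flagging threshold; the group labels and data are made up, and a real audit would add proper statistical testing and domain-specific fairness definitions.

```python
from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected in {0, 1}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, sel in decisions:
        totals[group] += 1
        selected[group] += sel
    return {g: selected[g] / totals[g] for g in totals}


def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest.

    The four-fifths rule of thumb flags ratios below 0.8 for review.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates


if __name__ == "__main__":
    # Hypothetical hiring decisions: (demographic group, selected?).
    audit = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    ratio, rates = disparate_impact_ratio(audit)
    print(f"selection rates: {rates}, ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Potential disparate impact -- escalate for human review.")
```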
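Dimension 3 can be operationalized the same way. This sketch, under assumed parameters (a fixed sliding window and tolerance, plus access to eventual ground-truth labels), tracks accuracy on recent production decisions and signals for human oversight when performance slips below the model's validated baseline.

```python
from collections import deque


class DriftMonitor:
    """Track windowed accuracy and flag dips below a validated baseline."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, label) -> None:
        """Log one production decision once its true label is known."""
        self.results.append(1 if prediction == label else 0)

    def needs_human_review(self) -> bool:
        """True when windowed accuracy drops below baseline - tolerance."""
        if len(self.results) < self.results.maxlen:
            return False  # not enough recent data to judge
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.baseline - self.tolerance


if __name__ == "__main__":
    # Tiny window for demonstration; production windows would be larger.
    monitor = DriftMonitor(baseline_accuracy=0.95, window=3)
    for pred, label in [(1, 1), (0, 1), (0, 1)]:
        monitor.record(pred, label)
    print(monitor.needs_human_review())  # True: windowed accuracy is 0.33
```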
The Path Forward

Forward-thinking leaders implement comprehensive evaluation approaches by starting with the desired human outcomes, then establishing continuous human input loops and measuring results against the goals of human stakeholders. The companies that get this right won't just build better systems -- they'll build more trusted, more valuable, and ultimately more successful businesses. They'll create technology that doesn't just process data faster but that genuinely enhances human potential and serves societal needs.

The stakes couldn't be higher. As AI models become more prevalent in critical decisions around hiring, healthcare, criminal justice, and financial services, our measurement approaches will determine whether these models serve humanity well or perpetuate existing inequalities. In the end, the most important test of all is whether using AI for a task makes human lives genuinely better. The question isn't whether your technology is fast enough but whether it's human enough. That is the only metric that ultimately matters.
