Latest news with #AtlanticIntelligence


Atlantic
17 hours ago
- Business
- Atlantic
The College-Major Gamble
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

When I was in college, the Great Recession was unfolding, and it seemed like I had made a big mistake. With the economy crumbling and job prospects going with it, I had selected as my majors … journalism and sociology. Even the professors joked about our inevitable unemployment. Meanwhile, a close friend had switched majors and started to take computer-science classes—there would obviously be opportunities there.

But that conventional wisdom is starting to change. As my colleague Rose Horowitch writes in an article for The Atlantic, entry-level tech jobs are beginning to fade away, in part because of new technology itself: AI is able to do many tasks that previously required a person. 'Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words,' Rose writes. 'This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true.'

I spoke with Rose about how AI is affecting college students and the job market—and what the future may hold. This interview has been edited and condensed.

Rose Horowitch: There are a lot of tech executives coming out and saying that AI is replacing some of their coders, and that they just don't need as many entry-level employees. I spoke with an economics professor at Harvard, David Deming, who said that may be a convenient talking point—nobody wants to say We didn't hit our sales targets, so we have to lay people off. What we can guess is that the technology is actually making senior engineers more productive; therefore they need fewer entry-level employees. It's also one more piece of uncertainty that these tech companies are dealing with—in addition to tariffs and high interest rates—that may lead them to put off hiring.

Damon: Tech companies do have a vested interest in promoting AI as such a powerful tool that it could do the work of a person, or multiple people. Microsoft recently laid off thousands of people, as you write in your article, and the company also said that AI writes or helps write 25 percent of its code—that's a helpful narrative for Microsoft, because Microsoft sells AI tools. At the same time, it does feel pretty clear to me that many different industries are dealing with the same issues. I've spoken about generative AI replacing entry-level work with prominent lawyers, journalists, people who work in tech—the worry feels real to me.

Rose: I spoke with Molly Kinder, a Brookings Institution fellow who studies how AI affects the economy, and she said that she's worried that the bottom rung of the career ladder across industries is breaking apart. If you're writing a book, you may not need to hire a research assistant if you can use AI. It's obviously not going to be perfectly accurate, and it couldn't write the book for you, but it could make you more productive. Her concern, which I share, is that you still need people to get trained and then ascend at a company. The unemployment rate for young college graduates is already unusually high, and this may lead to more problems down the line that we can't even foresee. These early jobs are like apprenticeships: You're learning skills that you don't get in school. If you skip that, it's cheaper for the company in the short term, but what happens to white-collar work down the line?

Damon: How are the schools themselves thinking about this reality—that they have students in their senior year facing a completely different prospect for their future than when they entered school four years ago?

Rose: They're responding by figuring out how to produce graduates who are prepared to use AI tools in their work and to be competitive applicants. The challenge is that the technology is changing so quickly—you need to teach students about what's relevant professionally while also teaching the fundamental skills, so that they're not just reliant on the machines.

Damon: Your article makes the point that students should be focused less on learning a particular skill and more on studying something that's durable for the long term. Do you think students really will shift what they're studying? Will the purpose of higher education itself change somehow?

Rose: It's likely that we'll see a decline in students studying computer science, and then, at some point, there will be too few job candidates, salaries will be pushed up, and more students will go in. But the most important thing that students can do—and it's so counterintuitive—is to study things that will give them the human skills and soft skills that will help them endure in any industry. Even without AI, jobs are going to change. The challenge is that, in times of crisis, people tend to choose something preprofessional, because it feels safer. That cognitive bias can be unhelpful.

Damon: You cover higher education in general. You're probably best known for the story you did about how elite college students can't read books anymore, which feels related to this discussion for obvious reasons. I'm curious to know more about why you were interested in exploring this particular topic.

Rose: Higher ed, more than at any time in recent memory, is facing the question of what it is for. People are questioning the value of it much more than they did 10, 20 years ago. And so these articles all fit into that theme: What is the value of higher ed, of getting an advanced degree? The article about computer-science majors shows that this thing that everyone thought was a sure bet doesn't seem to be. That reinforces why higher education needs to make the case for its value—how it teaches people to be more human, or what it's like to live a productive life in a society.

Damon: There are so many crisis points in American higher education right now. AI is one of them. Your article about reading suggested a problem that may have emerged from other digital technologies. Obviously there have been issues stemming from the Trump administration. There was the Claudine Gay scandal. This is all in the past year or two. How do you sum it all up?

Rose: Most people are starting to realize that the status quo is not going to work. There's declining trust in education, particularly from Republicans. A substantial portion of the country doesn't think higher ed serves the nation. The fact is that at many universities, academic standards have declined so much. Rigor has declined. Things cannot go on as they once did. What comes next, and who's going to chart that course? The higher-education leaders I speak with, at least, are trying to answer that question themselves so that it doesn't get defined by external forces like the Trump administration.


Atlantic
28-03-2025
- Entertainment
- Atlantic
Hayao Miyazaki's AI Nightmare
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

This week, OpenAI released an update to GPT-4o, one of the models powering ChatGPT, that allows the program to create high-quality images. I've been surprised by how effective the tool is: It follows directions precisely, renders people with the right number of fingers, and is even capable of replacing text in an image with different words. Almost immediately—and with the direct encouragement of OpenAI CEO Sam Altman—people started using GPT-4o to transform photographs into illustrations that emulate the style of Hayao Miyazaki's animated films at Studio Ghibli. (Think Kiki's Delivery Service, My Neighbor Totoro, and Spirited Away.) The program was excellent at this task, generating images of happy couples on the beach (cute) and lush illustrations of the Kennedy assassination (not cute).

Unsurprisingly, backlash soon followed: People raised concerns about OpenAI profiting off of another company's intellectual property, pointed to a documentary clip of Miyazaki calling AI an 'insult to life itself,' and mused about the technology's threats to human creativity. All of these conversations are valid, yet they didn't feel altogether satisfying—complaining about a (frankly, quite impressive!) thing doesn't make that thing go away, after all. I asked my colleague Ian Bogost, also the Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, for his take. This interview has been edited and condensed.

Damon Beres: Let's start with the very basic question. Are the Studio Ghibli images evil?

Ian Bogost: I don't think they're evil. They might be stupid. You could construe them as ugly, although they're also beautiful. You could construe them as immoral or unseemly. If they are evil, why are they evil? Where does that get us in our understanding of contemporary technology and culture? We have backed ourselves into this corner where fandom is so important and so celebrated, and has been for so long. Adopting the universe and aesthetics of popular culture—whether it's Studio Ghibli or Marvel or Harry Potter or Taylor Swift—that's not just permissible, but good and even righteous in contemporary culture.

Damon: So the idea is that fan art is okay, so long as a human hand literally drew it with markers. But if any person is able to type a very simple command into a chatbot and render what appears at first glance to be a professional-grade Studio Ghibli illustration, then that's a problem.

Ian: It's not different in nature to have a machine copy the style of an artist than to have a person copy the style of an artist. But there is a difference in scale: With AI, you can make the copies fast, and you can make lots of them. That's changed people's feelings about the matter. I read an article about copyright and style—you can't copyright a style, it argued—that made me realize that people conflate many different things in this conversation about AI art. People who otherwise might hate copyright seem to love it now: If they're posting their own fan art and get a takedown request, then they're like, Screw you, I'm just trying to spread the gospel of your creativity. But those same people might support a copyright claim against a generative-AI tool, even though it's doing the same thing.

Damon: As I've experimented with these tools, I've realized that the purpose isn't to make art at all; a Ghibli image coming out of ChatGPT is about as artistic as a photo with an Instagram filter on it. It feels more like a toy to me, or a video game. I'm putting a dumb thought into a program and seeing what comes out. There's a low-effort delight and playfulness. But some people have made the point that it's insulting because it violates Studio Ghibli co-founder Hayao Miyazaki's beliefs about AI. Then there are these memes—the White House tweeted a Ghiblified image of an immigrant being detained, which is extremely distasteful. But the image is not distasteful because of the technology: It's distasteful because it's the White House tweeting a cruel meme about a person's life.

Ian: You brought up something important: this embrace of the intentional fallacy—the idea that a work's meaning is derived from what the creator of that work intended that meaning to be. These days, people express an almost total respect for the intentions of the artist. It's perfectly fine for Miyazaki to hate AI or anything else, of course, but the idea that his opinion would somehow influence what I think about making AI images in his visual style is fascinating to me.

Damon: Maybe some of the frustration that people are expressing is that it makes Studio Ghibli feel less special. Studio Ghibli movies are rare—there aren't that many of them, and they have a very high-touch execution. Even if we're not making movies, the aesthetic being everywhere and the aesthetic being cheap cuts against that.

Ian: That's a credible theory. But you're still in intentional-fallacy territory, right? Studio Ghibli has made a deliberate effort to tend and curate its output—it doesn't just make a movie every year—and I want to respect that as someone influenced by that work. And that's weird to me.

Damon: What we haven't talked about is the Ghibli image as a kind of meme. They're not just spreading because they're Ghibli images: They're spreading because they're AI-generated Ghibli images.

Ian: This is a distinctive style of meme, based less on the composition of the image itself or the text you put on it than on the application of an AI-generated style to a subject. I feel like this does represent some sort of evolutionary branch of internet meme. You need generative AI to make that happen—you need it to be widespread and good enough and fast enough and cheap enough. And you need X and Bluesky in a way as well.

Damon: You can't really imagine image generators in a paradigm where there's no social media.

Ian: What would you do with them, show them to your mom? These are things that are made to be posted, and that's where their life ends.

Damon: Maybe that's what people don't like, too—that it's nakedly transactional.

Ian: Exactly—you're engagement baiting. These days, that accusation is equivalent to selling out.

Damon: It's this generation's poser.

Ian: Engagement baiter.

Damon: Leave me with a concluding thought about how people should react to these images.

Ian: They ought to be more curious. This is deeply interesting, and if we refuse to give ourselves the opportunity to even start engaging with why, and instead jump to the most convenient or in-crowd conclusion, that's a real shame.


Atlantic
28-02-2025
- Business
- Atlantic
The Complicated Relationship Between Sam Altman and Donald Trump
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

President Donald Trump has been clear about his vision for America as an AI superpower, signing in his first week an executive order geared toward helping AI 'promote human flourishing, economic competitiveness, and national security.' In order to achieve this goal, the Trump administration has forged a relationship with OpenAI and its CEO, Sam Altman—but that could complicate things for Elon Musk. As my colleague Matteo Wong wrote for The Atlantic on Wednesday, the two technologists are bitter rivals. Musk was one of OpenAI's initial investors and served on the company's board, but he left in 2018. Ever since ChatGPT made OpenAI a household name, Musk has routinely taken potshots at the company, calling its chatbot too 'woke' and using the nickname 'Scam Altman' to refer to its CEO. Meanwhile, he's launched his own AI firm, xAI, whose products have lagged behind OpenAI's.

Musk and Altman may both need Trump's blessing to unlock considerable resources for their projects and to stave off inconvenient regulations. 'Anything that OpenAI might gain from Trump, xAI could reap as well,' Matteo writes. The companies are in competition with each other, so any advantage that one gets may be to the detriment of the other, building up a tension between Musk and Altman that could eventually snap.

How Sam Altman Could Break Up Elon Musk and Donald Trump

By Matteo Wong

The rivalry between Sam Altman and Elon Musk is entering its Apprentice era. Both men have the ambition to redefine how the modern world works—and both are jockeying for President Donald Trump's blessing to accelerate their plans. Altman's company, OpenAI, as well as Musk's ventures—which include SpaceX, Tesla, and xAI—all depend to some degree on federal dollars, permits, and regulatory support. The president could influence whether OpenAI or xAI produces the next major AI breakthrough, whether Musk can succeed in sending a human to Mars, and whether Altman's big bet on nuclear energy, and fusion reactors in particular, pans out.

What to Read Next

'Terrified' federal workers are clamming up: Karen Hao recently spoke with more than a dozen federal workers about the culture of fear and paranoia that they say is spreading through their agencies under the Trump administration. They allege that they are being hindered from doing their work—some of which touches on risks emerging from AI. 'Federal workers I spoke with now say that neither they nor their colleagues want to be associated in any way with working on or promoting disinformation research,' Hao writes, 'even as they are aware that the U.S. government's lack of visibility into such networks could create a serious national vulnerability, especially as AI gives state-backed operations powerful upgrades.'
Yahoo
21-02-2025
- Business
- Yahoo
What Could DOGE Do With Federal Data?
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.

When the Department of Government Efficiency stormed the federal government, it had a clear operating theory: To remake the government, one must remake the civil service. And in particular, the team of Elon Musk acolytes 'focused on accessing the terminals, uncovering the button pushers, and taking control,' Michael Scherer, Ashley Parker, Shane Harris, and I wrote this week in an investigation into the DOGE takeover. Computers, they figured, run the government.

DOGE members and new political appointees have sought access to data and IT systems across the government—at the Treasury Department, the IRS, the Department of Health and Human Services, and more. Government technologists have speculated that DOGE's next step will be to centralize those data and feed them into AI systems, making bureaucratic processes more efficient while also identifying fraud and waste, or perhaps simply uncovering further targets to dismantle. Musk's team has reportedly already fed Department of Education data into an AI system, and Thomas Shedd, a former Tesla engineer recently appointed to the General Services Administration, has repeatedly spoken with staff about an AI strategy, mentioning using the technology to develop coding agents and analyze federal contracts. No matter DOGE's goal, putting so much information in one place and under the control of a small group of people with little government experience has raised substantial security concerns. As one recently departed federal technology official wrote in draft testimony for lawmakers, which we obtained, 'DOGE is one romance scam away from a national security emergency.'

This Is What Happens When the DOGE Guys Take Over

By Michael Scherer, Ashley Parker, Matteo Wong, and Shane Harris

They arrived casually dressed and extremely confident—a self-styled super force of bureaucratic disrupters, mostly young men with engineering backgrounds on a mission from the president of the United States, under the command of the world's wealthiest online troll. On February 7, five Department of Government Efficiency representatives made it to the fourth floor of the Consumer Financial Protection Bureau headquarters, where the executive suites are located. They were interrupted while trying the handles of locked office doors. 'Hey, can I help you?' asked an employee of the agency that was soon to be forced into bureaucratic limbo. The DOGE crew offered no clear answer. Read the full article.

What to Read Next

DOGE and new Trump appointees' access to federal data and computer systems is growing in both breadth and depth. Defense technologies, Americans' sensitive personal and health data, dangerous biological research, and more are in reach. Within at least one agency, USAID, they have achieved 'God mode,' according to an employee in senior leadership—meaning Elon Musk's team has 'total control over systems that Americans working in conflict zones rely on, the ability to see and manipulate financial systems that have historically awarded tens of billions of dollars, and perhaps much more,' Charlie Warzel, Ian Bogost, and I reported this week. With this level of control, the USAID staffer feared, DOGE could terminate federal workers in 'a conflict zone like Ukraine, Sudan, or Ethiopia.' In the coming weeks, we reported, 'the team is expected to enter IT systems at the CDC and Federal Aviation Administration.'

Just how far Musk and his team can go is uncertain; they face various lawsuits, which have thus far had varying success. The team may be trying to improve the government's inner workings, as is its stated purpose. 'But in the offices where the team is reaching internal IT systems,' Charlie, Ian, and I wrote, 'some are beginning to worry that [Musk] might prefer to destroy' the government, 'to take it over, or just to loot its vaults for himself.'