
Is ChatGPT killing higher education?

Yahoo

4 days ago

What's the point of college if no one's actually doing the work? It's not a rhetorical question. More and more students are not doing the work. They're offloading their essays, their homework, even their exams, to AI tools like ChatGPT or Claude. These are not just study aids. They're doing everything. We're living in a cheating utopia — and professors know it. It's becoming increasingly common, and faculty are either too burned out or unsupported to do anything about it. And even if they wanted to do something, it's not clear that there's anything to be done at this point. So what are we doing here?

James Walsh is a features writer for New York magazine's Intelligencer and the author of the most unsettling piece I've read about the impact of AI on higher education. Walsh spent months talking to students and professors who are living through this moment, and what he found isn't just a story about cheating. It's a story about ambivalence and disillusionment and despair. A story about what happens when technology moves faster than our institutions can adapt.

I invited Walsh onto The Gray Area to talk about what all of this means, not just for the future of college but the future of writing and thinking. As always, there's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday. This interview has been edited for length and clarity.

Let's talk about how students are cheating today. How are they using these tools? What's the process look like?

It depends on the type of student, the type of class, the type of school you're going to. Whether or not a student can get away with that is a different question, but there are plenty of students who are taking their prompt from their professor, copying and pasting it into ChatGPT and saying, 'I need a four to five-page essay,' and copying and pasting that essay without ever reading it.
One of the funniest examples I came across is a number of professors are using this so-called Trojan horse method where they're dropping non-sequiturs into their prompts. They mention broccoli or Dua Lipa, or they say something about Finland in the essay prompts just to see if people are copying and pasting the prompts into ChatGPT. If they are, ChatGPT or whatever LLM they're using will say something random about broccoli or Dua Lipa. Unless you're incredibly lazy, it takes just a little effort to cover that up. Every professor I spoke to said, 'So many of my students are using AI and I know that so many more students are using it and I have no idea,' because it can essentially write 70 percent of your essay for you, and if you do that other 30 percent to cover all your tracks and make it your own, it can write you a pretty good essay.

And there are these platforms, these AI detectors, and there's a big debate about how effective they are. They will scan an essay and assign some grade, say a 70 percent chance that this is AI-generated. And that's really just looking at the language and deciding whether or not that language is created by an LLM. But it doesn't account for big ideas. It doesn't catch the students who are using AI and saying, 'What should I write this essay about?' And not doing the actual thinking themselves and then just writing. It's like paint by numbers at that point.

Did you find that students are relating very differently to all of this? What was the general vibe you got?

It was a pretty wide perspective on AI. I spoke to a student at the University of Wisconsin who said, 'I realized AI was a problem last fall, walking into the library and at least half of the students were using ChatGPT.' And it was at that moment that she started thinking about her classroom discussions and some of the essays she was reading.
The one example she gave that really stuck with me was that she was taking some psych class, and they were talking about attachment theories. She was like, 'Attachment theory is something that we should all be able to talk about [from] our own personal experiences. We all have our own attachment theory. We can talk about our relationships with our parents. That should be a great class discussion. And yet I'm sitting here in class and people are referencing studies that we haven't even covered in class, and it just makes for a really boring and unfulfilling class.' That was the realization for her that something is really wrong. So there are students like that. And then there are students who feel like they have to use AI because if they're not using AI, they're at a disadvantage. Not only that, AI is going to be around no matter what for the rest of their lives. So they feel as if college, to some extent now, is about training them to use AI.

What's the general professor's perspective on this? They seem to all share something pretty close to despair.

Yes. Those are primarily the professors in writing-heavy classes or computer science classes. There were professors who I spoke to who actually were really bullish on AI. I spoke to one professor who doesn't appear in the piece, but she is at UCLA and she teaches comparative literature, and used AI to create her entire textbook for this class this semester. And she says it's the best class she's ever had. So I think there are some people who are optimistic, [but] she was an outlier in terms of the professors I spoke to. For the most part, professors were, yes, in despair. They don't know how to police AI usage. And even when they know an essay is AI-generated, the recourse there is really thorny. If you're going to accuse a student of using AI, there's no real good way to prove it. And students know this, so they can always deny, deny, deny. And the sheer volume of AI-generated essays or paragraphs is overwhelming.
So that, just on the surface level, is extremely frustrating and has a lot of professors down. Now, if we zoom out and think also about education in general, this raises a lot of really uncomfortable questions for teachers and administrators about the value of each assignment and the value of the degree in general.

How many professors do you think are now just having AI write their lectures?

There's been a little reporting on this. I don't know how many are. I know that there are a lot of platforms that are advertising themselves or asking professors to use them more, not just to write lectures, but to grade papers, which of course, as I say in the piece, opens up the very real possibility that right now an AI is grading itself and offering comments on an essay that it wrote. And this is pretty widespread stuff. There are plenty of universities across the country offering teachers this technology. And students love to talk about catching their professors using AI. I've spoken to another couple of professors who are like, I'm nearing retirement, so it's not my problem, and good luck figuring it out, younger generation. I just don't think people outside of academia realize what a seismic change is coming. This is something that we're all going to have to deal with professionally. And it's happening much, much faster than anyone anticipated. I spoke with somebody who works on education at Anthropic, who said, 'We expected students to be early adopters and use it a lot. We did not realize how many students would be using it and how often they would be using it.'

Is it your sense that a lot of university administrators are incentivized to not look at this too closely, that it's better for business to shove it aside?

I do think there's a vein of AI optimism among a certain type of person, a certain generation, who saw the tech boom and thought, I missed out on that wave, and now I want to adopt.
I want to be part of this new wave, this future, this inevitable future that's coming. They want to adopt the technology and aren't really picking up on how dangerous it might be.

I used to teach at a university. I still know a lot of people in that world. A lot of them tell me that they feel very much on their own with this, that the administrators are pretty much just saying, Hey, figure it out. And I think it's revealing that university admins were quickly able, during Covid, for instance, to implement drastic institutional changes to respond to that, but they're much more content to let the whole AI thing play out. I think they were super responsive to Covid because it was a threat to the bottom line. They needed to keep the operation running. AI, on the other hand, doesn't threaten the bottom line in that way, or at least it doesn't yet. AI is a massive, potentially extinction-level threat to the very idea of higher education, but they seem more comfortable with a degraded education as long as the tuition checks are still cashing. Do you think I'm being too harsh?

I genuinely don't think that's too harsh. I think administrators may not fully appreciate the power of AI and exactly what's happening in the classroom and how prevalent it is. I did speak with many professors who go to administrators or even just older teachers, TAs going to professors and saying, This is a problem. I spoke to one TA at a writing course at Iowa who went to his professor, and the professor said, 'Just grade it like it was any other paper.' I think they're just turning a blind eye to it. And that is one of the ways AI is exposing the rot underneath education. It's this system that hasn't been updated in forever. And in the case of the US higher ed system, it's like, yeah, for a long time it's been this transactional experience. You pay X amount of dollars, tens of thousands of dollars, and you get your degree. And what happens in between is not as important.
The universities, in many cases, also have partnerships with AI companies, right?

Right. And what you said about universities can also be said about AI companies. For the most part, these are companies or companies within nonprofits that are trying to capture customers. One of the more dystopian moments was when we were finishing this story, getting ready to completely close it, and I got a push alert that was like, 'Google is letting parents know that they have created a chatbot for children under [thirteen years old].' And it was kind of a disturbing experience, but they are trying to capture these younger customers and build this loyalty. There's been reporting from the Wall Street Journal on OpenAI and how they have been sitting on an AI that would be really, really effective at essentially watermarking their output. And they've been sitting on it, they have not released it, and you have to wonder why. And you have to imagine they know that students are using it, and in terms of building loyalty, an AI detector might not be the best thing for their brand.

This is a good time to ask the obligatory question: Are we sure we're not just old people yelling at clouds here? People have always panicked about new technologies. Hell, Socrates panicked about the written word. How do we know this isn't just another moral panic?

I think there's a lot of different ways we could respond to that. It's not a generational moral panic. This is a tool that's available, and it's available to us just as it's available to students. Society and our culture will decide what the morals are. And that is changing, and the way that the definition of cheating is changing. So who knows? It might be a moral panic today, and it won't be in a year. However, I think somebody like Sam Altman, the CEO of OpenAI, is one of the people who said, 'This is a calculator for words.'
And I just don't really understand how that is compatible with other statements he's made about AI potentially being lights out for humanity, or statements made by people at Anthropic about the power of AI to potentially be a catastrophic event for humans. And these are the people who are closest and thinking about it the most, of course. I have spoken to some people who say there is a possibility, and I think there are people who use AI who would back this up, that we've maxed out the AI's potential to supplement essays or writing. That it might not get much better than it is now. And I think that's a very long shot, one that I would not want to bank on.

Is your biggest fear at this point that we are hurtling toward a post-literate society?

I would argue, if we are post-literate, then we're also post-thinking. It's a very scary thought that I try not to dwell on — the idea that my profession and what I'm doing is just feeding the machine, that my most important reader now is a robot, and that there's going to be fewer and fewer readers is really scary, not just because of subscriptions, but because, as you said, that means fewer and fewer people thinking and engaging with these ideas. I think ideas can certainly be expressed in other mediums and that's exciting, but I don't think anybody who's paid attention to the way technology has shaped teen brains over the past decade and a half is thinking, Yeah, we need more of that. And the technology we're talking about now is orders of magnitude more powerful than the algorithms on Instagram.

Listen to the rest of the conversation and be sure to follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you listen to podcasts.

Frontier expands service at Trenton-Mercer Airport. Here's the new nonstop route

Yahoo

09-05-2025

Passengers flying out of Trenton-Mercer Airport will now have a new vacation option after Frontier Airlines announced another travel route for this summer. Beginning July 10, the airline will offer nonstop service from Trenton-Mercer Airport in Ewing to Myrtle Beach International Airport in South Carolina. "People in Mercer, Bucks, Burlington, and nearby areas really enjoy the convenience of flying out of Trenton-Mercer Airport," Mercer County Executive Dan Benson said in a news release. "Just about every new Frontier flight has been a hit, and I'm sure this new nonstop to Myrtle Beach will be too. It's a great option for travelers and a boost for regional tourism." This new route to Myrtle Beach will be Frontier's sixth nonstop route from Trenton-Mercer, including service to Atlanta, Orlando and West Palm Beach in Florida. Trenton-Mercer Airport welcomed 237,477 passengers in 2024, according to the Bureau of Transportation Statistics, a 26% decrease from the previous year. Lacey Latch is the development reporter for the Bucks County Courier Times and The Intelligencer. She can be reached at LLatch@ This article originally appeared on Bucks County Courier Times: Frontier flights from Trenton to Myrtle Beach will take off July 10

John Fetterman Reacts to Concerns About His Health

Newsweek

06-05-2025

Senator John Fetterman, a Pennsylvania Democrat, responded to concerns about his health stemming from a report published by New York Magazine's Intelligencer last week. Newsweek reached out to Fetterman's office and New York Magazine for comment via email.

Why It Matters

Fetterman has faced concerns about his health since his Senate run in 2022, when he suffered a stroke on the campaign trail. Despite concerns following the stroke, he went on to defeat Republican Mehmet Oz in the battleground state, which has been roughly evenly divided between Democrats and Republicans in recent elections. New York Magazine's Intelligencer published an article featuring an interview with Fetterman's former chief of staff Adam Jentleson, who raised concerns about the senator's health and whether he was following his recovery plan after the stroke. Senator John Fetterman attends the AI Insight Forum in Washington, D.C., on September 13, 2023.

What To Know

Fetterman responded to the article in remarks to NBC News. "It's a one-source story with a couple anonymous sources. A hit piece from a very left publication," Fetterman said. "There's really nothing more to say about that." NBC News associate producer Kate Santaliz followed up by asking, "He said he was worried you're not taking your medications. Are you taking your medications, sir?" The senator reiterated that it was a "hit piece" with "anonymous sources." In additional remarks posted to X, formerly Twitter, by CBS News' Cristina Corujo, Fetterman said he does not believe people are concerned about him. "They're actually not concerned," he said. "It's a hit piece."
Jentleson told the publication that Fetterman appeared to be committed to the recovery plan after the stroke but was not following up properly and had not been attending regular blood draws, a key part of the plan. Jentleson told the Intelligencer that Fetterman "could get back in treatment at any time, and for a long time I held out hope that he would. But it's just been too long now, and things keep getting worse." He left Fetterman's team in March 2024.

What People Are Saying

Progressive commentator and journalist Mehdi Hasan wrote on X on Friday: "This email, a year ago, from Fetterman's former chief of staff, and this entire piece from Ben Terris, makes clear that Fetterman should not be serving in the Senate. Every Senate Democrat should read this and be asked about it - especially Schumer." Former Democratic and independent Senator Kyrsten Sinema of Arizona posted to X: "Despicable hit piece on @JohnFetterman- I wish I was surprised anyone would publish an obvious vendetta re: a man's medical journey. What a weird medical stalker. To the former staffer: My advice to you is to do what your parents did. Get a job, sir."

What Happens Next

Fetterman is up for reelection in 2028 in Pennsylvania, which flipped back to President Donald Trump in the 2024 presidential race after backing former President Joe Biden in 2020.

Liberals who rallied behind Fetterman post-stroke in 2022 turn on pro-Israel senator after NY Magazine report

Yahoo

06-05-2025


Sen. John Fetterman, D-Pa., appears to no longer have the support he once had, with many liberals turning on him following a scathing report focusing on his health.

New York Magazine's Intelligencer published a lengthy piece Friday titled "All By Himself," which says Fetterman "insists he is in good health" in the wake of a massive stroke he suffered in May 2022, "but staffers past and present say they no longer recognize the man they once knew."

Fetterman, once seen as a progressive darling, has earned fanfare from many moderates and conservatives over his pragmatism on various issues. However, he has made more headlines over his ardent support for Israel following the Oct. 7 terrorist attack by Hamas, which has sparked an outcry from the far-left wing of the Democratic Party, including members of his own staff.

Many conservative critics have taken aim at New York Magazine's "hit piece" and believe Fetterman's backing of Israel, which was prominently featured in the report, is the reason liberals are suddenly abandoning him after rallying behind him on the heels of his stroke during the 2022 midterms.

Tech journalist and podcast host Kara Swisher was one of Fetterman's most vocal defenders after NBC News aired a report shedding light on the severity of his stroke, even taking a personal shot at reporter Dasha Burns (now Politico's White House bureau chief), who spoke about the cognitive challenges she witnessed and questioned whether Fetterman understood what she was saying in small talk following a rare in-person interview at the time.

"Sorry to say but I talked to @JohnFetterman for over an hour without stop or any aides and this is just nonsense. Maybe this reporter is just bad at small talk," Swisher posted on X in response to Burns.

That wasn't the attitude Swisher expressed toward New York Magazine correspondent Ben Terris, who authored the report. "This is so sad and Ben Terris handles it with fairness and empathy," Swisher wrote on the social media platform Bluesky. "Having had a stroke, I can say meds and self care is key to a good recovery and a great life. This was also so avoidable and the twisting of Fetterman's massive political skills is painful to read."

Terris' New York Magazine colleague Rebecca Traister repeatedly drew attention to his report on her social media accounts, even sharing someone else's post quoting the report: "One former staffer recalled overhearing Gisele on speakerphone that December saying to Fetterman, 'Who did I marry? Where is the man I married?'"

But in her own piece profiling Fetterman in October 2022, Traister praised his campaign's transparency about his medical records, attacked media outlets for "pushing for further documentation with some of the energy once applied to Hillary's emails," and accused "right-wing carnival barkers" of having "taken cues from the Oz campaign."

"As someone who has recently interviewed him: Fetterman's comprehension is not at all impaired," Traister lectured Burns on X. "He understands everything, it's just that he reads it (which requires extra acuity, I'd argue) and responds in real time. It's a hearing/auditory processing challenge."

Traister expressed sheer excitement over Fetterman's recovery while calling his GOP opponent Dr. Mehmet Oz's attacks on his health "horrifying" during an appearance on MSNBC.
"It was very striking, following his campaign so closely over the past month, to see how swiftly his health was improving and how that improvement was on public display," Traister said in 2022. "You could see almost a day by day, and certainly a week over week, improvement in his ability to address crowds, his ability to be loose and his confidence in front of crowds. And that was really striking," she continued. "And the thing that was striking alongside it was that that visible improvement was happening alongside this building press narrative, certainly on the right wing and also in some major newspapers, about how he was hiding something about his health." MSNBC host Chris Hayes sounded the alarm on the "profoundly unnerving" report on the "urgent concerns" those around Fetterman have about his health. But during the 2022 campaign, Hayes called the attacks about Fetterman's health "gross," and downplayed his stroke as a serious campaign issue since he's an "incredibly authentic dude." MSNBC contributor Rotimi Adeoye sounded off on the New York Magazine report, writing "The Fetterman story is troubling—not just because of chaotic staff allegations, but because someone clearly still struggling with their mental health shouldn't be in such a high-stakes role. The only solution is political: Fetterman should resign. PA Dems need a robust primary." But in another post after Fetterman was elected, Adeoye declared "Our country is better off because John Fetterman is in the Senate." Both posts have since been deleted. Former MSNBC host and Zeteo founder Mehdi Hasan highlighted from the report an email Fetterman's former chief of staff Adam Jentleson sent Fetterman's doctor expressing his concerns, saying it "makes clear that Fetterman should not be serving in the Senate." "Every Senate Democrat should read this and be asked about it - especially Schumer," Hasan wrote. But in Oct. 
2022, Hasan posted "Imagine being a sentient human being who really believes John Fetterman can't be a senator because he had a stroke, but a stroke-free Herschel Walker can be a senator." Fetterman Calls For Bombing Iranian Nuclear Facilities: 'Waste That S---' Liberal writer Jill Filipovic praised the report, calling it "well worth a read." "Not every person is fit to do every job, and someone with serious mental health challenges who may not be complying with a treatment plan probably shouldn't be in congress," Filipovic wrote. But in October 2022, Filipovic chalked up the impact of Fetterman's stroke as mere speech impairment. "I know it's too much to expect consistency from Republicans, but it's weird to see them go after Fetterman because his stroke has impaired his speech, but defend Herschel Walker by being like, 'it's not his fault he can't remember the abortions he paid for, he has brain damage.'" Filipovic wrote at the time. She also praised Fetterman for demonstrating "a kind of courage and gumption rarely seen on the national political stage" following his Senate debate performance against Oz. Click Here For The Latest Media And Culture News "Senator Fetterman routinely drives so recklessly he nearly killed his wife in a car crash," Democratic activist Armand Domalewski wrote while highlighting an excerpt from the report about a driving accident Fetterman was involved in last year. That wasn't the attitude Domalewski always had. "[D]riving me crazy that Fetterman has to apologize for stumbling over his words after a stroke but we all have to just keep going on normally as if his opponent didn't TORTURE LITERAL PUPPIES," Domalewski wrote in October 2022, referencing his Fetterman's Republican rival Oz. "Fetterman could be a stumbling drunk who forgets his pants half the time and it wouldn't matter because Dr Oz literally TORTURED AND KILLED PUPPIES!!!!" he added. 
Even after Fetterman won his election, Domalewski was hyping the senator's political prospects. "Unless his health takes a dive, Biden is obviously running for re-election, but if he does bow out and Fetterman's health continues to recover, Big John is clearly a Presidential contender," Domalewski wrote in November 2022. Original article source: Liberals who rallied behind Fetterman post-stroke in 2022 turn on pro-Israel senator after NY Magazine report
