Latest news with #UniversityOfEastAnglia


Al Jazeera
3 days ago
- Politics
- Al Jazeera
'Feelings running high' over politics at Glastonbury this year
George McKay, a professor at the University of East Anglia, notes that Glastonbury has a long tradition of political expression and asks whether Kneecap's overt support for Palestine could mark a turning point in their career or see them fade away.


CBS News
14-06-2025
- Science
- CBS News
How optical illusions are illuminating vital medical research
New York — At the Museum of Illusions in New York City, around every corner is a wonder for the eyes. There's a vase that's a face, art that moves with you, and a room that seems to go on forever. It's a funhouse for our perceptions, built for the TikTok age. But the visual tricks are windows into how the mind works, and they fascinate scientists.

"The brain uses all the information it can get to figure out what's in front of it," Dr. Martin Doherty, a psychology professor at the University of East Anglia in England, told CBS News. Doherty has studied one particular puzzle for years: the Ebbinghaus illusion, an optical illusion which shows how size perception can be manipulated using surrounding shapes. "The illusion works by using context to mess around with your perception," Doherty explains.

Doherty long thought that everyone saw the Ebbinghaus illusion the same way. But in a study published in March in the journal Scientific Reports, he and his colleagues found that radiologists, who have years of training to ignore visual distractions, actually see the image differently and accurately. In the study, researchers tested 44 experts in "medical image interpretation" — radiographers and radiologists — against a control group of nonexperts consisting of psychology and medical students. They found that the experts were "significantly less susceptible to all illusions except for the Shepard Tabletops, demonstrating superior perceptual accuracy."

"According to the theory, that shouldn't happen," Doherty said. "It shouldn't be possible. No previous research has shown that you can learn to see through them."

One other group has also been shown to solve the illusion: young children. But that ability goes away after age 7, Doherty said. "We think that's because it takes time to learn to integrate context into your perception," Doherty said. It's evidence of the deep abilities of a trained brain. But for most of us, illusions are proof of our limitations.

"When you see these visuals, it's just like your brain just starts going crazy," museumgoer Kevin Paguay said. It's also a reminder that you cannot always believe what you see.


Telegraph
15-05-2025
- General
- Telegraph
Harvard's $27 copy of Magna Carta revealed to be $21m original
A copy of Magna Carta bought by Harvard University for just $27 in the 1940s is an original worth $21 million (£16 million), scans have shown. The document, originally drafted by Cardinal Stephen Langton, the Archbishop of Canterbury, in 1215 to make peace between King John and rebel barons, is credited with laying the foundations of many democracies around the world. Although the first version was annulled, it was reissued in 1300 by Edward I, promising protection of church rights, limits on taxes and access to impartial justice. Four of its clauses, including a guarantee of due legal process, are still in law today.

There were thought to be only six originals remaining from the final version, and Harvard believed it had bought an unofficial replica at auction for $27.50 in 1946. In the auction catalogue, the document was described as a 'copy… made in 1327… somewhat rubbed and damp-stained'. But new analysis by King's College London and the University of East Anglia has found the handwriting, sizing and elongated letters are all consistent with the original. When a similar original Magna Carta was auctioned in 2007, it sold for $21.3 million.

David Carpenter, professor of medieval history at King's College London, said: 'This is a fantastic discovery. Harvard's Magna Carta deserves celebration, not as some mere copy, stained and faded, but as an original of one of the most significant documents in world constitutional history, a cornerstone of freedoms past, present and yet to be won.'

Prof Carpenter was studying unofficial copies of Magna Carta when he came across the digitised version of the document on the Harvard Law School Library website and realised it might be an original document and not a copy. He began to compare it with other originals to establish its authenticity and teamed up with the University of East Anglia's Nicholas Vincent, a fellow professor of medieval history, to investigate its provenance.
The pair realised that its dimensions – 19.2in by 18.6in – were the same as those of the six previously known originals, as is the handwriting, with the large capital 'E' at the start of 'Edwardus' and the elongated letters in the first line. Using images obtained by Harvard Law School librarians via ultraviolet light and spectral imaging, the pair discovered that the text matched up perfectly with that in the other originals.

Prof Vincent said: 'If you asked anybody what the most famous single document in the history of the world is, they would probably name Magna Carta. It is an icon both of the Western political tradition and of constitutional law.'

The pair believe the document may be the lost Magna Carta issued to the former parliamentary borough of Appleby in Westmorland. The manuscript was sent to auction in 1945 by Air Vice-Marshal Forster 'Sammy' Maynard, a First World War pilot, who had inherited archives from Thomas and John Clarkson, leading campaigners against the slave trade. In the early 1800s, Clarkson retired to the Lake District, where he became a friend both of the poet William Wordsworth and of William Lowther, a local landowner and hereditary lord of the manor of Appleby.

Considered a key step in the evolution of human rights against oppressive rulers, Magna Carta has formed the basis of constitutions around the world. It was influential in the founding of the United States, from the Declaration of Independence to the framing of the US Constitution and the subsequent adoption of the Bill of Rights. Only four original copies of the 1215 Magna Carta survive: two are kept in the British Library (one of which was badly damaged by fire in 1731), one in Salisbury Cathedral and the other in Lincoln Castle.
Amanda Watson, Harvard Law School's assistant dean for library and information services, said: 'Congratulations to Professors Carpenter and Vincent on their fantastic discovery. This work exemplifies what happens when magnificent collections, like Harvard's, are opened to brilliant scholars.'


Forbes
09-04-2025
- Politics
- Forbes
Is AI Really ‘Woke' Or Extremist?
As AI becomes increasingly embedded in business operations, concerns about political bias and extremist influence have emerged as critical considerations.

I truly believe AI will be the most transformative technology of our lifetimes. However, even I, a firm advocate for the good I think it will do, can see that there is a huge amount of hype and confusion around it. This isn't surprising: some of the biggest and most powerful corporations have bet the house on selling it to us. It's also a highly contentious subject, with many rightly concerned about its possible impact on jobs, privacy and security. Another frequently voiced fear is that AI will be used to create disinformation that could further political narratives or even influence our democratic choices.

There are two claims I see made frequently. The first is that AI can be used to spread extremist beliefs and maybe even create extremists. The second is that AI output veers towards the 'woke' – a term originally used by African American civil rights protesters but now most frequently used by conservatives to refer to progressive or pro-social-justice ideas and beliefs.

Reports concerning left-leaning bias in AI were particularly prevalent during last year's US election. At the same time, counter-terrorist think tanks have warned that extremist groups are using AI to indoctrinate. As both of these myths concern the dangers of AI being used to influence political opinions, I thought it made sense to examine them together. So, are they true? Does AI really have the power to drive us to commit terrorist acts, or to adopt liberal philosophies and become 'woke'?

Conservative and right-wing commentators frequently claim that AI, and the Silicon Valley culture it often originates from, have a left-wing bias. And it does seem that there is at least some evidence to back up these beliefs.
A number of studies, including one by the University of East Anglia in 2023 and one published in the Journal of Economic Behavior and Organization, make the case that this is true. Of course, generative AI doesn't actually have a political opinion – or any opinions, for that matter. Everything it 'knows' comes from data scraped from the web, including books, scientific papers and journals, as well as content from discussion forums and social media. If that data happens to support a progressive consensus – for example, if the majority of climate science data supports theories that climate change is man-made – then the AI is likely to present this as true.

Rather than simply presenting facts with a left-wing bias, some of the research focuses on findings that AI will just refuse to process "right-wing image generation" requests. And when prompts describe images featuring progressive talking points like 'racial-ethnic equality' or 'transgender acceptance', the results are more likely to show positive images (happy people, for example).

But that doesn't necessarily mean AI is 'woke'. In fact, further research has found that LLM-based AIs can also display right-wing bias, and the results vary according to which AI is tested. A study recently published in Nature found that, based on standardized political orientation tests, there has been 'a clear and statistically significant shift in ChatGPT's ideological positioning over time.'

At the end of the day, AI systems are built by humans and trained on the data we select. If bias is present in the way their algorithms are engineered or in the information they are given about the world, then that bias is very likely to be replicated in their output. While some researchers are concerned that AI will turn everyone into liberals, others are more worried that it will be used to radicalize people or further extremist agendas.
The International Centre for Counter-Terrorism, based in The Hague, reports that terrorist groups already widely use generative AI to create and spread propaganda. This includes using fake images and videos to spread narratives that align with their values. Terrorist and extremist groups, including Islamic State, have even released guides demonstrating how to use AI to develop propaganda and disinformation. Often, the aim is simply to sow chaos and confusion, leading to distrust of establishment agencies, institutions and mainstream (which usually means edited and fact-checked) media. It's also been suggested that extremists can use AI to work out who is susceptible to radicalization in the first place, by predicting who is likely to be sympathetic to their ideology. Again, this is a case of humans using AI to persuade people to adopt their views, rather than an indication that AI is extreme or prone to suggesting extreme ideas and behaviors.

However, one inherent risk with AI is its capability to reinforce extreme views through the algorithmic echo-chamber effect. This happens when social media and news platforms use AI to suggest content based on past engagement. This often results in users being shown more of what they already agree with, creating 'echo chambers' where people repeatedly see content that mirrors their existing beliefs. If those beliefs are extreme, AI can amplify the effect by serving up similar, more radical content.

It's important to remember that while AI is likely to play an increasing role in shaping the way we consume information, it can't directly influence our beliefs. It should also be noted that AI can help counter these threats. It can detect bias in data, for example, that could lead to biased responses, and it can find and remove extremist content from the Internet.
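The echo-chamber dynamic described above is easy to sketch in code: a recommender that scores candidate content by its overlap with a user's past clicks will, by construction, keep surfacing more of the same. The following is a minimal illustrative sketch only, not any platform's actual system; the `recommend` function, the topic tags and the item titles are all invented for the example.

```python
from collections import Counter

def recommend(history, catalog, k=3):
    """Rank catalog items by overlap with tags the user already engaged with.

    history: list of sets of topic tags, one set per item the user clicked.
    catalog: list of (title, tags) candidate items.
    """
    # Tally how often each tag appears in the user's click history.
    weights = Counter(tag for item in history for tag in item)
    # Score each candidate by the summed weight of its tags: items that
    # resemble past clicks score higher, so the feed narrows over time.
    scored = sorted(catalog, key=lambda c: -sum(weights[t] for t in c[1]))
    return [title for title, _ in scored[:k]]

# A user who has only clicked political content...
history = [{"politics", "immigration"}, {"politics", "crime"}]
catalog = [
    ("More hardline politics", {"politics", "immigration"}),
    ("Gardening tips", {"gardening"}),
    ("Crime and politics", {"politics", "crime"}),
]
# ...is shown only more political content; unrelated items never surface.
print(recommend(history, catalog, k=2))
# → ['More hardline politics', 'Crime and politics']
```

Each click feeds back into `history`, so the weighting grows more lopsided with every iteration, which is the amplification loop the article describes.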
Nevertheless, there is clearly a perception, which appears to be justified, that groups of all political affiliations will inevitably use it to try to steer public opinion. Understanding where misinformation comes from and who might be trying to spread it helps us to hone our critical-thinking skills and become better at understanding when somebody (or some machine) is trying to influence us. These skills will become increasingly important as AI becomes more ingrained in everyday life, no matter which way we lean politically.