Latest news with #MargaretMitchell


Daily Record
16-06-2025
- Daily Record
Hamilton Guides enjoy special adventures thanks to Generation Cashback funding
The 17th Hamilton members tackled a climbing wall, took to the water and visited New Lanark, Garrion Bridges and Girlguiding South Lanarkshire's residential site. Guides from Hamilton enjoyed a host of new adventures thanks to a funding boost – allowing them to try out new opportunities ranging from taking to the water to scaling a climbing wall. Members of the 17th Hamilton unit had a busy programme of activities arranged by their volunteer leaders thanks to the support of the Generation Cashback project, a Scottish Government-funded programme aiming to increase opportunities for young people living in areas of deprivation. The 'lifechanging' experiences for the Guides, aged 10 to 14, included hitting the heights as they reached the top of a climbing wall plus boarding a boat for the first time – as well as enjoying a camping weekend at Gowanpark, Girlguiding South Lanarkshire's residential activity centre near Crossford. During their residential, they visited New Lanark World Heritage Centre and enjoyed afternoon tea at Garrion Bridges – another first which none of the Guides had ever done before, and with the funding helping to ensure all members had the chance to participate. Unit leader Margaret Mitchell said: 'We're proud of giving girls different experiences that they can't get at home. We take them on adventures that they would never get to go on and we get so much out of seeing them do new things. 'Anything we get we use to enhance the girls' experience of Girlguiding and life. We want them to have memories for life.' The Hamilton unit was supported to undertake their adventure through the Generation CashBack project delivered by partners Girlguiding Scotland, Boys' Brigade Scotland, Scouts Scotland and Youth Scotland. Girlguiding Scotland Cashback development officer Anna Hannen Thomas said: 'The volunteers in this unit have shown how much they value the young people by creating a brilliant programme of activities for their weekend away. 'We're proud that Cashback was able to support this to happen and continues to support this unit to be a safe space for young people to push themselves and do amazing new things.' Visit for more details on volunteering with Girlguiding's units across Lanarkshire. And did you know Lanarkshire Live had its own app? Download yours for free here.
Yahoo
09-05-2025
- Entertainment
- Yahoo
Historic hotel in Blowing Rock to be demolished
A historic hotel in the North Carolina High Country is being torn down. The Green Park Inn in Blowing Rock, which was built back in 1891, is set for demolition. ALSO READ: Carolina Theatre gets ready to reopen after 47 years During its 133-year history, the hotel's guests included 2 presidents, Annie Oakley, and Margaret Mitchell, who penned a portion of Gone with the Wind while staying at the hotel. On Friday, Channel 9 crews spotted workers trying to salvage some of the wood from the historic hotel. The town of Blowing Rock said the new owner wants to build condos and a forty-room hotel on the site. VIDEO: Historic Carolina Theatre reopens in Uptown


WIRED
05-05-2025
- WIRED
Take a Tour of All the Essential Features in ChatGPT
If you missed WIRED's live, subscriber-only Q&A focused on the software features of ChatGPT, hosted by Reece Rogers, you can watch the replay here. Hello WIRED subscribers! Thank you to everyone who attended our most recent AI Unlocked webinar. I really enjoyed our lively discussion about ChatGPT's software features and wish I could have answered even more of your questions about using generative AI tools. I really enjoyed the questions about what ChatGPT can do beyond just chatting. Image search is a feature I use often, and here are my first impressions of the tool that I recorded back in September 2023 when it first dropped. I use ChatGPT's image search tool nowadays by snapping a picture with my phone when I don't recognize something. I'll upload the screenshot and ask the chatbot what it is. For example, I was recently at an Irish bar and learned what a hurley was when I saw one hanging on the wall and was perplexed. (Although, I also could have just asked the friendly bartenders when I got another coffee drink.) What are some ChatGPT features that I wasn't able to go deep on during the 45-minute session? Two come to mind: temporary chats and memory. Temporary chats keep things slightly more private. For example, the log of the conversation will not appear on the left side of your screen when it's over like it normally does. Temporary chats are not protected enough that you should feel comfortable sharing private information—definitely still don't do that—but this is a nice option for chats you don't necessarily need or want saved for your own use later. ChatGPT's memory function has gotten better over time. While some information about you will be stored passively as you use the tool (unless you turn it off in the settings), I think actively engaging with ChatGPT's memory by telling it your preferences does lead to a better software experience overall. What are the drawbacks of ChatGPT? There are three I want to highlight here. These are all issues I keep in mind as I'm using any AI tool, and think you should as well. First, hallucinations are still a problem, so you should never put full trust in an AI's output. Always doublecheck your answers against trusted sources of information. Second, generative AI amplifies biases. Some biases are very evident, while others are more subtle. Check out my interview with Margaret Mitchell, an AI ethics researcher at Hugging Face, to learn more about how a top researcher is thinking about stereotypes within AI tools. Third, generative AI tools are resource intensive when compared to other software programs, and the overall environmental impact of your usage may be much more than you'd expect. If you want to know more about data privacy when it comes to ChatGPT, then this recent article from WIRED's security team about what happens to your data if you follow the 'AI action figure' trend on social media is worth reading. As I mentioned in the webinar, our article about how to opt out from AI training is also worth checking out for more context and to learn what your options are across many different websites. Haven't already signed up for season two of the AI Unlocked newsletter? I would definitely recommend doing so. The 10 editions in season two include many hands-on tasks for you to try out multiple AI tools and think critically about the strengths and weaknesses. It includes many prompting tips for those hoping to better understand how to craft these kinds of software interactions. I also spent plenty of time answering more reader questions—one of the most fun and engaging parts of my job. I really appreciate you taking the time out of your day to support WIRED and watch this webinar. Talk to you soon.


Boston Globe
03-05-2025
- Politics
- Boston Globe
Today in History: May 3, Oklahoma City struck by historic tornado.
In 1937, Margaret Mitchell won the Pulitzer Prize for her novel, 'Gone with the Wind.' In 1948, the Supreme Court, in Shelley v. Kraemer, ruled that covenants prohibiting the sale of real estate to Blacks or members of other racial groups were legally unenforceable. Advertisement In 1979, the Conservative Party ousted the incumbent Labour government in British parliamentary elections. Conservative leader Margaret Thatcher would become the first female UK Prime Minister the following day. In 1986, aboard the longshot horse Ferdinand, Bill Shoemaker became the oldest jockey to win the Kentucky Derby at age 54. In 1999, the Bridge Creek–Moore tornado struck the Oklahoma City metropolitan area, causing 41 deaths and nearly 600 injuries. The tornado's top wind speed of 321 miles per hour was the highest ever recorded on earth. In 2003, the 'Old Man of the Mountain,' a 40-foot-tall granite outcropping in Franconia, N.H., that bore the resemblance of a human face in profile, collapsed despite decades of preservation efforts. In 2015, two gunmen were killed by a SWAT team in Garland, Texas, after they opened fire outside a purposely provocative contest for cartoon depictions of the Prophet Muhammad. Advertisement In 2016, in a stunning triumph for a political outsider, PresidentTrump all but clinched the Republican presidential nomination with a resounding victory in the Indiana primary election that knocked rival Ted Cruz out of the race. In 2018, a federal grand jury in Detroit indicted former Volkswagen CEO Martin Winterkorn on charges stemming from the company's diesel emissions cheating scandal. (Under Germany's constitution, he could not be extradited to the US to face charges.)


WIRED
23-04-2025
- Science
- WIRED
AI Is Spreading Old Stereotypes to New Languages and Cultures
Apr 23, 2025 12:31 PM Margaret Mitchell, an AI ethics researcher at Hugging Face, tells WIRED about a new dataset designed to test AI models for bias in multiple languages. Photo-Illustration:Margaret Mitchell is a pioneer when it comes to testing generative AI tools for bias. She founded the Ethical AI team at Google, alongside another well-known researcher, Timnit Gebru, before they were later both fired from the company. She now works as the AI ethics leader at Hugging Face, a software startup focused on open source tools. We spoke about a new dataset she helped create to test how AI models continue perpetuating stereotypes. Unlike most bias-mitigation efforts that prioritize English, this dataset is malleable, with human translations for testing a wider breadth of languages and cultures. You probably already know that AI often presents a flattened view of humans, but you might not realize how these issues can be made even more extreme when the outputs are no longer generated in English. My conversation with Mitchell has been edited for length and clarity. Reece Rogers: What is this new dataset, called SHADES, designed to do, and how did it come together? Margaret Mitchell: It's designed to help with evaluation and analysis, coming about from the BigScience project. About four years ago, there was this massive international effort, where researchers all over the world came together to train the first open large language model. By fully open, I mean the training data is open as well as the model. Hugging Face played a key role in keeping it moving forward and providing things like compute. Institutions all over the world were paying people as well while they worked on parts of this project. The model we put out was called Bloom, and it really was the dawn of this idea of 'open science.' We had a bunch of working groups to focus on different aspects, and one of the working groups that I was tangentially involved with was looking at evaluation. It turned out that doing societal impact evaluations well was massively complicated—more complicated than training the model. We had this idea of an evaluation dataset called SHADES, inspired by Gender Shades, where you could have things that are exactly comparable, except for the change in some characteristic. Gender Shades was looking at gender and skin tone. Our work looks at different kinds of bias types and swapping amongst some identity characteristics, like different genders or nations. There are a lot of resources in English and evaluations for English. While there are some multilingual resources relevant to bias, they're often based on machine translation as opposed to actual translations from people who speak the language, who are embedded in the culture, and who can understand the kind of biases at play. They can put together the most relevant translations for what we're trying to do. So much of the work around mitigating AI bias focuses just on English and stereotypes found in a few select cultures. Why is broadening this perspective to more languages and cultures important? These models are being deployed across languages and cultures, so mitigating English biases—even translated English biases—doesn't correspond to mitigating the biases that are relevant in the different cultures where these are being deployed. This means that you risk deploying a model that propagates really problematic stereotypes within a given region, because they are trained on these different languages. So, there's the training data. Then, there's the fine-tuning and evaluation. The training data might contain all kinds of really problematic stereotypes across countries, but then the bias mitigation techniques may only look at English. In particular, it tends to be North American– and US-centric. While you might reduce bias in some way for English users in the US, you've not done it throughout the world. You still risk amplifying really harmful views globally because you've only focused on English. Is generative AI introducing new stereotypes to different languages and cultures? That is part of what we're finding. The idea of blondes being stupid is not something that's found all over the world, but is found in a lot of the languages that we looked at. When you have all of the data in one shared latent space, then semantic concepts can get transferred across languages. You're risking propagating harmful stereotypes that other people hadn't even thought of. Is it true that AI models will sometimes justify stereotypes in their outputs by just making shit up? That was something that came out in our discussions of what we were finding. We were all sort of weirded out that some of the stereotypes were being justified by references to scientific literature that didn't exist. Outputs saying that, for example, science has shown genetic differences where it hasn't been shown, which is a basis of scientific racism. The AI outputs were putting forward these pseudo-scientific views, and then also using language that suggested academic writing or having academic support. It spoke about these things as if they're facts, when they're not factual at all. What were some of the biggest challenges when working on the SHADES dataset? One of the biggest challenges was around the linguistic differences. A really common approach for bias evaluation is to use English and make a sentence with a slot like: 'People from [ nation ] are untrustworthy.' Then, you flip in different nations. When you start putting in gender, now the rest of the sentence starts having to agree grammatically on gender. That's really been a limitation for bias evaluation, because if you want to do these contrastive swaps in other languages—which is super useful for measuring bias—you have to have the rest of the sentence changed. You need different translations where the whole sentence changes. How do you make templates where the whole sentence needs to agree in gender, in number, in plurality, and all these different kinds of things with the target of the stereotype? We had to come up with our own linguistic annotation in order to account for this. Luckily, there were a few people involved who were linguistic nerds. So, now you can do these contrastive statements across all of these languages, even the ones with the really hard agreement rules, because we've developed this novel, template-based approach for bias evaluation that's syntactically sensitive. Generative AI has been known to amplify stereotypes for a while now. With so much progress being made in other aspects of AI research, why are these kinds of extreme biases still prevalent? It's an issue that seems under-addressed. That's a pretty big question. There are a few different kinds of answers. One is cultural. I think within a lot of tech companies it's believed that it's not really that big of a problem. Or, if it is, it's a pretty simple fix. What will be prioritized, if anything is prioritized, are these simple approaches that can go wrong. We'll get superficial fixes for very basic things. If you say girls like pink, it recognizes that as a stereotype, because it's just the kind of thing that if you're thinking of prototypical stereotypes pops out at you, right? These very basic cases will be handled. It's a very simple, superficial approach where these more deeply embedded beliefs don't get addressed. It ends up being both a cultural issue and a technical issue of finding how to get at deeply ingrained biases that aren't expressing themselves in very clear language.