NIKKEI Film: Why learn English in the age of AI?

Nikkei Asia, 22-06-2025
WATARU ITO
TOKYO -- AI can correctly answer about 90% of the University of Tokyo's English entrance exam questions and is capable of scoring 900 out of a perfect 990 on the TOEIC (Test of English for International Communication). The average TOEIC score among Japanese test-takers in 2023 was 561.
Developments such as these are significant enough that AI translation researcher Eiichiro Sumita asserts, "Practical English for business use should be left to AI."
Meanwhile, English-language education has accelerated in Japan in order to better nurture people who can play active roles in the world -- a move that has received a strong push from the business community. English-language kindergartens and international schools are also popular, and parents are enthusiastic about English education.
However, these advancements in AI have raised an intriguing question: Is English-language education even necessary anymore? NIKKEI Film explores the future of English learning in the age of AI, with the help of a class of fifth-grade elementary school students just starting to study English.

Related Articles

Tech's diversity crisis is baking bias into AI systems

Japan Times, 16 hours ago

As an Afro-Latina woman with degrees in computer and electrical engineering, Maya De Los Santos hopes to buck a trend by forging a career in AI, a field dominated by white men.

AI needs her, experts and observers say. Built-in viewpoints and bias, unintentionally imbued by its creators, can make the fast-growing digital tool risky as it is used to make significant decisions in areas such as hiring processes, health care, finance and law enforcement, they warn.

"I'm interested in a career in AI because I want to ensure that marginalized communities are protected from and informed on the dangers and risks of AI and also understand how they can benefit from it," said De Los Santos, a first-generation U.S. college student. "This unfairness and prejudice that exists in society is being replicated in the AI brought into very high stakes scenarios and environment, and it's being trusted, without more critical thinking."

Women represent 26% of the AI workforce, according to a UNESCO report, and men hold 80% of tenured faculty positions at university AI departments globally.

Blacks and Hispanics are also underrepresented in the AI workforce, a 2022 census data analysis by Georgetown University showed. Among AI technical occupations, Hispanics held about 9% of jobs, compared with more than 18% of U.S. jobs overall, it said. Black workers held about 8% of technical AI jobs, compared with nearly 12% of U.S. jobs overall, it said.

AI bias

De Los Santos will soon begin a PhD program in human-computer interaction at Brown University in Providence, Rhode Island.

She said she wants to learn not only how to educate marginalized communities on AI technology but also to understand privacy issues and AI bias, also called algorithm or machine learning bias, which produces results that reflect and perpetuate societal biases.

Bias has unintentionally seeped into some AI systems as, for example, the software engineers creating problem-solving techniques integrate their own perspectives and often-limited data sets. Amazon scrapped an AI recruiting tool when it found it was selecting resumes favoring men over women. The system had been trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of the preponderance of men across the industry, and the system in effect taught itself that male candidates were preferable.

"When people from a broader range of life experiences, identities and backgrounds help shape AI, they're more likely to identify different needs, ask different questions and apply AI in new ways," said Tess Posner, founding CEO of AI4ALL, a nonprofit working to develop an inclusive pipeline of AI professionals. "Inclusion makes the solutions created by AI more relevant to more people," said Posner.

Promoting diversity

AI4ALL counts De Los Santos as one of the 7,500 students it has helped navigate the barriers to getting a job in AI since 2015. By targeting historically underrepresented groups, the nonprofit aims to diversify the AI workforce. AI engineer jobs are among the fastest-growing positions globally and the fastest-growing overall in the U.S. and the United Kingdom, according to LinkedIn.

Posner said promoting diversity means starting early in education by expanding access to computer science classes for children. About 60% of public high schools offer such classes, with Blacks, Hispanics and Native Americans less likely to have access. Ensuring that students from underrepresented groups know about AI as a potential career, creating internships and aligning them with mentors is critical, she said.

Efforts to make AI more representative of American society are colliding with U.S. President Donald Trump's backlash against diversity, equity and inclusion (DEI) programs at the federal government, higher education and corporate levels. DEI offices and programs in the U.S. government have been terminated and federal contractors banned from using affirmative action in hiring. Companies from Goldman Sachs to PepsiCo have halted or cut back diversity programs.

Safiya Noble, a professor at the University of California, Los Angeles, and founder of the Center on Resilience & Digital Justice, said she worries the government's attack on DEI will undermine efforts to create opportunities in AI for marginalized groups. "One of the ways to repress any type of progress on civil rights is to make the allegation that tech and social media companies have been too available to the messages of civil rights and human rights," said Noble. "You see the evidence with their backlash against movements like Black Lives Matter and allegations of anti-conservative bias," she said.

Globally, from 2021 to 2024, the number of women working in AI increased by just 4%, UNESCO says. While progress may be slow, Posner said she is optimistic. "There's been a lot of commitment to these values of inclusion," she said. "I don't think that's changed, even if, as a society, we are wrestling with what inclusion really means and how to do that across the board."

Japanese government urging citizens to use generative AI more

SoraNews24, a day ago

Government would be happy if you at least tried generating a mildly bawdy limerick.

It's certainly hard to imagine life before generative AI came out. Without it, I never would have known what my microwave looked like if it were in a Studio Ghibli movie, or felt the lingering dread that anything I read or saw online is neither authentic human expression nor correct. And yet, despite these revolutionary changes to society, in its annual Communications White Paper, Japan's Ministry of Internal Affairs and Communications says that gen-AI is being underused in Japan compared to the rest of the world.

According to the white paper, only 26.7 percent of people in Japan have ever used generative AI. While that nearly tripled last year's 9.1 percent usage rate, it's still far below other countries' rates, such as 81.2 percent in China, 68.8 percent in the USA, and 59.2 percent in Germany. One might be quick to assume Japan's aging population is to blame, but even when focusing solely on people in their 20s, the rate is still only 44.7 percent, and usage by businesses is only slightly higher at 49.7 percent. Also interesting to note is that AI usage by people in their 30s in Japan is slightly lower than by people in their 40s, at 23.8 and 29.6 percent respectively.

▼ Feline usage, however, remains abysmally low.

The white paper concludes that "Japan is lagging behind AI-advanced countries of the world in terms of technology, industry, and applications, and further promotion of AI usage is needed in daily life." Online comments from trending-news Internet portal Hachima Kiko didn't disagree, but felt that certain aspects of Japanese society may need changing first.

"I don't do anything like AI illustrations, but it's good for expanding on searches and proofreading."
"I'm not surprised. We're struggling to get people to use cashless payment systems."
"We should start by replacing everyone on TV with generated characters to end all the harassment and abuse there."
"Using AI in game rendering can greatly reduce memory usage. I wonder if Japanese game makers are looking into it properly."
"Japan needs to stop using floppy disks and fax machines first."
"Maybe we just prefer the warmth of humanity."
"That many people are using it in China?!"
"I think if there were a domestically produced AI, more people would get into it."
"It's not important how widely it's used, but if it's being used properly."
"In Japan, people who have used it to make money were arrested, and people who used it online were harassed."

In fairness, the people who were arrested were using AI to make money in illegal ways, so I don't think that's a valid argument that Japan has a stifling environment. If anything, it shows there are AI entrepreneurs here trying to make things happen. People just need to find more legitimate applications for it.

That being said, I've recently had to deal with a few AI customer service bots from other countries and wasn't really blown away by their effectiveness. Maybe Japan can stand to be a little sluggish on AI adoption until it starts working a little more smoothly.

Source: Communications White Paper, FNN Online Prime, Hachima Kiko

'Stuck in limbo': Over 90% of X's Community Notes unpublished, study says

Japan Today, 2 days ago

By Anuj CHOPRA

More than 90 percent of X's Community Notes -- a crowd-sourced verification system popularized by Elon Musk's platform -- are never published, a study said, highlighting major limits in its effectiveness as a debunking tool.

The study by the Digital Democracy Institute of the Americas (DDIA), which analyzed the entire public dataset of 1.76 million notes published by X between January 2021 and March 2025, comes as the platform's CEO Linda Yaccarino resigned after two years at the helm.

The community-driven moderation model -- now embraced by major tech platforms including Facebook owner Meta and TikTok -- allows volunteers to contribute notes that add context or corrections to posts. Other users then rate the proposed notes as "helpful" or "not helpful." If the notes get "helpful" ratings from enough users with diverse perspectives, they are published on X, appearing right below the challenged posts (a simplified sketch of this publication rule appears after the article).

"The vast majority of submitted notes -- more than 90 percent -- never reach the public," DDIA's study said. "For a program marketed as fast, scalable, and transparent, these figures should raise serious concerns."

Among English notes, the publication rate dropped from 9.5 percent in 2023 to just 4.9 percent in early 2025, the study said. Spanish-language notes, however, showed some growth, with the publication rate rising from 3.6 percent to 7.1 percent over the same period, it added.

A vast number of notes remain unpublished due to a lack of consensus among users during rating. Thousands of notes also go unrated, possibly never seen and never assessed, according to the report.

"As the volume of notes submitted grows, the system's internal visibility bottleneck becomes more apparent -- especially in English," the study said. "Despite a rising number of contributors submitting notes, many notes remain stuck in limbo, unseen and unevaluated by fellow contributors, a crucial step for notes to be published."

In a separate finding, DDIA's researchers identified not a human but a bot-like account -- dedicated to flagging crypto scams -- as the most prolific contributor to the program in English, submitting more than 43,000 notes between 2021 and March 2025. However, only 3.1 percent of those notes went live, suggesting most went unseen or failed to gain consensus, the report said.

The study also noted that the time it takes for a note to go live had improved over the years, dropping from an average of more than 100 days in 2022 to 14 days in 2025. "Even this faster timeline is far too slow for the reality of viral misinformation, timely toxic content, or simply errors about real-time events, which spread within hours, not weeks," DDIA's report said.

The findings are significant as tech platforms increasingly view the community-driven model as an alternative to professional fact-checking, which conservative advocates in countries such as the United States have long accused of liberal bias. Studies have shown Community Notes can work to dispel some falsehoods, such as vaccine misinformation, but researchers have long cautioned that it works best for topics where there is broad consensus. Some researchers have also cautioned that Community Notes users can be driven by partisan motives and tend to target their political opponents.

X introduced Community Notes during the tenure of Yaccarino, who said on Wednesday that she had decided to step down after leading the company through a major transformation. No reason was given for her exit, but the resignation came as Musk's artificial intelligence chatbot Grok triggered an online firestorm over its anti-Semitic comments that praised Adolf Hitler and insulted Islam in separate posts on X.

© 2025 AFP
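
For readers curious about the mechanics, the publication rule described in the article -- a note goes live only if it draws "helpful" ratings from enough users with diverse perspectives -- can be sketched in a few lines of Python. This is only an illustrative simplification under assumed thresholds, not X's actual scoring algorithm (which is a more elaborate bridging model); the function and parameter names below (note_should_publish, min_helpful, min_viewpoints, the cluster labels) are hypothetical.

    from collections import Counter
    from dataclasses import dataclass

    # Illustrative sketch only: the grouping of raters into "viewpoint"
    # clusters and the fixed thresholds are assumptions, not X's rules.

    @dataclass
    class Rating:
        rater_viewpoint: str   # coarse cluster label for the rater (assumed)
        helpful: bool

    def note_should_publish(ratings, min_helpful=5, min_viewpoints=2):
        # Count "helpful" votes per viewpoint cluster.
        helpful_by_viewpoint = Counter(
            r.rater_viewpoint for r in ratings if r.helpful
        )
        total_helpful = sum(helpful_by_viewpoint.values())
        # Publish only if there are enough helpful votes AND they come
        # from at least two distinct viewpoint clusters.
        return (total_helpful >= min_helpful
                and len(helpful_by_viewpoint) >= min_viewpoints)

    ratings = [
        Rating("cluster_a", True), Rating("cluster_a", True),
        Rating("cluster_a", True), Rating("cluster_b", True),
        Rating("cluster_b", True), Rating("cluster_b", False),
    ]
    print(note_should_publish(ratings))  # True: 5 helpful votes spanning 2 clusters

The second condition is the important one: raw "helpful" counts alone are not enough, which is consistent with the study's finding that many notes stall for lack of cross-perspective consensus.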
