
Before Going to Tokyo, I Tried Learning Japanese With ChatGPT
On the final day of my visit to Japan, I'm alone and floating in some skyscraper's rooftop hot springs, praying no one joins me. For the last few months, I've been using ChatGPT's Advanced Voice Mode as an AI language tutor, part of a test to judge generative AI's potential as both a learning tool and a travel companion. The excessive talking to both strangers and a chatbot on my phone was illuminating as well as exhausting. I'm ready to shut my yapper for a minute and enjoy the silence.
When OpenAI launched ChatGPT late in 2022, it set off a firestorm of generative AI competition and public interest. Over two years later, many people are still unsure whether it can be useful in their daily lives outside of work.
A video from OpenAI in May of 2024 showing two researchers chatting back and forth, one in English and the other in Spanish, with ChatGPT acting as a low-latency interpreter, stuck in my memory. I wondered how practical the Advanced Voice Mode could be for learning how to speak bits of a new language and whether it's a worthwhile app for travelers.
To better understand how AI voice tools might transform the future of language learning, I spent a month practicing Japanese with the ChatGPT smartphone app before traveling to Tokyo for the first time. Outside of watching some anime, I had zero working knowledge of the language. During conversation sessions with the Advanced Voice Mode that usually lasted around 30 minutes, I often approached it as my synthetic, over-the-phone language tutor, practicing basic travel phrases for navigating transportation, restaurants, and retail shops.
On a previous trip, I'd used Duolingo, a smartphone app with language-learning quizzes and games, to brush up on my Spanish. I was curious how ChatGPT would compare. I often test new AI tools to understand their benefits and limitations, and I was eager to see if this approach to language learning could be the killer feature that makes these tools more appealing to more people.
Me and My AI Language Tutor
Jackie Shannon, an OpenAI product lead for multimodal AI and ChatGPT, says she uses the chatbot to practice Spanish vocabulary while driving to the office. She suggests beginners like me start by using it to learn phrases; more knowledgeable learners can jump straight into free-flowing dialogue with the AI tool. 'I think they should dive straight into conversation,' she says. 'Like, "Help me have a conversation about the news on X." Or, "Help me practice ordering dinner."'
So I worked on useful travel phrases with ChatGPT and acted out role-playing scenarios, like pretending to order food and make small talk at an izakaya restaurant. Nothing really stuck during the first two weeks, and I began to get nervous, but around week three I gained a loose grip on a few key Japanese phrases for travelers, and I felt noticeably less anxious about the impending interactions in another language.
ChatGPT is not necessarily designed with language acquisition in mind. 'This is a tool that has a number of different use cases, and it hasn't been optimized for language learning or translation yet,' says Shannon. The chatbot's generalized default settings can make early sessions frustratingly bland, but ChatGPT's memory feature caught on fairly quickly that I was planning a Japan trip and wanted speaking practice.
The 'memory' instructions for ChatGPT are passively updated by the software during conversations, and they impact how the AI talks to you. Go into the account settings to adjust or delete any of this information. An active way you can adjust the tool to be better suited for learning languages is to open the 'custom instructions' options and lay out your goals for the learning experience.
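For readers who tinker with OpenAI's API rather than the app, custom instructions correspond roughly to a system message sent ahead of the conversation. Below is a minimal, hypothetical Python sketch of framing tutoring goals that way; the goal wording and the `build_tutor_messages` helper are illustrative inventions, not anything OpenAI recommends.

```python
# Hypothetical sketch: expressing language-learning goals as a system
# message, the API-side analog of the ChatGPT app's custom instructions.

def build_tutor_messages(target_language: str, goals: list[str], user_prompt: str) -> list[dict]:
    """Assemble a chat-message list with tutoring goals stated up front."""
    system_text = (
        f"You are a patient {target_language} tutor for a beginner traveler. "
        + " ".join(goals)
    )
    return [
        {"role": "system", "content": system_text},
        {"role": "user", "content": user_prompt},
    ]

messages = build_tutor_messages(
    "Japanese",
    ["Correct my pronunciation bluntly.", "Keep replies short and repeat key phrases."],
    "How do I politely ask for a food recommendation?",
)
# This list could then be passed to any chat-completion client.
```

The point is the same one the app's settings make: stating your goals explicitly, rather than relying on passively accumulated memory, is what steers the model toward tutoring rather than generic chat.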
What frustrated me most was the incessant, unspecific guideline violation alerts during voice interactions, which ruined the flow of the conversation. ChatGPT would trigger a warning when I asked it to repeat a phrase multiple times, for example. (Extreme repetition is sometimes a method used by people hoping to break a generative AI tool's guardrails.) Shannon says OpenAI rolled out improvements related to what triggers a violation for Advanced Voice Mode and is looking to find a balance that prioritizes safety.
Also, be warned that Advanced Voice Mode can be a bit of a yes-man. If you don't request it to role-play as a tough-ass tutor, you may find the personality to be saccharine and annoying—I did. A handful of times ChatGPT congratulated me for doing a fabulous job after I definitely butchered a Japanese pronunciation. When I asked it to provide more detailed feedback to really teach me the language, the tool still wasn't perfect, but it was able to respond in a manner that fit my learning style better.
Comparing the overall experience to my past time with Duolingo, OpenAI's chatbot was more elastic, with a wider range of learning possibilities, whereas Duolingo's games were more habit forming and structured. Are ChatGPT's language abilities an existential threat to Duolingo? Not according to Klinton Bicknell, Duolingo's head of AI. 'If you're motivated right now, you can go to ChatGPT and get it to teach you something, including a language,' he says. 'Duolingo's success is providing a fun experience that's engaging and rewarding.'
The company partnered with OpenAI in the past and is currently using its AI models to power a feature where users can have conversations with an animated character to practice speaking skills.
Putting ChatGPT to the Test in Tokyo
ChatGPT really became useful when I wanted to practice a phrase or two before saying it while out and about in Tokyo. Over and over, I whispered into my smartphone on the sidewalk, requesting reminders of how to ask for food recommendations or confess that I don't understand Japanese very well.
Using Advanced Voice Mode to translate back and forth live may be great for longer conversations you'd want to have in more intimate settings, but at a buzzy restaurant, crowded shrine, or other common tourist spots in Japan, it's just easier to do asynchronous translations with the tool.
At a barbecue spot with an all-you-can-drink special and a mini-keg of lemon sour right under the table, the food came out but not the requested drinking mugs. I had a tough time requesting them. The waitress was patient with us as I spoke a few lines into ChatGPT and showed her the translation on my smartphone. She then explained I hadn't yet signed a waiver promising not to drink and drive and brought out a form to sign. A few minutes later, she returned with the mug. In this instance, OpenAI's chatbot was quite helpful, but I likely would have been just fine using the Google Translate app.
More times than I would like to admit, though, the phrases I thought I had down pat by practicing with ChatGPT ended up sloshing around in my head and embarrassing me. For example, while trying to get back to the hotel around 10 pm via the train, I got disoriented looking for the correct station exit. I was able to ask for help from one of the station staff members, but instead of saying 'thank you' (arigato gozaimasu) at the end, my tired mind blurted out the phrase for 'this one, please' (kore wo onegaishimasu) as I confidently strode away.
After a month of ChatGPT practice, did I really know Japanese? Of course not. But a few of the polite greetings and touristy phrases stuck well enough, most of the time at least, to navigate my way around Tokyo and feel like I could really enjoy the thrill of adventure in a new country.
As generative AI tools improve, they will keep getting better at helping language learners practice speaking as well as reading. Tomotaro Akizawa, an associate professor and program coordinator at Stanford's Inter-University Center for Japanese Language Studies in Yokohama, offers an example. 'Students who have just completed the beginner level can now try to read challenging literary works from the Shōwa era by using AI for translations, explanations, and word lists,' he says.
If students eventually end up relying only on generative AI tools and go their entire language learning journey sans human instructor, then the complexities of spoken language and communication may get flattened over time.
'The opportunity to personally experience the human elements embedded in the target language—such as emotions, thoughts, hesitations, or struggles—would be lost,' says Akizawa. 'Words spoken in conversation are not always as structured as those from a large language model.' AI may be more patient with you than a human tutor, but language learners risk losing the rough edges and experience-based insights that come with one.
Have you tried to learn to do anything with AI? Would you feel confident using AI to help with translation in public? Let us know your experiences by emailing hello@wired.com or commenting below.
Related Articles

Engadget
Everything you need to know about iOS 26 beta release: How to download it on your iPhone, new Apple features like Liquid Glass and more
Liquid Glass is a huge new change coming to iOS 26. (Apple)

Waiting until the fall can feel like ages when you're ready to upgrade your iPhone to iOS 26. But there's good news: you can test out all the features now by downloading and installing Apple's public beta, which CEO Tim Cook says is (with the other current beta operating systems) "by far the most popular developer betas we've had," 9to5Mac reports.

We also previewed the iOS 26 public beta release, which shows off the fresh home and lock screen redesign we've been asking to see for years. Called Liquid Glass, the new translucent look will extend across all of Apple's upcoming operating systems. The overhaul is one of several big changes coming to iOS, macOS, iPadOS and the rest of Apple's software suite, all of which were showcased during the company's WWDC keynote on June 9.

After overpromising on AI plans last year, Apple kept its iOS roadmap focused more on basic quality of life improvements this year. There are multiple useful additions coming to the Phone and Messages apps on your iPhone, for instance: Apple execs outlined the ability to weed out spam texts or other unknown senders and an option to hold your spot on a phone call when you've been waiting for a representative to pick up. Plus, a treasured feature that we took for granted is coming back (hint: it's in the Photos app).

Siri, meanwhile, is in a holding pattern. Apple has previously specified that its smarter voice assistant — first promised at WWDC 2024 — is delayed until some point "in the coming year," so you shouldn't expect any major changes in the current betas. But there are reports that Apple is aiming to give Siri a bigger brain transplant by basing it on third-party artificial intelligence models like OpenAI's ChatGPT or Anthropic's Claude, which could make 2026 a pivotal year.
With each beta, it seems like additional new improvements are popping up, like a newly discovered FaceTime feature that'll freeze your video if it detects nudity. Most newer iPhone models are eligible to download iOS 26 (both the betas and final version). Want to see the full list of new features coming this fall? Read on.

The current iPhone operating system is iOS 18, and Apple is still actively updating it — version 18.6 was just recently released. But don't expect to see iOS 19. Instead, Apple is skipping the numbering ahead to iOS 26 later this year. The company has decided to line up its iOS version numbers with a year-based system, similar to car model years. So while iOS and its sibling operating systems will be released in late 2025, they're all designated "26" to reflect the year ahead. (Meanwhile, iOS 18 is still getting new versions this summer, too.)

It's official, we're moving to iOS 26. (Apple)

Let's be honest. Out of everything announced at WWDC this year, the new Liquid Glass design was the star of the show. The iPhone's home and lock screens have looked pretty much the same year after year — the last exciting thing (in my opinion) was the option to add your own aesthetic to your home screen by customizing your apps and widgets. So seeing the home and lock screens' new facelift is refreshing.

So what exactly is Liquid Glass? Apple calls it a "new translucent material" since, well, the apps and widgets are clear. However, the screen can still adapt to dark and light modes, depending on surroundings. You'll also notice buttons with a new floating design in several apps, like Phone and Maps. They're designed to be less distracting than the current buttons, but are still easy to see. While the design overhaul has proven to be controversial since its announcement, some — including Engadget's own Devindra Hardawar — like the new direction, even if it's somewhat reminiscent of Microsoft's translucent Windows Vista Aero designs from nearly twenty years ago.
That said, as of the release of the iOS 26 beta 2, Apple has already incorporated some user feedback into the design, dialing back the transparency in at least some places. And while it will continue to evolve, Apple users won't be able to escape it: Liquid Glass was designed to make all of Apple's OSes more cohesive. Here's a look at how the translucent aesthetic will look with the new macOS Tahoe 26 on your desktop.

iOS 26 has a laundry list of new features. Among the most worthwhile:

Phone app redesign: You'll finally be able to scroll through contacts, recent calls and voicemail messages all on one screen. It also comes with a new feature called Hold Assist that'll notify you when an agent comes to the phone so you can avoid the elevator music and continue on with other tasks.

Live Translation in Phone, FaceTime and Messages: iOS 26 is bringing the ability to have a conversation via phone call or text message with someone who speaks another language. Live Translation will translate your conversation in real time, which results in some stop-and-go interactions in the examples Apple shared during its presentation.

Polls in group chats: Tired of sorting through what seems like hundreds of messages in your group chat? You and your friends will soon be able to create polls in group messages for deciding things like which brunch spot you're eating at or whose car you're taking on a road trip.

Filtering unknown senders in Messages: If you haven't received spam texts about unpaid tolls or other citations, you're lucky. For those of us who have, those annoying messages will soon be filtered away in a separate folder.

Visual Intelligence: Similar to a reverse Google image search, this new feature will allow you to search for anything that's on your iPhone screen. For instance, if you spot a pair of shoes someone is wearing in an Instagram photo, you can screenshot it and use Visual Intelligence to find those shoes (or similar ones) online.
Photos tabs are back: For anyone who's still frustrated with the Photos changes made last year, you'll be happy to know that your tabs are coming back. Library and Collections will have their own separate spaces so you don't have to scroll to infinity to find what you're looking for.

FaceTime "Communication Safety" feature: A newer addition to iOS 26 appears to be the FaceTime "Communication Safety" feature that pauses communications if and when nudity is detected. It appears to be a child safety feature that uses on-device detection, thus obviating any cloud-based privacy issues.

Apple's Hold Assist will be nifty for those pesky services that put you on hold for 10 or more minutes. (Apple)

A few iPhone models that run the current version of iOS — iPhone XR, XS and XS Max — won't be compatible with the latest upgrade. But any iPhones released in 2019 or later will be eligible for the iOS 26 update: iPhone SE (second generation or later), iPhone 11, iPhone 11 Pro, iPhone 11 Pro Max, iPhone 12, iPhone 12 mini, iPhone 12 Pro, iPhone 12 Pro Max, iPhone 13, iPhone 13 mini, iPhone 13 Pro, iPhone 13 Pro Max, iPhone 14, iPhone 14 Plus, iPhone 14 Pro, iPhone 14 Pro Max, iPhone 15, iPhone 15 Plus, iPhone 15 Pro, iPhone 15 Pro Max, iPhone 16e, iPhone 16, iPhone 16 Plus, iPhone 16 Pro and iPhone 16 Pro Max. Not listed here are the presumed new iPhone 17 models (or maybe iPhone 26?) that are all but certain to be announced and released in September.

The iOS 26 public beta is now available to download via the Apple Beta Software Program. If you're not already a member, you'll need to sign up to try out all the latest features. Just visit and sign up with your phone number or email address. It's free. Once you're in, you can install it by going to Settings > General > Software Update and selecting iOS 26 public beta. A word of caution: Don't sign up with your main iPhone unless you're OK with any risks that occur with using an OS that isn't finalized.
iOS 26 will be released to the public this fall. It usually comes in September, within a week of the Apple iPhone event. Last year, it rolled out to iPhone users on September 16 — exactly one week after the iPhone 16 lineup was announced. If you're more interested in the Apple Intelligence features coming, here's everything Apple revealed for iOS, macOS and more during WWDC. Also, check out how iOS 26 screenshots could be an intriguing preview of Apple's delayed Siri rework.

Update, August 1: Added quote from Tim Cook about iOS 26.
Update, July 31: Noted that iOS 18.6 is now available.
Update, July 24: Noted the iOS 26 public beta is now available.
Update, July 3: Noted new FaceTime feature found in the developer beta.
Update, June 30: Noted ongoing iOS 18 releases, and reports that Apple is considering additional external LLMs for Siri.
Update, June 25: Noted changes added in iOS 26 beta 2.

If you buy something through a link in this article, we may earn commission.

Business Insider
Your chats with Meta's AI might end up on Google — just like ChatGPT until it turned them off
OpenAI's ChatGPT raised some eyebrows this week when people realized that certain chats could be found via Google search. Although people had checked a box to share the chats publicly, it seemed likely that not everyone understood what they were doing. On Thursday, OpenAI said that it would stop having shared chats be indexed by Google. Meanwhile, Meta's stand-alone Meta AI app also allows users to share their chats — and it will continue to allow Google to index them, meaning that they can show up in a search. I did a bunch of Google searches and found lots of Meta AI conversations in the results.

The Meta AI app, launched this spring, lets people share chats to a "Discover" feed. Google crawlers can "index" that feed and then serve up the results when people use Google search. So, for instance, if you do a site-specific search on Google for " and the keyword "balloons," you might come up with a chat someone had with the Meta AI bot about where to get the best birthday balloons — if that person tapped the button to allow the chat to be shared.

As Business Insider reported in June, the Meta AI Discover feed had been full of examples of chats that seemed personal in nature — medical questions, specific career advice, relationship matters. Some contained identifying information like phone numbers, email addresses, or full names. Although all of these people did click to share, based on the personal nature of some of the chats, I could only guess that people might have misunderstood what it meant to share the conversation.

After Business Insider wrote about this a few weeks ago, the Meta AI app made some tweaks to warn users more clearly about how the Discover feed works. Now, when you choose to share a conversation, you get a pop-up with the warning: "Conversations on feed are public so anyone can see them and engage." The additional warning seems to be working.
Scrolling through the Discover feed, I now see mainly instances of people using it for image creation and far fewer accidental private text conversations (although there still seemed to be at least a few of those). Meanwhile, Daniel Roberts, a representative for Meta, confirmed that Meta AI chats shared to its Discover feed would continue to be indexed by Google. He reiterated the multi-step process I just described.

For now, Meta AI can only be used via its mobile app, not the web. This might lead people to think that even the Discover feed exists as a sort of walled garden, separate from "the internet" and existing only within the Meta AI app. But posts from the Discover feed (and only those public posts) can be shared as links around the web — and that's where the Google indexing comes in. If this sounds slightly confusing, it is, and it may well be confusing to users, too.

Now, it's possible that some people really do want to share their AI chats with the general public, and are happy to have those chats show up on Google searches along with their Instagram or Facebook handles. But I still don't understand why anyone would want to share their interactions — or why anyone else would want to read them.
Yahoo
Federal Reserve economists aren't sold that AI will actually make workers more productive, saying it could be a one-off invention like the light bulb
A new Federal Reserve Board staff paper concludes that generative artificial intelligence (genAI) holds significant promise for boosting U.S. productivity, but cautions that its widespread economic impact will depend on how quickly and thoroughly firms integrate the technology. Titled 'Generative AI at the Crossroads: Light Bulb, Dynamo, or Microscope?' the paper, authored by Martin Neil Baily, David M. Byrne, Aidan T. Kane, and Paul E. Soto, explores whether genAI represents a fleeting innovation or a groundbreaking force akin to past general-purpose technologies (GPTs) such as electricity and the internet.

The Fed economists ultimately conclude their 'modal forecast is for a noteworthy contribution of genAI to the level of labor productivity,' but caution they see a wide range of plausible outcomes, both in terms of its total contribution to making workers more productive and how quickly that could happen. To return to the light-bulb metaphor, they write that 'some inventions, such as the light bulb, temporarily raise productivity growth as adoption spreads, but the effect fades when the market is saturated; that is, the level of output per hour is permanently higher but the growth rate is not.' Here's why they regard it as an open question whether genAI may end up being a fancy tech version of the light bulb.

GenAI: a tool and a catalyst

According to the authors, genAI combines traits of GPTs—those that trigger cascades of innovation across sectors and continue improving over time—with features of 'inventions of methods of invention' (IMIs), which make research and development (R&D) more efficient. The authors do see potential for genAI to be a GPT like the electric dynamo, which continually sparked new business models and efficiencies, or an IMI like the compound microscope, which revolutionized scientific discovery.
The Fed economists did caution that it is early in the technology's development, writing 'the case that generative AI is a general-purpose technology is compelling, supported by the impressive record of knock-on innovation and ongoing core innovation.' Since OpenAI launched ChatGPT in late 2022, the authors said, genAI has demonstrated remarkable capabilities, from matching human performance on complex tasks to transforming frontline work in writing, coding, and customer service. That said, the authors said they're finding scant evidence about how many companies are actually using the technology.

Limited but growing adoption

Despite such promise, the paper stresses that most gains are so far concentrated in large corporations and digital-native industries. Surveys indicate high genAI adoption among big firms and technology-centric sectors, while small businesses and other functions lag behind. Data from job postings shows only modest growth in demand for explicit AI skills since 2017. 'The main hurdle is diffusion,' the authors write, referring to the process by which a new technology is integrated into widespread use. They note that typical productivity booms from GPTs like computers and electricity took decades to unfold as businesses restructured, invested, and developed complementary innovations.

'The share of jobs requiring AI skills is low and has moved up only modestly, suggesting that firms are taking a cautious approach,' they write. 'The ultimate test of whether genAI is a GPT will be the profitability of genAI use at scale in a business environment and such stories are hard to come by at present.' They note that many individuals are using the technology, 'perhaps unbeknownst to their employers,' and they speculate that future use of the technology may become so routine and 'unremarkable' that companies and workers no longer know how much it's being used.
Knock-on and complementary technologies

The report details how genAI is already driving a wave of product and process innovation. In healthcare, AI-powered tools draft medical notes and assist with radiology. Finance firms use genAI for compliance, underwriting, and portfolio management. The energy sector uses it to optimize grid operations, and information technology is seeing multiple uses: programmers using GitHub Copilot completed tasks 56% faster, and call center operators using conversational AI saw a 14% productivity boost. Meanwhile, ongoing advances in hardware, notably rapid improvements in the chips known as graphics processing units, or GPUs, suggest genAI's underlying engine is still accelerating. Patent filings related to AI technologies have surged since 2018, coinciding with the rise of the Transformer architecture—a backbone of today's large language models.

'Green shoots' in research and development

The paper also finds genAI increasingly acting as an IMI, enhancing observation, analysis, communication, and organization in scientific research. Scientists now use genAI to analyze data, draft research papers, and even automate parts of the discovery process, though questions remain about the quality and originality of AI-generated output. The authors highlight growing references to AI in R&D initiatives, both in patent data and corporate earnings calls, as further evidence that genAI is gaining a foothold in the innovation ecosystem.

Cautious optimism—and open questions

While the prospects for a genAI-driven productivity surge are promising, the authors warn against expecting overnight transformation. The process will require significant complementary investments, organizational change, and reliable access to computational and electric power infrastructure. They also emphasize the risks of investing blindly in speculative trends—a lesson from past tech booms.
'GenAI's contribution to productivity growth will depend on the speed with which that level is attained, and historically, the process for integrating revolutionary technologies into the economy is a protracted one,' the report concludes. Despite these uncertainties, the authors believe genAI's dual role—as a transformative platform and as a method for accelerating invention—bodes well for long-term economic growth if barriers to widespread adoption can be overcome. Still, what if it's just another light bulb?

For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing. This story was originally featured on