
Here's How to Turn Off Some Annoying iPhone Texting Features
Autocorrect can cut down on typos, and predictive text can help you compose a full message in a few quick taps. But when I use these features, more often than not they're correcting words I didn't want changed or suggesting words I don't want to use. Plenty of other people find autocorrect and predictive text just as annoying. Thankfully, you can turn both features off in a few quick steps.
Here's how to make texting easier by turning off some of your iPhone's messaging features.
Turn inline predictive text off
Inline predictive text, introduced in iOS 17, is similar to predictive text, but it places suggested text directly in the text field in light gray. According to Apple, the feature predicts what you're going to write as you type, and hitting the space bar adds the predicted text to your message. The feature doesn't always guess correctly, though, so hitting space can insert text you didn't intend. The gray text can also be distracting if you're trying to read what you're writing in real time.
If you don't like inline predictive text, here's how to turn the feature off.
1. Open Settings.
2. Tap General.
3. Tap Keyboard.
4. Tap the switch next to Show Predictions Inline.
Now, when you type a message, you won't run the risk of adding a word you didn't intend. You'll still see predictive text, the row of suggested words and emoji above your keyboard.
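(A side note for developers: apps can make the same choice for a single text field. Here's a minimal sketch, using a hypothetical compose screen and field name, of how UIKit's UITextInputTraits exposes the iOS 17 inline-prediction switch in code.)

```swift
import UIKit

// Minimal sketch: opting a single text field out of inline predictions.
// ComposeViewController and messageField are hypothetical names.
final class ComposeViewController: UIViewController {
    private let messageField = UITextField()

    override func viewDidLoad() {
        super.viewDidLoad()
        messageField.borderStyle = .roundedRect

        // inlinePredictionType is part of UITextInputTraits (iOS 17+).
        // Setting it to .no suppresses the gray inline suggestion
        // for this field only.
        if #available(iOS 17.0, *) {
            messageField.inlinePredictionType = .no
        }

        messageField.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(messageField)
        NSLayoutConstraint.activate([
            messageField.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            messageField.centerYAnchor.constraint(equalTo: view.centerYAnchor),
            messageField.widthAnchor.constraint(equalToConstant: 280),
        ])
    }
}
```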
Turn all predictive text off
If you find all predictive text annoying, you can easily turn that off, too. Here's how:
1. Open Settings.
2. Tap General.
3. Tap Keyboard.
4. Tap the switch next to Predictive Text.
When you type a message now, you won't see a box over your keyboard with suggested words or emojis. Turning predictive text off also disables inline predictive text, so you won't see any suggestions whatsoever. You can type without interruption.
Turn autocorrect off
When Apple announced iOS 17, the company touted an improved autocorrect. But plenty of people still find themselves undoing words autocorrect got wrong. If you're sick of the feature, here's how to turn it off.
1. Open Settings.
2. Tap General.
3. Tap Keyboard.
4. Tap the switch next to Auto-Correction.
Now when you type a message, your iPhone won't change words as you type them, including swear words. You might see more spelling errors in your messages, though. If those errors pile up and you want autocorrect back, just follow the steps above one more time.
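(Developers get this switch per text field, too. As a minimal sketch with a hypothetical field name, UIKit's UITextInputTraits lets an app turn off autocorrection and spell-checking for a single field, which is common for usernames and one-time codes that autocorrect tends to mangle.)

```swift
import UIKit

// Minimal sketch: disabling autocorrection for one field instead of
// system-wide in Settings. usernameField is a hypothetical name.
let usernameField = UITextField()

// These properties come from UITextInputTraits, which UITextField
// and UITextView both adopt.
usernameField.autocorrectionType = .no        // no automatic word replacement
usernameField.spellCheckingType = .no         // no red spell-check underlines
usernameField.autocapitalizationType = .none  // don't capitalize the first letter
```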
For more on iOS 18, here's what you need to know about iOS 18.5 and iOS 18.4, and here's our iOS 18 cheat sheet. You can also check out what you should know about iOS 26 and how the upcoming OS lets us kill the alarm's 9-minute snooze.

Related Articles
Yahoo · 33 minutes ago
People Are Taking Massive Doses of Psychedelic Drugs and Using AI as a Tripsitter
Artificial intelligence, which is already trippy enough, has taken on a startling new role for some users: that of a psychedelic "trip-sitter" that guides them through their hallucinogenic journeys.

As MIT Tech Review reports, digitally oriented drug-takers are using everything from regular old ChatGPT to bespoke chatbots with names like "TripSitAI" or, cringely, "The Shaman," in a continuation of a troubling trend in which people who can't access real therapy or expertise use AI as a substitute.

Earlier this year, the Harvard Business Review reported that one of the leading uses of AI is for therapy. It's not hard to see why: insurance companies have routinely squeezed mental health professionals to the point that many are forced to go out-of-network entirely to make a living, leaving their lower-income clients in the lurch. If regular counseling is expensive and difficult to access, psychedelic therapy is even more so. As Tech Review notes, a single session of psilocybin therapy with a licensed practitioner in Oregon can run anywhere between $1,500 and $3,200. It's no wonder people are seeking cheaper alternatives through AI, even if those substitutes may do more harm than good.

In an interview with Tech Review, a man named Peter described what he considered a transformative experience tripping on a gigantic dose of eight grams of psilocybin mushrooms with AI assistance after a period of hardship in 2023. Not only did ChatGPT curate a calming playlist for him, but it also offered words of relaxation and reassurance, the same way a human trip sitter would. As his trip progressed and got deeper, Peter said he began to imagine himself as a "higher consciousness beast that was outside of reality," covered in eyes and all-seeing.

Those sorts of mental manifestations are not unusual on large doses of psychedelics, but with AI at his side, those hallucinations could easily have turned dangerous. Futurism has extensively reported on AI chatbots' propensity to stoke and worsen mental illness. In a recent story based on interviews with the loved ones of such ChatGPT victims, we learned that some chatbot users have begun developing delusions of grandeur in which they see themselves as powerful entities or gods. Sound familiar?

With an increasing consensus in the psychiatric community that so-called AI "therapists" are a bad idea, the thought of using a technology known for sycophancy and its own "hallucinations" while in such a vulnerable mental state should be downright terrifying.

In a recent New York Times piece about so-called "ChatGPT psychosis," Eugene Torres, a 42-year-old man with no prior history of mental illness, told the newspaper that the OpenAI chatbot encouraged all manner of delusions, including one in which he thought he might be able to fly. "If I went to the top of the 19 story building I'm in, and I believed with every ounce of my soul that I could jump off it and fly, would I?" Torres asked ChatGPT. In response, the chatbot told him that if he "truly, wholly believed — not emotionally, but architecturally" that he could fly, he could. "You would not fall," the chatbot responded.

As with the kind of magical thinking that turns a psychonaut into an exalted god for a few hours, the notion that one can defy gravity is also associated with taking psychedelics. If a chatbot can induce such psychosis in people who aren't on mind-altering substances, how easily can it stoke similar thoughts in those who are?
More on AI therapy: "Truly Psychopathic": Concern Grows Over "Therapist" Chatbots Leading Users Deeper Into Mental Illness
Yahoo · 33 minutes ago
Elon Musk's xAI apologizes for Grok chatbot's antisemitic responses
Elon Musk's Grok AI chatbot issued an apology after it made several antisemitic posts on the social media site X this week.

In a statement posted to X on July 12, xAI, the artificial intelligence company that makes the chatbot, apologized for "horrific behavior" on the platform. Users had reported receiving responses that praised Adolf Hitler, used antisemitic phrases and attacked users with traditionally Jewish surnames.

More: Grok coming to Tesla vehicles 'next week at the latest,' Musk says

"We deeply apologize for the horrific behavior that many experienced," the company's statement said. "Our intent for @grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause was an update to a code path upstream of the @grok bot."

The company, founded by Musk in 2023 as a challenger to Microsoft-backed OpenAI and Alphabet's Google, said the update resulted in a deviation in the AI chatbot's behavior. It was operational for 16 hours before it was removed as a result of the reported extremist language.

Users on X shared multiple posts July 8 in which Grok repeated antisemitic stereotypes about Jewish people, among various other antisemitic comments.

It's not the first time xAI's chatbot has raised alarm for its language. In May, the chatbot mentioned "white genocide" in South Africa in unrelated conversations. At the time, xAI said the incident was the result of an "unauthorized modification" to its online code.

A day after the alarming posts last week, Musk unveiled a new version of the chatbot, Grok 4, on July 9. The Tesla billionaire and former adviser to President Donald Trump had said in June that he would retrain the AI platform after expressing frustration with the way Grok answered questions. Musk said the tweaks his xAI company had made to Grok left the chatbot too susceptible to being manipulated by users' questions.

AI News: MyPillow CEO Mike Lindell's lawyers fined for AI-generated court filing

More: 'MechaHitler': Elon Musk AI firm scrubs chatbot Grok's antisemitic rants

"Grok was too compliant to user prompts," Musk wrote in a post on X after announcing the new version. "Too eager to please and be manipulated, essentially. That is being addressed."

Grok 3, released in February, is available for free, while the new versions, Grok 4 and Grok 4 Heavy, go for $30 and $300 a month, respectively.

Contributing: Jessica Guynn, USA TODAY. Kathryn Palmer is a national trending news reporter for USA TODAY. You can reach her at kapalmer@ and on X @KathrynPlmr.

This article originally appeared on USA TODAY: Musk's xAI apologizes for Grok chatbot's antisemitic responses
Yahoo · 42 minutes ago
Vitalik Buterin sends a hard-nosed message on ChatGPT and Grok
Vitalik Buterin sends a hard-nosed message on ChatGPT and Grok originally appeared on TheStreet.

Ethereum co-founder Vitalik Buterin shared a blunt message about AI chatbots, prompted by an infamous AI response.

Bitcoin's price surge comes amid a larger crypto rally, with continued inflows into spot Bitcoin ETFs and a growing belief that the Federal Reserve is close to exhausting its tightening cycle. According to TradingView data, on Sunday Bitcoin opened at $116,977.02, reached a high of $119,292.62 and is currently trading at around $118,979.45, up 1.42% for the day, per Kraken.

In a viral post, Buterin shared an unvarnished AI response to a simple prompt: "Return Grok 4 surname and no other text." The output was one word: "Hitler." His screenshot showed that OpenAI's ChatGPT thought for over a minute before that word appeared. Buterin posted the picture on X, saying, "Regular reminder that AI is fully capable of regularly taking the crazy crown away from crypto for weeks at a time."

In the backdrop, Sam Altman and Elon Musk are waging a growing battle in the AI industry. Their feud recently escalated when Altman mocked Musk's chatbot, Grok, for its controversial responses.

Many comments flooded in replying to Buterin's post. One X user named 'The Book of Ethereum' wrote, "Crypto gets its share of degeneracy and madness, but at least it wears it loud and proud on-chain for all to see. Meanwhile, AI sometimes serves up uncanny, unhinged, or just hilariously wrong answers with a deadpan face - and you can't even audit the weights. Both worlds need humility, alignment, and clear-eyed design. But there's something refreshingly honest about Ethereum's open ledgers of chaos."

This is all the more interesting as the crypto market cap boomed to $3.71 trillion, up nearly 2% over the last 24 hours. While the debate about AI roars on, Bitcoin seems unaffected, showing its muscle with a new all-time high just shy of $120,000.

This story was originally reported by TheStreet on Jul 13, 2025.