We’ve all been there — thumbs mid-air, staring at a suggested word that somehow nailed what we were trying to say. So we tap it. Obviously. But a new study suggests those little taps might be doing more than saving us a few seconds.
Research out of Cornell Tech, published this week in Science Advances, found that AI-powered autocomplete suggestions don’t just change how you write — they nudge how you actually think. And you won’t even notice it happening.
What did the research actually find?
Researchers ran two large-scale experiments with over 2,500 participants, asking them to write short essays on spicy societal topics — think death penalty, fracking, GMOs, voting rights for felons.
Some participants got autocomplete suggestions secretly engineered to lean a certain direction, generated using large language models from the GPT-3 and GPT-4 families. Others got nothing.
The result? People who wrote with the biased AI gradually warmed up to the AI’s positions. Not because they were convinced by arguments. Not because they read anything persuasive. Just because their phone kept finishing their thoughts for them.

Knowing the trick didn’t break the spell either
Now here’s the part that should make you put your phone down for a second. Researchers told some participants upfront that the AI had a bias problem — a sort of “don’t say we didn’t warn you” disclaimer. They also tried debriefing others afterward. In most misinformation studies, these approaches work like mental vaccines. This time, neither did a thing.
“Their attitudes about the issues still shifted,” said senior author Mor Naaman, who also noted autocomplete has exploded in scope — Gmail now offers to write entire emails on your behalf.
So next time your phone suggests you “totally support” something, maybe give that little blue word a second look. Your opinion might be one tap away from becoming someone else’s.
