Artificial intelligence can “listen” to keystrokes and steal passwords, according to new research.
An Aug. 3 research paper from the UK, titled “A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards,” examines how hackers could use artificial intelligence (AI) to figure out passwords by “listening” to keystrokes. The researchers’ AI achieved over 90 percent accuracy in determining passwords and could even adjust for variations such as how hard a finger presses a key. This adds to the already severe risks associated with AI.
The paper noted that “while keyboards have gotten less pronounced over time, the technology with which their acoustics can be accessed and processed has improved dramatically” in an age of microphones, smartwatches and deep learning.
Stealing passwords is already a serious enough risk, but AI could seemingly help steal all kinds of other information as well, including financial data and private messages. This acoustic data-collecting ability could enable spies, thieves and other malicious actors to weaponize AI.
“[A]coustic (sound)-based side-channel attacks” are not a new phenomenon. The technique has been used in various forms for decades, going back to the mid-1900s, but the technology is far more sophisticated today.
“When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model,” the research paper declared. “When trained on keystrokes recorded using the video-conferencing software Zoom, an accuracy of 93% was achieved, a new best for the medium.”
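The paper’s attack trains a deep learning classifier on spectrogram features of recorded keystrokes. The sketch below is a deliberately simplified, hypothetical illustration of the underlying idea: each key produces a slightly different sound, so a classifier trained on audio features can guess which key was pressed. The synthetic “keystrokes,” the four-key toy keyboard and the nearest-centroid classifier are all invented here for illustration; the actual research used real recordings and a convolutional neural network on mel spectrograms.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 16_000            # sample rate in Hz
DUR = int(0.05 * SR)   # 50 ms of audio per keystroke

def synth_keystroke(freq, noise=0.05):
    """Toy keystroke: a decaying click whose resonant frequency
    differs per key, plus background noise (purely synthetic)."""
    t = np.arange(DUR) / SR
    wave = np.exp(-t * 80) * np.sin(2 * np.pi * freq * t)
    return wave + noise * rng.standard_normal(DUR)

def features(clip):
    """Log-magnitude spectrum: a crude stand-in for the mel
    spectrograms a real attack pipeline would compute."""
    return np.log1p(np.abs(np.fft.rfft(clip)))

# Hypothetical 4-key "keyboard": one resonant frequency per key.
keys = {"a": 900.0, "b": 1300.0, "c": 1700.0, "d": 2100.0}

# "Training": average the feature vectors of a few recordings of
# each key into a per-key centroid (a stand-in for fitting a CNN).
centroids = {
    k: np.mean([features(synth_keystroke(f)) for _ in range(20)], axis=0)
    for k, f in keys.items()
}

def classify(clip):
    """Label a clip with the key whose centroid is nearest."""
    feat = features(clip)
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

# Evaluate on fresh, noisy samples the classifier has never seen.
correct = sum(
    classify(synth_keystroke(f)) == k
    for k, f in keys.items() for _ in range(25)
)
accuracy = correct / (len(keys) * 25)
print(f"accuracy: {accuracy:.0%}")
```

On this clean synthetic data even a trivial classifier scores near-perfectly, which is exactly why the real-world 95 percent figure is plausible: with a deep network and genuine recordings, the per-key acoustic fingerprints remain distinguishable.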
Changing typing styles and using the shift key are two ways to protect passwords from potential AI hackers, according to the study, since AI has a harder time identifying the shift key.
This is just one of the many potential dangers of AI. Besides stealing information, AI could also make Big Tech censorship more efficient. A report earlier this year by the United Nations (UN) on the supposed “existential risk” of alleged “hate speech” and “disinformation” explicitly advised a Big Tech investment in AI tools for “content moderation,” which is leftist code for censorship.
That’s not all. Disturbingly, AI has been found to exhibit a leftist and anti-Christian bias. The New York Post reported on AI’s woke, “built-in ideological bias.” “While ChatGPT was happy to write a biblical-styled verse explaining how to remove peanut butter from a VCR, it refused to compose anything positive about fossil fuels, or anything negative about drag queen story hour,” the Post noted.
MRC Free Speech America found in May that the chatbot also showed clear political bias. Researchers prompted the AI with the words “Trump” and “Biden.” The chatbot gave a glowing review of Democratic President Joe Biden while parroting common leftist complaints about his primary Republican challenger, former President Donald Trump. The journal Social Sciences and the University of East Anglia each published studies this year showing that ChatGPT has a heavy leftist bias.
Conservatives are under attack. Contact your representatives and demand that Big Tech be held to account to mirror the First Amendment while providing transparency, clarity on so-called hate speech and an equal footing for conservatives. If you have been censored, contact us using CensorTrack’s contact form, and help us hold Big Tech accountable.