Lens reflections may betray your secrets in Zoom video calls • The Register

Boffins from the University of Michigan in the US and Zhejiang University in China want to highlight how bespectacled video conference participants inadvertently reveal sensitive on-screen information through reflections in their glasses.

With the COVID-19 pandemic and the rise of remote work, video conferencing has become commonplace. The researchers argue that the resulting privacy and security issues deserve more attention, and have been keeping an eye on this unusual attack vector.

In a paper distributed via arXiv, titled “Private Eye: On the Limits of Textual Screen Peeking via Eyeglass Reflections in Video Conferencing,” researchers Yan Long, Chen Yan, Shilin Xiao, Shivan Prasad, Wenyuan Xu, and Kevin Fu describe how they analyzed optical emanations from video screens reflected in the lenses of participants' glasses.

“Our work explores and characterizes viable threat models based on optical attacks using multi-frame super-resolution techniques on video frame sequences,” the computer scientists explain in their paper.
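The paper's actual reconstruction pipeline isn't reproduced here, but the core idea behind multi-frame techniques is that many noisy captures of the same scene can be combined to recover detail no single frame contains. A minimal sketch, assuming perfectly aligned frames (real multi-frame super-resolution also performs sub-pixel registration between frames):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "ground truth" pattern standing in for reflected screen text.
truth = np.zeros((8, 8))
truth[3, 2:6] = 1.0  # a horizontal stroke

# Simulate a webcam capturing many noisy frames of the same reflection.
frames = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(100)]

# With the frames aligned, averaging cuts the noise standard deviation
# by a factor of sqrt(N), gradually revealing the underlying pattern.
recovered = np.mean(frames, axis=0)

single_err = np.abs(frames[0] - truth).mean()
multi_err = np.abs(recovered - truth).mean()
print(multi_err < single_err)  # → True
```

In a single frame the stroke is buried in sensor noise; across 100 frames the noise averages out while the reflected text does not.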

“Our models and experimental results in a controlled laboratory environment show that it is possible to reconstruct and recognize text on screen with heights as small as 10 mm with a 720p webcam with greater than 75 percent accuracy.” That corresponds to 28 pt, a font size commonly used for small headings and headlines.
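The 28 pt figure follows from the standard typographic conversion, 1 point = 1/72 inch:

```python
# 1 point = 1/72 inch, and 1 inch = 25.4 mm.
MM_PER_POINT = 25.4 / 72  # ≈ 0.3528 mm

height_mm = 28 * MM_PER_POINT
print(round(height_mm, 1))  # → 9.9
```

So a 28 pt glyph is just under the 10 mm on-screen height the researchers could reconstruct.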

“Current 720p camera attack capability is often mapped to 50- to 60-pixel font sizes with average laptops,” explained Yan Long, corresponding author and doctoral candidate at the University of Michigan, Ann Arbor, in an email to The Register.

“Such font sizes can be found primarily in slideshows and in the headers/titles of some websites (for example, ‘We saved you a seat in chat’ on https://www.twitch.tv/p/en/about/).”
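How millimeters map to pixels depends on the display's density. For a hypothetical (illustrative, not from the paper) 13.3-inch 1920×1080 laptop panel, 10 mm of glyph height works out to roughly the pixel range Long describes:

```python
import math

# Hypothetical laptop panel: 13.3-inch diagonal at 1920x1080.
diag_px = math.hypot(1920, 1080)  # diagonal length in pixels
ppi = diag_px / 13.3              # ≈ 166 pixels per inch

height_mm = 10
height_px = height_mm / 25.4 * ppi
print(round(height_px))  # → 65
```

A lower-density 15.6-inch 1366×768 panel would put the same 10 mm closer to 40 pixels, which is why the mapping is quoted as a range.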

Being able to read reflected text the size of a headline is not the same privacy and security risk as being able to read smaller 9-12 pt fonts. But the technique is expected to reach those smaller font sizes as high-resolution webcams become more common.

“We found that future 4k cameras will be able to see most header text on almost all websites and some text documents,” Long said.

When the goal was relaxed to identifying just which website was visible on a video conference participant's screen from its reflection in a pair of glasses, the success rate rose to 94 percent among the top 100 Alexa websites.

“We believe that the possible applications of this attack range from snooping on daily activities, for example bosses monitoring what their subordinates are watching in a video work meeting, to business and commercial scenarios where reflections can leak key information related to a negotiation,” Long said.

He said the attack applies both to adversaries who participate in conference sessions and to those who obtain recordings of meetings and play them back. “It would be interesting for future research to mine online videos like YouTube and analyze how much information is leaked through the glasses in the videos,” he said.

A variety of factors can affect the readability of text reflected in a video conference participant's glasses. These include reflectance based on the skin color of the meeting participant, ambient light intensity, screen brightness, the contrast of text against the web page or application background, and the lens characteristics of the glasses. Consequently, not everyone who wears glasses will necessarily present adversaries with a readable mirrored screen.

Regarding potential mitigations, the scientists note that Zoom already provides a video filter in its Background and Effects settings menu that overlays opaque cartoon glasses, blocking reflections. Skype and Google Meet lack that defense.

The researchers argue that other, more usable software-based defenses involve selectively blurring the eyeglass lens area.

“Although none of the platforms support it now, we have implemented a real-time glasses blur prototype that can inject a modified video stream into video conferencing software,” they explain. “The prototype program locates the eyeglass area and applies a Gaussian filter to blur the area.”
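The researchers' prototype code isn't shown here; a minimal NumPy-only sketch of the final step, applying a separable Gaussian blur to a bounding box assumed to have already been located by the glasses detector, might look like this (the `blur_region` helper and the box coordinates are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur_region(frame: np.ndarray, box: tuple, sigma: float = 3.0) -> np.ndarray:
    """Blur only the (top, left, height, width) box of a grayscale frame."""
    top, left, h, w = box
    region = frame[top:top + h, left:left + w].astype(float)
    k = gaussian_kernel(int(6 * sigma) | 1, sigma)  # odd-sized kernel
    # A 2-D Gaussian is separable: blur along rows, then along columns.
    for axis in (0, 1):
        region = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, region)
    out = frame.astype(float).copy()
    out[top:top + h, left:left + w] = region
    return out

# A frame with a sharp bright block inside a hypothetical detected lens box.
frame = np.zeros((40, 40))
frame[10:20, 10:30] = 1.0
blurred = blur_region(frame, (5, 5, 25, 30))
# Edges inside the box are softened; pixels outside are untouched.
print(blurred[10, 10] < 1.0, blurred[35, 35] == frame[35, 35])
```

A production version would run this per-frame on the detected lens regions before handing the stream to a virtual camera device that the conferencing app consumes.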

The Python code can be found on GitHub. ®
