Pixnapping Attack
pixnapping.com
See also: Hackers can steal 2FA codes and private messages from Android phones - https://news.ycombinator.com/item?id=45574613
In my view, the core issue here is that Android's permission system doesn't treat "running in the background" and "accessing the Internet" as things apps need to ask the user for, and that the user can restrict.
This attack wouldn't work if those permissions weren't granted implicitly to every app, even an "offline game", by default. Many apps should at most have "Only while using the app" permission to access the Internet. That wouldn't be complete protection -- there's always the risk you misclick on a now-malicious app that you never use -- but it would make the attack far less effective.
> I am an app developer. How do I protect my users?
>
> We are not aware of mitigation strategies to protect apps against Pixnapping. If you have any insights into mitigations, please let us know and we will update this section.
IDK, I think there are obvious low-hanging attempts [0], such as: don't display secret codes at a stable position on screen; hide them when the app is in the background; move them around to make timing attacks difficult; change colours and contrast over time; add static noise around them; don't show the whole code at once (not necessarily so that the user can't observe it: just blink parts of it in and out, maybe). Admittedly, all of this will harm UX to some degree, but in naïve theory it should significantly raise the bar for the attacker.
[0] Provided the target of the secret stealing is not in fact some system static raster snapshot containing the secret, cached for task switcher or something like that.
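As a toy illustration of the "never show the whole code at once" idea -- my own sketch with invented parameters, not a vetted mitigation -- the UI could cycle through frames that each reveal only a sliding window of digits:

```python
def masked_frames(code: str, window: int = 2) -> list:
    """Build display frames that each reveal only `window` digits of `code`.

    The full code is never on screen at once, so a pixel-stealing attacker
    must win a separate race for every digit position.
    """
    frames = []
    for start in range(len(code)):
        # Reveal a wrapping window of digits; mask everything else.
        shown = {(start + i) % len(code) for i in range(window)}
        frames.append("".join(c if i in shown else "•"
                              for i, c in enumerate(code)))
    return frames
```

For example, `masked_frames("123456")[0]` is `"12••••"`: cycling through the six frames eventually shows every digit to the user, while at any instant most of the code stays masked.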
Huh. I remember a while ago Google Authenticator hid TOTP codes until you tap on them to reveal them. I remember thinking this was an absolutely stupid feature, because it did not mitigate any real threat and was annoying and inconvenient. Apparently a lot of people agreed because a few weeks later, Google Authenticator quietly rolled that feature back.
I wonder if they were aware of this flaw, and were mitigating the risk.
They could have made it a setting, with an explanation of the security benefits of it, so that folks who are paranoid can take advantage of it.
A relevant threat scenario is using your phone in a public place. Modern cameras are good enough to read your phone screen from a distance, and it seems entirely realistic that a hacked airport camera could capture email/password/2FA combinations when people log into sites from the airport.
Ideally, you want the workflow to be that you can copy the secret code and paste it, without the code as a whole ever appearing on your screen.
Note that for TOTP the attack is only feasible if the font and pixel-perfect positions on the screen are known:
> The attacks described in Section 5 take hours to steal sensitive screen regions—placing certain categories of ephemeral secrets out of reach for the attacker app. Consider for example 2FA codes. By default, these 6-digit codes are refreshed every 30 seconds [38]. This imposes a strict time limit on the attack: if the attacker cannot leak the 6 digits within 30 seconds, they disappear from the screen
> Instead, assuming the font is known to the attacker, each secret digit can be differentiated by leaking just a few carefully chosen pixels
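For context, the 30-second limit quoted above falls directly out of how TOTP (RFC 6238) derives codes: the HMAC counter only changes every `step` seconds. A minimal stdlib-only Python sketch:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically
    truncated to a 31-bit integer, then reduced to `digits` decimal digits."""
    counter = struct.pack(">Q", int(for_time) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test secret `12345678901234567890`, every timestamp from 30 to 59 yields the same 6-digit code; that shared window is exactly the time the attacker has to leak it.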
Since there are only three or so apps in widespread use (Google Authenticator, Microsoft Authenticator, Okta -- anyone else?), that doesn't actually seem like much of an obstacle.
I assume Authy is fairly high use. I usually use Aegis, but I suspect it has very little usage share.
They also need to know where in the app the code for each service is displayed, so they are grabbing the code for your bank and not for your World of Warcraft account.
which they can read from the same fixed layout/offsets used to display it to you
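To make the "few carefully chosen pixels" point concrete, here's a toy Python sketch that uses seven-segment digit shapes as stand-ins for glyph pixels and greedily picks probe positions until every pair of digits is distinguishable. The encoding and the greedy selection are my own illustration, not the paper's actual method:

```python
from itertools import combinations

SEGMENTS = "abcdefg"
# Classic seven-segment encoding: which segments are lit for each digit.
DIGITS = {
    0: "abcdef",  1: "bc",     2: "abdeg", 3: "abcdg",   4: "bcfg",
    5: "acdfg",   6: "acdefg", 7: "abc",   8: "abcdefg", 9: "abcdfg",
}

def probe_set() -> list:
    """Greedily pick segments (stand-ins for probe pixels) until every
    pair of digits has a distinct on/off signature."""
    chosen = []
    unresolved = set(combinations(range(10), 2))
    while unresolved:
        # Pick the segment that splits the most still-ambiguous pairs.
        best = max(SEGMENTS, key=lambda s: sum(
            (s in DIGITS[a]) != (s in DIGITS[b]) for a, b in unresolved))
        chosen.append(best)
        unresolved = {(a, b) for a, b in unresolved
                      if (best in DIGITS[a]) == (best in DIGITS[b])}
    return chosen
```

Five probes suffice in this toy model: with a known font and a fixed layout, an attacker similarly needs only a handful of pixel positions per digit rather than the whole glyph.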
It's not exactly a new technique, but it's effective for most highly targeted attacks. Honestly, if you were this inclined and able to get a specific app onto the user's phone, you might as well work off an app that's already delivered to the user's phone. Like Facebook.
Throw a privacy notice at the users: "This app will take periodic screenshots of your phone". You'd be amazed how many people will accept it.
> Did you release the source code of Pixnapping?
>
> We will release the source code at this link once patches become available: https://github.com/TAC-UCB/pixnapping
It's not exactly impossible to reverse-engineer what's happening here. You could have waited until it was patched, but it sounds like you wanted to get attention as soon as possible.
A patch for the original vulnerability is already public: https://android.googlesource.com/platform/frameworks/native/... and explicitly states in the commit message that it tries to defeat "pixel stealing by measuring how long it takes to perform a blur across windows."
The researchers aren't releasing their code because they found a workaround to the patch.
Then there's a bunch of "no GPU vendor has committed to patching GPU.zip" and "Google has not committed to patching our app list bypass vulnerability. They resolved our report as “Won’t fix (Infeasible)”."
And their original disclosure was on February 24, 2025, so I don't think you can accuse them of being too impatient.
As for "This app will take periodic screenshots of your phone", you still need an exploit to screenshot things that are explicitly excluded from screenshots (even if the user really wants to screenshot them).
If genuine, this finger-pointing is an interesting approach to a security vulnerability. The last time I read such arguments was 20 years ago, from a different firm in California, and it was not to their advantage.
P.S.: where did you see this discussion?
TFA: https://www.pixnapping.com
The initial disclosure to Google was on February 24, 2025. They had more than enough time.
I would say this is a nice and clever attack vector: calculating secrets from rendering time, i.e. a side channel. Kudos to the researchers, though it would take a lot of time to capture the pixels even for Google Authenticator. My worry now is how much of this could be reused to steal OTPs from messages.
Given the rise of well-defined templates for phishing via email (accurately vibe-coded designs, e.g. GitHub notification emails), I have literally stopped clicking links in email, and now I've stopped launching apps directly from intents (say, "open with") too. Better to open the app manually and perform the operation there, plus remove useless apps -- but people underestimate the attack surface (it can come through an SDK, or web page intents).
I'm no expert in security, but I'm guessing that if you install an app on a Windows desktop computer it can cause more chaos, faster and more discreetly, than pixnapping can on Android.
If you use the same password on two websites, either of the two websites can use it to log you into the other one (if it doesn't have an extra layer of security).
On paper, security is pretty weak, yet in practice these attacks are not very common or easy to pull off.
> but I'm guessing if you install an app on a Windows desktop computer it can cause more chaos, faster and more discreetly, than pixnapping can on Android.
On desktop, apps aren't sandboxed. On mobile, they are. Breaking out of the sandbox is a security breach.
On desktop, people don't install an app for every fast food chain. On mobile, they do.
inb4 "graphene solves this"
I was looking for a nice browser game, just judging by the name.
This is wild.
Seems like the only real solution would be to have a dedicated device just for 2fa
Or... don't use Android?
Modern devices are simply too complex to be completely secure.
We have this tendency of adding more and more "features", more and more functionality, 85% of which nobody asked for or has a use for.
I believe there will be a market for a small, bare-bones secure OS in the future, akin to how FreeBSD is run.
Would love a terminal and make world while on the go (-;
Bunnie's Precursor? It sounds cool, but it's also expensive as fuck. If you thought $100 for a graphing calculator was a ripoff, the Precursor is a similar form factor and level of computational power, but costs $1000 and can't be used in maths exams.
https://www.bunniestudios.com/blog/2020/introducing-precurso... (currently down, might be up later)
From reading comments on HN over the past couple of years, I'm disappointed by how terrible security practices and knowledge have become. All of this stuff is about to get a lot worse with generative AI.
There are complaints on this story, and on the recent one about the FSF phone project, about how inconvenient it is not to be able to access banking apps on your mobile phone. I can't be bothered to enter my banking password every 30 minutes on my desktop! What, I'm supposed to have two phones?
The first thing someone is going to do when they steal your phone (after they saw you enter your password in public) is open your banking and money apps and exfiltrate as much as they can from your accounts. This happens every single day. None of those apps should be installed or logged in on your phone. Same goes for 2FA apps. That's like traveling with Louis Vuitton luggage which is basically a "steal me" sign.
That's the most basic stuff for people who aren't a CEO of a company that is in the crosshairs of state sponsored espionage attacks.
The problems with "bare bones secure OS" device remain the same from a physical access standpoint: social engineering, someone sees your password, steals the device. But otherwise, yes, the devices you install a bunch of spyware/adware games on and take to bars should not be the ones you are doing your banking, 2FA, work, etc on ever.
Discussion: https://news.ycombinator.com/item?id=45574613
In the previous discussion everyone seemed happy that it had been patched and there was nothing to worry about (even though most Android devices don't run anything like the latest Android).
But in this write-up they say the patch doesn't fully work.
The bigger issue is the sidechannel that exists which leaks information from secure windows, even from protected buffers, potentially including DRM protected content.
While the blurs make the side channel easier to use by providing a clear signal, given that you can predict the exact contents of the screen, I feel like you could get away with just a mask.
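To make that concrete, here's a toy Python simulation of the timing-threshold idea -- all numbers are invented, purely to illustrate how a data-dependent render time leaks a pixel bit, not the actual GPU.zip channel:

```python
import random
import statistics

random.seed(0)  # deterministic for the demo
BASE_NS, DELTA_NS, NOISE_NS = 1000.0, 120.0, 40.0

def render_time(pixel_lit: bool) -> float:
    """Toy model: uniform (unlit) regions compress well and render faster;
    a lit pixel costs DELTA_NS extra, as in a GPU.zip-style channel."""
    extra = DELTA_NS if pixel_lit else 0.0
    return BASE_NS + extra + random.gauss(0.0, NOISE_NS)

def leak_pixel(pixel_lit: bool, samples: int = 64) -> bool:
    """Attacker's view: repeat the measurement and threshold the median."""
    times = [render_time(pixel_lit) for _ in range(samples)]
    return statistics.median(times) > BASE_NS + DELTA_NS / 2

secret = [True, False, True, True, False, False, True, False]
recovered = [leak_pixel(bit) for bit in secret]
```

Even with these (made-up) noise levels, 64 samples per pixel recover the bits reliably. Repeating this for every pixel is why full screen regions take hours, while a few well-chosen probe pixels fit inside a 30-second TOTP window.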
Don't extra security measures in authenticator apps provide protection against this? I need to enter a pin/fingerprint in order to access my codes. And the code of an entry is hidden and only temporarily shown after being tapped.
The best defence seems to be to configure your 2FA app to require biometrics. I'm not sure why they didn't mention this option.
Biometrics can't be changed if someone ever figures out how to duplicate them.
I think it's a fair point, but it still triggered this reaction in me: "the only way to prevent more of my data from being stolen is to give Android more of my data".
Not a phone designer, but could we imagine a new class of screen region that is excluded from screen grabs and draw-over, is masked rather than blurred, and that notifications showing an OTP or PIN opt in to use?
App developers can already dynamically mark their windows as secure which should prevent any other app from reading the pixels it rendered. The compositor composites all windows, including secure windows and applies any effects like blur. No apps are supposed to be able to see this final composited image, but this attack uses a side channel they found that allows apps on the system to learn information about the pixels within the final composition.
The attack needs you to be able to alter the blur of pixels in a secure window; this could be forbidden. A secure window should draw 100% as requested or not at all.
The blur happens in the compositor. It doesn't happen in the secure windows.
>A secure window should draw 100% as requested or not at all.
Take for example "night mode" which adds an orange tint to everything. If secure windows don't get such an orange tint they will look out of place. Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
> Take for example "night mode" which adds an orange tint to everything. If secure windows don't get such an orange tint they will look out of place. Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
That seems well worth the trade to me.
These sort of restrictions also often interfere with accessibility and screen readers.
Either the screen reader is built into the OS as signed + trusted (and locks out competition in this space), or it's a pluggable interface, that opens an attack surface to read secure parts of the screen.
Yes; that is a perfect example of where I would prefer security over not looking out of place.
Right but night mode is built into the OS so you can easily make an exception (same for things like toasts). Are there use cases where you need a) a secure window, and b) a semi-transparent app-controlled window on top of it?
Things like this make me wonder if the social media giants use attacks like these to gain certain info about you and advertise to you that way.
Either that or Meta's ability to track/influence emotional state by behaviour is that good that they can advertise to me things I've only thought of and not uttered or even searched anywhere.
Consider that your thoughts are a consequence of what you've consumed. They're not guessing what you think, they're influencing it.
Similar people thinking similar thoughts I'd wager
Are you sure that isn't just the horoscope effect?
interesting
My takeaway:
Do not install apps. Use websites.
Apps have way too many permissions, even when they have "no permissions".
No OS vendor wants you to do that, unless you're using a desktop, and then Google wants you to use Chrome. They all want a 30% cut of revenue and/or platform lock-in. They'll rely on dark patterns and nerfing features to push you to their app stores.
Similarly, software vendors want you to use apps for the same reason you don't want to use them. They'll rely on dark patterns to herd you to their native apps.
These two desires influence whether it's viable to use the web instead of apps. I think we need legislation in this area: apps should be secondary to the web services they rely on, and companies should not be allowed to purposely make their websites worse in order to push you onto their apps.
The unfortunate truth is that so many things require a dedicated mobile app these days.
I don't own or carry a smart phone. I'm still able to get by without one, but just barely.
I wish Uber or Lyft allowed me to use a website. I hate having to find a regular taxi or rely on the kindness of others to use their app.
Surprisingly, Uber does! m.uber.com is a mobile website for Uber.
I only used it once, in February, so hopefully they haven't broken it since then.
Thanks. Will try it!
I am not familiar with this type of side-channel attack, but the article says they use GPU.zip, which is exploitable through Chrome:
https://www.hertzbleed.com/gpu.zip/
Looks to me like the browser version requires the targeted website to be iframed into the malicious site for this to work, which is mitigated significantly by the fact that many sites today (and certainly the most security-sensitive ones) restrict where they can be iframed via security headers. Allowing your site to be loaded in an iframe elsewhere is already a security risk, and even the most basic scans will tell you you're vulnerable to clickjacking if you don't set those headers.
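For reference, the headers in question. The header names and values below are real and standard; the helper function itself is just a hypothetical convenience for this sketch:

```python
def anti_framing_headers(same_origin_only: bool = False) -> dict:
    """Response headers that stop other sites from loading yours in an iframe.

    CSP's frame-ancestors directive is the modern mechanism; X-Frame-Options
    is the legacy header kept for older browsers.
    """
    if same_origin_only:
        return {"Content-Security-Policy": "frame-ancestors 'self'",
                "X-Frame-Options": "SAMEORIGIN"}
    return {"Content-Security-Policy": "frame-ancestors 'none'",
            "X-Frame-Options": "DENY"}
```

A site that sends these can't be iframed by a third-party page, which is what blocks the browser variant of the attack described above.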
With JS disabled!
You know it's serious because it's got a domain and a logo. Even security researchers gotta create engagement and develop their brand.
I'd say it's _not_ serious when they need to market it.
Anyone remembers the OG heartbleed?
[dead]
[dead]
Huh. I don’t know that I’ve seen a whole domain name registered for a paper on a single CVE before.
It's quite standard for "big" CVEs nowadays
I'd say that it started with heartbleed.
Maybe Linus has a point
>"It looks like the IT security world has hit a new low," Torvalds begins. "If you work in security, and think you have some morals, I think you might want to add the tag-line: "No, really, I'm not a whore. Pinky promise" to your business card. Because I thought the whole industry was corrupt before, but it's getting ridiculous," he continues. "At what point will security people admit they have an attention-whoring problem?"
https://www.techpowerup.com/242340/linus-torvalds-slams-secu...
It started at least since https://www.heartbleed.com/ if not earlier
https://taptrap.click/ has been around for a while.
Interesting. Looks like I upset someone. Not sure why admitting to ignorance is so offensive. Maybe because it's so rare, hereabouts?