woodruffw 8 hours ago

This is a great example of why `pull_request_target` is fundamentally insecure, and why GitHub should (IMO) probably just remove it outright: conventional wisdom dictates that `pull_request_target` is "safe" as long as branch-controlled code is never executed in the context of the job, but these kinds of argument injections/local file inclusion vectors demonstrate that the vulnerability surface is significantly larger.

At the moment, the only legitimate uses of `pull_request_target` are for things like labeling and auto-commenting on third-party PRs. But there's no reason for those actions to have default write access to the repository; GitHub could and should grant fine-grained or (even better) single-use tokens that enable exactly those operations.

(This is why zizmor blanket-flags all use of `pull_request_target` and other dangerous triggers[1]).

[1]: https://docs.zizmor.sh/audits/#dangerous-triggers

  • leeter 6 hours ago

    I don't disagree... but there is a use case for orgs that don't allow forks. Some tools do their merging outside of github and thus allow for PRs that can't merge cleanly. Those PRs won't trigger `pull_request` workflows, because `pull_request` requires a clean merge. In those cases `pull_request_target` is literally the only option.

    The best move would be for github to add a setting that allows automation to run on PRs that don't merge cleanly, off by default and really intended only for linters. Until that happens, though, `pull_request_target` is the only game in town to get around that limitation. Much to my and other SecDevOps engineers' sadness.

    NOTE: with these external tools you absolutely cannot do the merge manually in github unless you want to break the entire thing. It's a whole heap of not fun.

    • woodruffw 6 hours ago

      That's a fantastic use case that should be supported discretely!

      • leeter 6 hours ago

        Why github didn't is beyond me. Even if something isn't merge-clean, that doesn't mean linters shouldn't run. I get not running deployments etc., but not even having the option is painful.

  • lijok 5 hours ago

    Inside private repos we use `pull_request_target` because 1. it runs the workflow as it exists on main, which provides a surface where untampered-with test suites can run, and 2. it provides a deterministic `job_workflow_ref` in the `sub` claim of the JWT, which can be used for highly fine-grained access control in OIDC-enabled systems from the workflow.

    • woodruffw 5 hours ago

      Private repos aren't as much of a concern, for obvious reasons.

      However, it's worth noting that you don't (necessarily) need `pull_request_target` for the OIDC credential in a private repo: all first-party PRs will get it with the `pull_request` event. You can configure the subject for that credential with whatever components you want to make it deterministic.
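      For reference, GitHub exposes a REST endpoint for customizing the `sub` claim per repository. A hedged sketch (the endpoint path is real to my knowledge; the specific claim keys, owner/repo placeholders, and token variable are assumptions to verify against the docs):

```shell
# Build and locally validate a subject-claim customization payload.
# The claim keys shown ("repo", "context", "job_workflow_ref") are
# assumptions; check GitHub's OIDC docs for the allowed set.
payload='{"use_default": false, "include_claim_keys": ["repo", "context", "job_workflow_ref"]}'
echo "$payload" | python3 -m json.tool > /dev/null && echo payload-ok

# Then, with a token that has admin rights on the repo (not executed here):
#   curl -X PUT \
#     -H "Authorization: Bearer $GH_TOKEN" \
#     -H "Accept: application/vnd.github+json" \
#     "https://api.github.com/repos/OWNER/REPO/actions/oidc/customization/sub" \
#     -d "$payload"
```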

      • lijok 4 hours ago

        You’re right! I edited my comment to clarify I was talking about good ole job_workflow_ref.

  • cookiengineer an hour ago

    This attack surface is essentially unfixed for almost a year now.

    Remember the Python packages that got pwned via a malicious branch name containing shellshock-like code? Yeah, that incident.

    I blogged about all vulnerable variables at the time and how the attack works from a pentesting perspective [1].

    [1] https://cookie.engineer/weblog/articles/malware-insights-git...

  • zamalek 7 hours ago

    This is what GitHub says about it:

    > This event runs in the context of the base of the pull request, rather than in the context of the merge commit, as the pull_request event does. This prevents execution of unsafe code from the head of the pull request that could alter your repository or steal any secrets you use in your workflow.

    Which is comical given how easily secrets were exfiltrated.

    • woodruffw 7 hours ago

      Yeah, I think that documentation is irresponsibly misleading: it implies (1) that attacker code execution requires the attacker to be able to run code directly (it doesn't, per this post), and (2) that checking out at the base branch somehow stymies the attacker, when all it does is incentivize people to check out the attacker-controlled branch explicitly.

      GitHub has written a series of blog posts[1] over the years about "pwn requests," which do a great job of explaining the problem. But the misleading documentation persists, and has led to a lot of user confusion where maintainers mistakenly believe that any use of `pull_request_target` is somehow more secure than `pull_request`, when the exact opposite is true.

      [1]: https://securitylab.github.com/resources/github-actions-prev...
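      The "pwn request" shape those posts describe usually boils down to a workflow like this (a minimal sketch; the job name and script path are hypothetical):

```yaml
# DANGEROUS sketch: do not copy. Privileged trigger plus explicit
# checkout of the attacker-controlled head.
on: pull_request_target        # runs with base-repo secrets and a write token
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Explicitly checks out the attacker's branch into a privileged job:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: ./scripts/lint.sh   # this file now comes from the attacker
```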

lrvick 27 minutes ago

Had the Nix team rolled out signed commits/reviews and independent signed reproducible builds as my (rejected) RFC proposed, then last-mile supply chain attacks like this would not be possible.

In the end Nixpkgs wants to be Wikipedia-easy for any rando to modify, and fears that any attempt at security will make volunteers run screaming, because it is primarily focused on being a hobby distro.

That's just fine, but people need to know this, and stop using and promoting Nix in security-critical applications.

An OS that will protect anything of value must have strict two-party hardware-signing requirements on all changes, with a decentralized trust model that places no trust in any single computer or person.

Shameless plug: that is why we built Stagex. https://stagex.tools https://codeberg.org/stagex/stagex/ (Don't worry, I'm not selling anything; it is and always will be 100% free to the public.)

  • gmfawcett 14 minutes ago

    That's pretty impressive -- thanks for sharing the link.

amluto 7 hours ago

I find it rather embarrassing that, after all these years of trying to design computer systems, modern workflows are still designed so that bearer tokens, even short-lived, are issued to trusted programs. If the GitHub action framework gave a privileged Unix socket or ssh-agent access instead, then this type of vulnerability would be quite a lot harder to exploit.

  • Thom2000 5 hours ago

    Exactly!

    Bearer tokens should be replaced with schemes based on signing, and the private keys should never be directly exposed (if they are, there's no difference between them and a bearer token). Signing agents do just that. Github's API is based on HTTP, but mutual TLS authentication with a signing agent should be sufficient.

  • otabdeveloper4 3 hours ago

    The SPIFFE standard does something like this.

    It's not used by anyone because nobody actually gives a shit about security, the entire industry is basically a grift.

perlgeek 8 hours ago

CI/CD actions for pull/merge requests are a nightmare. When a developer writes test/verification steps, they are mostly in the mindset "this is my code running in the context of my github/gitlab account", which is true for commits made by themselves and their team members.

But then in a pull request, the CI/CD pipeline actually runs untrusted code.

Getting this distinction correct 100% of the time in your mental model is pretty hard.

For the base case, where you maybe run a test suite and a linter, it's not too bad. But then you run into edge cases where you have to integrate with your own infrastructure (for end-to-end tests, for checking whether contributors have submitted CLAs, or anything else that requires a bit more privileges), and then it's very easy for it to bite you.

  • woodruffw 8 hours ago

    I don't think the problem is CI/CD runs on pull requests, per se: it's that GitHub has two extremely similar triggers (`pull_request` and `pull_request_target`). One of these is almost entirely safe (you have to go out of your way to misuse it), while the other is almost entirely unsafe (it's almost impossible to use safely).

    To make things worse, GitHub has made certain operations on PRs (like auto-labeling and leaving automatic comments) completely impossible unless the extremely dangerous version (`pull_request_target`) is used. So this is a case of incentive-driven insecurity: people want to perform reasonable operations on third-party PRs, but the only mechanism GitHub Actions offers is a foot-cannon.
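    If you're stuck with `pull_request_target` for labeling/commenting, the blast radius can at least be narrowed with an explicit `permissions` block (a sketch; the labeler step is illustrative and needs its own config file):

```yaml
on: pull_request_target
permissions:
  contents: read          # drop the default write access to repo contents
  pull-requests: write    # the minimum needed for labeling and commenting
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/labeler@v5   # never check out or execute PR code here
```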

aftergibson 2 hours ago

As time goes on, I find myself increasingly worried about supply chain attacks—not from a “this could cost me my job” or “NixOS, CI/CD, Node, etc. are introducing new attack vectors” perspective, but from a more philosophical one.

The more I rely on, the more problems I’ll inevitably have to deal with.

I’m not thinking about anything particularly complex—just using things like VSCode, Emacs, Nix, Vim, Firefox, JavaScript, Node, and their endless plugins and dependencies already feels like a tangled mess.

Embarrassingly, this has been pushing me toward using paper and the simplest, dumbest tech possible—no extensions, no plugins—just to feel some sense of control or security. I know it’s not entirely rational, but I can’t shake this growing disillusionment with modern technology. There’s only so much complexity I can tolerate anymore.

  • YouAreWRONGtoo 35 minutes ago

    Emacs itself is probably secure and you can easily audit every extension, but if you blindly update every extension via a nicely composable Emacs Nix configuration, you would indeed have a problem.

    I guess one could automate finding obvious exploits with LLMs, and abort the update if the LLM finds something.

    The right solution is to use Coq and just formally verify everything in your organization, which incidentally means throwing away 99.999% of software ever written.

immibis 5 hours ago

> If you’ve read the man page for xargs, you’ll see this warning:

>> It is not possible for xargs to be used securely

However, the security issue this warning relates to is not the one that's applicable here. The one here can be avoided by passing `--` at the end of the command, before the xargs-supplied arguments.

ishouldbework 8 hours ago

> It is not possible for xargs to be used securely

Eh... That is taken quite a bit out of context; the sentence does continue. Just do `cat "$HOME/changed_files" | xargs -r editorconfig-checker --` and this specific problem is fixed.
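To make the failure mode concrete, here's a sketch with GNU `grep` standing in for the linter (in the real attack, the filenames are attacker-chosen):

```shell
cd "$(mktemp -d)"
echo needle > real.txt
touch -- '-v'            # an attacker-controlled "filename"

# Unsafe: '-v' is parsed as grep's invert-match flag, so the real
# match is suppressed and nothing is printed.
printf '%s\n' '-v' real.txt | xargs grep -l needle || true

# Safe: '--' ends option parsing, so '-v' is treated as a file.
printf '%s\n' '-v' real.txt | xargs grep -l needle --    # prints: real.txt
```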

  • hombre_fatal 7 hours ago

    Though that's like adding `<div>{escapeHtml(value)}</div>` everywhere you ever display a value in html to avoid xss.

    If you have to opt in to safe usage at every turn, then it's an unsafe way of doing things.

    • stonogo 5 hours ago

      I don't disagree but "it's not possible for xxx to be used securely" is a long way from "it's cumbersome and tedious to use xxx securely"

      • JasonSage an hour ago

        But "it's not possible for xxx to be used securely" is a better premise if it deflects people who can't do it correctly.

  • woodruffw 8 hours ago

    Yeah, I don't think the specific reason for that sentence in the manpage applies here. But the general sentiment is correct: not all programs support `--` as a delimiter between arguments and inputs, so many xargs invocations are one argument injection away from arbitrary code execution.

    (This is traditionally a non-issue, since the whole point is to execute code. So this isn't xargs' fault so much as it's the undying problem of tools being reused across privilege contexts.)

    • ishouldbework 5 hours ago

      Well, anything POSIX or GNU does support the --. I think most golang libraries as well? And if the program does not, you can always pass the files as relative paths (./--help) to work around that.
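      The relative-path trick, sketched with a stand-in tool (GNU `wc` here; the filename is the attacker-chosen part):

```shell
cd "$(mktemp -d)"
printf 'x\n' > ./-v     # a filename starting with '-'

# '-v' alone is parsed as an (invalid) option and rejected:
wc -l -v </dev/null 2>/dev/null || echo rejected

# './-v' is unambiguously a path, so the file is counted normally:
wc -l ./-v
```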

      For sure though, this can get tricky, but I am not really aware of an alternative. :/ Since the calling convention is just an array of strings, there is no generic way to handle this without knowing what program you are calling and how it parses its command line. This is not specific to xargs...

      Well, I guess FFI would be a way, but it seems like a major PITA to have to figure out how to call a golang function from bash shell just to "call" a program.

      • woodruffw 5 hours ago

        > This is not specific to xargs...

        Right, it's just that xargs surfaces it easily. I suspect most people don't realize that they're fanning arbitrary arguments into programs when they use xargs to fan input files.

lostmsu 8 hours ago

There's a huge footgun in that article that has broader impact:

> but it gets worse. since the workflow was checking out our PR code, we could replace the OWNERS file with a symbolic link to ANY file on the runner. like, say, the github actions credentials file

So git allows committing symlinks, meaning the issue above could affect almost any workflow that reads files from an untrusted checkout.
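A sketch of that: git happily records the symlink object itself, target contents be damned (the credentials path here is hypothetical):

```shell
cd "$(mktemp -d)"
git init -q attack && cd attack

# Point OWNERS at a file outside the repo; git stores the link
# (object mode 120000), not the target's contents.
ln -s /home/runner/work/_temp/creds.json OWNERS   # hypothetical runner path
git add OWNERS
git -c user.name=a -c user.email=a@b commit -qm 'innocent change'

git ls-files -s OWNERS   # mode 120000 marks a symlink
# A privileged workflow that later reads OWNERS follows the link
# and reads whatever it points at on the runner.
```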

  • danudey 10 minutes ago

    Yes, but IIRC when you run `pull_request_target` the credentials are to the target repository - i.e. the one you're merging into. When you run `pull_request`, it's to the source repository, the one the attacker is in control of.

jmclnx 8 hours ago

Well the "good" new is, OpenBSD and NetBSD still uses CVS, even for packages. So this will not work on those systems. I do not know about FreeBSD. Security by obscurity :)

But I have been seeing docs indicating those projects are looking to move to git; we will see if it really happens. In OpenBSD's case it seems it will be based on got(1).

  • seanhunter 8 hours ago

    Just to make it clear: what you say is correct, but this is not a git vulnerability, it's a github actions vulnerability. That is, the BSDs are secured by CVS only because github doesn't do CVS. If you use git, or even github, but don't do CI/CD with github actions, you are not affected by this.

  • graemep 8 hours ago

    This is not a git issue, it is a github issue, and as far as I can see specific to github actions.

  • Mic92 8 hours ago

    Don't they use email to accept contributions? That seems like a security nightmare w.r.t. impersonation.

    • udev4096 7 hours ago

      How? It's signed with their keys. The Linux kernel also uses mailing lists, and I have yet to see someone try to impersonate anyone.

    • edoceo 8 hours ago

      Aren't messages and/or patches signed?