Linus Torvalds Tactfully Discusses Value of getrandom() Upgrade for Linux vDSO (phoronix.com) 86

Linux's vDSO (or virtual dynamic shared object) is "a small shared library that the kernel automatically maps into the address space of all user-space applications," according to its man page. "There are some system calls the kernel provides that user-space code ends up using frequently, to the point that such calls can dominate overall performance... due both to the frequency of the call as well as the context-switch overhead that results from exiting user space and entering the kernel."

But Linus Torvalds had a lot to say about a proposed getrandom() upgrade, reports Phoronix: This getrandom() work in the vDSO has been through 20+ rounds of review over the past 2+ years, but... Torvalds took some time out of his U.S. Independence Day to argue the merits of the patches on the Linux kernel mailing list. Torvalds kicked things off by writing:


Nobody has explained to me what has changed since your last vdso getrandom, and I'm not planning on pulling it unless that fundamental flaw is fixed. Why is this _so_ critical that it needs a vdso? Why isn't user space just doing it itself? What's so magical about this all?

This all seems entirely pointless to me still, because it's optimizing something that nobody seems to care about, adding new VM infrastructure, new magic system calls, yadda yadda. I was very sceptical last time, and absolutely _nothing_ has changed. Not a peep on why it's now suddenly so hugely important again. We don't add stuff "just because we can". We need to have a damn good reason for it. And I still don't see the reason, and I haven't seen anybody even trying to explain the reason.



And then he responded to himself, adding:


In other words, I want to see actual *users* piping up and saying "this is a problem, here's my real load that spends 10% of time on getrandom(), and this fixes it". I'm not AT ALL interested in microbenchmarks or theoretical "if users need high-performance random numbers". I need a real actual live user that says "I can't just use rdrand and my own chacha mixing on top" and explains why having a SSE2 chachacha in kernel code exposed as a vdso is so critical, and a magical buffer maintained by the kernel.


Torvalds also added in a third message:


One final note: the reason I'm so negative about this all is that the random number subsystem has such an absolutely _horrendous_ history of two main conflicting issues: people wanting reasonable usable random numbers on one side, and then the people that discuss what the word "entropy" means on the other side. And honestly, I don't want the kernel stuck even *more* in the middle of that morass....

Torvalds made additional comments. ("This smells. It's BS...") Advocating for the change was WireGuard developer Jason Donenfeld, and the back-and-forth continues (40 messages and counting).

At one point the discussion evolved to Torvalds saying "Bah. I guess I'll have to walk through the patch series once again. I'm still not thrilled about it. But I'll give it another go..."
Comments:
  • by Rosco P. Coltrane ( 209368 ) on Sunday July 07, 2024 @05:27PM (#64607837)

    Dude's getting old.

    • by 2TecTom ( 311314 ) on Sunday July 07, 2024 @05:47PM (#64607881) Homepage Journal

      Dude's getting old.

      his experience, and those like him, are what keep Linux effective and ethical

      if you want corporate bs, there's always M$ Windows

      • by ihadafivedigituid ( 8391795 ) on Sunday July 07, 2024 @05:52PM (#64607895)
        Amen. Linus fights fiercely for the end user, for which we should all be happy.
        • He's become a legend at this point
        • by sinij ( 911942 )

          Amen. Linus fights fiercely for the end user, for which we should all be happy.

          The way Linus and the entire community define 'end user' is exactly why there won't ever be a year of desktop Linux.

        • Re: (Score:1, Troll)

          by Excelcia ( 906188 )

          The fact that he works on a kernel only makes it look like sometimes he's fighting for the user. The reality is that Linus fights everyone fiercely because he's an asshole.

          • by ihadafivedigituid ( 8391795 ) on Sunday July 07, 2024 @09:24PM (#64608217)
            We clearly need more "assholes" then.

            The guy has run a worldwide project involving some of the world's biggest companies and egos for three decades. No one is forced to work with him: the kernel could be forked tomorrow, but it's obvious that both his tech and people skills are better than anyone else's based purely on his track record.
            • by gweihir ( 88907 )

              Indeed. And the Linux kernel has succeeded despite numerous attempts at sabotage, both legal and technological. Not putting in stuff unless there is a really good reason for that is central to that success. It is also central to why Windows is such a mess in all aspects.

            • by dfghjk ( 711126 )

              "No one is forced to work with him: the kernel could be forked tomorrow..."

              Now there's a wildly untrue statement. /. is truly the home of shallow thinkers.

            • I've never faulted his technical merits or accomplishments. But it doesn't follow that lapses in normal interpersonal communication and respect are mutually exclusive with accomplishment. Nor does it follow that accomplishments in one area are an excuse. There has been more than one "intervention", and he even took a "sabbatical" away from the public when all the major players told him to fly straight or literally all of them were going to fork the kernel and go elsewhere. He has depended for ag

        • No, that's Tron.
      • by Anonymous Coward

        Dude's getting old.

        his experience, and those like him, are what keep Linux effective and ethical

        if you want corporate bs, there's always M$ Windows

        I have a better idea! Implement systemd functionality in VDSO! That'll sure speed up things and make linux way more efficient than it is now!

      • Worth remembering that Steve Jobs' greatest service to Apple was saying "no." to a great deal of horseshit.

        • by 2TecTom ( 311314 )

          Worth remembering that Steve Jobs' greatest service to Apple was saying "no." to a great deal of horseshit.

          Sadly those days are long gone. Nowadays all our leaders care about is how much power they can shove down their power holes.

          Power is worse than crack, at least there's some hope for crack addicts ...

        • You mean saying "no" when people wanted two buttons on a mouse?

          • by jddj ( 1085169 )

            You mean saying "no" when people wanted two buttons on a mouse?

            Yes, exactly!

            It was important to maintain the simplicity of the design. We increase ease-of-use by reducing unnecessary user choice, which paradoxically leads to greater agency for the user (who now feels more in control of the machine).

            Many geeks believed that the Mac was poorly-designed for their needs by featuring a single-button mouse. The truth though, is that the Mac was _never_ being designed for geeks, but instead for the people who didn't want to use a computer because they were afraid that if they

      • his experience, and those like him, are what keep Linux effective and ethical

        His toxic work environment is one reason many people don't want to contribute to linux code.

        • by 2TecTom ( 311314 )

          his experience, and those like him, are what keep Linux effective and ethical

          His toxic work environment is one reason many people don't want to contribute to linux code.

          which is evidenced by all the people who do contribute and have over the years

          thanks to those who do and have despite ingratitude

        • "Too few lines of code!" is not a complaint I've ever heard about the linux kernel.
    • Dude's getting old.

      So are you.

      Film At 11

      • "You're older than you've ever been,
          and now you're even older,
          and now you're even older...
          and now you're even older....

        You're older than you've ever been,
          and now you're even older...
          and even older still....

  • by ThePhilips ( 752041 ) on Sunday July 07, 2024 @05:48PM (#64607883) Homepage Journal

    Indeed. If you spend too much time in random, why not implement your own?

    Linus, then a noob in security, rolled out his own primitive implementation based on a SHA(?) hash, and despite a decade of heavy critique it proved to be a good implementation.

    I'm not sure what problem the VDSO is supposed to solve here, really. I've seen how VDSO is implemented, and frankly would have avoided polluting it.

    E.g.: at the start of a session, fetch an initial random seed from the kernel. Feed it into a cryptohash to populate the random pool. Produce random numbers from the pool. Then, every e.g. 1000 generated numbers, fetch randomness from the kernel again to re-randomize the pool. (If needed across many processes, shove it into a shared library with a piece of shared memory. The mutexes/friends can also work off the shared memory. If it needs to scale further under heavy multi-threading, you could even make a hash of pools to avoid contention on a single pool. And so on and so forth.)

    HW randoms were always slow. And remain slow. Not sure how VDSO would solve this problem.
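    The scheme described above can be sketched roughly as follows. This is an illustrative toy only: splitmix64 stands in for the real stream cipher/cryptohash the comment has in mind, and the reseed interval of 1000 is the commenter's example figure. Not for cryptographic use.

    ```c
    /* Userspace pool scheme: seed from the kernel via getrandom(2),
     * generate locally, refresh the pool every RESEED_INTERVAL outputs.
     * splitmix64 is a TOY mixer standing in for a real stream cipher
     * such as ChaCha20 -- do not use this for anything security-sensitive. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/random.h>

    #define RESEED_INTERVAL 1000

    static uint64_t pool_state;
    static unsigned draws;

    static uint64_t mix(uint64_t *s)   /* splitmix64 step (toy mixer) */
    {
        uint64_t z = (*s += 0x9E3779B97F4A7C15ull);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
        return z ^ (z >> 31);
    }

    static void reseed(void)
    {
        /* One syscall per RESEED_INTERVAL outputs instead of one per output. */
        if (getrandom(&pool_state, sizeof pool_state, 0) != sizeof pool_state)
            pool_state ^= 0xDEADBEEF;  /* degraded fallback, illustrative only */
    }

    uint64_t pool_rand64(void)
    {
        if (draws == 0)
            reseed();                  /* refresh the pool from the kernel */
        draws = (draws + 1) % RESEED_INTERVAL;
        return mix(&pool_state);
    }

    int main(void)
    {
        uint64_t a = pool_rand64(), b = pool_rand64();
        printf("%llx %llx\n", (unsigned long long)a, (unsigned long long)b);
        return 0;
    }
    ```

    The point of the structure is amortization: the syscall cost is paid once per thousand outputs rather than once per output, which is the same trade the vDSO patch makes with its kernel-maintained buffer.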

    • Very good thoughts. One question I have is "how random do you need?" A long time ago, a friend worked on random number generators and I remember looking over his work. There are measures of randomness, and getting "better" random numbers is not linear in effort, from what I remember. It's Been A Long Time since I opened that volume of Knuth...

      • Hehe. I also haven't looked into random numbers for quite some time. The basic test is that of distribution: in the range [0, N), all numbers should appear with the same probability. (Some RNGs are biased toward zero.)

        But more advanced tests... No clue.

        P.S. As I wrote it, I have obviously also googled it. Wikipedia has succinct no-frills description. [wikipedia.org] They even list approved designs. TL;DR: any current stream cipher can serve as base for RNG.
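        A minimal version of that distribution test looks like this: bucket counts over [0, N) plus a chi-square statistic. Purely illustrative, using libc's rand(); real test suites (dieharder, TestU01, PractRand) go far beyond uniformity.

        ```c
        /* Uniformity check: count how often each bucket in [0, BUCKETS)
         * appears, then compute a chi-square statistic against the
         * expected flat distribution. */
        #include <stdio.h>
        #include <stdlib.h>

        #define BUCKETS 16
        #define SAMPLES 160000

        int main(void)
        {
            unsigned counts[BUCKETS] = {0};
            srand(12345);
            for (int i = 0; i < SAMPLES; i++)
                counts[rand() % BUCKETS]++;

            double expected = (double)SAMPLES / BUCKETS, chi2 = 0.0;
            for (int i = 0; i < BUCKETS; i++) {
                double d = counts[i] - expected;
                chi2 += d * d / expected;
            }
            /* With 15 degrees of freedom, values in the low tens are
             * unremarkable; a result far above that would be suspicious. */
            printf("chi-square = %.2f\n", chi2);
            return 0;
        }
        ```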

        • Worth noting: on the Wikipedia page, those are pure software algorithms. If your system has a HW RNG (the vast majority today do), then periodically seeding your pool with the HW RNG (or the kernel's random functions) pretty much guarantees that your RNG will be OK.

          Flip side is that the RNG pool now lies in user-space. But if they implement random in VDSO, the kernel's random pool would also be (at least partially) mapped into user-space.

        • by Entrope ( 68843 )

          A simple modulo-N counter provides a random number with uniform distribution -- just sayin'.

          Those "more advanced tests" turn out to be quite important, for example when you use them for Monte Carlo simulations and thus want them to be quasirandom in multiple dimensions [dtic.mil].

        • Part of the tests involve the period over which you test the probability distribution... (i.e. how many values do you have to check?) I think there are some other parts, but I'd have to go read more, and it would probably make my brain hurt.

    • by vadim_t ( 324782 ) on Monday July 08, 2024 @04:14AM (#64608653) Homepage

      Indeed. If you spend too much time in random, why not implement your own?

      Because PRNGs have state, and state can be arbitrarily replicated by the OS, which is insecure. Eg, a VM is paused, cloned, and two clones now run at once. Or fork() is called. The PRNG inside both copies has the same state, and generates the same random numbers.

      The application code is at the mercy of the OS around it. It can't know about such external manipulations. So if this is a concern, your only resort is to go to the kernel every single time and ask for a true random value.

      I'm not sure what problem the VDSO is supposed to solve here, really.

      Performance. Syscalls are slow if you have to make one every time you generate a random number (for optimal security).
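      The fork() case is easy to demonstrate: both processes continue from the same PRNG state, so they produce the same next "random" value unless one of them reseeds. A sketch, using libc's rand() as a stand-in for any userspace PRNG:

      ```c
      /* After fork(), the child inherits a byte-for-byte copy of the
       * parent's address space -- including any userspace PRNG state.
       * Parent and child therefore emit identical streams from here on. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void)
      {
          srand(42);        /* PRNG state lives in this process's memory */
          rand();           /* advance the state a bit before forking */

          pid_t pid = fork();
          if (pid == 0) {
              printf("child:  %d\n", rand());  /* same state as the parent... */
              _exit(0);
          }
          printf("parent: %d\n", rand());      /* ...so the same next output */
          waitpid(pid, NULL, 0);
          return 0;
      }
      ```

      Both lines print the same number, which is exactly the duplication problem: a VM clone is the same situation, except the guest cannot even observe that it happened.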

      • by raynet ( 51803 )

        Well, application code can fetch extra entropy from many external sources if it doesn't trust the OS or HW.
        And the randomness will quickly diverge in a cloned system. If you feel that is an issue for your app, include external sources of entropy, and perhaps also mix things like the MAC address into the pool, so that when the VM is cloned the MAC must change (if it doesn't, then the clone is technically invalid to interact with the external network).

        • by vadim_t ( 324782 )

          Or... the OS could actually make it convenient and save every user that needs randomness the trouble of doing all of that. Because that's what operating systems exist for in the first place, and the kernel already has all the required randomness collection, hardware interfaces, and so on, which work great and have been worked on by multiple very smart people.

          All that's needed is better performance to access what already exists.

      • The PRNG inside both copies has the same state, and generates the same random numbers.

        Which is why it had been strongly advised for decades to factor in current time ("highest precision available") into the randoms. (At least when reseeding the pool.)

        Eg, a VM is paused, cloned, two clones now run at once.

        Why do people clone VMs? For RNGs to become a problem, such cloning would have to happen en masse, which sounds a bit insane. (Or is it another fad due to "cloud computing"'s quirky pricing schemes?)

        And in well-implemented cloning, I expect the OS has no idea that it was cloned. And that again brings us back to the ancient advice: factor in the current

      • by jonadab ( 583620 )
        In 100% of the scenarios where you might be running in a malicious VM you can't detect or control, you also can't trust the kernel, for exactly the same reasons.
    • by gweihir ( 88907 )

      You do not need to reseed. Unless you use a crappy generator, seeding once is enough. And there are enough known-good generators out there that you can use.

      • by qbast ( 1265706 )
        Yes, you sometimes need to reseed - for example when you clone a VM
        • Why the hell are people "cloning VMs"? What fresh hell is that? Esp. since it's implied that they clone a live VM.

          Can anyone enlighten me to the use-cases?

          • by gweihir ( 88907 )

            Why the hell are people "cloning VMs"? What fresh hell is that? Esp. since it's implied that they clone a live VM.

            Can anyone enlighten me to the use-cases?

            Simple: Laziness, stupidity, incompetence. Of course these people will claim that they are doing it right and that _others_ have to fix the mess they made.

        • by gweihir ( 88907 )

          If you clone a VM, you need to reseed as part of the cloning process, not as part of running the clone. Anything else is too risky. You also need to change host keys, user keys, etc.

    • Linus, a noob in security, back then rolled out his own primitive implementation based on SHA(?) hash, and despite decade of heavy critique, it was proven to be good implementation.

      He did all but explicitly admit that he changed some code in that "random" routine based on NSA suggestions. You shouldn't trust it.

    • vDSO solves this problem in the same way it solves gettimeofday() performance.

      Kernel maintains a buffer with entropy data in it, vDSO pulls from it.
      vDSO is user-space code, but it ensures that the buffer and code that accesses it can change without breaking the cardinal rule (don't break userspace)

      I don't understand your confusion as to how this can help.
      I think you perhaps underestimate the cost of "fetching the random from the kernel".
      Context switches are expensive.
      • Context switches are expensive.

        Syscall != context switch.

        Try to benchmark the "getpid()" to see the actual overhead of a syscall.

        Good RNGs I've seen in the past were much slower than a syscall's overhead. If you take out the syscall overhead, you barely improve anything.

        Practical example, on my temp Ubuntu VM with gcc12. Best of 5 (because VMs are funny this way):

        getpid: 0.159092 seconds, 1000000 calls
        cost per syscall: 0.000000159 sec
        cost per syscall: 159.09 nsec

        getrandom(4,0): 0.370854 seconds, 1000000 calls
        cost per syscall: 0

        • Syscall != context switch.

          syscall always equals context switch.

          Try to benchmark the "getpid()" to see the actual overhead of a syscall.

          You're not benchmarking what you think you are ;)
          You're benchmarking libc's cached value.

          • Actually- that seems way too high. I wonder if they got rid of the getpid() cache.
            You should use syscall(SYS_getpid) for your benchmarks though.

            And really, your argument here is silly, if I'm reading it correctly.
            "Since we can't get it down to 20ns, we should not bother taking the nearly 100% latency improvement."

            Is that about the gist of it?
            • we should not bother taking the nearly 100% latency improvement

              Where have you found the "100% improvement"???

              $ strace -ttT openssl ecparam -name prime256v1 -genkey -noout -out private-key.pem
              ... [snip] ... 22:59:47.879849 openat(AT_FDCWD, "private-key.pem", O_WRONLY|O_CREAT|O_TRUNC, 0600) = 3 <0.000073>
              22:59:47.879980 fcntl(3, F_GETFL) = 0x8001 (flags O_WRONLY|O_LARGEFILE) <0.000029>
              22:59:47.880389 brk(0x55c208346000) = 0x55c208346000 <0.000033>
              22:59:47.880541 futex(0x7f509a67ef00, FUTEX_WAKE_PRIVATE, 2147483647) = 0 <0.000029>
              2

              • $ strace -ttT openssl ecparam -name prime256v1 -genkey -noout -out private-key.pem
                ... [snip] ...
                22:59:47.879849 openat(AT_FDCWD, "private-key.pem", O_WRONLY|O_CREAT|O_TRUNC, 0600) = 3
                22:59:47.879980 fcntl(3, F_GETFL) = 0x8001 (flags O_WRONLY|O_LARGEFILE)
                22:59:47.880389 brk(0x55c208346000) = 0x55c208346000
                22:59:47.880541 futex(0x7f509a67ef00, FUTEX_WAKE_PRIVATE, 2147483647) = 0
                22:59:47.880638 getpid() = 1400
                22:59:47.881093 brk(0x55c208367000) = 0x55c208367000
                22:59:47.881182 brk(0x55c208366000) = 0x55c208366000
                22:59:47.881353 getrandom("\x48\x1f\x37\xed\x4e\x98\x0a\x56\x6e\x41\x85\xe5\x02\x28\xc7\x20\x54\x65\x63\x97\x09\x63\x98\x74\xe8\xf9\xb3\xa7\xfd\x7c\xee\x5b"..., 48, 0) = 48
                22:59:47.881445 getpid() = 1400
                22:59:47.881528 getpid() = 1400
                22:59:47.881609 getpid() = 1400
                22:59:47.881685 getpid() = 1400
                22:59:47.881811 getpid() = 1400
                22:59:47.882338 newfstatat(3, "", {st_mode=S_IFREG|0600, st_size=0, ...}, AT_EMPTY_PATH) = 0
                22:59:47.882444 write(3, "-----BEGIN EC PRIVATE KEY-----\nM"..., 227) = 227
                22:59:47.882540 close(3) = 0

                Uh, math.
                If the total latency is near 370ns, and the syscall overhead is 160, what are we left with?

                P.S. Heresy and ignorance. The getpid() is never cached. Never. getpid() is always a syscall. For very many good (MT-safety and security) reasons. Also, cache, as a global per-process variable, requires a likewise global lock. (Glibc has a number of those - and every one of them occasionally pops up in benchmarks as a bottleneck.)

                Don't be an arrogant asshole.
                You Are Wrong. [sourceware.org]

                P.P.S. Here you go. The getpid() is on the list of auto-generated asm syscalls of the libc. [github.com]

                Of course it is- because it's a fucking syscall- even when it was cached, it was *still* autogenerated.
                It merely had a stub that pulled the cached value instead of entering the kernel to retrieve it.

                • And yes, I'm aware I worded the percentage strangely-
                  The latency is near 100% (73%- ok, maybe I should have said, "approaching") worse with the syscall, vs the hypothesized vDSO call.
                  It's only "43% percent better".
                  On my aarch64 test though, the syscall latency is actually *worse* (150ns) than the derived time of getrandom() (100ns)

                  Quibbling over that is a pretty clear sign you've lost this argument, beyond your actual technical inaccuracies (getpid() is _never_ cached? tell me about these syscalls that
                • Uh, math.

                  Yes, math. The call to getrandom() took 30us - in the private key generation that itself took ~20ms. But your math somehow gives you "100% better".

                  Ludicrous.

                  Don't be an arrogant asshole.

                  a. Read the damn change you quote. They don't cache getpid() in glibc - they simply avoid calling it too many times in nptl.

                  b. You are welcome to show me the "cache" in glibc code.

                  Or you can stop posturing, and just look at the disasm of the syscall.

                  You literally have no idea what you are talking about wrt syscalls and libc. Or performance probl

                  • Yes, math. The call to getrandom() took 30us - in the private key generation that itself took ~20ms. But your math somehow gives you "100% better".

                    Yup- should have said it's "nearly 100% worse for the syscall" rather than "the vDSO is nearly 100% better". It took 37us, and 21us.
                    30 and 20 are convenient roundings, wouldn't you say?

                    a. Read the damn change you quote. They don't cache getpid() in glibc - they simply avoid calling it too many times in nptl.

                    No, they didn't.
                    -#if IS_IN (libc)
                    -static inline __attribute__((always_inline)) pid_t really_getpid (pid_t oldval);
                    -
                    -static inline __attribute__((always_inline)) pid_t
                    -really_getpid (pid_t oldval)
                    -{
                    - if (__glibc_likely (oldval == 0))
                    - {
                    - pid_t selftid = THREAD_GETMEM (THREAD_SELF, tid);
                    - if (__glibc_li

                  • Now, I want you to admit you are a fucking moron and that you were wrong. It's ok, you can do it.
                  • machine: Linux * 2.6.32-754.35.1.el6.x86_64 #1 SMP Sat Nov 7 12:42:14 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
                    libc: glibc-2.12-1.212.el6_10.3.x86_64
                    getpid libc call: 0.058719 seconds, 10000000 calls
                    cost per syscall: 0.000000006 sec
                    cost per syscall: 5.87 nsec

                    getpid syscall: 4.005963 seconds, 10000000 calls
                    cost per syscall: 0.000000401 sec
                    cost per syscall: 400.60 nsec

                    clock_gettime vDSO call: 0.302681 seconds, 10000000 calls
                    cost per (not)syscall: 0.000000030 sec
                    cost per (not)syscall: 30.27 nsec

                    clo
        • A better apples-to-apples test, dodging any getpid() caching that may be hiding in our libc:
          Same criteria, but aarch64 and gcc13.
          syscalls are using svc, equivalent to x86-64 syscall instruction.

          clock_gettime vDSO call: 0.047155 seconds, 1000000 calls
          cost per (not)syscall: 0.000000047 sec
          cost per (not)syscall: 47.16 nsec

          clock_gettime svc: 0.187986 seconds, 1000000 calls
          cost per syscall: 0.000000188 sec
          cost per syscall: 187.99 nsec

          getrandom svc call: 0.250148 seconds, 1000000 calls
          cost per syscal
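          For reference, a harness of the rough shape presumably behind numbers like these (a hypothetical reconstruction, not the poster's actual code): time N calls of clock_gettime() via libc (the vDSO fast path), via a raw syscall(2), and getrandom() via a raw syscall, so no libc shortcut can interfere.

          ```c
          /* Benchmark sketch: compare the vDSO fast path against forced
           * syscalls. syscall(2) always enters the kernel, bypassing the
           * vDSO and any libc caching. Numbers vary by machine and kernel. */
          #include <stdio.h>
          #include <sys/syscall.h>
          #include <time.h>
          #include <unistd.h>

          #define N 1000000

          static double elapsed(struct timespec a, struct timespec b)
          {
              return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
          }

          static void report(const char *name, double secs)
          {
              printf("%s: %f seconds, %d calls\n", name, secs, N);
              printf("cost per call: %.2f nsec\n\n", secs / N * 1e9);
          }

          int main(void)
          {
              struct timespec t0, t1, ts;
              char buf[4];

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (int i = 0; i < N; i++)
                  clock_gettime(CLOCK_REALTIME, &ts);          /* vDSO path */
              clock_gettime(CLOCK_MONOTONIC, &t1);
              report("clock_gettime vDSO call", elapsed(t0, t1));

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (int i = 0; i < N; i++)
                  syscall(SYS_clock_gettime, CLOCK_REALTIME, &ts); /* forced syscall */
              clock_gettime(CLOCK_MONOTONIC, &t1);
              report("clock_gettime syscall", elapsed(t0, t1));

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (int i = 0; i < N; i++)
                  syscall(SYS_getrandom, buf, sizeof buf, 0);  /* always a syscall today */
              clock_gettime(CLOCK_MONOTONIC, &t1);
              report("getrandom syscall", elapsed(t0, t1));
              return 0;
          }
          ```

          The gap between the first two measurements is the syscall entry/exit overhead the vDSO proposal is trying to shave off getrandom().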
  • I do find it interesting (although rather outside my ken), but the discussion itself seems quite non-controversial. Is it (the discussion) actually news? Linus just seems to be asking the sorts of questions we'd expect someone to ask.

    • by sarren1901 ( 5415506 ) on Sunday July 07, 2024 @06:20PM (#64607927)

      He did say the word "damn" once in the summary parts. I'm sure some very thin skinned individual took this as hostile.

      I thought it all seemed very normal and well argued. Especially the part where he's just asking someone to spell out why needing this is so important. Loved the part where he says, "We don't add stuff "just because we can"."

      We need so much more of THIS than what we get in every other direction.

  • by fahrbot-bot ( 874524 ) on Sunday July 07, 2024 @06:12PM (#64607923)

    We don't add stuff "just because we can".

    This isn't systemd. :-)

  • by www.sorehands.com ( 142825 ) on Sunday July 07, 2024 @06:52PM (#64607975) Homepage

    Linus' response sounds like a response to a Karen.

    I can understand that response. I can appreciate the inverse; as engineers, we always look for inefficiencies and failure points. But we always have to consider the real-world implications. Is it worth optimizing a keyboard input routine that handles a keystroke in 1 ns vs. 1 ms?

  • by ThePhilips ( 752041 ) on Sunday July 07, 2024 @07:08PM (#64608001) Homepage Journal

    The thread with the patches. [kernel.org]

    IIRC, VDSO are executed in user-space, thus subject to signals. And the guys are using global variables to maintain the RNG state. I would have rather avoided anything with global state in VDSO. (But then I'm not a kernel developer, so what do I know.)

    P.S. If of any relevance to anybody: it's for 64-bit archs only.

    • The thread with the patches. [kernel.org]

      IIRC, VDSO are executed in user-space, thus subject to signals. And the guys are using global variables to maintain the RNG state. I would have rather avoided anything with global state in VDSO. (But then I'm not a kernel developer, so what do I know.)

      P.S. If of any relevance to anybody: it's for 64-bit archs only.

      To me that makes it even more objectionable since it means yet another block of IFDEF-ing (as if there isn't enough of that already in Linux code) at compile time so something like an ARM kernel won't have it.

      • > something like an ARM kernel won't have it.

        Why not aarch64?

        Is this an assembly patch?

        • TFS mentioned SSE2, which is x86

        • It's not limited to AMD64, but to 64-bit archs generally. So aarch64 (and other 64-bit archs) should theoretically be capable of implementing it.

          The 64-bit limitation stems from using a new bit flag (for the virtual memory part of the patch) that doesn't fit into 32 bits.

          TL;DR, the patch also adds a new flag to the memory mappings that would cause the mapping to be zeroed (as if freshly mmap'ed) in the fork's child (i.e. not copied, left blank). That is used to prevent children getting verbatim copy of
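          The described semantics (mapping comes up zeroed in the fork child instead of being copied) resemble the existing madvise(MADV_WIPEONFORK), available since Linux 4.14 for anonymous private mappings. The vDSO patch defines its own VM flag, so this is only an analogy, but it demonstrates the behaviour:

          ```c
          /* MADV_WIPEONFORK demo (Linux >= 4.14): the marked anonymous
           * mapping is zero-filled in the fork child rather than copied,
           * so secret state (e.g. an RNG pool) never leaks into children. */
          #define _GNU_SOURCE
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int main(void)
          {
              char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              madvise(p, 4096, MADV_WIPEONFORK); /* child sees zeroes, not a copy */
              strcpy(p, "secret pool state");

              if (fork() == 0) {
                  /* In the child the page reads back zero-filled. */
                  printf("child sees: '%s'\n", p);
                  _exit(0);
              }
              wait(NULL);
              printf("parent sees: '%s'\n", p);   /* parent keeps its data */
              munmap(p, 4096);
              return 0;
          }
          ```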

    • IIRC, VDSO are executed in user-space, thus subject to signals.

      Correct.

      And the guys are using global variables to maintain the RNG state.

      All vDSOs use some concept of a "global variable" for their state- that's why they're vDSOs.
      Currently only one of them uses a global mapped from the kernel- the current time.
      getrandom() will use the kernel's RNG state.

      I would have rather avoided anything with global state in VDSO. (But then I'm not a kernel developer, so what do I know.)

      I think that may be a weird hangup of yours regarding someone having taught you that the word global is a bad word in programming, like goto.

      They did you a disservice, if so.

      • Somewhat justified comment, but.

        There is no harm exposing how kernel computes the current time and its internal state variables. There is definitely security implication when anybody can peek into the random pool of the kernel. To me these two are incomparable.

        P.S. Of course the global variables are bad. Often: evil. (And by the way I personally have little against goto.) Global variables have impact on the whole application/system (see: global locks), whereas gotos are limited to a function. Replacing

        • There is no harm exposing how kernel computes the current time and its internal state variables. There is definitely security implication when anybody can peek into the random pool of the kernel. To me these two are incomparable.

          It's a random pool for your process- not the entire kernel.
          You are correct that it would be a massive security violation to leak any state that couldn't be easily retrieved by other processes already. That's not happening.

          The kernel has thousands of global symbols.
          Saying that "globals are bad" is very reductive.

          In software with clearly defined scoping, it can make sense to make sure everything is scoped correctly.
          Your interrupt handler isn't started up with a reference to every possible object it may

  • When Linus Torvalds dies? I don't need answers, just something to think about.

  • Odds are Donenfeld has thought this through.

    What problem is he trying to solve?

  • Making applications that read random bytes out of the L' ' cache or its own page file entries as entropy, then storing a database of random (ASCII ??) things made from that...? Might be counterproductive....
  • I suspect someone has the means to swap out such vDSOs with malicious ones. If you can do that with getrandom(), then you've broken every crypto algorithm out there and it may take years before someone notices.
