Operating Systems Linux

Basic Rust Support Merged For Upcoming Linux 6.1 (phoronix.com) 83

"This Monday, the first set of patches to enable Rust support and tooling was merged for Linux 6.1," writes Slashdot reader sabian2008, sharing an update from longtime kernel developer Kees Cook: The tree has a recent base, but has fundamentally been in linux-next for a year and a half. It's been updated based on feedback from the Kernel Maintainer's Summit, and to gain recent Reviewed-by: tags. Miguel is the primary maintainer, with me helping where needed/wanted. Our plan is for the tree to switch to the standard non-rebasing practice once this initial infrastructure series lands. The contents are the absolute minimum to get Rust code building in the kernel, with many more interfaces[2] (and drivers -- NVMe[3], 9p[4], M1 GPU[5]) on the way.

The initial support of Rust-for-Linux comes in roughly 4 areas:
- Kernel internals (kallsyms expansion for Rust symbols, %pA format)
- Kbuild infrastructure (Rust build rules and support scripts)
- Rust crates and bindings for initial minimum viable build
- Rust kernel documentation and samples
Further reading: Linux 6.0 Arrives With Support For Newer Chips, Core Fixes, and Oddities

  • so how can Rust make it into the kernel when SystemD is at the gates?

  • How about improving c compilers instead and focus on missing wifi drivers instead of bloating the kernel with a new toolchain which takes longer to compile. This is why Microsoft gets away with telemetry and Microsoft accounts because they know Linux is infighting. Linux is 31 years old and still can't focus on the basics.
    • by pr0nbot ( 313417 ) on Wednesday October 05, 2022 @05:32AM (#62939955)

      I follow Linux/kernel developments on lwn.net. Linus is a pragmatic fellow and his assessment of Rust is positive. I don't have the quote to hand, but it was something like, it's the first hip language he's looked at that fixes a lot of things that suck about C without adding a load of new suckage like C++. So he's receptive to the idea of Rust in the kernel. From the outside, there's no great in-fighting that I can see. As I understand it the main issue with drivers is undocumented hardware.

      • by serviscope_minor ( 664417 ) on Wednesday October 05, 2022 @05:55AM (#62939985) Journal

        There isn't a lot of "suckage" about C++. This view mostly stems from an emotional rant Linus made circa 1996, before standard C++, when compilers were less good, and it rested on a lot of misunderstandings, such as insisting that C++ has to be written in a particular, not very good, style. Combine that with being surrounded by massive sycophants, and his opinion was a bit warped, with nothing to correct it.

        Thing is if C++ is so suckage, then how come you need C++ to write the compilers to compile the kernel?

        What's clear is that Linus has matured somewhat in the last 30 years, which is fine. What's a shame is that the kernel has been stuck in C-land for all this time. All this means is that there's a massive ton of extra boilerplate which C++ does for you (e.g. constructors), and the VFS system is full of hand-written OO which is less memory efficient than the C++ style (and also nobbles the branch target predictor while it's at it).

        • by Viol8 ( 599362 ) on Wednesday October 05, 2022 @06:20AM (#62940013) Homepage

          While C++ has improved a lot and is far more functional than back in '96, the syntax unfortunately is getting to the point where it could challenge Perl for the Line Noise Cup. You only have to look at lambda definitions for an example.

          I realise sometimes they don't have much choice due to backwards compatibility with older versions of C++ and C, but a clean-sheet language like Rust will probably look a lot more readable to beginners who want to pick up a systems programming language and don't want a mountain to climb.
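For comparison with the C++ lambda syntax the poster complains about, a minimal sketch of Rust's closure syntax (names are hypothetical): there is no explicit capture list, since captures are inferred from use.

```rust
// Rust closure syntax, for comparison with C++ lambdas: no capture
// list is written out; captures are inferred from how the closure uses
// the surrounding variables.
fn main() {
    let base = 10;
    let add = |a: i32, b: i32| a + b + base; // captures `base` by reference
    println!("{}", add(2, 3)); // prints 15
}
```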

          • by DarkOx ( 621550 ) on Wednesday October 05, 2022 @07:54AM (#62940181) Journal

            If you want a systems language that's readable and approachable for beginners, we already have it -- it's 'C', or possibly 'C with Classes'.

            The sensible thing to do is not re-invent the wheel here; make sure whatever you do is linker compatible with C. Create a C or 'C with Classes' dialect that adds some bounds checked array/vector types with iterators and then sit down and think really hard about what changes need to be made to casting, promotion, and void pointers.

            I am not going to pretend that last part is easy. It is, however, way easier than inventing and instructing people in an entirely new language, and replacing or adding an entirely separate toolchain to mature projects like Linux.

            I have looked at Rust quite a lot and frankly I just don't see what is compelling about it besides the safety elements. The syntax is not really an improvement, and the learning curve for using it well seems to be pretty steep. For application-layer stuff, having to have yet another runtime is a PITA. Don't read that to say Rust is bad from a technical perspective; those are mostly negatives only in the context of already-existing ecosystems. If it were 1972 and not 2022, I'd probably be saying Rust is really groovy-mannnn. However it's NOT 1972, and it's not 1992; it's not even 2002. Reality is we have a really big corpus of nice things like GNU and Linux in C and a huge set of mature, really good tooling around C. Uplifting that stuff to an improved C is easier than starting again. Rust is just 'doing it the hard way' for mostly NIH reasons, it seems to me.
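For context on the "bounds-checked array/vector types with iterators" the post above proposes for a C dialect: this is roughly what Rust slices already provide. A minimal sketch:

```rust
// Bounds-checked access in Rust: plain indexing panics on overflow
// instead of silently reading out of bounds, and `get` returns an
// Option so the out-of-range case can be handled explicitly.
fn main() {
    let xs = [10u16, 20, 30];
    println!("{}", xs[1]);          // prints 20
    println!("{:?}", xs.get(5));    // prints None -- no silent overread
    let sum: u16 = xs.iter().sum(); // iterators avoid manual index math
    println!("{}", sum);            // prints 60
}
```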

            • by sg_oneill ( 159032 ) on Wednesday October 05, 2022 @09:33AM (#62940413)

              Forget the syntax. It looks good to me, not to others. That's not *actually* the selling point here.

              It's the safety. That safety is the whole damn point. 90% of the derp-outs that Linus is constantly having to reject or yell at people about on mailing lists are largely precluded by Rust's borrow checker and its type-safety system.

              And no, there is no runtime. It isn't Go. It's just a different front end to LLVM, with a few conventions that require a bit of preparation to work with the kernel (notably, the standard library isn't there, because kernel space != user space).

            • by Uecker ( 1842596 )

              I agree. In fact, you can get a lot of safety already with C if you use the right tools and compile-time proofs could be layered on top of it using new annotations. In fact, Rust was inspired by C dialects such as Cyclone.

              I personally would love to see C get improved in this direction as I do not really like Rust's syntax, complexity, horrible compile times, lack of portability and stability, etc.

            • I have looked at Rust quite a lot and frankly I just don't see what is compelling about it besides the safety elements. The syntax is not really an improvement, the learning curve for using it well seems to be pretty steep. For application layer stuff having to have yet another run time is PITA.

              You just really badly contradicted yourself here. If you really had looked at Rust quite a lot, as you claim, you'd know that any runtime it has is completely optional. You can literally use your OS's C runtime if you wish, or no runtime at all.

          • Proposed alternative syntaxes for C++ actually make the lambda syntax the only syntax for defining functions.

            I think it's just lack of familiarity that makes you think one syntax is worse than another.

            • by Viol8 ( 599362 )

              I've been using C++ since the 90s, so no, it's not lack of familiarity; it's more a case of not having blinkers on and realising the C++ syntax is a dog's dinner.

                • I've been using C++ since the 90s, so no, it's not lack of familiarity; it's more a case of not having blinkers on and realising the C++ syntax is a dog's dinner.

                So have I, and am paid 6 figures a year to continue to do so. And I entirely disagree with you.
                If you're looking for dogshit syntax, Rust gives C++ a solid run for its money, and in some places, knocks it out with ease.

                That being said, I fully support Rust in the kernel.
                It's not like hackery required to try to make kernel functions safe is beautiful.

          • While C++ has improved a lot and is far more functional than back in '96, the syntax unfortunately is getting to the point where it could challenge Perl for the Line Noise Cup.

            Have you even looked at the line noise from Rust?

            simple example: static keymaps: [[[u16; 3]; 2]; 1]
            C++ version: const uint16_t keymaps[][2][3]

            I honestly tried to learn Rust but even the tutorials hit me with unknown and unexplained syntax.
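To make the comparison above concrete, here is the Rust declaration from that post as a compilable sketch (the name and values are hypothetical). Both declarations describe the same shape -- one block of two rows of three 16-bit values -- with Rust's dimensions reading outside-in:

```rust
// Nested fixed-size arrays: [[[u16; 3]; 2]; 1] is one block of
// two rows of three u16 values each.
static KEYMAPS: [[[u16; 3]; 2]; 1] = [[[1, 2, 3], [4, 5, 6]]];

fn main() {
    let row = KEYMAPS[0][1]; // type inferred as [u16; 3]
    println!("{}", row[2]);  // prints 6
}
```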

            • First, you don't even need that outer slice; you're trying to make it look like Brainfuck when it doesn't need to. Second, look at what you had to do to get that: Rust is just u16, and const is always implicit. Now consider that you will probably have a lot more variables that are integer types, and most of them either will be or at least should be const anyway (if not, you're doing it wrong), and that Rust also supports implicit types, so the vast majority of your variables will look like "let foo = bar".

              • you're trying to make it look like brainfuck when it doesn't need to.

                This is just a snippet I took from someone complaining about Rust line noise.

                rust is just u16 and const is always implicit.

                * "uint16_t" versus "u16" seems like a non-issue, but the latter is slightly tidier. C/C++ would have had the same, but there were legacy issues with "u8" representing an 8-byte type rather than 8-bit. It's fortunate that Rust could ignore this.
                * Types being implicitly "const" is merely an optimization trick that even C/C++ compilers do.

                rust also supports implicit types, so the vast majority of your variables will look like "let foo = bar"

                Sounds like "auto". Anyway, I would be interested to see a study done on how that impacts project

                • There's a solution to that, and it's quite a nice one: inlay hints. Basically no need to type up the type name, but it's still displayed as if it was anyways. Also, for numeral types, there's an easy way to do what you want:

                  let foo = 8u16

                  In rust analyzer, this shows up as:

                  let foo: u16 = 8u16

                  • Oh also, if you were passing foo to a function that asks for a u16, then you'd just type this:

                    let foo = 8

                    And rust analyzer shows:

                    let foo: i32 = 8

                    Until you've passed foo to the function, then that line changes to this:

                    let foo: u16 = 8
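The inference behaviour described in the exchange above can be sketched in a few lines (function name is hypothetical): `foo` carries no annotation, and the call to a u16-taking function is what pins its type.

```rust
// Sketch of call-site type inference: `foo` has no annotation, and
// the later call to a u16-taking function determines its type.
fn takes_u16(x: u16) -> u32 {
    x as u32 + 1
}

fn main() {
    let foo = 8;                    // inferred as u16 from the call below
    println!("{}", takes_u16(foo)); // prints 9
}
```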

                    • Why does the analyzer maximize line noise? Why not simply show "name = value"? That would be simple, while the analyzer output seems to be maximally obtuse.

                    • It doesn't; it's an inlay hint. In other words, the text isn't actually there; rather, your editor just shows it as if it were, using a different font/size/etc. to indicate as much. And you can turn it off if you'd like.

                      If you want implicit types to show up as if they were explicit, the tooling has you covered. If you don't want it, because you simply don't need it, the tooling has you covered.

                      Make sense?

                  • I'm aware of the "solution", but the appended type can easily be removed for some reason, and now you may have created a bug. C++ has the same thing with auto and numeric types, but that doesn't make it a good thing. As I wrote before, automatic typing is a convenience that should not be overused.

                    In rust analyzer, this shows up as:

                    let foo: u16 = 8u16

                    Talk about noisy.

                    • I find the inlay hints extremely annoying and always turn them off. If I need to know the type of something, hovering works. I also think that inlay hints shouldn't be relied on, and it's sometimes good to add your own annotations. For example, no problem IMO with `let s = "something";` because there's no doubt that it's a `&str`, or `vec![0u8, 1, 2, 3, 4]` being a `Vec`. Same for `let foo = 8u16;` (I don't like that the analyzer adds the redundant `: u16` here, but at least it's consistent), and for
                    • I personally like inlay hints, mainly because I don't always know what the method I'm calling is going to yield, especially with a library I've never worked with before, and it makes prototyping much easier. And when you leave the types implicit, if you ever need to refactor, your job is that much easier.

        • There are things that C++ is bad at. ABI compatibility is one of them. The kernel needs good ABI compatibility.
          • That's not right. It's not even wrong.

            The underlying C++ ABI on Linux, the Itanium ABI, hasn't changed in a very long time. If you use complex types in interfaces, then if you change the definition of one of those types you will break the ABI. This is EXACTLY the same as C: if you pass a struct to a function not as a pointer, then if you change the struct definition, you need to recompile the function. The way around that in C/C++ is to pass a pointer, but that's not idiomatically done in C++, whereas it

            • No it's not the same. If I compile C++ code with Visual Studio and identical C++ code with GCC each compiler mangles the names in different ways and I cannot link the two together without getting undefined symbol errors.

              Then I have to write a shim in C, and be careful not to expose any C++ classes in any headers, finally I can link the software between two compilers.

              I actually do this often, as I manage a large software project that uses enough components to be forced into these stupid but necessary situations.

              • by Viol8 ( 599362 )

                extern "C"

              • If I compile C++ code with Visual Studio and identical C++ code with GCC each compiler mangles the names in different ways and I cannot link the two together without getting undefined symbol errors.

                There is no standard for C++ name mangling. It's really a platform-level decision, and whoever creates/owns the platform gets to define it. MS owns Windows, and got to define how mangling works there. GCC made different decisions, and their decisions got adopted by the POSIX world. Neither is right or wrong.

                "When in Rome, do as the Romans do" applies. It's really up to GCC to do things the MS way on Windows, if they want to interoperate with the Windows ecosystem.

                I understand that MS hasn't always provided

              • How is any of that relevant to the Linux kernel which is full of GCCisms and certainly won't compile with VS.

          • There are things that C++ is bad at. ABI compatibility is one of them

            Problematic as C++ can be in this department, Rust is even worse. Or to put it a different way, Rust is where C++ was decades ago. Rust has no ABI stability for "repr(Rust)" types; each version of the Rust compiler just does whatever its developers believe will give the best performance. Rust libraries also don't tend to care at all about ABI stability. This is why you will find that Rust programs are nearly always statically linked against Rust libraries (they may be dynamically linked against C libraries of course
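As a sketch of the usual workaround for the unstable native ABI described above: like C++, Rust can pin individual entry points to the stable C ABI. `extern "C"` fixes the calling convention and `#[no_mangle]` disables Rust's name mangling (the function name here is hypothetical):

```rust
// Pinning an entry point to the C ABI: `extern "C"` fixes the calling
// convention, and #[no_mangle] keeps the symbol name `add_u32` intact
// so C code can link against it directly.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

fn main() {
    println!("{}", add_u32(2, 3)); // prints 5
}
```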

        • by DrXym ( 126579 )
          Oh, there is plenty that sucks about C++. Every mistake you can make in C you can make in C++, since it is 99% a superset. On top of that it has its own layer of bullshit to deal with: the rule of 3, the rule of 5, pointer/reference abuse, weird constructor traps around type coercion, destructor traps around use of virtual, fragile base classes, multiple inheritance issues, exceptions, etc. etc.

          I'm not surprised the kernel didn't want to go there.

          • Every mistake you can make in C you can make in C++ since it is 99% a superset.

            Technically correct (the best kind) while being wrong in practice. Yes, you can write C++ code that plays to all the weaknesses of the language rather than its strengths, and thereby stay prone to making the same errors as in C. But in C++ you can automate a bunch of stuff you have to do by hand in C, thereby ensuring you never make those errors.

            For a simple example, if you have a class/struct w

            • by DrXym ( 126579 )
              Every single project you cite has had crashes related to NULL pointers, double frees and all the rest. Every. Single. One.
              • So?

                No one claims C++ is perfect. It's not like Linux has never had a kernel panic. The question is whether there are fewer bugs, and given that C++ lets you automate certain classes of bug into nonexistence, I'd say yes, there are fewer defects with decent C++ compared to decent C.

                • by DrXym ( 126579 )
                  C++ doesn't automate certain bugs into non-existence. That's nonsense. I suppose if you have RAII-style wrappers which were locked down to prevent inadvertent copy/assignment issues you might spare yourself some memory allocation issues, but of course memory allocation in the kernel is something that you don't take for granted as you do in userland. And you'd probably spend as much time locking down code to stop all the other classes of problems you've invited in. And the kernel being the kernel, it wouldn
                  • C++ doesn't automate certain bugs into non-existence.

                    Yes it does.

                    I suppose if you have RAII style wrappers which were locked down to prevent inadvertent copy / assignment issues you might spare yourself some memory allocation issues, but of course memory allocation in the kernel is something that you don't take for granted as you do in userland

                    You kind of do. kmalloc, etc. It's just not that hard. And constructors also mean you can never forget to initialize the class. That's a bug automated away.

                    And the ke
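The "bug automated away" point above can be sketched in Rust terms, since the thread is about both languages (the type and names here are hypothetical): `Drop` gives deterministic cleanup like a C++ destructor, and move semantics turn a double free into a compile error rather than a runtime crash.

```rust
// RAII in Rust: Drop runs deterministically when the owner goes out
// of scope, and moves make a double free a compile-time error.
struct Buffer {
    data: Vec<u8>,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        println!("releasing {} bytes", self.data.len());
    }
}

fn main() {
    let a = Buffer { data: vec![0; 16] };
    let b = a; // ownership moves; any later use of `a` fails to compile
    println!("len {}", b.data.len()); // prints "len 16"
} // Drop runs exactly once here
```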

        • Yes, C++ has made progress; however, in my opinion it is not enough to be used in kernel development. It is not just Linux that still uses C for kernel development: Windows, macOS, BSD, and AIX all use C for a reason.
          • ... It is not just Linux that still uses C for kernel development: Windows, macOS, BSD, and AIX all use C for a reason.

            FYI... Windows has used C++ in the kernel since at least NT 4.0, and the usage has increased over time. Public interfaces have remained C, but C++ is used in the implementation. To be fair, this is kernel-specific C++ code, not using STL or other C++ standard library things. I don't have a problem with that, but I know some people feel "It's not really C++ if I can't use STL".

        • by slack_justyb ( 862874 ) on Wednesday October 05, 2022 @08:22AM (#62940245)

          Thing is if C++ is so suckage, then how come you need C++ to write the compilers to compile the kernel?

          Because writing something that will run in ring-3 versus something that runs in ring-0 on the processor are different enough to be significant for the question you bring up. A compiler running doesn't have to concern itself with memory alignment, nor if the parameters passed were stack based or register based. All of that is magically abstracted away because you're running at such a high level. When you are programming something that runs in ring-0, you do not have any of the assurances and niceties you'd get with userspace programs.

          I think this is one of the things that so many people forget about what we are even talking about here. Writing a kernel requires much finer and more verbose control of the final machine code than, say, writing GIMP or gcc. Case in point: you cannot just create a function with some parameters and call it from your main function at ring-0 without serious consideration of what the compiler will actually create in the final binary. At ring-0, if the compiler puts those parameters on the stack, your kernel is going to be painfully slow as the processor tries to pull every function call into an L3 cache request. At ring-0, you just do your best to keep everything in registers to avoid all of that, hence why fastcall is a thing in the kernel. Shoot, memory alignment is a massive issue at ring-0, because if the kernel is asking for something not at proper byte borders, the processor assumes there's a pretty big effing reason for this and will note that in the prediction pipe. ARM will just NOT let the kernel even do that. You either request two pages or one page properly aligned; there isn't some partial pull it will allow.

          And that gets into the issue with C++. C++ makes it a pain in the ass to add the required things to have this level of control over the final output while still offering the things that people like about C++. References to objects -- not just pointers, but actual "&" references -- are a PIA to do while still allowing something like "const &" or a copy c-tor off a reference, because C++ makes a lot of assumptions about the state of the running machine that just aren't there when we talk about the kernel. There would be a need to have answers to these assumptions, compiler extensions, and you run into some problems with C++. There can be ways to deal with references "&" on parameters at the kernel level (maybe just treat "&" passing like "*" passing), but that then creates all kinds of issues with nice things like the "MyClass(&MyClass) c-tor" that someone might like to have.

          And the parameters thing is just one thing. The weird way C++ does type coercion, the way it treats the stack as just free real estate when unwinding, and so on. All of these things are great for userspace, but they are really hard issues to address at ring-0, where you simply cannot assume those things can be relied on.

          The thing is, Rust -- and hell, C as well -- by itself has issues that make it hard to use out of the box for kernel development. But both can easily be extended to make a lot of those issues disappear. Like computed gotos in C: that's not standard C, that's something added to gcc so that the Linux kernel can use it. Being able to know where the goto will land in the binary and what the "jmp" will look like at compile time matters, because at ring-0, a jump relative to the current instruction pointer isn't assured to play well with current L1 cache paging, especially if it's a long jump. The compiler needs to assure that a goto will fit inside what the processor has pre-warmed.

          All this means is that there's a massive ton of extra boilerplate which C++ does for you (i.e. constructors)

          All I'm saying is that the nice things C++ does for you assume a lot, and there aren't clear answers on how to satisfy some of those assumptions at the systems leve

          • Up until the last paragraph, I'm with you.

            However, then you start talking about what amounts to the STL/stdlib. That simply doesn't apply at kernel level, since you'd need to reimplement what you want from the STL within kernel space code, wherein you can already make any allocation tweak you want. Even disregarding the STL specification if you so choose.
            Unless I'm misreading something, in-kernel rust will have its own "stdlib".

          • Because writing something that will run in ring-3 versus something that runs in ring-0 on the processor are different enough to be significant for the question you bring up.

            I disagree, and will address the various points. Mostly it's that on the system call boundary, things get weird and you need some pretty wacky stuff. However, the majority of the kernel code isn't wacky stuff, it's code doing normal code things and benefits from the tools for larger scale organization and automation that C++ provides.

            Ta

          • Case in point, you cannot just create a function with some parameters and call it from your main function at ring-0 without serious consideration on what the compiler will actually create in the final binary.

            There are indeed code paths where that is true. However, they consist of approximately 0.001% of the kernel.
            Linux is fully preemptive, and utilizes fine-grained locking.
            There is precisely no need for its code to be more performant than anything else for the wildly overwhelming amount of what it does.

            ARM will just NOT let the kernel even do that.

            Nonsense.
            Though I'll grant you it was true, once upon a time (ARMv4 and below)

            Incorrect points aside, you're not completely wrong in the general case.. However, there are benefits to C++ that have made it w

        • by vbdasc ( 146051 )

          C++ (especially current C++) is extremely powerful, but at the same time extremely complex. This makes it good for cathedral-style projects, and bad for bazaar-style ones. Perhaps that's what Linus actually meant.

    • by MartinG ( 52587 ) on Wednesday October 05, 2022 @06:15AM (#62940001) Homepage Journal

      Improving C compilers is a useful exercise and should continue happening anyway, but it doesn't solve the same problems that Rust solves.

      Regardless of how good a c compiler is, it simply doesn't have the information available at compile time that a Rust compiler has available, enabling it to offer unique safety guarantees.

      As for missing drivers, I don't see any suggestion that those working on adding Rust support would otherwise have been working on wifi drivers anyway.
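A minimal sketch of the compile-time information the comment above refers to: the borrow checker tracks aliasing, so a pattern like using a reference into a vector after a reallocation -- invisible to a C compiler -- is rejected before the program runs.

```rust
// The borrow checker tracks aliasing: holding a shared reference into
// a Vec while mutating it (which may reallocate) is a compile error,
// not a latent use-after-free.
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow of `v`
    // v.push(4);      // un-commenting this fails to compile: cannot
    //                 // borrow `v` as mutable while `first` is live
    println!("{}", first); // prints 1
    v.push(4);             // fine here: the shared borrow has ended
    println!("{}", v.len()); // prints 4
}
```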

    • Taking longer to compile is an acceptable tradeoff if it leads to better memory protection. Improved C compilers can't fix the inherent unsafe memory model of the language. Plus, it's not the kernel team's job to improve the C compilers.
    • by DrXym ( 126579 )
      Because C compilers are the problem to start with. Or rather the C language is and you *can't* improve it without breaking it.
      • Nothing wrong with C. The OpenBSD team seems to have figured out secure code.

        • OpenBSD constantly trails behind other Unixlikes in features and functionality because the team is spending all of its time finding and fixing bugs that might lead to exploits. It's pretty hard to spin that as a victory for the language the system is written in.

          • What's the ratio of Linux kernel developers to OpenBSD developers? 10,000:1 probably. Linux could get serious about security but they'd have to drop the "if it compiles ship it" mentality.

        • The OpenBSD team seems to have figured out secure code.

          And if that's the kind of specific high expertise that is required then C is not suitable as a general purpose language. While you continue to rely on experts the world continues to suffer exploits due to a lack of experts.

          Also, OpenBSD is so far behind the curve that it can barely be called a modern OS. An excellent system for a mission-critical server, not so much for a general-purpose machine. If that level of outdatedness is required for security, then I reiterate: C is not suitable as a language.

          • Also, OpenBSD is so far behind the curve that it can barely be called a modern OS. An excellent system for a mission-critical server, not so much for a general-purpose machine.

            To what purposes is OpenBSD ill-suited? It's rock solid as a firewall, a server, or a desktop.
            What features do you think it's missing? It runs basically all the open source software out there, with unmatched security features.
            For desktop use, I'd just like to see ports of KDE Plasma and version 2 of FluidSynth.

        • by DrXym ( 126579 )
          Yes they have -- by using their limited resources to constantly check and double-check code, with the negative impact that has on their distro's progress. And since nearly half of CVEs are things like null pointers, double frees, buffer overflows, etc., much of that wasted time can be squarely attributed to the shitty design of C (and C++) that allows them to happen.
        • by Jeremi ( 14640 )

          Nothing wrong with C.

          What's wrong with C is that it was designed in a pre-Internet decade where efficiency was everything and security wasn't even an afterthought.

          C is very good at meeting the goals it was designed to meet. But the world has evolved since then, and additional requirements have been added that C isn't well-suited to meet. Now we need performance and security -- not just in the code written by The World's Best Programmer Who Never Makes a Mistake, but in all the code written by all the Mediocre Programmers also

    • This is why Microsoft gets away with telemetry and Microsoft accounts because they know Linux is infighting.

      I'm sorry, but do you genuinely think 99.9% of the Linux user base gives a shit about this? Do you think 99.99% of the Windows user base even knows what a kernel or even knows why they wouldn't give a shit about this?

      You are incredibly out of touch with the rest of the world.

      • Hell, I was a linux user for 16 years (moved to Macs with the M1), and kept a partition for booting windows because I also have a love for video games.
        I have written several thousand lines of kernel modules (one to break the eMMC protection in a popular T-Mobile phone so that we could root it, a couple of ACPI support modules for a few laptops I had, but mostly various network protocol helpers and network translation modules... so much time in xtables).

        And I seriously don't give one flying fuck about Mic
    • by UnknowingFool ( 672806 ) on Wednesday October 05, 2022 @07:06AM (#62940101)

      How about improving c compilers instead

      Why is improving C compilers part of the responsibility of Linux? That is GCC's domain: Linux is built with the gcc compiler and gcc tooling, but Linux is not part of gcc.

      This is why Microsoft gets away with telemetry and Microsoft accounts because they know Linux is infighting. Linux is 31 years old and still can't focus on the basics.

      Er, what? Please describe this infighting, as it seems you are mistaking people having opinions about Linux for infighting. As for basics, you seem unaware that Linux is used extensively in the server world. Desktop, not so much, but this article is about Linux kernel development, not application development.

      • It doesn't matter whose responsibility you think it should be. There is a constant drive to improve performance and we end up using the solution that offers that, whether it is thought to be (or even objectively is) technically superior or not. If that weren't true then security would be job #1, but it generally winds up taking a back seat to performance.

        Besides, this stuff is all [F]OSS, it's anyone's "responsibility" to improve it — whoever can, anyway.

        • Again, GCC is improving their compiler as is their role. This article is about Linux kernel development and their inclusion of Rust to improve security and reliability of the kernel.
          • There is substantial interaction between gcc and Linux developers. Compilation of the Linux kernel has been known to trip bugs in compilers because kernels are so extra. gcc and Linux projects both accept patches from outside sources.

            • There is substantial interaction between gcc and Linux developers. Compilation of the Linux kernel has been known to trip bugs in compilers because kernels are so extra. gcc and Linux projects both accept patches from outside sources.

              You do understand that improving compiler performance is still ongoing, right? You do understand that it is a different line of work from the kernel development Linus and his team do for each new release, right?

              • You do understand that improving compiler performance is still ongoing, right?

                Yes, but that has nothing to do with the price of tea. Many things are still ongoing. If you ever get near a point...

                You do understand that it is a different line of work from the kernel development Linus and his team do for each new release, right?

                Yeah, it's a different product. So what? Again, where's the point? When will you get to it?

                • My point for the OP is that improving compiler performance is not the main focus of Linus and his team. OP thinks only in binary: either the compiler gets improved or the kernel gets advanced, never both. These are done by separate teams and separate efforts.
                  • Well, all of those things are true. It is a general principle, for a zillion reasons, that one project doesn't necessarily take talent away from another project.

                    What I was aiming at was that more work on one actually can create improvements in the other. Because they're so well-linked, they have significant effects on one another.

      • Frankly, if you take C and add compiler-enforced memory safety with no garbage collection, abstractions without a runtime cost, and a macro system that understands the syntax of the language rather than doing simple text manipulation, you'll basically end up with something like Rust. Whether some way will be found to add C++-style classes is another question (both necessity and implementation).
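To make the parent's point concrete, here is a tiny userland sketch (plain Rust, not kernel code; `sum_of_squares` is made up for illustration) of two of those features: ownership gives memory safety with no garbage collector, and high-level iterator abstractions compile down to the same loop you would write by hand.

```rust
// A userland sketch of two features described above.
// 1) Memory safety without GC: ownership makes use-after-move a compile error.
// 2) Zero-cost abstractions: the iterator chain optimizes to a plain loop.

fn sum_of_squares(xs: &[u32]) -> u32 {
    // High-level code; rustc compiles this down to a simple loop over xs.
    xs.iter().map(|x| x * x).sum()
}

fn main() {
    let data = vec![1, 2, 3];
    let total = sum_of_squares(&data);
    assert_eq!(total, 14);

    // Ownership in action: `data` is moved into `consumed`, so any later
    // use of `data` is rejected at compile time, not at runtime:
    let consumed = data;
    // println!("{:?}", data); // error[E0382]: borrow of moved value: `data`
    assert_eq!(consumed.len(), 3);
    println!("sum of squares = {}", total);
}
```

The point is that both checks happen in the compiler, which is exactly the property kernel code wants: no runtime, no collector, no surprise pauses.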

    • > How about improving c compilers instead and focus on missing wifi drivers

      Do you know how hard it is to write a secure wifi driver in c? The talent pool is very small. Demand greatly outstrips supply.

      > instead of bloating the kernel with a new toolchain which takes longer to compile.

      Many of us remember starting linux compiles at the end of the work day in hopes it would be done by the next morning. That's how we got here. Kernel compiles are *fast* today.

      > This is why Microsoft gets away with

    • How about improving c compilers instead and focus on missing wifi drivers instead of bloating the kernel with a new toolchain which takes longer to compile

      Most end users will never compile a kernel, nor will even most corporate users any more, so why would compilation time be relevant?

      WiFi drivers and basic architectural features may sometimes be developed by the same people, but they're still orthogonal problems. And nothing is preventing OEMs from writing Linux drivers for their hardware and contributing them. It's not terribly difficult to find supported hardware. There are lots of garbage Windows drivers not worth using, as well. Out of the box and as del

      • Maybe someday Linux will be written mostly in Rust, or maybe this will just turn out to be an ill-fated experiment, but either way Linux can't evolve without trying new things. If that effort weren't being duplicated, it still wouldn't necessarily mean faster development of any other project, since more developers doesn't necessarily mean more useful development.

        Yes, especially since Linus has said that using Rust is an experiment right now. Linus has never said they are converting the entire kernel to Rust. They are trying Rust to see if it has tangible benefits. If it does not work out, I can see Linus removing it later. Some posters have reacted to Linux including Rust as if it were Linux abandoning C immediately.

        • I see more and more people coming into Slashdot discussions who have no clue that there were prior discussions on the same subject, and who won't do the work to find out. The "editors" and slashcode alike do us no favors by typically not linking relevant stories to new ones (which can absolutely be done programmatically using the categories and subject lines alone) but people barely RTFS and rarely RTFA, they're not gonna RTFO[ld]A anyway I guess. Usually half the comments in discussions like these could ju

        • by DrXym ( 126579 )
          Nor is there any reason to convert the kernel to Rust. If the existing C code works, it can carry on as it is. I expect, however, that there is a desire to make new drivers more stable out of the gate, and providing Rust is a way to encourage that.
    • Comment removed based on user account deletion
      • If Rust or some other alternative isn't used, then Linux's "31 year heritage" becomes a liability, and the case for a complete replacement becomes louder by the day.

        That's a bit of hyperbole.

        Rust might provide some utility, or it might not be worth the extra complexity; time will tell. But we aren't about to throw out the Linux kernel just because it hasn't taken up the language flavour of the month.

      • We use C because it was the best available language for computers with RAM measured in kilobytes

        This statement alone makes it very clear that you are not included in any cohort that makes decisions about which systems language to use.

  • Rust compiler (Score:5, Interesting)

    by fred6666 ( 4718031 ) on Wednesday October 05, 2022 @07:26AM (#62940141)

    So, is a rust compiler now needed in addition to a C compiler to compile any Linux kernels? Or only when some specific modules are selected?
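As merged, Rust is opt-in via CONFIG_RUST, so rustc is only needed when that option (and any Rust modules) is enabled. A sketch of how to check, assuming a Linux 6.1+ source tree (the `rustavailable` target is documented in the kernel's Rust quick-start guide; exact output depends on your toolchain):

```shell
# Sketch, run from a Linux 6.1+ source tree. A kernel configured without
# CONFIG_RUST still builds with only a C toolchain.

# Ask kbuild whether a usable Rust toolchain is present:
make LLVM=1 rustavailable

# The Rust options only appear in menuconfig when that check succeeds;
# confirm what your current config actually requires:
grep CONFIG_RUST .config || echo "CONFIG_RUST not set: rustc not needed"
```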

    • by Anonymous Coward

      You should see the list of requirements for building rust itself. At some point these little embedded platforms won’t even be able to natively compile their own kernel. It’s going to be cross compilation only.

        At some point these little embedded platforms won't even be able to natively compile their own kernel. It's going to be cross compilation only.

        It's already like that for many of these platforms, which are memory-constrained. Debian is not self-hosting on a pogoplug. The closest I could get would be to use distcc with a remote cross-compiler.

        • You are completely correct.
          I did embedded Linux development (including uClinux, a pseudo-Linux for machines with no MMU, back in the ARM7TDMI days) for many years.

          These days, with things like Raspberry Pis, you can compile on the device... if you hate your life, or are so enamored with the novelty of a process you don't understand unfolding before you that you derive some amount of dopamine from it... but for everything else, there is, and always has been, crosstool-NG.
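For anyone curious, the workflow the parent describes looks roughly like this (a sketch assuming an arm64 target and a Debian-style cross-toolchain prefix; substitute whatever your crosstool-NG build produced):

```shell
# Build the kernel on a fast host instead of on the embedded device itself.
# ARCH selects the target architecture; CROSS_COMPILE is the toolchain prefix.
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-

make defconfig                                   # configure on the host
make -j"$(nproc)" Image modules                  # cross-compile kernel + modules
make INSTALL_MOD_PATH=./rootfs modules_install   # stage modules for the target
```

The device only ever sees the finished `Image` and staged modules; it never runs the compiler at all.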

          Coward you're replying to is
      • Dude, what in the literal fuck are you talking about.
        I have done embedded linux programming for well over a decade (Started with the ARM7TDMI in the Motorola V3 RAZR. If you're one of the people who got fucked when Moto moved to RSA signed bootloaders, I'm the guy who saved you- you're welcome)

        You do not fucking natively compile the kernel on an embedded platform. The idea gives me cancer just thinking about it.

"To take a significant step forward, you must make a series of finite improvements." -- Donald J. Atwood, General Motors
