
Rust-Based Redox OS Is Nearly Self-Hosting After Four Years (theregister.co.uk)

Long-time Slashdot reader sosume quotes the Register: Redox OS, written in Rust and currently under development, is only "a few months of work away" from self-hosting, meaning that the Rustc compiler would run on Redox itself, according to its creator Jeremy Soller...

Redox has a POSIX-compliant C library written in Rust, called relibc. It is Linux-compatible both at the syscall API level and at the syscall ABI (application binary interface) level, given the same architecture.
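To make the ABI claim concrete, here is a minimal sketch (a hypothetical program, using the Linux x86-64 register conventions) of a raw write(2) syscall made without libc. ABI-level compatibility means a binary issuing calls like this could, in principle, run unchanged on Redox on the same architecture:

```rust
// Minimal sketch: a raw Linux x86-64 write(2) syscall, bypassing libc.
// ABI compatibility means the same syscall number and register
// conventions are honored by the kernel on the other side.
use std::arch::asm;

fn main() {
    let msg = b"hello via raw syscall\n";
    let ret: isize;
    unsafe {
        asm!(
            "syscall",
            inout("rax") 1usize => ret, // SYS_write in, return value out
            in("rdi") 1usize,           // fd 1 = stdout
            in("rsi") msg.as_ptr(),     // buffer pointer
            in("rdx") msg.len(),        // byte count
            out("rcx") _,               // clobbered by the syscall instruction
            out("r11") _,
        );
    }
    assert!(ret > 0);
}
```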

The article notes that the OS's latest release was version 0.5 last March, arguing that it's "best described as experimental..."

"Still, if Rust continues to grow in popularity, its characteristics of safety and unimpeded performance seem ideal for creating a new operating system, so perhaps Redox will become more prominent."
  • by Anonymous Coward on Sunday December 01, 2019 @11:42AM (#59473520)

    I'm sure it's a sin, too.

  • by nospam007 ( 722110 ) * on Sunday December 01, 2019 @11:43AM (#59473524)

    Is that like almost pregnant?

  • Not a great design. (Score:5, Informative)

    by Gravis Zero ( 934156 ) on Sunday December 01, 2019 @11:54AM (#59473552)

    Redox discards the UNIX "everything is a file" design. Instead it goes with "everything is a protocol", which ends up working in a similar fashion to how the Windows NT kernel functions.

    They claim that it somehow eliminates "buggy drivers" because of memory-safety features, but omit the fact that those features are disabled via the "unsafe" keyword for anything that interacts with hardware. They also talk about how fast it boots, but fail to mention that the downside of a microkernel is slow IPC.

    I like some of the safety features of Rust, but the Redox kernel design offers no improvement over existing kernels.
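    (For illustration, a hypothetical sketch of the "everything is a protocol" idea; the trait and types below are invented for this example and are not Redox's actual API:)

    ```rust
    // Hypothetical illustration only -- not Redox's real interface.
    // Instead of exposing resources as byte-stream files, each resource
    // speaks a small typed protocol, served by a user-space handler that
    // the kernel merely routes messages to.
    trait Resource {
        type Request;
        type Reply;
        fn handle(&mut self, req: Self::Request) -> Self::Reply;
    }

    // A toy "clock" resource whose protocol has exactly one request.
    struct Clock { ticks: u64 }

    enum ClockRequest { Now }
    enum ClockReply { Now(u64) }

    impl Resource for Clock {
        type Request = ClockRequest;
        type Reply = ClockReply;
        fn handle(&mut self, req: ClockRequest) -> ClockReply {
            self.ticks += 1; // stand-in for reading real hardware
            match req {
                ClockRequest::Now => ClockReply::Now(self.ticks),
            }
        }
    }

    fn main() {
        let mut clock = Clock { ticks: 0 };
        let ClockReply::Now(t) = clock.handle(ClockRequest::Now);
        println!("tick {t}");
    }
    ```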

    • Re: (Score:2, Troll)

      by gweihir ( 88907 )

      They claim that it somehow eliminates "buggy drivers" because of memory-safety features, but omit the fact that those features are disabled via the "unsafe" keyword for anything that interacts with hardware. They also talk about how fast it boots, but fail to mention that the downside of a microkernel is slow IPC.

      While Rust has some nice features, most of the claims made are direct or indirect lies. That community is rotten to the core. Nothing new.

    • by jythie ( 914043 ) on Sunday December 01, 2019 @12:06PM (#59473586)
      Eh, every generation of programmers needs to spend some time rediscovering the things they refused to listen to from the previous generation.

      Though to be fair, there is a fine line between not learning from the past and revisiting things to see how times have changed. Hard to say where this one will fall.
      • by gweihir ( 88907 )

        Eh, every generation of programmers needs to spend some time rediscovering the things they refused to listen to from the previous generation.

        Unfortunately true. But this is a general thing: many people have trouble learning from their own experiences, and almost nobody can learn from the experiences of others unless systematically taught. When programming finally becomes engineering, things will get better.

      • They haven't gotten around to rediscovering Algol 68 yet.

    • the downside of a microkernel is slow IPC.

      The IPC overhead is not nearly the worst part of a microkernel. The worst is that the different tasks do not share a coherent state of the system. For instance, if one task is creating a file in a directory, another task is busy moving that directory, and a third task is filling up the last free block on the file system, it becomes extremely complicated to manage this correctly when the tasks do not have access to the most up-to-date information.

      The simple cop-out solution is to assign one task to managing the entire filesystem, and serve requests from the other tasks, but this suffers from performance problems. If that task has to wait for one particular hardware request to complete, everything else comes to a halt.
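      (For illustration, a minimal sketch of that cop-out, with invented names: one thread owns all filesystem state and every other task talks to it over a channel, so the state stays coherent but requests are fully serialized:)

      ```rust
      // Sketch: a single server thread owns all filesystem metadata, so
      // there is exactly one coherent copy of the state -- at the cost
      // that a slow request stalls everything queued behind it.
      use std::collections::HashMap;
      use std::sync::mpsc;
      use std::thread;

      enum FsRequest {
          Create { path: String, reply: mpsc::Sender<bool> },
          Exists { path: String, reply: mpsc::Sender<bool> },
      }

      fn spawn_fs_server() -> mpsc::Sender<FsRequest> {
          let (tx, rx) = mpsc::channel::<FsRequest>();
          thread::spawn(move || {
              let mut files: HashMap<String, Vec<u8>> = HashMap::new();
              // Requests are handled strictly one at a time.
              for req in rx {
                  match req {
                      FsRequest::Create { path, reply } => {
                          let fresh = !files.contains_key(&path);
                          files.insert(path, Vec::new());
                          let _ = reply.send(fresh);
                      }
                      FsRequest::Exists { path, reply } => {
                          let _ = reply.send(files.contains_key(&path));
                      }
                  }
              }
          });
          tx
      }

      fn main() {
          let fs = spawn_fs_server();
          let (tx, rx) = mpsc::channel();
          fs.send(FsRequest::Create { path: "/tmp/x".into(), reply: tx.clone() }).unwrap();
          assert!(rx.recv().unwrap()); // freshly created
          fs.send(FsRequest::Exists { path: "/tmp/x".into(), reply: tx }).unwrap();
          assert!(rx.recv().unwrap()); // and now it exists
      }
      ```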

      • by gweihir ( 88907 )

        Sounds like a nightmare to get organized and then bug-free. Well, as usual, you cannot actually get rid of complexity, but you can always increase it with an unsuitable design. Not saying microkernels are generally unusable, but they do come with their own specific problems...

        • by jythie ( 914043 )
          In a weird way, this is actually why I am in favor of projects like this. As you say, micro kernels have their own problems, but they also have some advantages. If you are trying to develop a new language designed to address common issues in the field, seeing how well it can be applied to a known system with known issues is an interesting test.

          Moving away from Rust specifically, it can be worthwhile to return to these types of systems now and then to see if advances in other areas of computing have changed the tradeoffs.
      • by dog77 ( 1005249 )

        The simple cop-out solution is to assign one task to managing the entire filesystem, and serve requests from the other tasks, but this suffers from performance problems. If that task has to wait for one particular hardware request to complete, everything else comes to a halt.

        Wouldn't the task be multi-threaded, and thus not have to come to a halt waiting for the hardware any more than a kernel would?

        • Wouldn't the task be multi-threaded

          A multi-threaded filesystem without coherency between tasks is basically a distributed filesystem, and those are much harder to design, and will still lack performance. The basic problem is that one task (or thread) can make a change to the filesystem that impacts all the other tasks. In order for the other tasks to become aware of this change, they all need to exchange messages. But each task can only listen for a message at certain points in its execution path, so you need to wait for the task to get there.

          • by Anonymous Coward

            On a monolithic kernel, such as Linux, the common state of the filesystem is shared in memory, and is immediately available for every task that's working on it.

            A microkernel does not imply that the threads are running in different address spaces, only that the memory is protected between different subsystems. The Mill talks [millcomputing.com] describe a protection mechanism that makes possible highly secure and fast single address space operating systems. Crossing protection domains costs barely more than an ordinary function call, and sharing memory is as simple as sharing a pointer.

            Microkernel performance is terrible on legacy architectures, but so is all IPC between user processes.

            • A microkernel does not imply that the threads are running in different address spaces, only that the memory is protected between different subsystems.

              If the protection means that you cannot freely access the shared memory, you essentially have different address spaces. On the other hand, if you have protection barriers that you can lift at any point in time, you essentially have shared memory.

              It boils down to two options: either all threads share the same state of the system, which implies locking and exclusive access and all the associated problems, or your threads keep multiple copies of the state, and you get complicated coherency problems.
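              (For illustration, a minimal sketch of the first option, with invented names: one shared structure behind a lock, so every thread sees the same coherent state, but only by serializing on the mutex:)

              ```rust
              // Sketch of option one: a single shared state with exclusive
              // access. Every thread sees a coherent view, but only by
              // taking the lock, with all the contention that implies.
              use std::sync::{Arc, Mutex};
              use std::thread;

              fn main() {
                  let state = Arc::new(Mutex::new(Vec::<String>::new()));
                  let handles: Vec<_> = (0..4).map(|i| {
                      let state = Arc::clone(&state);
                      thread::spawn(move || {
                          // All mutation happens under the lock.
                          state.lock().unwrap().push(format!("entry from thread {i}"));
                      })
                  }).collect();
                  for h in handles { h.join().unwrap(); }
                  assert_eq!(state.lock().unwrap().len(), 4);
              }
              ```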

      • by tlhIngan ( 30335 )

        The IPC overhead is not nearly the worst part of a microkernel. The worst is that the different tasks do not share a coherent state of the system. For instance, if one task is creating a file in a directory, another task is busy moving that directory, and a third task is filling up the last free block on the file system, it becomes extremely complicated to manage this correctly when the tasks do not have access to the most up-to-date information.

        The simple cop-out solution is to assign one task to managing the entire filesystem, and serve requests from the other tasks, but this suffers from performance problems. If that task has to wait for one particular hardware request to complete, everything else comes to a halt.

      • That sounds like threading. The Linux kernel doesn't have a BKL anymore and can be running on multiple CPUs simultaneously.

        The solution on monoliths is to have semaphores. On microkernels, those semaphores are managed via IPC between services so that the relevant state is shared; whereas on monoliths, semaphores are managed by independent bodies of code that don't necessarily communicate.
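        (For illustration, a minimal sketch with invented names of a semaphore realized as message passing: permits are tokens in a channel, roughly the way a microkernel service mediates shared state:)

        ```rust
        // Sketch: a counting semaphore built from a channel. Acquiring
        // takes a token out of the channel (blocking if none is
        // available); releasing puts one back. This is the
        // message-passing analogue of a kernel-managed semaphore.
        use std::sync::mpsc::{sync_channel, Receiver, SyncSender};

        struct Semaphore {
            tx: SyncSender<()>,
            rx: Receiver<()>,
        }

        impl Semaphore {
            fn new(permits: usize) -> Self {
                let (tx, rx) = sync_channel(permits);
                for _ in 0..permits {
                    tx.send(()).unwrap(); // preload the permit tokens
                }
                Semaphore { tx, rx }
            }
            fn acquire(&self) {
                self.rx.recv().unwrap(); // blocks until a token is available
            }
            fn release(&self) {
                self.tx.send(()).unwrap();
            }
        }

        fn main() {
            let sem = Semaphore::new(2);
            sem.acquire();
            sem.acquire(); // both permits now held
            sem.release(); // give one back
            sem.acquire(); // and take it again
        }
        ```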

    • by Bengie ( 1121981 ) on Sunday December 01, 2019 @03:06PM (#59474010)
      About 1-2% of their code is "unsafe". Much easier to validate those few lines; all C code is unsafe. There are many anecdotes from veteran C programmers who were able to rewrite a complex chunk of C code in Rust in half the lines. On top of that, certain classes of bugs were completely eliminated. And Rust allows certain better algorithms and data structures that can't safely be used in C, because the code becomes too complex for humans to use correctly, plus "easy", safe concurrency.

      The fact of the matter is that Rust not only has static analysis baked into the language, the language was built around static analysis rather than having it as a bolted-on, half-baked option. The Rust community has been making tremendous headway in all areas, at rates much faster than nearly all other projects, with a fraction of the people and with few issues.

      Redox is an interesting experiment that is different enough from other microkernels of the past to be a useful case study. No doubt some useful knowledge will come from the project. And it's a great test bed for the Rust language as a whole, covering a gamut of use cases, exposing limitations and issues with the language.

      Rust is better than C in the same way C is better than Brainfuck. You can write good code in C, but Rust enforces more of it. And in my experience, the value of clean code increases exponentially. A project with lots of poor code and a little great code is barely better than all poor code. Even a project with 50/50 poor/great code is still a pain in the ass. But a project with mostly great code, where some of the poor code has been cleaned up, is much easier to work with. The biggest issue I have with handing off projects is code rot. Most programmers are not disciplined enough to deliver clean code, and messy code begets messy code. I've seen smallish projects where I put in enough effort to deliver immaculate code, in the hopes of it being easy to manage in the long run, only to have the next several programmers make a mess of it over time. In many of those situations, Rust would have been throwing all kinds of compiler errors because of bad practices.
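      (For illustration, a sketch of the pattern described above, with invented names: the unsafe surface is a few audited lines wrapped in a bounds-checked safe API, so the rest of the code cannot cause memory errors through it:)

      ```rust
      // Sketch: the 1-2% of unsafe code lives in one small, audited
      // place; everything else goes through the checked safe API.
      pub struct DmaBuffer {
          ptr: *mut u8,
          len: usize,
      }

      impl DmaBuffer {
          /// Safety: caller must guarantee `ptr` points to `len` writable
          /// bytes that outlive this struct. This is the audited surface.
          pub unsafe fn new(ptr: *mut u8, len: usize) -> Self {
              DmaBuffer { ptr, len }
          }

          /// Safe API: bounds-checked, so callers cannot corrupt memory.
          pub fn write_byte(&mut self, offset: usize, value: u8) {
              assert!(offset < self.len, "out of bounds");
              unsafe { self.ptr.add(offset).write(value) }
          }
      }

      fn main() {
          let mut backing = [0u8; 16];
          // The one unsafe call sits here, at the audited boundary.
          let mut buf = unsafe { DmaBuffer::new(backing.as_mut_ptr(), backing.len()) };
          buf.write_byte(3, 0xFF);
          assert_eq!(backing[3], 0xFF);
      }
      ```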
      • About 1-2% of their code is "unsafe". Much easier to validate those few lines.

        If it is possible to fully validate this code then what exactly is the excuse for failure to do so?

        The fact of the matter is that Rust not only has static analysis baked into the language, the language was built around static analysis rather than having it as a bolted-on, half-baked option.

        Really what it has baked in are constraints on what is allowed. You can impose similar constraints in any language including C.

        Redox is an interesting experiment that is different enough from other microkernels of the past to be a useful case study. No doubt some useful knowledge will come from the project. And it's a great test bed for the Rust language as a whole, covering a gamut of use cases, exposing limitations and issues with the language.

        Ultimately all that matters are results. All Rust needs to do is get real world outcomes demonstrating superiority over other languages. Currently all there seems to be is excuses for lack of real world results.

        Rust is better than C in the same way C is better than Brainfuck. You can write good code in C, but Rust enforces more of it. And in my experience, the value of clean code increases exponentially. A project with lots of poor code and a little great code is barely better than all poor code. Even a project with 50/50 poor/great code is still a pain in the ass. But a project with mostly great code, where some of the poor code has been cleaned up, is much easier to work with.

        I'm just going to step away now and let your own words speak for themselves.

        • by vadim_t ( 324782 )

          If it is possible to fully validate this code then what exactly is the excuse for failure to do so?

          That OS programming doesn't work under the constraints of safe Rust in some situations. E.g., in C, if you want to write to VGA video memory, you can just make a pointer to 0xA0000. Rust in safe mode doesn't allow that, so unsafe code must be written to deal with things like this; some things simply involve writing data to magic locations.

          On the other hand, there are plenty of situations in an OS kernel in which safe Rust works just fine.

          • That is all I get from the overview docs. Not very interesting.

            I suspect that there is interesting stuff there, but when the first several pages say nothing other than that they have syntax for unique pointers, I switch off.

            Maybe there is something better to read? Not written for an idiot?

            Rust does not have (real) garbage collection. That makes it unsuitable for 95% of real programming. And reference counting does not count -- it is slow and it fragments memory; it is 1970s technology. There are better approaches.

          • by slack_justyb ( 862874 ) on Sunday December 01, 2019 @10:39PM (#59475188)

            in C, if you want to write to VGA video memory, you can just make a pointer to 0xA0000

            And you would be writing bad code if you did exactly what you just said in C. You need to mark anything like that as volatile in C. And this is why Rust wraps that kind of thing in core::ptr::write_volatile and core::ptr::read_volatile: because most C programmers forget that you need to add volatile to that kind of memory access.

            Thank you, you've demonstrated exactly why Rust does the things that it does.
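            (For illustration, a minimal sketch of that pattern for a hypothetical bare-metal target; the address is the conventional VGA text-mode buffer and is purely illustrative:)

            ```rust
            // Sketch of the pattern described above. Volatile accesses
            // cannot be elided or reordered by the optimizer the way a
            // plain, non-volatile pointer write can.
            use core::ptr;

            const VGA_TEXT: *mut u16 = 0xB8000 as *mut u16;

            /// Write one character cell (ASCII byte + attribute) to VGA
            /// text memory.
            /// Safety: only meaningful where this region is mapped.
            unsafe fn put_cell(index: usize, ascii: u8, attr: u8) {
                let cell = ((attr as u16) << 8) | ascii as u16;
                unsafe { ptr::write_volatile(VGA_TEXT.add(index), cell) };
            }
            ```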

            • So you don't need to know to use the volatile keyword. You just have to know to use core::ptr::write_volatile and core::ptr::read_volatile. So much less to know...
              • I think his point is that in Rust it's impossible to do it without the volatile calls, whereas in C it's totally possible to do it without the volatile keyword, and it will definitely break at unexpected times.
                • Most people would use libraries that already handle it, so the people needing to know the volatile keyword would be the same people who implement the Rust volatile functions. In professional practice, there's no difference. At best, it's handholding for people new to hardware programming.
        • Really what it has baked in are constraints on what is allowed. You can impose similar constraints in any language including C.

          And yet, C does not do this. Why? Because then it would not be C. You're saying that you can just magically tell a compiler to add all of these constraints that aren't specified in the C standard. Well, technically, you can make a C compiler do whatever magical whim tickles your fancy. However, once you do that, it's no longer a C compiler but something else. That's what standards are for. If we're just going to say, "well, just have the compiler enforce whatever you want," then what's the point of making a standard at all?
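          (For illustration, a concrete example of a constraint baked into the language itself: any conforming Rust compiler rejects the function below at compile time, while the equivalent C, returning a pointer to a local, compiles cleanly and fails at run time:)

          ```rust
          // This function is rejected by the Rust compiler: `x` dies when
          // the function returns, so handing out a reference to it is a
          // compile error, not a latent run-time bug as in C.
          fn dangling() -> &'static i32 {
              let x = 42;
              &x // error: cannot return a reference to the local `x`
          }
          ```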

          • And yet, C does not do this. Why? Because then it would not be C. You're saying that you can just magically tell a compiler to add all of these constraints that aren't specified in the C standard. Well, technically, you can make a C compiler do whatever magical whim tickles your fancy. However, once you do that, it's no longer a C compiler but something else.

            There is nothing substantive here, just characterizations and word games.

            The constraints, features, etc. that are baked in are exactly what make the language. This is literally one of the first things taught in computer science courses, and Bjarne Stroustrup, of all people, is one of the main advocates of "a standard makes the programming language."

            Numerous standards exist that constrain the use of the language, and yet nobody argues that in those cases a different language has been created.

            Kernel programming is often severely constrained. Some embedded systems and design standards don't allow heap allocation. Many secure coding standards constrain the use of language features. Nobody says these things create a separate language; they constrain and qualify it.

            No, maintainability is incredibly high on the list of things that matter, even for the pointy-heads in the sky-high offices.

            Maintainability is

            • Kernel programming is often severely constrained

              Ah, I see where we have gotten our understanding mixed up. You are speaking of a constraint on the target, whereas I am speaking of a constraint on the compiler. Imagine that you had a C compiler for your embedded example where heap allocations are prohibited. Would you say that the compiler is a C compiler when it reports at compile time that calloc is prohibited? I do not think you would, but perhaps you would call it a subset-of-C compiler. And perhaps that is semantics, but a subset-of-C compiler is, in my opinion, not a C compiler.

    • Redox discards the UNIX "everything is a file" design. Instead it goes with "everything is a protocol", which ends up working in a similar fashion to how the Windows NT kernel functions.

      If true, that's really too bad.

      They also talk about how fast it boots, but fail to mention that the downside of a microkernel is slow IPC.

      I like some of the safety features of Rust, but the Redox kernel design offers no improvement over existing kernels.

      My full-time day job is embedded Linux. I've shipped real kernel code in real products, and I think microkernels have a lot of promise for reducing the attack surface for security vulnerabilities. In a day and age when we run VMs full of containers, why complain about microkernel messaging overhead? However, who's going to write all the drivers?

    • by Tom ( 822 )

      Redox discards the UNIX "everything is a file" design. Instead it goes with "everything is a protocol", which ends up working in a similar fashion to how the Windows NT kernel functions.

      Those who don't understand UNIX...

      It's a worthwhile experiment. It most likely won't go anywhere, because most of these don't. But it will lay the groundwork for an actual Rust-based OS, and that's a good thing.

      The real winner will be a Linux-compatible Rust OS, the way that Linux started out as a Minix clone.

    • Sounds like they fell into the "this language is good therefore everything built with it will be good too" trap.

    • I like some of the safety features of Rust, but the Redox kernel design offers no improvement over existing kernels.

      An operating system kernel is a many-dimensional thing. I do not put much stock in a sweeping dismissal such as yours. Neither am I familiar enough with Redox to rebut or confirm your characterization. I strongly suspect the same applies to you.

    • "Everything is a file" is not a great design. It never was. No "everything is this one thing" is ever great design - it is poor design. Nothing is ever just one thing. It is just cargo cult design. Instead of deliberating on actual design tradeoffs, you just need to repeat a mantra "it's a UNIX principle" without ever doing actual thinking.
    • Microkernels don't necessarily have slow IPC. Minix's IPC overhead was something like 3% long ago, while L4 is down to about 300 cycles on x86-64 and as few as 150 on ARM. On a 2.5GHz processor, 300 cycles is 120 nanoseconds (0.000012% of a second).

      My approach is instead a managed language, notably C#, with a Kernel CLR. At the OS level, this presents some interesting challenges and advantages. Notably, you can have an RTOS with garbage collection in kernel drivers: the garbage collector service can keep three page tables (current, ready, and

  • After our Dear Leader.
    It would be hitlarious!

  • Even worse if that one issue is one only coders care about.
    And then it doesn't even go all in.

    Sorry, nobody ever used an OS because of the language it was written in.

    Your intentions may be good. But in light of the massive mountain range of things people do and don't care about that it sits in, it looks a bit like a Yeti pile. Interesting, but not worth stepping into. :)

  • had to be said

    Just my 2 cents ;)
  • We have a bunch of programmers simply trying something new. This is good, no? Worst case, lessons are learned and everybody benefits. Best case, the project is a success, lessons are learned, and everybody benefits. Why is this bad? Is this not how BSD started? A new language to build a new OS?
    • by vadim_t ( 324782 )

      The hate is because Slashdot is a shadow of its former self. A lot of the former user base moved on to greener pastures long ago, and those who remain appear in good part to be those who think the pinnacle of computer technology was achieved in the 80s-90s and that nothing useful has been invented since.

      Hence the blind worship of the "unix philosophy" as the principle to rule all computing, and why nobody seems to know any relevant technical details to discuss.

  • Self hosting is uninspiring; call when it's self aware.

