'Fuchsia Is Not Linux': Google Publishes Documentation Explaining Their New OS (xda-developers.com)

An anonymous reader quotes a report from XDA Developers: You've probably seen mentions of the Fuchsia operating system here and there since it has been in development for almost 2 years. It's Google's not-so-secretive operating system which many speculate will eventually replace Android. We've seen it grow from a barely functional mock-up UI in an app form to a version that actually boots on existing hardware. We've seen how much importance Google places on the project as veteran Android project managers are starting to work on it. But after all of this time, we've never once had either an official announcement from Google about the project or any documentation about it -- all of the information thus far has come as a result of people digging into the source code.

Now, that appears to be changing, as Google has published a documentation page called "The Book." The page aims to explain what Fuchsia, the "modular, capability-based operating system," is and is not. The most prominent text on that page is a large section explaining that Fuchsia is NOT Linux, in case that wasn't clear already. Above that are several readme pages explaining Fuchsia's file systems, boot sequence, core libraries, sandboxing, and more. The rest of the page has sections explaining what the Zircon micro-kernel is and how the framework, storage, networking, graphics, media, user interface, and more are implemented.

This discussion has been archived. No new comments can be posted.

  • interesting (Score:3, Insightful)

    by Anonymous Coward on Wednesday April 11, 2018 @05:46PM (#56420681)

    It looks like an interesting kernel built on microkernel ideas. It will be interesting to see whether they can get around the latency of process switching and message passing.

    • Microkernel? Oh, god, just bury it now. Microkernels have been the coming thing for decades now, and they never make it work.

      • I've worked on a number of microkernel systems with no problems. They are used in many, many consumer and industrial embedded systems.
        • Sure, great for a single-purpose appliance. For a modern smartphone or tablet, not so much.

          • Re:interesting (Score:4, Informative)

            by Anonymous Coward on Wednesday April 11, 2018 @06:19PM (#56420881)

            BlackBerry used QNX, a microkernel-based OS used in lots of real-time applications.

            Yeah, the device failed, but not because of the microkernel.

            • The old BlackBerry is an abacus next to a modern smartphone. Note they now use the Linux kernel. A microkernel can't cut it.

              • The old BlackBerry is an abacus next to a modern smartphone. Note they now use the Linux kernel. A microkernel can't cut it.

                Don't you remember when QNX gave away a Unix-like distribution? It was totally usable as a desktop.

                • And it fit on a 1.2MB floppy disk - with a GUI. I'd forgotten about QNX. It was a lot of fun to play with. Real-time systems got it right (as opposed to pre-emptive).
                  • Gosh, I remember when Commodore suggested that the next Amiga might use QNX, before they fell over and died....
                    • I saw a QNX demo of a bouncing cube with real-time video playing on the different sides of the cube - it was running on low-end hardware (by today's standards), and keyboard and mouse input had zero lag no matter how heavily loaded it was (my favorite feature of well-done real-time systems). I got a personal tour of their offices in the '90s. I'm pretty sure it could handle a cell phone without a problem.
                    • Gosh, I remember when Commodore suggested that the next Amiga might use QNX, before they fell over and died....

                      Look up the Unisys ICON [wikipedia.org], an 80186-based computer from the 1980s that ran QNX. They had quite an interesting architecture and were basically early network-computing environments (each machine had its own local memory and processing, but a centralized network disk was shared by all systems).

                      I have fond memories of those machines.

                      Yaz

                    • by haruchai ( 17472 )

                      "I saw a qnx demo of a bouncing cube and realtime video playing on the different sides of the cube"

                      I think that was BeOS, unless QNX also had a similar demo.

                    • by Faw ( 33935 )

                      Are you sure it was a QNX demo and not the Amiga DE SDK [vesalia.de] based on the TAO intent OS [c2.com]? I still have the Amiga SDK manual for that OS, and the DVD is probably hidden somewhere around my house. I actually liked that OS.

                  • Real time systems got it right (as opposed to pre-emptive).

                    Huh? Real-time systems can still pre-empt. Time-sharing RTOSes do exactly this. It depends on whether the RTOS is hard or soft; so long as you can deterministically meet a deadline, you can pre-empt and still be a hard RTOS.

                    Yaz

                    • It depends on whether the RTOS is hard or soft; so long as you can deterministically meet a deadline, you can pre-empt and still be a hard RTOS.

                      That's EXACTLY what she said!

                • I was saddened when it was no longer free. I was tempted to use it on the desktop.
              • by PCM2 ( 4486 )

                Yeah, but this wasn't the "really, really old" BlackBerry you may be thinking of. BlackBerry devices based on QNX didn't start shipping until 2013.

                Yeah ... you've never seen one in the wild.

              • The Blackberry with QNX was usable, smooth, and had a great programming paradigm. QNX is a great operating system.
              • You're uninformed. The QNX one was for BB10, not BBOS4-7. It was a much better multitasker than iOS.
            • I worked on the first major interactive device running QNX as its kernel. It was the most difficult platform I ever coded for, and to be honest, there were many things about it that were very problematic.

              I do chalk most of this up to QNX being an RTOS as opposed to it being a microkernel.

              These days, with multi-core CPUs everywhere, I think the whole microkernel/message-passing vs. everything-else debate is basically irrelevant, since the real-time stuff can be run on one core while everything else runs on the other cores.
              • I worked on the first major interactive device running QNX as its kernel. It was the most difficult platform I ever coded for, and to be honest, there were many things about it that were very problematic.

                ICON?

              • Wow, your post is an example of what makes Slashdot worth reading. I never coded for QNX myself, and I appreciate your perspective; it has the ring of truth.

                I just took a look at some of the internal APIs for Magenta [github.com] and it is clear that a kernel built around clunky glue like that can't be anything other than a dog. Will there somehow be a flash of genius to make it magically fast? Don't count on it.

          • Why? A single-purpose appliance can still run hundreds of applications simultaneously (bit, logging, control, monitoring, database access - all in real time). I did testing on hardware maybe 15 years ago with 10,000 threads, showing a 1-2% context-switch overhead.
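
            A minimal sketch of that kind of measurement - a pipe-based ping-pong between two threads to force context switches. The thread count, round count, and use of pipes are my own illustrative choices, not the original test:

            /* Rough context-switch microbenchmark sketch: two threads bounce a
             * byte through a pair of pipes, forcing a switch on every read.
             * Elapsed time divided by the number of switches gives an upper
             * bound on switch cost plus pipe overhead. */
            #include <pthread.h>
            #include <stdint.h>
            #include <stdio.h>
            #include <time.h>
            #include <unistd.h>

            #define ROUNDS 100000

            static int ping[2], pong[2];   /* ping: main -> worker, pong: worker -> main */

            static void *worker(void *arg)
            {
                char c;
                for (int i = 0; i < ROUNDS; i++) {
                    read(ping[0], &c, 1);  /* block until main writes */
                    write(pong[1], &c, 1); /* wake main, forcing a switch back */
                }
                return NULL;
            }

            int main(void)
            {
                pthread_t t;
                struct timespec start, end;
                char c = 'x';

                pipe(ping);
                pipe(pong);
                pthread_create(&t, NULL, worker, NULL);

                clock_gettime(CLOCK_MONOTONIC, &start);
                for (int i = 0; i < ROUNDS; i++) {
                    write(ping[1], &c, 1);
                    read(pong[0], &c, 1);
                }
                clock_gettime(CLOCK_MONOTONIC, &end);

                uint64_t ns = (end.tv_sec - start.tv_sec) * 1000000000ULL
                            + (end.tv_nsec - start.tv_nsec);
                /* Each round trip is two switches (main -> worker -> main). */
                printf("~%llu ns per switch (incl. pipe overhead)\n",
                       (unsigned long long)(ns / (ROUNDS * 2)));

                pthread_join(t, NULL);
                return 0;
            }
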
          • 'single purpose'.

            I don't think you know what the hell you're writing about. A robotic vacuum cleaner might meet the definition of 'single purpose', but that's relatively primitive. So a microkernel should be considered appropriate for toasters, etc.?

            Tell that one to Apple. Or BlackBerry.

          • Sure, great for a single-purpose appliance. For a modern smartphone or tablet, not so much.

            So, obviously iPhones and iPads are no good then as they run the Mach microkernel. I mean, it's clear by their marketshare that it's a dead-end system.

      • Re:interesting (Score:4, Insightful)

        by phantomfive ( 622387 ) on Wednesday April 11, 2018 @07:51PM (#56421361) Journal
        Microkernels work fine; they just take (up to) a 30% performance hit depending on the workload. So you are making a tradeoff between speed and crashing when you hit a bug.

        I will tell you that the problem with Android bugs is not in the kernel. This is a severe case of NIH. The biggest problems in Android come from Google themselves.
        • Re:interesting (Score:5, Insightful)

          by Actually, I do RTFA ( 1058596 ) on Wednesday April 11, 2018 @09:20PM (#56421731)

          It's not a severe case of NIH. It's a severe case of GPL. This is just so they can move all the Android devices to Fuchsia (under permissive licenses), and then slam the door shut by requiring things for new Fuchsia devices once the whole ecosystem has moved over. It's an evolution of what they started to do with GApps.

          • Good point.
          • This is just so they can move all the Android devices to Fuchsia (under permissive licenses), and then slam the door shut by requiring things for new Fuchsia devices once the whole ecosystem has moved over.

            Care to make a wager on that? I've got $10K that says you're wrong.

        • I will tell you the problem with Android bugs is not in the kernel.

          Go look at the monthly Android security bulletins. It's a rare month that does not have a serious kernel vulnerability. The kernel is the biggest single source of security problems in Android. The Android security team badly wants to replace Linux.

      • QNX [wikipedia.org]

    • latency for process switching

      Has this been a problem for decades on user devices?

      • On x86, yes. That's why user space drivers suck for things with frequent calls.

        • IIRC, this issue has been pretty much fixed since the advent of PCID. Also, it was never as much of an issue on ARM.

    • Re:interesting (Score:4, Informative)

      by shoor ( 33382 ) on Wednesday April 11, 2018 @08:57PM (#56421641)

      The early microkernels had problems, but a second generation of 'L4' kernels, pioneered by Jochen Liedtke (who died an untimely death at age 48), seems to have gotten around that. From the Wikipedia article on the L4 microkernel family:

      The poor performance of first-generation microkernels, such as Mach, led a number of developers to re-examine the entire microkernel concept in the mid-1990s...
      Detailed analysis of the Mach bottleneck indicated that, among other things, its working set is too large: the IPC code expresses poor spatial locality; that is, it results in too many cache misses, of which most are in-kernel...
      Liedtke came to the conclusion that several other Mach concepts were also misplaced. By simplifying the microkernel concepts even further he developed the first L4 kernel which was primarily designed with high performance in mind. In order to wring out every bit of performance the entire kernel was written in assembly language, and its IPC was 20 times faster than Mach's.[4] Such dramatic performance increases are a rare event in operating systems, and Liedtke's work triggered new L4 implementations and work on L4-based systems at a number of universities and research institutes,

    • Spring (https://en.m.wikipedia.org/wiki/Spring_(operating_system)?wprov=sfti1 ) beat many of the performance issues. Chorus did too, but with different solutions. Linux is neither the beginning nor the end of OS development. Kudos to Google management for making the investment.

    • Re:interesting (Score:5, Interesting)

      by LostMyBeaver ( 1226054 ) on Wednesday April 11, 2018 @11:30PM (#56422125)
      Actually, Google will make Fuchsia work as a smartphone/tablet platform and whatever else. From a design perspective, it's actually quite bad.

      When I first started reading the code to Fuchsia, I was going line by line asking myself "Haven't we already made this mistake before?". It was like one major compilation of "I took an OS course based on Tanenbaum's book and decided just to copy every mistake we never learned from". And in the end we have a brand spanking new 30-year-old operating system.

      Ok, I'm being harsh and it's only partially fair. Let me start with your issues.

      It's not necessary to sort out the issues with latency and message passing. They are making a real-time (or near real-time) operating system, which in its own right already suggests that they're willing to sacrifice performance in favor of deterministic timing. Telephones always benefit from real-time kernels in the sense that they allow dropping the overall transistor and component count. Every telephone that ever boasted a 4-day battery ran a real-time operating system, and it was generally a good idea.

      Secondly, there's been a pretty huge move in Firefox and Chrome to optimize their shared-memory systems to reduce or eliminate hardware locks by marshalling the memory reads and writes. Add to that that almost all modern development paradigms are asynchronous programming (unless you're stuck with some shitty language like C or C++), and most of the switch and latency issues become irrelevant. This is because you can keep multiple cores spinning more or less non-stop without much concern for kernel-level inter-thread synchronization. Take it a few steps further and expose things like video hardware access directly to individual applications, which would operate their own compositors based on a GPU context and apply shaders to support windowing-type tasks... then it's going to be quite impressive and the locks should be a lot less relevant.
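
      A minimal sketch of the kind of lock-free marshalling I mean - a single-producer/single-consumer ring buffer over shared memory using C11 atomics, so neither side ever takes a kernel lock (sizes and types here are illustrative, not lifted from Chrome or Firefox):

      /* Illustrative SPSC ring buffer: producer and consumer coordinate purely
       * through two atomic indices, so no kernel lock is ever taken. */
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      #define RING_SIZE 256              /* must be a power of two */

      struct spsc_ring {
          _Atomic uint32_t head;         /* written only by the producer */
          _Atomic uint32_t tail;         /* written only by the consumer */
          uint64_t slots[RING_SIZE];
      };

      /* Producer side: returns false if the ring is full. */
      static bool ring_push(struct spsc_ring *r, uint64_t msg)
      {
          uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
          uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
          if (head - tail == RING_SIZE)
              return false;              /* full: caller retries or backs off */
          r->slots[head % RING_SIZE] = msg;
          /* Release ordering publishes the slot before the new head. */
          atomic_store_explicit(&r->head, head + 1, memory_order_release);
          return true;
      }

      /* Consumer side: returns false if the ring is empty. */
      static bool ring_pop(struct spsc_ring *r, uint64_t *msg)
      {
          uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
          uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
          if (tail == head)
              return false;              /* empty */
          *msg = r->slots[tail % RING_SIZE];
          atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
          return true;
      }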

      From that perspective, I don't see a good solution to the audio problem, as I've never seen a sound card which would support the principle of shared resources. I don't think it would be even mildly difficult to design such a device, though. The only real issue is that if mixing is moved entirely to hardware, then depending on the scenario, it would be necessary to have at least quite a few relatively long DSP pipelines with support for everything from PCM scaling to filtering. The other problem is that using protection faults to trigger interrupts could be an issue unless there's some other, creative means of signalling user-mode code about buffer states without polling. Audio is relatively unforgiving of buffer starvation.
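
      A rough sketch of the signalling-without-polling idea, with a thread standing in for the "half-buffer drained" interrupt (purely illustrative - no real audio device is involved):

      /* A fake "hardware" thread plays the role of a DMA-complete interrupt:
       * each time half of a double buffer drains, it posts a semaphore, and the
       * application thread sleeps on that semaphore and refills - no polling. */
      #include <pthread.h>
      #include <semaphore.h>
      #include <stdio.h>
      #include <unistd.h>

      #define HALF_PERIOD_US 10000       /* pretend each half-buffer lasts 10 ms */

      static sem_t half_done;

      static void *fake_hardware(void *arg)
      {
          for (;;) {
              usleep(HALF_PERIOD_US);    /* "DMA" drains one half-buffer */
              sem_post(&half_done);      /* what the IRQ handler would do */
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t hw;
          sem_init(&half_done, 0, 0);
          pthread_create(&hw, NULL, fake_hardware, NULL);

          for (int i = 0; i < 10; i++) {
              sem_wait(&half_done);      /* block until a half-buffer is free */
              printf("refilling half-buffer %d\n", i);   /* mix/copy samples here */
          }
          return 0;
      }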

      So, let's start on my pet peeves.

      Google's been working on this for some time and they still don't have a system in place for supporting proper languages. C and C++ are nifty for the microkernel itself. But even then, they should have considered Rust or rolling their own language. This world has more than enough shitty C-based kernels like Linux and BSD. If you want to search the CVEs and see what percentage of them would never have been an issue if the same code was written in a real programming language, be my guest, but I'm still sitting on WAY TOO MANY unreported critical security holes in things like drivers from VMware, drivers from Cisco, OpenVPN certificate-handling code, etc... I absolutely hate looking at C or C++ code because every time I do, unless it's painfully static in nature, it's generally riddled with code-injection issues, etc...

      And yes, I've been watching Magenta/Fuchsia evolve since the first day and I follow the commit logs. It's better than TV. It's like "We have a huge number of 22-year-old super hot-shot coders who really kick ass. Look at this great operating system kernel", and it looks like some bastard high school or university project written by people who have little or no concept of history.

      Linux is good for many things. It's amazing for many reasons. Linus did an amazing job making it, and it's a force of nature.
      • Nah, there's little content in what you've said. Real programming languages? I'm going to guess that your real programming languages will keep changing over time to follow the current trend. When producing something long-lived like an OS, you want to use established technologies where you know the implications and shortcomings, so something like C is a good place to start.

      • Re:interesting (Score:4, Interesting)

        by AmiMoJo ( 196126 ) on Thursday April 12, 2018 @07:05AM (#56423155) Homepage Journal

        This world has more than enough shitty C-based kernels like Linux and BSD. If you want to search the CVEs and see what percentage of them would never have been an issue if the same code was written in a real programming language

        This is a very outdated security model, one which any really secure OS abandoned long ago. Security by eliminating all bugs is just deluding yourself into thinking that's even possible. Relying on a "safe" language to do it for you is even more foolish.

        In fact, that appears to be why Google is developing this OS. It's designed to be secure in a way that building sandboxes on top of Linux or Windows can't be. The microkernel is necessary for this.

        This security model is proven to work. It's how all modern operating systems try to implement security, but it's tacked on later rather than designed in from the kernel up.

    • Re:interesting (Score:4, Informative)

      by religionofpeas ( 4511805 ) on Thursday April 12, 2018 @01:54AM (#56422479)

      The problem with microkernel performance is not the latency of process switching and message passing.

      The problem is the synchronization between different parts. Imagine, for instance, a multi-threaded filesystem. A filesystem as a whole has a certain state. That state describes all the files and directories, file sizes, and contents. Now imagine that one of the threads makes a change to the state. The problem is how to get that state update to all the other threads with a minimum of delay.

      In a monolithic kernel, the problem is solved by taking a lock, updating the state, and releasing the lock. It's a very simple and efficient operation.

      In a microkernel, you need to send messages around. You can optimize the message passing itself, but you'll still have the problem that the receiving thread is doing other things, and only handles the messages at certain points. While it is doing those other things, it's working with an outdated version of the state. Basically you're getting into the design of distributed filesystems/computing, and this is a very hairy subject. The complexity of the problem is much larger than just sticking to simple locking.

      The traditional solution is to keep the filesystem (and similar parts of the OS) in a single thread. This may be a viable solution on some platforms (perhaps a phone or tablet), but it will quickly run into scalability problems on a large, general-purpose computer.
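
      A toy contrast between the two models, with the "filesystem state" shrunk to a single counter (this is just the shape of the problem, not real kernel code):

      /* Monolithic style: any thread updates shared state under one lock, and no
       * thread ever sees a stale view. */
      #include <pthread.h>
      #include <stdint.h>

      static pthread_mutex_t fs_lock = PTHREAD_MUTEX_INITIALIZER;
      static uint64_t fs_state;

      void monolithic_update(uint64_t delta)
      {
          pthread_mutex_lock(&fs_lock);
          fs_state += delta;
          pthread_mutex_unlock(&fs_lock);
      }

      /* Microkernel style: clients can only describe the update in a message and
       * hand it to the single server thread that owns the state; until the server
       * drains its queue, everyone else is looking at an outdated version. The
       * one-slot queue below is a stand-in for real IPC and assumes only one
       * request is in flight at a time. */
      struct fs_msg { uint64_t delta; };

      struct msg_queue {
          pthread_mutex_t lock;
          pthread_cond_t  ready;
          struct fs_msg   msg;
          int             has_msg;
      };

      void client_update(struct msg_queue *q, uint64_t delta)
      {
          pthread_mutex_lock(&q->lock);
          q->msg.delta = delta;           /* marshal the request */
          q->has_msg = 1;
          pthread_cond_signal(&q->ready);
          pthread_mutex_unlock(&q->lock); /* the actual update happens later */
      }

      void *fs_server(void *arg)          /* the only thread touching fs_state */
      {
          struct msg_queue *q = arg;
          for (;;) {
              pthread_mutex_lock(&q->lock);
              while (!q->has_msg)
                  pthread_cond_wait(&q->ready, &q->lock);
              fs_state += q->msg.delta;
              q->has_msg = 0;
              pthread_mutex_unlock(&q->lock);
          }
          return NULL;
      }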

  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Wednesday April 11, 2018 @05:46PM (#56420683)
    Comment removed based on user account deletion
    • FINL is not Linux.

      Even better FINAL.

      FINAL is not a Linux

    • Google is young enough not to remember the demise of Yggdrasil Linux.

  • Fuchsia? (Score:3, Insightful)

    by TiberiusKirk ( 2715549 ) on Wednesday April 11, 2018 @05:50PM (#56420717)
    "You've probably seen mentions of the Fuchsia operating system here and there since..."

    Um, no. Never heard of it.

  • Fuchsia is not Linux

    Translation: Fuchsia is Linux. We took Linux and hacked away at it to make Fuchsia. Because Oracle won in court recently, we're being a bit more careful about covering our tracks this time.

    • I've gotten to play with many operating systems... vx, minix, psos, linux, nt, and 5 or 6 home-grown varieties. They all more or less provide the same primitives - processing, plus object, temporal, and sometimes spatial separation. From the Zircon Kernel Concepts section, it looks like the same stuff and concepts as POSIX... but it did have wait_for_many() from NT land. I don't think POSIX had that. Maybe it borrowed some object ideas from Plan 9?
    • Nope, Fuchsia is a microkernel inspired by Spring, seL4 and others. The low-level APIs are very different from Linux, as is the structure of the OS. It is able to run the Android Runtime, and can support a POSIX compatibility layer (though that's not what it's designed for).

      That said, Android's recent Project Treble moves a load of device drivers out of the Linux kernel and into userspace, which would make it much easier to provide versions that work with both Linux and Fuchsia.

  • So you are in front of the ads sooner.
  • by ArhcAngel ( 247594 ) on Wednesday April 11, 2018 @05:59PM (#56420779)
    Naming their manuscript "The Book". I mean, God is merciful, but taking the same name as His manuscript just might be smite-worthy.
  • by Pfhorrest ( 545131 ) on Wednesday April 11, 2018 @06:13PM (#56420849) Homepage Journal

    ...and that's FINL!

  • by Anonymous Coward

    Zircon has a unified system to manage the lifetime of, and control access to, all kernel objects. It feels very much like the Windows kernel. The way Zircon uses handles, and the zx_object_wait_one() and zx_object_wait_many() functions, really show the Windows influence. I personally think this is a Good Thing -- my disagreements with Windows lie mostly in user mode -- but YMMV.

    Just to be clear... I'm not saying Zircon is a Windows clone -- just that I see clear influence from Windows.
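
    For illustration, roughly what that pattern looks like on the Zircon side - this is reconstructed from my reading of the docs, so treat the exact signature and signal names as approximate rather than authoritative:

    /* Sketch: wait until either a channel has a message or an event fires,
     * Zircon-style. Very much the WaitForMultipleObjects() shape, but on
     * Zircon handles. (Signature and constants from memory of the docs.) */
    #include <zircon/syscalls.h>

    zx_status_t wait_for_either(zx_handle_t channel, zx_handle_t event)
    {
        zx_wait_item_t items[2] = {
            { .handle = channel, .waitfor = ZX_CHANNEL_READABLE, .pending = 0 },
            { .handle = event,   .waitfor = ZX_EVENT_SIGNALED,   .pending = 0 },
        };
        return zx_object_wait_many(items, 2, ZX_TIME_INFINITE);
    }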

    • FWIW, I mostly agree. NTOSKRNL is a very well-designed kernel by 1990 standards, and it still mostly holds up today. The lack of a unified event system, for example (e.g. waiting on a socket and a condvar), is one of the outstanding issues in Linux. (For completeness, FreeBSD's kqueue almost does the job, and libevent makes things a bit less painful.)
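
      The usual Linux workaround is to funnel everything through a pollable file descriptor - e.g. pair the socket with an eventfd that the "condvar" side pokes, then wait on both with one epoll_wait() call. A sketch, error handling omitted:

      /* Wait on a socket and a signal source with a single blocking call by
       * making the signal source an eventfd (created with eventfd(0, 0)) and
       * registering both fds with epoll. */
      #include <sys/epoll.h>
      #include <sys/eventfd.h>
      #include <unistd.h>

      int wait_socket_or_signal(int sock_fd, int event_fd)
      {
          int ep = epoll_create1(0);
          struct epoll_event ev = { .events = EPOLLIN };

          ev.data.fd = sock_fd;
          epoll_ctl(ep, EPOLL_CTL_ADD, sock_fd, &ev);
          ev.data.fd = event_fd;
          epoll_ctl(ep, EPOLL_CTL_ADD, event_fd, &ev);

          struct epoll_event out;
          epoll_wait(ep, &out, 1, -1);   /* blocks on either source */
          close(ep);
          return out.data.fd;            /* tells the caller which one woke us */
      }

      /* The signalling side wakes the waiter with something like:
       *     uint64_t one = 1;
       *     write(event_fd, &one, sizeof one);
       */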

      The main problem with Windows NT is not the kernel design, it's almost everything else.

  • by ka9dgx ( 72702 ) on Wednesday April 11, 2018 @07:15PM (#56421141) Homepage Journal

    So, this can join GNU Hurd and Genode in the queue of things that we all need, but nobody (else) knows it yet. I look forward to running one of these some day, so I can ditch the virus scanners and surf the web in perfect safety... downloading and running whatever I want without worry.

    • If you read the white papers on HURD architecture, they're quite clear that we don't need it, and everybody knows it already. ;)

  • Bet the only thing not patented already or copyrighted is the name of the OS lol
  • by douglips ( 513461 ) on Wednesday April 11, 2018 @11:06PM (#56422065) Homepage Journal

    "I really must insist you call it GNUschia."

  • I think I'll go with microkernels are the fuchsia.

  • Comment removed based on user account deletion
  • The question is: why make such an effort when Linux is already there and can be modified to your needs?

    One reason may be that there is something you want to do fundamentally differently, so it's easier to start a new project than to change the old one to fit your needs.

    Another reason may be that they want to get rid of (L)GPL-licensed parts in their OS, and that may well be the more important motive here.

    See here how the Android website argues for licenses other than the LGPL for user-space apps:
    https://source.android.c [android.com]

    • Ever since I first learnt of Fuchsia, I've thought Google wanted control over the Android kernel. I still think the same.

"...a most excellent barbarian ... Genghis Kahn!" -- _Bill And Ted's Excellent Adventure_

Working...