Upside Article On Embedded Linux
Paddy wrote to us about today's Upside article on embedded Linux and what it can (and does) mean for Linux. It talks a bit about Lineo and the possibility of kernel forking - a good summary of the landscape.
GPL isn't the best for the embedded market (Score:1)
1) The present 8-bit processors work fine for many tasks. So why pay for all the bits an OS needs when an 8051 or Z80 will do the job?
2) The embedded market lives and dies (at least as far as management is concerned) by IP (Intellectual Property, not Internet Protocol).
Given 1 and 2, any of the 'modern' OSes (WinCE, QNX, EPOC, Embedded NT, *BSD or even Linux) needs 32 bits, and relying on a GPL'd code base legally opens you up to releasing part of your IP to 'the community'.
Even if you don't HAVE to release your code, do you, as a manufacturer, want to have to go to court and prove that to a jury of your peers if someone demands the code?
Most companies with IP they want to defend don't want to take that risk, even if GPL'd code costs $0 per seat and WinCE costs $35.
There *IS* an alternative to GPL'd Linux in the Open Source OS arena: BSD. And the BSD license *DOES* protect the IP of others.
If the goal of the Open Source world is to promote Open Source, then why the drumbeat for Linux when, for the embedded world, BSD is the better choice for protecting what that world considers its most valuable asset - IP?
Re:Yup, I like it... (Score:1)
Remember, an operating system is no more reliable than the hardware on which it runs.
Re:The meaning of "real-time" (Score:1)
Re:isn't kernel forking the point of the GPL? (Score:1)
In my opinion, they should change the name a little. But that would be Linus' decision.
What's all this Lineo stuff anyhow (Score:1)
From talking with them, it didn't sound like it would be GPL'd.
It's my understanding (someone from Lineo can chime in here and correct me) that they will sell "their" Linux with a royalty fee for each system deployed.
That is the prehistoric business model currently used by several of the embedded OS companies.
The problem with making Linux into an embedded system is that it is a PITA to strip it down to a reasonable size (I know, I am doing it now).
Ouch, man (Score:1)
The forking issue came up both because of last week's uproar over TurboCluster and my background reading on RTLinux, an "extension" of the Linux kernel that brings real-time capabilities to the fore.
Basically, I wanted to see what Linus thought about such "extensions." Apparently, he approves. As a reader mentioned above, the GPL allows forking. This only becomes a Unix-style problem if the bulk of embedded developers -- a more commercially-minded group than PC hackers, apparently -- suddenly decide to branch off and start their own kernel group.
As for knocking the depth of Upside's content, what can I say? This is an interactive medium. If I get a ton of e-mails today and tomorrow begging me to dig deeper, I won't feel like I'm kicking so many readers out of the sandbox just by using the RTOS acronym.
Later, Sam Williams Upside Today
low latency Linux may happen automatically (Score:1)
Linus chooses GPL, GPL allows forking, ... (Score:1)
If Linus wanted to stop his software from forking, he should have placed restrictions on users to prevent it. However, since he chooses to use the GPL, he chooses to allow forking. Hence, the choice has been left, by Linus, up to the user. So let the user make the choice!
We can always choose to ignore a user who forks software in a manner we don't like.
Competition from VxWorks, QNX, eCOS, et al (Score:1)
The beauty of it is -- and incidentally this points out why we don't need to get worked up about forkin' forks -- when all is said and done, you can telnet/ftp into any one of them. At the other end of the wire they can all pass, to varying extents, what we might call the Penguin test of OSes: if it BOOTPs like a penguin, and if it NFSes like a penguin, it's going to play well with everyone, except possibly certain dain bramaged miscreant DHCP clients out of Redmond.
sync;sync;sync
Re:Don't care? (Score:2)
Re:What's all this Lineo stuff anyhow (Score:1)
The next wave (Score:1)
Embedded systems are the next wave, but the topic definitely deserves more than a page of text. Slashdot alone has had several articles pouring in about cheap hardware becoming ideal embedded devices (the router/MP3 player, the tiny web server). The market deserves just as much attention as the desktop market, because the desktop phenomenon is 20 years old and soon to hit its performance peak. Wake up - progress is all about expanding.
Re:Yup, I like it... (Score:1)
The hardware is great... it's just the damn internal logic errors in the current OS!
Where is Embedded BSD? (Score:1)
Forking is GOOD! (Score:2)
There are a number of valid reasons to fork the kernel. No OS, no matter how good, is likely to be the master of everything. The embedded environment is very different from clustered supercomputing, which is different again from the workstation.
If you're controlling a VCR, and a 25% cut in speed lets you cram your code into the next smaller size of EPROM, you'll be very happy (as long as it's still 'fast enough'). If you are doing clustered supercomputing, you'll prefer the speed over the space savings.
For a desktop, you'll probably prefer versatility over a few K of memory and you'll probably be willing to sacrifice maximum speed in order to have a more responsive system (which will actually feel faster on a desktop even though it's slower on batch jobs).
Some of the above are doable with tuning parameters and compile-time options. Whenever possible, that is the best thing to do.
The balkanization of Unix was a very different issue. In that case, each proprietary vendor forked the code and closed it. Then they made just enough changes to 'justify' preferring their minor variant over the competition. In truth, most of those 'differences' could have been handled with a few tuning parameters and an ifdef or two in the source.
In short, those were NOT technically justifiable forks; they were business decisions meant to benefit ONLY the corporations that made them.
It is a mistake to lump those two things together.
isn't kernel forking the point of the GPL? (Score:3)
Nothing is intrinsically special about Linus Torvalds. He concentrates on controlling the kernel for the main branch. But if there needs to be a separate branch for embedded systems, cluster systems, and other specialized systems, so be it. I don't want to see Linux become a "jack of all trades, master of none" in the name of avoiding kernel forking.
I say, go for it - and keep specialized code from cluttering my kernel.
Jeremy
Yup, I like it... (Score:2)
Re:My two cents (Score:1)
But developing on the same platform as your target is another. No emulators, no conversions, it's the same thing.
That is what developers want: portability.
Personally, I don't think that anybody would want to go out and have to pay $300 for a copy of Office Pro 2000, and then have to spend another $150 (or something like that) for Office Pro 2000 Windows CE version.
In this competition, I have to give the advantage to Linux.
Nothing insightful in the article (Score:1)
Articles like this one have been posted before... to sum it up:
Unnecessary Forking Is Expensive (Score:2)
The cost is in retrofitting changes that go into one tree into the other tree.
Thus, if there turns out to be a Lineo fork, the folks at Lineo will find that they must either keep retrofitting mainline changes into their tree, or watch the two trees drift apart.
For instance, if they fork, and the ext3 and XFS filesystems are then introduced in the Torvalds branch, it may be as costly (in terms of programmers' time) to introduce those into an independent fork as it was to implement them in the first place.
The folks building embedded systems may not care right now about IPv6 or USB or k001 journalled filesystems; but if they branch off now, when support for such things is tentative, it may prove difficult-to-impossible to take the code that emerges a year from now and add it in then.
The overall point here is not that forking is bad and evil and must never happen, Bad! Bad! Bad! The point is rather that forking attaches substantial costs to keeping the functionality of the diverging versions in sync. Forking is EXPENSIVE, which can be rather bad.
Linux may be free of charge, but development of code is most definitely not free of cost. And code forks are perhaps the ultimate example of this.
The meaning of "real-time" (Score:3)
The author seems to misunderstand the point of real-time operating systems, which is not to be fast (though, of course, that is always nice), but to have a guaranteed response time.
So in a real-time OS you have pre-emptive scheduling and an upper limit on the time it takes to context switch. This way you can guarantee that an important signal ("this patient is about to die, doctor") is serviced (page the doctor, or whatever) in real-time and in the appropriate time (i.e. before the patient dies.)
Does the current Linux kernel have this feature? There is no reason why it couldn't: good ol' VMS had basic real-time features built in (pre-emptive scheduling), you could add the rest (I think the product was called ELN and was described as "the best kept secret in DEC"), and you could still use it as a "normal" operating system.
My two cents (Score:2)
As far as kernel forking goes: who cares? If the kernel forks in order to make concessions to the embedded market, it's still GPL'd, so other developers won't be left out in the cold. It would actually be better if it forked this way, rather than following in Microsoft's footsteps with Win32, where we've now got WinCE, Win9x, and WinNT, which are largely incompatible with each other and serve completely different markets, yet have the same interface and APIs tying them together...
With all that, it'd definitely be a boon for developers of embedded devices, supposing they could cut enough fat from the kernel to make it competitive with other embedded OSes. Though the cost savings probably won't be enough to make it back to the customer, I'm sure they'd appreciate having an open platform to develop on, rather than needing to learn the semantics of a new OS when they switch projects...
Re:isn't kernel forking the point of the GPL? (Score:2)
The one strong argument (well, in other people's opinion) against open source is that there is "no centralized authority" to make the decision. (Not my argument - I've seen this numerous times.)
It's like someone born into communism asking, "But how does the economy know how many XXXXs to make?" They would very naturally keep harping on this point because they can't comprehend supply and demand.
The closed-source proponents can't understand that when a fork splits off, it either 1) falls out of use, 2) is kept current with the main source by the forker, 3) becomes a second product (an example might be Emacs and XEmacs - I know I'm stretching), or 4) gets rolled back into the main codebase.
What we really need to start working on is informing people about how decisions are made in the open source community, and pointing out forks that have occurred in the past without harm.
At least that's what I think,
RobK
Re:isn't kernel forking the point of the GPL? (Score:1)
Another example of an irrational (ignorant?) fear.
Compare to the US constitution (Score:2)
However, the fact that much of the Constitution (and some early amendments) has clauses supportive of rebellion doesn't mean that anyone really wants to see one.
It is the same principle. The ability to fork is good. Using that ability is bad, and only justified if not forking is going to be worse...
Cheers,
Ben
Linus forking (Score:1)
What they would need is someone like Linus in a "master control" position to make sure that all the feature development that's relevant to the forked code gets moved over. It's no good if suddenly you need a feature from 6 months ago in a segment of code that's already mutated 3 more times. You also need someone like Alan Cox that can field cross-project patches into a pipeline for the master control person. The problem you'll run into quickly is two-fold. You dilute the quality of people working on the main thread/fork and you end up with competing personalities/styles between forks. Tread carefully.
Re:isn't kernel forking the point of the GPL? (Score:1)
Nergh! What a day.
Re:isn't kernel forking the point of the GPL? (Score:1)
Re:The meaning of "real-time" (Score:2)
It's probably fair to say that the likes of eCos [cygnus.com], RTEMS [army.mil], VxWorks, and QNX should not worry too much about Linux; if the application involves life-and-death control issues, I'd much prefer to use one of those. In critical cases, the maker of an embedded system will have source code access, whatever the cost.
But there is certainly room for Linux to crowd out lower grade things like WinCE in less critical "Soft RT" applications where the cost of the solution is a critical factor.
Second Windows War? (Score:1)
Actually I am VERY interested in possible embedded uses of Linux; I've spent most of the last ten years designing industrial control software....
Don't care? (Score:1)
The same could be said of your computer. If you have to know what operating system it's running, it wasn't implemented correctly. Still, most of us power users out there would like to know. If only I could reprogram my VCR so that I could actually use it... then I would patent one-click recording of my favorite shows.
Re:Where is Embedded BSD? (Score:1)
Ikan.
modularity is the answer (Score:2)
- a fork: from then on there are two incompatible versions of Linux that will grow further apart, since they are maintained by people with different interests
- no fork: this means accepting that Linux is unfeasible in certain domains
A fork will probably happen. Why? Because Linux is not flexible enough to efficiently serve both domains without forking. A few years ago the Linux kernel was a big fat monolith with everything compiled into it. The situation has improved over the years, but not enough: it's still a monolithic system. An even more modular system might have prevented a fork, since the embedded version of Linux could then leave out a few modules, or provide custom implementations of a few of them.
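Some of that pruning is already possible through the kernel's build configuration, where whole subsystems can be compiled out. A hedged sketch of what an embedded `.config` fragment might look like (the exact option names vary between kernel versions, so treat these as illustrative):

```
# Illustrative kernel .config fragment for a small embedded build.
CONFIG_MODULES=y                  # keep loadable-module support
# CONFIG_SWAP is not set          # no swap device on a flash-based box
# CONFIG_SYSVIPC is not set       # drop System V IPC if unused
# CONFIG_BLK_DEV_FD is not set    # no floppy controller
CONFIG_NET=y
# CONFIG_IPV6 is not set          # drop IPv6 if unneeded
```

The complaint above stands, though: options like these only remove what the configuration system already knows how to remove; deeper surgery still means touching the source.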
The point is... (Score:3)
Having helped to build such a distribution, all I can say is that this is exactly the point. Developers can run their programs on exactly the same platform that will be used for the product. The only difference is that the product does not carry development tools (such as gcc).
This eliminates the need for cross-compiling and uploading the code to the embedded board, making life a lot easier. Also, developers can use a standardized platform they are familiar with.
Re:Where is Embedded BSD? (Score:1)
No. Linux explicitly allows binary closed-source "modules" (ie device drivers).
If you need to modify the actual kernel source to use your module, you have to release that code. Most likely that code is not going to reveal your Kool Secret Stuff (TM), since it just calls some new module interface. I think you will get some pretty nasty remarks from the OSS community if you do this, though, and your code will never go into the main source, so try to avoid it.
PS: This is all my opinion, I don't know for sure.
Re:Where is Embedded BSD? (Score:1)