Review of Sorcerer GNU Linux
ladislavb writes: "Sorcerer GNU Linux is not just another Linux distribution. It did not follow the tried and tested path of modifying a major Linux distribution and releasing it under a new name. Instead, the Sorcerer development team embarked on a completely unconventional way of putting together a unique distribution, with features not found anywhere else. Once installed, it is 100% optimised for your hardware, it includes the very latest Linux applications, and it provides an incredibly convenient way of keeping all software, even essential system libraries, up to date. The full review of Sorcerer GNU Linux is up at DistroWatch.com."
make buildworld (Score:3, Insightful)
I've been wondering when someone would release a Linux distro that compiled itself during installation, or at least compiled its own kernel.
not just about optimization (Score:5, Insightful)
Sometimes (most often in the case of SSL) you'll see multiple versions of the same package shipped to work around problems like this. That's a hack. Sure, I can install lynx-ssl instead of lynx, but what if I want lynx to use slang instead of ncurses?
This is more about control than optimization. While I'm not sure this level of control is necessary for everyone, control is one of the selling points of Open Source Software and something that most people like to have.
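As a concrete sketch of that kind of control: lynx's configure script has a `--with-screen` switch for choosing the screen library, so the slang-vs-ncurses decision is a one-flag build choice. (The tarball name, directory name, and prefix below are illustrative; check `./configure --help` on your version.)

```shell
# Hypothetical from-source build of lynx against slang instead of ncurses.
# Names and paths are illustrative, not a specific release.
tar xzf lynx.tar.gz
cd lynx
./configure --prefix=/usr/local --with-screen=slang
make
make install
```

A binary distro has to pick one answer for everyone; a source build lets each box pick its own.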
Re:Not totally convinced (Score:4, Insightful)
The reasoning behind that is that the 2.4 kernel series recommends swap equal to double your RAM. Since RAM is ultra-cheap and the average computer has 256-512 MB of it, voilà: 1 GB.
>Anyways, the big feature this distro seems to be claiming is the automatic (and seamless updates).
And compiling from source, using the settings you prefer. And using upstreams sources generally.
>You can run this "sorcery update" command in a cron job at night and have a brand spanking new system the next morning. While this sounds like the cat's meow, what if I don't want the latest and greatest?
Then don't run the sorcery update!
>I personally don't want to live on the bleeding edge and don't always want the latest. Also, who decides what's the latest?
The upstream, actually.
>The latest beta?
No.
>Is it running the 2.4.17 kernel or something even newer?
2.4.17
>What version of KDE does it have?
Umm... I don't know off the top of my head, but it is probably listed on the site; at least the current version numbers used to be.
All in all, they are fairly cutting edge, but you can always choose NOT to be. Just don't cast the latest version!
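And if you do want scheduled updates, the cron setup the parent describes could be as small as one crontab line. This is an illustrative sketch: `sorcery update` is the command named in the thread, but the binary path, schedule, and log file here are assumptions.

```shell
# Illustrative crontab entry: run "sorcery update" at 03:00 every Sunday
# instead of nightly -- you pick the cadence, or omit the job entirely
# to stay on the versions you already have.
# The sorcery path and log location are assumptions; adjust for your box.
0 3 * * 0  /usr/sbin/sorcery update >> /var/log/sorcery-update.log 2>&1
```

The point is that "bleeding edge" is opt-in: no cron job, no surprise upgrades.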
Re:Similar to an idea I've been mulling over (Score:3, Insightful)
All of the other things you mention are matters of choosing which binary package to install, or how to configure them. There's nothing to be gained by automatically recompiling.
My complaint. (Score:4, Insightful)
Sure, it's gonna take a while to compile X, KDE and the like. But this is the other half of the operating system; many people will be using it exclusively, right? Wouldn't you want that to be optimized? The GUI is, in my experience, the most demanding part of the system.
But I'm just a user. The whole thing sounds damn cool. I don't mind taking a day to compile *all* the software; just don't make me sit there in front of the machine the whole time hitting 'Y' and Enter.
The thing that intimidates me [still, yes] is the fact that I have to configure the kernel. I have never really gotten this to work for me. I think [complaint coming] there should be a choice between "Auto-configure" and "Configure now".
I'll understand if this isn't possible, but it seems to be the only thing in my way. I'm still going to download this and try it. The kernel config gets me, though. I always seem to leave something out.
Question: Are we using the same config utility that comes with the kernel tarballs? And do the configuration menus come up blank, or does it 'best-guess' and let you add and remove the things you need?
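For what it's worth, a source-based distro would normally drive the same config tools that ship in the kernel tarball, and those tools don't have to start from a blank slate: if a `.config` is already present, the menus come up pre-filled. A rough sketch of the 2.4-era flow (the paths and versions below are illustrative):

```shell
# Standard 2.4-era kernel configuration and build sequence.
# Paths and version numbers are illustrative.
cd /usr/src/linux-2.4.17
cp /boot/config-2.4.16 .config   # seed from a known-good config, if you have one
make oldconfig                   # prompts only for options new since that config
make menuconfig                  # curses menus, pre-filled from .config
make dep bzImage modules modules_install
```

So the "best-guess" behavior you're asking about mostly comes down to whether the installer hands you a starter `.config` to seed `oldconfig`/`menuconfig` with.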
There's more than $CFLAGS= (Score:4, Insightful)
No. I don't know what all the fuss about those optimizations is; the real advantage, IMHO, is the configurability.
You want your ftpd and MTA to have Kerberos or LDAP support? You want KDE to use ALSA only, instead of OSS? You don't use Netscape plugins and therefore don't need the Motif dependency?
Here's the alternative: do you want to provide a binary for every architecture, with every possible combination of dependencies? No. Usually you eat what your distribution serves you and don't care that installing Postfix also requires you to install slapd, even if you never use it. A fair trade of convenience vs. bloat.
I don't know how those source-based Linux distros (rocklinux, lrs-linux, gentoo, sorcerer) handle this, but on FreeBSD you have either your make.conf or the configure args in your port Makefile. Or even a curses-based selection menu. The latter rocks.
Sure, for a "base system" one usually doesn't need many selections (maybe ACL support, glibc support for 2.2 kernels, or whether to use PAM), but for applications I consider this essential.
Anyway, I will sooner or later give one of those four known-to-me source-based distros a chance and see how they stand up to mighty ol' Slack.
It's about choice, not binaries.
Bullsh.. (cough) (Score:4, Insightful)
The whole idea of a kernel is that it provides an abstraction layer to the hardware: the optimization based on all these "thousands of hardware configurations" takes place in the kernel.
Let's take a look at the most important pieces of hardware that are in the computer:
The conclusion is that it is really nonsense to compile _everything_ from source. Have users compile their kernel based on their hardware. Make sure they have the correct XFree86 driver for their video card and a correct XFree86 config file. That's all the "hardware optimization" you're ever going to need.
So why do people compile programs nowadays, then? That has nothing to do with hardware, but with the myriad of libraries out there. Binaries will of course differ if you compile them against libc5 or glibc 2.1. But if you stick with one distribution, that is never a problem. The problem arises when you want programs that don't come with your distro. Again, there's really no reason to compile the programs that do come with the distro.
But that's just my opinion. Of course these people should be praised, because they had a Good Idea [tm] and did something with it.
Re:Similar to an idea I've been mulling over (Score:5, Insightful)
Nobody in their right minds would spend money on a brand-new Athlon or P4 system for a printer server, so you're likely looking at needing 486 and 586 options.
The ARM/StrongARM architecture is probably just as important as the PPC and PPC64 architectures. Then, as you say, there's the Sparc and Sparc64, the Alpha, the s390 and s390x (hey! IBM is spending big bucks on promoting these!), the parisc (HP-UX is crap, which means Linux is a viable option for HP boxes), the Motorola 680x0 series (lots of VME crates out there, and not everyone likes VxWorks), the Itanium (yeah, that does exist), MIPS and MIPS64, and the User-Mode Linux architecture. There are other architectures (e.g. the VAX) and other OS layers which can sit on Linux (e.g. FreeVMS), and the list is continually growing. Rather than pre-compiling everything for a system combination you don't know will ever be used, it really does make more sense to wait until someone says they want to use it.
Ok, so we've plenty of architectures, and at least five compiler options that need to be considered, depending on the hardware. What other possibilities are there?
Well, if you don't have a sound card, and you don't desire a network sound daemon, then you probably don't want sound support compiled into applications. Likewise, if you have a text-only system or a headless system, and don't want any X support, then you don't need GUI support in your apps. (For something like emacs, this is quite significant.) Linuxconf has support for remote administration. If you don't want external network support, you're not going to be needing that.
On the flip-side, let's say you opt for a MOSIX configuration, or have an SMP system. If an application can support threads, you probably want to include that. You can't get any benefit out of the system, if you prevent it from sharing the work. Then, there's X itself. You only need the drivers for the hardware you're going to use. You also might want to enable extensions, where applicable. Dual-headed systems do exist, but they're not much use if the software isn't told about them.
If you've opted for a secure system, you might well want applications compiled with SE-Linux support enabled. Setting up a development box, rather than a high-performance server? Then maybe libraries need to be compiled with debugging & profiling, and no optimization.
Setting up an embedded box? Then compile with maximum stability in mind. Performance isn't so important, in these cases, but you'll notice any downtime. If it's a home-user box, hey, they can handle an application crashing once a week. No big deal. Tell that to a robot explorer on Mars, an oil flow control system in a pipeline, or an air traffic control system at peak time.
Then, there's the ability to link to different libraries, depending on what's installed. That, in turn, depends on what people want installed, what the system needs to function, and what else is necessary to meet both of those two requirements.
At this point, the number of options has totally exploded. There is simply no way on Earth you could handle all those possibilities, as pre-generated setups. Indeed, most distros have dropped support for many architectures, because they COULDN'T do it as pre-generated setups.
I think compiling on-the-fly, on a server farm, would be the only realistic way to provide this level of support. "But who would WANT to provide that level of support?" Me. Because I'm not satisfied with second-best, especially when the best is easier and requires less effort.
Why put out extra effort, in order to do less than you could? That sounds so... stupid!
Will it download the patched sources as diffs? (Score:5, Insightful)
It would heavily reduce bandwidth if it were possible to grab just the diffs (plus maybe an MD5 sum of the package complete with diffs), and not everyone has a T1 out there. With binaries that wouldn't make much sense (unless one applied a very specialized diff), but with source updates it would work well.
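As a toy sketch of the scheme: the mirror publishes a small unified diff plus a checksum of the patched tree, and the client patches its local sources and verifies the result. Everything here (the "pkg" package, versions, file names) is made up for illustration.

```shell
# Toy demo of diff-based source updates for a hypothetical package "pkg".
mkdir -p pkg-1.0 pkg-1.1
echo 'int main(void) { return 0; }' > pkg-1.0/main.c
echo 'int main(void) { return 1; }' > pkg-1.1/main.c

# Mirror side: publish a small unified diff plus a checksum of the result.
# (diff exits 1 when trees differ, so don't run this under "set -e".)
diff -ruN pkg-1.0 pkg-1.1 > update-1.0-1.1.patch
md5sum pkg-1.1/main.c | awk '{print $1}' > expected.md5

# Client side: apply the diff to the local 1.0 tree, then verify it.
cp -r pkg-1.0 pkg-local
patch -d pkg-local -p1 < update-1.0-1.1.patch
md5sum pkg-local/main.c | awk '{print $1}' > got.md5
cmp -s expected.md5 got.md5 && echo "update verified"
```

The diff is a few hundred bytes instead of a whole tarball, which is exactly the saving the parent is after.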
Re:Bullsh.. (cough) (Score:2, Insightful)
There is, for example, memory alignment for variables, which differs depending on the hardware. There are the different execution costs of certain asm commands, so the compiler can choose the best one for every arch.
All these things have nothing to do with the kernel or libraries, and no binary distro can do these optimizations for you, since the resulting binaries won't run on every arch (e.g. i686 binaries won't run on an i586).
And believe it or not, this really makes a difference. Especially huge programs, like X or KDE, gain noticeably from these optimizations.
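In compiler terms, the trade-off above comes down to which CFLAGS a source build sets per machine. A sketch, using flag spellings from gcc of that era (verify against your gcc's manual; the package build at the end is hypothetical):

```shell
# Two illustrative choices a source-based distro can make per machine,
# and a one-size-fits-all binary distro cannot.
# Flag spellings are from 2.x/3.x-era gcc; check your local docs.
CFLAGS="-O2 -march=i686"              # may emit i686-only instructions;
                                      # the result won't run on an i586
CFLAGS="-O2 -march=i386 -mcpu=i686"   # runs on any x86, but scheduled
                                      # for an i686 pipeline
export CFLAGS
./configure && make                   # hypothetical package build
```

A binary distro is stuck shipping the second, conservative variant; compiling locally lets you take the first.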
The other advantage of compiling yourself is control: you can control any aspect of the build process. Take Mozilla as an example. You can choose whether you want the Mail/News client, SVG, or MathML; you can prefer certain libraries over others (each of which has its own pros and cons); you can even choose which kind of garbage collection you want it to have, if any.
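For instance, Mozilla's build reads per-build options from a .mozconfig file. A hypothetical trimmed build might look like this (the exact option names vary by tree, so check `configure --help` before relying on them):

```shell
# Hypothetical .mozconfig fragment for a trimmed Mozilla build.
# Each line toggles a compile-time feature that a prebuilt binary
# package has already fixed for you.
ac_add_options --disable-mailnews   # skip the built-in Mail/News client
ac_add_options --enable-mathml      # keep MathML rendering
ac_add_options --disable-svg        # drop SVG support
```

Every such toggle is a decision a binary distro made on your behalf.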
And another upside is that you can do any step manually, which teaches you many things and gives you even more freedom.
I never learned more about the inner workings of Linux than when I first installed it manually from source (see http://www.linuxfromscratch.org; IIRC the Sorcerer dev team used this as a starting point).
Don't Pheer the Source! (Score:5, Insightful)
Here's what you do on your distro (if it will let you compile from source without mangling the package system):
Monday: compile the kernel and glibc
Tuesday: compile gcc, binutils, textutils
Wednesday: compile XFree86
Thursday: compile kde-libs, kde-base (or gnome equivs)
Friday: compile kde-network, kde-utils (or gnome equivs)
etc
etc
etc
The advantages of doing your own builds, summarized (you can find the detailed versions on the Sorcerer, Gentoo and FreeBSD pages):
Significant performance increase
Customized package configuration (Dia without GNOME, Xmms with mods, etc)
Fewer dependency problems (let configure worry about which exact libs you have installed)
Binary packages are convenient during installation, but they shouldn't be the final product on your box. They're so you can get a system up and running fast. Afterwards you can rebuild everything in the background while you're posting your trolls.
Most of you Linux guys are fanatical about Open Source and Free Software. It's your mantra and credo. Yet you fear the words "./configure; make; make install" every bit as much as the clueless windoze lusers. Free Software is meaningless without source code. Without source it might as well be proprietary freeware. Source code is your power. Don't pheer the source!
A Free operating system allows you to do whatever you want with it! You are in control. Your box is yours. So why is some release manager at Red Hat or SuSE your sysadmin-by-proxy?