Review of Sorcerer GNU Linux

ladislavb writes: "Sorcerer GNU Linux is not just another Linux distribution. It did not follow the tried and tested path of modifying a major Linux distribution and releasing it under a new name. Instead, the Sorcerer development team embarked on a completely unconventional way of putting together a unique distribution with features not found anywhere else. Once installed, it will be 100% optimised for your hardware, it will include the very latest Linux applications and it will provide an incredibly convenient way of keeping all software, even essential system libraries, up to date. The full review of Sorcerer GNU Linux was written by DistroWatch.com."
  • by CaptainAlbert ( 162776 ) on Friday January 18, 2002 @09:34AM (#2861542) Homepage
    I recompiled Qt from source on my four-year-old machine the other day, and it took six hours. I'm not recompiling every bit of software on my machine... it would take weeks. I doubt I could even fit all the source code on my HDD. But then it's a seriously retro setup so I'm probably making a fuss about nothing.

    Where I think this would come into its own is on a site, like a university or large company, where there are (a) hundreds of identical machines with exactly the same specification (down to the position of the sticker on the case), and (b) people who know what they're doing (ha ha) in charge. You could amortize the time taken to create the optimised system over the savings once you've installed it on every PC.

    I wonder if they support using a compile farm to perform the rebuild? That would be sweet.
  • make world (Score:2, Interesting)

    by Builder ( 103701 ) on Friday January 18, 2002 @09:34AM (#2861545)
    So there is a GNU/Linux distribution with the equivalent of FreeBSD's cvsup and make world process. Yah. Whoohoo.

    So now we can have packages optimised for our platform, at the cost of building everything from source. Sounds like a heavy cost to me. Wouldn't it be more efficient to provide a couple of different binary packages for each package, à la Mandrake (i586 and i486)? Compile once and let everyone install them, as opposed to everyone compiling?
  • by d-Orb ( 551682 ) on Friday January 18, 2002 @09:42AM (#2861579) Homepage

    Just a question: is this distribution's approach similar to the BSDs'? If it is not the same, I think it is very similar to the ports system, which is a very useful and clever approach in a lot of respects.

    On the other hand, I don't think that many people would be that keen to recompile KDE/GNOME from scratch every time! Especially on legacy (i.e., more than 3 months old) hardware. However, for a (say) dedicated web server or something like that, it might have its uses...

  • by prototype ( 242023 ) <bsimser@shaw.ca> on Friday January 18, 2002 @09:50AM (#2861620) Homepage
    I think this is an interesting idea but has a few flaws.

    First, they ask that your swap image be at least 1GB in size. I don't know about everyone else, but my Linux partition is just 2GB, so that's half of my disk already. I know, I know, these days everyone has 30, 40, and 60GB drives so it's not a big deal. Maybe it's just time for me to get more iron.

    Anyways, the big feature this distro seems to be claiming is the automatic (and seamless) updates. You can run this "sorcery update" command in a cron job at night and have a brand spanking new system the next morning. While this sounds like the cat's meow, what if I don't want the latest and greatest? I personally don't want to live on the bleeding edge and don't always want the latest. Also, who decides what's the latest? The latest beta? Is it running the 2.4.17 kernel or something even newer? What version of KDE does it have?

    It's also a huge distribution and requires a dedicated weekend to get up and running. The name implies that it's something a beginner could sit down and start up with, but this is not the case. If you're looking for a simple install, stick with Mandrake/RedHat or something. If you have a few gigs to chew up and a weekend to burn, maybe give it a try.

    liB

  • I just posted this [slashdot.org] last night...
  • Optimization (Score:5, Interesting)

    by LinuxGeek8 ( 184023 ) on Friday January 18, 2002 @10:02AM (#2861669) Homepage
    I am sure it is all nice and optimised when you compile everything from source.
    There is just one disadvantage: while you are compiling that latest version of XFree86, GNOME or KDE, the computer does not feel really optimised.
    Compiling everything is just too much hassle, and takes too much time and computing power.

    For a server there are not that many packages installed, so it can be useful. But on my desktop I have about 2GB of software installed. Keeping that up to date... nah.

    Just let me update everything from binary, be it apt-get or urpmi.

    Btw, I have a friend who was horrified when I showed him apt-get. "You update from binaries? Do you call that security?"
    He liked to install security updates from source.
    When I asked some time later how he kept his FreeBSD boxes up to date, he said he did not.
    Hmm, I guess it is just too much hassle.
  • by nagora ( 177841 ) on Friday January 18, 2002 @10:18AM (#2861740)
    I have broadband and a 600MHz machine which probably never gets more than 1% usage over a whole week, with no reduction in cost if I don't use my bandwidth. Downloading and recompiling at night would suit me fine, and I'd actually be getting better value for the broadband.

    If I was on a 56K modem and a slow machine, I'm not so sure this would be worthwhile. But slow machines are getting rare now.

    TWW

  • The difference is, I don't see why someone should need to recompile ANY source on their computer. What if you download a very simple mini-kernel, which scouts out what hardware that machine has, and then allows you to upload that information to the server site.


    The user would then pick what software they want installed on their system, as per any other distro.


    The server site can then take the source, recompile it for that configuration, and generate a set of ISO images containing the optimized setup for that machine.


    One advantage of this approach is that if you're installing on multiple identical machines, you would only go through the process once. Once it's done, you'd have a set of "instant" install CDs. No menus, no further tweaking, just a direct blast onto the hard drive(s).


    A second advantage is that a server site can have a compiler farm, making the build process MUCH quicker than would be possible for an individual.


    A third advantage is that if someone sends in a configuration which matches one that's already been done, the compiler farm only needs to rebuild updated packages. The rest has already been done. The CDs can then be built out of freshly-compiled binaries and pre-compiled ones.


    A fourth advantage is start-up time. Because you're downloading a very basic bootstrap, rather than a mini-distro, the time to download, install and run is going to be much much less.


    The last advantage is when it comes to updating your system. Again, with all the compiling being done on a remote compiler farm, the time it would take to do a basic update would be minimal, compared to Sorcerer, and far more optimal, compared to Up2Date or Red-Carpet.


    The key to something like this would be the detection of hardware and (on networks) servers. Kudzu is good, but it's limited. sensors-detect is OK, but it's specific. I don't know what Anaconda uses to detect the graphics stuff, but again that is good, but specific. Any router can detect other routers working with the same protocol. There's plenty of stuff that none of the above detect, but would need to, for a truly optimized build & auto-configure. (Is the network multicast-aware? Will the network support ECN? What is/are the IP addresses of routers on the network? Where is a DNS server? Is the sound device better supported under ALSA or OSS? Do memory constraints indicate optimizing for speed or size? etc.)
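    To make this concrete, here is a minimal Python sketch of the kind of probing being described; the file paths and the 64MB memory threshold are assumptions about a typical Linux box of that era, not part of any existing installer:

```python
import os

def detect_profile():
    """Probe a few of the facts an auto-configuring installer would want.
    Paths are typical for a Linux system of this vintage; treat them as assumptions."""
    profile = {}

    # Memory: a small machine suggests optimizing for size rather than speed.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                mem_kb = int(line.split()[1])
                profile["optimize"] = "size" if mem_kb < 64 * 1024 else "speed"

    # Sound: prefer ALSA if its /proc interface is present, else fall back to OSS.
    if os.path.isdir("/proc/asound"):
        profile["sound"] = "alsa"
    elif os.path.exists("/dev/dsp"):
        profile["sound"] = "oss"

    # DNS: a resolver address is something we can read rather than ask for.
    if os.path.exists("/etc/resolv.conf"):
        with open("/etc/resolv.conf") as f:
            profile["nameservers"] = [l.split()[1] for l in f
                                      if l.startswith("nameserver")]

    return profile

if __name__ == "__main__":
    print(detect_profile())
```

    Whatever a probe like this cannot answer (ECN, multicast-awareness and so on) is exactly the information that has to be asked for.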


    An optimized build is more than just tweaking the configure options. It's also choosing the right compiler (where multiple options exist). It's setting up the configuration files for those things that can be discovered. It's about asking for the information that's needed, rather than the information that can be found out.


    My idea would be that the servers would have a database, containing source and binaries, identified by a 1-way hash of the relevant hardware information. This avoids any privacy issues, as there's nothing private stored. Each access would amount to a search for all records where the hashes of the relevant hardware match. For updates, the user's machine could then select/deselect stuff it already had. The rest would be put into ISO form, and be available for download.
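    As a rough illustration of the hash-keyed lookup described above, here is a Python sketch; the record layout and the in-memory dictionary standing in for the server-side database are hypothetical:

```python
import hashlib
import json

def hardware_hash(hw_info):
    """One-way hash of the relevant hardware facts. Only the digest is stored
    server-side, so nothing identifiable about the machine is kept."""
    canonical = json.dumps(hw_info, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical server-side index: digest -> list of pre-built package records.
build_cache = {}

def lookup_or_queue(hw_info, requested_packages):
    """Return cached builds for this hardware profile and the list still to build."""
    key = hardware_hash(hw_info)
    cached = build_cache.get(key, [])
    already_built = {record["name"] for record in cached}
    to_build = [pkg for pkg in requested_packages if pkg not in already_built]
    return cached, to_build

# Two users submitting identical hardware hash to the same key,
# so the second reuses whatever the first one's request got compiled.
hw = {"cpu": "pentium3", "ram_mb": 256, "nic": "eepro100"}
print(lookup_or_queue(hw, ["gcc", "xfree86"]))
```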

  • by Necroman ( 61604 ) on Friday January 18, 2002 @10:24AM (#2861776)
    This is just another Linux distro, like all the others. It runs the Linux kernel, GNU, XFree86, and all the other fun apps that all distros use. What it has over everyone else is that everything installed on there is gonna be brand spanking new.

    Who is this for?
    NOT EVERYONE!

    There have been so many threads of people saying this is not good for them. Well, you know what, then it isn't. This distro is for the people that want to have everything up to date. It won't be the best distro in the world, since the combination of all the different apps you are installing has not been tested, but it leaves you with something that is set up the way YOU want it set up, not the way some developer over at (insert distro name here) decided to do it.

    Look for the good in the distro, don't just go hounding it.
  • Updating. (Score:3, Interesting)

    by saintlupus ( 227599 ) on Friday January 18, 2002 @11:00AM (#2862037)
    You know, one of the main reasons I like the *BSD operating systems so much is the port / package systems that make this sort of updating so simple.

    I've tried Debian, but I don't know if it was the weird hardware (Using the m68k build) or just my newbieness (more likely) that made me dislike it so much.

    This Sorcerer distro, on the other hand, sounds like all the ease of maintenance of the FreeBSD "make buildworld" setup with the greater driver base of Linux. Win/win. I might just have to check this out.

    --saint
  • by alext ( 29323 ) on Friday January 18, 2002 @11:17AM (#2862150)
    You know, just maybe the emergence of systems that try to build the whole shebang from source is a way of telling us that we need a better way to distribute programs for different platforms?

    Rather than assume that everyone can download the 386 version or compile their own on their embedded Linux PDAs, let's address the real requirement and make some progress, however small, towards practical cross-platform code distribution.

    Eric and the rest of the 'visionaries' can ignore it as long as they want, but the fact is that Linux as a platform is going nowhere relative to .NET and Java unless it adopts a VM. Cross-hardware platform distribution issues are already affecting PPC and ARM users and this will get worse as small non-x86 devices spread.

    While code distribution requirements alone would be sufficient to justify a 'Linux VM', there is another possible benefit here which gets precisely zero attention. Ever thought what might happen if the source code and compiled code were semantically equivalent? This is almost the case now with Java bytecode, in that decompilers such as JAD can turn compiled .class files back into source. Real equivalence (a bit like old tokenised BASIC systems) would mean that all that ever needs to be distributed is the 'compiled' form and, by its very nature, this code is always open.

    Sound tempting? Well, it's hardly rocket science to implement these days - we have Java, Mono and Parrot VM work going on anyway, and the commercial world has pretty much left the goalposts wide open for an improved bytecode or AST representation of programs.

    Why is the investment of time and money going into such a parade of half-assed solutions?
  • Re:But will it do.. (Score:2, Interesting)

    by zby ( 398682 ) on Friday January 18, 2002 @11:38AM (#2862304) Homepage
    They claim to have the newest versions (something like a day after Freshmeat). I wonder whether simply compiling on the user's machine really simplifies putting together packages for a distribution that much.
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday January 18, 2002 @03:42PM (#2863951) Homepage Journal
    I agree with the points you raised, and think that a -total- system analysis should provide a reasonable "version 1" configuration. Sure, it won't provide a "perfect" kernel, but if I can squeeze out a 25-50% improvement, hey, that's probably worth it, compared to using a stock kernel.


    eg: Does the machine have an ISA bus? This is one of the points you mentioned. Easy to find out. You probe for an ISA bus. If there's nothing there, you don't need ISA support. If there's something there, but no cards can be detected, then any ISA option the user picks for the build will probably be better off as a module.
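    A tiny Python sketch of that decision rule; probe_isa() here is a hypothetical stand-in for whatever bus probing the installer actually performs:

```python
def isa_kernel_option(probe_isa):
    """Turn an ISA-bus probe into a kernel config choice.
    probe_isa is a hypothetical helper: it returns None if no ISA bus is
    found, or a (possibly empty) list of detected ISA cards."""
    cards = probe_isa()
    if cards is None:
        return "n"   # no bus at all: leave ISA support out entirely
    if not cards:
        return "m"   # bus present but no cards seen: build ISA drivers as modules
    return "y"       # cards detected: compile support into the kernel

# The three outcomes:
print(isa_kernel_option(lambda: None))        # -> "n"
print(isa_kernel_option(lambda: []))          # -> "m"
print(isa_kernel_option(lambda: ["ne2000"]))  # -> "y"
```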


    Let's also look at the PPP/SLIP stuff for a second. If someone has a modem, you cannot determine by merely probing the system whether that modem will be used for PPP or SLIP (or even just raw connections). If it uses PPP, do they want PPP over ATM? There are many possibilities, as you rightly say, and many configurations to suit each.


    My proposal here is to use the detected hardware to generate a list of questions that need to be answered for an optimal solution, with a "likely" default, based on known patterns.


    eg: I'd say that if a user has an ethernet card, a built-in modem, and a modem on the serial port, then the built-in modem is probably "extra" and, if there's a specific driver for it, that driver would be best compiled as a module. However, if the first 1000 people who access such a distribution say "hell, no, I want that compiled directly into the kernel", then there's a decent chance that the 1001st person will have the same reaction, so that then becomes the default.


    In other words, the optimizations I'd select will not be fixed. The system would be capable of simple learning and pattern recognition. The most basic form would be to give every possible hardware/software combination an index number, and then use the ordered pair of (kernel option, index) to find the option that most people who had that combination preferred.


    How this would work, in practice: Person A has an internal modem, and specifies that they want networking software and a dial-up client. The default for this combination would be for PPP to be built into the kernel. However, person A says "no, I don't want PPP at all." The weighting now shifts, for that specific combination, and no other.


    An extremely trivial way to do this is to store the weighting as a floating-point value between 0 and 1. 0 to 1/3 is defined as "don't include". 1/3 to 2/3 is "include as a module". 2/3 to 1 is "include in the kernel". If a value somehow hits a boundary perfectly, then the default would be to go to the lower of the two options. A choice is given a value of 1 for "kernel", 1/2 for "module" and 0 for "not at all". You then simply find the average value, to compute the default most likely to be what's wanted, or close to.


    So, back to person A. The value started at 1, but is now 1/2, because they chose not to include PPP. This means that the next person with that same setup will be offered PPP as a module, as the default. If they, too, say that they don't want PPP at all, then the value drops to 1/3, which pushes it into the camp of not being included, in any form, by default, unless otherwise specified.
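    A minimal Python sketch of this weighting scheme, using the values and thresholds given above (1 for kernel, 1/2 for module, 0 for leaving it out, with boundaries falling to the lower option); the class and the initial seeding are illustrative assumptions:

```python
# Map each user's choice to the value described above.
CHOICE_VALUE = {"kernel": 1.0, "module": 0.5, "none": 0.0}

class OptionWeight:
    """Running average for one (kernel option, hardware index) pair."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def record(self, choice):
        self.total += CHOICE_VALUE[choice]
        self.count += 1

    def default(self):
        avg = self.total / self.count
        if avg > 2 / 3:
            return "kernel"
        if avg > 1 / 3:
            return "module"
        return "none"    # values exactly on a boundary fall to the lower option

# Person A's walkthrough: the weighting is seeded at 1 ("kernel");
# after A declines PPP the average is 1/2, so the next default is "module".
ppp = OptionWeight()
ppp.record("kernel")   # initial seeding for this combination
ppp.record("none")     # person A: "no, I don't want PPP at all"
print(ppp.default())   # -> "module"
```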


    The most complex such system I can readily imagine for this would be to set up a database as above, but instead of storing a simple value as the data, you could store a simple neural network. The network generates the default. Where the default is different from the selected value, the network is trained with the data set of the selected value, plus any related non-hardware data.


    Would such a phenomenally complex, massive, overwhelming system have any benefits over just handing over some pre-compiled binaries, plus source?


    THAT, IMHO, is a difficult question to answer. At first glance, the answer would be no. The speed/size variations would be too small for Joe Average to notice, under normal conditions. (Although, under real stress, they might notice SOME improvement.)


    However, let's examine a few things here. This kind of system would not be aimed at Joe Average, because Joe Average doesn't give a damn about the kinds of optimizations this system could perform. This kind of system would be used by power users, who need damn close to maximum performance with the minimum of resources.


    It's that specific combination that this whole convoluted mess of spaghetti-logic is aimed at. How to get the absolute most out of the absolute least. Generic binaries won't do that, and for some people, the "absolute least" will preclude compiling monsters such as GCC or XFree86 for themselves.


    (The docs for X say that it requires something like 50 megabytes to hold the source, and another 50 to compile in. This is over and above the space requirements for GCC and related tools. True, most people have 100 MBytes of free disk space handy. What if it's for an embedded card, with 16 MBytes of flash RAM doubling as "hard drive", though? You can't even put the sources for GCC or X in that kind of space. For that matter, you couldn't even compile the Linux kernel itself in that kind of space!)


    Optimal solutions for these cases require some form of cross-compiling, by someone, whether that someone is the admin or an online compiler farm.


    What about other cases? Low-end servers, for companies that can't afford Unix admins with the skills needed to precision-tune a machine? Schools, where very specific requirements need to be met, quickly, with nobody knowledgeable enough to meet them? Hardware geeks, who want to test the limits of their machine, but who don't know enough about the software to find those uttermost limits?


    These people don't really have a distribution that works for them, because what they need is too fluid. You can't pin it down exactly, and even if you were to produce a "good guess" for one case, the next case would be too different for generic solutions to work well. What's worse, these are often cases where the user doesn't have all the skills needed to do the work themselves. Sometimes they do, more often they'll have -some- of the skills, and in many cases, they'll have no skills in this area at all.


    The challenge, IMHO, is to find out how to optimize for everyone, while asking as little as absolutely possible.

Say "twenty-three-skiddoo" to logout.

Working...