Review of Sorcerer GNU Linux

ladislavb writes: "Sorcerer GNU Linux is not just another Linux distribution. It did not follow the tried and tested path of modifying a major Linux distribution and releasing it under a new name. Instead, the Sorcerer development team embarked on a completely unconventional way of putting together a unique distribution with features not found anywhere else. Once installed, it will be 100% optimised for your hardware, it will include the very latest Linux applications and it will provide an incredibly convenient way of keeping all software, even essential system libraries, up-to-date. The full review of Sorcerer GNU Linux, as written by DistroWatch.com."
This discussion has been archived. No new comments can be posted.

  • Wait a sec... (Score:3, Redundant)

    by saberworks ( 267163 ) on Friday January 18, 2002 @09:27AM (#2861518)
    Don't all Linux distros claim to be "not just another Linux distro?"
  • sounds familiar (Score:2, Informative)

    by koekepeer ( 197127 )
    if you find the idea of a linux distro optimized for your machine appealing, you should check out Gentoo Linux [gentoo.org].
    • or FreeBSD (Score:5, Informative)

      by hawk ( 1151 ) <hawk@eyry.org> on Friday January 18, 2002 @12:12PM (#2862560) Journal
      (and to a lesser extent, debian).


      they seem to do a bit more hardware detection, but for a running freebsd, the sequence


      cd /usr/src
      make update
      portsdb -uU
      make buildworld
      make buildkernel KERNELCONF=mykernelname
      make installkernel KERNELCONF=mykernelname
      init 1
      make installworld
      reboot
      pkgdb -F
      portupgrade -arR


      will recompile every last line of the OS, and then fetch source code from the distribution site for every port that has an update available (the -a), check checksums, grab the ports upon which these packages depend (the -R [ok, this is overkill:) ]), compile those, compile the updated packages, and then do all packages which depend upon the freshly compiled packages (the -r).


      a really cool process, which eats tons of CPU & bandwidth


      hawk

      • adding:
        CPUTYPE=p2
        CFLAGS=-O2 -pipe

        to /etc/make.conf is probably sufficient to get 90% of the performance gains that you're going to get from Sorcerer on an i686 system. It will cause the base system and ports to be built with those optimizations.
  • by CaptainAlbert ( 162776 ) on Friday January 18, 2002 @09:34AM (#2861542) Homepage
    I recompiled Qt from source on my four-year-old machine the other day, and it took six hours. I'm not recompiling every bit of software on my machine... it would take weeks. I doubt I could even fit all the source code on my HDD. But then it's a seriously retro setup so I'm probably making a fuss about nothing.

    Where I think this would come into its own is on a site, like a university or large company, where there are (a) hundreds of identical machines with exactly the same specification (down to the position of the sticker on the case), and (b) people who know what they're doing (ha ha) in charge. You could amortize the time taken to create the optimised system over the savings once you've installed it on every PC.

    I wonder if they support using a compile farm to perform the rebuild? That would be sweet.
    • yeah I know how it feels... until recently when I upgraded from p166@200 to a duron900 :-)

      I compile everything from sources nowadays, it really makes a big difference for the graphical stuff (XFree, Gnome, KDE, etc).

      You're right about the sites where there's plenty of identical machines (just clone it!), but there's also good use for your average user nowadays. Most people have sufficiently capable hardware to compile their own linux. And if you don't: just type in "make everything-as-i-want-it install" and go outside! get some fresh air, it's good for you ;-)
    • make buildworld (Score:3, Insightful)

      by swb ( 14022 )
      Weeks? C'mon, maybe on a 486. I did a FreeBSD buildworld on a SMP PIII700 (100Mhz FSB) just yesterday and it only took an hour, if that. I seldom re-do the applications and non-system libraries (stuff from ports) unless I really need to or am upgrading them specifically, but I bet even that stuff would only add another 30-40 minutes tops.

      I've been wondering when someone would do a Linux distro that compiled itself during installation, or at least a kernel.
      • by ichimunki ( 194887 ) on Friday January 18, 2002 @11:11AM (#2862107)
        Hmmm. That really doesn't sound like hardware that is four years old to me. And are we talking a buildworld from scratch-- none of this stuff had ever been compiled on that machine before? For how much of the ports tree? Just for the minimal install or for X, Gnome|KDE, some major apps like GIMP or KOffice or Evolution or Mozilla?

        You see where I'm going with this, of course... but compare your situation to mine. I've got one machine that I use regularly that is five years old, and therefore has a P/133 in it (it's a laptop, and it works for the things a laptop should do, there is no reason to go spending hundreds of dollars on newer hardware). That said, I disagree with our erstwhile whiner about the compile times being an issue. It won't take weeks, but it will take some effort and maybe a few days, especially if you don't have an automation tool to help you. Even so, set the larger packages to compile when you go to bed, or off to work, or whenever. And think of this as a one time investment, you won't be recompiling most of this stuff over and over.

        If this distro does even half of the hard parts: knowing where to get fresh source code, downloading the code to a local repository, helping configure the build, automating the build process, then it's quite a boon. And yes, you may have to wait for some time for this stuff to compile and install, but through the magic of CVS you might not have to wait so long to upgrade parts of it. *And* you get all the benefits of compiling from source- optimizations, special ./configure options you want, and more control over the whole install.
        • Re:make buildworld (Score:2, Informative)

          by Anonymous Coward
          I'm _obviously_ the only one posting here who has actually used this distribution, so first off, to those who have started bashing an excellent non-commercially-backed open source initiative: check yourselves. This distribution, as I see it, has one big boon to the world: AMD users will no longer have to put up with the pathetic RPMs that are compiled for, at best, an i586 Pentium. I can say from experience that even console apps feel much faster than on a binary distribution, and it also sets the compiler optimization flags (-k7 etc.) for you. This isn't Gentoo; it's Linux From Scratch for ESR's Aunt what-zername. cast and dispel provide quick addition and removal of source packages with dependency checking - no more hours of ./configure... hmm, what broke? OK, compile that, next ./configure... hmm, something else broke, etc. It's all taken care of.

          Now, in saying that, yes, I believe the packages are a bit bleeding edge, and I had to deal with a kernel which had a broken IEEE 1284 (parallel-port printer) driver, but life goes on and new kernels are released. From what I saw, a cast Sorcerer system has MUCH less chance of breaking than an apt-get dist-upgrade from Debian unstable, and it is the next move up the food chain for the bleeding-edge software crowd.

          All in all, if I were rating SGL I'd give it perhaps 3 or 4 out of 5 stars. Its speed and outright power on AMD architectures (like almost every computer in my house) is a huge boon, as is the rest of the system, and as the scripts continue to advance, this distribution will become better and better - unless, that is, binary distributions would like to give me stock kernels and packages in .k7.rpm.

          again just my $.02

          Trelane
    • I recompiled Qt from source on my four-year-old machine the other day, and it took six hours. I'm not recompiling every bit of software on my machine... it would take weeks.

      Well, this distro uses a thing called tmpfs, doing most of its compiling in a RAM disk. I would assume this means that you need ample RAM, but it should speed things up considerably.
  • make world (Score:2, Interesting)

    by Builder ( 103701 )
    So there is a GNU/Linux distribution with the equivalent of FreeBSD's cvsup and make world process. Yah. Whoohoo.

    So now we can have packages optimised for our platform at the cost of building everything from source. Sounds like a heavy cost to me. Wouldn't it be more efficient to provide a couple of different binary packages for each package, a la Mandrake (i586 and i486)? Compile once and let everyone install them, as opposed to everyone compiling?
      Wouldn't it be more efficient to provide a couple of different binary packages for each package, a la Mandrake (i586 and i486)?

      That would cost more for the CDs and for download bandwidth, especially when you take into account Alpha, Sparc, MIPS, and all the other PC-class-or-higher architectures that Linux runs on. See also my Everything 2 article about making Linux distributions smaller [everything2.com].

      >Wouldn't it be more efficient to provide a couple of different binary packages for each package, a la Mandrake (i586 and i486)?

      Yes and no.

      If you take into account the LOST efficiency of the enduser not being able to compile their kernel (or glibc, or bash) with their optimizations, then no.

      Things like being able to use 2.95.3 instead of 2.96, or being able to use stackguard. These are the beauties of a source-based distro.

      Not to mention the architectures that the distro maker doesn't have hardware to test on, nor time to test on. PS/2 anyone?

      So, no.

      Plus, your argument puts the burden on the distro-maker to compile for all the different architectures, host them, and provide the bandwidth for all of that.

      Sorcerer (very wisely) has you download the majority of the distro from upstream sites upon recasts.
  • The correct name of the distribution should be "Sourceror GNU Linux"
  • Anyone with a mirror?
  • by Anonymous Coward on Friday January 18, 2002 @09:39AM (#2861562)
    Here's the review.

    Would you like a Linux distribution which is 100% optimised for your hardware? Would you like one which includes the very latest software packages as they are released by their respective maintainers? How would you feel if we told you about a Linux distribution where the entire download-compile-install process of any software (including the Linux Kernel, glibc, GCC, KDE) is done by one simple command? Intrigued? Then read on. Welcome to the magic world of Sorcerer GNU Linux!

    1. Introduction

    Once you install a few Linux distributions, you will soon get to understand the basic process, which is rather similar in most mainstream distributions. Partition your hard drive, select the packages to install, listen to the CD spinning in your drive and, when it calms down, do some hardware and network configuration to conclude the process. Less than an hour after inserting the installation CD you will have a fully working Linux system on your computer.

    But things can be done completely differently. The beauty of Linux is that there are plenty of ways to achieve the same goal. As long as we are free to exercise our creative abilities and implement the resulting ideas, we can create amazing things.

    Just think about this for a minute - since the vast majority of Linux software comes with source code, why is it necessary to download binary files that somebody compiled on a particular hardware platform, built with all sorts of options so that they run on thousands of different hardware configurations? Would it not be more logical to compile everything on your own machine, ensuring that the code is optimised for exactly your hardware?

    Enter the magic world of Sorcerer GNU Linux, a Linux distribution with a difference.

    2. Basic Information

    Sorcerer GNU Linux (SGL) is a new Linux distribution. Its first release was produced in July 2001 and subsequent updates have been frequent, at roughly twice a month. The project's web site is unlikely to win any design awards, but the essential information, with FAQs and mailing lists, to get anybody started is present.

    The basic philosophy of Sorcerer GNU Linux is amazingly simple - after installing it, you will end up with the most optimised system for your particular hardware configuration and, at the same time, you will be running the absolute latest software available.

    How does Sorcerer achieve this? First, you download the compressed ISO image, unzip it, burn it to a bootable CD and boot from it. After answering a few questions, you will watch the installation of a basic Linux system on your hard drive. Nothing really differs much from any other distribution until you get to the kernel compilation stage. And this is when things become interesting...

    But we will leave a more detailed description of the installation process for the appropriate chapter. Here, just a basic overview: After your kernel is compiled, you will reboot into a brand new system. You are invited to configure your networking, knowing that soon there will be plenty of interesting code running down your cable or telephone line from various parts of the world. The great fun of selecting, downloading and compiling your packages can begin.

    So what is the downside of such a distribution? If it is so great, why isn't everybody using it? The main reason is that it takes a bit of knowledge and a lot of time to get Sorcerer GNU Linux up and running on your computer. Installing most other distributions takes less than an hour and gets you a full-featured Linux desktop with several window managers, servers and more applications than you can shake a stick at. With Sorcerer, well, you'd better reserve a rainy weekend if you'd like to achieve the same. Some people might find this too time-consuming, but those of us who like to tinker with and optimise every bit and every byte of our hardware, and those of us who like to run the latest software as it is produced, will find that Sorcerer is a dream come true.

    3. Installation

    Let's get going with the installation process. There are several steps to accomplish this:

    1. Download the compressed ISO image.
    2. Install a basic (binary) Linux system from the downloaded CD image.
    3. Compile the kernel.
    4. Configure networking.
    5. Recompile all the applications from the original CD image.
    6. Get, compile and install all applications you need.

    We will look at each of these steps separately.

    3.1 Downloading Sorcerer

    Sorcerer GNU Linux can be downloaded from the project's web site and its mirrors (see the side bar for links). The size of the compressed ISO image is about 80 MB, swelling to about 250 MB after unzipping the archive. You can use any standard CD writing tool to create a bootable installation CD.

    Keep in mind that this ISO image gets updated frequently, so your downloaded image will quickly become out of date. This is where a unique feature to update the downloaded ISO image comes in handy. It was primarily designed for those on a slow connection - instead of downloading the entire new ISO image, you can opt to download a small patch with the *.xdelta extension. To upgrade the original ISO image to the latest version, simply type:

    xdelta patch sorcerer-$OLDDATE-to-$NEWDATE.xdelta sorcerer-$OLDDATE.iso sorcerer-$NEWDATE.iso

    This command will produce a new ISO image. Now you can verify the md5sum of the new ISO and if it matches the one found on the Sorcerer's download page, the image is ready to be burnt!

    Naturally, there is no need to get the latest ISOs if you already have Sorcerer GNU Linux up and running, but this feature might come in handy when you wish to burn the very latest image without downloading it in full. This way you can always keep the latest ISO image on your hard disk in case you decide to do a new installation or would like to pass the latest image to a friend.

    3.2 Installing a Basic System

    This step is, for the most part, similar to installing any other Linux distribution. The installer offers partitioning tools, such as fdisk, cfdisk and parted, to create partitions. You can then proceed with selecting your root partition and its file system (ext2, ext3, ReiserFS and XFS are offered) and a swap partition. You will be advised to create a large swap partition; the basic rule is that your amount of RAM plus the size of the swap partition should be at least 1 GB. Do not ignore this advice! Although the installer will complete the installation even if the above condition is not met, you will run into trouble later when trying to compile large programs and will get frequent "out of memory" errors.
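The RAM-plus-swap rule can be sketched as a quick shell calculation. This is only an illustration of the review's 1 GB recommendation; the parsing of `free` output is an assumption about the standard Linux layout:

```shell
#!/bin/sh
# Sketch: minimum swap size satisfying RAM + swap >= 1 GB,
# per the installer's recommendation above.
ram_mb=$(free -m 2>/dev/null | awk '/^Mem:/ {print $2}')  # installed RAM in MB
ram_mb=${ram_mb:-1024}                                    # fall back if 'free' is unavailable
if [ "$ram_mb" -ge 1024 ]; then
    swap_mb=0
else
    swap_mb=$((1024 - ram_mb))
fi
echo "Minimum recommended swap: ${swap_mb} MB"
```

So a machine with 256 MB of RAM would want at least a 768 MB swap partition under this rule.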

    This memory requirement might seem strange at first, but the logic behind it is that Sorcerer makes use of "tmpfs", a virtual RAM drive which can also use swap space, to accelerate compilation and minimise file system fragmentation. This makes sense, as you are about to do a lot of compiling, and the speed gains are definitely noticeable.

    As soon as the partitioning is done, you can start with the installation. Note that at this stage you are still installing binary files found on the installation CD to create a basic working system. These packages will be recompiled at a later stage.

    3.3 Compiling the Linux Kernel

    The next step is where the similarities with mainstream distributions end. Yes, you are going to compile the kernel. This is, of course, where you can spend quite a lot of time tweaking and optimising, but at the very least you should make sure that you select all the necessary modules for your hardware. Check the modules for your network, sound and video devices, as well as any other hardware you need. Of course, you can always recompile the kernel later if something is missing.

    After the compiling is done, you will be prompted to configure networking.

    3.4 Configuring Networking

    First of all, you need to load the module for your network card (provided you have built the driver as a module rather than into the kernel). This is best done by creating an alias in your /etc/modules.conf file: 'alias eth0 network-card-module', then loading the module with 'modprobe eth0'. At this point you should have a working network card. If you compiled the network driver directly into the kernel, the above step is unnecessary.
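As a sketch, supposing your card used the 3c59x driver module (the module name here is only an example; substitute the one for your hardware), the /etc/modules.conf entry would be:

```
# /etc/modules.conf - 3c59x is just an example; use your card's module name
alias eth0 3c59x
```

followed by 'modprobe eth0' to load it.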

    Sorcerer used to provide both DHCP and PPP support to connect to the outside world. Being on an ADSL connection, I looked for the PPPoE support my service provider required. I failed to find any way to connect with the tools provided, so I had to install Roaring Penguin's RP-PPPoE package, the sources of which I had on another partition. When I mentioned this to Kyle Sallee, the author of the distribution, he promptly produced a brand new Sorcerer release with PPPoE support built into the installation script! "If you need anything slightly more exotic, just provide me with the details to set it up," replied Kyle.

    Ah, the wonders of personalised technical support only available in niche distributions!

    3.5 Recompiling Installed Applications

    You only need to execute two commands in this stage, but the execution is likely to kill an entire evening. The two commands are 'sorcery update' and 'sorcery rebuild'. The first command simply fetches the latest application database from the Sorcerer web site. This ensures that you do not compile older packages when newer ones are available.

    The second command recompiles the existing software on your system. This process is interactive by default and you are prompted to look at the compilation log after each package. It is possible to disable the prompt in the sorcery menu so that you can run the process while you sleep. And when you wake up, you will have a 100% optimised system - and it is not even necessary to reboot!

    3.6 Getting, Compiling and Installing Applications

    The final step towards a fully working Linux box is downloading, compiling and installing all the software (or spells, in Sorcerer's terminology) you need. The beauty of this seemingly troublesome and scary process (especially if you have tried it on other distributions) is that it is all done by one simple command: 'cast package-name'. If you prefer a menu-driven approach, you can simply type 'sorcery' and you will be presented with a categorised list of applications to choose from.

    The list of applications (or spells) is taken from a database called the 'grimoire' - a sorcerer's book of spells - containing nearly 700 spells at the time of writing. This database is updated daily, ensuring that new software releases find their way into the grimoire shortly after the official release. Trying to impress your friends? Sorcerer GNU Linux will certainly make them green with envy...

    So how are all the dependencies resolved, you might ask. Isn't this part the most troublesome of all? The dependencies are taken care of for your convenience and peace of mind. Once you cast a spell (i.e. execute a package installation command), you will be prompted to include all the necessary dependencies into your spell. Additionally, you will be prompted to include or exclude optional dependencies, another feature not found in any other Linux distribution. The most amazing thing about this feature is that the entire download-compile-install infrastructure was written in nothing more sophisticated than Bash.

    As soon as the basic stuff is installed, you are free to indulge in the vast package resources provided by the distribution. You can continue with compiling XFree86, a rather lengthy step taking nearly 40 minutes on the Pentium 4 test machine. Casting XFree86 prompts you to run the configuration menu, an essential step if you would like to include specific drivers for your graphics card. Afterwards you can proceed with casting a window manager of your choice (the very latest versions of KDE, Gnome, WindowMaker, IceWM, Sawfish, Enlightenment, AfterStep, Blackbox and others are included) plus its associated libraries. Install any other software you need, and a few hours of largely incomprehensible messages on your monitor (unless you prefer to turn this output off) will produce a pretty complete system by any standard. The Sorcerer's spells have been cast!

    4. Post-installation

    Now you have a complete Linux system ready to be put to productive use, as would be the case with any other Linux distribution. But apart from having the most up-to-date and most optimised system within a considerable radius of your location, there are still some interesting and unique tricks up Sorcerer's sleeve, not seen anywhere else. Let us examine some of the more interesting ones:

    1. Software updates
    2. Sorcery options
    3. Rescue and maintenance

    4.1 Software Updates

    The beauty of this distribution is that it is incredibly effortless to keep up to date with the ever-evolving Linux application world. With a simple command, 'sorcery update', you can do a system-wide update of your entire installation. Creating a crontab entry for this command to run every night will result in a completely seamless update of your applications. Wouldn't it be nice to wake up one morning to find that you have a brand new KDE desktop without so much as lifting a finger? With Sorcerer GNU Linux, this is indeed a reality.
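Such a crontab entry might look like the following sketch. The 3 a.m. schedule and the /usr/sbin path are assumptions for illustration; adjust to wherever sorcery lives on your system:

```
# run a system-wide sorcery update every night at 3 a.m. (illustrative)
0 3 * * * /usr/sbin/sorcery update
```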

    There are two ways to update software on your system - a menu-driven sorcery and a command-line sorcery.

    Beginners might find the menu-driven way easier at first and it is certainly worth a look. The utility is invoked by typing 'sorcery' on the command line. The first option on the list is 'Spell', which, when selected, reveals a further submenu. From this menu you can install new applications from a categorised list of nearly 700 spells, recompile all applications on your system, select applications that should not be updated during the next system-wide update, and remove applications from your system. The menus are logical and self-explanatory, and help is provided in the form of (sometimes humorous) one-line hints at the bottom of the screen.

    Updating your system from the command line is equally easy once you master a few simple commands. Installing a new package is done with the 'cast package-name' command; just remember that the term 'install', as used by most other Linux distributions, is not accurate, since 'casting' actually involves downloading, compiling and installing the package. Ever tried to compile KDE on another Linux distribution, only to give up because of the hard work involved? With Sorcerer, all you have to do is type 'cast kde' and off you go - no more studying installation instructions, interpreting cryptic error messages or searching newsgroups! It really is as easy as that!

    The next command you will use frequently is 'gaze'. It offers an amazing array of useful options, which you can view by simply typing 'gaze'. Would you like to see a list of all packages installed on your system? Type 'gaze installed'. Do you want to see the list of available packages? Type 'gaze grimoire'. Fancy finding out a package's description, web site, maintainer or md5sum? How about searching the package list, viewing compile logs or listing the source files for a package? All this and a lot more can be done with the 'gaze' command, which is worth investigating in detail.
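To summarise the commands covered so far (a quick reference distilled from this review, not an exhaustive list; see the project's documentation for the full option sets):

```
sorcery              # menu-driven interface
sorcery update       # fetch the latest grimoire and update the system
cast package-name    # download, compile and install a spell
gaze installed       # list spells installed on your system
gaze grimoire        # list all spells available in the grimoire
```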

    4.2 Sorcery Options

    The Options menu in 'sorcery' offers a range of useful features. We will only mention some of the more interesting ones, but do take your time to find out what else is offered.

    The PROMPT DELAY option allows you to set the delay time, in seconds, for prompts while compiling and installing multiple packages - if no input is given within the specified time, a default action is taken.

    The APPRENTICE option allows you to execute a command even if the associated package is not installed on your system. Sorcery will simply download, compile and install the necessary package before executing it. The list of executables and associated packages is stored in the /var/lib/sorcery/apprentice directory.

    The AUTOFIX option, if enabled, is very useful when important system libraries get updated. This would normally break most packages that depend on those libraries, but the AUTOFIX option checks and rebuilds all packages that would otherwise be broken. It is enabled by default. It is worth noting that any packages that need to be rebuilt as a result of updated system libraries will not be downloaded again, but rather compiled from sources already present on your hard drive.

    Imagine for a moment that a new version of glibc gets released. To update it, you can simply 'cast glibc', which will build a new glibc, remove the old glibc libraries and recompile every package dependent on glibc. During this process there is a brief moment when there is no glibc in /lib or /usr/lib, but despite that, no application already running on your system will be affected! The Sorcerer's magic at its best!

    Other options worth mentioning are MAIL_REPORTS, which e-mails installation reports to a specified address; VOYEUR, which toggles the compiler's verbose output; and REAP, which, if enabled, causes all files associated with a package to be deleted when the package is removed.

    Finally, two more useful options to speed things up. To eliminate bandwidth bottlenecks, the software packages do not get downloaded from a central location, but rather from a maintainer's home site, FTP server or mirror. The 'Software Mirrors' option allows you to select a nearby mirror for the Linux Kernel, GNU packages, KDE, XFree86 and Gnome. The last option on the menu is 'Optimize Architecture', which gives you a chance to optimise all source code compiles for one of the available processor architectures - i586, i686, K6 or Athlon.

    It should be noted that the list of features is not static, but keeps growing based on user feedback.

    4.3 Rescue, Maintenance and Administration

    It is worth mentioning that your Sorcerer CD can serve as a rescue image as well. If you happen to get into trouble, you can boot from the CD, log in and carry out any necessary maintenance tasks. While this is not a unique feature, it is always nice to see that there is a simple way to log into your Linux system!

    The Sorcerer web site provides a categorised list of FAQs in the documentation section and you are encouraged to join the mailing lists where you will receive a warm welcome from the members.

    System debugging is handled with the help of extensive log files. No other Linux distribution provides this feature, which greatly simplifies bug fixing and bug reporting. The fact that Sorcerer GNU Linux is so highly up-to-date and remarkably bug-free can be partly attributed to the use of these log files. Have you encountered a problem during software compilation or installation? Instead of describing it verbally, submit the log file and the bug will be fixed in no time!

    The next question that comes to mind is: what if you want to install a package not yet available in the grimoire, Sorcerer's software database? Apart from compiling the package using the standard configure && make && make install method, you are more than welcome to create your own spells. This way, not only will the new package become part of sorcery, simplifying install/uninstall management, but you can also share your spell with the rest of the Sorcerer user community. Creating spells is not too difficult and extensive instructions are provided.
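As a rough illustration only - the authoritative spell format is described in the instructions mentioned above, and the file and directory names here are assumptions - a spell is essentially a small directory of Bash fragments along these lines:

```
myspell/             # hypothetical layout, for illustration only
  DETAILS            # Bash fragment: spell name, version, source URL, checksum
  BUILD              # optional: overrides the default configure/make build step
```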

    Finally, a recently introduced sorcery feature called 'cabal' provides administration and command-execution tools for simultaneous use on multiple Linux systems. It uses nothing fancier than ssh2 keys, ssh and scp - a set of simple tools to make a system administrator's life a little easier.

    5. Pros and Cons

    Advantages

    1. Sorcerer is 100% optimised. Not many people will argue with this benefit. By compiling every piece of code on your own system, you make sure that you get as much out of your hardware as you can. Of all the Linux distributions currently tracked by DistroWatch, only three are source-based (the other two being Linux From Scratch and Gentoo Linux). No binary distribution can beat a source-based distribution compiled on its home turf!

    2. Sorcerer is the most up-to-date distribution. This benefit might be void where there is no need to run the latest and greatest software, such as in the case of some specialist servers. Still, security issues do appear from time to time, and having an easy path to upgrading the affected application can save many hassles. Because this distribution is so highly up-to-date, all security and bug fixes are applied routinely.

    3. Sorcerer offers excellent support, both direct and via mailing lists. In line with most smaller and niche distributions, the author will often personally reply to your concerns and listen to users' suggestions. The PPPoE feature I asked about was placed in the next Sorcerer release literally within a few hours after I e-mailed the author!

    4. Sorcerer is continuously being enhanced. While already pretty complete and feature-rich, there are still many new features planned for inclusion in future releases. Of course, many other Linux distributions can claim this, but given the completely unconventional way of doing things at Sorcerer, we can only look forward to more unique features not found anywhere else.

    5. Sorcerer is fun to use. Admittedly, this is a highly subjective quality, but I can honestly say that I have never had more fun with any other Linux distro. Period.

    Disadvantages

    1. Sorcerer takes time to install. Installing a Linux distribution is a lot less troublesome than it used to be and most mainstream ones will get you up and running in less than an hour. Sorcerer, on the other hand, will require many hours of compiling before producing a full-featured system. However, the benefits of compiling all software are unquestionable.

    2. Sorcerer lacks decent documentation. As is often the case with many new projects, the documentation has not been given the highest priority. The current structure of hard-to-navigate FAQs and installation/usage notes resembles a schoolboy's scribbles rather than a carefully designed operating manual. The author seems well aware of this deficiency and welcomes any contributed documentation such as FAQs, man pages or installation instructions.

    3. Sorcerer is not for beginners. You don't need to be a Linux megaguru to install and run Sorcerer, but you should be reasonably proficient in basic system administration. There are many choices to be made during the installation and configuration of your system, some of which may be vital. You also need a thorough knowledge of your hardware and the kernel modules required by each hardware component. If you are not sure, you can simply accept the defaults, but it helps if you know a little about the Linux kernel, XFree86, Perl and other configuration options.

    6. Summary

    Of all Linux distributions available today, Sorcerer GNU Linux is positively the most unconventional. Instead of following the tried and tested method of many binary distributions, the development team has not only created a unique product, it has at the same time solved many of the perceived problems traditionally associated with the Linux operating system. The amazingly simple and fluent method of installing and upgrading software and system libraries makes one wonder why no other distribution has invented anything even remotely similar.

    While some may argue that Sorcerer's method of keeping up with Linux application development is more time-consuming, it is still much less time-consuming than resolving library dependencies, interpreting error messages and spending time on newsgroups searching for answers. Thanks to the sophisticated download-compile-install infrastructure of sorcery (though written in nothing more sophisticated than Bash), keeping your Linux box up-to-date is a painless process. Best of all, every single package is fully optimised for your hardware as it is compiled.

    Despite the development team's claim, Sorcerer GNU Linux is, in my opinion, a revolutionary Linux distribution. Do yourself a favour and download the 80 MB file, then find a weekend to explore it. I can virtually guarantee that at the end of the weekend you will be asking yourself questions like: "Why has nobody else thought of this!? Is it possible that installing and upgrading Linux software can be as simple as that?" You will have to pinch yourself to believe that you are not dreaming.

    I do not normally indulge in predicting the future. But if only 6 months of development resulted in a product of this sophistication and quality, then I honestly believe that Sorcerer GNU Linux is going to become a major Linux player in the near future.

  • 1. My dishes (hardy har har)
    2. reply to my girlfriend's daily comedy emails with replies that actually sound like I was at said site.

    Seriously, what more besides apt-get do people need for updates? I mean, I was so disenchanted with mandrakeupdater that when I got back into the swing of linux after a dry spell I almost gave up. Now with debian at least I can update things without fear of the kernel segfaulting on the next boot.

    • next boot? What is this "boot" you speak of?
    • by FreeUser ( 11483 ) on Friday January 18, 2002 @10:47AM (#2861939)
      Seriously what more besides apt-get do people need for updates? I mean I was so disenchanted with mandrakeupdater that when I got back into the swing of linux after a dry spell I almost gave up. Now with debian at least I can update things without fear of the kernel segfaulting on the next boot.

      I am an avid Debian user, and have moved an entire enterprise over to Debian because apt-get makes a system administrator's life so much easier and it halved my work load as a result. For binary distributions apt-get is unmatched, and apt-get source, while not perfect, is a very nice way to get sources and compile them.

      However, there are better approaches. FreeBSD's "ports" system comes to mind, where a skeletal directory tree structure and a simple make command are all that are required to automate the download, compilation, and installation procedure for a plethora of third party applications.

      No library conflicts. Any necessary patches applied on the fly, optimized and compiled for your system. It was, until this distribution came along, the only installation method I'd ever heard of, much less seen, that beat even apt-get hands down.
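
For anyone who hasn't seen the ports tree in action, the whole procedure the parent describes is a cd and a make; a typical session (using www/lynx purely as an example port) looks like:

```shell
# Build and install lynx from the FreeBSD ports tree.  make fetches
# the source, verifies checksums, applies any patches, builds missing
# dependencies, then compiles and installs the port; the "clean"
# target removes the work directory afterwards.
cd /usr/ports/www/lynx
make install clean
```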

      If this distribution lives up to its billing, it will be only the second, placing Debian's apt-get, Sorcerer, and FreeBSD's "ports" in a class all their own. Even as an avid Debian user I will be spending much of this weekend playing with Sorcerer.

      The real question is, will there be a good replicator or, better yet, an automated installation utility, so I can do installs on 50 similar but not identical machines without having to sit in front of each one? Replicator is the one thing that will keep me using Debian at work ... building new machines (even slightly different ones than the model) is just too easy to give up ... even for this.
    • Re:But will it do.. (Score:2, Interesting)

      by zby ( 398682 )
      They claim to have the newest versions (something like a day after freshmeat). I wonder whether simply compiling on the user's machine really does that much to simplify putting together packages for a distribution.
  • by d-Orb ( 551682 ) on Friday January 18, 2002 @09:42AM (#2861579) Homepage

    Just a question: is this distribution's approach similar to the BSDs? I think if not the same, it is very similar to the ports system, a very useful and clever approach in a lot of respects.

    On the other hand, I don't think that many people would be that keen to recompile KDE/Gnome from scratch every time! Especially on legacy (i.e., more than 3 months old) hardware. However, for a (say) dedicated web server or something like that, it might have its uses...

  • by 3seas ( 184403 ) on Friday January 18, 2002 @09:44AM (#2861589) Homepage Journal

    Does this mean Aunt Tillie gets to build her own kernel [slashdot.org]?

    Damn, that was fast! Leave it to the GNU community!!!!
  • by Idimmu Xul ( 204345 ) on Friday January 18, 2002 @09:48AM (#2861608) Homepage Journal
    Here [ntlworld.com] is a mirror of the review of sorcerer as the distrowatch site appears to have had the obvious done to it already :/
  • Gentoo Linux (Score:2, Redundant)

    ...looks very good. Gentoo [gentoo.org], like Sorcerer, builds an installation from source. It looks like I can create a fine-grained, very targeted installation with Gentoo, so I'll try it out on a new box next week and see how it works.

  • by prototype ( 242023 ) <bsimser@shaw.ca> on Friday January 18, 2002 @09:50AM (#2861620) Homepage
    I think this is an interesting idea but has a few flaws.

    First they ask that your swap image be at least 1gb in size. I don't know about everyone else, but my linux partition is just 2gb so that's half of my disk already. I know, I know, these days everyone has 30, 40, and 60gb drives so it's not a big deal. Maybe it's just time for me to get more iron.

    Anyways, the big feature this distro seems to be claiming is the automatic (and seamless) updates. You can run this "sorcery update" command in a cron job at night and have a brand spanking new system the next morning. While this sounds like the cat's meow, what if I don't want the latest and greatest? I personally don't want to live on the bleeding edge and don't always want the latest. Also, who decides what's the latest? The latest beta? Is it running the 2.4.17 kernel or something even newer? What version of KDE does it have?

    It's also a huge distribution and requires a dedicated weekend to get up and running. The name implies that it's something that a beginner could sit down and startup with, but this is not the case. If you're looking for a simple install, stick with Mandrake/RedHat or something. If you have a few gigs to chew up and a weekend to burn, maybe give it a try.

    liB
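
For what it's worth, the nightly-update scenario is a one-line cron job; a sketch (the path to the sorcery script is a guess -- check where your installation actually puts it):

```shell
# crontab entry: rebuild outdated packages at 3am every night.
# Only sensible if you really do want to wake up to freshly
# compiled bleeding-edge software.
0 3 * * *  /usr/sbin/sorcery update
```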
    • by iamsure ( 66666 ) on Friday January 18, 2002 @10:25AM (#2861786) Homepage
      >First they ask that your swap image be at least 1gb in size. I don't know about everyone else, but my linux partition is just 2gb so that's half of my disk already. I know, I know, these days everyone has 30, 40, and 60gb drives so it's not a big deal. Maybe it's just time for me to get more iron.
      The problem behind that is that the 2.4 kernel series recommends double your RAM in swap. Since RAM is ultra-cheap, and the average computer has 256-512MB of RAM, voila, 1GB.

      >Anyways, the big feature this distro seems to be claiming is the automatic (and seamless updates).
      And compiling from source, using the settings you prefer. And using upstreams sources generally.

      >You can run this "sorcery update" command in a cron job at night and have a brand spanking new system the next morning. While this sounds like the cat's meow, what if I don't want the latest and greatest?
      Then don't run the sorcery update!!

      >I personally don't want to live on the bleeding edge and don't always want the latest. Also, who decides what's the latest?
      The upstream, actually.

      >The latest beta?
      No.

      >Is it running the 2.4.17 kernel or something even newer?
      2.4.17

      >What version of KDE does it have?
      Umm... Don't know off the top of my head, but it is probably listed on the site; at least the current version numbers used to be.

      All in all, they are fairly cutting edge, but you can always choose NOT to be. Just don't cast the latest version!
    • I was interested in the Sorcerer distro too until I saw that line. I'm still interested in Gentoo and LFS (Linux From Scratch), based on the theory, "the more you have to do, and the more opportunities you have to screw up, the more you learn".

      However, that 1-gig swap file turned me off. I'm on a 2 gig disk, with either a 32 or 128 meg swapfile (16 megs memory), on an aging P166 that's my DHCP server. (Working fine too, other than a problem with /var/log/wtmp growing by 10 megs a day!) "Optimized" basic distros shouldn't need insane requirements.

      Just my $.02

  • I just posted this [slashdot.org] last night...
  • Optimization (Score:5, Interesting)

    by LinuxGeek8 ( 184023 ) on Friday January 18, 2002 @10:02AM (#2861669) Homepage
    I am sure it is all nice and optimised when you compile everything from source.
    There is just one disadvantage: while you are compiling that latest version of XFree86, Gnome or KDE, the computer does not feel really optimised.
    Compiling everything is just too much hassle, and takes too much time and computing power.

    For a server there are not that many packages installed, so it can be useful. But on my desktop I have about 2GB of software installed. Keeping that up to date.......nah.

    Just let me update everything from binary, be it apt-get or urpmi.

    Btw, I have a friend who was horrified when I showed him apt-get. Do you update from binaries? Do you call that security?
    He liked to install security-updates from source.
    When I asked some time later how he kept his FreeBSD boxes up to date, he said he did not do that. He felt safe behind a firewall.
    Hmm, I guess it is just too much hassle.
    • by kaisyain ( 15013 ) on Friday January 18, 2002 @10:22AM (#2861762)
      A decent amount of software you can get for linux nowadays comes with a ton of compile options. When I get a binary package from debian or redhat I have no say over which of those options were turned on. Maybe I don't want postfix to be able to support ldap and mysql and postgresql lookups? Well, tough, I don't have any choice in the matter and so I have to download and install those libraries to satisfy postfix's dependencies.

      Sometimes (most often in the case of SSL) you'll see multiple versions of the same package to satisfy problems like this. This is a hack to solve the problem. Sure I can install lynx-ssl instead of lynx but what if I want lynx to use slang instead of ncurses?

      This is more about control than optimization. While I'm not sure this level of control is necessary for everyone, control is one of the selling points of Open Source Software and something that most people like to have.
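
Concretely, the control in question is just the set of switches handed to ./configure at build time. A sketch of the idea, using typical autoconf-style flag names rather than the verified options of any particular package:

```shell
#!/bin/sh
# Assemble a configure command from per-user choices -- the sort of
# decision a binary package makes for you, once, at the packager's desk.
# Flag names are illustrative autoconf-style switches, not guaranteed
# to match any given package's configure script.
build_configure_cmd() {
    screen_lib=$1   # "slang" or "ncurses"
    want_ssl=$2     # "yes" or "no"
    cmd="./configure --with-screen=$screen_lib"
    if [ "$want_ssl" = "yes" ]; then
        cmd="$cmd --with-ssl"
    fi
    echo "$cmd"
}

# A lynx-style build against slang, with SSL:
# build_configure_cmd slang yes
#   -> ./configure --with-screen=slang --with-ssl
```

A source-based system lets you make these choices per machine; a binary package fixes them for everybody.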
      • Yup, you are right about the control thing.
        But instead of compiling everything from source you can just get the specific src.rpm, edit the specfile and rebuild the rpm with your options.

        I am mostly just talking from the way I deal with it.
        Mostly I do not really care about dependencies or build options. There are just a few packages for which I find it really important.

        Another point though.
        Building software from source often has its own dependencies: you need a compiler suite and lots of header packages.
        I recently struggled a lot to get a kernel built on my Alpha firewall with only 300 Mb disk.
      • It seems like most of your compile-time changes are really decisions made by the distribution (slang vs. ncurses), and they are not going to be choices in this new Linux distribution either.

        Plus, if you're really stripping postgre support from the binary, what happens when you later want to install postgre? Sure, you're not going to be installing postgre later if it's a server, but you might be if it's your desktop.

        Clearly, there is a market for an optimized server distribution, but flexibility is more important in a desktop distribution. RedHat could just fork its AMD and Intel distributions to get most of the benefits of optimizing.
    • I'm gonna explain this as a FreeBSD user. Using a /usr/ports-style system, which this Linux distro might use (I didn't read about it, I admit it), makes it all pretty easy. On my 200MHz machine, I had a small script that would compile and install everything I needed: php, apache, xfree etc etc...

      Doing it in real time is insane, but over night isn't so bad.
    • Btw, I have a friend who was horrified when I showed him apt-get. Do you update from binaries? Do you call that security?

      While paranoia is a valuable quality to have when you're dealing with security issues, it is most helpful when it is carefully directed. One has a limited amount of time and attention to spend on things, and being paranoid about silly things is a waste of those valuable resources.

      Paranoia about installing from source instead of binaries is misdirected unless your friend personally inspects the source very carefully for trojans before compiling anything.

      The only thing you're protecting against if you compile from source is if the distribution maker has a piece of malware that auto-trojans any binary created with their toolchain. On the threat scale, I would consider that one pretty unlikely, and the energy spent trying to protect against that would be better spent protecting against some more likely threat.

      In fact, keeping the FreeBSD boxes up-to-date is probably a good protection against much more likely threats than the one I outlined above.

    • >When asking sometime later how he kept his FreeBSD boxes up to date he said he did not do that.

      did he call the police after you smacked him? :)

      yikes, it's so bloody easy in bsd you can script it . . . (which I'm too paranoid to do . . .)

      hawk

    • Btw, I have a friend who was horrified when I showed him apt-get. Do you update from binaries? Do you call that security?

      apt-get source package

    • by Arandir ( 19206 ) on Friday January 18, 2002 @04:14PM (#2864178) Homepage Journal
      You don't have to compile your 2 gigs every night. Nor do you have to compile it all at install time. I don't think Sorcerer will let you install from premade binaries, but that's irrelevant if you aren't using binaries.

      Here's what you do on your distro (if it will let you compile from source without mangling the package system):
      Monday: compile the kernel and glibc
      Tuesday: compile gcc, binutils, textutils
      Wednesday: compile XFree86
      Thursday: compile kde-libs, kde-base (or gnome equivs)
      Friday: compile kde-network, kde-utils (or gnome equivs)
      etc
      etc
      etc

      The advantages of doing your own builds, summarized (you can get detailed advantages on the Sorcerer, Gentoo and FreeBSD pages):
      Significant performance increase
      Customized package configuration (Dia without GNOME, Xmms with mods, etc)
      Fewer dependency problems (let configure worry about which exact libs you have installed)

      Binary packages are convenient during installation, but they shouldn't be the final product on your box. They're so you can get a system up and running fast. Afterwards you can rebuild everything in the background while you're posting your trolls.

      Most of you Linux guys are fanatical about Open Source and Free Software. It's your mantra and credo. Yet you fear the words "./configure; make; make install" every bit as much as the clueless windoze lusers. Free Software is meaningless without source code. Without source it might as well be proprietary freeware. Source code is your power. Don't pheer the source!

      A Free Operating System allows you to do whatever you want with it! You are in control. Your box is yours. So why is some release manager at Redhat or SuSE your sysadmin-by-proxy?
      • this is undoubtedly one of the best posts I've read on slashdot in quite a while... let me see... how do I add Arandir to my "friends" list?
  • I have broadband and a 600MHz machine, which probably never see more than 1% usage over a whole week, with no reduction in cost if I don't use my bandwidth. Downloading and recompiling at night would suit me fine and I'd actually be getting better value for the broadband.

    If I was on a 56K modem and a slow machine I'm not so sure that this would be worth while. But slow machines are getting rare now.

    TWW

  • by jonv ( 2423 ) on Friday January 18, 2002 @10:20AM (#2861754)
    This...
    You will be advised to create a large swap partition, the basic rule is that your amount of RAM plus the size of swap partition should be at least 1 GB. Do not ignore this advice! Although the installer will complete the installation even if the above condition is not met, you will run into trouble later when trying to compile large programs and get frequent "out of memory" errors. This memory requirement might seem strange at first, but the logic behind it is that Sorcerer makes use of "tmpfs", a virtual RAM drive which can also use swap space, to accelerate compilation and minimise file system fragmentation. Because of the "tmpfs" file system, Sorcerer expedites compilation by making the most efficient use of RAM. This makes sense as you are about to do a lot of compiling and the compilation speed gains are definitely noticeable.
    Reminded me of a quote from an old fortune file: Virtual Memory ? Wow! Now we can have really large RAM disks...
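
The mechanism the review describes is stock kernel functionality; a hypothetical /etc/fstab line for a RAM-backed build area (mount point and size are examples, not Sorcerer's actual settings) would look like:

```shell
# /etc/fstab: tmpfs scratch space for compiles.  Pages spill over
# into swap under memory pressure, hence the RAM + swap >= 1 GB rule.
tmpfs   /tmp/build   tmpfs   size=768m   0   0
```
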
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Friday January 18, 2002 @10:21AM (#2861757) Homepage Journal
    The difference is, I don't see why someone should need to recompile ANY source on their computer. What if you download a very simple mini-kernel, which scouts out what hardware that machine has, and then allows you to upload that information to the server site.


    The user would then pick what software they want installed on their system, as per any other distro.


    The server site can then take the source, recompile it for that configuration, and generate a set of ISO images containing the optimized setup for that machine.


    One advantage of this approach is that if you're installing on multiple identical machines, you would only go through the process once. Once it's done, you'd have a set of "instant" install CDs. No menus, no further tweaking, just a direct blast onto the hard drive(s).


    A second advantage is that a server site can have a compiler farm, making the build process MUCH quicker than would be possible for an individual.


    A third advantage is that if someone sends in a configuration which matches one that's already been done, the compiler farm only needs to rebuild updated packages. The rest has already been done. The CDs can then be built out of freshly-compiled binaries and pre-compiled ones.


    A fourth advantage is start-up time. Because you're downloading a very basic bootstrap, rather than a mini-distro, the time to download, install and run is going to be much much less.


    The last advantage is when it comes to updating your system. Again, with all the compiling being done on a remote compiler farm, the time it would take to do a basic update would be minimal, compared to Sorcerer, and far more optimal, compared to Up2Date or Red-Carpet.


    The key to something like this would be the detection of hardware and (on networks) servers. Kudzu is good, but it's limited. sensors-detect is OK, but it's specific. I don't know what Anaconda uses to detect the graphics stuff, but again that is good, but specific. Any router can detect other routers working with the same protocol. There's plenty of stuff that none of the above detect, but would need to, for a truly optimized build & auto-configure. (Is the network multicast-aware? Will the network support ECN? What is/are the IP addresses of routers on the network? Where is a DNS server? Is the sound device better supported under ALSA or OSS? Do memory constraints indicate optimizing for speed or size? etc.)


    An optimized build is more than just tweaking the configure options. It's also choosing the right compiler (where multiple options exist). It's setting up the configuration files for those things that can be discovered. It's about asking for the information that's needed, rather than the information that can be found out.


    My idea would be that the servers would have a database, containing source and binaries, identified by a 1-way hash of the relevant hardware information. This avoids any privacy issues, as there's nothing private stored. Each access would amount to a search for all records where the hashes of the relevant hardware match. For updates, the user's machine could then select/deselect stuff it already had. The rest would be put into ISO form, and be available for download.
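
The hashing step is straightforward to sketch. This shell fragment (the summary format, digest choice and key length are all arbitrary) normalises a hardware description so field order doesn't matter, then derives a short anonymous key a server could index on:

```shell
#!/bin/sh
# Derive an anonymous cache key from a semicolon-separated hardware
# summary.  Identical configurations map to identical keys, and the
# server learns nothing beyond the digest itself.
hw_key() {
    # Split fields onto lines and sort, so field order doesn't
    # change the key; then hash and truncate.
    printf '%s\n' "$1" | tr ';' '\n' | sort | md5sum | cut -c1-16
}

# Example, with a made-up hardware summary:
# hw_key "cpu=athlon-1200;ram=256;net=eepro100;vga=radeon"
```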

    • What's the point? There's only so many different kind of processors. In fact, these days there's *basically* two options. Why not just build i686 and Athlon optimized versions and be done with it? If you want to hit more fringe groups, build PowerPC (yeah, sorry), (Ultra)Sparc, and Alpha packages too.

      All of the other things you mention are matters of choosing which binary package to install, or how to configure them. There's nothing to be gained by automatically recompiling.
      • by jd ( 1658 ) <imipak@ y a hoo.com> on Friday January 18, 2002 @12:04PM (#2862496) Homepage Journal
        High-memory, low clock-speed systems will probably want much higher levels of optimization (eg: -O6). Low-memory systems will be much better off with -O1. If debugging and/or profiling is required, then -O0 -g -pg is what you need. Otherwise, you want -O2, EXCEPT where it is known that you can achieve better compilation with naming the specific optimizations. (The kernel is an example of this.)
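
That rule of thumb is simple enough to sketch in shell; the megabyte thresholds below are invented, and the flags are the ones listed above:

```shell
#!/bin/sh
# Pick optimisation flags from available RAM and a debug switch,
# following the rough rules in the parent post; the thresholds
# are invented for illustration.
pick_cflags() {
    ram_mb=$1 debug=$2
    if [ "$debug" = "yes" ]; then
        echo "-O0 -g -pg"    # debugging and profiling build
    elif [ "$ram_mb" -lt 64 ]; then
        echo "-O1"           # low memory: keep the compiler frugal
    else
        echo "-O2"           # the sensible default
    fi
}
```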


        Nobody in their right minds would spend money on a brand-new Athlon or P4 system for a printer server, so you're likely looking at needing 486 and 586 options.


        The ARM/StrongARM architecture is probably just as important as the PPC and PPC64 architectures. Then, as you say, there's the Sparc and Sparc64, the Alpha, the s390 and s390x (hey! IBM are spending big bucks on promoting these!), the parisc (HPUX is carp, which means Linux is a viable option for HP boxes), the Motorola 680x0 series (lots of VME crates out there, and not everyone likes VxWorks), the Itanium (yeah, that does exist), MIPS and MIPS64, and the User-Mode Linux architecture. There are other architectures (eg: the VAX) and other OS layers which can sit on Linux (eg: FreeVMS), and the list is continually growing. Rather than pre-compiling everything for a system combination you don't know will ever be used, it really does make more sense to wait until someone says they want to use it.


        Ok, so we've plenty of architectures, and at least five compiler options that need to be considered, depending on the hardware. What other possibilities are there?


        Well, if you don't have a sound card, and you don't desire a network sound daemon, then you probably don't want sound support compiled into applications. Likewise, if you have a text-only system or a headless system, and don't want any X support, then you don't need GUI support in your apps. (For something like emacs, this is quite significant.) Linuxconf has support for remote administration. If you don't want external network support, you're not going to be needing that.


        On the flip-side, let's say you opt for a MOSIX configuration, or have an SMP system. If an application can support threads, you probably want to include that. You can't get any benefit out of the system, if you prevent it from sharing the work. Then, there's X itself. You only need the drivers for the hardware you're going to use. You also might want to enable extensions, where applicable. Dual-headed systems do exist, but they're not much use if the software isn't told about them.


        If you've opted for a secure system, you might well want applications compiled with SE-Linux support enabled. Setting up a development box, rather than a high-performance server? Then maybe libraries need to be compiled with debugging & profiling, and no optimization.


        Setting up an embedded box? Then compile with maximum stability in mind. Performance isn't so important, in these cases, but you'll notice any downtime. If it's a home-user box, hey, they can handle an application crashing once a week. No big deal. Tell that to a robot explorer on Mars, an oil flow control system in a pipeline, or an air traffic control system at peak time.


        Then, there's the ability to link to different libraries, depending on what's installed. That, in turn, depends on what people want installed, what the system needs to function, and what else is necessary to meet both of those two requirements.


        At this point, the number of options has totally exploded. There is simply no way on Earth you could handle all those possibilities, as pre-generated setups. Indeed, most distros have dropped support for many architectures, because they COULDN'T do it as pre-generated setups.


        I think compiling on-the-fly, on a server farm, would be the only realistic way to provide this level of support. "But who would WANT to provide that level of support?" Me. Because I'm not satisfied with second-best, especially when the best is easier and requires less effort.


        Why put out extra effort, in order to do less than you could? That sounds so....... stupid!

        • The reason we all have PCs is that they're commodity hardware -- the similarities far far outweigh the differences. Even for most niche markets -- handhelds, embedded systems, mainframes, whatever -- it's almost always worth it to build a distribution for the general case.

          Realistically speaking, general-purpose options hit all but one in a hundred installations. Actually, I'd be willing to bet that i386 optimization is perfectly good for that many people. Throw in i686/athlon optimization, and add a high-stability/high-performance split, and you're down to something like one in a thousand.

          The rest, including the cases you mention -- the Mars explorer team, and those who are just "not satisfied with second-best" -- can certainly compile their own apps without the overhead and potential extra problems -- not to mention expense -- of the server farm idea. The return on investment (in many senses) just isn't worth it.
        • OK, so they can't have all things... Why can't [INSERT YOUR FAVORITE DISTRO HERE] have at least an athlon image to go with the generic i586 image?

          No offense meant to the others, but x86 is probably the most popular architecture. Maybe:

          i386
          i586
          i686
          k7

          At least for the kernel! That way you hit the low end print servers, the P5-166s that are running as gateways, and the higher end machines.
    • This seems like a good idea, but how about a P2P distribution? You want to install on your machine, so you first run a little stub that checks which arch/optimization/etc you need; then it goes to the net and asks who has package xxx with yyy compiler flags. If no one has it, you build it (bad luck) and make it available to everyone else.

      :-)
      • I've thought about a P2P solution, and certainly it sounds like a good idea. The only problems I can see are that it needs to reach a "critical mass" before it offers any advantages, and that if it grows too big, the bandwidth gets consumed.


        Now, if you were to combine the schemes (central repository & P2P), you'd have something a lot more powerful than either solution, and both the above problems would be resolved. (You could ensure that the central repository -started- at the critical mass point, and because that would handle "common" queries, only the unusual stuff needs to be distributed P2P, which would eliminate all the real bandwidth killers.)

      • You do realize you just introduced the first fatal unix trojan, right?

        Please don't ever ever ever ever design something like this. For if you do, I shall smite thee.
        • You're absolutely correct, the security of this would be very difficult to maintain. The only solution I can think of is to centralize the builds and sign the packages that would be installed, to make sure they were built on the central server(s), but that would need a centralized build server, which is the problem with the first idea. :-/

          Well was just an idea that came on top of my mind. :-)

          Too bad that this would not work, because compilation is one thing that requires a lot of CPU and would benefit from a distributed service.
          • The only feasible way this would work is if the people in charge of the packages were to accept binary builds through a trust metric system (think Advogato). Authentication across a P2P network is another issue in and of itself, but a checksum algorithm against a trusted source would work, assuming your trusted source file never gets overwritten. However, the risk of something like this would be rather large, and I doubt it would see any decent level of acceptance from system admins.
  • This is just another linux distro, like all the others. It runs the linux kernel, GNU, xfree86, and all the other fun apps that all distros use. What it has over everyone else is that everything installed on there is gonna be brand spanking new.

    Who is this for?
    NOT EVERYONE!

    There have been so many threads of people saying this is not good for them. Well, you know what, then it isn't. This distro is for people who want to have everything up to date. It won't be the best distro in the world, since the combination of all the different apps you are installing has not been tested, but it leaves you with something that is set up the way YOU want it set up, not the way some developer over at (insert distro name here) decided to do it.

    Look for the good in the distro, don't just go hounding it.
  • Sorcerer: FGWTMTOTH (Score:3, Informative)

    by bshroyer ( 21524 ) <<bret> <at> <bretshroyer.org>> on Friday January 18, 2002 @10:27AM (#2861806)
    For Geeks With Too Much Time On Their Hands
    (not that this is a bad thing)

    Paraphrase of the FAQ:

    "This distro is new and different. It will take a lot of tinkering to get it running. Be prepared to blow a rainy weekend before you even see a decent window manager. You'll have to learn to cast spells. But when you finally succeed, rest assured that you'll have the very latest software, all compiled on your machine. Cool, huh?"

    Seriously -- reading through the FAQ, I got the impression that it was BRAGGING about the complex, time-intensive, processor-intensive, memory intensive nature of the installation and maintenance procedures.

    Diff'rent Strokes for Diff'rent Folks, I guess.
  • My complaint. (Score:4, Insightful)

    by ImaLamer ( 260199 ) <john.lamar@gma[ ]com ['il.' in gap]> on Friday January 18, 2002 @10:56AM (#2862000) Homepage Journal
    Everyone here seems to point out that the time it takes to compile the complete system is the problem. I mean, isn't this the price to pay for optimized software? Isn't this what [plenty of] people want?

    Sure, it's gonna take a while to compile X, KDE and the like. But this is the other half of the operating system; many people will be using it exclusively, right? Wouldn't you want that to be optimized? The GUI is, in my experience, the most demanding part of the system.

    But, I'm just a user. The whole thing sounds damn cool. I don't mind taking a day to compile *all* the software, just don't make me sit there in front of the machine the whole time hitting 'Y' and enter.

    The thing that intimidates me [still, yes] is the fact that I have to configure the kernel. I have never really got this to work for me. I think [complaint coming] there should be a choice of "Auto-configure" or "Configure Now".

    I'll understand if this isn't possible, but that seems to be the only thing in my way. I'm still going to download this and try it. The kernel config gets me though. I always seem to leave something out.

    Question: Are we using the same config utility that comes with the kernel tarballs? And do the configuration menus come up blank, or does it 'best-guess' and let you add and remove the things you need?
    • Re:My complaint. (Score:2, Insightful)

      by BACbKA ( 534028 )
      Answer to your question: yes; best-guess. And you can also preserve your machine's older config if you upgrade to a newer kernel (with make oldconfig). Red Hat kernel sources, e.g., usually come with the same config as the distribution kernel. So if you just want to recompile the kernel for a different CPU, you can use it and still have the generic "safe bet" answers for the other stuff. Automatic choices will not be as perfect - they sacrifice efficiency for a more generic solution. But if you are looking at Sorcerer, maybe you're just becoming brave enough to leave the generic pre-compiled kernel aside. Go look at http://www.digitalhermit.com/linux/kernel.html for a start, and look at the kernel config HOWTO as well. Consider #kernelnewbies (see http://www.kernelnewbies.org/) if you want advice on getting started with kernel hacking. It's not as scary as it might seem...
      • Ok, I guess that isn't something I've understood beforehand. I didn't know that Red Hat, Debian, etc. shipped pre-compiled kernels. But that makes a helluva lot of sense, considering that compiling one would kill install time.

        Don't get me wrong, I've configured and compiled the kernel before. I even get it to boot... all the way. It just seems that I screw up when it comes to network cards or the like.

        I am going to poke around with this distro. Seems perfect for my older K6-2 that mainly sits there waiting for me to log in.
  • Updating. (Score:3, Interesting)

    by saintlupus ( 227599 ) on Friday January 18, 2002 @11:00AM (#2862037)
    You know, one of the main reasons I like the *BSD operating systems so much is the port / package systems that make this sort of updating so simple.

    I've tried Debian, but I don't know if it was the weird hardware (Using the m68k build) or just my newbieness (more likely) that made me dislike it so much.

    This Sorcerer distro, on the other hand, sounds like all the ease of maintenance of the FreeBSD "make buildworld" setup with the greater driver base of Linux. Win/win. I might just have to check this out.

    --saint
  • by Anonymous Coward on Friday January 18, 2002 @11:04AM (#2862072)
    >Wouldn't it be more efficient to provide a couple of different binary packages for each package a'la mandrake (i586 and i486) ?

    No. I don't know what all the fuss about those optimizations is, but the real advantage IMHO is the configurability.

    You want your ftpd and MTA to have Kerberos or LDAP support? You want KDE to use ALSA only, instead of OSS? You do not use Netscape plugins and therefore do not need the Motif dependency?
    Here is your way to go: ./configure --with-...

    You want to provide a binary for every "architecture" with every possible combination (dependency)?

    No. Usually you eat what your distribution serves you and don't care that installing Postfix also requires you to install slapd, even if you don't use it at all. A fair trade of convenience vs. bloat.

    I do not know how those "source based" Linux distros (rocklinux, lrs-linux, gentoo, sorcerer) handle this, but for FreeBSD you have either your make.conf or your configure args in your port Makefile. Or even a curses-based selection menu. The latter one rocks.
    Sure, for a "base system" one usually doesn't need too many selections (maybe whether to include ACL support, glibc support for 2.2 kernels, or whether to use PAM), but for "applications" I consider this essential.

    Anyway, I will sooner or later give one of those four known-to-me source-based distros a chance and see how they stand up to mighty ole Slack :)

    It's about choice, not binaries.
  • So far most of the comments have been from people who either pooh-pooh the idea, think FreeBSD or some other *nix has already done it, don't want to compile all their own software, or have a similar idea of their own. But have any of you actually *tried* this distro? If so, speak up! What is your experience with it?

    Let's stop reviewing the review and the concept, and actually review the distro for god's sake.

    --SC

  • Comment removed based on user account deletion
  • New Trend... (Score:5, Informative)

    by XRayX ( 325543 ) <tobias.boeger@w[ ]de ['eb.' in gap]> on Friday January 18, 2002 @11:43AM (#2862346) Homepage Journal
    After the years of RPM-Based Distros, it seems as if those "self-building" distros are the new trend. We now have 3 of them:
    RockLinux [rocklinux.org]
    Gentoo Linux [gentoo.org]
    and Sorcerer Linux...
    From my experience and what I've heard, Gentoo is by far the stablest and easiest to install of them, and it recently got a really good review [newsforge.com] at Newsforge [newsforge.org].

    I don't really know if that is a good concept, because the time/benefit ratio of self-compiling every bit of software is quite low IMO. What is needed is a new distro that builds the kernel itself and installs all the other applications through RPM. That would maximize speed and usability. My friend and I are working on something like this right now ;).
  • Firstly, Religious Tolerance online [religioustolerance.org] does not recognise/list Linux distros, or the open source software movement, as ethical systems. Send e-mail to ocrt@religioustolerance.org to get this corrected. Seriously, I bet we can get them to include it as a religion. They include Scientology, after all.

    Secondly, it is important that we tip off "investigators" from the counter-cult movement about this new, occult Linux distribution (former Linux programmer, now saved, reveals Satan's plan for open source software!). Nothing drums up good PR like being an instrument of the great beast. Religious Tolerance keeps a list of these fruitcakes. [religioustolerance.org] These people have suffered fundamental damage to the credulity centers of their brains and will believe anything packaged as evidence of Satan's machinations.
  • Bullsh.. (cough) (Score:4, Insightful)

    by tijsvd ( 548670 ) on Friday January 18, 2002 @11:56AM (#2862433) Homepage
    Just think about this for minute - since the vast majority of Linux software comes with source code, why is it necessary to download binary files that somebody compiled on a particular hardware platform and included all sorts of options to run it on thousands of different hardware configurations? Would it not be more logical to compile everything on your own machine, ensuring that the code is optimised for exactly your hardware?

    The whole idea of a kernel is that it provides an abstraction layer to the hardware: the optimization based on all these "thousands of hardware configurations" takes place in the kernel.

    Let's take a look at the most important pieces of hardware that are in the computer:

    • The mainboard. It hosts some chipsets, perhaps sound chip, perhaps ethernet interface. Optimization takes place in the kernel and the binary code of any program wouldn't be any different when compiled for a different mainboard.
    • The video card. By selecting the correct xfree driver, you have the optimization. Standard interfaces to video card features are available through OpenGL etc. A 3d program can use the OpenGL interface and it might be implemented in the video card driver, or in software. The binary of the program wouldn't change, right?
    • Add-on cards. A program would be bad if the binary would depend on the kind of ethernet card you have, wouldn't it? There is exotic software for, for example, video capture cards, but they have often been written for a specific chip and don't work with other chips at all.
    • The CPU. Now here binaries may differ, because of pipelining and special instructions. However, I don't believe the overall speed of your average system would increase that much if all binaries had been CPU-optimized (for example, the site that hosts the article lacks bandwidth...). If you really want this optimization it would make more sense (as another author suggested) to create binaries for different CPUs. (There must be a reason no other OS distributor does this, right?)

    The conclusion is that it is really nonsense to compile _everything_ from source. Have the users compile their kernel based on their hardware. Make sure they have the correct xfree driver for their video card and a correct xfree config file. That's all the "hardware optimization" you're ever going to need.

    So why do people nowadays compile programs, then? That has nothing to do with hardware, but with the myriad of libraries Out There. Binaries will of course change if you compile them on libc5 or glibc2.1. But if you stick with one distribution, that is never a problem. The problem is there if you want programs that don't come with your distro. Again, there's really no reason to compile the programs that do come with the distro.

    But that's just my opinion. Of course these people should be praised, because they had a Good Idea [tm] and did something with it.

    • by Turiya ( 519262 )
      Your conclusion is wrong, since there are many things the compiler can optimize for you.

      There is, for example, memory alignment for variables, which differs depending on the hardware. There are the different execution costs of certain asm instructions, so the compiler can choose the best one for each arch.

      All these things have nothing to do with the kernel or libraries, and no binary distro can do these optimizations for you, since those binaries won't run on every arch (e.g. i686 binaries won't run on an i586).

      And believe it or not, this really makes a difference. Especially huge programs like X or KDE gain noticeably from these optimizations.

      The other advantage of compiling yourself is control: you can control any aspect of the build process. Take Mozilla as an example: you can choose whether you want the mail/news client, SVG or MathML, you can prefer certain libraries over others (each of which has its own pros and cons), and you can even choose which kind of garbage collection you want it to have, if any.

      And another upside is that you can do any step manually, which teaches you many things and gives you even more freedom.

      I never learned more about the inner workings of Linux than when I first installed it manually from sources (see http://www.linuxfromscratch.org; IIRC the Sorcerer dev team used this as a starting point).

    • The conclusion is that it is really nonsense to compile _everything_ from source.

      You're confusing abstraction with optimization. Abstraction makes sure that you do not have to recompile all your applications, and, in fact, not even the kernel.

      Optimization includes making use of the latest features and optimizing for a certain architecture (e.g. Pentium vs. AMD). This usually doesn't make sense if you're shipping precompiled binaries, and abstraction makes sure it's not required. But that doesn't mean optimization shouldn't happen.

      Free Software is about having the software configuration you want. Why not extend that to having it tuned for your hardware?

      However, I don't believe the overall speed of your average system would increase that much if all binaries had been cpu-optimized

      You don't think a Pentium 4 optimized or Athlon optimized executable is much more performant than a generic 486 executable? Besides, even if it's just a 10% increase in speed, why not? You get it for free (well, maybe the installation takes longer due to the compile time, but that's a one-time effort).

  • by scorcherer ( 325559 ) on Friday January 18, 2002 @12:19PM (#2862610) Homepage
    Thought it should be called Sourcerer.
  • by gotan ( 60103 ) on Friday January 18, 2002 @12:21PM (#2862624) Homepage
    When updating packages rpm-style (especially something like KDE) it's really annoying that some of them are so large, while the change from the last version (that might still lie around on your platter somewhere) is probably less than a few percent of that.

    It would heavily reduce bandwidth were it possible to grab just the diffs (and maybe an MD5 sum of the complete patched package), and not everyone has a T1 out there. With binaries that wouldn't make much sense (unless one applied a very specialized binary diff), but with source updates it would work well.
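The diff-based source update proposed above can be sketched with plain diff and patch. Everything here (file names, directory layout) is invented for the demonstration:

```shell
#!/bin/sh
# Sketch: upstream publishes only a small patch against the previous
# source release; the client applies it to its local source tree.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mkdir old new
printf 'int version = 1;\n' > old/version.c
printf 'int version = 2;\n' > new/version.c

# Upstream would publish only this small patch...
# (diff exits 1 when the trees differ, so silence that under set -e)
diff -ruN old new > update.patch || true

# ...and the client applies it to its copy of the old sources:
cp -r old local
patch -d local -p1 < update.patch

# local/ now matches the new release, for the cost of the patch alone.
```

A real scheme would also ship a checksum of the patched tree so the client can confirm the patch applied cleanly before rebuilding.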
  • ...as far as I can see, Debian can do this easily: apt-get source packagename .

    Then you can ./configure and make it yourself. If you only care about the optimizations (and not about the compile-options), you can even do apt-get source -b packagename and it will be built automatically.

    The required development packages can be installed with apt-get build-dep packagename by the way, so you don't have to worry about that either.
  • Step 5. Recompile all the applications from the original CD image.

    Uh, KDE can take a day to compile, and with GNOME there are hundreds of apps. That could take a week.

    Great, it's optimized, but how long does the install take, and how fast a machine do they recommend having?

    A kernel compile is not a problem on my Athlon, as that takes 4 minutes from make dep to make bzImage / make install, but on my old P133 it takes about an hour.

    I wonder if they have the newbie in mind here. Does your grandma really want to be compiling her own kernel? I think not. Most people don't want to know that much about their computer; I guess that is why so many people use Windows: it allows people to stay ignorant about their computer (until something goes wrong).

    It does however sound like something worth trying. I am wondering if this would work on my old P133 and how long it would take to do a full install.

  • Couldn't help but notice the "Redmond Linux" icon out of all those at the top of the page there.....

    http://www.redmondlinux.org/

    Hmmmmm. I'm not sure what to think. :)
  • Mirror online at: (Score:2, Informative)

    by Mall0 ( 252058 )
    well, wox.org is /.'d all to hell, so the impatient can simply:

    wget http://distro.ibiblio.org/pub/Linux/distributions/sgl/sorcerer.iso.bz2

    Happy downloading!
  • I've been saying for some time that I wanted a Linux distro that would compile itself and be optimized specifically for the system it was installed on. Everyone I talked to about it said I was crazy, that there would be no need for such a thing, or that it would be a waste of time to optimize it so.
  • sorcery metaphors (Score:3, Insightful)

    by Suppafly ( 179830 ) <slashdot@s[ ]afly.net ['upp' in gap]> on Friday January 18, 2002 @03:52PM (#2864022)
    While the sorcery metaphors seem a tad overdone, the whole idea seems like a good one. The package management (if one can rightly call compiling source "package management") appears to be easy to use from the general description of it all; it seems to have an apt-get kind of ease to it. It's a shame the main site for the distro has been /.'ed, but I can see where that would happen once all of us get wind of it and run to download the ISOs. Hopefully someone who has used it can better comment on the overall ease of use, and the package management especially.

  • I once had a Slashdot sig to the effect of "Slashdot is not news for geeks, it's a battleground for wannabe managers." Again I'm seeing the tendency for people to offer their opinions as if they were managing a project.

    What these people are doing is pretty cool. It's something that I've been hoping would come out for a while (actually it has, at rocklinux.org, and was mentioned on Slashdot as much as two years ago). Back in the days of Stampede Linux I found out first hand the benefits of a hand-optimized compilation. It easily ran 20% faster than anything else for desktop use.

    Now that the optimizations of yesteryear found in egcs and pgcc are done pretty well in gcc 3.0, I wonder why I still run packages for an i386 on my Athlon. Rock Linux is by far the most real-man Linux out there, and is the most stable and unbreakable. If Sorcerer is only a bit more manageable I'll have to try it out too.
  • by Angst Badger ( 8636 ) on Friday January 18, 2002 @05:52PM (#2864836)
    The APPRENTICE option allows you to execute a command even if the associated package is not installed on your system. Sorcery will simply download, compile and install the necessary package before executing it.

    Hmmm...

    [newbie@home newbie]$ startx

    "Damn, X sure does load slowly on this box!"

    (Seriously, it does sound like a cool idea, even if I'm not convinced it's practical, but I may try it.)
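For illustration only, here is how a bash-style shell could fake the APPRENTICE behaviour described above. command_not_found_handle is a real bash hook that fires when a command is missing; everything else here is hypothetical, and the handler just prints the "cast" command a real implementation would run instead of actually compiling anything:

```shell
#!/bin/sh
# Hypothetical sketch of an APPRENTICE-style handler. A real version
# would invoke Sorcerer's "cast" to download, compile and install the
# missing package, then re-run the command; this one only reports it.
command_not_found_handle() {
    cmd=$1
    echo "'$cmd' not installed; an APPRENTICE-style handler would now run: cast $cmd"
    return 127   # conventional "command not found" exit status
}

# Demonstration (calling the handler directly, without the shell hook):
command_not_found_handle startx || true
```

Which also shows why the joke above lands: the first `startx` on such a system would trigger a full X compile before anything appeared on screen.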
