Debian

Debian 3.0r4 Released

SeaFox writes "The Debian group has released an update to the 'Woody' distribution of the popular GNU/Linux OS. From the site: 'This is the fourth update of Debian GNU/Linux 3.0 (codename woody) which mainly adds security updates to the stable release, along with a few corrections to serious problems. Those who frequently update from security.debian.org won't have to update many packages and most updates from security.debian.org are included in this update.' But the question on everyone's mind is probably when the current Testing branch, featuring much more up-to-date packages, will be named the new stable release."
This discussion has been archived. No new comments can be posted.


  • testing?! (Score:5, Insightful)

    by didde ( 685567 ) * on Sunday January 02, 2005 @09:12AM (#11237856) Homepage

    But the question on everyone's mind is probably when the current Testing branch, featuring much more up-to-date packages, will be named the new stable release.

    Oh, come on! When will the submitter realize that stable is what most of us want to run on our servers and mission-critical hardware? I for one cannot afford to do an apt-get upgrade and break three, two or even _one_ package. Even worse would be putting a serious bug into the software on a production machine. With stable this chance is minimal, but of course not non-existent.

    One possible solution would be to divide Debian into a "server version" and one for the workstations that actually _want_ (or need) to run stuff from testing. Although this would mean double the work for the package maintainers (et al), I'm sure it would make Debian even more attractive as a desktop alternative. Today, I don't know a single n00b or even semi-n00b using it for her home PC or similar - it's all Windows, Xandros or possibly SuSE. On the other hand, basically all of my friends who proudly call themselves sysadmins are running Debian (stable) on their production boxes...

    Unless of course they need to run RH to get IBM to support WebSphere =)

    • Re:testing?! (Score:4, Interesting)

      by IonPanel ( 714617 ) on Sunday January 02, 2005 @09:15AM (#11237860) Homepage
      A Debian Server variant would indeed be good - with perhaps a pre-configured installer that sets up the most commonly used packages on a server.

      Of course, another open-source group could provide this alongside the Debian Project ;)
      • A Debian Server variant would indeed be good -

        Well, no need for that. The 3 main distros (stable, testing and unstable) simply represent the "level of paranoia"/package staleness choice one can make, i.e. stable is old but stable packages, testing is reasonably up-to-date packages with a few problems, and unstable is cutting edge and you're on your own with problems.

        What one may wish for is an additional level between stable (which is truly quite stale) and testing.

        with perhaps a pre-configured installer that

        • What you really want (and what everybody wants) is an easy intuitive point-and-click thingy that'll finally replace dselect.

          Synaptic?
          • Re:testing?! (Score:3, Insightful)

            by tacocat ( 527354 )

            Why must the solution always require an X Window GUI? You've now required a huge amount of resources to be deployed just to update/select packages for a DNS/print server.

            Aptitude/apt-get rocks the socks off anything I've seen and I would really hate to try and run some GUI over my internet SSH connection across the country just to execute my periodic 'apt-get update && apt-get dist-upgrade'

            • Re:testing?! (Score:2, Informative)

              by rpozz ( 249652 )
              While I use Gentoo, SuSE has come up with quite a nice way of dealing with the problem you describe. YaST2, while being a tad bloated, can either run in an X GUI or work under ncurses using its own toolkit, thus keeping the layout just about the same.
              You do realize that dselect and what you are proposing to do with apt-get are not the same thing, right?

              -Erwos
      • Re:testing?! (Score:4, Insightful)

        by imroy ( 755 ) <imroykun@gmail.com> on Sunday January 02, 2005 @10:43AM (#11238065) Homepage Journal
        ...with perhaps a pre-configured installer that sets up the most commonly used packages on a server.

        Ooh, bad idea. Multiple vendors (amongst them Microsoft and RedHat) have already demonstrated that it's a bad idea for an OS installer to silently install services/daemons. When an exploit comes around, someone *will* write a worm and say bye bye to your credibility. Because there'll be an awful lot of people who didn't even know that Apache/Sendmail/BIND/whatever was installed on their machines and didn't know to update. No siree, I like the current trend of disabling services/daemons on installation. Even better, Debian often sabotages config files to force the admin to spend at least a little time looking at a config file before firing up some daemon.

        • Even better, Debian often sabotages config files to force the admin to spend at least a little time looking at a config file before firing up some daemon.

          Actually I would not call it sabotage as I usually see a good default config file that will work for most people. What I am seeing more and more are files in /etc/default that have a simple variable that needs to be defined and set to 1 otherwise the /etc/init.d/{service} script will refuse to start the service. This is very cool because it forces the
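          A minimal sketch of the pattern, using a made-up service name (the exact variable name varies from package to package):

            # /etc/default/foodaemon  (hypothetical example)
            # set to 1 after you have reviewed the config in /etc/foodaemon/
            ENABLED=0

            # excerpt from a matching /etc/init.d/foodaemon
            . /etc/default/foodaemon
            if [ "$ENABLED" != "1" ]; then
                echo "foodaemon is disabled in /etc/default/foodaemon; not starting."
                exit 0
            fi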
          • Re:testing?! (Score:3, Informative)

            by imroy ( 755 )
            Yes, you're right. I guess I showed how long I've been using Debian. They used to do more subtle things to config files so that the daemon wouldn't start and you had to spend at least a little time looking in the main config file. Now they are putting a RUN_DAEMON="false" variable or something similar in the /etc/default/{service} config file that's read by the init script. Although a few still put an exit command early in the init script and require you to remove it or comment it out. This is a bad way to
        • by fforw ( 116415 )
          Even better, Debian often sabotages config files to force the admin to spend at least a little time looking at a config file before firing up some daemon.
          Stockholm syndrome?
    • One possible solution would be to divide Debian into a "server version" and one for the workstations that actually _want_ (or need) to run stuff from testing. Although this would mean double the work for the package maintainers (et al), I'm sure it would make Debian even more attractive as a desktop alternative.

      That would be the best way to go, but wouldn't that also restrict features and perhaps leave some security loopholes open in the long run?!
    • Re:testing?! (Score:1, Interesting)

      by Anonymous Coward
      Well, here's a semi-n00b who uses a sarge/win98 (for gaming) dual-boot on his home PC. Sure, Debian was a bit hard to get running, but now that I've gotten it up I must say it's the distro I've most enjoyed. Besides, I've learned a lot using it.
      Even though sarge is "testing", it's been really stable and is also my choice for desktop use.
    • To be honest, I've used testing (codename: sarge) on production machines without any problems at all. The versions of software in stable are useless on some machines. I mean, who can seriously use a MySQL 3.x for anything other than filling up space?
      Of course I am taking a small risk, but usually I just upgraded a smaller box first to see if anything was broken - didn't encounter one broken thing yet... it's unstable (codename: sid) that gives me the most if any problems. Of course I have never, and will nev
    • Re:testing?! (Score:5, Informative)

      by novakreo ( 598689 ) on Sunday January 02, 2005 @09:37AM (#11237907) Homepage

      One possible solution would be to divide Debian into a "server version" and one for the workstations that actually _want_ (or need) to run stuff from testing.

      Or you could, you know, actually run stable on your servers and testing on workstations. Debian will let you mix and match, it's called pinning [sbih.org], and if you're not willing to run testing or unstable, Debian Backports [backports.org] provides modern packages compiled for stable.
      The system you're describing already exists, you just need to know how to use it.
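      For anyone who hasn't tried it, a rough sketch of what that looks like (the mirror URLs are only examples):

        # /etc/apt/sources.list -- stable plus a testing line
        deb http://ftp.debian.org/debian stable main
        deb http://ftp.debian.org/debian testing main

        # /etc/apt/preferences -- prefer stable, allow testing on request
        Package: *
        Pin: release a=stable
        Pin-Priority: 700

        Package: *
        Pin: release a=testing
        Pin-Priority: 650

      Then "apt-get install -t testing somepackage" pulls just that package (and whatever it depends on) from testing, while everything else stays at stable. Backports works the same way, with a backports.org line in sources.list instead of the testing one.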

    • Re:testing?! (Score:5, Interesting)

      by tacocat ( 527354 ) <tallison1&twmi,rr,com> on Sunday January 02, 2005 @09:40AM (#11237911)
      One possible solution would be to divide Debian into a "server version" and one for the workstations that actually _want_ (or need) to run stuff from testing. Although this would mean double the work for the package maintainers (et al), I'm sure it would make Debian even more attractive as a desktop alternative. Today, I don't know a single n00b or even semi-n00b using it for her home PC or similar - it's all Windows, Xandros or possibly SuSE. On the other hand, basically all of my friends who proudly call themselves sysadmins are running Debian (stable) on their production boxes...

      Please don't...

      Debian already has four release levels: stable, testing, unstable, and the new experimental. Adding any more levels or options to the process will only slow down the release of stable. I really don't think you want to wait for the next release of Debian Forever 3D, do you?

      If you want a server version then stick to stable. If there's a package that you need that's newer then selectively import that from testing while keeping the rest of your system stable.

      It's a cute-sounding suggestion, but are you the one who is actually going to have to live with it, or are you just trying to sound intelligent? You forget you are dealing with a volunteer group. If you add a shitload of bureaucratic complexity to the process you will have to start paying them to put up with your stupid ideas.

      I've worked with someone for over a year on using Linux and they have settled on SuSE. They don't like it, but they just don't want to learn anything more about it. They have to settle for a lot of things that they can't do or can't do right.

      Adding more distribution levels to Debian will only make things more difficult to manage. Don't fuck with it unless you want to fix it yourself.

      When are you going to realize that there will always be two types of users on computers? Sheep and Geeks. Sheep like to download viruses and spyware and adware, and if they can't have butterflies for their mouse pointer they shit themselves. And they don't care about anything else. Let the sheep use Windows and be stupid and pathetic and annoying, and let the rest of us use Linux and have a clue and not have to deal with the sheep unless we need some money. Sheep pay a lot of money for stupid stuff. Don't fix it for them, or we might all be out of work.

      • Re:testing?! (Score:2, Insightful)

        by Anonymous Coward
        If you want a server version then stick to stable.

        Stick with OpenBSD. It's more secure than Debian, and substantially more up to date package/version-wise.
        • Re:testing?! (Score:4, Interesting)

          by Master Bait ( 115103 ) on Sunday January 02, 2005 @02:05PM (#11238873) Homepage Journal

          If you happen to buy a new computer, Debian 'stable' is too old to support the chipset, many devices and perhaps even the CPU (such as the Opteron or Apple's 64-bit PPC). Otherwise, Debian stable is fine for new servers -- but only if you buy them used on eBay!

          They should reorganize their release names from stable, testing, unstable and experimental to Grandpa, Greybeard, Production and Current.

      • Wow, you made sense until that last paragraph. If that's really the way you view Windows users, you're unbelievably conceited and elitist. Oh, and as someone who uses Windows but was using Linux probably years before you ever even heard of it, I have to say that your theory on sheep and geeks is full of shit.
        • No, I was being unbelievably sarcastic.

          I'm scared to death that in the name of "Ease of Operation" people will settle for the likes of SuSE at the sacrifice of Debian and Gentoo.

          I spent more than a year running only SuSE around here and found that it was very nice to use. Just as long as you didn't try to do anything that they didn't anticipate. They have a good concept in management of a workstation/server. But if your needs deviate from their path, it becomes increasingly difficult to execute. Many

    • Six month release cycle, new packages, desktop orientation.
    • Re:testing?! (Score:3, Interesting)

      by grumbel ( 592662 )
      What Debian IMHO needs to do is split their distribution into different parts and release those independently of each other (base, x11, gnome, kde, etc.).

      It's simply a completely hopeless undertaking to try to get all of the several thousand packages in Debian to be stable at the same point in time; it simply won't work. And while this undertaking is already almost impossible at the release time of a new Debian stable, it gets rather pointless once the Debian stable distro is a year or more old. At that poi
    • Re:testing?! (Score:5, Informative)

      by Erik Hensema ( 12898 ) on Sunday January 02, 2005 @10:20AM (#11237995) Homepage
      Oh, come on! When will the submitter realize that stable is what most of us want to run on our servers and mission-critical hardware?

      Yes, but you don't want to install the current Debian stable on new servers. It's just too old. Stable lacks the hardware support for modern servers (does stable ship with a kernel which supports dual Xeon machines with 2 GB of RAM? AMD Opterons? Modern chipsets? SCSI controllers?).

      Debian stable is good for old servers. Debian has no good offering for new servers. Nobody cares that Debian can be installed in 48 MB of RAM. 48 MB does not make a server. It makes an antique.

      Debian should realise that if they want to make a serious server distribution, people will want to run it on a server. A real one.

      • Re:testing?! (Score:4, Insightful)

        by Kent Recal ( 714863 ) on Sunday January 02, 2005 @12:58PM (#11238528)
        Give me a break here. For real Linux servers you'd better roll your own Linux (remember, a real server takes a real admin...) or at the very least compile the critical runtime stuff (usually the database, web server, app server) and of course the kernel from scratch.

        If you seriously intend to put a stock distro kernel on it, you have no business setting up a "real" server.
        • I'd fire you for doing that. Seriously. All the top distributions (including Debian) do extensive QA on the software they ship, especially the kernels. When compiling your own software, you throw away this QA and take responsibility for the software quality yourself, knowing that the quality is usually lower. And don't even get me started on security fixes. Do you want to keep track of all the custom-compiled software on all your servers and make high-quality security fixes under high pressure?

          Of course I can comp

            When compiling your own software, you throw away this QA and take responsibility for the software quality yourself, knowing that the quality is usually lower.

            Ridiculous. How is software that was handpicked and compiled for the specific task by a qualified person worse than a generic distro package where often you don't even know what compiler version and flags were used and which exotic patches might have been applied?

            If you prefer to not know what's running on your server, fine.
            The fun ends when you sell tha
              • Well, I only own a regular PC, and I can tell you Xine and MPlayer crap out on me all the time on any distro except Debian and the 4.x versions of FreeBSD.

                Now tell me, are most of the Linux distros really that well tested? SuSE in particular is very buggy.

                If I were your boss I would fire you for installing free software that has not been through QA testing.

                The Red Hat scare had to do with some patches for the STL library which many C++ Unix programs needed in order to be ported to Linux.

              In general Solaris and other unix adm

            When compiling your own software, you throw away this QA and take responsibility for the software quality yourself, knowing that the quality is usually lower.


            It is probably a good idea if you need 2.6 and you need it to be stable, but 2.4 is now quite mature and does not undergo very drastic changes. It would be pretty reasonable to compile your own 2.4 and expect it to be pretty stable.

        • Give me a break here. For real Linux servers you'd better roll your own Linux (remember, a real server takes a real admin...) or at the very least compile the critical runtime stuff (usually the database, web server, app server) and of course the kernel from scratch.

          Since none of the commercial middleware and relational database vendors support anything except the commercial "enterprise" distributions, running stock kernels and stock everything else, you're pretty much blowing hot air here. Unless you think tha

          • Since none of the commercial middleware and relational database vendors support anything except the commercial "enterprise" distributions, running stock kernels and stock everything else, you're pretty much blowing hot air here. Unless you think that the multi-cpu Oracle systems running on RHEL and SLES aren't "real" servers, which would be a pretty interesting position to take.

            These are not servers within your responsibility. You buy them as black boxes and expect Oracle support to keep them running for y
        • This is insightful? What advantage does rolling your own give you at all? "Of course the kernel from scratch" is touted, but is there any actual justification for this? Package management was invented for the explicit purpose of being better able to manage software upgrades, dependencies, and file locations. Can anyone give me one reason why this should be abandoned?
          • I'll try to explain further; this is why it is essential:

            1. You choose and know what will be running, including all libraries (starting from glibc), patches, and compiler and interpreter versions. No generic distro package will be optimized for your purpose, because no maintainer knows what set of services is critical to you and what configuration options are appropriate. You do not want to run public services with all kinds of unnecessary modules and options compiled in (many of which could beco
      • Re:testing?! (Score:5, Informative)

        by ArtDent ( 83554 ) on Sunday January 02, 2005 @02:01PM (#11238854)
        I run Debian Stable on a very modern server, with >2GB RAM, Fusion-MPT SCSI, gigabit ethernet and all that good stuff. It's just a matter of using a newer kernel than Woody's default.

        I want the distribution to be stable, but I don't mind keeping the kernel up to date myself. With make-kpkg, it's a snap to build Debian packages out of kernel.org tarballs and, on this machine, it just takes a few moments.

        (And yes, if this really was a mission critical server, and not just a department build machine, I'd build and test my kernels elsewhere before deploying them, but that's not the point.)
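        For anyone who hasn't used make-kpkg, the whole dance is roughly this (the kernel version and revision are just examples):

          # inside an unpacked kernel.org tree, e.g. /usr/src/linux-2.4.28
          make menuconfig
          make-kpkg clean
          fakeroot make-kpkg --revision=custom.1.0 kernel_image

          # then, as root, on this box or any other Debian box (on i386):
          dpkg -i ../kernel-image-2.4.28_custom.1.0_i386.deb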
        • We patch kernels so infrequently, I usually build them from source anyhow. For the most part, a kernel is a kernel is a kernel, and I have never encountered any situation where running my own kernel has messed up packages or dependencies.

          If I get to a point where there are things in testing that I need, I'll grab those packages from backports.

          Xix.
      • Re:testing?! (Score:3, Informative)

        by raynet ( 51803 )
        But with make-kpkg you can easily compile and install your own custom kernel with any hardware support and patches you might need. And you can install the created .deb on as many servers as you need to.

        And I think Debian stable comes with an SMP-enabled kernel, so it should work with dual Xeons with up to 4GB of RAM.
    • Oh, come on! When will the submitter realize that stable is what most of us want to run on our servers and mission-critical hardware?

      Maybe the submitter meant he can't wait for the testing release to become the new stable one, because most of the included packages are mature enough (Sarge seems to be lagging because they wanted to include gnome 2.8) and because security updates are done ASAP on stable.

      As for making a "server" distribution, I think it's not worth the effort, as answering the FAQ "which de
      • Sarge seems to be lagging because they wanted to include gnome 2.8

        Gnome 2.8 is now in testing, according to the last release update [debian.org]. The main delay is the security update infrastructure for testing.

        Security updates are done ASAP on unstable too, so it's also an option if you don't want to wait for security updates to migrate into testing.

    • Re:testing?! (Score:4, Interesting)

      by __aainau5532 ( 573672 ) on Sunday January 02, 2005 @10:36AM (#11238043)
      First of all, I liked Debian and ran it for years, but. Yes, but. It's become something like Qmail or djbdns. It became unmaintained; it became a nightmare. It has software that is over 30 months old, and most of that software isn't even supported anymore by upstream. For example, try to submit a PHP bug, or complain about Postfix, or get support for PostgreSQL. It isn't there anymore. I don't mind running behind with my software when it's still safe, but when upstream says "UPGRADE before you complain!!!" it's over for me. Currently I have machines with backports, and lots of them, but I'm not going to wait for Sarge. I've been running tests with FreeBSD 5.2.1/5.3 for a while now, and soon the first Debian machines will be something of the past.
      • Re:testing?! (Score:3, Informative)

        by Mr.Ned ( 79679 )
        " First of all, I liked debian and run it for years, but. Yes but. Its become something like Qmail or djbdns. It became unmaintained, it became a nightmare. It has software what is over 30 months old and most software isn't even supported anymore by upstream."

        Debian is most certainly not unmaintained in any sense of the word. Debian backports security fixes to the version in stable.
      • I've been running tests with FreeBSD 5.2.1/5.3 for a while now, and soon the first Debian machines will be something of the past.
        Interesting. I used to have FreeBSD on both my server and my two desktops, but I recently switched the desktops to Debian. For me the real issue was that certain ports were just old, or were not really working or available. The biggest ones that mattered were inkscape (an old version is all that's available), lilypond (I have several pages of notes on the problems involved in
    • I've been running this Debian sid (unstable) installation for 2.5 years now, since the time my old hdd gave out. It is stable most of the time, and as some people have said, the only difference between stable and unstable is that unstable isn't verified to be stable; most of the time you won't have problems with it. Of the things I use on this desktop, the only thing which was bugging me for a while was when they broke a library which left mplayer broken (it didn't play movies, just showed a grey frame
    • "basically all of my friends who proudly call them selves sysadmins are running Debian (stable) on their production boxes"

      Should that be interpreted as you suggesting you are a Debian missionary or something? I've observed that Debian seems to have a higher proportion of users who advocate it as the One True Linux compared to the other distributions. Only one of my many friends who are sysadmins uses Debian, and only for machines that have been around for a while. The new stuff gets something like Gento

  • by Anonymous Coward on Sunday January 02, 2005 @09:28AM (#11237887)
    I've always defended Debian Stable's stale package versions for the sake of stability, but recently a serious issue has arisen. The recent PHP security flaw has made this issue apparent. The version packaged for Woody is 4.1.x. The PHP developers no longer pay any attention to the 4.1 branch, and their recent release for the newer 4.x branch, which fixed the security issues, also had other fixes included, making it difficult to backport them to the 4.1 branch. Last time I checked, no one on the Debian side had stepped up to fix the issue in 4.1.

    Something really needs to happen here (and installing 3rd party backported packages is not a clean solution). Perhaps a policy that packages that are no longer supported upstream will be upgraded in stable.
    • The recent PHP security flaw has made this issue apparent. The version packaged for Woody is 4.1.x. The PHP developers no longer pay any attention to the 4.1 branch, and their recent release for the newer 4.x branch, which fixed the security issues, also had other fixes included, making it difficult to backport them to the 4.1 branch. Last time I checked, no one on the Debian side had stepped up to fix the issue in 4.1.

      As someone pointed out in response to another post, the same problem happens with Cyrus
    • Could you have used Debian's pinning to selectively upgrade PHP to the testing branch?

      You've got a point about the slow fix for the flaw, but I don't believe the solution is going to come from adding additional levels of complexity; rather, from focusing on how to get the existing process to move more smoothly.
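      In principle something like this should do it, given both stable and testing lines in sources.list (package name from memory):

        # /etc/apt/apt.conf -- keep stable as the default release
        APT::Default-Release "stable";

        # pull just PHP (and its dependencies) from testing:
        apt-get update && apt-get install -t testing php4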

    • by stevey ( 64018 ) on Sunday January 02, 2005 @10:27AM (#11238005) Homepage

      The PHP issue was complex because initially there were a lot of issues reported, and IDs assigned, which were later retracted.

      All this was muddled by the PHPBB2 worm, which the PHPBB people claimed for a long time was exploiting a flaw in PHP itself, not a hole in their software.

      Few people seemed to care to look into the situation carefully; had they done so, they'd have realised that woody wasn't vulnerable to several of the issues, e.g. these two [debian.org].

    • I found that using sort() on an array with only one value doesn't reindex it, but sort() on an array with multiple values does.

      The Debian PHP maintainer's argument was that some people might be relying on that bug. I can see his point, but it's such broken behaviour that I still feel it should be fixed. It makes doing a for loop through an array that has been sorted unreliable, so that you have to use foreach.
      • You should always use foreach when looping over an array in php.

        It makes me faint to think of you doing otherwise.

        Sam
        • I have been converting to foreach, but is there any reason not to do it the other way?

          The only reason I did it the other way is that's what I originally learned bringing it over from ASP years and years ago.
            • If you use for($i=0;$i<count($array_of_objects);$i++) you are depending on the keys being 0..n-1, and when you need a reference to each element in PHP4 you end up with the clumsy foreach idiom:
              foreach($array_of_objects as $key=>$dummy) {
                  $object=&$array_of_objects[$key]; ...
              }

            Because we have been permitted to see the growth and maturing process of PHP, we see lots of weirdisms, some of which were not ironed out till PHP5. This also means that in PHP4 (and even more so in PHP3) there are a few clumsy idioms, like the one I gave above, that need to be used but which really ought to have a more compact representation.

            Some others are (remember that PHP arrays are ordered):

            1) get the first item out of an array as
  • I personally run a Debian install from a Knoppix 3.6 HD install at home on a couple of boxes. It defaults to testing, and is quite happy to let me upgrade packages from "unstable" as well. I think there's something to be said for giving the user a few different branches to choose from, and letting them decide the level of risk they're comfortable with.

    Some packages, such as MPlayer, I know are tested enough by the development team that I'll take the newest version as soon as it comes out. For others, I'd prefer to know someone else has taken some pain with them first :-)

    Just my .02 worth

    ---

    For more of my ramblings, look here [blogspot.com]

  • by Anonymous Coward on Sunday January 02, 2005 @09:48AM (#11237930)
    Seriously, ever try installing Woody on a new machine with a new hardware RAID controller? You can't; you need a custom hacked install CD. I admin a bunch of servers and my boss likes Debian; however, I'm sick of having to bend over backwards just to install Debian on our new rack boxes, much less try to use up-to-date packages. I'm going to try to sway him towards FreeBSD. Debian was a great thing back when compiling packages took hours and hours, but as fast as machines are these days, waiting several years between stable releases is not viable. On top of that, with the time spent on debian-devel discussing (and flaming) trivial things like package ratings (someone posted an ITP for some R-rated thing), it's all just a waste of time.
  • by Knights who say 'INT ( 708612 ) on Sunday January 02, 2005 @10:09AM (#11237970) Journal
    A: "Debian is all old!"
    B: "Yes, but it's stable and it rulez in professional environments where you can't crash"
    C: "Um, but Red Hat has pro support, if you're a pro"
    B: "You can buy support from vendors"
    D: "Don't people realize stable means stable, and testing means testing and it's wonderful that there are so many options?"
    E: "My Gentoo system rox!"
    A,C,D: link to sites like funroll-loops.org
    F: Hypes up debian-based Knoppix.
    G: Hypes up debian-based Ubuntu.
    A: "Debian testing is still old, I need new"
    B: "You could try Gentoo, you unfaithful kid."
    yadda yadda yadda.
    • by tacocat ( 527354 ) <tallison1&twmi,rr,com> on Sunday January 02, 2005 @11:50AM (#11238231)

      They each have their own place.

      RedHat (SuSE)
      A good distribution for someone who is looking for products which are supported by contractors and vendors. A widely popular distribution which targets the Enterprise computing industry, with marketing points of vendor support, third-party package availability, and simplified GUIs with a design towards a single look and feel for all concerned.
      Gentoo
      Very actively developed, and based on some good ideas. Its newness prevents it from really approaching serious consideration for many users and most Enterprise applications. Exceptions do exist, but they are the minority. Very high potential for success once some concessions are made towards making the system more stable, easier to manage, and less likely to explode.
      Debian
      One of the oldest distributions and also surprisingly popular with software developers. Definitely one of the top five in the industry and holding strong. While it does not cater to the Enterprise crowd through market-speak, it could perform as such given the chance. It also lacks the one-size-fits-all approach that SuSE (and to some degree RedHat) have taken, which can lead to confusion at the desktop when users switch between KDE, Gnome, and WindowMaker (the top 3). It's also known for its focus on being stable over current.

      While there is a lot of pressure on Debian to move off the focus on stable and towards being more current, this needs to be addressed not by changing the process to give greater options to the user community, but by looking at how the existing (and proven over the years) process might be further improved upon. Much has been done already through automation of the defined process steps.

      • You make a good point.

        Current is often more important than stable where "stable" is stable beyond the practical life of the hardware and "stable" won't install on new machines.

        Fossils are stable too, but not much good as meat.

        As is pointed out, "stable" is just a label though, and although calling something less stable "stable" doesn't make it so, you can selectively pick packages from "testing" and do your own security fixes.

        I think security fixes for testing, and easier pinning control in dselect, would
  • Debian Unstable (Score:4, Interesting)

    by SiChemist ( 575005 ) on Sunday January 02, 2005 @10:10AM (#11237975) Homepage
    I've been running Debian Unstable on my home machine for a few months and I have to say that it's every bit as stable as the Fedora install it replaced on the same hardware. It's my main desktop at home and gets quite a workout.

    The Debian "unstable" branch is as stable (at least for me) as any Linux distribution that I have used. Fast, too.
    • Re:Debian Unstable (Score:5, Informative)

      by mikeage ( 119105 ) <{slashdot} {at} {mikeage.net}> on Sunday January 02, 2005 @10:34AM (#11238036) Homepage
      This is a common misconception about stable and unstable. Unstable does NOT mean that it's fragile, going to break, or unsafe for use. Instead, it means that it has not been verified as stable.

      The guidelines for unstable/testing/stable are basically as follows:
      All new packages are in unstable
      After about 2 weeks, they are moved to testing, if there are no major bugs
      At release time, they go into stable.

      Thus, if you'd download the latest version from SourceForge, or any kind of "nightly build", you may as well use unstable. If you only use things that have been tested first, but like recent software -- use testing. If you need the best-tested software available (without, of course, paying for testing or doing it yourself!), go with stable.
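      In sources.list terms the choice looks something like this (the mirror URL is only an example):

        # track stable, plus the official security updates:
        deb http://ftp.debian.org/debian stable main
        deb http://security.debian.org/ stable/updates main

        # or track testing or unstable instead -- note there is no
        # security.debian.org line for these; fixes arrive as new uploads:
        deb http://ftp.debian.org/debian testing main
        #deb http://ftp.debian.org/debian unstable main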
      • If you only use things that have been tested first, but like recent software -- use testing.

        No no no. Testing doesn't get security updates whatsoever. If for whatever reason a security-update package is bugged, it will not propagate to testing. This will make the two-week delay even longer, possibly putting the security update off for a very long time. It's stable or unstable; testing is just there to become the new stable in a few years.

      • True, but since this misconception is so common, perhaps the "unstable" label is not appropriate (i.e. who the hell came up with the "unstable" name in the first place? Everyone assumes it means fragile!)

        So perhaps choose another name for this branch: incoming? bleedingedge? cuttingedge?
    • Re:Debian Unstable (Score:3, Insightful)

      by cortana ( 588495 )
      It doesn't mean unstable as in crashing; it means unstable as in volatile, changing. Every night you can apt-get upgrade to a new host of potential problems. Stable is called such because the only changes that are ever made are backports of security fixes. Thus, stable is suitable for servers or large workstation deployments, etc., while testing/unstable are OK to use for random hacking on a desktop machine at home.
    • Um, I don't see why one distribution would be any 'faster' than another, for the most part; they all run essentially the same code, and per-processor optimizations don't make any real-world difference (see Gentoo). The only real difference might be in boot-up time, because Debian tends to be pretty minimalistic when it comes to the 'base' distribution required for installation, but this is quite tunable in RedHat, SuSE/Novell, Slackware, etc.

      I use Debian more because it's designed, or has the appearance
    • Re:Debian Unstable (Score:4, Informative)

      by Spazmania ( 174582 ) on Sunday January 02, 2005 @11:56AM (#11238238) Homepage
      I've been running Debian Unstable [and] it's every bit as stable as the Fedora install it replaced

      I've been running Debian stable systems since '97 or so. I did some recent short-term work where I had to build and support some Red Hat Enterprise 3 systems and some Fedora Core 2 systems.

      Talk about "fun" problems. I got all manner of grief from Red Hat's Linux kernel. I had a particularly fun one where every couple of weeks the cached copy of one of the filenames would have a corrupted last character. So I compiled and installed a new kernel from the base linux source. I had also set the / partition to "rw,noatime" instead of "defaults" in /etc/fstab. Oops! mkinitrd (not used in Debian) turned this into a "mount -r -o rw,noatime /" in its script, which crapped out fsck on boot. The server was 50 miles away, and trying to talk someone through fixing it was even more fun: it seems I couldn't get it to continue booting after a failed fsck the way Debian will. No, exiting that shell generated a nice reboot and repeat.

      And don't get me started about "up2date", Red Hat's version of apt-get. The damn thing gets stuck in infinite loops consuming 100% of the CPU until killed hours later. And no, the most recent updates haven't fixed it. Nor did following the instructions for regenerating the .db files.

      My point is: I don't want to run anything as unstable as Fedora or Red Hat. That's why I chose Debian in the first place. So why would I want to run Debian Unstable?

      I do want to see SOME forward motion though. It's long past time for those few package maintainers that are blocking testing's release to stable to either buckle down and get it done or be replaced.

      Maybe it would help if they halted updates to those maintainers' packages in unstable and experimental until testing was releasable.
    • "Every bit as stable as fedora", "Every bit as unsinkable as the titanic"

      Fedora Core 3 was one of the buggiest distros I've used. If Debian Unstable is still like that (which it was when I used it a year or so ago), I wouldn't use it for anything beyond a test system.

      On Fedora: gnome-volume-manager died all the time on unmounting memory cards, syncing to my Palm wouldn't work (kernel patch problem), xemacs wouldn't maximise, up2date would say "Updates available" in the notification area and then refuse to
  • Testing, Sid, or... (Score:5, Interesting)

    by Anonymous Coward on Sunday January 02, 2005 @01:18PM (#11238660)
    Quite a few people are commenting about using testing or Sid instead of stable, for a desktop. And other comments include using testing or backports if you don't like stable for a server.

    The problem is that even though sid is fairly stable compared to other popular Linux distros (though things do break occasionally), others in this same story have said, and rightly so, that they would never use sid for a server. The whole purpose of stable these days is running a server. I'm sure there are some users out there who use stable for purposes other than a server (Bonzai was good enough for me on low-resource hardware; when I installed it, it was based on stable, don't know about now). But most users installing stable on a new server, with new hardware, have rightly pointed out that many pieces of the new hardware either don't work, or have to be heavily hacked to get working.

    If stable were newer, it might be considered more for company installs, as long as the Oracle or WebSphere (or whatever other) certification doesn't require Red Hat or SuSE. And I'm sure that even companies that run Red Hat or SuSE for some applications that need them may also run Debian Stable for purposes where they can just set it and forget it.

    I've tried stable on a newer computer. Besides the difficulty with some hardware, I found X with XFce difficult to use. Even though it is a server install, I still find it easier and more productive to install and use KDE GUI apps for administration. Sure, I use the server for development also. It isn't my main development box. But for tweaking some HTML here and there, dragging and dropping files here and there quickly, and for some other purposes, I simply prefer a GUI. I would've used Firefox (it wasn't out yet) or Mozilla with another app for file browsing, but I like Konqueror for web and file browsing (and fish/ssh) and a few other utilities it is good at. And though KDE is really bloated and I'd like to free up some space, every time I try uninstalling something KDE-related (XMMS, or other KDE utilities or apps), it wants to uninstall most or all of KDE or important libraries along with it, and KDE or Synaptic won't allow that. Synaptic is another reason for my running X. As is the fact that I also wanted to try out Quanta Plus.

    The release I'm using on the server is testing, as some other posters have suggested. But the problem with testing is that it doesn't get the attention of the security team. I believe this changed a month or two ago because testing is close to going stable, but I'm not aware of a security repository for testing. I'm sure I would have seen an announcement about it here on /., perhaps in one of the posts, or elsewhere (DistroWatch maybe), or on one of the mailing lists. But I haven't seen anything.

    If the testing distro did receive the attention of the security team, and there were security repositories, then that would make testing far more palatable for many users as a server distro. With careful updates/upgrades, it would be a good solid release for a server, with much more up to date applications.

    My testing distro was once Mepis. But once installed, I uninstalled some unnecessary apps, fixed my sources list, and slowly but surely the install is becoming 100% testing. It currently has KDE 3.2.3, instead of the KDE 3.3.x version. I haven't taken a look at KDE 3.3 yet, nor do I plan to install it, as that would entail switching to unstable for a few repositories, and pinning, two things I don't want to do. But KDE 3.2.3 is working well for me, and as I stated, it is a server install, so the latest and greatest isn't necessary.

    I had planned on waiting (when Bonzai didn't work out for me) for testing to become stable. Good thing I didn't, because I never would have got anything done. Since I got tired of waiting though, I installed testing, and now hope KDE 3.3

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...