Debian

Ian Murdock Answers

Here are the answers to the questions we asked Ian Murdock, original Debian instigator and current President/CEO of Progeny Linux Systems.

1) Distributions
(Score:5, Interesting)
by Chalst

What's your second-favourite Linux distribution?

Ian:

I got started with Linux in 1993 using a distribution called SLS. I started Debian later that year, and have been using Debian ever since. So, I guess that makes SLS my second-favorite distribution.

2) Debian stuff
(Score:5, Interesting)
by Uruk

What do you think about the current political problems with KDE in Debian, the possible removal of non-free, and any other 'political' issues you care to comment on?

How has Debian converged or diverged from what you originally wanted it to be?

If you were Wichert, which direction would you take Debian in now?

Ian:

I tend to be more interested in technical rather than political issues, so to be perfectly honest, I don't have too many opinions to share with you. One thing I can say about the non-free removal debate is that many are arguing that changing the social contract would be akin to changing the founding principles of Debian, which isn't entirely true. The social contract came along well into Debian's lifetime. It captures a set of principles that evolved over time, and is really a snapshot of that set of principles taken at the time it was written. Perhaps it's time to evolve further. Perhaps it isn't. I'll leave that question to the Debian folks.

In terms of convergence or divergence, Debian has become so much more than I ever dreamed it would become, so I'm nothing but thrilled with how well it's all come together. Debian is one of the best examples of just how well the open development model can work. I'm immensely proud of all that Debian has accomplished and all that it continues to accomplish, and I'm proud to have been a part of it.

Where would I like to see Debian go from here? I'd really like to see Debian's appeal broadened. For a long time, Debian was the best Linux distribution you'd never heard of, and that's started to change this past year. Debian is a really great system, the best out there in a lot of ways, but a few small things sometimes get in the way. Installation needs to be easier, many things need to be better integrated, releases need to happen more frequently, interfaces need to be designed with a broader range of users in mind. It looks like a lot of this is starting to happen, and I'm very happy about that. And, by the way, I think Wichert is doing an outstanding job.

3) Preview?
(Score:5, Interesting)
by pb

Could you tell us about this "Linux NOW" project you guys are working on now?

Will the filesystem be based on Coda, or are you writing something completely new?

How does the distributed architecture compare with what is currently available?

Will it offer distributed computing, or just centralized administration?

It's great to hear that this will be released back to the community; I'm sure this will be released long before Microsoft makes any real headway on their "Millennium" project. :)

Ian:

Linux NOW makes a network of workstations look like a single integrated system. The basic problem that we're addressing is how to integrate the network, how to make networks easier to manage and use. The basic observation is that many problems become very difficult very quickly as computing environments grow from a single machine to dozens of machines to hundreds to thousands. For example, people have a fairly firm grasp on how to manage a single machine, how to share resources on a single machine, how to provide a good environment for users on a centralized machine. When it comes to dozens, or hundreds, or thousands of machines, however, these issues become much more difficult, and even after twenty years of using networks, people still don't have good ways of approaching them.

Linux NOW makes a network look like a single system to simplify the task of managing the network, sharing resources on the network, making the network secure, providing a consistent environment to users. System administration and security management are much simpler because there is only one system to manage rather than many independent systems. Sharing resources like files and hardware devices is much easier because there is just one set of resources rather than many independent sets distributed around the network in various places. The user's environment is consistent across machines because the system is the network, the network is the system, rather than the network being just a physical medium for connecting systems together. In Linux NOW, the network is more than a mere communications vehicle, a way for systems to talk, it's an integral part of the system, it *is* the system. Linux NOW is about building a good abstraction, about simplifying, about reducing big problems to smaller, more approachable problems.

We are writing a new file system. Many of the features that we need have been implemented in one file system or another, but there is no one file system that does all we need in one package. The file system that we're writing is largely influenced by the Sprite file system (http://HTTP.CS.Berkeley.EDU/Research/Projects/sprite/sprite.html), but we're integrating in various bits and pieces from other file systems where that makes sense. For example, Sprite was written ten years ago, and these days, networks are no longer static things, they contain laptops and mobile devices that come and go, and those mobile devices should be equal members of the network of workstations. So, we're looking very closely at projects like Coda and InterMezzo that provide support for mobile computing and disconnected operation, and borrowing ideas and code from those where that makes sense. We are also looking at cluster file systems, like GFS, and other network operating systems, particularly Plan 9.

In terms of how Linux NOW compares to what is already available, the closest cousin to a NOW is a cluster, like a Beowulf. Both run above a collection of computers, and turn that collection of computers into a larger thing. The main difference is the approach, and the end goal. Most clusters are dedicated collections, tightly-coupled collections, and are specifically designed to do a very specialized task. For example, the task a Beowulf is designed to do is high-performance computing, number crunching and computationally-intensive things. In contrast, Linux NOW is a general-purpose system and infrastructure for large networks. Linux NOW's goal is to simplify system administration, resource sharing, security management, user environments, and so on, general-purpose tasks rather than specialized tasks. And Linux NOW is designed to drive the workstations that sit on people's desks, whereas clusters are usually dedicated things that sit in the machine room. Furthermore, a cluster is usually a tightly-coupled group of machines, whereas a NOW can include laptops, home offices, and those sorts of things. In short, the underlying foundation is very similar, but what we do with that foundation is very different.

In terms of whether we will support distributed computing, we do plan to provide limited support for things like process migration to make resource sharing easier. That being said, Linux NOW is fundamentally a general-purpose system, so we're only interested in such features to the extent that they solve general-purpose, end-user kinds of problems. Process migration can be used to do distributed compiles, for example, or to move running processes off of a workstation that is being rebooted. So, if you're interested in using a collection of machines to do a specialized thing like distributed computation, or load balancing, or failover, then a cluster is probably a better fit. Of course, Linux NOW has its place in clusters as well. After all, clusters have to be managed, and providing shared storage across the cluster is very important, and Linux NOW can provide that.

4) Distro wars
(Score:5, Interesting)
by BgJonson79

What do you think is the best way to put out the distro flame wars and welcome more people into the world of Linux?

Ian:

We're never going to put out the flame wars. The best we can do is hope that people will learn to devote their energies to more productive things, like making the software better, and not let the flame wars get in the way of making progress. Flame wars are an unfortunate byproduct of the passion that people put into free software. When people are willing to spend hours and hours arguing about things, sometimes arcane things, that means they care very much about them. Can you imagine people arguing endlessly about the merits of a particular toaster or microwave oven? People in this community care very much about their software, and that's a big part of the reason why Linux, and free software in general, have come so far in such a short amount of time. People are willing to pour everything they've got into this. Given that kind of passion, it's inevitable that we're going to have flame wars.

5) Hurd/Linux
(Score:5, Interesting)
by Tiro

Why has Debian tied its long-term future to the Hurd's so long before the Hurd is ready for prime time? We all know about the hopes and dreams the GNU project has for its kernel, but why is Debian going along for the ride when the future is so hazy?

Ian:

I don't think Debian has tied its long-term future to the Hurd at all. Debian is a volunteer project, so it's not like Debian is taking away resources from other projects to work on the Hurd, like a company might have to do. Volunteers tend to work on what interests them, and the Hurd interests many Debian volunteers, so that's where they're going to work. Personally, I'm glad to see that Debian is providing the kind of support that is moving the Hurd's development forward. The Hurd has a very interesting design and incorporates some very interesting ideas, and I hope that something eventually comes of it.

6) Debian development
(Score:5, Interesting)
by fremen

Debian has often been accused of having a very slow development cycle. The "stable" distribution is still using two to three year old technology, while frozen is getting more and more out of date each day. Meanwhile, companies like Mandrake are releasing much more bleeding edge distributions. These distributions have more bugs in them, but are also ahead of the game in terms of performance enhancements, newer software, and fixes for older bugs that still plague the older software in Debian. How do you respond to companies like this, and what do you see as Debian's place among these companies?

Ian:

I agree that the slow release cycle is a problem. The Debian folks recognize it as a problem too and are taking steps to address it. Release management is very hard, especially when you're dealing with hundreds and hundreds of people, many of whom have never met and most of whom work on the thing purely as a hobby. It's far easier when you have a company and people are all in the same place and getting paid. So, this is a common problem among free software projects, and Debian is having to deal with it on a scale larger than most projects have had to deal with it. And they're getting there.

7) What would you add to *nix?
(Score:5, Interesting)
by miahrogers

If you could take two features from two other operating systems and add them to *nix, what would they be?

Ian:

The first feature I would add to Unix is a good distributed file system. Unix has been lacking in this department for a long time. This is really unfortunate, because the file system is such a central abstraction in Unix, arguably *the* central abstraction in Unix. In Unix, if you can get the file system right, solutions to a remarkable number of very difficult problems just fall out, so the lack of a good distributed file system has really been the central thing that has made networks of workstations so hard to manage and use.

The most important thing that a file system does is provide a name space, a high-level view of data storage. Yet this is exactly where most network file systems for Unix fall short. Network file systems for Unix tend to be designed to share private name spaces, rather than to build common, network-oriented, network-wide name spaces. Look at the current state of affairs in Unix. Each machine on the network has its own disk and its own private name space built above it. Unix gives us NFS and AFS and other file systems to share name spaces, but the end result is that all these machines still have their own disks and their own name spaces built above them. Resources are scattered all over the network, and you end up with this crazy quilt of name spaces stitched together in haphazard ways. Some of the name spaces are shared, some aren't, and some parts of the private name spaces need to be shared but can't be shared easily. So, you end up with all sorts of problems, like how do you keep configuration consistent, how do you provide a consistent environment to users, how do you keep software up to date, and things get very complicated in a hurry.

In terms of what other operating systems have done with file systems, Sprite got the name space issue right. Sprite provided a single system image across a cluster of machines, including a single file system image; so, although there may be many computers and thus many disks in the network, there is one file system shared by all of them. Unix needs a file system that builds a network-wide name space, and provides high performance, high availability, good security, support for mobile computing, and other things too.

The second feature I would add to Unix is the per-process name spaces of Plan 9. That is just an incredibly good idea. Although they are different in many ways, Plan 9 is like Sprite in that it builds a single system image across a network of machines, and there is one file system providing access to a global set of resources, just as there is in Sprite. The difference is that, in Plan 9, machines, users, and even processes can build their own local view of this global name space, rather than sharing one common view. This is a very powerful mechanism because you don't always want to see the same name space. For example, how do you deal with heterogeneity in a network of workstations? How do you deal with different classes of machines or users with varying access rights to the network's resources? Plan 9's per-process name spaces address these kinds of issues in a very elegant way.
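To make the idea concrete, here is a small, hypothetical Python sketch (not Plan 9's actual implementation; names are invented for illustration) that models per-process name spaces as ordered union directories: each process keeps its own list of backing directories per mount point, so two processes can resolve the same path differently.

```python
# Toy model of Plan 9-style per-process name spaces (illustrative only).
# Each process carries its own table mapping a mount point to an ordered
# list of backing directories; binding "after" appends to the union,
# roughly the way Plan 9's `bind -a` does.

class Namespace:
    def __init__(self):
        self.binds = {}  # mount point -> ordered list of backing dirs

    def bind(self, backing, point, after=False):
        dirs = self.binds.setdefault(point, [])
        if after:
            dirs.append(backing)     # search this directory last
        else:
            dirs.insert(0, backing)  # search this directory first

    def resolve(self, point, name, files):
        # `files` maps each backing dir to the set of names it contains;
        # the first backing dir containing `name` wins, as in a union mount.
        for d in self.binds.get(point, []):
            if name in files.get(d, set()):
                return d + "/" + name
        return None

# Two processes build different views of the same path, /bin:
files = {"/386/bin": {"ls", "rc"}, "/usr/glenda/bin": {"mytool"}}

p1, p2 = Namespace(), Namespace()
p1.bind("/386/bin", "/bin")
p2.bind("/386/bin", "/bin")
p2.bind("/usr/glenda/bin", "/bin", after=True)  # union in personal tools

print(p1.resolve("/bin", "mytool", files))  # None: p1's view lacks it
print(p2.resolve("/bin", "mytool", files))  # /usr/glenda/bin/mytool
```

The point of the sketch is only the mechanism: the name space is per-process state, not a global table, so heterogeneity (different binaries, different access rights) falls out of giving each process its own view.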

8) Mobile Linux and Other Debian-based distros
(Score:5, Interesting)
by zeevon

Despite promises from Lineo and Blue Cat to be the embedded Linux specialists, Transmeta is using Debian as a base for its Mobile Linux. In addition, Corel uses Debian for its own distro.

Do you see Debian becoming a base upon which other distributions are built instead of "just another Linux distribution"? Given the number of ports Debian has expanded to (x86, 68k, Sparc, Alpha, ARM, i-opener, etc.), do you see it becoming the uber-distro for embedded (and unorthodox) systems?

Ian:

Sure. Debian is a great foundation for building systems, embedded or otherwise. It's the foundation for Linux NOW. It allows people to concentrate on doing what they do best, to concentrate on building value, rather than on reinventing the wheel.

The nice thing about Debian in this respect is that it's modular. The package concept lends itself very well to modularity. That was the whole reason behind basing Debian on packages. I wanted others to be able to contribute to Debian, to participate in the development process, and breaking the system into modular packages seemed the best way to enable that.

Other distributions have almost universally adopted the package concept by now too, but most of them still tend to be arranged as complete, take-it-or-leave-it systems. Debian is more of a collection of packages that can form a complete system, custom-tailored just the way you want it. So, because of the package concept, the resulting modularity, and the "collection of packages" approach to constructing the system, it's very easy for someone to take just those parts of Debian that he needs and build value above them. And that's why Debian is a better system for this purpose than any other.
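The "collection of packages" idea can be illustrated with a tiny dependency-resolution sketch (package names and dependencies here are invented for illustration, not Debian's real metadata): you ask for just the packages you want, and an install order covering only their dependencies falls out.

```python
# Toy sketch of the "collection of packages" idea: pick a subset of
# packages and compute an install order that satisfies dependencies.
# Package names and dependency lists are made up for illustration.

DEPS = {
    "base-files": [],
    "libc": ["base-files"],
    "dpkg": ["libc"],
    "apt": ["dpkg", "libc"],
    "editor": ["libc"],
}

def install_order(wanted, deps=DEPS):
    """Depth-first walk: each dependency comes before the package needing it."""
    order, seen = [], set()
    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for d in deps[pkg]:
            visit(d)
        order.append(pkg)
    for pkg in wanted:
        visit(pkg)
    return order

print(install_order(["apt"]))
# -> ['base-files', 'libc', 'dpkg', 'apt'] -- only what 'apt' needs
```

Nothing forces you to install the whole system: the resolver touches only the closure of what you asked for, which is the modularity that lets others build custom systems on just the parts of Debian they need.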

This discussion has been archived. No new comments can be posted.

  • Prolly.. You don't want it though.. I played with the initial SLS release, and ended up rolling my own after a week or two. Granted, it was convenient, and I did use it as a base for the roll, but still..
  • by MostlyHarmless ( 75501 ) <artdent@nOsPAM.freeshell.org> on Thursday July 27, 2000 @07:39AM (#900515)
    "The system is the network, the network is the system" ...

    ... and we are all together. See how they fly like pigs in the sky see their stock soar.

    I'm buying.

    Sitting on a hard drive, waiting for the 'net to load.

    Pornographic pictures, stupid NT systems, man you've been a naughty boy you got your sendmail old.

    I am the admin. WHOOO!
    They are the admins. WHOOO!
    I AM THE PENGUIN! GOO GOO GOO JOOB!

  • Download packages individually? EEEEwwww! ISO images? That means burning a CD, and re-installation! EEEEwww!

    I wanted to go from deb. 2.0 to 2.2, frozen branch. So, I typed apt-get dist-upgrade, went and made a sandwich, played some Zelda 64, and came back, answered some config questions (need a way to automatically accept default...), and had a working 2.2 without having to reboot. Sure, I had DSL to make this nicer, but still... this was very cool.

    I used to use Debian because I liked the philosophy. Now I use it because of apt-get (and the philosophy too).

    From a technical standpoint, apt-get is what sets Debian apart.

  • No, a woodchuck would chuck as much wood as a woodchuck could chuck if a woodchuck could chuck wood.
  • I am not sure I would trust the judgment of someone who is not even vaguely familiar with any other distro than his own. Kinda seems like a "head in the sand" mentality to me.
  • He didn't explain it fully in his answer.

    With your example you still have multiple individual computers connected to a central server that stores /home and /usr.

    But with Linux NOW it doesn't sound like you have a central computer, just a bunch of computers that are connected. Some of the data may be stored on your buddy's computer down the hall. If you access the same data a lot, it would move itself over to your hard drive.

    This is what I understand at least.

    check out these other articles about it:
    http://www.linux-mag.com/online/progeny_01.html [linux-mag.com]
    http://www.linux.com/interviews/20000712/63/ [linux.com]

  • Where do you see this? Anyway, the Debian HURD packages are merged with the Debian Linux packages to the maximum extent possible - i.e. everything down to binutils and glibc compile from the same package.
  • The oldest (and only) CD-ROM I have is Yggdrasil with the 1.2.13 kernel on it. I've kinda just upgraded the whole system bit by bit, and now (like Jefferson Airplane) not one of the original components remains. (No! I tell a lie, still using the same crappy floppy & CD-ROM drives -- they don't see much use.)
  • We have a GE microwave oven that has possibly the worst UI design of any one ever built... normal touch-sensitive buttons, but the biggest two buttons (and colored differently from the numbers) on the bottom of the pad are: "clock" and "timer"(the non-cooking countdown timer)... "start" and "stop/cancel" are hidden to the side, and even smaller than a number button. Very, very strange.

    GE toasters are pretty cool.... I mean, hot...
  • Reading his comments about AFS vs. Sprite I... well... still prefer AFS. I looked into Sprite a while back, and I either didn't "get it" (a favorite Katz phrase), or we're talking about two totally different things...

    >AFS and other file systems to share name spaces, but the end result is that all these machines still have their own disks and their own name spaces built above them. Resources are scattered all over the network

    and...

    >Sprite provided a single system image across a cluster of machines, including a single file system image; so, although there may be many computers and thus many disks in the network, there is one file system shared by all of them.

    This is where AFS seems to have a nice advantage over Sprite for some instances, but Sprite would work well in others... Sprite seems to assume a lot of things, and moves itself into more of a niche, rather than a nice broad-purpose file system. Anybody feel any different on this one or want to point out what I don't see?
  • wasn't it coo-coo cu-choo (or koo-koo ku-choo)? Certainly no 'b' on the end (rifling through CDs to find it...) [damn, no lyrics printed]...
  • apt-get is only great if you are on a relatively fast connection, or if you don't have to upgrade a large number of packages, IMO.

    Seems to me apt-get is a pretty bad solution if you are installing a new system, and you are on a slow connection.
  • However, Debian does provide packages for much of those things, specifically, there is a QMail package. (Though it's slightly different from most others because of QMail distribution requirements).

    qmail is technically non-free (it fails to meet the Debian Free Software Guidelines) because its license prevents redistribution of modified versions. Instead, one must distribute a patch which can be applied to the pristine upstream source.

    Debian can't package a pristine qmail binary, because qmail's design conflicts with the Debian policy. (qmail uses ~/Mailbox for message storage instead of /var/spool/mail/$USER; and Debian requires that all mail programs use the Debian locking library, which qmail naturally does not use.)

    Thus, Debian cannot provide qmail binaries. Instead, they provide "Debianized qmail sources" -- which is basically a collection of the pristine qmail source tarball along with Debian patches and build scripts.

    I've used the Debianized qmail before, but honestly, I just don't like it. (It's also caused grief for a lot of people, since at some point during potato it stopped working. I don't have details beyond that, because I stopped attempting to use it, and stopped caring about it.)

    Personally, I recommend building qmail yourself from the source code (download it from qmail.org [qmail.org]). Debian already gave you the user-IDs and group-IDs in /etc/passwd and /etc/group, so that much of the installation is already done before you even download the source.

    Of course, building something as fundamental as a mail transfer agent tends to raise issues with the Debian packaging system. But there's an easy solution: equivs. The Debian "equivs" package allows you to tell the packaging system that you already have a mail-transfer-agent package installed, thankyouverymuch, and please don't delete all the packages that depend on mail-transfer-agent. :)

    Oh, and in answer to a previous question in this thread: the default Debian MTA is exim, not sendmail.

    (Some of you may know me as greycat on #debian.)

  • by Multics ( 45254 ) on Thursday July 27, 2000 @05:29PM (#900527) Journal

    I attended a presentation he made at Purdue on Monday, 7/24/00 to the PLUG (Purdue Linux Users Group). He gave an hour presentation and dealt with an hour of formal questions and more than an hour of one-on-one questions. Several things struck me about his presentation:

    1) In the hour presentation, he spent 40 minutes talking about Unix history. Sadly, he was wrong about lots of little things, such as that Unix was designed to be a time-sharing system -- NOT. I would have hoped for 10 minutes of history and 50 minutes of NOW.

    2) In the remaining 20 minutes, he described NOW. It sounded much like Athena and especially Plan-9. It is problematic that Plan-9 took 10 years to solve many of the same problems, while they have far less time than that. He was "not familiar" with MIT Oxygen [mit.edu].

    3) NOW's time-line seemed unrealistic, and a core of PhD-class CS problem solvers was notably missing. NOW's goals (given the time line) should have been aggressively well defined, and yet "we're looking at that" was often an answer.

    4) He was factually incorrect about the features of Plan-9. If he'd even read and absorbed Plan 9 from Bell Labs [bell-labs.com] he'd have been in better shape.

    5) The company is missing a definitive business plan. It shows already and they're barely off the ground.

    6) The office location they've selected in Indianapolis is one of, if not the, most expensive locations in the entire city. This means their venture capital burn rate will be extremely high. Within 5 minutes of that location there are places that cost 25% of that location.

    7) The presentation was an unabashed hunt for warm bodies that know something about Unix (Indianapolis is a nice place, but far from a hot-bed of computing -- Unix or otherwise).

    So I came away with the feeling that they'd not done their homework before they started. Further that their venture capitalists said, "Linux is hot, who is available? Ian? oh good. Let's give him buckets of money and see if he can do 'stuff'."

    In the end, they're destined to fail. They have a poor grasp of Linux pre-history (Multics [multicians.org] & Unix real history) and lack good technical management to judge wisely how to spend their finite amount of money.

    Too bad. NOW as a concept doesn't seem like a clunker, from what little we were told about it.

  • 2.0 was glibc based. Actually, the reason I did the apt-get dist-upgrade in the first place was because trying to upgrade from glibc 2.0 to 2.1 was turning into a nightmare. apt-get did it for me. It was similar actually to libc5->glibc, but not as bad. I don't know how apt-get would have worked on that Gordian knot.

    And I'd long since stopped using the old-as-dirt 2.0.36 kernel that came with deb. 2.0, so there was no need to upgrade the kernel, hence no need to reboot.

  • However, Debian does provide packages for much of those things, specifically, there is a QMail package. (Though it's slightly different from most others because of QMail distribution requirements).

    qmail is technically non-free (it fails to meet the Debian Free Software Guidelines) because its license prevents redistribution of modified versions. Instead, one must distribute a patch which can be applied to the pristine upstream source.

    Erm, yes, hence the note, which you quoted, where I mention that it is different from standard Debian packages. ;-)

    Debian can't package a pristine qmail binary, because qmail's design conflicts with the Debian policy. (qmail uses ~/Mailbox for message storage instead of /var/spool/mail/$USER; and Debian requires that all mail programs use the Debian locking library, which qmail naturally does not use.)

    This is correct.

    Thus, Debian cannot provide qmail binaries. Instead, they provide "Debianized qmail sources" -- which is basically a collection of the pristine qmail source tarball along with Debian patches and build scripts.

    Actually, there is more to it than just the 'Debianized QMail sources'. When you get the qmail-src debian package, it also pulls down a build script that will automatically build and install QMail for you. This script can be run with 'build-qmail'. (Modified versions of this script are included with most of the QMail related programs, such as ezmlm, ucspi-tcp, dot-forward, and rblsmtpd, making it as easy as a single command to have working QMail binary packages.)

    I've used the Debianized qmail before, but honestly, I just don't like it. (It's also caused grief for a lot of people, since at some point during potato it stopped working. I don't have details beyond that, because I stopped attempting to use it, and stopped caring about it.)

    Interesting, I've been using QMail since before it became a Debian package, and I've been using the Debian package as long as it's been available, and I've never run into this problem. I've also tracked the (semi-official) Debian QMail mailing list since its creation, and I don't recall hearing about any problems like that there. I've actually been very impressed with the high quality of the package, and the excellent job its maintainer has done delivering an easy-to-install QMail binary with minimal hassle, thanks to the build scripts.

    Personally, I recommend building qmail yourself from the source code (download it from qmail.org). Debian already gave you the user-IDs and group-IDs in /etc/passwd and /etc/group, so that much of the installation is already done before you even download the source.

    For the record, I'd strongly suggest just grabbing the Debian QMail 'src' packages. Doing that will have QMail up and running for you in just minutes, and with essentially no hassle at all. Considering the license restrictions the QMail package maintainer has to work around, the ease with which it installs is little short of amazing.

    Of course, building something as fundamental as a mail transfer agent tends to raise issues with the Debian packaging system. But there's an easy solution: equivs. The Debian "equivs" package allows you to tell the packaging system that you already have a mail-transfer-agent package installed, thankyouverymuch, and please don't delete all the packages that depend on mail-transfer-agent. :)

    Ugh. I would *strongly* recommend against the above-mentioned procedure, especially in order to get QMail working. The Debian equivs package is something of an ugly hack, and should only be used when it's absolutely necessary. When packages are already there, you're just gonna get yourself in trouble unless you really know what you're doing. (In which case getting the qmail packages built and installed should be cake.)

    Oh, and in answer to a previous question in this thread: the default Debian MTA is exim, not sendmail.

    This is correct, with a significant minority of the Debian developers pushing for Postfix as a possible replacement for exim as the default.

    (Some of you may know me as greycat on #debian.)

    (None of you will know me as anything on #debian, as I tend to live on SorceryNet [sorcery.net]. ;-)

  • Ah... well, then - if the vinyl says so ;-)

    I'm 22, and the albums are mine... what does that say?
  • Yeah - here (and back at school) we ran with a fairly sizable AFS cache (~30-200MB in RAM, depending on the machine).

    >So, at a particular local machine you can see some files that other machines in the cell see, but not others

    I've run into that in my AFS admin days, too... nothing is really perfect, but as a rule, it seemed to work better than most. Plus, it's a pretty nice solution across AIX/SysV/Sol/Linux. DFS is nice, too (and has better NT support), but some strange things tend to happen with the local cache and tickets (more often than AFS, it seems).

    The ACL/mode bit deal does get confusing now and then... such are the problems with certain filesystems. Personally, I much prefer the ACL control - it's far more flexible and, in most cases, more useful. Usually, if most of your stuff is on AFS, the users never have to play with the local drive and the different security model.

    I'll have to look more into the details of Sprite, since I'm still not exactly sure where all of these files are kept... if the files are distributed across the different workstations, and one goes down... that would be a bad thing. Backups would suck up tons of network bandwidth (nobody there in the middle of the night anyway, but hey)... I probably need to do some more reading on it, but it doesn't seem like that great a solution right off the top...
  • Linux NOW sounds pretty interesting. Something I didn't see him address, though, is the absolutely most important feature of a distributed (file) system: Simple, easily understood (and discovered) behavior. Every time I save a file, I don't want to have to think to myself "Let's see, I saved this from a desktop, so if I go to a laptop I have to hit refresh but make sure not to save changes....etc".

    Even if done right, this could prove a real nightmare to administer, especially if my home-configured notebook is to be integrated without thinking about whose console I will be trying to use when those guys in the other part of the building have finished lunch (or when that switch is reconnected...).

    Am I the only one to have thought this sounds just a bit like vapourware coming out of micros~1 just two years before they start putting on their thinking-caps?

    There's everything in there - people will not need to know, the network will be fully transparent, the system will just do anything you want it to - action at your nerve-tips!

    Just being sceptical. Never mind. Maybe it's just the all-powerful Debian that will do it. You never know.

    Kiwaiti

  • by Ron Harwood ( 136613 ) <(ac.xunil) (ta) (rdoowrah)> on Thursday July 27, 2000 @07:48AM (#900533) Homepage Journal
    Some references I found:

    "Slackware is based on the older SoftLanding System Linux"

    "SuSE started in the Linux business by distributing the SoftLanding Systems version of Linux"

    I also found this reference [linuxjournal.com] to an old Linux Journal article... written by Ian A. Murdock... saying basically that SLS was so bad he had to do better... and started Debian.
  • Shoook! *Bang*

    I'm not very good at sound effects... that's the sound of a piece of metal being drawn to a magnet.

  • by Jason Earl ( 1894 ) on Thursday July 27, 2000 @07:48AM (#900535) Homepage Journal

    There are, thank goodness, images of both frozen and even unstable available. See: cdimage.debian.org [debian.org] for more information.

    In fact, if you are new to Debian I would strongly recommend checking out the frozen iso images. Installation has improved tremendously, and you'll end up with substantially newer packages.

    Just remember that Debian is designed in such a way that you only install it once. So when you get tired of upgrading your RPM based distros piecemeal, wade through a Debian install and learn the power of apt-get.
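    For anyone curious what "the power of apt-get" amounts to in practice, the install-once workflow is roughly this sketch (the mirror URL and suite name are examples; yours come from /etc/apt/sources.list):

```shell
# /etc/apt/sources.list entry pointing at a mirror and the suite you track,
# e.g. (example mirror, pick one near you):
#   deb http://ftp.debian.org/debian frozen main

apt-get update        # refresh the package lists from the mirrors
apt-get dist-upgrade  # upgrade everything, adding/removing packages as needed
```

    Run on a live Debian system as root; there's no separate "reinstall to upgrade" step, which is the point being made above.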

  • by Anonymous Coward
    Oh please, their release cycles are way too long.

    I mean, who wants to be toasting with 2-3 year old technology?

  • Well, it was one of the first distributions and generally thought of as the forerunner to Slackware (also one of the first). I have a CDROM with this Soft Landing Systems v1.2 from January 1994 (I believe it was) and it was from the times when one CDROM was plenty of space for a whole distribution :-)
  • Instead of running all applications remotely from a server(and thereby focusing the load on the server) Linux Now is attempting to utilize the power(ie processor and storage) of each individual workstation and distributing the load on the network while at the same time retaining the easy administration of one system.
  • This is similar to "The Debian Linux Manifesto" [sfai.edu], the document that started the Debian project (as included in A Brief History of Debian [sfai.edu]).
  • How is Debian with mixed packages (i.e. deb packages and .tar.gz).

    I usually like to get my kernels directly, mostly because the RedHat patches make it impossible to patch the kernel from ftp.kernel.org

    I also don't like to use packages like BIND or Sendmail (using djbdns and qmail instead).

    RPM doesn't have a problem with this, but then again, RPM seems pretty braindead compared to apt-get.
  • ... you can prevent an NFS crash from hanging other systems that mount the NFS disk, there are some options to mount_nfs which need to be changed from the defaults. It differs from system to system however so I won't say 'do it this way' but there is some hope. You're right though, the long term solution is to replace NFS with something less crufty.

    WWJD -- What Would Jimi Do?

  • Well, yeah. But by the same token downloading an ISO image has the same problems, except that it's good for a new system and not an existing one.

    Actually, I don't think you can use apt-get to install a brand-new system. It at least has to boot Debian before apt-get will help you.

    Anyway, once the system is up and running, just running apt-get frequently will keep the number of packages to update small, thus alleviating the bandwidth concern.

  • The link is not www.mosix.org but www.mosix.com [mosix.com].

  • There's nothing stopping you from installing from tarballs in Debian, but why would you want to? With very few exceptions, Debian provides virtually every package you could ever want. You certainly don't have to use sendmail if you don't want to (in fact, I don't think it's even the default, although it's been a while since I last installed Debian). The incredible upgradability and dependency checking of .deb's also means that you'll never want to use anything else.

    And about kernels: Debian provides both images and source on their servers. I never touch the images so I can't comment on them. They are generally pretty good about providing up-to-date source (of course, I'm using unstable), although naturally if you want to get the latest version right away you're better off downloading it from kernel.org. However, there's not much of a difference when you're talking about kernel sources. Even with the .deb's, the source isn't unpacked until you do it yourself, so the source package basically contains one file. The only real reason to go with the .deb is convenience.

    Now, Debian does provide a great package (make-kpkg) that allows you to compile and install your kernel as a Debian package. You can use this regardless of where you got your source and I highly recommend you do. It streamlines the whole process of installing and maintaining your kernel.
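    A rough sketch of the make-kpkg flow described above (the kernel version, revision string, and architecture in the filename are made-up examples; run as root or under fakeroot):

```shell
# from the top of an unpacked kernel tree, e.g. /usr/src/linux
make menuconfig                                  # configure the kernel as usual
make-kpkg --revision=custom.1.0 kernel_image     # compile and wrap it in a .deb
dpkg -i ../kernel-image-2.2.17_custom.1.0_i386.deb  # install like any package
```

    Because the kernel ends up as a normal Debian package, dpkg tracks it, and removing or upgrading it later works the same as for anything else on the system.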
  • The Magical Mystery Tour vinyl says goo goo goo joob. I've also heard people say that near the end, when it starts degenerating into cacophony, you can hear "dead si nhoj", backwards for "John is dead".

    No, I don't believe that one either. :-)

    And in case anyone happens to click my user info and find out that I'm 14, it is my parents' record :-)
  • How is Debian with mixed packages (i.e. deb packages and .tar.gz).

    If you put your custom software in /usr/local, then Debian packages will *NEVER* touch them. Debian policy is that /usr/local is sacrosanct, and must always be left for the sysadmin.

    I usually like to get my kernels directly, mostly because the RedHat patches make it impossible to patch the kernel from ftp.kernel.org

    Then you'll prolly love Debian's make-kpkg utility. Debian provides standard kernel packages, for those who choose to use them. They also provide make-kpkg, which allows you to download the kernel source yourself, and then compile it according to *your* configuration, and then turn that kernel into a Debian package.

    This way, you can add in whatever custom patches you want, and still install your kernel as a standard Debian package for organizational purposes.

    I also don't like to use packages like BIND or Sendmail (using djbdns and qmail instead).

    Again, if you just place all of your own stuff in /usr/local, you'll be just fine, as Debian will never touch it. However, Debian does provide packages for many of those things; specifically, there is a QMail package. (Though it's slightly different from most others because of QMail distribution requirements.)

    If you like adding custom patches to software that has Debian packages, it is *very* easy to get the source. (Debian doesn't actually have source packages per se, but you can grab the full source of a program, along with the debian build scripts, as easily as 'apt-get source packagename'.) You can then patch it as you like and turn it into a Debian package which you can install as easily as the original.

    RPM doesn't have a problem with this, but then again, RPM seems pretty braindead compared to apt-get.

    apt-get should have no problems with any of this, either. And it is much more intelligent and powerful than RPM. ;-)

    Have fun, and good luck!
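    The patch-and-rebuild cycle just described looks roughly like this (the package name, version, patch file, and resulting .deb name are all placeholders for illustration):

```shell
apt-get source somepackage          # fetch upstream source plus the debian/ scripts
cd somepackage-1.0                  # directory name depends on the actual version
patch -p1 < ../my-custom.patch      # apply your own changes to the tree
dpkg-buildpackage -us -uc           # build an unsigned .deb (as root or via fakeroot)
dpkg -i ../somepackage_1.0-1_i386.deb  # install your patched build
```

    From dpkg's point of view the result is a normal package, so upgrades and dependency checking keep working.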

  • 3) Incompatibilities between versions. Version 2 is incompatible with version 3 (currently used in the 6.x series), and the next version 4 is incompatible with 3. And I can't upgrade rpm, because the rpm package itself is in rpm version 4. Which means I can't upgrade the packages at rawhide, because they're all in 4.0 rpm packages.

    Tech tip: Install rpm-3.0.5*.rpm - it understands both version 3 and version 4 packages, and should allow you to get going on those Rawhide packages. That said, I've had some funnies with 3.0.5 seg-faulting on me, so it's not exactly bullet proof. Exact diagnosis of the problem has escaped me... I run a heavily rawhide patched RedHat 6.1 with HelixCode installed over the top, and the problems arrived when I stripped out XFree86 3.3.5 in preparation for 4.0.1.

    Cheers,

    Toby Haynes

  • It mostly happens on mundane things like libs and some basic apps that deal with the command line. I may have seen some Gnome things in there too.
  • Bummer. I think he's a slackware holdout. *g*

    Actually, he answered that one at the top of the article.

    Ever get the impression that your life would make a good sitcom?
    Ever follow this to its logical conclusion: that your life is a sitcom?
  • I guess I'm lucky, cause I have never had any problems with RPM.

    My problem, though, is this: I have been using Red Hat forever, it seems like. I know how it works, I know what it does, and I am, believe it or not, happy. I keep wanting to play with Debian, to see if apt-get is really what everyone says.

    But my Red Hat machines run well, have never been cracked, and I know the system. My stuff is nicely customized, and I'm happy.

    Now, I know, the answer here is "Fine, you're happy, shut up then" but that isn't exactly what I am looking for. What I am asking is, what then is the "migration path" (sorry) if I wanted to use Debian? My first impulse was buy a new hard drive, move hda to hdc, make the new one hda, and install. Then just after I'm done, copy things as needed. Then the happy-fun time of downloading 8 zillion things, over and over, to get it all, since I am using Qt and other things that make some Debian types I know really, really cross. If something goes really, really bad, I can just put the old hda back, and reboot.

    That's going to take a weekend, I guess. There *has* to be a shorter path. I can't be the first person to want to do this, and I can't believe that droves of people are going to Debian and manually reinstalling their machines after using Red Hat, Slackware, or SuSE. "Wade through" doesn't inspire me. I've been using Linux for years now, and I don't wade through anything. If I wanted to wade, I'd use NT (although that's more like drowning, isn't it?).

    Thanks in advance.
  • Or at least it was, a year and a few months or so ago when I decided to try Linux for the first time. It stands for Softlanding Linux System and exists (AFAIK) entirely as disk sets. I tried to d/l the 1st two, disk sets A and B which were evidently what you needed to get a basic system started (without niceties like TeX and X and emacs) and tried to rawrite them but I failed to get a system going due to my inability to grasp things like partitioning hard drives and fips-ing.

    Which is fortunate because about a month later I found debian, which quickly became both my first successful non-windows install and my OS of choice. And which ironically mentioned SLS on its web site.

    I found SLS by searching AOL for "linux" (yes, I was on AOL--I went for the free month to use their browser because one of my DLLs was corrupt and it prevented both IE and netscape from working on my then win95 box, plus stopping me from reading email, prompting me to get my yahoo addy; this was also what prompted me to consider a change in OS) and it may still be there. It evidently hadn't come very far from 1993 (prob'ly ended completely somewhere along the line and was simply circulating in frozen form); it didn't seem very advanced, looking at the descriptions and features, once I found out what other distros could do.

    If you want a full system it involves d/ling (or somehow obtaining, I don't think anyone sells SLS) something on the order of thirty or forty, or fifty floppies, give or take. Yecch. I haven't seen this system in CD form.

  • Sorry, couldn't resist that one.

    I would mod that one up +1 insightful or interesting (one of the two) if I hadn't already posted this round. Somebody do it for me? pleaze?

  • Is it just me, or does Softlanding Linux System sound like a great Hitchcock title?

  • The other distros are hopelessly out of date. You can auto-d/l all relevant packages after just installing the base system. Or you could auto-d/l all packages into an archive directory, leaving them compressed until such time as you need to install them, selecting the ones you need using the fancy semi-graphical package manager interface. Or you could make an ISO image of the contents of that directory, and once you burned a CD you could delete it all.

    You can also have it update automatically the day a new version of your favorite package comes out, but you don't care about the convenient features, you prefer the ones that tie you down so never mind.

    But yes, they do provide images of the "official" distribution CD. I'm not sure on which part of which site, but I am 100% certain I read in their official documentation on debian.org under "how do I get debian" that they provide CD images. Vendors also custom-mix their own Debian CDs to sell from packages, but they do have images they provide.

    Also bear in mind there is much more to the world than just you. I have installed more debian systems from floppy than from the CD set that I bought. I can't see how the point of the distro would be not to have to d/l the packages--if it weren't for the distro, there would be no packages to download! Most of the installs I've done there was no CD drive available and so after the ten-floppy base set was installed I let it loose on the modem and the whiz-bang package installer went and got everything else I'd picked off its list while I was away.

  • Alright then...

    You put /home on its own partition, right? Use tar to back up /etc and maybe /var, /usr/local, /opt, /root (and /home if you were unfortunate enough to put /home on the / partition). Then toast your / partition and install Debian off the full cd set (or from a fast net connection). Debian includes more packages than any other distro so most stuff you use will be there. You can just use your old /home as is, maybe removing some dotfiles if you'd rather use Debian's application defaults. The other stuff I listed for backup is mostly "just in case," so if you don't remember how you had your system configured, you can pull it off backup and check. The beauty of this is that all of your important files are located under /home, since you know better than to do routine work as superuser and put files outside your home directories. (Right?) And should you want to revert to your old distro for some odd reason, the above backup and reinstall plan works just as elegantly for putting that back on instead too.

    Regarding stuff that isn't in Debian...

    First off, a number of packages that peeve Debian (such as KDE) are available from third-party sources. The best thing about apt is that you can insert the location of these files into your sources list, and voila, they will be transparently handled as Debian packages.

    Secondly, Debian includes the alien utility, which allows you to just use RPM packages. So if you still have those RPMs of the packages you want on your hard drive, or for that matter have them on your Red Hat CD, you can just let alien install them. (Although I recommend against this approach if possible; RPMs routinely ignore the filesystem standard and files end up in weird places.)

    Also, I've seen CD distributors that make unofficial Debian CDs with KDE or whatnot on them.

    (Note: I'm planning to do much the same thing within a month to clean up some cruft from running unstable versions of Debian. If anyone sees any flaws in the above backup system, please correct me.)
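    The backup-then-reinstall plan above boils down to something like this sketch (the archive path is arbitrary, and the RPM filename is a placeholder; all of it runs as root):

```shell
# back up config and local data before toasting the / partition;
# /home stays untouched on its own partition
tar czf /home/pre-reinstall-backup.tar.gz /etc /var /usr/local /opt /root

# after the Debian install, a leftover RPM can be converted with alien
alien --to-deb somepackage.rpm   # produces a somepackage_*.deb
dpkg -i somepackage_*.deb        # then install it like a native package
```

    If the new install goes sour, the same tar archive restores the old configuration just as easily, which is what makes the plan reasonably safe.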
  • by jawad ( 15611 ) on Thursday July 27, 2000 @07:02AM (#900556)
    Interview him again! What's his FIRST favorite Linux Distribution?
  • If you weren't gunning for first post (like I was), you'd see the first freaking question!

    1) Distributions
    (Score:5, Interesting)
    by Chalst

    What's your second-favourite Linux distribution?

    Ian:

    I got started with Linux in 1993 using a distribution called SLS. I started Debian later that year, and have been using Debian ever since. So, I guess that makes SLS my second-favorite distribution.

  • Read from the beginning:

    1) Distributions
    (Score:5, Interesting)
    by Chalst

    What's your second-favourite Linux distribution?

    Ian:

    I got started with Linux in 1993 using a distribution called SLS. I started Debian later that year, and have been using Debian ever since. So, I guess that makes SLS my second-favorite distribution.

  • Anyone know if the SLS distro Ian mentioned is still floating around somewhere? I haven't heard of it, and it would be interesting to see how distros have improved and changed since 1993.
    -colin
  • by Anonymous Coward
    Too bad you used your +1 bonus on this AND didn't read the interview before posting...
  • Umm. That was the first question he answered.
  • by FascDot Killed My Pr ( 24021 ) on Thursday July 27, 2000 @07:14AM (#900562)
    "Debian is a volunteer project, so it's not like Debian is taking away resources from other projects to work on the Hurd, like a company might have to do."

    Parameterize this baby and you've got yourself a response to "why don't these developers quit writing IRC clients and all contribute to some one project".

    Linux NOW sounds pretty interesting. Something I didn't see him address, though, is the absolutely most important feature of a distributed (file) system: Simple, easily understood (and discovered) behavior. Every time I save a file, I don't want to have to think to myself "Let's see, I saved this from a desktop, so if I go to a laptop I have to hit refresh but make sure not to save changes....etc".
    --
    Give us our karma back! Punish Karma Whores through meta-mod!
  • is it just me or is social contract sound like a great Hitchcock title?
  • Shhh! I'm trying out my new troll magnet.
  • Sounds more like a political idea by Hobbes, but that's just me. ;-)
    Now, Slashdot! sounds like a Hitchcock title.
    -- the demiurge
  • Per-process namespaces like in plan9--these are slated to be added to Linux, are they not?

    Scott
  • Moderators!
    where are you?

    this one is ART!!!

    long live The Beatles!

    Im buyiiiiing....

    Kiwaiti

  • by PD ( 9577 )
    I did the same thing. I used SLS for three years until spring 1996. I kept updating the thing, until I finally installed Red Hat. I switched because it was very painful trying to keep up with library, compiler, X, and other changes by compiling every package from the source. I wasn't getting anything done because there was so much upgrading to do.

  • The claim that Debian is out of date and has a slow release schedule is rather overstated, in my opinion.

    Firstly, Debian releases many minor releases ("outdated" slink is at 2.1r5) which have updates to the most recent security alerts.

    In addition, there's apt, which updates your system incredibly easily. In fact, there was a recent post on debian-devel [debian.org] where a user updated from 1.3 (released 3 years ago) to frozen quite easily.

    Basically, if you want to run an "up-to-date" system with Debian, its release cycle won't get in your way.
  • Back in March, there was a question [slashdot.org] and answer [slashdot.org] interview with Pat Volkerding of Slackware [slackware.com].

    Someone posted a link to a 1994 Pat Volkerding interview [linuxjournal.com] done by Linux Journal [linuxjournal.com] in which, among other things, he talks about how Slackware came into existence.


  • Well I am ignorant of Sprite, so I am just going on what it seems to be from Ian's description. But I have used AFS for years, and adminned it for a while.

    The problem with AFS is that it is bolted onto UFS. So, at a particular local machine you can see some files that other machines in the cell see, but not others. That causes one sort of administrative problem, where semi-savvy users don't understand the difference. But it can also be a problem when you are trying to distribute everything out from the central servers to the clients. They have their own OS, executables, etc on the local system and you want to update them. It is not as easy as just changing one file. You have to have a system whereby the client machines examine the central server, and update themselves. Not that that is any great shucks, but it is a lot more complex than just "put the file there, and it's there". Which is what this Linux NOW and Sprite sound like. Furthermore, it was relatively easy to have something go wrong in the update scripts, leaving the machine in a bad state where it would not boot or where something was not updated correctly, etc. So we ended up doing a lot of "telnet to the user's machine and poke around" sort of administration.

    Another problem with AFS is that it tends to underuse the disk that is really available, since the local disk(s) on most machines cannot be accessed readily. For a system like I administered, that was not a problem. They had beaucoup cash. But for my home system, I don't want to see even a tiny little modern disk, like say, 13 GB, languishing semi-or-unused. 10 years ago disk was expensive and centralizing it made more sense that it does now, IMO.

    Finally, I think it is annoying (and somewhat hard on users) to have two different protection models for files depending on where they are. I liked ACLs, and really wouldn't mind even more flexibility of control. But try to explain to a secretary why this file has mode bits, this one ACLs, and what the difference is -- and why.

  • www.linuxiso.org [linuxiso.org] has a 2.2 (frozen) iso =)
  • Just one question. Included in the upgrade from 2.0 to 2.2 is a kernel upgrade, right? So don't you have to reboot in order to boot the new kernel? And what about the libc5/glibc thing? Ok, sorry, that was two questions.

  • I've done this thrice in the last couple of months, including 2 where the net connection is via modem.

    The basic rule of thumb is that you can keep your /home, but redo the rest. (Just keep a backup of /etc for configuration issues that come up later, and of course your /var/spool/mail.)

    I personally find the best thing other than apt in debian is that it makes good use of a large hdd.

    If you have a fast internet connection, have a look at the recent auto-apt. (If you try to use a file that is in something you haven't got installed, it automatically downloads and installs it for you.)

    Have a look at all of the non-RedHat packages you currently have, and see that almost all of them are already somewhere in Debian (other than KDE-like ones, for which you just add another sources.list (i.e. apt-get) entry). This makes upgrading to Debian much easier... after an install, just type in apt-get (list of all the programs you like) and come back in the morning.

    I guess transferring each of my machines to Debian was about a 6-hour process, BUT (due to technical local network difficulties) I had to install slink first before upgrading to potato, and had to recompile the kernel each time. Unless you have anything tricky (like needing to recompile the kernel) it may take you as little as 3 hours (including installing lots of programs that with RedHat you used to have to search for). YMMV.
  • But you would have to mirror all of the clients' data multiple times, to account for power supply failures, those laptops moving on and off of the network, and meteor hits to your buddy's house. It sounds about as graceful as RAID, in software, over a network... guess I'll have to rewire the place for gigabit.
  • Could be your grandparents' record... ah, the miracle of youth and technology
  • I worked at Locus Computing for seven years on their cluster technology, which is a contemporary of MOSIX. These are both single system image (SSI) systems that make a collection of Unix workstations appear to be a single computer. We actually shipped this technology successfully as subcontractors for a large hardware manufacturer (and indeed it still lives on across several business units of yet another large hardware manufacturer).

    We discovered, however, that not many end users cared very much about total SSI semantics. They typically wanted only the distributed file system, or the DFS plus process migration, or remote device support, but seldom did they care for the whole ball of wax. Locus never lost money on the technology, but it never gained the market acceptance and market share we had hoped for.

    So SSI is nothing new. The question I'd like to ask Mr. Murdock is, what has changed in the marketplace that makes SSI now a viable product on which to base a business? I don't believe that "because it's open source" is a credible answer.

    Ciao....

  • Well, there are a couple of things that annoys the hell out of me about RedHat (which is the one I'm using, and have been using since I started with Linux back in RedHat 5.1 days).

    1) I've never been able to upgrade a RedHat distribution. If it doesn't fsck up my system, the installation crashes. Which is why I now have /home in its own partition. Last time I did an upgrade (6.1->6.2), I just printed out a list of packages installed, and went through the packages on the CD and upgraded manually. Pretty painful.

    2) RPMs mess with my custom configuration.

    3) Incompatibilities between versions. Version 2 is incompatible with version 3 (currently used in the 6.x series), and the next version 4 is incompatible with 3. And I can't upgrade rpm, because the rpm package itself is in rpm version 4. Which means I can't upgrade the packages at rawhide, because they're all in 4.0 rpm packages.

    The one saving grace it has is that I know it. I know how to work around most of the quirks. I know how to create, fix and patch .rpm packages.

  • Ahhh... the VAX cluster lives again..

    On cheap hardware, even :)

  • Ian's last name is Murdock, not Murdoch. See e.g. his old homepage [arizona.edu]
  • Does Debian provide ISO images of their frozen branch?

    The whole idea of a distribution, to me, is so I don't have to download all the packages individually. Kind of defeats that purpose if I install Debian, and have to download all the packages to get them up-to-date on my system.
  • I don't know if many people know about the "Mosix" project at www.mosix.org. It transparently dispatches a process in the kernel level to a cluster of other computers in the network. I tried it on a group of computers at school and it is pretty amazing.

    This project was actually covered on Slashdot a year or more back. There was a large noise made because it consisted of a kernel patch and a module which were not GPL. Naturally people had the right to complain, and they resolved it by licensing it as GPL. Now the irony is that in spite of this, the Mosix folks can't seem to get any of the Mosix code folded into the mainstream kernel! As it stands now, there is a patch and module which must be applied to the kernel each and every time you get a new revision. When I first tried it, it took a few tries to get it all working. The Mosix folks seem genuinely interested in getting any portion, however small, merged into the mainstream kernel, because it makes their patch maintenance effort that much easier and more accessible to a larger user base. I think it would be great if someone with more involvement in the core Linux kernel development assisted them in this regard.

  • by pb ( 1020 )

    Even though that was an excellent answer to my question, it raises yet more questions. However, now I'm pretty excited about Linux NOW, so I guess that's a fair trade.

    At my University, like so many others, we use AFS and Kerberos to integrate all the machines. It sorta works, but it's messy and annoying, and sometimes it really lags, too. I'd love to see something better, instead of a bunch of symlinks and mount points stuck together with a bunch of duct tape.

    Also, I've configured NFS before; we recently needed to share resources between two machines, and that makes things impossible. If one of them goes down, the other one becomes catatonic. I realize NFS on Linux isn't perfect yet; in fact, it's downright bad.

    However, my box at home isn't connected, so until something better comes along, I'll run 'make -j' in a corner on my unclustered, uniprocessor box... :)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • i wonder if linux NOW has roots in the idea of catholicism WOW!

    sorry, i know ;)
  • Does Debian provide ISO images of their frozen branch?

    Yes. Even images of "unstable". At cdimage.debian.org [debian.org]

  • Can you imagine people arguing endlessly about the merits of a particular toaster or microwave oven?
    Well of course not. I mean, everyone knows that GE makes the best toasters.
    GE Toasters forever!
  • Ian Murdock has
    Replied to your great questions.
    So pay attention.

    Please read the answers,
    Don't be like Signal 11,
    Trying for first post.
  • "Linux NOW makes a network of workstations look like a single integrated system."
    "When it comes to dozens, or hundreds, or thousands of machines, however, these issues become much more difficult, and even after twenty years of using networks, people still don't have good ways of approaching them."

    An interesting statement, given that Digital Equipment Corporation first provided that capability on VAXclusters under VMS in 1983, and the current development of that initial product is available [digital.com] from Compaq for both OpenVMS VAX and Alpha systems, as well as forming the basis for TruCluster for Tru64 UNIX [compaq.com], including a distributed Cluster File System.

    Not that it isn't a good thing for Linux to have these capabilities, or for them to be Open Source; but maintaining that nothing like NOW currently exists just isn't the case.

  • In addition, there's apt, which updates your system incredibly easily. In fact, there was a recent post on debian-devel where a user updated from 1.3 (released 3 years ago) to frozen quite easily.

    When I went from 1.3 to 2.0, it was pure pain. The main obstacle was the move to glibc. I followed the upgrade instructions as closely as I knew how, but still ended up with stuff broken. Don't get me wrong, I like Debian, and still use it, but I find it difficult to believe someone went from 1.3 to 2.2 "quite easily".
  • Exactly...
    Can't possibly be Debian... he must be completely bored with it...


    --
  • disclaimer: i have been a debian user for about 3 years now. debian is easily the best linux distro i've ever used- on a desktop. this is not a flame.

    debian is horrible for embedded systems. i've tried it. the debian package management depends heavily on perl and a bunch of associated utilities & libraries which don't fit too well on, say, 8mb of flash disk. in fact, it takes a lot of effort to strip a debian distro down under 20-30 megs.

    if you really want to use linux on embedded hardware, do yourself a favor and just build your own distro. for small systems, it's really not all that hard.

    if you are lazy, check out lem [linux-embedded.com], linux embedded. it's about 8 megs total, and includes X and glibc 2.1.x.

    if you want linux on a desktop, or for a linux server, you can't go wrong with debian.

    =--- - - .
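    The stripping exercise mentioned above usually starts with finding the biggest installed packages. A minimal sketch of that sort-by-size step; the package names and sizes below are made up for illustration, not taken from a real system:

    ```shell
    # Given "installed-size<TAB>package" lines (the format dpkg keeps in
    # its status database), sort numerically to find the biggest packages.
    sizes=$(printf '10224\tperl\n1540\tsed\n8562\tlibc6\n')
    biggest=$(printf '%s\n' "$sizes" | sort -rn | head -2)
    echo "$biggest"
    ```

    On a real system the size list would come from dpkg (e.g. `dpkg -l` or /var/lib/dpkg/status) rather than printf.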
  • He's only used two distros. SLS was the first one he used, and since Linux kicks ass, he probably liked it a lot.
  • When upgrading my Woody-based system, I sometimes see [hurd binutils] after some packages. Is Debian GNU/Linux merging with the Hurd packages?
  • So he works in a vacuum? I would think that any self-respecting developer would want to check out the "competition" every once in a while to see if there are any good ideas out there, unless he thinks that only Debian developers can come up with innovative stuff.
  • IIRC, Slackware grew out of frustration at the lack of updates from SLS. The Slackware package system and section layout are a remnant of its SLS heritage (and of the days when you had to download several dozen floppy images to install it).
  • So, I guess that makes SLS my second-favorite distribution.

    Yep, he did. Not that I've ever heard of SLS...
  • The horrible thing is I made the same mistake when I was first looking through the headlines.

    The good news is: what would you ask Rupert Murdoch? All I can think of is "Does it upset you that the Simpsons spends a lot of its time insulting you and your company?" But Ian Murdock, well... I couldn't think of much to ask him either, but other people asked much better questions.

    Devil Ducky
  • Does Debian provide ISO images of their frozen branch?
    Check out http://cdimage.debian.org/ [debian.org]. There is an unobtrusive link towards the bottom about getting official CD images for potato. (It's not more prominent because they don't want to push "beta" software. IMO, potato is wonderfully stable.)

    PS They use the pseudo-image kit because of limited bandwidth on the sites that mirror the .iso images. The idea is that you conserve bandwidth on the main mirrors: you grab the packages from a fast mirror, assemble a pseudo-image, then rsync it (which, according to them, takes around 1% of the bandwidth of an .iso download) against one of the rsync mirrors. They have kits for both Linux and Windows.

    --
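    The two-step download described above (assemble a pseudo-image first, let rsync correct it second) looks roughly like this; the mirror hostname and image name are placeholders, not taken from the Debian pages:

    ```shell
    # Hypothetical mirror and image name -- substitute real ones from
    # the Debian CD image pages.
    MIRROR="rsync.example.org"
    IMAGE="potato-i386-1.iso"

    # Step 1: the pseudo-image kit builds a near-identical .iso from
    # ordinary (fast) package mirrors.
    # Step 2: rsync corrects it against the official image, transferring
    # only the blocks that differ (~1% of a full download):
    CMD="rsync --verbose $MIRROR::debian-cd/$IMAGE $IMAGE"
    echo "$CMD"
    ```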

  • It sounds to me like what he's describing can be done as follows:

    Mount /usr and /home remotely.
    Mount /usr/local and everything else in the root directory off of the hard drive.

    Voila! The system administrator now only has to keep the main system upgraded. The rare updates to the root directory can be automated over the network, and each user can still install his/her/its stuff in /usr/local without fear of it getting overwritten. Of course, this is a simplification -- a bunch of other directories can also be mounted remotely, but I don't know enough about the hierarchy to specify which ones.

    So it sounds to me like LinuxNOW just bundles that with a bunch of remote administration utilities. Where is the advantage?

    --
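    The mount layout in the parent comment can be written down as an /etc/fstab fragment. "fileserver" and the local device names are hypothetical, and the mount options are only one reasonable choice:

    ```
    # shared, admin-maintained software comes from the network
    fileserver:/usr    /usr        nfs   ro,hard,intr  0  0
    fileserver:/home   /home       nfs   rw,hard,intr  0  0
    # / and /usr/local stay on the local disk, so per-machine
    # additions survive upgrades of the shared /usr
    /dev/hda1          /           ext2  defaults      0  1
    /dev/hda2          /usr/local  ext2  defaults      0  2
    ```

    Note that /usr/local needs its own local mount here; otherwise it would live inside the read-only remote /usr.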
