Debian

Interview with Ian Jackson

Figuring you can never get too much Ian Jackson, Trevelyan writes: "Debian Planet has an interview with the long-time Debian maintainer, former DPL, current member of the technical committee, and author of dpkg. Also announced: Debian GNU/Linux 2.2r7 has been released, in case some of you thought Debian wouldn't be releasing anything this year =)"
  • Desktop or Server (Score:3, Insightful)

    by xannik ( 534808 ) on Saturday July 13, 2002 @07:04PM (#3879230)
    I would be interested in seeing Slashdot host an interview with Ian. As a user of Gentoo Linux, I have experienced much of the power of a ports-based system through its portage package management system, which has close ties to Debian's very own apt-get and dpkg. Debian seems very focused on a stable kernel, even more so than any other distribution I know of. Would it not serve Debian to focus more on the server side of things and leave the desktop to the propeller heads, Gentoo that is? :)
    • by reaper20 ( 23396 )
      Debian seems very focused on a stable kernel, even more so than any other distribution I know of.

      I think you mean a stable core of the distro. Debian uses Linus's kernels; they don't keep a separate distribution-specific fork like the commercial distributions do.
    • by jsse ( 254124 ) on Sunday July 14, 2002 @01:15AM (#3880435) Homepage Journal
      As a Debian and Gentoo user who has used Linux extensively in production environments, I think I can answer some, if not all, of your questions.

      Gentoo is great; its portage system rocks. The feeling of optimizing every single package, squeezing the last drop of performance out of your existing hardware, is so cool.

      However, the portage system cannot beat Debian's package system in a production environment. First of all, most production systems have the development tools removed, especially firewalls and edge servers. We don't do this to make life harder, but we must reduce the risk and the losses when the boxes get hacked, and a source-based system like portage needs a full compiler toolchain on every machine.

      Second, updating packages with the portage system takes too much time. Even if you do the update every day, 'emerge -u world' still takes a long time, never mind when we can only perform the update once per week.
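
      For comparison, a minimal sketch of the two update workflows (exact flags and timings vary with the installation):

        # Gentoo: sync the portage tree, then rebuild every outdated
        # package from source -- time scales with the compile work
        emerge sync
        emerge -u world

        # Debian: refresh the package lists, then install prebuilt
        # binaries -- mostly download and dpkg time
        apt-get update
        apt-get upgrade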

      Third, and most important, the strength of Debian's package system lies not only in its technical merit but also in its overall management by the maintainers. As we know, Debian is divided into three distributions: stable, testing, and unstable (still in development). Each branch is carefully managed and maintained. The stable distribution is very desirable for most production environments.

      You may say most packages in 'stable' are out of date, but it really is stable, thanks to the efforts of many maintainers.

      I would say the portage system's status is very close to Debian's sid distribution. However, deploying an unstable version in a production environment is very risky, especially on servers handling expensive transactions, where a 10% boost in performance cannot cover the loss from a single downtime.

      Just my two cents.
  • by Anonymous Coward
    Its current (stable) distro has the oldest Linux software of any of the major distributions. If they do not release a stable 3.0 soon, they will most likely go the way of other dearly departed Linux distributions, such as SLS (Softlanding Linux System) and Yggdrasil.

    Lots of current Debian users have already moved on to Gentoo. And while it is a fairly nice setup, I will continue to enjoy my uncrackable OpenBSD install. There's a reason they're going on 5 years without a remote hack.
    • by SavingPrivateNawak ( 563767 ) on Saturday July 13, 2002 @07:25PM (#3879287)
      Well, 'outdated' is not a major problem when you want a RELIABLE server.
      And I think this is the point of Debian Stable. Tell me exactly what you would need for a home or semi-pro server that's not in Debian Stable?
      If you want recent software, 'testing' and 'unstable' should make you happy.

      I think it's a very good thing that a distro keeps a branch that is very unlikely to cause security problems.
      • Tell me exactly what you would need for a home or semi-pro server that's not in Debian Stable?

        Journalling filesystems. I've got 120GB of crap on my home system, and there's not a fucking chance that I (or anyone else) should have to sit through ext2fs crawling through a filesystem that big because someone accidentally pulled the wrong power plug or there was a brownout.

        People keep saying that Debian stable is perfect for servers. How can it be?!?! It's running a 2.2-series kernel, so unless you're going to start patching like a quiltmaker on amphetamines (thus defeating the purpose of all that wonderful stability testing), you're not going to have journalling filesystems, files over 2GB, decent SMP support, or drivers for any new gigabit Ethernet, SCSI, RAID, or SAN adapter card released in the last 3 years.

        What sort of servers exactly is Debian stable suitable for? The only thing I can think of is small uniprocessor PC systems without any significant amount of attached storage, i.e., a pissant little firewall or router. And there are better distros than Debian for those purposes.
        • I'm running kernel 2.4.18 with ext3. It really just depends on how much work you want to put into your system after Debian finishes installing. Switching from ext2 to ext3 literally took me less than 10 minutes.
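
          For reference, a minimal sketch of that conversion, assuming a 2.4 kernel with ext3 support and a filesystem on /dev/hda1 (the device name is just an example):

            # add a journal to the existing ext2 filesystem (non-destructive)
            tune2fs -j /dev/hda1

            # then change the filesystem type from ext2 to ext3 in /etc/fstab
            # and remount; a kernel without ext3 support will still mount the
            # filesystem as plain ext2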
          • I know it's possible to upgrade Debian 2.2 to just about anything, but the argument that keeps being presented by Debian supporters is "Sure, Debian 2.2 is old, but it's heavily tested, and stable as hell". Fair enough. But when you start upgrading core components like the kernel, you aren't running that "heavily tested stable as hell" distribution any more. You're running something of your own invention, or something which hasn't been subjected to anywhere near the same level of testing and abuse.

            It's either one or the other: You're stable but 3 years out of date, or you're up to date, but you're in the same boat as all the other distributions with regard to the amount of testing done...except that you've had to do all the hard work of upgrading, patching, etc etc yourself.
            • "Sure, Linux 2.2 is old, but it's heavily tested, and stable as hell". Fair enough, when when you start upgrading to a later brank of the kernel, you aren't running that "heavily tested stable as hell" kernel any more.... same level of testing and abuse...
              stable but 3 years out of date.... yada yada.

              If your hardware is brand new and only supported by an unstable Linux, then it isn't considered stable hardware. If your Linux is brand new and only supported by unstable GNU, then it isn't considered a stable kernel. If your server had more than one client, then you would have been foolish to do all the hard work of upgrading, patching, etc. yourself to the latest "stable" Linux. Much better to track a distribution's less volatile path towards your desired featureset, even if it isn't "up to date." Most GNU distributions have proven their priorities regarding stability versus featuritis over time. Choose one that reflects your comfort level.

              The point of Debian is that you don't have to do the hard work of upgrading and patching yourself. When pointing at an appropriate APT repository, you type "apt-get update; apt-get dist-upgrade". While Debian Potato isn't considered Stable (officially sanctioned by the Debian Project) with Linux 2.4, that doesn't mean that it isn't stable. As for the hard work of DIY, Google for "Debian Potato 2.4" and feel the pain of typing "apt-get dist-upgrade".
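
              As a sketch of what that looks like in practice (the repository URL is hypothetical; several unofficial repositories carried 2.4 kernel packages built for Potato at the time):

                # /etc/apt/sources.list -- add a line for the extra repository
                deb http://repo.example.org/debian potato main

                # then pull everything in one go
                apt-get update
                apt-get dist-upgrade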
        • Debian Stable is perfect for servers because it is stable. By definition, a 2.2 kernel is more stable than a 2.4 kernel, and arguably a 2.0.39 kernel would be best if you want to minimize uncertainties and surprises. That would explain the continuing development into 2.0.40 and the 2.2.x tree.

          Although I know of people running idle "servers" who have money to burn for bragging rights, most serious server administrators actually have a budget. Hobby or professional, that means 1000baseT and RAID usually aren't purchased until the previous component is actually a bottleneck. Does SAN really exist outside the enterprise? How many home servers are on SMP-capable motherboards (including the infamously unstable BP6)?

          Perhaps your 120GB of data really is crap, which could explain why, instead of being properly partitioned, it sits on a filesystem almost certainly an order of magnitude larger than it should be (according to any Unix administration rule of thumb). If that is the case, then maybe you won't care about your files being corrupted when your journalling FS sacrifices them for the sake of metadata consistency. I'm certain a home server has no feasible means of backing up such a filesystem, so if the data weren't crap, the filesystem should be at least as stable as ext2 with async writes disabled.

          Again, the point of Debian Stable, in case you didn't catch it, is that it is stable. Many servers are valued more for their robustness, which is a typical byproduct of maintained stability. Flaws are addressed by backporting fixes, without new features exposing new flaws. The latest featured advances in Linux-based systems are definitely useful, but contradictory to the goal of stability. Rather than mad haphazard patching, Debian suggests tracking its Stable tree, which maintains well-tested patches that don't add features. If this is a public system, then security takes a justifiable front seat to stability, so follow the security tree as well.

          As for your parting shots, Debian Stable is indeed suitable for large systems, especially when it is hard for physical RAM to reach "large" in personal-class servers. RAM too large for less-than-recent kernels to autodetect is easily made accessible by passing the value to the kernel during boot, as sketched below.
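          For example, a minimal sketch assuming LILO and 2GB of RAM (the exact value depends on your hardware):

            # /etc/lilo.conf -- tell an older kernel about RAM it can't autodetect
            image=/boot/vmlinuz
                label=linux
                append="mem=2048M"

            # run /sbin/lilo afterwards to install the updated configuration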
          Multiprocessor systems are definitely usable; the latest advances in 2.5 don't negate the previously available SMP functionality of 2.2. If you are maxing out your SMP hardware, maybe less stability would be a valuable tradeoff for improved SMP utilization. Is your home server stressing the locking schemes? Remember, Seti@home won't benefit from improved SMP; you still have to run multiple instances of the client.
          If you use "PC" in the Intel-x86 sense, then you couldn't be further from the mark. Debian Stable supports a wider range of platforms than any competing GNU distribution's latest release. Higher Debian releases support even more architectures. I don't think SGI or IBM servers could really fall within the scope of semi-pro or personal without aiming half a decade into the past.
          As for attached storage, propose a home or semi-pro serving situation with heavy write activity that would benefit from a journalling FS, so that I know where you are coming from. Most serving implies reading, not writing, of files, and serving from a read-only filesystem might even be prudent. (Notice my previous comment about partitions.)
          As for your pissant serving tasks, wouldn't stability still be the top priority? To promote further discussion, please point out which distros handle routing and firewalling better than Debian, and just how. Which features are better than stability?

          Honestly, I don't think any distro yet available will stop you from shooting yourself in the foot if you insist on running a "server" without following standard administrative guidelines... What does your fstab look like? Does your primary serving task involve MPEG2 rips of LotR, Schindler's List, and Dances with Wolves? Please choose between personal, semi-pro, and enterprise-class serving. (SAN? Give me a break.)
    • There's a reason they're [OpenBSD] going on 5 years without a remote hack.
      Funny, wasn't OpenBSD about the only Unix that turned out to be vulnerable to the OpenSSH hole in its "out of the box" configuration? Then again, I guess being an AC means you get to make shit up with no accountability...
  • by tlambert ( 566799 ) on Saturday July 13, 2002 @07:50PM (#3879354)
    When I first joined the project, dpkg was a very evil shell script that was little more than a placeholder.
    Listen, lad. I built this kingdom up from nothing. When I started here, all there was was swamp. Other kings said I was daft to build a castle on a swamp, but I built it all the same, just to show 'em. It sank into the swamp.
    "I wrote dpkg v2, a Perl script, because it was the thing that needed writing next."
    So, I built a second one. That sank into the swamp.
    "The current C dpkg is actually the third dpkg. I think there may have been parts of the Perl one that were done by someone else, but I basically wrote the Perl version, and wrote all of the C version."
    So, I built a third one. That burned down, fell over, then sank into the swamp.
    "Around the introduction of the C version, the format was changed from the old `two lines of text and two gzipped tarfiles'. I wanted something more extensible, with room for some additional out-of-band metadata. I also wanted something you didn't need Debian-specific tools to unpack."
    but the fourth one... stayed up! And that's what you're gonna get, lad: the strongest castle in these islands.

    -- Terry
  • by WolfWithoutAClause ( 162946 ) on Saturday July 13, 2002 @08:32PM (#3879480) Homepage
    Figuring you can never get too much Ian Jackson

    Grins. That was not exactly my experience. I used to work with him whilst he was doing a summer job before he went to Cambridge. He didn't actually get fired or have to resign, but let's just say that at the time he was rather more interested in security than the system administrators would perhaps have wished...

    Anyway he matured loads at Cambridge; must have done, cos he's still alive ;-)

    Bloody smart guy.

  • Debian is awesome. (Score:2, Informative)

    by Anonymous Coward
    I shouldn't feed the various trolls, but I guess I will. For one, someone mentioned OpenBSD's "no remote hole in 5 years"... Well, that has changed now. Debian can compete with OpenBSD directly in the realm of security, because Debian backports EVERYTHING. They audit their code. No, they don't tout this fact like Theo does, but Debian is definitely covering its bases. And you can be sure they will release an advisory AND a patch in a timely manner. Debian maintainers are some of the most talented guys out there, and highly motivated.

    Someone said that Debian was dying because it hasn't made a stable release... Well, what a clueless troll you are! Run Unstable. It is _cutting edge_.
    I have run Debian since 1.3, and for most of those years I have used the unstable branch exclusively. I have been burned by it maybe 3 times: 3 bugs bad enough to affect my life. And every bug was fixed within a day. Let's see m$ or anyone else show that level of dedication.

    Debian is very much alive and well, thank you very much, and I will continue to use it for years to come; should they stop maintaining it, I will be happy to contribute, just to keep it going. Security, current packages, and reliability. Not bad for free software.
    • by Anonymous Coward
      Debian can compete with OpenBSD directly in the realm of security
      Indeed, since they both run OpenSSH, they're both vulnerable to exactly the same remote root exploits.

  • 'sup with that session-id? Shouldn't you strip it from the link...
  • Who founded Debian, then? I'm confused.
  • rpm of course isn't anywhere near as snazzy as dpkg: you basically can't do remote, incremental upgrades without reboot

    I always thought that it was the "losers" in the Debian userbase who don't know anything, but it seems that it even applies to project leaders.
    Dpkg and rpm can do just about the same things.
    You can use a frontend for them to handle the dependencies, like apt or urpmi.
    With rpm you can do incremental upgrades. I've been running Mandrake Cooker for about a year and a half, and it mostly works (OK, it's a development version of Mandrake).
    Rpm can do post-install scripts and all the rest.
    And you can upgrade from a gcc-2.96 distro to a gcc-3.1 distro.
    It annoys me to hell when I read messages from Debian users on forums or on Usenet saying "rpm sucks" and then trying to explain why. Now, if even project leaders talk this kind of shit, it explains to me why the Debian userbase sucks.
    Well, I can only assume he hasn't seen rpm in 5 years or so...
    That's the only excuse I can think of.
    • Are rpms still such a pain in the ass to create? With dpkg, all you need is a "make clean" and "make install" target, and 2 minutes with dpkg-make and you've got a deb of your own software.

      And if you're packaging something that uses libraries, dpkg-buildpackage will automatically figure out what libraries you're using, what packages provided those libraries, and then automatically add them to the package's dependency list.

      Combine that with the ability to easily make your own sources for apt, and setting up many workstations is as easy as creating one deb file that depends on all the packages you want on a workstation. Just add your local source to /etc/apt/sources.list on a new workstation, apt-get update, and apt-get install ourworkstationload, and it downloads the latest version of everything and installs it.
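
      As a sketch of the local-source side (a minimal setup; the directory, hostname, and the ourworkstationload package are illustrative):

        # on the repository host: build the package index apt needs
        cd /var/local/debs
        dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

        # on each workstation, add the repository to /etc/apt/sources.list:
        #   deb http://repohost/debs ./

        # then install the meta-package
        apt-get update
        apt-get install ourworkstationload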

      No hassle.

      apt and dpkg rock compared to rpm.
      • Are rpms still such a pain in the ass to create? With dpkg, all you need is a "make clean" and "make install" target, and 2 minutes with dpkg-make and you've got a deb of your own software.

        In essence, Yes.
        You make a specfile, which mostly consists of macros like %configure, %make, and %makeinstall. Of course you have to specify other metadata, like License, Source, Patch1, Patch2, and URL. You make the filelist yourself, choosing which files end up in the package.
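
        A minimal sketch of such a specfile, using the Mandrake-style macros mentioned above (the package name and paths are made up for illustration):

          # hello.spec -- minimal example specfile
          Name:       hello
          Version:    1.0
          Release:    1
          Summary:    Example package
          License:    GPL
          Group:      Applications/Text
          Source0:    hello-1.0.tar.gz
          BuildRoot:  %{_tmppath}/%{name}-buildroot

          %description
          An example package built from a minimal specfile.

          %prep
          %setup -q

          %build
          %configure
          %make

          %install
          %makeinstall

          %files
          %{_bindir}/hello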

        And if you're packaging something that uses libraries, dpkg-buildpackage will automatically figure out what libraries you're using, what packages provided those libraries, and then automatically add them to the package's dependency list.

        Yup, on Mandrake rpm uses the scripts /usr/lib/rpm/find-requires and find-provides for this. You can also manually add Requires or Provides entries to the specfile, like Provides: smtpdaemon.
        A difference is that rpm mostly uses files from libraries as dependencies, while dpkg uses packages. In the end it should work out the same.

        Combine that with the ability to easily make your own sources for apt, and setting up many workstations is as easy as creating one deb file that depends on all the packages you want on a workstation. Just add your local source to /etc/apt/sources.list on a new workstation, apt-get update, and apt-get install ourworkstationload, and it downloads the latest version of everything and installs it.

        Well, if you use apt together with rpm, you can do just the same, I suppose.
        If you use urpmi with rpm, you can use genhdlist, which builds an hdlist.cz file from the rpm headers. You can then run "urpmi.addmedia name ftp://ftp.bla.org/RPMS with hdlist.cz" and install packages from that repository.
        And for the fake package, you can make a specfile without a real tar.gz or filelist, but with your own defined dependencies, as sketched below.
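
        A sketch of such a dependency-only "fake" specfile (all names are illustrative):

          # workstationload.spec -- empty package that only carries dependencies
          Name:       workstationload
          Version:    1.0
          Release:    1
          Summary:    Pulls in our standard workstation packages
          License:    GPL
          Group:      System/Configuration
          BuildArch:  noarch
          Requires:   openssh-clients, mozilla, XFree86

          %description
          Empty package whose only purpose is to drag in its Requires list.

          %files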

        apt and dpkg rock compared to rpm.

        There you go again.
        You can compare dpkg and rpm.
        And you can compare apt+dpkg and apt+rpm or urpmi+rpm.
        You cannot compare apt to rpm, in the same sense that you cannot compare apt and dpkg.
        • Q: Are rpms still such a pain in the ass to create?

          A: In essence, Yes.

          Duh, I meant No. And Yes, they are rather easy to build.
          I never built deb packages though, so I can't really compare them.
        • apt does appear to be easier to use than rpm. Not sure about urpmi.

          The problem is dependency resolution. rpm itself doesn't handle it, though front-ends such as up2date attempt to. Whether because of the care with which the pieces are built or for some other reason, rpms seem to break with dependency problems more often. (Of course, apt-get assumes an Internet connection, etc.)

          OTOH, as of Friday, Debian was having problems with validity checking (or possibly with packages?). To be specific, the fileutils package has been refusing to update itself due to validity-check violations. Now, this may well be because I don't really understand the system. I don't know. And I don't know how to find out.

          OTOH, I've had this kind of problem before with rpms, though usually not with anything that seems to be an essential part of the system.

          Still, with apt-get I was able to switch from Progeny to Debian testing to Debian stable and back to Debian testing. I was never able to do anything equivalent with rpms.

          To my mind, both systems have advantages. I haven't decided which I prefer. But there are a lot more packages available as rpms, and sometimes those are the ones I need. Compiling will usually work, but if there are unsatisfied dependencies that conflict with other things already installed... OUCH!

          What I'm really hoping for is that the apt-get for rpms becomes more standardized. (And just why is alien being removed from the next version of Red Hat?)

"Don't tell me I'm burning the candle at both ends -- tell me where to get more wax!!"

Working...