Is It Time To Split Linux Distros In Two? 282

snydeq writes "Desktop workloads and server workloads have different needs, and it's high time Linux considered a split to more adequately address them, writes Deep End's Paul Venezia. You can take a Linux installation of nearly any distribution and turn it into a server, then back into a workstation by installing and uninstalling various packages. The OS core remains the same, and the stability and performance will be roughly the same, assuming you tune the system along the way. Those two workloads are very different, however, and as computing power continues to increase, the workloads are diverging even more. Maybe it's time Linux is split in two. I suggested this possibility last week when discussing systemd (or that FreeBSD could see higher server adoption), but it's more than systemd coming into play here. It's from the bootloader all the way up. The more we see Linux distributions trying to offer chimera-like operating systems that can be a server or a desktop at a whim, the more we tend to see the dilution of both. You can run stock Debian Jessie on your laptop or on a 64-way server. Does it not make sense to concentrate all efforts on one or the other?"
This discussion has been archived. No new comments can be posted.


  • Nonsense (Score:5, Insightful)

    by lorinc ( 2470890 ) on Monday September 08, 2014 @03:45PM (#47855889) Homepage Journal

    It's up to the distro to focus on what they want and offer variants for desktop, server, mobile, embedded, etc. None of this has anything to do with Linux, which is, you know, just a kernel.

    • Re:Nonsense (Score:5, Insightful)

      by MightyMartian ( 840721 ) on Monday September 08, 2014 @04:20PM (#47856247) Journal

      So many of the kernel's functions and optimizations can be altered now, there strikes me as no reason to ship entirely different kernels. Who does that any more? Even Windows kernels are largely the same, with optimizations triggered either by the registry or by the edition.

    • Well, the title refers to "Linux Distros" and a distro is, you know, just a distro...
    • Re:Nonsense (Score:5, Insightful)

      by Existential Wombat ( 1701124 ) on Monday September 08, 2014 @05:31PM (#47857061)

      How come we can’t mark the OP article Troll?

  • No Need (Score:5, Informative)

    by darkain ( 749283 ) on Monday September 08, 2014 @03:45PM (#47855891) Homepage

    This is already done. For instance, I personally use Turnkey Linux for my servers and Debian Linux for my workstations. Both of these use Debian as their back end repository system, but Turnkey Linux has a system setup tuned specially for working within a virtualized server environment, whereas Debian Linux is more general purpose (which is what a workstation needs)

    • by Zocalo ( 252965 )
      I would say the same thing. The user can currently either choose a different "sub-distro" based on their primary flavour of choice, opt for a desktop/server specific spin, or just accept the current one distro to rule them all but just install the necessary packages for what they want approach. There really shouldn't be any need to split a Linux distro (or BSD distro for that matter) in two for this (and why stop there, why not a phone/tablet optimised version, or one for embedded devices...?) - just prov
  • A Betteridge No. (Score:5, Interesting)

    by TechyImmigrant ( 175943 ) on Monday September 08, 2014 @03:47PM (#47855913) Homepage Journal

    The issue that brought systemd into existence (coordinated handling of frequently reconfigured hardware, as things are plugged in and out and connections go up and down) is no different from hardware that reconfigures frequently due to power saving, with things turning on and off. This just isn't a big issue in server chips right now, but it will be in the future, as advanced power saving techniques move from mobile to desktop to server. Then the split will look silly.

  • No (Score:5, Insightful)

    by blackomegax ( 807080 ) on Monday September 08, 2014 @03:48PM (#47855919) Journal
    Splitting upstream would be disastrous. (Desktop would lose the behemoth of code contributions from Red Hat, for the most part.) Just leave it to the distros to do the 'splitting', e.g. Ubuntu Server vs. Desktop.
    • Yep. If they were officially and permanently split, desktop linux would first stagnate, and then eventually cease to exist. For as good as linux is for desktop use, there just isn't enough interest to maintain it as a purely desktop system. Otherwise the oft predicted 'year of linux on the desktop' would have happened long ago. Because Linux is popular as a server OS, the community gets the benefits of just having to maintain a few modules on top of it to make it into a perfectly serviceable desktop OS.

    • by nhaines ( 622289 )

      But that's the point. Ubuntu uses the same kernel on both its desktop and server (and phone/tablet) installs. The only difference is the default selection of packages.

    • by Rich0 ( 548339 )

      Splitting upstream would be disastrous. (Desktop would lose the behemoth of code contributions from Red Hat, for the most part.)

      Just leave it to the distros to do the 'splitting'.

      E.g. Ubuntu Server vs. Desktop

      Yup. The fact is that virtually all the investment in Linux is for running it as a server. That is where the money is.

      The desktop stuff just layers some applications on top, benefiting from the stable platform underneath. The more the desktop diverges, the less support it is going to tend to get, since nobody really invests in linux for the desktop seriously.

  • More Forks! (Score:5, Insightful)

    by Prien715 ( 251944 ) <agnosticpope@TWAINgmail.com minus author> on Monday September 08, 2014 @03:49PM (#47855925) Journal

    Yes, more fragmentation in the Linux community will make things even more usable for your average user! He should write a custom package manager for servers and another for clients, because we don't have enough of those. Let's fork the kernel too -- or at least make a completely different fork of GLIBC so we'll need to recompile every package we want to install from source -- as God intended. The year of the Linux Desktop is here!

  • by Imagix ( 695350 ) on Monday September 08, 2014 @03:51PM (#47855949)
    Betteridge's law of headlines. No. The article doesn't say a whole lot. It just asserts that "servers" and "desktops" are different, and appears to mildly dislike systemd. It tries to assert that the security concerns are different on the desktop and on the server, but doesn't provide a strong argument for that assertion (or really for any assertion it makes).
    • In modern operating systems, the differences are in optimizations. A desktop might want more ticks dedicated to foreground GUI apps (though I'm not even sure that matters in the age of multicore processors with gigs of RAM), whereas a server might want to dedicate more resources to I/O. But in most cases, at least with any software and Linux distro I've seen in the last decade, much of that can be accomplished by altering kernel and daemon parameters.

      Windows does the same thing. The base kernel
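      To give a concrete flavour of those knobs, here is an illustrative sketch (the values are examples, not recommendations, and writing them requires root):

```shell
# show a commonly tuned VM knob
sysctl vm.swappiness

# server-ish leanings: swap less, allow more writeback buffering
sudo sysctl -w vm.swappiness=10
sudo sysctl -w vm.dirty_ratio=40
```

      Anything set this way lasts until reboot; persisting it means an entry in /etc/sysctl.conf or a drop-in under /etc/sysctl.d/.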

    • by davecb ( 6526 )
      A tale told by an idiot, full of sound and fury, signifying nothing.
    • Well, for me the year of the Linux desktop has been here since 2007. I just hope that not too many people get the same idea and decide to switch to GNU/Linux in the future, because that would mean I'd need to give free tech support to more people, and I have no interest in or time for that. It is better if GNU/Linux continues to stay under the radar of casual users, moms and morons (I was about to say "greedy business men" too, but then I realized that Android probably also counts as "linux").

  • Keep em together (Score:5, Insightful)

    by myrdos2 ( 989497 ) on Monday September 08, 2014 @03:51PM (#47855953)

    I've always been impressed with how rock-solid, and well, server-like the Debian desktop has been. I wouldn't want to give that up - it's simple, it's clean, it's ultra-reliable. If I want to run a website or allow remote access, there's really not that much to learn. Compare that to the complexity of Windows server.

    Is this split actually a valid suggestion, or more anti-systemd rhetoric? If there was no such thing as systemd, would you even care about splitting?

  • by Anonymous Coward on Monday September 08, 2014 @03:52PM (#47855965)

    Allow me to make the following naming suggestions

    Desktop:
    Linux Standard
    Linux Pro
    Linux RT

    Server:
    Linux Storage Workgroup
    Linux Storage Standard
    Linux Server Foundation
    Linux Server Essentials
    Linux Server Standard
    Linux Server Datacenter

  • by thule ( 9041 ) on Monday September 08, 2014 @03:53PM (#47855977) Homepage

    RedHat 7 ships with systemd. But, but, but, we all know that RedHat totally and completely abandoned the desktop years ago.

    So we have two options. Either systemd is not just for desktops, or RedHat never completely abandoned the desktop. Either way, there is no need to split distros. RedHat does provide a nice tool called 'tuned' that helps tweak kernel and system params for the desired load.
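    For illustration, the 'tuned' workflow is roughly the following (profile names vary between releases, so treat these as examples rather than exact invocations):

```shell
# list the profiles shipped with this release
tuned-adm list

# switch to a throughput-oriented server profile
tuned-adm profile throughput-performance

# confirm which profile is now active
tuned-adm active
```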

    • by armanox ( 826486 )

      Explains the whole RHEL-Desktop, doesn't it?

    • by caseih ( 160668 )

      Using CentOS 7 on my desktop right now. It supports modern hardware, and I have a nice, usable desktop environment. I'll never use Gnome 3, so the frozen version number won't bother me any. Systemd works quite nicely for the desktop, and I can see how it will be a good thing on servers too.

  • by JustShootMe ( 122551 ) <rmiller@duskglow.com> on Monday September 08, 2014 @03:54PM (#47855995) Homepage Journal

    I am a linux sysadmin, and many of the packages required for desktop use not only don't apply to me, but are pretty well useless. I would love to see a distribution where any dependency on X11 was not only stripped out - but *compiled* out. I would love to see a distribution where systemd was not getting its mitts into everything.

    But it's not only that, it's tuning. I discovered that Ubuntu's default scheduler settings on a Dell R620 with 384G of RAM and a nice beefy RAID 10 array are actually the *worst* settings for this kind of system. Everything else I tried - other schedulers, tuning CFQ, etc., they all led to better write throughput. Which leads me to wonder how many processor and other cycles are wasted because sysadmins just install with the default settings and hope for the best?
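    As a sketch of that kind of tuning (device names here are illustrative; writing to sysfs needs root, and the setting only lasts until reboot unless persisted via the elevator= kernel parameter or a udev rule):

```shell
# show the available I/O schedulers per block device;
# the one in [brackets] is currently active
for dev in /sys/block/*/queue/scheduler; do
    printf '%s: ' "${dev%/queue/scheduler}"
    cat "$dev"
done

# switch sda to the deadline scheduler for this boot (root required)
# echo deadline > /sys/block/sda/queue/scheduler
```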

    There needs to be a distro where the adults are in charge. I'd even build it if I had time, and I most certainly would be willing to put some time into working on one.

    • Two quick points: 1) A good system admin would know that default settings suck. 2) Change settings based on actual needs.

      TL;DR version:

      Default settings suck for just about everything, but are usually for "common scenarios" that aren't common any longer. I see people still setting up Linux as if it were for a 384 MB RAM server running on a single HD (or RAID0), when the new system is running multi-GB RAM, an SSD SAN, etc. Not only are the original assumptions wrong, the whole thing is screwed because there is

    • by lorinc ( 2470890 ) on Monday September 08, 2014 @04:22PM (#47856271) Homepage Journal

      I am a linux sysadmin, and many of the packages required for desktop use not only don't apply to me, but are pretty well useless. I would love to see a distribution where any dependency on X11 was not only stripped out - but *compiled* out. I would love to see a distribution where systemd was not getting its mitts into everything.

      It's called gentoo.
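      For instance, a Gentoo make.conf along these lines builds X11 and systemd support out of everything at compile time (a sketch; the GTK/Qt flag names are illustrative of the idea):

```shell
# /etc/portage/make.conf (fragment)
# strip X11 and systemd support out of every package at build time
USE="-X -gtk -qt4 -systemd"
```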

    • Funny thing.. Back when I was a day-to-day administrator for Solaris (2.4-2.8), the kernel was optimized for the desktop. AIX, at roughly the same time, was optimized for the desktop.

      And shoot.. Even now that AIX is a "server" only operating system, tuning the kernel is still a requirement. Whatever your settings are is kind of irrelevant in the grand scheme of things since no one can tailor a kernel that is perfect for everyone. The first day you roll out your great distro, people will be complaining abo

      • Where did you get the idea that this was about forking the kernel?

        Linux (at least in this context) is not only the kernel, it's the whole ecosystem - whatever you call it. I don't think the person who wrote this article was arguing that the kernel should be forked. The only time I'll seriously consider that is if they try to put dbus into it.

        I don't see anything wrong with the kernel that a little tuning can't fix. I do see plenty wrong with lots of distributions.

        • Because the article was about device support in the kernel and systemd..

          There are already a lot of server centric distributions. Ubuntu is just not a good choice for server side. That says nothing about the ecosystem in general. It doesn't even really say anything about Debian, which is Ubuntu's base.

          • Sometimes you use what the user demands/needs rather than what is the best choice technically. I'm not too fond of Ubuntu on the server either, but there are specific reasons for it in our particular environment that I can't get around.

    • Making the assumption that every server out there primarily relies on write throughput?

      Default settings may not apply to your scenario. News at 11.

      • I was using it as an example to show that server workloads are fundamentally different from desktop workloads.

    • by Anrego ( 830717 ) * on Monday September 08, 2014 @04:47PM (#47856565)

      This is actually a major benefit of gentoo, and one of the reasons I run it on my servers (they are all hobby-ish, I get that gentoo in production is probably a bad idea).

      Trying to run a Debian or similar server, you inevitably end up with a bunch of X packages because some random tool comes with a built-in GUI and no one bothered to package a non-X version.

      It extends even beyond X or no-X. You find yourself with database drivers for all the major (and some minor) databases regardless if you use any of them, and loads of other cruft.

      This is obviously part of the tradeoff for a system that just works, but it's annoying when some gnome library breaks the update on a _server_.

      As a side note, it's becoming increasingly frustrating to be a non-systemd user. I've had to re-arrange a tonne of packages as stuff switches. I know systemd is inevitable, but I'd like to hold out just a little longer :(

      • If there was a way to package an already built gentoo distribution for rollout, I would seriously consider it. (and there may be, I haven't used Gentoo in a long while)

        It's not an option for the huge company I work for, though. Most companies want Debian or RPM based systems.

        • by Rich0 ( 548339 )

          You can build and install binary packages on Gentoo. That is actually the approach that most admins of Gentoo-based datacenters take. Granted, it isn't nearly as popular as Debian/CentOS for obvious reasons. However, there are niches where it is used, and a perfect one is when you have a need to be able to tweak compile-time configuration of many packages (like stripping out X11). Gentoo Hardened has been popular for a long time as well (though other distros have matured quite a bit in this space - Gent

      • by Rich0 ( 548339 )

        As a side note, it's becoming increasingly frustrating to be a non-systemd user. I've had to re-arrange a tonne of packages as stuff switches. I know systemd is inevitable, but I'd like to hold out just a little longer :(

        There is a reason it is popular. :) I'm sure OpenRC will be as supported as it can be on Gentoo for a long time, but we can't control what upstream does (Gnome, etc).

        At this point SystemD on Gentoo is pretty mature, so you probably should at least experiment with it. I suspect that within a year sysvinit will no longer be in the stage3s and you'll just pick an init during install the way you pick a syslog or kernel or bootloader. It has gotten to the point where installing systemd from a stage3 is as sim

    • by Zenin ( 266666 )

      Better question: It's 2014, why the hell are you still manually tuning kernels?

      I'm not saying you don't need to for Linux...I'm asking why you or anyone else feels this is an acceptable requirement? Is it just to keep Linux sysadmins employed?

      Sure, for some incredibly unusual workloads we might not be able to expect the kernel to self-tune, but for the other 95% of typical uses the kernel really should be able to tune itself, and do so far better than any human.

      Seriously, why do people put up with schedule

    • I am not a linux sysadmin. Is there something that cannot be accomplished with a command line?

      Compile systemd without getting its mitts into everything, or don't use it. Compile an alternate scheduler.

      Can you not do this? The crux of the argument seems to rely on you not being able to do this at all. Otherwise, the answer is to be in charge of the command line.

  • by belthize ( 990217 ) on Monday September 08, 2014 @03:55PM (#47855997)

    The byte-wise difference between a desktop, laptop, and server based on the same distro is a tiny fraction of a percent. It's mostly some minor tuning and chkconfig tweaks. The difference between an optimal desktop and an optimal server is in the choice of distro.

    Either pick a distro based on some sense of use-case-specific optimization, or standardize on one distro for supportability (at the cost of optimization). Forking a distro is the worst solution.
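    On chkconfig-style systems, those tweaks are a one-liner per service (the service name is just an example; on systemd systems the equivalent is systemctl disable):

```shell
# see what starts in each runlevel
chkconfig --list

# keep desktop-ish services out of a server's boot (root required)
chkconfig bluetooth off

# systemd equivalent
systemctl disable bluetooth.service
```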

  • by Noryungi ( 70322 ) on Monday September 08, 2014 @03:55PM (#47855999) Homepage Journal

    A server is just a bigger laptop. Don't laugh: technologies such as virtualization, para-virtualization, SSD, dual-type disk drive HDD+SSD, low-power CPUs, multiple high-density CPU cores and even high-end graphical cards can be found in both types of PC (Think OpenCL on the server, and Unreal Tournament -- or whatever the shoot'em up du jour is -- on the laptop for that last one).

    Linux and BSDs make this possible, even trivial. Heck, these days a lot of people even test entire server platforms or AJAX applications in virtual machines on their laptop - I know I do. Ideally, all machines should be both servers and personal machines.

    I want my operating system to be flexible and able to adapt to different computing platforms. I want something smart enough not to push a GUI down my throat if I don't need it. Improvements on one platform will also benefit the other. Having a laptop with 24 to 48 CPU cores may still be science fiction today. But it won't be tomorrow. On the other hand, building a fast SSD-only petabyte server using nothing but laptop SSDs would let you cram in way more data... at a lower price than those slow SATA disks.

    In other words: splitting Linux is simply a bad idea. Thanks, but no thanks.

    • A server is just a bigger laptop.

      Not to mention, today's server spec is comparable to tomorrow's laptop spec.

      • Mm, so dual 18-core Xeons, 64 to 96 GB of primary memory, and 24-48 TB of spinning disk? I think not.
        • I've still got tape drives hooked up to a four-CPU Sun system with 8GB memory and less than 1TB of disk in total - a monster in its day, but outperformed by many laptops in this day.
    • You won't find virtualisation in any big HPC or Hadoop cluster - it's fast Xeons, tons of RAM, and more or less DASD, plus possibly a lot of pro graphics cards used as compute engines - and that's not counting the radically different networking designs/tech used in clusters.
    • Splitting is a bad idea why? Install from the desktop repo, or the server repo, and your problems are solved, right?

      Are they not solved? Well use something like Gentoo where the compile is customized for your environment. There is no Gentoo? Why?

      Defend your position.

    • by dbIII ( 701233 )

      A server is just a bigger laptop

      On exactly that note, I've put FreeBSD 10 on an early netbook, since such a server OS can be set not to run a lot of stuff at once on a low-memory 32-bit system. IMHO the difference between a server and a desktop software distribution is that there is more control available in the former, which actually makes it better for low-end devices.
      Another amusing example of the server/desktop split is offline wikipedia on ereaders - despite having very little CPU power there's plenty to run a we

  • Hey! Why not... (Score:4, Interesting)

    by rnturn ( 11092 ) on Monday September 08, 2014 @04:01PM (#47856051)

    ... a `Professional' and `Home' edition as well?

    Seriously... is this what some people believe is holding back wider Linux adoption? There's already more than enough FUD in the press and on the web in articles about Linux providing too many choices now without adding a server and desktop edition for the naysayers to complain about.

  • Makes sense (Score:5, Insightful)

    by rijrunner ( 263757 ) on Monday September 08, 2014 @04:04PM (#47856077)

    Last week, the complaint was that systemd was making Linux look like windows. This week, the plan is to adopt the Windows server/workstation design philosophy as a fix to the problem..

    I saw a lot of assertions in the article, but none seemed to actually have any data behind them. Nor is it really apparent how a fork would leave either branch the critical number of developers needed to sustain it. That is aside from the fact that the two kernels would have about 95% overlap in code base, yet would separately need to maintain their own build environments and development paths.

    Let us look at one of the assertions:

    "However, they're also demanding better performance for desktop-centric workloads, specifically in the graphics department and in singular application processing workloads with limited disk and network I/O, rather than the high-I/O, highly threaded workloads you find with servers. If Linux on the desktop has any real chance of gaining more than this limited share, those demands will need to be met and exceeded on a consistent basis."

    How would a kernel fork address this? If the need is there now, in what way is the current environment stopping the developers from releasing code to address these issues?

    • by musicon ( 724240 )

      This doesn't have anything to do with a kernel fork; indeed, in the Windows world you're using the same kernel and drivers regardless of workstation, server, etc.

      This has more to do with the support systems in place, eg, using standard init scripts, leaving logs in text format, etc.

  • Already Happened (Score:2, Interesting)

    by Anonymous Coward

    Linux already split into two different versions a few years ago.

    The "server" version is called GNU/Linux, and encompasses the hundreds of distributions designed to look and act like a classic Unix workstation.

    The "end user" versions are Android and Chrome OS.

    • by armanox ( 826486 )

      Except they really don't act like UNIX, do they? I avoid GNU when I can, they're a pain in my rear at this point.

      • by rwa2 ( 4391 ) *

        Not only has this already happened, but the server-side of Linux looked at the new features introduced by Android / ChromeOS and decided they wanted some of that too.

        So now you have CoreOS formed based on the features of ChromeOS as a nice way to run and maintain Docker containers in a server cluster. So much for forking desktop and server Linux.

        http://en.wikipedia.org/wiki/C... [wikipedia.org]

        You can compile and run GNU utilities on Android (and likely ChromeOS as well).
        https://play.google.com/store/... [google.com]

        granted, it's in a

  • by Mr_Wisenheimer ( 3534031 ) on Monday September 08, 2014 @04:14PM (#47856165)

    After all, you can take a Windows server and essentially turn it into a desktop OS with a little tweaking. The problem with Linux is that it is very fragmented, which is Linux's greatest strength and its greatest weakness.

    Linux is great for technologically savvy users who want to customize it for a specific role. It is not so great for users who lack the technical expertise or the time to administer it. Linux evangelists have long claimed it would take large amounts of desktop user share from Windows; you still see some of those around, but they tend to be quieter. The Unix OS that took away Windows market share was OS X, because, like Windows, it has a unified, consistent codebase and is developed to be easy for end-users.

    Splitting up Linux would not suddenly make its server or workstation uses stronger. Most technical end users of Unix (that I have known) have switched to OS X or some combination of a Windows and Unix environment (Cygwin, or SSH to a UNIX/Linux box). Paid development and a unified codebase simply have advantages that Linux will probably never be able to match. All splitting up Linux would accomplish is dividing already scarce developer resources.

    People should love (or hate) Linux for what it is: a fragmented mess for the average end user that is eminently hackable and customizable to fill any possible role by experienced users who are willing to put in the time and effort.

  • It's already done (Score:3, Interesting)

    by radioact69 ( 1220518 ) on Monday September 08, 2014 @04:16PM (#47856187) Homepage
    Install Ubuntu Server 14.04.1 and you have a fairly minimal server OS. Do 'sudo apt-get install kubuntu-desktop' and suddenly it's a desktop OS. Going back isn't quite so simple, but you can 'sudo apt-get remove kubuntu-desktop' to get most of the way there.
  • by Peter H.S. ( 38077 ) on Monday September 08, 2014 @04:18PM (#47856217) Homepage

    Paul Venezia is just another sore systemd hater who can't accept that all major Linux distros are changing to systemd.

    That he thinks systemd is mostly for desktops just shows how much he has lost contact with Linux. There simply isn't any commercial interest in keeping SysVinit or even Upstart alive. The market would have reacted long ago if any companies were queuing up to pay for new Linux SysVinit releases. They are not.

    Several companies have even switched to using systemd even though it wasn't officially supported on their distro yet, simply because systemd offers so many advantages over legacy script based init-systems.

    There is no coordinated non-systemd development taking place in the Linux community at the moment. The few non-systemd distros left haven't even begun to cooperate. So it looks unlikely right now that any non-systemd distros of note will survive into the next decade.

    There is a reason why commercial Linux vendors like Ubuntu and Red Hat support desktop editions even though they don't generate any money; without the desktop you start to lose developers. It is that simple. That is also the main reason why the BSDs use GPL'ed DEs even though their sponsors can't resell them as closed-source software like the rest of the core BSD components; without a DE, the BSD variants would have even fewer developers.

    So it is pretty much distro suicide to split a distro up in two different and incompatible versions, one for the desktop and one for the server.

    • by Rich0 ( 548339 )

      The few non-systemd distros left haven't even begun to cooperate. So it looks unlikely right now that any non-systemd distros of note will survive into the next decade.

      What non-systemd distros are even out there? I hear slackware doesn't support systemd. Does anybody else not support it (well, of any of the significant distros)?

      People bring up Gentoo as a non-systemd distro, but at this point systemd works about as well on Gentoo as it does on just about any distro, and about as well as sysvinit on Gentoo. It just doesn't come pre-installed (but my guess is that within a year neither will sysvinit - the Gentoo way in these cases tends to be to let the user make their c

      • by Peter H.S. ( 38077 ) on Tuesday September 09, 2014 @03:59AM (#47860079) Homepage

        There are still some Debian derivative distros that haven't changed to systemd yet, since Debian hasn't released "Jessie" yet - the first Debian stable release with systemd as the default init.

        There is also a handful of other, rather small distros (forks of Gentoo and similar). But the basic problem with all those non-systemd distros and the systemd opponents is that they seem unable to attract developers, and they don't cooperate either. They can barely maintain basic forks of udev, so when udev gets kdbus support, forks like "eudev" will begin to really diverge from "udev".

        The entire non-systemd infrastructure will start to decay further when no big distros are supplying developers to maintain it. ConsoleKit has basically been bit-rotting for years now, and the systemd opponents haven't even started to _plan_ a replacement.

        At best, the non-systemd distros will have crude Desktop Environment support. They will also have problems with Wayland support. Without DE support, it will become even harder to attract developers.

        As things are looking now, I don't think any non-systemd distros will survive for long. IMHO, they only have themselves to blame for that, they have focused all their energy on hate attacks on named open source developers and negative campaigning against systemd, instead of focusing on making a constructive alternative.

  • by mx+b ( 2078162 ) on Monday September 08, 2014 @04:18PM (#47856219)

    One of my favorite distros is OpenSUSE. In its repos it has several different kernels - there is a default one, but also ones for virtualization and a desktop-specific one. I always figured they had the different kernels tuned/tweaked for the different needs. If you want to switch from a desktop to a server or vice versa, simply install/uninstall the packages you need, including switching the kernel, then reboot and you're done.

    I don't know enough about their tweaks to know if the desktop vs server kernel makes a difference, but I imagine it does or at least could in the right circumstances. I think the power of being able to change around some packages and get the effect you want is better than fragmenting the distro. I appreciate having access to all the features and being able to mix and match.
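    If memory serves, swapping flavours is just a package operation (the package names here are from OpenSUSE repos of roughly this era and may differ on your release - check 'zypper search kernel-' first):

```shell
# install the desktop-tuned kernel flavour, then reboot into it
sudo zypper install kernel-desktop

# optionally drop the generic flavour afterwards
sudo zypper remove kernel-default
```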

  • by wytcld ( 179112 ) on Monday September 08, 2014 @04:20PM (#47856235) Homepage

    I'm friggin tired of installing Linux as either server or workstation and finding a bunch of stuff that's oriented to making a laptop work well. I want to be able to do a clean install that by default has no support for Bluetooth or wifi or dhcp client, let alone a propensity to rewrite /etc/hosts or handle any aspect of networking in anything but a hand-configured way. Also, even if systemd's part of the distro, standard text logs should be there by default, as well as cron and a working /etc/rc.local file.
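    For what it's worth, keeping plain-text logs alongside (or instead of) the binary journal is a small change, assuming rsyslog or another classic syslogd is installed:

```shell
# make journald hand every message to the traditional syslog daemon
sudo sed -i 's/^#\?ForwardToSyslog=.*/ForwardToSyslog=yes/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald
```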

  • by Livius ( 318358 ) on Monday September 08, 2014 @04:23PM (#47856287)

    ...is for software to do one thing, and do it well.

    A computer does not (usually) need to be both a server and a desktop, though perhaps desktop versus server is more a matter of the window manager than of the whole distribution.

    • By the extension of that first thought, it means you can turn a distro into either desktop or server versions by removing the pieces that do one thing well and adding others.

      • by Rich0 ( 548339 )

        By the extension of that first thought, it means you can turn a distro into either desktop or server versions by removing the pieces that do one thing well and adding others.

        I mostly agree, but there have been some radical departures. You have stuff like android/chromeos which are huge departures from the typical linux server. You also have stuff like CoreOS which is a huge departure from tradition and which wouldn't work very well as the basis of a desktop distro (unless you're talking about serving virtual desktops - which obviously has a blend of server/desktop-like needs).

  • by raymorris ( 2726007 ) on Monday September 08, 2014 @04:52PM (#47856605) Journal

    98 of the top 100 fastest supercomputers in the world run Linux. Most phones also run Linux. See also consumer electronics of all kinds - TVs, routers, webcams, consumer NAS drives ... Linux works everywhere. As Linux has been installed everywhere over the last few years, Microsoft has gone from a monopoly, the 800 pound gorilla, to trying to catch up in order to survive.

    There is a reason for this. Linux didn't make any assumptions about what hardware people were going to use next week. Even the architecture could be whatever you wanted that day - DEC Alpha, Blackfin, ARM (any), Atmel AVR, TMS320, 68k, PA-RISC, H8, IBM Z, x86, assorted MIPS, Power, Sparc, and many others.
    Microsoft built specifically for the desktop, and supported one platform - x86. Suddenly, most new processors being sold were ARM, and screens shrank from 23" to 4". Microsoft could only scramble and try to come up with something, anything that would run on the newly popular ARM processors, and ended up with Windows RT. Linux kept chugging along because it had never made any assumptions about the hardware in the first place. To start making those assumptions now would be stupid.

    We don't know whether smart watches will be all the rage next year, or if cloud computing will take off even more than it has, or virtualization, or a resurgence of local computing with powerful, battery-friendly APUs and roll-up displays. To specialize for "desktop" hardware or "server" hardware would be dumb, because we don't know what those are going to look like five years from now, or if either will be a major category. How many people here remember building web sites for WebTV? How well did that pay off, investing in building a WebTV version, then a Playstation version? The sites that weathered these changes best were fluid, adaptive sites that don't CARE what kind of client is being used to view them - they just work, without being tailored to any specific stereotype of user.

  • The only benefit to such a model is the ability to charge more in licensing fees for the server version. Obviously that makes no sense here. Although I will say this: any distro that is both frequently used as a server and is eyeballing the abomination that is systemd damn well better offer an init.d version.
  • Huh? (Score:4, Insightful)

    by c ( 8461 ) <beauregardcp@gmail.com> on Monday September 08, 2014 @04:53PM (#47856619)

    I assume that this is yet another click-bait blog-spam article, because I can't imagine that anyone who knows jack about Linux distributions wouldn't be aware that server and desktop variants of various distributions have been and still are done.

    More to the point, anyone who wanted it done that way would've or could've already done it. That the more popular distros don't generally make the distinction or don't emphasize it should be taken as a fairly solid answer to the question posed in the headline.

    • Several years ago, a kernel developer submitted a patch that greatly increased Linux performance for desktop-oriented tasks; but the patch was rejected because it harmed server performance. In that case, there was no way to reconcile the needs of the two types of systems. Under that kind of situation, the logic for a server/desktop split increases.

  • I am responsible for several servers running Ubuntu Linux. My Lenovo Notebook runs Ubuntu Linux. For development, for testing, I want the same OS on my machine as we have on the servers. When I test code on my notebook and upload it to the server I want it to see the same environment. It makes no professional sense to me to develop code under Red Hat and then hope it runs under FreeBSD when a hundred people are watching it crash. I don't need optimization, I need reliability.
  • Windows Server and Windows Desktop don't use the same OS? What definition of Operating System are you using here?

    They have the same system libraries. They have the same kernel, albeit optimized and configured differently. They support the same APIs, run the same applications, use the same drivers, support the same authentication engine, support the same UIs and shells, and use the same package delivery systems. There are differences, but I've yet to see any technical reason why you couldn't turn a Server edition into a Desktop release or vice versa.

    As a counterpoint, the Ford Mondeo (4-door/5-door midsized vehicle) uses the same platform as a Land Rover Range Rover Evoque. They have the same frame, many of the same components, and otherwise take advantage of factory line construction and economies of scale. However, in this case, you could at least argue that they have different 'Operating Systems' -- they have some differences which are arguably just optimizations and tuning changes (handling characteristics, consoles, etc.) but others that are physical differences (seats, load capacity, etc.). You don't see Ford running out to split the platform, though. Why? Because it doesn't make sense. There are more things in common at the core than are different, and they can make more products at a lower cost by sharing the core of the car platform. Ford has a dozen or so active car platforms, used by different models across their various brands; most other car makers do likewise.

    The author is making one of several possible basic errors.
    1) They don't really understand the definition of a Linux distribution (e.g. RHEL v CentOS v TurnKey v XUbuntu v Arch v etc.)
    2) They don't really understand the differences between Windows Server and Windows Desktop
    3) They don't really understand the definitions of the Linux kernel, GNU/Linux, and the Linux OS
    4) They don't really have a grasp of how software is made or how source code is shared
    5) They weren't loved enough as a child and are desperately seeking attention.

    This is like saying we need to create different compilers for AMD and Intel chips, as they have different architectures. It lacks understanding of the problem and understanding of how to address a solution.

    • The textbook version has the kernel as the OS, the common usage adds a pile of libs and userspace binaries that get close to the kernel and the MS vs Netscape version had the web browser as part of the OS instead of part of a software distribution. So with definition number three they are different operating systems, but with number one and two they are functionally the same.
      So it's not a "basic error", it's conflicting definitions which have been distorted over time by people pushing agendas (MS legal wit
  • doesn't mean you should.

    Thinking that any Linux distro, just because it could be run in both environments, should be developed to be suitable for both seems wrong.

    Why not just pick a desktop-oriented existing distro for your desktop and a server-focused one for your production environment?

    And if you need to develop on your server environment, then this doesn't really matter to you, since you have to pick what's best for your production environment regardless of your desktop.

  • by niks42 ( 768188 ) on Tuesday September 09, 2014 @08:18AM (#47861127)
    I've a big machine in the office at home. Some of the time it is a media server, some of the time a database server, an apache/php web server and so on; equally it is my go-to client machine for highly interactive desktop applications like schematic entry, PCB layout, graphics and so on. It is not the music production machine, though; on that I am using a low-latency kernel and I keep down the number of server-like processes. But they both came from the same distro, with light bits of tuning and configuration. I really, really don't want to have to manage multiple disparate distros based on the usage of the day.
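
    That low-latency swap is a one-liner on some distros. A sketch, assuming Ubuntu and its linux-lowlatency metapackage name:

    ```shell
    # Install the low-latency kernel flavour alongside the generic one;
    # both stay available, so you pick whichever fits the day's workload
    # from the boot menu.
    sudo apt-get install linux-lowlatency
    ```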
