
Is Modern Linux Becoming Too Complex? (716 comments)

An anonymous reader writes: Debian developer John Goerzen asks whether Linux has become so complex that it has lost some of its defining characteristics. "I used to be able to say Linux was clean, logical, well put-together, and organized. I can’t really say this anymore. Users and groups are not really determinative for permissions, now that we have things like polkit running around. (Yes, by the way, I am a member of plugdev.) Error messages are unhelpful (WHY was I not authorized?) and logs are nowhere to be found. Traditionally, one could twiddle who could mount devices via /etc/fstab lines and perhaps some sudo rules. Granted, you had to know where to look, but when you did, it was simple; only two pieces to fit together. I've even spent time figuring out where to look and STILL have no idea what to do."
This discussion has been archived. No new comments can be posted.

  • So roll your own. (Score:3, Insightful)

    by GloomE ( 695185 ) on Wednesday February 11, 2015 @09:18AM (#49028093)
    Just like Linus did.
    • by Nate B. ( 2907 ) on Wednesday February 11, 2015 @10:50AM (#49028765) Homepage Journal

      Actually, there are some that are intent on doing just that despite being labeled "haters" even though their motivations have nothing to do with "hate". Disagreement does not mean hatred. So long as the Linux kernel does not require specific user space software or versions, those of us who prefer a more traditional approach will be fine.

    • by morgauxo ( 974071 ) on Wednesday February 11, 2015 @11:13AM (#49028953)

      Rolling your own 'Just like Linus did' may be a little extreme. I don't think you need a whole new kernel!

      Just install Linux From Scratch and don't put all that *kit, etc. crap in it. I would imagine you could even get rid of udev and all that stuff if you are willing to run mknod yourself. Roll it like it's 1995.
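
      A rough sketch of what that pre-udev world looks like, for anyone who never saw it (the major/minor numbers below are the standard ones from the kernel's devices.txt; which nodes you actually need depends on your hardware):

        mknod -m 666 /dev/null c 1 3     # always needed
        mknod -m 600 /dev/tty1 c 4 1     # first virtual console
        mknod -m 600 /dev/sda  b 8 0     # first SCSI/SATA disk
        mknod -m 600 /dev/sda1 b 8 1     # its first partition

      With a static /dev like this there is nothing to "manage" at runtime, but nothing appears automatically when you plug something in, either.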

      You will lose out on some convenience if you are using a portable device such as a laptop but on a desktop with fairly static hardware everything should work just fine.

      If having your own custom simple Linux isn't good enough for you then take it to the next step and start your own distro that leaves all that stuff out.

      • by ookaze ( 227977 )

        Rolling your own 'Just like Linus did' may be a little extreme. I don't think you need a whole new kernel!

        Just install Linux From Scratch and don't put all that *kit, etc. crap in it. I would imagine you could even get rid of udev and all that stuff if you are willing to run mknod yourself. Roll it like it's 1995.

        You will lose out on some convenience if you are using a portable device such as a laptop but on a desktop with fairly static hardware everything should work just fine.

        If having your own custom simple Linux isn't good enough for you then take it to the next step and start your own distro that leaves all that stuff out.

        I've run my own Linux From Scratch-like system since 2001, and I can say your advice is very bad.
        The problem is that the kernel has provided dynamic interfaces for a long time now. If you do what you suggest, you will have a very limited setup that must not be a moving target, or you run the risk of your OS frequently failing to boot, or even of losing data (I had several of these problems before systemd). Your setup also has to be a very basic one.
        Even your network interfaces or your disks can appear too late for

    • Re: (Score:3, Insightful)

      by NotDrWho ( 3543773 )

      Yeah, that's EXACTLY what Linux needs to make it less complex--another 1,000 forked distros.

      • by Lodragandraoidh ( 639696 ) on Wednesday February 11, 2015 @12:21PM (#49029667) Journal

        I think you're missing the point. Linux is the kernel - and it is very stable, and while it has modern extensions, it still keeps the POSIX interfaces consistent to allow inter-operation as desired. The issue here is not that forks and new versions of Linux distros are an aberration; it is that the major distributions have changed, and the article is a symptom of that drift towards homogeneity.

        The Linux kernel is by definition identically complex on any distro using a given version of the kernel (the variances created by compilation switches notwithstanding). The real variance is in the distros - and I don't think variety is a bad thing, particularly in this day and age when we are having to focus more and more on security, and small applications on different types of devices - from small ARM processor systems, to virtual cluster systems in data centers.

        Variety creates a strong ecosystem that is more resilient to security exploitation as a whole; variety is needed now more than ever given the security threats we are seeing. If you look at the history of Linux distributions over time - you'll see that from the very beginning it was a vibrant field with many distros - some that bombed out - some that were forked and then died, and forks and forks of forks that continued on - keeping the parts that seemed to work for those users. Today - I think people perceive what is happening with the major distros as a reduction in choice (if Redhat is essentially identical to Debian, Ubuntu, et al - why bother having different distros?) - a bottleneck in variability; from a security perspective, I think people are worried that a monoculture is emerging that will present a very large and crystallized attack surface after the honeymoon period is over.

        If people don't like what is available, if they are concerned about the security implications, then they or their friends need to do something about it. Fork an existing distro, roll your own distro, or if you are really clever - build your own operating system from scratch to provide an answer, and hopefully something better/different in the long run. Progress isn't a bad thing; sitting around doing nothing and complaining about it is.

        • by NotDrWho ( 3543773 ) on Wednesday February 11, 2015 @12:28PM (#49029739)

          One man's variety is another man's hopelessly confusing goddamn mess.

  • Yes (Score:5, Insightful)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday February 11, 2015 @09:19AM (#49028099) Homepage Journal

    Yes, yes it is. We have too many redundant frameworks. Sadly, systemd is the only effort to unify them that seems to have traction.

    There should be one facility for each function on the system. I don't need my network interfaces being diddled by bizarre and obscure programs. For example, libvirt doesn't use /etc/network/interfaces; this is stupid and complicates firewalling scripts and so on. And it insists on running its own copies of dnsmasq, rather than just dropping some files in /etc/dnsmasq.d. What a PITA. Use the goddamned operating system, that's what it's there for.
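
    As an illustration of the kind of drop-in being asked for here (file name and addresses are made up; libvirt's default network happens to use 192.168.122.0/24 on virbr0), a single file under /etc/dnsmasq.d would be enough to have the system-wide dnsmasq serve DHCP on the bridge:

      # /etc/dnsmasq.d/virbr0.conf (hypothetical)
      interface=virbr0
      bind-interfaces
      dhcp-range=192.168.122.2,192.168.122.254,12h
      dhcp-option=option:router,192.168.122.1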

    • Re:Yes (Score:5, Funny)

      by GrumpySteen ( 1250194 ) on Wednesday February 11, 2015 @09:22AM (#49028109)

      We have too many redundant frameworks. Sadly, systemd is the only effort to unify them that seems to have traction.

      Because lots of different redundant efforts to unify lots of redundant frameworks is clearly the best way to solve the problem of lots of different redundant frameworks!

      Redundancy is awesome!

      • Oblig. XKCD (Score:4, Funny)

        by Anonymous Coward on Wednesday February 11, 2015 @09:41AM (#49028211)

        Oblig. XKCD: http://xkcd.com/927/

        • Re:Oblig. XKCD (Score:5, Insightful)

          by DuckDodgers ( 541817 ) <keeper_of_the_wo ... inus threevowels> on Wednesday February 11, 2015 @10:05AM (#49028361)
          I know that post, but while I think he has a point, I also think it's too defeatist. If everyone took that attitude, nothing would ever get done.
    • Re:Yes (Score:5, Interesting)

      by Opportunist ( 166417 ) on Wednesday February 11, 2015 @09:30AM (#49028141)

      I think it's less the redundancy and more the attempt to abstract everything away while at the same time cramming every kind of hardware and the kitchen stove into as few interfaces and scripts as you can get away with. There has to be ONE networking script, no matter whether the one actually used is wired, wireless or pigeon-carrier based. And we have to include every single friggin' USB device ever built, no matter that 99% of them have at best a handful of users and at worst a single user.

      Linux is getting more and more similar to Windows, a huge abstraction layer crammed in between user and system in the vain attempt to make it "easy", and in this actually making everything overly complex.

      Linux always had one defining strength over Windows: It is way more modular and its parts are way more easily separated and rejoined. And now various distributions try to nix this advantage by pouring their "version" into a monolithic block that "should be good for everyone". If they feel like diversifying, you'll maybe get a "server" and a "client" distro, with the main difference being that the server distro has no GUI.

      Linux is getting overly convoluted, but only because we let it. Distributions are of course trying to take the easy way out, offering a system that will appeal to as many people as possible. Of course this lugs about a LOT of dead weight, because what you need in your OS is useless to me and vice versa.

      • Re:Yes (Score:4, Informative)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday February 11, 2015 @09:42AM (#49028217) Homepage Journal

        There has to be ONE networking script, no matter whether the one actually used is wired, wireless or pigeon carrier based.

        Well, that's not really true. I mean, look at what I'm asking for WRT libvirt. There's a facility already present in the system for doing what they're doing, and they simply ignore it, with consequences for users. And what's more, the facility works really well for what they're doing with it, which they're doing very poorly.

        And anyway, it's not true because interfaces can have their own scripts. I've used this functionality for firewalling on Debian.
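
        For anyone who hasn't seen it, this is the ifupdown hook mechanism in /etc/network/interfaces; addresses and file names below are illustrative:

          auto eth0
          iface eth0 inet static
              address 192.0.2.10
              netmask 255.255.255.0
              gateway 192.0.2.1
              pre-up    iptables-restore < /etc/iptables.rules
              post-down iptables -F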

        Linux is getting more and more similar to Windows, a huge abstraction layer crammed in between user and system in the vain attempt to make it "easy", and in this actually making everything overly complex.

        Well, some things need centralized management, simply so that pieces of the system don't step on one another. Networking is basically the ideal example. Mounts are another. Nobody complains that everyone is expected to use fstab to define those, or that all mounts are tracked in the mtab.

        • Centralized, yes. And I'm also in favor of choosing a tool for a job and sticking to it, rather than everyone picking a different one and distros having to support each of them in their scripts.

          But that's exactly the point, do the scripts really need to lug about support for stuff that maybe one, maybe two people actually use? There's still support for some esoteric hardware that I never even heard about (and believe me, I've seen a lot) that has some odd quirks and has some "special needs" when it comes to scrip

          • Re:Yes (Score:5, Informative)

            by Ol Olsoc ( 1175323 ) on Wednesday February 11, 2015 @11:09AM (#49028921)

            My question is whether it is really warranted to overburden and complicate scripts and even the functionality of some tools to pander to the quirks of hardware hardly anyone uses. My approach would be to leave it out and offer patches for the 3 people who actually want to use them.

            Yet what really sold me on Linux is exactly what you don't like: the nasty Windows Vista years, when perfectly good contemporary hardware had to be replaced, and the present-day situation where support for a product just goes away.

            Linux now has the best support for devices of any OS.

            My favorite example is when I was setting up a dual-boot system that used a USB-to-RS-232 adapter on both sides of the boot. I set it up first on the Linux end. No problem: just enable the serial port (Linux treats serial ports as a security issue) from bash, and it just worked. Then I started to set up the Windows side. No worky. It sees the adapter, but no driver will install. No help, either.
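
            "Enabling" the port here most likely just means granting your user access to the device node; a minimal sketch, assuming the adapter shows up as /dev/ttyUSB0 and the distro uses the usual dialout group:

              ls -l /dev/ttyUSB0                 # typically crw-rw---- root dialout
              sudo usermod -a -G dialout $USER   # add yourself to the group
              # log out and back in for the new group membership to take effect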

            After a web search I found out that the adapter I had used was an old Staples adapter sold for an ancient Palm Pilot my wife used maybe a decade ago. No Windows support, and none is forthcoming.

            It's happily working on a Linux-only system now, and it saved someone 50 bucks. It's also marked "do not use on Windows". Problem is, there really are a lot more than 3 of us using hardware other than the really common stuff. And your negative is our positive.

            • A version 1 NeatReceipts scanner. It's a rebranded common brand with a changed USB ID. Linux recognizes it fine, but on Windows it's a nightmare to get working. I have lots of examples like this where I can use something on Linux but not on a new version of Windows.

          • Re:Yes (Score:5, Insightful)

            by morgauxo ( 974071 ) on Wednesday February 11, 2015 @11:23AM (#49029077)

            How about an example?

            One of the things I love about Linux is all the old and esoteric hardware it supports. I don't want to throw away something that suits me just fine only because it isn't popular anymore.

            I do agree that costs and benefits should be weighed. But where is all this old hardware support complicating scripts that you speak of? The place I am used to seeing hardware support is in the kernel. It's a dropdown... build it in, make it a module, or don't support it. I'm guessing that 90% or so of users don't even see that anyway! They are probably running kernels that came with their distros.

            I don't even mind if distros choose not to build in modules for ancient hardware. So long as I am free to compile my own kernel, who cares? But where are these scripts that will be oh so much better if only we flipped the bird to the few people still using some hardware and told them they can't have their toy anymore?

            Also, even if removing support for one piece of hardware only alienates a few people... if you really clean house, then that's a few people per each device you condemn to obsolescence. Don't you think they might add up?

          • Re:Yes (Score:4, Informative)

            by unrtst ( 777550 ) on Wednesday February 11, 2015 @12:41PM (#49029897)

            But that's exactly the point, do the scripts really need to lug about support for stuff that maybe one, maybe two people actually use?...

            :-) This topic is perfect for /. because of the complete lack of scope.

            When you refer to "scripts", what level/layer are you referring to? I don't even think there is a well defined naming convention for that (ex. something like an OSI model with respect to configuration of hardware).

            Given the networking example...

            On the GUI level, there are loads of interfaces, many specialized, to aid in configuring the network. Some of them are protocol-specific, such as various VPN utilities, kppp and other ppp utilities, dial up interfaces, and a bunch of wifi ones too. Many of those are somewhat modular, with a backend/libs, command line interface, gnome/gtk interface, qt/kde interface, and possibly others (curses, xfce, tk, etc). That said, there is a primary target within this cadre: Network Manager and all its cousins.

            On the other end of things, within the kernel, there's loads of drivers and standardized ways for those to interface with the various kernel subsystems. Those drivers necessarily have a wide variety of options... that's kinda the point. The vast majority of those can be compiled into the kernel, built as modules, or not built at all. This layer is fairly well defined as there is a clear separation of user space and kernel space; this ends at the first layer that provides a user space API (and this could be considered to constitute two layers... kernel space and user space of that... think OSI layer 1 and 2).

            On that kernel level (similar to OSI media layers), I don't think we have a problem. This is, at least partially, due to the monolithic nature of the kernel and its management by a benevolent dictator. A few comments here have mentioned support for old hardware, but I don't think they are referring to the drivers themselves nor the kernel... they're likely referring to something further up in user space. IMO, if the question is posed here, the answer is "No, Linux is not becoming too complex".

            On the top end of the GUI side, I'd also argue that, "No, Linux is not becoming too complex". Yes, it can be a quagmire of various utilities at times, and some work better than others, but that *should* be fine. Hell, that's the only way to quiet those that complain about supporting all that old hardware - just snip it out of the GUI utility or hide it in advanced areas. I would never want to enforce a rule that these must all go through some specific middleware, though that's really the part we should all be talking about.

            So... the middle. This thread referenced "/etc/network/interfaces". That does NOT exist on all distributions (e.g. Red Hat-based systems don't have it). Personally, I like /etc/network/interfaces, but it's a good example of fragmentation of "standard" ways/interfaces to configure the kernel networking subsystem. Is it bad that Debian and Red Hat do it differently? IMO, the "becoming too complex" question would imply that this is NOT bad, since it has been this way FOR A LONG LONG LONG TIME, and I'd agree that this amount of differentiation is OK and even good, but it could easily be argued that it sits firmly in the grey area.
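
            To make that fragmentation concrete, here is the same static address configured the Debian way and the Red Hat way (addresses are made up):

              # Debian: /etc/network/interfaces
              auto eth0
              iface eth0 inet static
                  address 192.0.2.10
                  netmask 255.255.255.0
                  gateway 192.0.2.1

              # Red Hat: /etc/sysconfig/network-scripts/ifcfg-eth0
              DEVICE=eth0
              ONBOOT=yes
              BOOTPROTO=none
              IPADDR=192.0.2.10
              NETMASK=255.255.255.0
              GATEWAY=192.0.2.1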

            The part that I have very large concerns with is what is currently happening with the low level just above the kernel... specifically, systemd and its related parts. Networking is an example here, as one of its goals is to provide one unified/common way to configure the network... but doesn't that already exist!?!? It's called the kernel! On the other hand, maybe it will prove to be a useful shim? The fact that a single framework is going in above the kernel, with some direct ties to the kernel, and is casting a very wide net in terms of things it is, or can, control (logging, network, dhcp, login, init, sessions, mounts, consoles/vte, timedated/ntp, devices/udevd)... we'd better hope and pray it's designed well, because everything and the kitchen sink will soon have direct dependencies on the interfaces it's implementing.

        • There's a facility already present in the system for doing what they're doing, and they simply ignore it, with consequences for users. And what's more, the facility works really well for what they're doing with it, which they're doing very poorly.

          If a software package used an existing, high-quality facility for doing a particular task, someone somewhere would complain about the dependency.

    • Re:Yes (Score:5, Insightful)

      by rastos1 ( 601318 ) on Wednesday February 11, 2015 @10:14AM (#49028467)

      Sadly, systemd is the only effort to unify them

      I don't know about "unify them". As far as I can see, it is trying hard to hide the complexity under one umbrella. And if the complexity is hidden completely, then there is little you can do to fix a problem that happens to be complex. Without this unifying effort I can easily plug in myself somewhere in the middle, track down what's going on and fix it. Or at least work around it. Ah, yes, I'm a Slackware user. Is that relevant?

      • Re: Yes (Score:4, Insightful)

        by Anonymous Coward on Wednesday February 11, 2015 @10:31AM (#49028605)

        Systemd has been the most divisive force in the Linux community lately, and perhaps ever. It has been foisted upon many unwilling victims. It has torn apart the Debian community. It has forced many long-time Linux users to the BSDs, just so they can get systems that boot properly.

        Systemd has harmed the overall Linux community more than anything else has. Microsoft and SCO, for example, couldn't have dreamed of harming Linux as much as systemd has managed to do, and in such a short amount of time, too.

        • Re: (Score:3, Insightful)

          by arth1 ( 260657 )

          Amen. It's sad, but a single person has managed to kill the momentum of GNU/Linux as an operating system. Microsoft should give the guy a medal.

          People are loath to publish new projects because keeping them running with systemd and all its dependencies in all possible permutations is a full time job. The whole "do one thing only and do it well" concept has been flushed down the drain.

          I know that I am not the only sysadmin who refuses to install Red Hat Enterprise Linux 7, but install new systems with RHE

      • Re:Yes (Score:5, Insightful)

        by Zeromous ( 668365 ) on Wednesday February 11, 2015 @10:43AM (#49028693) Homepage

        I have a new verb to describe this type of useless abstraction: I call it, "Poettering-around".

    • by DarkOx ( 621550 )

      libvirt doesn't use /etc/network/interfaces

      My distro does not use /etc/network/interfaces either, so this is probably a good thing. Keep your debianisms to yourself.

  • Slackware (Score:5, Informative)

    by Vyse of Arcadia ( 1220278 ) on Wednesday February 11, 2015 @09:23AM (#49028113)
    No problems here. Slackware seems to keep things simple. Granted, I haven't tried to mount a camera with DigiKam in a couple of years.
  • by Anonymous Coward on Wednesday February 11, 2015 @09:24AM (#49028117)

    I was reading through the article's comments and saw this thread of discussion [complete.org]. Well, it's hard to call it a thread of discussion because John apparently put an end to it right away. The first comment in that thread is totally right though. It is systemd and Gnome 3 that are causing so many of these problems with Linux today. I don't use Debian, but I do use another distro that switched to systemd, and it is in fact the problem here. My workstation doesn't work anywhere as well as it did a couple of years ago, before systemd got installed. So when somebody blames systemd for these kinds of problems, that person is totally correct. I don't get why John would censor the discussion like that. I also don't get why he'd label somebody who points out the real problem as being a 'troll'. John needs to admit that the real problem here is not the people who are against systemd. These people are actually the ones who are right, and who have the solution to John's problems! The comment I linked to says 'Systemd needs to be removed from Debian immediately.', and that's totally right. But I think we need to expand it to 'Systemd needs to be removed from all Linux distros immediately.' If we want Linux to be usable again, systemd does need to go. It's just as simple as that. Censoring any and all discussion of the real problem here, systemd, sure isn't going to get these problems resolved any quicker!

    • How exactly is systemd causing this problem? Neither you nor the author on the other thread give any sort of argument (which is the reason why he/she was accused of being a troll in the first place).
      • by Kagetsuki ( 1620613 ) on Wednesday February 11, 2015 @10:40AM (#49028673)

        Every time I've played with it I had things like weird locking issues - but this was maybe a year ago when I last tried it.

        What bothers/worries me about it are the devs behind it. Poettering was bitching about how hostile the community was before but he completely deserved every bit of criticism. All the major devs on that project are known to have abandoned other projects. Several times they made mainline commits which completely broke things. They constantly pushed barely tested and poor quality code (which is why Linus got angry at one of them and banned them from making pull requests till they got their sh*t together). On top of that the design of systemd is not very *nix like so it does seem an odd fit. All this makes me uneasy, and I don't think I'm the only one, because from this I am expecting a big lump of poorly tested experimental play code that the lead devs will just abandon once they get interested in another project.

        • by gweihir ( 88907 )

          Something that was problematic a year ago cannot be production-stable now. Server operation, and in fact professional desktop operation, has higher standards. It is as if a partially broken and ever-changing thing is being pushed into Linux in order to boost support revenue.

      • by gweihir ( 88907 ) on Wednesday February 11, 2015 @12:23PM (#49029675)

        You either have not looked or you are trolling. Try google("systemd sucks"). There really are very few unsubstantiated rants about systemd, most comments are fact-based and explain what is wrong with it. The systemd proponents are waging a war against a large part of a whole community.

    • "My workstation doesn't work anywhere as well as it did a couple of years ago, before systemd got installed. So when somebody blames systemd for these kinds of problems, that person is totally correct." - its these sorts of comments that gets labelled as trolling because they are vague anecdotes with no facts backing it up. If someone points out a specific problem and backs its up with facts, it gets discussed properly.
      • by Junta ( 36770 ) on Wednesday February 11, 2015 @10:34AM (#49028629)

        Of course some of the vagueness is precisely because things happen mysteriously, and systemd has a habit of doing unexpected mysterious things. Of course it's not alone; you have quite a few subsystems all deciding to be a bit 'automagic', with systemd and associates just being the most prominent. As a consequence, if you manually do something like reconfigure a network device using the underlying tools, something can mysteriously redo it later when it thinks something has happened, like a lease expiry, even though dhclient no longer runs. Or a time change event at boot causes dhclient and some mysterious third party to disagree about when the lease goes away: dhclient isn't renewing the lease, but some third party decides that a lease wasn't renewed and deconfigures the adapter. It makes no sense, but someone in some random component thought something wasn't proper and decided to 'help' take care of something that wasn't their business.

      • by Anonymous Coward on Wednesday February 11, 2015 @10:43AM (#49028697)

        How is that vague? The problem is very clear: that user is worse off after systemd was installed than before systemd was installed.

        The exact flaws with systemd don't really matter. Maybe it was problems booting. Maybe it broke sleeping/hibernation on a laptop. Maybe it stopped mounting drives properly. Maybe it was the binary logs making debugging difficult or impossible. Maybe it was one of the thousands of other bugs plaguing systemd and distributions using systemd.

        When a computer is less useful today than it was last year thanks to systemd getting installed, the problem is solely with systemd.

      • by gweihir ( 88907 ) on Wednesday February 11, 2015 @12:41PM (#49029889)

        When vague anecdotes start to pile up (and they do for systemd unreliability), they become facts in themselves. Add to that that systemd problems are exceptionally hard to debug (you have to look into complex C source code for many) and the development team is unhelpful, and you get a pattern: The reason many, many people are reporting vague anecdotes about their system being unstable from systemd is not that they are lying, or fantasizing or on drugs, the reason is that systemd does indeed break reliability and on top is very hard to debug and fix.

        Some very old engineering failure analysis wisdom applies here: To really break things, you have to screw up in two major aspects. Systemd manages to do this easily by being unreliable and so hard to debug that most people fail at it. People are scared of it and angry at it, because they cannot master this complexity. And they are right to fight it: A decent OS has no business at all being complex in any place where it is avoidable and in particular, it has no business at all replacing simple things that work with complex ones, regardless of whether they work or not. If Linux is not kept free of high complexity in core components, it will implode.

    • I've been kinda sitting on the sidelines about systemd all this time.. I still run init based startups on my debian boxes (wheezy and squeeze currently in use, on 2 machines.)

      I hear so much... hatred for this program... now as someone who's mostly not researched this... what is good about systemd over init scripts, besides supposedly ridding ourselves of the need for init scripts? I won't go into why I think that's probably not a good idea in general, but anyway. I hear so much hatred and so much 'it brea

    • by John Goerzen ( 2781 ) on Wednesday February 11, 2015 @03:40PM (#49032025) Homepage

      I didn't shut down all systemd talk. Just the stuff that was flamebait. What you didn't see are the comments that I deleted, which degenerated exceptionally quickly into namecalling and four-letter words. I am happy to tolerate many viewpoints on my personal blog as long as they are expressed with respect. I have seen sooooo many threads, whether here or elsewhere, start with statements like the one there. That post was on a technical matter, and comments that are verifiably false and that rehash the way the systemd decision was made were both off-topic and disrespectful.

      There are a lot more systemd comments on the post, by the way. Some pro, some against.

      "Systemd is a problem because..." was fine. "forced upon us" is a completely different discussion that is still highly-charged, produces nearly instant flamewars, and I didn't want to go there (yesterday).

      My blog is my own little corner of the Internet where I try to raise the level of discourse just a bit. It's fighting a tidal wave, but I do try.

  • by Lumpy ( 12016 )

    Linux is starting to get the "feature creep" crud that is making it a mess. The nice part is we still have some choices to avoid the things we don't like, and it will create a split like Linux and BSD, but inside Linux itself.

  • by Kokuyo ( 549451 ) on Wednesday February 11, 2015 @09:31AM (#49028147) Journal

    Look, you can joke all you want about how I'm just too stupid to use it, but the fact is I am an IT professional and I'm not incompetent at what I do. Which means that technical concepts do not leave me in the dust by default.

    And yet, every few years when I try to get into the swing of all things Linux, it ends in utter frustration.

    Make of that what you will. And when you tell me I am at fault because I am unwilling or incapable of hurdling that learning curve, I will throw back in your face that a good product is also defined by usability considerations.

    Linux is a pain in the ass. It is good at a broad variety of tasks, but so are other OSs that make my life less of a living hell.

    • by johnlcallaway ( 165670 ) on Wednesday February 11, 2015 @09:43AM (#49028227)

      Make of that what you will. And when you tell me I am at fault because I am unwilling or incapable of hurdling that learning curve, I will throw back in your face that a good product is also defined by usability considerations.

      HAHAHAHAHAHHAHA!!!!!

      When Linux was less 'usable', it was simpler.

      Increased usability means more scripts and automation, meaning more things are abstracted.

      You can't have it both ways.

      I know what the real problem is. I stepped away from Linux/UNIX for about 5 years because of a new job (Went from a Sun/Unix/Oracle shop to a Windows/SQLServer shop). When I got back to Linux, I didn't understand a lot of things, it had changed so much. It took a while to dig into it.

      But .. know what??? It was all there. All I had to do was understand how it started up to find all the scripts and then read them. It wasn't that hard.

      It just took a little effort. And enough intelligence to actually read scripts and Google things I didn't understand.

      If you don't get it .. it is you.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Simple for the desktop != simple for the admin != simple for the programmer.

        Desktop users want to click on a mac'ish interface on obvious icons and do stuff. Make a slide show, print a document, send an email, connect the wireless.

        Admins want to quickly understand the state of the system, analyze problems and fix them.

        Programmers want hookers, blo and the newest widgets and functions, abstracted to hookers and blo.

        These are orthogonal, but they don't have to be contrary. Userland can have a nice interface t

        • by bouldin ( 828821 ) on Wednesday February 11, 2015 @10:29AM (#49028575)

          This.

          I would personally like to see three flavors of Linux:

          Server - lean, NO systemd or plug-and-play crap, focus on security

          Desktop - includes whatever bells and whistles people need for a modern, usable desktop; focus on productivity

          Mobile - similar to desktop, but with a focus on low power consumption and small screens

          I don't need a tablet GUI on my desktop, and I don't need hotplug support for webcams and printers on my server.

      • by jedidiah ( 1196 )

        The "abstractions" aren't any easier. They might have the virtue of being better automated WHEN THEY ACTUALLY WORK. Beyond that, they are more complicated and thus LESS usable when something doesn't go precisely to plan.

        The "abstractions" do less well when you wander off the reservation and Linux has always been about wandering off the reservation.

        Linux is mature enough that people are now trying to fix things that really aren't broken and they are breaking them in the process.

    • Re: (Score:2, Insightful)

      Burn the witch!

    • I'm of the completely opposite opinion, but that's just because I'm more used to Linux than Windows or OS X. And I'll call you on the "usability considerations" thing - my father got a Windows 8 box and was completely frustrated with it - I threw Ubuntu on it and he loves it. OS X is the opposite in that it hides so much of the functionality behind a shiny and simple interface that, for a power user, it's like being forced to wear mittens.

      Do yourself a favour and put Linux on a machine you use regularly - then actua

    • by DuckDodgers ( 541817 ) <keeper_of_the_wo ... inus threevowels> on Wednesday February 11, 2015 @10:35AM (#49028639)
      I can - and do - make Linux dance to my tune and I've used it as my only desktop operating system at home for years. But yes, the learning curve was a pain in the neck. 90% of the time, everything installs and works perfectly. 7% of the time, you hit a problem that you can fix with a quick web search and twenty minutes of work. 3% of the time, you hit a headache that requires days of research and trying different things until you solve it or give up and try a different distribution or give up and go back to a proprietary operating system. I probably made a dozen switches to Linux that failed before they finished or only lasted a few months before I acquired enough skill to make the switch permanent.

      About once every four or five hours of play, Minecraft crashes for my kids on my Linux machine. The display becomes completely unresponsive. So I have to switch to a virtual terminal or use a remote ssh (or better, mosh) connection into the machine, run "ps aux | grep -i minecraft" to find the processes related to minecraft, and "kill -9 PID" the processes for Minecraft. A full screen crash that hung the entire graphical interface has not happened to me on Windows more than a handful of times since Windows 98. I would never expect a casual user or even a moderately technical one that does not have a lot of Linux experience to be able to deal with this. I think I read somewhere that Wayland (the replacement for the X11 window system that underpins most graphical applications on Linux) has some fundamental design differences with X11 specifically so that a crash or hang of a full screen application can be detected and dealt with inside the graphical interface instead of requiring a switch to a terminal or remote shell.
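
      In case the recovery workflow above is useful to anyone else, this is roughly it end to end (hostname and user are placeholders; the bracket trick keeps grep from matching itself):

        ssh me@that-machine                 # or switch to a virtual terminal with Ctrl+Alt+F2
        ps aux | grep -i '[m]inecraft'      # find the offending PIDs
        kill -9 <PID>                       # repeat for each PID listed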

      All of these things can be improved and should be improved. I want to do my part, but work plus kids keep me too busy.

      But to the article's original complaint, I think that sounds like the whining of someone who refuses to learn something new. For example, older /etc/fstab files listed disks and partitions like /dev/sda1 (first disk, first partition) and /dev/sdb2 (second disk, second partition). The newer /etc/fstab files can support that format, but the preferred way to work is to use the UUID (universally unique identifier) of each disk partition. Yes, UUIDs make the file harder to read. Yes, UUIDs take a little more time to set up. But the advantage is that if you add or remove a hard drive, solid state disk, etc., it can change the order in which drives are enumerated to the operating system. If your /etc/fstab has the UUIDs of the partitions, then that change is not a problem. If your /etc/fstab has /dev/sda1, /dev/sda2, etc., that change can break your boot process or at least mount some partitions in the wrong place.

      Likewise, the systemd "journalctl -r" is a new command to learn instead of "tail /var/log/messages". But the systemd journal uses digital hashes to make sure the system log has not been tampered with by a hacker. /var/log/messages has no such security, so the old way is convenient but less safe. Some changes are stupid and unnecessary. But some are necessary, and useful.
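
      To make the fstab point concrete, the two styles side by side (the UUID value here is made up; blkid prints the real one):

        # old style - breaks if drives get renumbered:
        /dev/sda1  /  ext4  errors=remount-ro  0  1

        # UUID style - stable across device reordering; find the UUID with `blkid /dev/sda1`:
        UUID=0a3407de-014b-458b-b5c1-848e92a327a3  /  ext4  errors=remount-ro  0  1

      And the two ways of reading the log mentioned above:

        tail /var/log/messages    # classic syslog text file
        journalctl -r             # systemd journal, newest entries first
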
  • by xxxJonBoyxxx ( 565205 ) on Wednesday February 11, 2015 @09:32AM (#49028157)

    >> it was simple; only two pieces to fit together

    To me, the Linux experience has been based around the use of simple, command-line oriented tools that could be easily scripted together. That's the opposite of "only two pieces fit together" - just like Legos you have thousands of pieces that could fit together to make billions of different things.

  • The answer is BSD. Try it, you'll love it. Great documentation, testing before features go live, UNIX philosophy still alive, the list goes on...

    • I second that!

    • Totally agree. BSD is to Linux just like internet is to ham radio.
    • At this point moving to BSD would indeed be a breath of fresh air for the open source software community. A lot of the classic UNIX goodness is still there but also plenty of modern features. There's now also PC-BSD for those who want to get a desktop running quickly.
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Wednesday February 11, 2015 @09:38AM (#49028189)
    Comment removed based on user account deletion
    • Re:Perspective (Score:4, Insightful)

      by jedidiah ( 1196 ) on Wednesday February 11, 2015 @10:03AM (#49028345) Homepage

      USB storage has "just worked" on Linux for a very long time now. Whatever has been added recently to "fix it" clearly isn't making it better. Meanwhile, it's also more complex apparently.

      Again, Linux has gotten mature enough that things that really aren't broken are being fixed by bored children that need some sort of distraction.

      • Re:Perspective (Score:4, Informative)

        by YoopDaDum ( 1998474 ) on Wednesday February 11, 2015 @11:55AM (#49029379)
        Tempest in a tea pot really: I had the very same issue with Jessie and a little googling around quickly found the issue and a solution (for example here: https://lists.debian.org/debia... [debian.org]).

        In short, if one installs (installed?) Debian Jessie from a USB key, the installer would add an entry in /etc/fstab for the key. Now, the automounting of USB keys for the currently logged-in user is normally taken care of by udev, which does things properly. But for backward compatibility, if there's an /etc/fstab entry, udev bows out and lets the legacy system handle the key, and that's where one ends up with a USB key mounted as root instead of as the user. Fix: remove the useless /etc/fstab entry. As this has already been discussed on the Debian user mailing list, it's likely been fixed in the install process by now (not checked; I'll try with a new laptop next week).
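
        For anyone hitting the same thing, the offending line looks something like the following (device and mount point are illustrative; the real entry will reference whatever the installer saw the key as). Comment it out or delete it, then re-plug the key:

          # /etc/fstab
          #/dev/sdb1   /media/usb0   auto   rw,user,noauto   0   0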

        All in all: a small installation process glitch in the testing distribution, so still beta. But let's not waste such an opportunity to rant on how much better the old times were, and how hopeless the young ones are. I guess the real issue is that early Linux users (me included) are getting older, and more averse to change.
  • The more complex the system, the more potentially undesirable states you have.
    Undesirable states take work to remove.
    Human efforts being imperfect, undesirable states appear spontaneously and accumulate over time, requiring further work.
  • by QuietLagoon ( 813062 ) on Wednesday February 11, 2015 @09:47AM (#49028261)
    As an occasional user of both Windows and Linux, I used to see a significant difference between the two. The clarity of Linux, the fog of Windows.

    But now Linux seems to be getting closer to fog than clarity.

    One example:

    Last week I installed openSUSE. When I tried to send an email using the mail command, Postfix was giving me odd permission errors for maildrop. So I went to look at the Postfix log, and there was none that I could find.
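
    (For what it's worth, on a systemd-based install the Postfix output often lands in the journal rather than in a flat file under /var/log, so it's worth checking there before concluding there is no log at all. A sketch, assuming the service is named postfix:)

      journalctl -u postfix                # everything logged by the postfix unit
      journalctl -u postfix --since today  # just today's entries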

    One step closer to the fog of Windows, where the system is hidden behind magical portals that only a few know how to access....

  • by tlambert ( 566799 ) on Wednesday February 11, 2015 @09:52AM (#49028289)

    That's all user space.

    Honestly, I thought this was going to be a kernel rant, and I came loaded for bear: there's a lot that needs fixing about the Linux kernel and the processes and relationships between stakeholders.

    But let's address the subject of the blog post instead, because there's a lot of fodder there too.

    Everything complained about in the blog post is not a Linux problem, it's a Linux distribution problem, since the distributions are what add the user space components that are doing things like automatically mounting his phone so that something else in user space can't talk to the second control channel on the USB interface (because the phone uses the primary command channel to switch to the second command channel, and it's in use by the mount).

    This is basically the problem you are going to face on a distribution without an overall architectural design for the user/kernel interaction, and interaction between user space components that allow for layered access.

    For the "It's a camera! It's a phone! It's a mass storage device!" problem, I don't have a specific answer; I'll note that uugetty solved the contention for typed use of a resource problem for modems ("It's an inbound modem! It's an outbound modem!") in the 1980's in HoneyDanBer UUCP. And they did it by having an integrated model that all the consumers used. IT's called a layered approach to software development.

    I think the big driver for user space problems is that a lot of Open Source people believe that *their* program is the most important thing your computer can possibly be running, and if it interferes with someone else's use of something, so what? The computer is still performing its *most* important function, which is to run *their* work product.

    Even Apple is not immune from these problems; there are third party phone tools that can do nifty things with pretty much any cell phone and come with all sorts of USB cable ends that plug into a USB cable adapter, but the OS grabs the phones out from under the software, and you have to hack the device ID list in a plist to get it to work like it's supposed to (then iPhoto, etc., can no longer see the phone). But at least on Apple systems, there's one place to go to fix it, the fix is well known, and when Apple is informed of the problem, they generally fix their software to "get out of the way" (or tell the third party how to do it temporarily so their software will work).

    What's really missing for Linux distributions, honestly is...

    (1) An architect with a holistic vision
    (2) A project manager for the components
    (3) Productization - people in Open Source only want to work on fun stuff, not on boring stuff that makes stuff actually usable
    (4) Usability engineering
    (5) Interface contracts which don't change over time
    (6) A way to shunt third party installed software (i.e. "apt get", etc. stuff) off into an isolated hierarchy so it doesn't screw with normal operation
    (7) Documentation that doesn't have to change over time

    ...in other words, if you want it to look like a commercial OS distribution, you have to approach it as one. And that's not happening.

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Wednesday February 11, 2015 @09:53AM (#49028299)
    Comment removed based on user account deletion
    • by Noryungi ( 70322 )

      Just as I was warming up to your rant...

      Gentoo? Oh, please, bitch. Gentoo is for ricers [fun.irq.dk]. Period. I have better things to do with my time than to compile every single shitty utility on my system. That's what a distro is for. Oh, and Gentoo can be systemd'ed as well [gentoo.org]. Read it and weep [gentoo.org].

      Arch? Uses systemd. Don't believe me? Click here [allanmcrae.com] or click here [ycombinator.com]. Arch is the Gentoo of the 2000s.

      You have no idea what you are talking about. I would be tempted to add a STFU or two, but I am just too lazy.

    • by Barsteward ( 969998 ) on Wednesday February 11, 2015 @10:29AM (#49028585)
      "2. The god damn pottering man: Hes controversially steamrolled most major distros into giving up everything from competent init scripts to non binary logs and even the bootloader in favour of 1 single process capable of doing everything, forever. " - Poettrering steamrollered the major distros????? i've got a tin foil hat for sale, you sounds like you need a double layer version.
    • I agree on your first point, and would add that if your application runs on the command line then you will increase your chance of popular use and recommendations by a factor of ten by giving useful error messages and ending with "for more information, run 'man foo'" so that a complete newbie gets help.

      On your second point, I emphatically disagree. Read Poettering's blog, starting with "Rethinking PID 1", then "Biggest Systemd Myths". The backlash against systemd is 90% people who don't even understand systemd and have been too lazy to RTFM and 10% people who understand the technical tradeoffs and think differently, all spurred on by Slashdot, Phoronix, and a dozen other sites making a mint off the advertising revenue from people visiting the flame war. And the documentation for systemd, both at the official website and from the man pages, is outstanding.
  • With developer choice comes complexity. End of story.

    You want everything to be the same, write in Python, where they enforce a coding style.

    If you want to do things the best way for you, you'll have to support a variety of duplicating libraries and APIs. The thing is, once one library becomes a clear winner over all the others, development often stops on the others, which means everyone stops using those libraries.

    So while yes, things may be complex now, but only because of the rapid amount of develo

  • The problem is modern operating systems have taken on too much of the operating environment role, leading to excessive complexity. Our modern operating systems are hypervisors like Xen or VMware. The OS has become a mess of other things that aren't related to the security and usability of a system. The Operating Environment is where the rapid changes and R&D should be, so features can progress and mistakes can be quickly removed.

  • by Punto ( 100573 )

    Reminds me of this e-mail from Bill Gates http://blog.seattlepi.com/micr... [seattlepi.com]

    (talking about the "add/remove programs" screen) "Someone decided to trash the one part of Windows that was usable? The file system is no longer usable. The registry is not usable. This program listing was one sane place but now it is all crapped up."

    At least we still have a filesystem

  • Is that another "things are not like I wanted them to be" posts? Linux is a community of vastly different groups with lots of different interests, with over 100,000 software packages, each scratching an itch. If you want a complete "experience" then MacOSX or even Windows is maybe better for you, i.e. a computer system where each component is designed from only one vendor. As a newbie Linux user myself, I'm pretty amazed that all those thousands of different software packages, all from different developers,

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Wednesday February 11, 2015 @10:21AM (#49028521)

    Simplicity needs to be the new goal in a FOSS OS project like Linux. 20 years ago it was all about getting an alternative to systems that cost north of $100,000 up and running, to be able to do the stuff we all wanted to do but couldn't afford to.

    Today leading FOSS solutions and extremely powerful hardware are available in abundance, as are network and cyberpunk working-coding-and-collaboration resources. It is now that we need to push for simplicity and perhaps even our own hardware standard.

    To be honest, putting emphasis on FOSS hardware might even provide the right incentive for exactly that simplicity. Apple won all the Unixers over a decade ago, because it offered exactly that: zero-fuss, out-of-the-box *nix functionality. It started losing them once the golden cage started to close and lock. This is a gap the FOSS community needs to fill.

    It is, in my opinion, high time for FOSS hardware to move into the limelight. We need to start crowdfunding our own NixBook Airs, flashy pro desktops and servers. ... The librem 15 [laptopmag.com] is a step in the right direction - we need more of that.

  • by sdinfoserv ( 1793266 ) on Wednesday February 11, 2015 @11:04AM (#49028879)
    I've made several attempts to use Linux for the desktop... all have failed. Worse than being too complex, after 20 years of development, it fails the "just works" test. I'm a techie by trade, a programmer longer than I want to think about. I do network support, server support and manage a team. I have kids, a wife, home interests... I don't have the time or patience to futz with computer problems when an install or an app refuses to function. When I was younger, I enjoyed the hack to make it work. That was 20 years ago. Now it annoys me. I just want it to work. After 20 years of Linux development there needs to be some level of maturity, stability, uniformity. It needs to just work, without having to hack it, for the casual user to accept it. Unfortunately, Linux doesn't pass that bar. I'm done with Linux at home. It's nothing more than a toy for hackers who want to tweak. For people who just want their stuff to work, Linux is not the answer.
    • by Alioth ( 221270 )

      Don't know what distro you're using, but the "just works" test has passed for me both at home and at work with Debian.

      Not so much with Windows. I have a Windows partition because a couple of games I like don't have Linux equivalents - it took some fscking around to make them run because a default Windows install doesn't actually have all the required DirectX DLLs, and software installers for Windows do not have any dependency resolution built in, so it requires running around finding the DLL on microsoft.com. Not to

  • by RogueWarrior65 ( 678876 ) on Wednesday February 11, 2015 @11:37AM (#49029219)

    Wifi support in Linux is a mess.

  • by Danathar ( 267989 ) on Wednesday February 11, 2015 @11:56AM (#49029387) Journal

    Not really trying to split hairs, but the issue is certain distributions of Linux, not Linux itself (the kernel). Distributions like Slackware are quite easy to understand, but others not so much.

  • by nine-times ( 778537 ) <nine.times@gmail.com> on Wednesday February 11, 2015 @12:35PM (#49029819) Homepage

    I used to be able to say Linux was clean, logical, well put-together, and organized.

    You would only say that because you were used to the previous organization. It has always been a mess of "catering to old UNIX paradigms" while also "trying to squeeze in the latest new thing." Old UNIX guys have always complained whenever the GNU tools had a different behavior from what they were used to, including changes that you take for granted. Bash was once new, and some people still don't like it.

    Do you remember the first time you saw a UNIX filesystem? Think back. You have directories like etc, usr, and var. "usr" doesn't really contain user information. "etc" doesn't include miscellaneous files. "var"? WTF is "var"?

    None of that shit ever made sense. It's what you were used to. If we set out today to make a sensible, orderly, logical, clean system, it would not look like modern Linux, and it would not look like old Linux.

    • by John Goerzen ( 2781 ) on Wednesday February 11, 2015 @03:50PM (#49032163) Homepage

      Actually, I DO remember the first time I saw a Unix filesystem. It was on FreeBSD. And it DID make sense. When I switched to Debian not long later, there was this document that eventually became the Filesystem Hierarchy Standard (FHS). It clearly spelled out where things lived, and in Debian non-compliance with the FHS was a bug (and once the notion of a release-critical bug was invented in Debian, it was a release-critical bug.)

      Part of the problem here is that we are in a twisty little maze and every passage looks alike, and our flashlight ran out of batteries in 2013. The manpages, to the extent they exist for things like cgmanager and polkit, describe the texture of the walls in our little cavern, but don't give us a map to the cave. Therefore we are each left to piece it together little bits at a time, but there are traps that keep moving around and it is slow going.

      Add to that the fact that it's a damn big cave.

      I could understand the FHS in about 10 minutes. This stuff? Would probably take weeks.

      The order of magnitude of complexity is entirely different. It came out in the comments on my post that Fedora finally threw up their hands, and the reason that Wifi works out of the box there is because they just expose all wifi passwords to all users of the box. Whoops. Could you have known that by looking at the permissions with ls? Nope. You'd have to read some XML file in a location that network-manager never mentions.

  • by jzu ( 74789 ) on Wednesday February 11, 2015 @01:03PM (#49030165) Journal

    The old Unix rules of thumb, roughly:

    - Use text whenever possible
    - Performance is not paramount, so use C
    - And do one thing at a time but do it well - connect small specialized tools to build complex applications
    - Documentation, while terse, should cover all features
    - The filesystem is a simple tree starting with /

    Let's see what modern Linux does:

    - Lots of binary stuff everywhere, where text would do
    - You'll boot up faster with systemd, oooh yeah baby, totally rad!
    - Oooh, and it's more integrated, one single process does everything!
    - Look for processes with strange names running on your machine, then try to find any documentation on them
    - gphoto2://[usb:008,044]/store_00010001

    The last one makes me angry. It's VMS all over again: is anyone here old enough to remember host::disk$1:[directory]file.ext;version? I can't find another way of accessing my phone data. I can't, for the life of me, mount it the way I would mount another volume.
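
    (For whoever wants a workaround: the gphoto2:// URL comes from the desktop's GVFS/KIO layer, but FUSE helpers exist that expose the same device as an ordinary mount point. A sketch, assuming the gphotofs package is installed; MTP-mode phones have jmtpfs/simple-mtpfs equivalents:)

      mkdir -p ~/phone
      gphotofs ~/phone        # mounts the first PTP/gphoto2 device like a normal directory
      ls ~/phone
      fusermount -u ~/phone   # unmount when done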

    Guys like Poettering couldn't care less. They have a vision, for sure, and they have good ideas sometimes. But there are really two issues here: a good idea is not sufficient when you engineer a system, and their vision is not Unix. To hell with simplicity, to hell with consistency.

  • by iONiUM ( 530420 ) on Wednesday February 11, 2015 @01:05PM (#49030181) Journal

    The behaviour of "Linux" (all the distributions and kernels) as a whole is exactly the same behaviour you see in companies with poor management. Everyone is working on stuff, and maybe even working hard, but all those things don't add up to the whole. There's no 1 person over-seeing it all to ensure everyone is working smart, and in the same direction.

    To me, this is what is happening with Linux. Everyone has ideas, and some of those ideas are great, but when everyone can fork and create and merge without an overall management process, you end up with a bit of a mess and mass confusion for those on the "outside."

    This is both the advantage (choice) and disadvantage (lack of alignment) of Linux. Should I use Gnome or KDE or Unity? Do I even know what those are as an end-user? Should I?

    When I get OS X, I know what I get. When I get Windows, it's the same. Everything (mostly) from the previous version will work with this version, the interface isn't some massive surprise, etc. (which is partially why Windows 8 was such a fiasco; things WEREN'T compatible and the UI was totally different).

    At the end of the day, what needs to happen is exactly what most Linux devs hate the most: a large corporation with 1 vision needs to come in and create a clean, uniform experience that allows consistency and compatibility for years/decades, and reduces "choice" to a degree in order to provide consistency.

    To some degree, you can argue RedHat did this a bit, especially with packages, but everyone hates on them too now..

  • Not just Linux (Score:4, Insightful)

    by tverbeek ( 457094 ) on Wednesday February 11, 2015 @01:14PM (#49030265) Homepage

    It isn't just Linux; it's the nature of modern systems to become "too complex". Back in the days of my youth, it was possible for one person to grok an entire operating system, but it simply isn't possible anymore, unless it's a tightly-focused and built-to-purpose system.

  • by Anonymous Coward on Wednesday February 11, 2015 @01:17PM (#49030295)

    of the tenets of UNIX. Yes, I realize we are in the modern era, and UNIX is over 40 years old, but the tenets that made UNIX great are still valid to this day.

    The tenet I wanted to touch on is the tenet of UNIX that suggests we "keep it simple". Complexity is not only the enemy of this tenet, it's also the enemy of security and common sense.

    I have, in the last year, begun a move of my critical machines to BSD variants, namely FreeBSD and OpenBSD. OpenBSD in particular, exemplifies the tenets of UNIX better than any other OS that is in use. Theo and team correctly understand the issues of complexity and security and their product reflects the care they take. I liken their work to that of a gardener and his bonsai trees.

    FreeBSD is rapidly becoming the "go to" OS for those who are disillusioned with what Linux has become -- namely bloated, complicated, and difficult to deal with. Linux, while intentionally what it is in terms of choices, has become fractured internally, with respected long-standing developers leaving for one reason or another. Some of this is because of systemd, some of it for other reasons.

    Let's be honest for a minute. Linux is not a bad ecosystem. It has become a difficult maze of kernel, weird and varied frameworks, too many userland utilities and DE/WMs; and the legacy cruft that Windows and Apple were accused of is there for all who have eyes to see. Nothing is perfect, obviously, but the creep is evident and obvious. I'm severely disappointed with the notion of binary blobs, something OpenBSD correctly rejects out of hand.

    For me and my IT shop, we are headed towards the BSD camp because of the above and because I value stability and engineering above all else. BSD has always cared more about being "correct" than cool. BSD is engineered, while Linux seems haphazard. My .02.
