Greg KH Favors Rolling Release Distros
jones_supa writes: In an interesting Google+ post, the lieutenant Linux developer Greg Kroah-Hartman mentions that he has fully moved to rolling-release Linux distributions: 'Finally retired my last 'traditional' Linux distro box yesterday, it's all 'rolling-release' Linux systems for me. Feels good. And to preempt the ask, it's Arch Linux almost everywhere (laptop, workstation, cloud servers), CoreOS (cloud server), and Gentoo for the remaining few (laptop, server under my desk).' What's your experience? In the current situation, would a rolling-release operating system indeed be the optimal choice?
Uh (Score:5, Informative)
Re:Uh (Score:5, Funny)
Re:Uh (Score:5, Funny)
hah, what a newb.
I really wish Greg had told us which distro he is using, though.
So much for stability and uptimes... (Score:5, Informative)
I guess that era is now gone, with rapid-release and lots of little things constantly needing the system to restart.
Re:So much for stability and uptimes... (Score:5, Informative)
Some of us still work in environments where constant restarts are strictly not allowed, and software which expects to be on a constant release cycle is shunned.
We had a vendor once, who wrote a component for a large enterprise system ... they released builds pretty much weekly and thought that was grand.
We filed a bug once, and they said "we don't support that version because it's a month old, and therefore 4 versions out of date, you need to upgrade". We said "you'll be hearing from our lawyers because we don't take a prod outage every week just for you idiots". Needless to say, they quickly realized they were going to lose that fight.
Sorry, we need a lot more stability, and we don't care if you think you're on an agile cycle. It takes around two months to promote something through to Production ... we simply don't care that you want to build weekly.
Not all places (specifically most regulated industries) have the ability to have stuff constantly changing underneath them, and they certainly haven't got the patience for some company who thinks a product lifecycle is measured in weeks.
Continuous releases often have the effect of making your customers your beta testers. And we can't do that for you.
Re:So much for stability and uptimes... (Score:5, Insightful)
Re: (Score:2)
And if you're lucky, the people you deal with are all seasoned veterans of the version you want support on, and know all the fixes and troubleshooting info.
If you're not lucky, the people providing support for your version are clueless newbies who've never seen your version in active production and are relying on the internal KB and decision trees they stumbled across on an old file server.
And you could blame the vendor for being douche bags and that might be true, but then again, maybe the seasoned veterans
Re: (Score:2)
Re: (Score:2)
But then again, there are douche bag customers, too, who refuse to update and insist on running grossly outdated software. Usually it has nothing to do with grizzled, old-school IT vets and their deep regard for mainframe-era stability, but everything to do with super douchy business owners who just want to cash checks.
I *just* did a project for a customer like that. They built a brand-new infrastructure (which is quite good in terms of actual hardware) so they could install "new" 2003 r2 x86 servers and run an old x86 version
Re: (Score:2)
I've been in the industry long enough to have seen plenty of "customers" demanding to save money by short-changing long-term decisions, only to pay more money in the long run to still have a non-viable system in place, because ... well, they don't want to spend the money to do it right.
Your post just reminds me of spending good money on bad ideas in the name of saving money that is never saved.
My current philosophy is to help guide people into doing things right, even if it costs a little more now, with the
Re: (Score:2)
This isn't just in business: most political decisions don't look past the next election, let alone consider how they will play out ten or thirty years down the line. Smart decisions like that require someone to be brave, and brave doesn't win more votes than the "shiny thing, here's money" that most political promises seem to offer.
Re:So much for stability and uptimes... (Score:5, Interesting)
You know it's interesting. I used to work in finance. We, like you it seems, had a very locked down production environment with huge amounts of testing - pushing builds through multiple stages, reviews and signoffs. Once every month or so we'd shut everything down for a few hours in the middle of the night and roll the world forward. Stability was everything. Downtime was OK if scheduled, a disaster if not.
Now I work at a web company. We push to prod multiple times per day. There's a process, there are reviews and approvals, but it all happens much more quickly and at a more granular level. Change is constant but small, as opposed to infrequent but total. What's more we're a 24/7 operation so no downtime (as visible to the user) is acceptable. We simply can't schedule a few hours to do our rollout - everything has to happen live.
You know what I've noticed? We're no less reliable, overall, than the bank was. Yes we have issues, but they tend to be noticed, and fixed, much much faster. When you change everything all at once you run the risk of not being able to figure out what broke when inevitably something does. Rollback is painful because you have so many interdependent changes - in the end you have to pull the whole release to avoid one small issue in a single module. When you roll frequently the scale of change is small so isolating the bug is trivial, and rolling it back the same. Now of course there are huge differences in risk when you're handling people's money vs their cat photos, but I think the view that people working on an agile schedule don't care about stability, and that the only way to achieve stability is through reducing the frequency of change, is demonstrably wrong.
Re: (Score:1)
The problem with your first company was the process itself. Downtime was OK? Hello! What? Also, a rollback should be an extremely rare event (on either schedule). Lots of rollbacks/downtime show that the people managing the project are not serious about uptime/stability/etc - in which case, the point is moot.
Re: (Score:2)
You know what I've noticed? We're no less reliable, overall, than the bank was. Yes we have issues, but they tend to be noticed, and fixed, much much faster. When you change everything all at once you run the risk of not being able to figure out what broke when inevitably something does. Rollback is painful because you have so many interdependent changes - in the end you have to pull the whole release to avoid one small issue in a single module. When you roll frequently the scale of change is small so isolating the bug is trivial, and rolling it back the same. Now of course there are huge differences in risk when you're handling people's money vs their cat photos, but I think the view that people working on an agile schedule don't care about stability, and that the only way to achieve stability is through reducing the frequency of change, is demonstrably wrong.
This is something that all Gentoo users know, either intuitively or from experience. Gentoo is an interesting case study as it's a rolling release distro (so no discrete releases) where updates have a non-trivial cost (compile time), relative to other distros. The result of this is that users delay non-critical updates significantly, which means that the Gentoo community has a fair bit of experience on the trade-offs of different update granularities. (I believe most people follow a weekly cycle.)
The short
Re: (Score:3)
Re: (Score:2)
It makes the product sound like a Steam "early release" rather than a production system, and totally impractical for a live business environment. Some of this stuff is just too much "seat of the pants" material.
I remember working in system admin, and the product testing hoops that had to be jumped through by the testers were phenomenal. They'd have products in test for three or more months before they'd even start raising notions of sending it out to get approval/review for sending to a live system. Hell I treat my
Re: So much for stability and uptimes... (Score:1)
The one thing that keeps me from using OpenBSD is that every time I consider it, I remember its close-to-nonexistent filesystem support, even for the most common ones such as ext3.
no journal, no deal for me.
Re: (Score:1)
Spoken like a clueless AC who mindlessly suggests open source for everything.
BSD and Linux do not usually have functioning enterprise software for a lot of things.
So your little toy won't cut it, doesn't have the same kind of software, and what you say is utterly meaningless.
This is precisely the difference between corporate production environments and what some smarmy little twit thinks can be solved with "just use BSD or Li
Re: (Score:1)
We use Linux and BSD in our production software, and so do most major corporations with servers. Maybe you want to re-evaluate who thinks what is a toy, and who it is that is mindlessly spewing stuff.
Re: (Score:2)
The magic words here are "accountability" and "support contracts". Some people are willing to do things with open source software and wing it with the potentially marginal support they get. Others do things in-house and have support agreements with their support teams, with virtual money flowing between groups to provide the support. Others are happiest with support contracts, so that they can lean on the supporting groups to MAKE them find a solution if they have to.
I'm not saying "linux isn't for t
Re: (Score:2)
I hope you're not recommending OpenBSD as an alternative for better long term support.
More than a year old? You're out of luck. [tedunangst.com]
Re: (Score:3)
Re: (Score:2)
Instructure is trying to get people to use Instructure's cloud hosting, not to use a self-hosted model. I expect that contributes to their customers' migration
Re: (Score:2)
One thing I really loathed about using Maven in a production environment was the younger developers treating the fact that you could pin whatever version numbers you wanted as a panacea against having to worry about what an update will do.
Re: (Score:2)
I think we can conceptually do a rolling release without trouble. I've even written up how to do it: add DT_RUNPATH to each binary in a package pointing to /usr/packages/$PACKAGE/$VERSION/lib; install any compatibility packages into their own /usr/packages/$COMPATPACKAGE/$VERSION/; and symlink the old libraries, e.g. /usr/packages/$PACKAGE/$VERSION/lib/liboldshit.so.1 pointing to /usr/packages/$COMPATPACKAGE/$VERSION/lib/liboldshit.so.1.
When the binary loads, it'll look for every library in /usr/packages/$PACKAG
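For anyone who wants to picture it, here's a rough sketch of that layout (package names, versions, and the patchelf/linker invocations are purely illustrative, not a finished tool):

    # hypothetical layout: every package version gets its own prefix
    mkdir -p /usr/packages/foo/2.1/lib
    mkdir -p /usr/packages/oldshit-compat/1.0/lib

    # bake a DT_RUNPATH into the binary at link time...
    gcc -o foo main.o -Wl,--enable-new-dtags,-rpath,/usr/packages/foo/2.1/lib

    # ...or retrofit one onto an existing binary (recent patchelf emits RUNPATH;
    # older versions may write RPATH unless told otherwise)
    patchelf --set-rpath /usr/packages/foo/2.1/lib /usr/packages/foo/2.1/bin/foo

    # the compat library lives in its own package dir, symlinked into foo's lib dir
    ln -s /usr/packages/oldshit-compat/1.0/lib/liboldshit.so.1 \
          /usr/packages/foo/2.1/lib/liboldshit.so.1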
Re: (Score:2)
I don't think it's gone completely. For the consumer, yes, it's over, because they want the latest and greatest NOW regardless of possible flaws. I think the only reason consumers ever had a non-rolling-release model is because tech originally started with the enterprise and
Re: (Score:2)
I agree with almost everything you wrote there. A month ago, I could watch Flash videos just fine in Firefox. A Firefox update comes round, then I install a couple of security updates for Flash, and now roughly half the time I play a Flash video the browser locks up and I have to kill the process. Given that I've spent much of this week watching training/conference material on sites using Flash videos, I'm no longer able to use Firefox for work. (Bonus snide remark: If the Firefox team spent more time fixing fu
Re: (Score:2)
Right. If it ain't broke, don't fix it.
Re: (Score:2)
Re: (Score:2)
Re: So much for stability and uptimes... (Score:5, Informative)
Using separate apps and libraries which have strict and unavoidable dependencies between them isn't "modularity".
Modularity requires those components to be very loosely coupled.
For example, GNOME consists of many separate libraries, apps, and scripts, but it isn't modular. Installing just one small GNOME app means you have to pull in tons of libraries and other apps, because they are tightly coupled.
Systemd is similar to GNOME. It's an all-or-nothing situation, which obviously isn't modular.
Traditional UNIX software generally is modular. I can easily change my shell, for example, without affecting the other software on the system. I can even install a different C compiler, and none of the other software on the system would even be aware of the change. That's true modularity.
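A couple of concrete (and admittedly distro-dependent) illustrations of that loose coupling:

    # swap my login shell; nothing else on the system cares
    chsh -s /bin/zsh

    # on Debian-ish systems, point the generic 'cc' name at a different compiler
    # (the mechanism and alternative names vary by distro, so treat this as a sketch)
    sudo update-alternatives --config cc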
Re: (Score:1)
don't forget that they're adding a bootloader now too... I can just see it now:
systemd bug report:
Missing kitchen sink.
Anyways, I'd been planning to move away from Ubuntu for some time, and went as far as experimenting with Fedora Core 20/1, and HAD been toying with the idea of moving to Arch.
I ended up installing Arch on a new A10-7850K build, as I wanted to toy around with HSA, and since support has really just been rolling out over the last few months, I needed something a little more bleeding edge than the "traditi
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
There are scripts on the Arch Wiki that alleviate most of the PITA aspect of Arch. I found them after I had done what you did, though I can't say I found it terribly difficult to set up as long as I followed the steps in the wiki. Then again, this was after a few days of trying to get other stuff just to run at all on my system, so it's all relative. I've been looking to jump into BSD as a storage distro; sounds interesting based on your description. Personally I found Arch to be far more pleasing and user fr
Re: (Score:2)
But my app doesn't require a specific version of glibc. I just updated my glibc (only) a week or two ago.
If you include re-compiling, it can use a number of other libc implementations as well.
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
How come every systemd update on Debian turns the next system shutdown into a collection of random failures, if systemd is so modular and allows online updates? After a systemd update, the power button may not bring up the confirmation dialog (to select hibernate, restart, or shutdown) but instead initiates shutdown immediately, an NFS unmount may halt the system, etc. The "best" part of this behaviour is that systemd first disables the ability to jump to virtual terminals (ctrl-alt-fXX), then it puts the screen black and
Re: (Score:1)
G+ (Score:5, Funny)
Re: (Score:1, Informative)
Re: (Score:1)
Google+: Pretty much anything technical.
FB/etc.: Today I broke my little toe, I walked the dog, I smoked pot, etc.
Re: (Score:3)
The real news is, someone is still using Google Plus.
Why? What do you use? The facebook? (snicker)
Re: (Score:2)
The real news is, someone is still using Google Plus.
Why? What do you use? The facebook? (snicker)
Is it inconceivable that the man uses nothing?
Re: (Score:2)
https://www.youtube.com/watch?... [youtube.com].
Situation Dependent (Score:5, Insightful)
Even within 'rolling release' distros there is huge variation in exactly what that means: what changes, how often, which parts are rolled vs. versioned, and how much control the user has over holding things back. That combines with a whole matrix of use cases to determine exactly how much manpower using such a distribution within an organization will eat up. So yeah, 'it depends' pretty much sums it up.
How critical is stability? (Score:4, Insightful)
For a machine that you would just blindly take updates for anyways, rolling releases are probably convenient.
For mission-critical systems where every change should be tested first, it's probably a bad idea unless rolling back is very easy, as it might be in a VM-with-easy-snapshots environment.
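For example, with a libvirt guest or a btrfs root the "undo button" can be made nearly free before pulling updates (the guest and snapshot names below are made up):

    # libvirt guest: snapshot before updating, revert only if things go sideways
    virsh snapshot-create-as webvm pre-update-jan
    # ...run the update inside the guest, test...
    virsh snapshot-revert webvm pre-update-jan

    # or, if the root filesystem is a btrfs subvolume:
    btrfs subvolume snapshot / /.snapshots/pre-update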
Re: (Score:2)
Comment removed (Score:4, Insightful)
Re: (Score:2)
I'm no database expert by any means, but isn't it possible to put the database (its data store I mean, not the application) on a separate partition or drive, and mount that at boot-up time? Shouldn't that solve this problem?
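Something like a dedicated volume in /etc/fstab is what I have in mind (the device and mount point are just illustrative):

    # /etc/fstab -- keep the data files on their own volume, so reinstalling
    # or upgrading the OS never touches them
    /dev/vg0/dbdata   /var/lib/mysql   ext4   defaults,noatime   0  2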
Re: (Score:2)
Of course they do, on a schedule, when the system is minimally used. Where I work this usually means that the updates get pushed into production on a Sunday (branch offices are guaranteed no business on Sunday) after they've been verified to not crap out on the Model environment. And Model doesn't get the rollout until the update has been verified in the Test environment for at least a week. Also, on mission critical systems, updates are only rolled out once a month at the fastest for only the most criti
I use Gentoo - but not for much longer (Score:4, Interesting)
I've been using Gentoo for many years, and temporarily switched to Funtoo on my personal laptop. I've since graduated and don't spend nearly as much time on my laptop as I used to, which these days mainly runs MythTV.
I don't think I'd continue with Gentoo - it takes too much time to sort through updates, figure out which packages need to be masked, etc. I'd rather go to Arch next, although I was considering Debian unstable.
Recently, my video card stopped being supported by the newest nvidia drivers, and the newer versions of Xorg weren't compatible with the old driver. My mask list keeps growing as more and more packages pick up dependencies on newer versions of Xorg. I always figured Portage should honour my masks and keep everything else at the latest version without stepping on them, but it wants me to do everything manually. If package 1.2.3 is incompatible with my Xorg, I'll mask 1.2.3 and newer. There is a slight chance, however, that 1.2.4 will be compatible, but it doesn't matter: since Portage made me mask out 1.2.3 and newer, I'll never even know.
Re: (Score:2)
If package 1.2.3 is incompatible with my Xorg, I'll mask 1.2.3 and newer. There is a slight chance, however, that 1.2.4 will be compatible, but it doesn't matter: since Portage made me mask out 1.2.3 and newer, I'll never even know.
Gentoo lets you mask only a specific version of a package with =package-1.2.3.
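For instance, in /etc/portage/package.mask you can be as narrow or as broad as you like (the package name here is a stand-in):

    # mask only the one version known to clash with my Xorg
    =x11-drivers/xf86-video-foo-1.2.3

    # versus masking that version and everything newer
    >=x11-drivers/xf86-video-foo-1.2.3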
Re: (Score:2)
Re: (Score:2)
Good for developers ... (Score:5, Insightful)
I think rolling releases are good for developers, and give you that whole agile thingy ...
But really what it instills is a culture of "almost got it" where you'll run the risk of breaking your user's systems and then just say "whoops, we'll fix that next time".
I think it leads to sloppy release engineering (because, after all, it's just a build), and will be fundamentally incompatible with how companies need to do IT.
And every time I see Firefox telling me "It is strongly recommended you upgrade to this version" what I really see is "holy crap, did we inject some garbage in that last one".
I think in general the "continuous release" says "we're not worried that people in the real world can't do this, and we don't care ... we'll fix it on the next release ... maybe".
So, for your personal desktop, or a sandbox, or a toy ... sure, have at it. But for a real machine, doing real work ... I think "continuous release" is a terrible idea.
Because in the real world, we're not prepared to patch Prod systems just because you committed some new changes -- we have bigger issues to deal with than constantly updating software to keep you happy.
I should think nobody in a corporate environment is a fan of that. And if you're a small shop of 20 people who are risk takers ... you're not in what I'd call a corporate environment.
Re: (Score:3)
Even when it's not broken, it's different, needs t (Score:3)
Agreed. Also, even if it's not _broken_, I don't want things constantly changing under my feet without even being able to meaningfully talk about what changed in different versions.
It's good to be able to say "here are the major changes between "Windows 7 and Windows 8". It's definitely good to be able to say "this software works on Windows 8", rather than "this software works on versions released between 2013-10-12 and 2015-01-03".
Re: (Score:3)
Rolling releases are *not* good for developers when an update breaks your build environment. What known good previous version do you roll back to?
Re: (Score:2)
I guess that means that all of us who don't happen to run Gentoo are just SOL, then?
Re: (Score:3)
I run a small scientific laboratory (3-5 people depending on the season) that is very much like a startup. Our primary product is scientific output, and stability is paramount for us, even though we're small. We have standardized (by edict from me, The Boss) on one version of Word, one version of OpenOffice, one version of Matlab, one version of Windows (well, two, because we have some older XP systems used in data collection), etc. The versions selected for standardization shift, but only slowly (ie, it
Re: (Score:2)
This is the exact scenario where you end up with people still using XP machines and IE6 (seen just last week).
Re: (Score:2)
Oh please. People are still using XP/IE6 not because of a particular cycle, but because their company is using some shitty internal web app that only works on IE6.
Gentoo works for me (Score:3)
I've been using Gentoo on all my personal machines for the last decade or so.
Works fine as long as you pay attention.
--dost
Re: (Score:3)
I may be a masochist, but I feel kind of sad when I do an "emerge -uDp world" and nothing comes up. I start feeling like I should install more stuff.
Re: (Score:1)
You can always re-merge webkit, that one never lets me down
Re: (Score:2)
I may be a masochist, but I feel kind of sad when I do an "emerge -uDp world" and nothing comes up. I start feeling like I should install more stuff.
I don't think I've done that in over a year... I'm kind of afraid to at this point.
Re: (Score:2)
It helps if you do an "emerge --sync" first!
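Right, and the usual loop is something along these lines (exact flags are a matter of taste, and newer Portage prefers the @world set spelling):

    emerge --sync          # refresh the tree first
    emerge -uDNp @world    # pretend run: -u update, -D deep, -N changed USE flags
    emerge -uDN @world     # do it for real once the list looks sane
    emerge -p --depclean   # preview orphaned dependencies afterwards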
Re: (Score:2)
I've also been using Gentoo on my desktops for about the same amount of time; it's by far my favorite of all that I've tried. And while the ricer-level make options don't have as much effect on performance as they used to, I still like the configurability of the whole thing.
That being said, I wouldn't run Gentoo in a prod environment for any amount of money; it's Debian or a Red Hat-based distro, all the way. The nuances of the portage tree from week to week just lend themselves to too much instability on wha
Debian SID (Score:5, Interesting)
I've been using Debian unstable in my personal computers for years. Occasionally, something breaks.
But I prefer the long term support of Debian stable and CentOS for internet facing servers and lab workstations.
Here, it's important to be able to get security fixes without fear of breaking anything for years.
Re:Debian SID (Score:5, Insightful)
Home USE !=Business Use (Score:3, Funny)
You want your systems to be running stable, known-working, and reliable code. Who cares if it's version 10 and not version 10.0.4134? Let the dev monkeys play with the updates in the background, and when a service release is out, test it further.
Unless there is a positive gain (security, feature release, or annual patch) then the old code is just fine. It works, don't touch it, leave it the hell alone and go play with your crap in your lab.
Another reason I hate the DevOps movement. It combines the worst habits of a Dev Monkey and a System Admin.
Re: (Score:2)
I'm not sure how far you're going in your thoughts here, but I know I care about the version and there are lots of occasions when the old code ain't fine.
Falling too far behind can turn into an even larger problem down the road when you need to update software A to resolve an issue but you can't, because you're too far behind and there's no longer an upgrade path, because they've done something major (like switched from MySQL to PostgreSQL) on the backend. Better yet, sometimes software A (which is already b
Re: (Score:2)
r.e. falling behind, yes that sucks.
My old employer fell so far behind on Cisco Call Manager that the version they had was out of support, and Cisco would only touch their issues when billed at a T&M rate (i.e. it ran on Windows). Their system was so big, complex, and unwieldy that it took the better part of 18 months of planning just to update to a version of CCM that was even remotely current. I left shortly before that mess went live; that would have been a shitty teething period.
Some of us run businesses on Linux (Score:1)
Some of us run businesses on Linux. At the company I work at, the product we give to customers is delivered using Linux platforms. We are too busy making money with Linux to be spending all day figuring out if a given software update for some unstructured "rolling release" breaks some program our business needs.
This "rolling release" nonsense is a euphemism for "we're too lazy to properly test, package, freeze, and take responsibility for a given version of our software". No, I don't want to add 20 features
Re: (Score:2)
Re: (Score:3)
That is why we use CentOS for most of our critical servers at work. There's something to be said for 10-year support cycles.
The trick is that then the upgrade at 8 years is a nightmare.
The real problem is that people don't know what they've installed, how they've configured it, and how to upgrade it. Devops really is the answer. My puppet modules work at least on CentOS, CentOS -1, and Fedora/Fedora -1, so I figure out changes on Fedora, and eventually retire the CentOS -2 releases. My CentOS 5 is all
"Rolling Rease"? It's called CI somewhere else. (Score:3)
In software development, especially server-side web development this is called continuous integration (CI for short). I have nothing against it, if automated testing, instant rollback and other things are in place. And if the distro has solid quality control and feature management. ... Somehow I doubt that though.
If a distro crew knows what they are doing, I'd trust them with rolling releases. ... Maybe I should try this Arch Linux thing out. Any experiences? Any advice?
Re: (Score:2)
That was going to be my response... I think rolling release is probably a good idea, having lived through the nightmare of enormous organizations that spend 4 or 5 years upgrading from Windows XP / IE6 to Windows 7 and the huge inertia of all that. The shitty old mire of horrendous hacks that you have to dig through to move this sisyphean rock of organizational code, and then everything breaks anyway because no-one actually tests things *properly* when they do their migration plans.
An environment that caref
Comment removed (Score:3)
No surprise (Score:1)
Greg was an active Gentoo developer.
Void Linux (Score:2, Informative)
Try Void Linux [voidlinux.eu], a rolling distro that doesn't suck:
- System-wide LibreSSL by default (maybe the first Linux distro to do so)
- runit instead of systemd
- multilib aware
... and more.
Arch for a while, FreeBSD for life (Score:1, Interesting)
I personally prefer rolling release.
I used to use Arch, until they started trashing their ecosystem. Giving up KISS by adopting systemd, moving /bin, etc. It became a bitch to maintain with all the breaking changes imposed.
Ultimately I found FreeBSD's ports to be an amazing rolling release system. Far more stable than Arch, and you don't have to break your junk if you don't want to. And that makes me happy. The kernel moves in increments; everything else you compile or download as binaries. I end up with a much more
Depends on the target user... (Score:2)
I am absolutely not surprised by this: A well-known kernel hacker has enough system-wide understanding for the occasional glitch to become obvious. He also most probably uses a very specific subset of programs for his day-to-day activities — I (a very far cry from his skill level) haven't changed my main tools in over ten years. I mean, a tiling window manager, Emacs, a browser... Specific little tools can vary, but they won't jeopardize my system's overall behaviour — This means, it won't mean
Who's doing what now? (Score:2)
the lieutenant Linux developer Greg Kroah-Hartman
The what? Did he develop "lieutenant Linux," with a small L? Or is he a lieutenant like Columbo?
Other than that, what should I know about who this guy is? Because the summary (which is, I'm told, also the article) tells me nothing.
Who's doing what now? (Score:1)
He reports to Colonel Panic and General Protection-Fault :)
Re: (Score:2)
He's a kernel lieutenant. Which means that he's one of the guys that Linus Torvalds trusts to shepherd patches into the main line of the kernel. In other words, he's one mean code-farmer. Not your average user.
Re: (Score:2)
2007: https://www.youtube.com/watch?v=L2SED6sewRw [youtube.com]
2014: https://www.youtube.com/watch?v=fMeH7wqOwXA [youtube.com]
Arch... Ugh. (Score:5, Insightful)
Arch breaks. Often. Breakage is the trouble with rolling release distributions, and an intolerable problem for anyone not wanting to spend the time un-breaking things.
Loyal but naive Arch users are always quick to defend it, "my system has never broken" "you must be doing something wrong" etc. but these discussions are always about semantics. Just because it's a one-liner to fix doesn't mean that it isn't broken. If it requires my attention to keep working, then it's broken. Just because it is fixable doesn't mean I want to spend time fixing it.
Arch is a great way to learn Linux, and the Arch wiki is a great resource not exclusive to just Arch. But you'd have to be out of your mind to use it for anything in production. The Arch FAQ makes it pretty clear: YOU, the user, are responsible for keeping your system updated, functional and stable; but the more packages you have installed, the more likely you are to get broken when upstream updates something.
Also from Arch docs:
Warning: Do not be tempted to perform partial updates, as they are not supported by Arch Linux and may cause instability: the whole system should be upgraded when upgrading a component. Also note that infrequent system updates can complicate the update process.
Translation: You want to update package foosicle-1.2 to foosicle-1.3 because it has a security problem. Oh, you don't want to update X, Firefox, KDE, and the kernel? I hope you do want instability then. BTW, stay on top of your updates unless you want to get really hosed.
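In pacman terms, the supported path versus the "partial" one the warning is about looks roughly like this (foosicle being the made-up package from above):

    pacman -Syu            # supported: refresh the sync DBs AND upgrade the whole system
    pacman -Sy foosicle    # partial update: new DBs, one new package, old libs -- unsupported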
No thanks.
I use Ubuntu LTS releases on my computers at work for three reasons:
1. Reading the Arch wiki to un-fuck Java after I updated my system to fix a security issue for a different package is not a good use of my time.
2. Not a good use of my time to compile from source because the distribution ships with something ancient or doesn't have it at all (I'm looking at you, RHEL).
3. Will keep getting updates for the lifetime of the hardware.
Re: (Score:1)
Actually, FreeBSD-STABLE really is the definition of that word. I've been tracking it monthly for 8 years and it's only bitten me three times, twice because I failed to read the mailing list where someone else had discovered the issue first.
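For the curious, tracking -STABLE from source is basically the Handbook procedure, roughly (the branch path depends on which release line you follow):

    # grab the branch you're tracking (stable/10 as of this writing)
    svn checkout https://svn.freebsd.org/base/stable/10 /usr/src
    cd /usr/src
    make buildworld && make buildkernel
    make installkernel
    reboot
    # after the reboot (the Handbook also recommends a 'mergemaster -p' pass first):
    cd /usr/src && make installworld && mergemaster -Ui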
Bad experience (Score:2)
My experiences with rolling-release software have been unpleasant, so I will continue to avoid it to the best of my ability.
Re: (Score:2)
Meh (Score:1)
Great for Desktops (Score:2)
IMO, rolling releases are great for desktop/laptop machines, but not so great for servers. There's something to be said for installing and configuring your OS on your work machine exactly once, for the life of the machine, and then it just stays up to date. No more twice a year upgrades that bork everything (I'm looking at you, Ubuntu) and make you reinstall anyway, no more "backport" repositories if you want to run the latest KDE or LibreOffice or whatever. Small, incremental updates are actually a lot eas
Re: (Score:2)
IMO, rolling releases are great for desktop/laptop machines, but not so great for servers.
I would have said exactly the opposite. I strongly dislike rolling releases in general, but my experience is that they're less annoying on servers than on end-user machines. Rolling releases mean that you have to put up with unexpected UI changes, which is what makes them hurt.
Good for him? (Score:2)
I don't. I like predictably scheduled releases. Ubuntu's release strategy particularly pleases me, with predictable releases every 6 months, and long term support releases every 2 years, with support for upgrading either from regular release to regular release, or from LTS release to LTS release.
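On my LTS servers that mostly comes down to one setting plus the upgrade tool (paths as on stock Ubuntu, to the best of my recollection):

    # /etc/update-manager/release-upgrades
    # ("lts" = only offer LTS-to-LTS jumps on these boxes)
    [DEFAULT]
    Prompt=lts

    # then, once the next LTS is ready:
    sudo do-release-upgrade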
Of course, I don't run Linux as a desktop platform, so Ubuntu still works nicely for me in a server environment. I tend to run only LTS releases on important servers (typically waiting until 6 months after an LTS re
Not for production use (Score:2)
Rolling distros are great if you are a technology enthusiast and completely manage your own machine. If you are supporting a large number of users or servers, you want to test a fixed configuration and deploy it to everyone once a year. In general, the key to stability is to branch the code at some point and focus on bug fixes rather than new features/cleanup/refactoring.
Another idiot on the ignore list (Score:2)
He's apparently abandoned the idea of a long-term stable box.
Since he's abandoned that idea, everything he says has just become useless advice at several of my IT jobs, where we depend upon long-term stability and reliability.
*AND* he's using Arch as a primary. Nope. Too much bloat.
Mark down yet another useless person to listen to on the list.
Appropriate captcha: Detached - as in this idiot is detached from reality.
Re: (Score:2)
You want your kernel developers running something old and stable? I sure don't; I want him running something fresh, where he finds the bugs before they get to me.