There is a point where 10-year-old hardware based on non-replaceable parts has too high a maintenance cost. How much of the modern kernel is dead weight, kept around because 386-era Intel processors are still in use but no longer sold?
My first Linux boot was from floppies (Slackware) on a machine with 4MB RAM. As I recall, it was a 1.21 kernel, and I had to buy a CD drive to be able to make the boot floppies.
yes please (Score:5, Insightful)
It's causing bloat and maintenance headaches, get rid of it! It's an old CPU, it can use an old kernel.
Re: (Score:2)
Yep. They don't need the latest and greatest kernels on 20-year old machines.
They won't have any new hardware and they probably can't run the newest software anyway. Stick with an older kernel that's proven and has been working all that time.
Re:yes please (Score:5, Informative)
I went and read the original mail, and was surprised to learn that most of these were added fairly recently.
* asm9260 -- added in 2014, no notable changes after 2015
* axxia -- added in 2014, no notable changes after 2015
* bcm/kona -- added in 2013, no notable changes after 2014
* digicolor -- added in 2014, no notable changes after 2015
* dove -- added in 2009, obsoleted by mach-mvebu in 2015
* efm32 -- added in 2011, first Cortex-M, no notable changes after 2013
* nspire -- added in 2013, no notable changes after 2015
* picoxcell -- added in 2011, already queued for removal
* prima2 -- added in 2011, no notable changes since 2015
* spear -- added in 2010, no notable changes since 2015
* tango -- added in 2015, sporadic changes until 2017, but abandoned
* u300 -- added in 2009, no notable changes since 2013
* vt8500 -- added in 2010, no notable changes since 2014
* zx -- added in 2015 for 32-bit, 2017 for 64-bit, no notable changes
So I'm not sure the argument that you can't run modern kernels on these platforms really holds true. How much processing power does it take to run a bash shell anyhow?
Re: (Score:2)
can't run the newest software anyway
What is the newest software? Is Linux useless when it can't run Gnome and stream Netflix?
Re: (Score:2)
Re:yes please (Score:5, Informative)
Bloat doesn't have to mean runtime bloat. It can also be code cruft/bloat. Reading, debugging and maintaining code that exists incurs cost, no matter how well organized/modularized/compartmentalized it is. And often you can significantly reduce the complexity of a module by dropping support for codepaths that never get hit in practice, significantly improving ease of maintenance and associated things like onboarding.
Re: (Score:2)
Refactoring doesn't fix problems like 'hey, this CPU has a hardware bug and a feature doesn't work on it'. For a lot of systems you can add some form of software workaround and implement it under the HAL layer so nobody sees it. But occasionally you hit a block where that just doesn't work well enough. Maybe the workaround is too slow and introduces a timing problem elsewhere, or you're simply out of resources in the silicon. If the HAL can't fully abstract away the problem, then that adds bloat to th
Re: (Score:2)
The problem is that most developers won't have any way of testing how changes will affect that old hardware, and they can waste a lot of time trying to keep it working.
Re: (Score:3)
Generally, if nobody touched a particular piece of kernel code for years then it is either completely perfect or completely unused. Determine which and act accordingly.
Re: (Score:1)
There's the third possibility that the code is buggy, but still very much used in some very specialized setting which only exercises a tiny subset of its functionality.
IME that's like 99.99% of cases. Linux is a general purpose OS, yet it's mostly used in very specialized, embedded settings.
I wish that people would give up on this "code cleanup" obsession, which most of time means "let's fuck up stuff we don't have a clue about".
Re: (Score:2)
You've got the wrong idea about how the kernel process works, and the wrong idea about the importance of code cleaning. Very wrong. Dangerously wrong. Hope you're not fucking anything up where you work.
Re: (Score:1)
Right, I was trying too hard to be kind.
Most of the "code cleaning & refactoring" pushers aren't able to understand & DEBUG old code at all (or write completely new code), but they have to do something to leave their mark and be in the "process".
Re: (Score:2)
Most of the "code cleaning & refactoring" pushers aren't able to understand & DEBUG old code at all (or write completely new code), but they have to do something to leave their mark and be in the "process".
What are you going on about? In the kernel community such a person would be exposed immediately and invited to get lost. People can't just send in their shitty code you know, it has to go through a maintainer. And maintainers that fuck up by letting shit through do not last long, nor do their checkins.
Or just raise your hand (Score:2)
The other solution, other than using an older kernel, would be to let Arnd know that you're using whichever CPU. That is, if it's a CPU that can even support 16MB of RAM or so - several of these can't.
Re:yes please (Score:5, Insightful)
Yes true, but the reason of "no commits, so it must not be in use!" is not sound. Maybe it's good enough, and it doesn't need a constant slew of commits because those CPUs aren't as broken as Intel's? The attitude to just fork it because the users of the unpopular stuff can just support it themselves assumes that those users actually can support it themselves, or have the free time to do so. The Linux and open source philosophy is not merely "fix it yourself, if you can!"; it also includes "we support unpopular stuff that the big commercial companies won't", "we can make your old hardware work again", and "we don't treat our users like shit".
Re: (Score:2)
I think the argument is more "no commits, so it must not be actively maintained!" - and if it's not being maintained, then little or nothing will be lost if they drop it and let whoever IS still using that old hardware also use the old kernels.
Also, it's not like there's a huge security risk in leaving some of these architectures with kernels that are no longer updated. Sure, you can get your 30+ year old whatever working again, but you're probably not going to be using it for anything worth worrying about.
Re: (Score:2)
Even that logic is broken. No recent commits MAY mean it's not maintained and nobody cares, OR it may mean it hasn't been broken in a long time, so there's nothing to fix.
I haven't performed maintenance on the light in my utility room in well over a year. I *DO* use that light daily. It just hasn't burned out since I replaced the light emitting heat globe with LEDs.
Re: (Score:2)
This is sort of the difference between FreeBSD, NetBSD, and OpenBSD in some ways: PC-centric, versus traditionally supported platforms, versus platforms we think are interesting.
Having old stuff around is informative too. Otherwise you get newcomers complaining about having to use strange macros even though they're no-ops on Intel. Not so far fetched because I have heard similar sorts of complaints from various noobs in the past ("htonl is stupid!").
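What the "stupid" byte-order macros buy you can be sketched in Python with `struct` (an analogue of C's `htonl`, used here only as an illustration): the `'!'` format prefix forces network (big-endian) byte order no matter what the host CPU is, which is exactly why the macro is a no-op on big-endian machines but essential on Intel.

```python
import struct

value = 0x11223344
wire = struct.pack('!I', value)    # network (big-endian) order, like htonl
native = struct.pack('=I', value)  # host order -- the "no-op on Intel" trap

print(wire.hex())  # '11223344' on every host
# native.hex() would be '44332211' on a little-endian host, which is
# the portability bug the macro exists to prevent.
```

On a big-endian machine `wire` and `native` happen to be identical, which is precisely why code that skips the conversion can appear to work for years before breaking on other hardware.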
Re: (Score:2)
Re: (Score:2)
the reason of "no commits, so it must not be in use!" is not sound
I think the argument is more "no commits, so it must not be actively maintained!"
Which might also not be accurate. No commits might mean that it's darn near perfect if the hardware hasn't changed (and we're talking about older hardware).
Re: (Score:2)
So if someone wants their 32-bit Sparc support but also wants the latest USB drivers and network stack fixes... Maybe the real problem is that too much got shoved into a single code tree, and that it wasn't made more modular from the start?
No need to fork - just tell Arnd "I'm using that" (Score:2)
> The attitude to just fork it
There's no need to fork anything. Just tell him "I'm still using a prima2" and it stays in.
Re: (Score:2)
Yes true, but the reason of "no commits, so it must not be in use!" is not sound.
But that's not the reason. They're using that as a search criterion to draw up a list of candidates for removal. It is entirely sound to look for things that haven't been updated in a long time when building such a list.
Given how much the kernel ABI changes, if they haven't been touched for 5 or more years, then chances are no one is developing them enough to notice any breaking ABI changes.
"we support unpopular stuff that the big commercial companies won't"
And they have been supporting it. They've been supporting it for 5 to 10 years without any problems.
"we don't treat our users like shit".
The companies that sup
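The search heuristic described above can be sketched like this. The last-commit years are illustrative, taken from the platform list earlier in the thread, and both the cutoff year and the "mvebu" entry (included for contrast as an actively maintained platform) are assumptions for the sketch, not the actual criteria used:

```python
# Flag platforms whose last notable commit predates a cutoff year.
# This builds a candidate list for review -- it does not prove non-use.
last_touched = {
    "asm9260": 2015,
    "axxia": 2015,
    "digicolor": 2015,
    "tango": 2017,
    "mvebu": 2023,  # hypothetical actively maintained platform, for contrast
}
CUTOFF = 2018

candidates = sorted(p for p, y in last_touched.items() if y < CUTOFF)
print(candidates)  # the stale platforms; mvebu stays off the list
```

The point of the thread's disagreement is what the resulting list means: it is a sound way to find *candidates*, but each one still needs a human to determine whether "no commits" means "unused" or just "finished".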
Re: (Score:3)
5-10 years sounds like a long time. But... I am involved with issues at work where something drops support after 5 to 10 years and we're scrambling. 10 years may be a long time for a PC, but it's a short time for a lot of other things: the lifetimes of embedded devices, certificates, licensed customer support, and so forth.
Re: (Score:2)
The kernel ABI doesn't change all that much. At least, at the architecture HAL level. The kernel needs very little of the HAL, and big changes to the HAL aren't very common because doing so will break every architecture.
The rest of the kernel ABI is hosted by the kernel itself - things like driver APIs are well above the architecture HA
Re: (Score:2)
Re: (Score:2)
Can these old CPUs actually boot recent kernels? Are they being used in machines with at least, say, 16 MB RAM to begin with?
Not much point in keeping those if they are only used in QEMU.
Re: (Score:2)
Re: (Score:3)
Looking at a lot of these CPUs, they likely don't even have enough external address lines to get past 16MB or so of physical memory. It's also hard to imagine how many Sun 3s are still being used. It's difficult to make the case for keeping most of this code maintained.
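The back-of-the-envelope math behind the 16MB figure: a CPU with N external address lines can address at most 2^N bytes of physical memory (a rough sketch; real parts may bank-switch or map peripherals into that space, so the usable RAM can be even less):

```python
# Maximum directly addressable memory, in MB, for a given pin count.
def max_addressable_mb(address_lines: int) -> int:
    return (1 << address_lines) // (1024 * 1024)

print(max_addressable_mb(24))  # 24 address lines -> 16 MB
print(max_addressable_mb(32))  # 32 address lines -> 4096 MB (4 GB)
```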
Re: (Score:2)
This is about dropping after the LTS release.
So users of these platforms have 5 years to figure it out; that seems pretty reasonable to me.
Re: (Score:2)
I am surprised that most of these CPUs seem to have MMUs, though. Perhaps most of the MMU-less CPUs have already been removed from Linux.
Re: (Score:2)
- SPARC/Sun4M
A SPARCstation 20 can have four sun4m CPUs and 512MB of RAM.
- Alpha 2106x
- IA64 Merced (first-gen Itanium)
Both are 64-bit CPUs, available in servers supporting multiple gigabytes of RAM. Both support PCI slots and/or USB, so there is potential for connecting newly manufactured peripherals that wouldn't be supported by older kernels.
Older CPUs are also being reimplemented in FPGAs; for instance, look at the apollo-core, which implements an m68k-compatible processor for use as an accelerator or re
Re: (Score:2)
Re: (Score:2)
I'm not sure recent kernels can even boot with 4 MB RAM