Linux 3.4 Released 385
jrepin writes with news of today's release (here's Linus's announcement) of Linux 3.4: "This release includes several Btrfs updates: metadata blocks bigger than 4KB, much better metadata performance, better error handling and better recovery tools. There are other features: a new X32 ABI which allows running in 64-bit mode with 32-bit pointers; several updates to the GPU drivers: early modesetting of Nvidia Geforce 600 'Kepler', support of AMD RadeonHD 7xxx and AMD Trinity APU series, and support of Intel Medfield graphics; support of x86 cpu driver autoprobing, a device-mapper target that stores cryptographic hashes of blocks to check for intrusions, another target to use external read-only devices as origin source of a thin provisioned LVM volume, several perf improvements such as a GTK2 report GUI, and a new 'Yama' security module."
How RedHat's Linux Can Defeat Micr$oft's Windoze (Score:2, Funny)
Hi,
I've always used Windowz and I consider myself an exceptional Visual Basic programmer, so I know computers pretty good. In fact I got an A- in my programming class last term. But I'm a little wary of how much power Microsoft has in the computer field. Many of my friends use RedHat and I've recently installed it on my machine at home. Although I haven't had as much chance to play with it as I'd like, I've been greatly impressed.
This weekend I gave some thoughts to the things that are wrong with Linux
Re:How RedHat's Linux Can Defeat Micr$oft's Windoz (Score:5, Funny)
As much as Linux is doing rather well despite the plethora of different versions and security risk from the open code base, using it is rather risky for legal reasons as well. Red Hat stole much of Linux from SCO's Caldera, and are distributing it without paying royalties, meaning users could be on the hook for several hundred dollars a license and casting the future of Red Hat's offerings in jeopardy.. Litigation is ongoing now, and experts expect SCO to win a crushing verdict any day now. Linux has some neat features, but there's a lot of fear, uncertainty, and doubt in the community about its legal future.
Re: (Score:3)
If the measure of a troll is how seriously people take it, GP's doing very well...
Portability (Score:2)
Re: (Score:3)
Except in the old days Microsoft had a version of NT for DEC Alphas.. so I doubt too much was in x86 Assembly as all that code would have to be completely re-written (well not just re-written in the sense of modifying C code for the new platform, but the whole dang thing).
I can't remember if they had any other variants than the Alpha versions though.. it's been too long :)
There were five variants as of NT 4.0 : x86, Alpha, Mips, Sparc, and (I think) PPC.
DEC Alpha was usually served up with ISA slots, and it could execute x86 code to use those.
Re: (Score:3)
Sparc was never there, although Intergraph mulled over the idea for a bit before settling for Wintel. PPC was there very briefly. The only RISC platforms that lasted a while were MIPS and Alpha. OTOH, NT was developed on both an i860 as well as a DECStation 3000 (using MIPS) but never actually supported either of those platforms.
Also, the first DEC Alpha PCs had EISA slots, not ISA, while the OVMS workstations used TurboChannel. Later, w/ the AlphaStations and AlphaServers, DEC switched completely to P
Re: (Score:2)
I've always used Windowz and I consider myself an exceptional Visual Basic programmer ...
Ick.
Re: (Score:2)
Nice troll.
You need to raise your standards. That was pathetic.
Re: (Score:2)
Re: (Score:2)
No it isnt, it's a retarded pointless waste of time troll.
Hey! :)
Touché?
Re:How RedHat's Linux Can Defeat Micr$oft's Windoz (Score:4, Funny)
It took about 4 reads before your post didn't say "titties".
btrfs needed the work (Score:5, Interesting)
I tried btrfs, and ended up going back to ext4. Hoped btrfs might be a good choice for a small hard drive, and it is-- it uses space more efficiently. But it's not a good choice for a slow hard drive or the obsolete computer that the small size goes with.
Firefox ran especially poorly on btrfs. I was told this is because Firefox does lots of syncs, and btrfs had very poor performance on syncs. Maybe this improvement in performance on metadata is just the thing to fix that?
Re: (Score:2)
Re:btrfs needed the work (Score:5, Informative)
Yes, there is a massive speed difference. Unpacking a particular tarball takes 10 seconds on btrfs, 124 seconds on ext4.
The problem is with certain broken programs that fsync after every single write. But that's a problem with those programs, not btrfs. The words "fsync" and "performance" don't belong in the same sentence. fsync may be legitimately used when durability is actually needed, but in almost all use cases the programs want only consistency.
An example: there's a file where a transaction consists of appending a few bytes to the file then updating something in the header. Let's say it's four bytes in both writes. The program can handle power loss between transactions or if it happens between the first write and a header update, but not any other reordering. The disk in question has 100MB/s linear speed and 10ms seek time. Even with some kind of a barrier syscall, ext4 would need to alternate writes anyway, plus it needs to write to the journal and inode. This gives you a whopping 25 transactions per second. btrfs and log-structured filesystems with atomic flushes get 25M instead (assuming an infinitely fast CPU).
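The parent's numbers can be checked with a little arithmetic. A sketch under the parent's stated assumptions; note the 25M figure corresponds to counting only one 4-byte write per transaction, while counting both writes gives about half that:

```python
# Back-of-the-envelope throughput for the parent's example.
# Assumptions (from the parent post): 10 ms seeks, 100 MB/s linear
# speed, two 4-byte writes per transaction.

SEEK_S = 0.010        # 10 ms per seek
LINEAR_BPS = 100e6    # 100 MB/s sequential throughput
PAYLOAD = 8           # two 4-byte writes per transaction

# Seek-bound case: four scattered writes per transaction
# (data append, header update, journal, inode) -> four seeks.
tx_seek_bound = 1 / (4 * SEEK_S)

# Log-structured case: everything is appended sequentially, so
# throughput is limited only by linear bandwidth.
tx_sequential = LINEAR_BPS / PAYLOAD

print(int(tx_seek_bound))   # 25
print(int(tx_sequential))   # 12500000
```

The exact seek count per transaction is a guess, but any small number of seeks lands you in the tens of transactions per second, versus millions when the filesystem can batch everything into one sequential stream.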
The primary offender here is Firefox which fsyncs all the time. This not only slows down writes but also causes insane fragmentation. The data it protects is not vital in the first place (mostly browsing history), and if it used sqlite in WAL mode on relevant platforms instead it wouldn't have to fsync for consistency almost at all.
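For reference, switching an sqlite database to WAL mode is a one-line pragma. A minimal sketch (the path here is hypothetical, not Firefox's actual profile layout):

```python
import os
import sqlite3
import tempfile

# Open (or create) a file-backed database and switch it to
# write-ahead logging. In WAL mode the main database file is only
# fsync'd at checkpoint time rather than on every transaction,
# which is the reduction in fsync traffic the parent describes.
path = os.path.join(tempfile.mkdtemp(), "places.db")  # hypothetical path
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal

conn.execute("CREATE TABLE history (url TEXT)")
conn.execute("INSERT INTO history VALUES ('https://example.org/')")
conn.commit()
print(conn.execute("SELECT count(*) FROM history").fetchone()[0])  # 1
conn.close()
```

WAL mode is persistent: once set, later connections to the same file get it automatically.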
Re:btrfs needed the work (Score:5, Interesting)
Fix Firefox? Why does it "need" to do a lot of syncs?
Re:btrfs needed the work (Score:4, Interesting)
Also, put your firefox browser.cache.disk.parent_directory on tmpfs on single user systems.
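For anyone wanting to try that: a hypothetical /etc/fstab entry that backs a cache directory with tmpfs. The path and size are illustrative examples, not Firefox defaults; point browser.cache.disk.parent_directory in about:config at whatever directory you mount here:

```
# RAM-backed browser cache (path and size are examples)
tmpfs  /home/alice/.cache/ff-cache  tmpfs  rw,nosuid,nodev,size=256m  0  0
```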
Who cares why it needs it? (Score:5, Insightful)
It is something the FS should handle. "Just fix the program" is a bad answer because while maybe one could change Firefox, you'll find another program that can't be changed because the nature of what it does requires many syncs.
The low level systems should be robustly written to do what apps need, they shouldn't be telling apps "You can't do that."
Re: (Score:2)
I both agree and disagree. Obviously if you are writing low level systems you want to make them as efficient as possible. One can't forget though that there are sometimes inherent limits that an application should respect. It isn't unreasonable, for example, to suggest that an application shouldn't try to use an SD card as its display buffer. Sometimes there are good reasons to tell an app "you can't do that".
Re:Who cares why it needs it? (Score:4, Insightful)
Re: (Score:3)
What options does the FS have though?
The app is calling sync, which is used to flush disk buffers and ensure a change is physically written to disk. btrfs does this safely, with barriers, as it should.
Should it simply ignore repeated sync() calls and NOOP? If not, what should it do?
If I write a program that tries to allocate 20GB of ram on startup and never uses it, then complain that "linux" is making this huge swap file and making my app slow, should they change the malloc/etc APIs to cater for my using
Re:btrfs needed the work (Score:5, Insightful)
Sync (or fsync) is the way to ensure that files are committed to disk and not just cached somewhere. This is a precondition for reliable "restore session" and similar functionality. However, application developers cannot rely on the OS to sync data in the background, because e.g. on a laptop where frequent disk access is both expensive (battery life) and risky (physical motion), the OS will cache as much as possible. If FF did not sync, the OS might delay writes for hours, which means a computer crash leads to lost hours of browsing history for the user. It doesn't sound like a big deal, but I can tell you that it is infuriating as a user to see a browser say, "whoops, I lost your tabbed windows, hope you weren't using the WWW for anything important!". Not having looked at the source myself, I don't know if it's possible to optimize FF's sync behavior; but I do know that it's impossible to eliminate it.
Re: (Score:2)
This sort of application thinking is retarded. If the OS crashes, it is an OS problem. Firefox (which being a browser stores SFA that needs to be permanent) should not be forcing sync in the fear that the OS crashes. Let the cache work as intended, don't cripple it because the user is retarded or the underlying OS is crap.
If the underlying OS is crap and causing data corruption due to crashing with outstanding cached writes, then the OS is broken and needs to be fixed. NOT the browser.
Re: (Score:3)
This doesn't sound right. OS crashes should be very uncommon and thus you shouldn't design your software around them. Much more common is that Firefox itself crashes, or X11, in which case the data will be there even if the process didn't sync. If your computer crashes, you usually have bigger problems than a bit of lost browser history (not that I've ever seen *hours* of uncommitted data being lost due to a computer crash).
Re: (Score:2)
OS crashes should be very uncommon and thus you shouldn't design your software around them.
By far, the predominant OS that FF runs on is Windows. Thus, the developers are concerned about frequent OS crashes.
Re: (Score:3)
Right. Since Mozilla designs their browser with Windows 98 and ME in mind, and then drops support for them only at compile time. Windows hasn't been crash prone for more than 10 years.
Re: (Score:3)
By far, the predominant OS that FF runs on is Windows. Thus, the developers are concerned about frequent OS crashes.
Then, the sync should be conditional on the OS it is running on, and disabled on Linux.
And even on Windows, it should be configurable. Some users might care more about appropriate battery life than about their browser history in the rare event of a crash. Even Windows crashes much less nowadays than it used to...
Re: (Score:3)
For kernels tweaked into "laptop mode" this may be different, but for stock modern Linux the maximum time delay for disk cache writes is 30+5=35 seconds, not hours.
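The figures the parent cites come from two vm sysctls. The names and stock defaults below are my recollection for kernels of this era; verify under /proc/sys/vm/ on your own box:

```python
# Worst-case age of dirty page-cache data before writeback, using
# what I believe are the stock defaults (values in centiseconds).
dirty_expire_centisecs = 3000     # data is "old enough" after 30 s
dirty_writeback_centisecs = 500   # flusher thread wakes every 5 s

worst_case_s = (dirty_expire_centisecs + dirty_writeback_centisecs) / 100
print(worst_case_s)  # 35.0
```

Laptop-mode tuning raises these substantially to keep the disk spun down, which is where the "hours" scenario comes from.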
Re:btrfs needed the work (Score:5, Informative)
Checksumming, built-in RAID support, snapshotting, transparent compression, online volume resizing, et alia. Basically, a lot of stuff that is very interesting at the enterprise level and to serious nerds who like to do strange things with their volume management, but nothing particularly important to the average user. It's basically a non-Oracle-owned version of ZFS, if you know what that is.
Re:btrfs needed the work (Score:4, Informative)
Actually, ironically Oracle "owns" btrfs! But it is Open Source.
Re: (Score:2)
Re:btrfs needed the work (Score:5, Informative)
GPL for ever.
early in the development of BTRFS commits were sourced from vocal and stubborn devs that would protect it from being re-licensed source: http://www.youtube.com/watch?v=hxWuaozpe2I [youtube.com]
Re: (Score:2)
Re: (Score:2)
2 i would think as it's going into the kernel but you should probably check that
Re: (Score:3)
Re: (Score:3, Interesting)
Checksumming, built-in RAID support, snapshotting, transparent compression, online volume resizing, et alia. Basically, a lot of stuff that is very interesting at the enterprise level and to serious nerds who like to do strange things with their volume management, but nothing particularly important to the average user.
This is known as featuritis, and is anathema to the Unix way, where each part should do just one thing, and do it extremely well.
In my opinion, it's not interesting for enterprise because you get mediocre features, like RAID support that doesn't cover RAID5, no online file system check, not all operations being atomic, and xattrs stored separate from the inode, making it sloooow with SELinux (and presumably Samba with windows per-file security support).
It's basically a non-Oracle-owned version of ZFS, if you know what that is.
Er. While some might consider Btrfs a poor man's versi
Re:btrfs needed the work (Score:5, Informative)
>like RAID support that doesn't cover RAID5 /blah
Is on the way, targeted for 3.5 (it was held back for the fast offline check code)
>no online file system check
btrfs scrub start
Re:btrfs needed the work (Score:5, Insightful)
This is known as featuritis, and is anathema to the Unix way, where each part should do just one thing, and do it extremely well.
How do you make a file system out of parts, though? e.g. how do you tack, say, snapshotting or online volume resizing efficiently onto an existing FS that was not designed with those features in mind?
Re:btrfs needed the work (Score:5, Interesting)
This is known as featuritis, and is anathema to the Unix way, where each part should do just one thing, and do it extremely well.
All btrfs does is manage a B-tree filesystem. All grep does is apply a regular expression to a string.
However, the UNIX way is not always even a good thing.
It is also the UNIX way to duplicate a single thing a hundred times for each little feature variation (grep, egrep, fgrep, most of Perl.) That can also be unpleasant for the end user (xterm, gnome-terminal, kterm, gterm, LXterm, terminator, editing Perl.) Great for a system administrator who is expert at their particular tool and only that tool but horrible for everyone else.
That's without getting into the UNIX Way for (lack of) documentation. Or how that one thing is so often the wrong thing so it doesn't matter how well that one tool does it.
btrfs is famously called a rampant layering violation. The roll-up of filesystem-management features in one place actually lets the developers avoid duplicating code (which may actually be about as non-UNIXy as you can get in some ways.) Code that now knows about certain information normally hidden from it can do things differently. This is sometimes better (rapid mkfs) or worse (fsck tool was apparently hard to write.)
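The "snapshots fall out for free" point is worth a toy illustration: under copy-on-write, a snapshot is just a kept reference to an old root, and a write copies only what it touches. This sketch uses a flat dict instead of a B-tree, so it's the sharing idea only, not how btrfs actually lays out reference-counted tree nodes:

```python
# Minimal copy-on-write "filesystem": each write produces a new
# version sharing all unchanged entries with its parent, so a
# snapshot is simply holding on to an old root.

class CowFS:
    def __init__(self, root=None):
        self.root = root or {}

    def write(self, name, data):
        new_root = dict(self.root)   # copy the (small) root; values are shared
        new_root[name] = data
        return CowFS(new_root)

    def read(self, name):
        return self.root[name]

v1 = CowFS().write("a.txt", "hello")
snap = v1                      # "snapshot": just keep the old root around
v2 = v1.write("a.txt", "world")

print(snap.read("a.txt"))      # hello  (snapshot is unaffected)
print(v2.read("a.txt"))        # world
```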
In my opinion, it's not interesting for enterprise because you get mediocre features, like RAID support that doesn't cover RAID5, no online file system check
In my opinion, if your enterprise system depends on fsck and not good backups then you don't have an enterprise system. Yes, xfs_repair can do amazing things to mostly trashed disks. But one day your data will take a good fscking where the only surviving copy will be the backup copy.
RAID5 implementation from Intel is in the tree, but waiting until after the fsck is done. And btrfsck has been around since, oh, February? And the btrfs-progs you should be using with the 3.4 kernel have btrfsctl included [kernel.org]?
I was hoping the RAID5 code was going to land in 3.4, actually. Reading the pull request says that RAID5/6 should be in 3.5 [lkml.org]. Oh, well.
Of course, if you have enough money to buy an "enterprise" solution, your SAN/NAS should do the thing doing RAID for you anyway.
My major criticism of btrfs is the horrid sync performance. Hosting virtual machines tends to require lots of small writes to disk that make btrfs incredibly non-performant.
btrfs has many sexy, sexy features for a world of enterprise SAN storage and virtual machine hosting. It has thin disks, balanced meta-data, flexible storage, SSD optimized modes, multiple snapshot layers, checksummed data on disk. All of this just because it does one thing and does it well: manage a B-Tree database.
Today it's just not there in the I/O department, sadly. Probably good for inside the virtual machine guests, though. Only testing will tell.
My money is on NILFS, if nothing else because Oracle gives people a bad taste in their mouths, but ICBW.
Wow, speaking of niche file systems. Log file systems have quite a long history. Of horrible performance and fragmentation. But if we all end up on SSDs, that won't matter. Underlying any file system you put on it, an SSD implements storage as a circular log and performance is fast enough to not depend on huge uncommitted disk caches.
Re:btrfs needed the work (Score:5, Informative)
This is known as featuritis, and is anathema to the Unix way, where each part should do just one thing, and do it extremely well.
The problem with conventional raid is it has no way of knowing which redundant copy of the information is correct and indeed it may well end up overwriting the correct copy with the bad copy during a resync. So it protects against drives that fail but it doesn't protect against drives that quietly return bad data.
In theory you could implement a raid layer with strong checksums so it knew which copy was bad, but the problem then becomes where to put those checksums (without creating a load of extra seeks).
By implementing raid techniques as part of the filesystem the checksums can be stored with the existing metadata. Implementing raid as part of the filesystem also allows different redundancy policies to be applied to different data.
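The "which copy is correct" point can be sketched in a few lines: a checksum stored with the metadata tells the filesystem which mirror is good. Toy code, not the btrfs on-disk format:

```python
import hashlib

def csum(block: bytes) -> str:
    """Checksum stored alongside the metadata when the block is written."""
    return hashlib.sha256(block).hexdigest()

def read_mirrored(copies, stored_csum):
    """Return the first copy whose checksum matches the stored one.
    Unlike plain RAID1, we can tell which mirror silently went bad
    instead of guessing (and possibly resyncing the bad copy over
    the good one)."""
    for block in copies:
        if csum(block) == stored_csum:
            return block
    raise IOError("all copies corrupt")

good = b"important data"
stored = csum(good)                 # recorded at write time
bad = b"important dat\x00"          # one mirror bit-rotted

print(read_mirrored([bad, good], stored))  # b'important data'
```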
Re:btrfs needed the work (Score:4, Funny)
Btrfs builds largely on ext2
[citation needed]
Nope, he's quite right. I built btrfs just fine previously, but now after I upgraded to ext4, look what happens:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
$ cd btrfs-progs
$ make
System going down for HALT now!
Re:btrfs needed the work (Score:5, Informative)
Checksumming, built-in RAID support, snapshotting, transparent compression, online volume resizing, et alia. Basically, a lot of stuff that is very interesting at the enterprise level and to serious nerds who like to do strange things with their volume management, but nothing particularly important to the average user. It's basically a non-Oracle-owned version of ZFS, if you know what that is.
Checksumming is useful to anyone who doesn't like corrupt data. Transparent compression is useful to anyone who likes to fit more stuff on their drives and access it faster. Btrfs is technically superior to ZFS though currently less mature. For better or worse, Btrfs is largely developed by Oracle employees so they do own part of it. Oracle could simply stop paying people to develop it but they can't take it away from Linux. Both ZFS and Btrfs are available under Free and Open Source licenses though the licenses are not compatible, which is the primary reason ZFS cannot be included as part of Linux.
Re:btrfs needed the work (Score:5, Informative)
well comparing it to lvm ignores a significant amount of what btrfs is
you would compare it with the entire stack
mdadm + lvm + ext3/4
btrfs gets you:
Checksums on data
mirrored metadata on a single disk
lots of flexibility: online resizing and reshaping (single disk to RAID 1 to RAID 0 to single disk, or some variant of it; additionally, RAID 5/6-like systems are coming)
easy striping and mirroring across different sized disks
snapshots
and probably more go check https://btrfs.wiki.kernel.org/ [kernel.org]
Re: (Score:2)
Re:btrfs needed the work (Score:5, Insightful)
no because if you lose a disk in a striped array you lose everything. (perhaps you are thinking raid1 in which case it protects you from disk failure but does not provide backups)
but soon they will be working on a btrfs send/receive system so you would be able to take snapshots and push them to another disk
IMO there are a number of different failure states that you must cater for.
1. Human failures (the "oh shit, I deleted something"): a snapshot-capable file system helps protect you from these (not perfect but fairly good)
2. Hardware failures (disks are dead): traditional backup systems work here (or btrfs/zfs send/receive); disk failures can have reduced impact due to mirroring your data (or stripe plus parity); checksums and COW help defend against silent failure
3. Software failures (the OS is hosed, partition table is dead): traditional backup systems work here (or btrfs/zfs send/receive) (though COW file systems and marking shit read-only helps)
4. oh shit the building burnt down: Hope you do offsite backups
BTRFS helps in the first 3 by bringing awesome features to the table (snapshots, COW(so you can walk back up the tree to recover) and mirroring your data on multiple disks) but is only something that can supplement a backup system not replace it at all
only a good backup system helps in the 4th situation.
Re:btrfs needed the work (Score:5, Insightful)
Re:btrfs needed the work (Score:5, Insightful)
"Journaling makes sense for servers; not so much for personal boxes."
I'm sorry my friend but you must be insane. I don't go uncleanly powering off my boxes intentionally but it still happens a couple times over the course of a month for various reasons (power flickers and the like). In my experience ext2 will fsck its way back to functionality 4 or 5 times tops before it either can't fix things or the data lost in the fixing is something critical.
Linux was a fun toy and nothing more before ext3 because ext2 is the most destructible filesystem on earth. Don't get me wrong, I played with that toy but that is all it was.
Re: (Score:2)
Re:btrfs needed the work (Score:5, Insightful)
Wow. Am I out of the loop, or what? We're up to ext*4* now? I'm still using (happily) ext2. Yeah, I've heard of btrfs, but why change if what you're using works? Journaling makes sense for servers; not so much for personal boxes.
Yes, you are way behind. Ext3 became part of Linux eleven years ago and added journaling to ext2. Some of us have been using superior journaling file systems like Reiserfs3, XFS, JFS and Reiserfs4 for many years. Journaling is a good idea for all file systems because it allows much stronger metadata and sometimes data consistency guarantees. In other words, though hardware failures and unexpected shutdowns can cause data loss on any file system, journaled ones are more likely to know which data are corrupt and which aren't. Btrfs improves on that by also checksumming everything so no corruption can ever go unnoticed. This is increasingly important as disks get bigger and errors become more likely. Another thing that's perhaps especially nice for desktop and laptop systems is that journaled filesystems can generally be checked for consistency very quickly, meaning you much less often need to do a lengthy fsck.
Most programs don't need a 64-bit address space (Score:5, Informative)
The new x86-64 ABI with 32-bit pointers is cool because it allows you to get the architecture improvements of x86-64, such as extra registers and RIP-relative addressing, without increasing memory usage substantially due to larger data structures. Also, 64-bit operations will just use the 64-bit registers. The vast majority of programs simply do not need the extra address space.
One reason that this ABI works so well is that the majority of the x86-64 instruction set uses 32-bit operations. Some operations involving pointers can be done in one instruction without using a temporary register to load a 64-bit constant.
Windows can actually also support this, in theory, but you're on your own in trying to communicate with the Win32 API. The linker option /LARGEADDRESSAWARE:NO causes the NT kernel to limit your program's address space to 2^31 bytes.
Re: (Score:2)
But it will reduce the address space available for ASLR, am I right?
Re:Most programs don't need a 64-bit address space (Score:5, Interesting)
Yes, but so what? A system that supports x32 should also support x86-64. So, if you're relying on ASLR for security purposes, compile those sensitive apps as x86-64.
Granted, the potential attack surface grows as you consider larger and larger threats. For example, a GCC compiled as x32 makes a fair bit of sense. What about Open/Libre Office? Well, that depends on if you open untrusted documents that might try to exploit OOo / LO. (Odds seem pretty low, though.) And what about Firefox? Far less to trust on the web...
So, at some point, you have to make a tradeoff between the marginal benefit of increased performance/better memory footprint in x32 mode vs. increased security against certain overflow attacks that ASLR offers. For most people in most situations, the former likely wins for anything with a decent memory footprint. For people building hardened Internet-facing servers, the latter probably wins.
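To put a rough number on the tradeoff: ASLR entropy is bounded by log2 of the number of distinct page-aligned placements, so shrinking the address space shrinks the exponent. A back-of-the-envelope upper bound only; real kernels randomize far fewer bits than this:

```python
import math

PAGE = 4096  # placements are page-aligned

def max_aslr_bits(address_space_bytes):
    """Upper bound on randomization entropy: log2 of the number of
    page-aligned positions. Actual kernel ASLR uses fewer bits."""
    return int(math.log2(address_space_bytes // PAGE))

print(max_aslr_bits(2**32))  # 20  (x32 / 32-bit pointers)
print(max_aslr_bits(2**47))  # 35  (typical x86-64 user address space)
```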
Re: (Score:2)
Re: (Score:3)
There are those much more famous than I who would disagree with you. [stanford.edu] (Scroll down to "A Flame...") Of course, appeal-to-authority is not a great way to argue a point that should be settled by data.
Some workloads are amazingly pointer heavy. Compilers and interpreters are very pointer heavy, for example. At least one SPEC benchmark sped up by over 30% in early testing. Then again, a couple others slowed down, which seems odd. I imagine we'll just have to see what happens as the compilers get tuned and
32 bit ABI? (Score:2)
It's true that most programs won't need 64-bit address space - right now - but that's only as long as their memory requirements are within 2GB. If Linux itself is 64-bit, then is there any compelling reason that the ABIs were made 32-bit? In fact, what exactly are the x86 targets for Linux - is it both 32-bit and 64-bit PCs? If that's the case, wouldn't there exist 2 versions of Linux in the tree, and wouldn't it make sense for the 32-bit Linux to have a 32-bit ABI, and the 64-bit Linux to have a 64-bit
Re: (Score:2)
It's true that most programs won't need 64-bit address space - right now - but that's only as long as their memory requirements are within 2GB.
And for a lot of programs this is true, and will "always" be true. Will Emacs ever need more than 2GB for most people?
(And actually it's 4GB on Linux, or at least close to it.)
If that's the case, wouldn't there exist 2 versions of Linux in the tree
It's more like 99% of the code is shared, and changes depending on how you compile it.
wouldn't it make sense for the 32-bit L
Re: (Score:2)
Re: (Score:2)
Linux kernel does not have a standard ABI, in a sense that the way modules communicate with the kernel and each other is not fixed on binary level. Linux as an OS most certainly does have a standard ABI - how else would you be able to take a binary that was compiled 10 years ago (say, a proprietary game), and run it today?
ABI can also mean lower-level and more generalized stuff - things like function calling convention or how the stack is arranged.
In this case, it's both - it's a generalized ABI for 64-b
Re: (Score:2, Interesting)
Re:Most programs don't need a 64-bit address space (Score:5, Insightful)
The problem is not the memory but the CPU cache. No reason to clog it with bloated 64 bits pointers when 32 bits pointers will do.
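The cache argument is easy to quantify: a 64-byte cache line holds twice as many 32-bit pointers as 64-bit ones, so pointer-heavy structures roughly halve their cache footprint under x32. Sketch:

```python
CACHE_LINE = 64  # bytes, typical on x86

# Pointers that fit in one cache line under each pointer width.
for ptr_size, name in ((4, "x32 (32-bit pointers)"),
                       (8, "x86-64 (64-bit pointers)")):
    print(name, "->", CACHE_LINE // ptr_size, "pointers per cache line")
# x32 (32-bit pointers) -> 16 pointers per cache line
# x86-64 (64-bit pointers) -> 8 pointers per cache line
```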
Re:Most programs don't need a 64-bit address space (Score:4)
Re: (Score:3)
You've seen the prices of 16+ GB of ram recently, right?
16GB is pretty cheap; beyond that it starts to get expensive because you need either expensive 8GB modules or a high-end CPU platform (Intel LGA2011 or AMD G34) with more RAM channels.
If you plan for all your ram to be used by one big process then x32 won't be of interest to you. OTOH if you are serving webapps (lots of processes but each individual process not using much ram) written in languages that make heavy use of pointers then x32 starts to look attractive.
not in your data, which is far and away larger
All depends on what form your data i
kernel 3.2 was released only 5 months ago (Score:4, Insightful)
What is the rationale for moving up to 3.4 so soon?
Obviously big tech companies, as well as the Mozilla Foundation play the versioning game aggressively, but the Linux kernel always had a reputation of being conservative.
Re: (Score:2)
Re:kernel 3.2 was released only 5 months ago (Score:5, Informative)
"I'd say too conservative, if they were only updating the third digit every few months."
I beg to differ. This is the kernel not some userland app or even a daemon. Stable releases are supposed to be reliable enough to trust with billions of dollars in data flow and human life support systems on the day of release.
Re: (Score:3)
I beg to differ. This is the kernel not some userland app or even a daemon. Stable releases are supposed to be reliable enough to trust with billions of dollars in data flow and human life support systems on the day of release.
In Linux, that level of QA has been moved to the distributors. The only QA done on the official release is that volunteers have tried the release candidates. Some volunteers run compile/test farms, at least sometimes.
People who run life critical systems can generally afford to pay for the kind of testing they need. It is certainly difficult to find volunteers willing to do it.
Re: (Score:3)
Re: (Score:3)
Versioning schemes should suit the software being versioned.
Linux used to use a three-part version number. The first part was a 2 that basically never changed. The second part indicated which series it was (odd-numbered were development series, even-numbered were stable series). The third part indicated releases within a series.
AIUI that worked for a while, but during the 2.4/2.5 era it became clear it was no longer working well. In the Linux kernel, different features were maturing at different times and distr
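The old convention the parent describes fits in a couple of lines. This reflects the pre-2.6 scheme only; post-3.0 numbers no longer carry this meaning:

```python
def classify_kernel(version: str) -> str:
    """Classify a kernel version under the old 2.x convention:
    even second digit = stable series, odd = development series.
    (Obsolete since 2.6/3.x; shown only to illustrate the scheme.)"""
    major, minor, _patch = (int(p) for p in version.split("."))
    if major != 2:
        raise ValueError("old convention only applied to 2.x kernels")
    return "stable" if minor % 2 == 0 else "development"

print(classify_kernel("2.4.31"))  # stable
print(classify_kernel("2.5.12"))  # development
```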
Re: (Score:2)
Re:yes but... (Score:5, Informative)
It's a common FUD. Nowadays Linux audio works just fine: PulseAudio as a sound server (mixer) and ALSA to talk to the hardware; the rest (OpenAL, gstreamer, OSS, ESD) are either obsolete or totally different stuff unessential to audio playback. Earlier problems related to closed-source software (Flash, Skype) or badly written HW drivers are mostly fixed.
Re: (Score:2)
Re: (Score:2)
Actually if I have any complaint about Linux its the fact that most of your assistance comes from google (the same with any os/complex hardware) and there is so much outdated documentation out there.
Someone whose card wasn't detected might well find information telling them how to play with those obsolete technologies. You might even still be able to install some of the stuff it tells you to. Actually with old documentation it likely tells you to download some tarball onto your binary distro and ./configure, m
Re: (Score:2)
Actually if I have any complaint about Linux its the fact that most of your assistance comes from google (the same with any os/complex hardware) and there is so much outdated documentation out there.
I agree, that is why I include my distro name and version in my search, plus you can limit results in time (pages updated in the last 24 hours, month, year, etc.).
Re:yes but... (Score:4, Interesting)
It's a common FUD. Nowadays Linux audio works just fine
My desktop still can't auto-switch between speakers and headphones when the latter are plugged in and out, on any distro (it just plays sound through both of them). The relevant bugs have been in Ubuntu database for years now.
Re: (Score:3)
That's a bummer.
As a counterpoint: Can you tell me how to turn off that same feature under Windows? I *don't* want it to auto-switch, but it insists upon it.
Re: (Score:3)
Re:yes but... (Score:4, Insightful)
Right, Linux audio works nowadays. Almost. Except when PulseAudio starts corrupting audio. Or stops outputting audio. Or hangs. Or forcibly mutes my headphones, requiring me to call amixer after PulseAudio has started. Or requires me to re-learn something that I learnt to do with ALSA, and now I need to start over. And except when GUI tools decide to hide ALSA devices when PulseAudio is running, ruining my ability to unmute my inputs or fine tune my volume control in many other ways.
And I can't "stop using PulseAudio", because:
1. When somebody asks me for help with their audio, I can't simply go and uninstall it every time.
2. Certain distributions, such as Ubuntu, make it extremely difficult to remove PulseAudio.
3. Even distributions like Debian do install it automatically, so you need to ban it in /etc/apt/preferences.d. I learnt how to use APT pinning solely for getting rid of PulseAudio. That should speak volumes for how broken it is.
Funnily enough, I was using PulseAudio long before it became popular, because it was arguably the best network audio server for casual use. I had to stop doing that because it started breaking the sound in many applications, playing with my volume, etc. It was also funny when the authors decided that the mode in which I was using PulseAudio (as a system-wide daemon) was "unsupported", and asked distros to get rid of their init scripts, thereby breaking my dedicated sound server. Not that it isn't trivial to fix, but why would anyone remove a feature in that manner? It was probably the distros' fault, since Debian is still keeping the init script, but I wasn't using Debian at the time. One day I had my sound server working, and the next day I was greeted with a message telling me what I was doing is a bad idea and I should stop doing it ASAP.
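For reference, the kind of pin the poster describes lives in a file under /etc/apt/preferences.d/; this is a sketch based on standard APT pinning syntax, not the poster's actual file, and the filename is arbitrary:

```
# /etc/apt/preferences.d/no-pulseaudio   (example filename)
# A negative Pin-Priority tells APT never to install the package,
# even when another package recommends or suggests it.
Package: pulseaudio
Pin: release *
Pin-Priority: -1
```

Note that packages which hard-depend (rather than recommend) on pulseaudio will then refuse to install, which is part of why removing it is painful on some distributions.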
Re:yes but... (Score:5, Insightful)
It's common FUD. Nowadays Linux audio works just fine
Well, sometimes getting audio to work is beyond the control of the Linux kernel. If the system has integrated audio on the motherboard (e.g. a laptop) the ACPI DSDT (Differentiated System Description Table) supplied by the manufacturer in the ROM can instruct the hardware to behave differently under different operating systems, or provide different descriptions of the hardware (e.g. audio inputs and outputs) to different operating systems. That's why it's common to have little glitches in Linux audio, like not having the right mixer controls.
The DSDT is written in a language called ACPI Source Language (ASL). Intel and Microsoft both provide compilers for ASL, but the MS compiler accepts buggy, non-compliant DSDTs. Since for some vendors (Toshiba) the job is considered done when stuff works under the current version of Windows, they ship their laptops with DSDTs that won't work under anything but Windows, and might not work in future versions of Windows.
Since the kernel writers have no way of knowing what specific hardware is in your machine except what your machine tells the kernel, they can't fix this. It's entirely the manufacturer's fault, although users blame Linux because everything works in Windows. Getting stuff working isn't exactly a nightmare, but it's beyond most users' capability. You extract the DSDT from ROM, decompile it, fix the buggy ASL, compile it, then put the fixed DSDT in your initramfs (remembering to do this again every time you install a new kernel). Sometimes using a Linux boot parameter to masquerade as Windows to the hardware works.
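The workflow described above can be sketched as shell commands (assuming the iasl compiler from the ACPICA tools is installed; the commands need root, and the acpi_osi string below is just one illustrative value):

```shell
# Extract the DSDT the firmware handed to the kernel (requires root)
cat /sys/firmware/acpi/tables/DSDT > dsdt.dat

# Decompile it to ACPI Source Language for inspection
iasl -d dsdt.dat        # produces dsdt.dsl

# ... edit dsdt.dsl to fix the buggy ASL ...

# Recompile the fixed table
iasl dsdt.dsl           # produces dsdt.aml, to be packed into the initramfs

# Alternatively, skip all of the above and try masquerading as Windows
# via a kernel boot parameter in your bootloader config, e.g.:
#   acpi_osi="Windows 2009"
```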
So to recap: the Linux audio system may be fine, the hardware drivers may be fine, but if the manufacturer fails to supply a correct description of what the hardware contains to the Linux kernel, audio might not work.
Disclaimer -- this information is a few years out of date, as I've stopped using Toshiba laptops and use Asus instead, which have worked flawlessly for me under Linux. However, I'm fairly sure the problem still exists with certain manufacturers' laptops.
Re:yes but... (Score:5, Funny)
[Old Man mode]: I remember a time before PulseAudio, and before JACK, and before ALSA: The Linux kernel had some built-in drivers ("OSS-Free"?) which supported adequate functionality for every sound card/chip on the list, and if you wanted more features or support you could just pay 4front [opensound.com] for a better driver (and they were always worth the minimal price).
And: Everything. Just. Worked. Always. Hardware settings (back when sound cards still had configurable analog sections(!)) were deterministic and reliable, and getting excellent sound from *random_app* was a foregone conclusion.
Much fun was had, for instance, with "cat /dev/audio > /dev/st0" to dump a radio show (reliably! without problems! in the plain-and-simple way that Unix is supposed to be!) to DDS tape.
Now, this was 17 (or so) years ago. Anything involving further difficulty, at any stage of the game on a user level, on the Linux sound front is a step backward.
Now, get the fuck off my lawn.
[/Old Man mode]
Re: (Score:2)
Re: (Score:3, Insightful)
At present there are two systems people use for audio: pulseaudio, which comes standard with most distributions these days (for end users; rather limited and full of latency), and JACK (for professional audio usage; uses a callback interface, though).
Low latency and low power tend to be at odds with each other, what with low latency frequently waking up the CPU, etc. The only reason pulseaudio was chosen for the desktop is that for some reason they seem to think we care about a fraction of a percent more cpu usage on my
Re: (Score:3)
Why do you need low latency for typical music playback on a desktop? It is only needed by audio professionals mixing multiple sources. For laptop users like me, saving 1 W means 5~10% more battery life.
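For context: the power savings come mainly from PulseAudio's timer-based ("glitch-free") scheduling, which fills large buffers and lets the CPU sleep between wakeups instead of servicing small fragments on every interrupt. The knob is real, though the excerpt below is illustrative rather than any particular distro's default file:

```
# /etc/pulse/default.pa (excerpt)
# tsched=1 (the default) enables timer-based scheduling: large buffers,
# few CPU wakeups, better battery life, but higher latency.
# tsched=0 reverts to interrupt-driven fragment I/O: lower latency,
# more wakeups, more power drawn.
load-module module-udev-detect tsched=1
```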
Re: (Score:2, Troll)
Re: (Score:2)
But that costs space, weight, money, and perhaps time, plus almost certainly complexity when it comes to charge the thing(s) at the end of the day.
If I'm out and about ("traveling,") and my laptop is open, I prefer to play music from it because it has the best interface. Both my phone and my laptop can access the same music (thanks to Subsonic), but I like having a real keyboard and pointer for selecting things and it's fewer physical boxes to fuck with. (And, no, I'm not -also- going to carry a dedicated
Re:Yes, 3.4 BUT... (Score:4, Informative)
Re:Yes, 3.4 BUT... (Score:5, Funny)
Achievement Unlocked
Most gratuitous use of the word "fuck" in a serious Slashdot post.
Re: (Score:3)
You must be new here and not have seen many of my posts.
I think one of them not only used fuck a ton, but also got me a visit from the Secret Service.
Re: (Score:3)
I think one of them not only used fuck a ton, but also got me a visit from the Secret Service.
Inquiring minds want to know. I always fancied inviting the guys in black suits to tea myself.
Re: (Score:3)
They have, pulse for end users and Jack for people who care about their audio.
Why not one interface? Because the low-latency goals of JACK conflict with the low-power goals of pulseaudio (designed for use on netbooks and tablets, etc.). Why desktop users had to suffer so much through the pulse transition just to cater to that crowd, I have no idea.
Re: (Score:2, Informative)
The Pulse/Jack difference isn't power consumption, it's intended use.
Pulse provides a simple API for just making noise.
Jack provides a low-latency API (like you said) for the purposes of music creation and other things that require true low-latency audio (and no, that doesn't include games), with a significant trade-off in complexity.
Re: (Score:2)
The problems with Pulseaudio were partly caused by buggy drivers, partly by buggy programs and partly by distributions switching to Pulseaudio before the other problems were sufficiently addressed. There were plenty of audio problems before it came along and neither keeping things as they were nor using jack for everyone would have been trouble-free. Jack developers have never advocated it for ordinary desktop users. My experience on both desktop and laptop has been that after a couple of problematic releas
Re:They fix the sound bullshit yet? (Score:5, Informative)
"Another" audio subsytem? Today standard is PulseAudio on ALSA, and that it has been like that for at least 4 years. Before ALSA there was OSS but Linux developers disagree with how OSS do the sound mixing and resampling in kernel space (for better latency, they said) and OSS went closed source for awhile. PulseAudio is an effort to unite all the sound server/mixer (ESD from GNOME, aRTs from KDE or ALSA's own dmix) plus some nifty features like better battery life (less wake ups per second).
Update your FUD once in awhile, please.
Re: (Score:2)
Would you two please just go get a room? We adults have important things to talk about.
Re: (Score:2)
I must be getting old.
You and me, both. I remember 0.96, or sumfin.
Oct, 1991, comp.os.minix (Score:5, Funny)
I still remember that message from October 1991, by a guy by the name of Linus Benedict Torvalds on comp.os.minix:
"Do you pine for the nice days of minix-1.1, when men were men and wrote :-) "
their own device drivers? Are you without a nice project and just dying
to cut your teeth on a OS you can try to modify for your needs? Are you
finding it frustrating when everything works on minix? No more all-
nighters to get a nifty program working? Then this post might be just
for you
Re: (Score:3)
Nobody does !@#$ like that these days. Why is that?
Because miracles don't happen very often. Thanks Linus.
Re:Oct, 1991, comp.os.minix (Score:4, Insightful)
I went back to college in 2008 and casually turned to one of my peers and said "do you remember when Back To The Future came out?" and he said "no, because I was born in 1990".
I felt old.
GNU? (Score:5, Informative)
Re:GNU? (Score:4, Insightful)
Yes, the GNU GPL-licensed kernel doesn't have anything to do with GNU.
Re: (Score:3, Funny)
the GNU animal
You mean, a gnu? [wikipedia.org]
Re:GNU? (Score:4, Funny)