Introduction to Linux Sound Systems and APIs
UnderScan writes "Linux.com is running an article on the Linux kernel sound subsystems, OSS & ALSA, and their APIs. Insightful commentary from both users and the projects' developers can be found in the OSNews.com comments section."
Re:Re-inventing the wheel. (Score:2)
Re:Re-inventing the wheel. (Score:4, Insightful)
Re:Re-inventing the wheel. (Score:3, Insightful)
(I think BeOS had this covered the best... being able to control volume per application, etc.)
Re:Re-inventing the wheel. (Score:1)
Re:Re-inventing the wheel. (Score:3, Informative)
Let's see
And DirectX is comparable to SDL, not to ALSA/OSS. How many years after Linux was in a releasable form did SDL come out?
ALSA/OSS is a driver revamp. I do believe Microsoft underwent a pretty thorough throwing out of drivers when they ended the 9x line and moved to the NT line completely.
This is the problem (Score:4, Insightful)
Re:This is the problem (Score:2)
Re:This is the problem (Score:3, Insightful)
Windows goes through the same type of API revisions.
-molo
Re:This is the problem (Score:2, Insightful)
But the point is that, as a user, it's completely transparent there. It's absurd that I should have to know that sound servers exist, let alone have to kill artsd to hear xmms.
Re:This is the problem (Score:3, Insightful)
-molo
Re:This is the problem (Score:2)
Yes.
Re:This is the problem (Score:1)
I run mostly Japanese-built systems (Score:2, Interesting)
I used to be a team leader back on the initial Unix (read SCO) team, and one thing that we never would have let happen was letting down the Japanese customers by not supporting their hardware.
If there is any one thing holding back Linux uptake, it is the lack of driver support for non-mainstream devices.
Re:I run mostly Japanese-built systems (Score:3, Insightful)
Re:I run mostly Japanese-built systems (Score:2)
One day it did, through a seemingly random, unrelated combination of modules and kernel options. Today I grabbed an incremental release of the 2.6.7 kernel and poof, there goes sound again!
I don't see these problems in X, or the network subsystems, or disk access, so why
Re:I run mostly Japanese-built systems (Score:1)
that's it.
Re:I run mostly Japanese-built systems (Score:2)
If I could boot up and there was sound, and you booted up and there was sound, and Joe Schmoe did the same, it would be fine.
Simplicity itself (Score:1)
Moving programs from OSS to ALSA (Score:5, Informative)
While oss-compat-api will give you basic sound, mixer controls, etc., sometimes you want to do more advanced things. For example, I use a TV tuner app and wanted to be able to control detailed mixer channels (Analog Capture Volume and Analog Playback Volume), which just couldn't be done with OSS. Looking at my app, tvtime [sourceforge.net], I found it only had OSS mixer controls. So I just took a weekend to learn the ALSA API and write a version for it. It wasn't too bad, and the app works great now. I can configure any control (mixer channel) on any card I want. Hopefully the dev will include the patch I sent in the 1.0 release this month.
I know that this isn't an option for everyone, but I think that as time goes on more and more apps will gain support for ALSA, especially since it's in the 1.0.x range and the API has become more stable.
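For anyone curious what that kind of conversion involves, here is a minimal sketch using alsa-lib's simple mixer API. The card string "default" and the element name "Capture" are placeholders, not anything from the tvtime patch; the real element names (e.g. "Analog Capture Volume" on some cards) are whatever the driver exposes, so check with amixer first. Error handling is trimmed for brevity.

    /* Sketch: set a capture-related simple mixer control via alsa-lib.
     * Link with -lasound. */
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_mixer_t *mixer;
        snd_mixer_selem_id_t *sid;
        snd_mixer_elem_t *elem;
        long min, max;

        snd_mixer_open(&mixer, 0);
        snd_mixer_attach(mixer, "default");          /* card to control */
        snd_mixer_selem_register(mixer, NULL, NULL);
        snd_mixer_load(mixer);

        snd_mixer_selem_id_alloca(&sid);
        snd_mixer_selem_id_set_index(sid, 0);
        snd_mixer_selem_id_set_name(sid, "Capture"); /* placeholder element name */

        elem = snd_mixer_find_selem(mixer, sid);
        if (elem) {
            snd_mixer_selem_get_capture_volume_range(elem, &min, &max);
            snd_mixer_selem_set_capture_volume_all(elem, max / 2); /* ~50% */
            snd_mixer_selem_set_capture_switch_all(elem, 1);       /* unmute */
        }

        snd_mixer_close(mixer);
        return 0;
    }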
Re:Moving programs from OSS to ALSA (Score:3, Interesting)
I would not be against taking some developer resou
Re:Moving programs from OSS to ALSA (Score:2, Insightful)
The problem there is that lots of people (I think including myself) see this as a distro problem and beyond the scope of the kernel, where most of this has moved (I think the VPN thing isn't, though). And as you said, making applications for configuring sound, video, etc. is usually done at the distro le
Against ALSA (Score:2)
these systems are junk (Score:4, Informative)
Additional controls should be handled by ioctls to the special devices.
The sound system in Linux is a nightmare.
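For reference, the ioctl-on-special-device model being argued for here looks roughly like the classic OSS interface below (a minimal sketch of playback through /dev/dsp; return values of the ioctls are not checked):

    /* Sketch: configure /dev/dsp with ioctls, then write raw PCM to it. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    int main(void)
    {
        int fd = open("/dev/dsp", O_WRONLY);
        if (fd < 0)
            return 1;

        int fmt = AFMT_S16_LE;   /* 16-bit little-endian samples */
        int channels = 2;        /* stereo */
        int rate = 44100;        /* 44.1 kHz */

        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        short silence[4096 * 2] = {0};        /* a short block of silence */
        write(fd, silence, sizeof(silence));  /* "dump a file to" the device */

        close(fd);
        return 0;
    }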
I don't think it's as bad as you make it out to be (Score:5, Interesting)
You want to "dump a file to
I'm not saying that these are insoluble, just that there's a bit more complexity than you're making out.
How would you implement "mixing should be handled intelligently"? This is something that I've thought and bitched about for a while. The ideal would be to automatically use hardware mixing up to the maximum number of channels (two on an old card I had, 32 on my current Sound Blaster Live), then fall back to software mixing. The problem is that you have to have some buffer space to mix audio, which means adding latency. When you hit 33 channels and that last channel has to be software-mixed, what are you going to do -- suddenly bump up the latency in the audio to add a buffer into the audio output line? Right in the middle of playback?
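To put rough numbers on that tradeoff (illustrative figures only, not anything a particular driver promises): at 44.1 kHz, a software mixer that buffers two 1024-frame periods adds on the order of 46 ms before the audio reaches the card.

    /* Back-of-the-envelope latency added by a software-mixing buffer. */
    #include <stdio.h>

    int main(void)
    {
        int rate = 44100;          /* frames per second */
        int period_frames = 1024;  /* frames per period */
        int periods = 2;           /* periods held in the mix buffer */

        double latency_ms = 1000.0 * period_frames * periods / rate;
        printf("added latency: %.1f ms\n", latency_ms);  /* ~46.4 ms */
        return 0;
    }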
What's even been going on? (Score:2)
What's happened in the last few years (Score:5, Informative)
* The OpenAL library came around. It does 3D audio, hardware mixing, Doppler, etc. Good for games.
* OSS/Free got deprecated.
* The plethora of eight million halfassed sound servers resolved down into just a few -- artsd is probably going away in favor of JACK (if the article is correct), which means that we just have the (icky) esound -- which with any luck will give way to JACK -- and JACK. Finally, applications can avoid having eight million output plugins.
* Hardware mixing in drivers became par for the course. Five years ago, everyone used OSS/Free. Today, you can play audio in xmms and *still* hear your "bong" when an error occurs without having to ram everything through a high-latency sound server.
* Wavetable MIDI is, at long last, reasonably well supported. I remember the early days with my emu10k1-based Sound Blaster Live Value and earlier cards where I had to just use FM synth because I couldn't load soundfonts to my card. Linux was behind for years here.
* Creative Labs is no longer ignoring Linux users.
* At least in theory, I can use the DSP on my emu10k1 chip to do things like adjust bass.
* There are half-decent sound applications out there. Rosegarden doesn't suck, there are synths and trackers and editors. Still not the same as a Windows or MacOS-based sound editing environment, but you can actually do sound work on Linux without coding up your own tools.
I actually really like Linux as a sound-using environment. I can plonk two or three sound cards into a Linux system and (unlike Windows) all my apps let me choose what device to play out of. I can be playing music going to speakers out of Sound Card A for everyone in a room, but still be listening to what someone's saying on VoIP over my headphones connected to Sound Card B.
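As an illustration of that multi-card setup, opening two specific cards with ALSA looks something like the sketch below. The device strings "hw:0,0" and "hw:1,0" are just examples; aplay -l shows what your system actually calls them.

    /* Sketch: open two different sound cards and drive them independently.
     * Link with -lasound. */
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *speakers, *headphones;

        /* First card: music for the whole room. */
        if (snd_pcm_open(&speakers, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        /* Second card: VoIP audio on the headphones. */
        if (snd_pcm_open(&headphones, "hw:1,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;

        /* ... set hw params and write frames to each handle separately ... */

        snd_pcm_close(headphones);
        snd_pcm_close(speakers);
        return 0;
    }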
Just copy Core Audio and be done with it (Score:3, Interesting)
Re:Just copy Core Audio and be done with it (Score:5, Informative)
JACK [sf.net] uses a callback based API much like Core Audio.
Basically every high-end audio application (e.g. ardour [ardour.org], JAMin [sf.net], Rosegarden [rosegardenmusic.com], Hydrogen [sourceforge.net], etc.) uses it.
You can get really low latency using it if you have good sound hardware (e.g. RME Hammerfall for extremely low latency or even an M-Audio Delta 1010). Something like an SBLive! (what I have) will need a period size of 2048 bytes with two periods to avoid underrunning (I have a Dual AthlonMP 2800+ so I'm pretty sure it's the sound card...). Stuff like QJackCtl [sf.net] and Jack-Rack [arb.bash.sh] make controlling Jack easy.
Getting realtime mode working for a normal user can be tricky, but Debian makes it really easy: just install the realtime-lsm package and build the realtime-lsm-source package for your kernel, and all users in the audio group gain the ability to run applications with realtime priority (at least with the default config). It could be made easier (mainly by prebuilding the realtime-lsm modules for the stock kernels), but GNU/Linux pro audio is still mostly for hackers and adventurous people right now. Stuff like PlanetCCRMA [stanford.edu] and AGNULA [agnula.org] is aiming to make everything work out of the box. I have yet to try either (I use Debian, so PlanetCCRMA is useless for me), but it looks like DeMuDi has everything set up for recording out of the box.
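For the curious, the callback model mentioned above looks roughly like this minimal sketch of a JACK client that just outputs silence. The client and port names are arbitrary, and a real client would do its DSP work inside process() instead of zeroing the buffer.

    /* Sketch: a tiny JACK client using the callback API.  Link with -ljack. */
    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    static jack_port_t *out_port;

    /* JACK calls this from its realtime thread once per period. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *buf =
            jack_port_get_buffer(out_port, nframes);
        memset(buf, 0, nframes * sizeof(*buf));  /* fill the period with silence */
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("sketch", JackNullOption, NULL);
        if (!client)
            return 1;

        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);

        sleep(10);               /* let the callback run for a while */

        jack_client_close(client);
        return 0;
    }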
Re:Just copy Core Audio and be done with it (Score:2, Interesting)
I agree that low latency is quite important, and anything that furthers that goal is a good thing. Even the really good native systems still aren't quite up to the task of recording lots of live musicians, which is why for now I use Pro Tools HD on OS X.
I was recommending implementing CoreAudio (or, heck, DirectSound) instead of something just similar because it would decrease the level of effort for the developers of the applications (and, very importantly, plugin developers). It would just be a case of recomp
Re:Just copy Core Audio and be done with it (Score:2)
I think CoreAudio could very easily be implemented on top of Jack because the APIs are similarish (well, at least as far as being callback based and realtime capable).
CoreAudio would be more worthwhile than DirectSound because OS X apps are more Unixy than Windows apps and the OpenSTEP/Cocoa stuff for the GUI is mostly implemented by GNUStep [gnustep.org]. OS X is way closer to GNU/Linux than Windows and I'm betting it would be tons easier getting an OS X Cocoa app working on GNU/Linux than a Windows app.
I don't real