Linux Cluster For Processing DSP Effects?
SpLiFFoRd asks: "I'm a MIDI musician and songwriter who seems to be constantly running out of processing power for my VST effects when I'm working in Cubase VST. Coincidentally, I've also been exploring Linux clusters and parallel processing. As I watched my CPU meter in Cubase hit 95% one night, I began thinking...'What if there were a way to farm out the work of my single processor to an outside Linux cluster with multiple processors, to speed things up as well as let me run more simultaneous effects without straining the system?' I've been looking around, but I don't see anyone else who has had this thought. This would be a tremendous real-world application for me, and probably for others as well. Do you know anyone who might want to tackle such a project?" Sounds like a worthy project, but sound processing is still in its infancy under Linux, and while the horizon looks good with projects like GLAME and ecasound, I wonder if it may be a while before something like this is feasible. That issue aside, however, I think something like this would be really cool. What about you?
"As a side note, when I say 'VST effects', I mean both the original and third party plugins for audio effects such as digital delay, reverb, tapped delay, pitch shifting etc. that work with Cubase."
Linux must support mLAN. (Score:5)
See here [slashdot.org] for my earlier comment/response.
Some links to mLAN resources (by no means complete); please follow up with more URLs if you find them (I don't have time):
Pavo (leader in 3rd party mLAN development): http://www.pavo.com/ [pavo.com]
Yamaha's mLAN (pretty pictures): http://www.harmony-central.com/Newp/2000/mLAN.html [harmony-central.com]
Details on mLAN products (Japanese): http://www.miroc.co.jp/Tango/qrys/NEWS/yamaha/mlan.html [miroc.co.jp]
mLAN stands for "musical LAN", which is exactly what it sounds like. There is no point in re-engineering this for Linux; we simply need to work on making Linux a good platform for mLAN development, and we'll get to the point where we can route audio just like we route IP, and thus do things to it just like we do with the content fields of IP packets...
If you want realtime DSP-rendering farms based on Linux, then keep an eye on the work being done to add Firewire/IEEE 1394 support to the kernel. This is where it will start.
-- Jay Vaughan - jv@teklab.com
Re:Real time is a hard problem (Score:2)
Your main problem is going to be latency. Even a few milliseconds of latency is glaringly noticeable when working with audio. Many of the effects you want to use rely on delays of exactly this magnitude to achieve their goals (phasing uses around 3 ms, flanging slightly longer delays, chorusing slightly longer still, etc.).
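To make that concrete, here's a rough sketch in plain Python (the parameter values are just illustrative) of why those effects care about such tiny delays: mixing the signal with a copy of itself delayed by ~3 ms is the basic recipe for this family of effects.

```python
# Minimal sketch: mix a signal with a short delayed copy of itself.
# A delay of a few milliseconds gives comb-filter coloration, the basis of
# phasing/flanging-type effects; values here are illustrative, not tuned.

SAMPLE_RATE = 44100                                   # samples per second
DELAY_MS = 3.0                                        # delay in milliseconds
DELAY_SAMPLES = int(SAMPLE_RATE * DELAY_MS / 1000)    # ~132 samples

def short_delay_mix(samples, delay=DELAY_SAMPLES, wet=0.5):
    """Mix each sample with a copy of the signal delayed by `delay` samples."""
    out = []
    for i, s in enumerate(samples):
        delayed = samples[i - delay] if i >= delay else 0.0
        out.append((1.0 - wet) * s + wet * delayed)
    return out
```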
The second issue I was going to bring up is: why use a cluster of CPUs at all? You do realize there are _MANY_ DSP PCI boards available that can handle the tasks you want. They aren't cheap, but neither is a cluster...
Creamware, Korg, and Digidesign all make boards with several DSPs that perform the tasks you're talking about. Going with a Pro Tools system from Digidesign allows you to add more boards as your processing needs increase.
Going with a computer cluster is the wrong way, if you need DSP's, by god use DSP's!
Re:Still got a *LONG* way to go ... mLAN instead. (Score:2)
This doesn't make any sense in the context of audio, though. In the context of audio work, as *long* as things are being done in realtime, at a minimum, you're alright. I'm speaking purely from a professional musician's perspective - not a compsci/compmus researcher or engineer - because I believe that is the stance from which this article was originally posited.
Offline faster-than-realtime rendering of audio is typically not all that useful, since audio is a linear medium that must be perceived serially.
It is useful as a backend process - calculating reverb tails and filters when designing a realtime DSP engine really does benefit from faster-than-realtime processing - but it doesn't matter to the end user how fast it's done, as long as it sounds the way it's supposed to sound when we listen to it in realtime.
You very rarely get a scenario where you're listening to how your compression sounds by 'skipping through the audio' at non-1:1 ratios - at least in my experience, this hasn't been very useful...
As *long* as the results are done in realtime, we're cool. And as long as the architecture supports realtime results, we're also cool.
So in my scenario, having the Linux "DSP FX boxes" accessible with realtime mLAN-type networking is acceptable and workable.
The typical musician usually just wants to add *more* effects to his mix, not more *complicated* effects (at least not in the midst of producing a track - in experimental mode, sure, give me all the weird granular effects I can handle, but when the time comes to refine a track, I'll stick with what I know so far...), and thus being able to do this quickly and easily by adding a $600 P3 box with mLAN/Linux support any time he wants would be a productive architecture.
And this is what mLAN will give us, ultimately. The day may come when the average musician points at a rack of blinking lights and says "that's my PC-based effects farm; I add to it by buying cheap PCs and I get more effects busses instantly over mLAN"... I look forward to it.
mpeg4? (Score:1)
Re:Wouldn't custom hardware be better than a clust (Score:3)
Yup. Here's some links:
Symbolic Sound's Kyma system, a DSP-based programmable hardware solution for audio. [symbolicsound.com]
Details on the Kyma system (drool!):
http://namm.harmony-central.com/SNAMM00/Content/S
Hardware DSP hacking tools:
Analog Devices EZ-SHARC/EZ-KIT DSP experimentation board (which screams for Linux support, incidentally) [analog.com]
Re:Go SMP instead of distributed (Score:1)
Re:Still got a *LONG* way to go ... mLAN instead. (Score:1)
Sounded like a product announcement or infomercial.
Re:YOU ARE A TOTAL IDIOT (Score:1)
Once you're happy with the effects on your track, you should definitely apply them, so as to free up processor power for more effects and tracks.
And if you don't want to lose your original recording, simply make a copy of the original track before applying the effects and leave it muted.
You should only be using realtime effects to allow you to adjust the settings on the fly to get the sound you're looking for.
any good sequencers for linux btw? (Score:1)
Re:Csound (Score:1)
Horizon was audiotechque and koobase 2 years ago (Score:2)
Unless the purpose is to study clustering methods it's usually cheaper to get a faster computer.
Write effects to disk (Score:1)
Maybe if it's just MIDI it's different, but it's worth looking into if you have hard disk space but are running out of clock cycles.
-----
Kenny Sabarese
Left Ear Music
AIM: kfs27
irc.openprojects.net #windowmaker
Don't use Linux for that, silly. (Score:1)
Symbolic Sound [symbolicsound.com], for example, makes a box called the Capybara, which consists of 4 DSPs (expandable to 28) and a bunch of RAM, all of it specifically designed for sound computation. This is the box that sound designers for Star Wars, etc. use. Why bother spending tens of thousands of dollars on a Linux cluster when one Capybara will probably offer more effects power than you'll ever need?
Or, get Pro Tools and use TDMs. Those sound better anyway.
anyone tell me good (free hopefully) software? (Score:1)
This is an old topic for linux-audio-dev folks (Score:1)
"Sounds like a worthy project, but sound processing is still in its infancy under Linux, ..."
I think the situation is much better than that. For a long time now, Linux has offered a lot to musicians and other audio enthusiasts. Just take a look at Dave Phillips' Sound & MIDI page at http://www.linuxsound.at [linuxsound.at]! Linux audio apps might not have the same polished neat-knobs-and-stuff look as the popular Windows+Mac apps, but they are very powerful tools for working with audio. Check Ardour, Quasimodo, aRts, GDAM, terminatorX, Snd, Sweep, Slab, SAOL, Csound, etc., etc... and not forgetting my own project, ecasound.
As for the clustering discussion, this is an old topic for linux-audio-dev folks. See for instance these threads: http://eca.cx/lad/2000/19991108-0307/0575.html [eca.cx] and http://eca.cx/lad/2000/19991108-0307/0580.html [eca.cx]. As a quick summary, I can say it's not easy. At the moment, even getting solid realtime performance for running one userspace audio app on a Linux box requires a patched kernel (Ingo's 2.2.x low-latency patches, which never made it into the mainstream kernel, are probably the most well-known). Running multiple realtime processes is even more difficult. From this, it's easy to see how far we are from realtime DSP clustering. Nevertheless, problems can be solved, and Linux is a very good platform for experimenting. If you are interested in these matters, be sure to check the linux-audio-dev resources at http://www.linuxaudiodev.com [linuxaudiodev.com].
SHARC DSPS on Creamware Pulsar (Score:1)
Why try to do this kind of thing with x86 machines hooked up with Ethernet? For low-latency stuff like audio, I'd guess it's way more trouble than it's worth.
Break this down... (Score:2)
One - you're using realtime effects processing. In this case, the latencies across a network would kill you. It is generally accepted that the human sense of hearing can distinguish sounds within 3 milliseconds of one another (Bracewell's Sound For Theatre). Three milliseconds of latency is hard enough for software on the host PC to meet as a goal; keeping the response of all the computers within a cluster to within 3 milliseconds of one another would be difficult. A trained musician (I live with one and have tested this, FWIW) will get very confused if monitoring latency causes them to hear what they're playing more than about 10 ms after they play it.
In the second case, non-realtime post-processing, a cluster would be more useful. Essentially, it would be a parallelization process - each computer in the cluster gets a chunk (perhaps even allocated by some measure of speed) and they all grind away separately and return their results to the host, which reintegrates the data into a continuous whole. However, problems arise when the effect you're using is a time-domain effect. Frequency effects (filters, EQ) and amplitude effects (compressors, envelopes) won't have problems with being split, but time-domain effects like reverb and echo will. What if the echo from one segment spills over into the next? I suppose you could extend the echo beyond the given data, and the host machine could mix it with the next machine's data for a result. But what about multitapping echoes, where the echoes can themselves be echoed? Problems arise even with non-realtime processing.
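A rough sketch of the workable half of that, in Python (a single echo tap stands in for a real time-domain effect, and all the numbers are made up): each chunk is rendered with extra room for its echo tail, and the host overlap-adds the spill into the next chunk.

```python
# Illustrative only: split audio into chunks, render each chunk with a single
# echo tap plus enough "tail" room for the echo to spill past the chunk
# boundary, then let the host overlap-add the tails back together.

def echo(chunk, delay, gain, tail):
    """Apply one echo tap; return the chunk plus `tail` extra output samples."""
    out = [0.0] * (len(chunk) + tail)
    for i, s in enumerate(chunk):
        out[i] += s                       # dry signal
        out[i + delay] += gain * s        # echoed copy, may land in the tail
    return out

def process_in_chunks(samples, chunk_size, delay, gain):
    tail = delay                          # enough room for the echo spill
    mixed = [0.0] * (len(samples) + tail)
    for start in range(0, len(samples), chunk_size):
        chunk = samples[start:start + chunk_size]
        rendered = echo(chunk, delay, gain, tail)   # could run on another node
        for i, s in enumerate(rendered):            # host overlap-adds results
            mixed[start + i] += s
    return mixed
```

This only works because a single echo tap has a bounded tail; a multi-tap or feedback echo needs far more spill room, which is exactly the problem described above.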
The ideal solution is dedicated hardware, either internally (programmable DSPs with VST plugins for them) or externally (banks of I/O with external reverb and such). Both probably cost more money than software, but you pay a price for realtime. mLAN is just an implementation of the latter idea, using Ethernet for the I/O and the dedicated DSPs for the effects.
Finally, for all you Linux weenies who are crying "Why Windows for audio?" - I hate to break it to you, but Linux isn't very good for real-time. In fact, in some ways it's worse than Windows. I'm not a programmer - these are the words of a friend who is, and manufactures timing-critical show-control hardware and software. For a long time (as long as it was practical) they used Amigas, because the RT support was better. Eventually, they moved over to NT, because it was the best available.
So, for the foreseeable future, it looks like the best investment is still decent analog gear - after all, the EQ on my old RAMSA analog mixer doesn't add 5 ms of delay to my signal.
Re:What's really needed is an easier interface... (Score:1)
Yeah, yeah... Java is so slow. Well, I'm definitely not an expert in compiler theory, etc., but I have had good experience using properly designed Java applications. There is also a new component available for JINI to allow non-Java applications to join a JINI federation, allowing for native processing performance.
You could do it with Plan9 (Score:1)
DSP processing is just one more task that you could assign to a CPU.
Just my 2 cents.
MOSIX before Beowulf? (Score:3)
http://www.mosix.cs.huji.ac.il/
MOSIX uses transparent migration of processing amongst other nodes on an Ethernet network [read the site for more particulars]. It's also pretty easy to set up [basically just run the install script and make sure the nodes can locate each other].
Might be helpful if Beowulf is a bit much for your needs.
MOSIX (Score:1)
Isn't this exactly what Mosix [mosix.org] is all about? Dumb cpu time clustering for threaded applications?
Some day, I'll try it so I know, I promise
Link followup... (Score:2)
http://www.yamaha.co.jp/tech/1394mLAN/mlan.html [yamaha.co.jp]-- detailed mLAN specs.
Wouldn't custom hardware be better than a cluster? (Score:4)
Assuming that the software's there, dedicated DSP hardware should by far be the cheapest and easiest solution.
OTOH, good luck finding the software.
Addendum: Actually, I think that the effects generators on most modern sound cards are just DSPs, RAM, and some firmware. You might be able to find (or write or contract) hacked drivers that would let you arbitrarily reprogram this for your own purposes.
Just a few thoughts.
Math (Score:1)
Re:Get a Macintosh... (Score:1)
Clustering work @ IBM (Score:1)
P.S. It was mentioned a while back on Slashdot that there don't seem to be many internships relating to UNIX/Linux. A few phone calls and emails landed me this 9-month internship with IBM. For any aspiring Linux enthusiasts: the jobs are out there.
And you can get ProTools for free (Score:2)
It's at:
http://www.digidesign.com/ptfree/ [digidesign.com]
I don't wanna press my luck, but how about a free linux version, Digidesign?!! ;)
W
-------------------
What about MOSIX? (Score:1)
It's really quite easy to send processes to other nodes of the cluster... and I'm thinking that if the program this guy is using performs well on an SMP system, it might-possibly-i-reckon-could work on a MOSIX cluster.
Any ideas people?
Mike
Re:There are already much better solutions out the (Score:2)
I do recommend Digi, though I don't actually use it- I use hardware analog mixing and limiting (heard on my most recent album [besonic.com], the tracks named after airplanes). I do think that with enough skill and dedicated analog gear you can top the quality level Pro Tools will give you (though if I had Pro Tools- I could use _that_ as well and do even better. So even then, Pro Tools is desirable). However, I have to seriously confirm all that Funkwater says here: you don't want a Linux cluster for DSP. Maybe you want Linux support _for_ the hardware DSP you can already get, so you don't have to run a Mac or Windows. But you don't want a Linux cluster, unless you have some sort of non-realtime arrangement that can make use of insanely demanding 128-bit calculations to slowly 'render' a final track far better than even modern DSP allows. However, we're talking audio- that's hard to even imagine, and the DSP _is_ out there and very capable.
Re:Linux sound support really sucks (Score:1)
Check out www.alsa-project.org [alsa-project.org]; you will see that Echo refuses to release product specs to the open source community. Grrr.
Re:What about MOSIX? (Score:1)
Ask and ye shall receive! (Score:2)
Yeah, so this post is a shameless commercial plug. It's also what was requested.
Re:Why use a PC's CPU time in the first place? (Score:1)
Re:Pipelining (Score:1)
I'd say that the best analogy is a Unix pipe. To put several processors to work, you would run all of the effects as separate processes, piping the data from one process to another, and have the operating system take care of the nasty details.
Splitting one effect across several processors would need some pretty clever algorithms, but then again, just buying a DSP card or dedicating a fast enough processor to the job might be enough.
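A toy illustration of the pipe idea (the raw 16-bit mono sample format, the fixed gain, and the script names are assumptions): each effect is a small filter program reading samples on stdin and writing them to stdout, so a shell chain like `./gain.py < in.raw | ./delay.py > out.raw` lets the kernel schedule each stage on whatever processor is free.

```python
#!/usr/bin/env python
# Toy "effect as a Unix filter": read raw 16-bit signed mono samples on
# stdin, apply a gain, write the processed samples to stdout.  Several of
# these chained with shell pipes become a multi-process effects chain.
import struct
import sys

GAIN = 0.8                      # illustrative fixed parameter
SAMPLE = struct.Struct("<h")    # little-endian signed 16-bit sample

def main():
    raw = sys.stdin.buffer.read()
    raw = raw[:len(raw) - len(raw) % SAMPLE.size]      # drop any partial sample
    out = bytearray()
    for (value,) in SAMPLE.iter_unpack(raw):
        scaled = max(-32768, min(32767, int(value * GAIN)))
        out += SAMPLE.pack(scaled)
    sys.stdout.buffer.write(bytes(out))

if __name__ == "__main__":
    main()
```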
Re:Don't use Linux for that, silly. (Score:1)
Comment removed (Score:3)
Re:Why use a PC's CPU time in the first place? (Score:1)
Now, with all the external cheap boxes, you can do this by patching everything together, aux sends/returns, etc, making sure all the input levels are set correctly (may need adjusting for each track you do). You'd have to play the track on the PC, convert it to analog, send it through all the boxes (each adding to the noise floor) and then convert back to digital, recording on an additional track. Might sound OK but there would definitely be HISSSSS and it wouldn't be professional quality, that's for certain.
Or... use PC software to do it all, keeping it purely in the digital domain, with a much cleaner result. Keep in mind he's probably working with 24-bit, 96kHz audio, which has a MUCH higher S/N ratio than any analog gear you can pick up at the local music store. It also requires a helluva lotta power to process.
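Rough numbers behind that claim (using the usual ~6 dB-per-bit rule of thumb, so take them as ballpark figures):

```python
# Theoretical dynamic range by bit depth (~6.02 dB per bit rule of thumb).
for bits in (16, 24):
    print("%d-bit: about %.0f dB" % (bits, bits * 6.02))
# -> 16-bit: about 96 dB, 24-bit: about 144 dB
```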
The only time I can see using the dedicated reverbs is when you want to record a wet sound in the first place, and you happen to like the color that a particular box adds (which does happen often)... as opposed to recording the dry sound and then processing via software later.
Another thing the CPU-based DSP is used for is mixing down. You finally get all the tracks sounding good; now you have to get 8 or however many tracks of 96/24 audio mixed down into 2 tracks of 44.1 kHz 16-bit that can be burned to CD audio.
I'm just barely scratching the surface here, but now you can see the limited (but still useful) function of the external DSP boxes.
Re:Go SMP instead of distributed (Score:1)
Re:MPEG 4 structured audio... (Score:1)
Derek
Sheesh (Score:2)
The correct question is "Would I be better off writing this with vi or Emacs". In the open source world, the idea is not to get people to give you free software, it's that people want something done so they do it. If your idea is interesting enough, other people will get involved.
Rich
How we do it in TV/FILM (Score:1)
The animator would, for example, want to render a 3D scene, so he would tell the render queue manager to put his job in the farm.
If it's a 75-frame animation sequence, the queue manager will send one frame to each machine in the farm instead of all of them working on one frame simultaneously (which is what you are suggesting).
The way you would have to go about doing what you want to do, in my opinion, is to write an interfacing app which would take the sound, break it up into lots of little bits, send each little bit to a machine with sufficient resources, and put them back together as one whole sequence once they're all done.
This would work in the same way as an existing and very reliable model.
Hope I've helped out.
Re:Pipelining (Score:2)
That kind of thing would parallelize well...
tagline
Re:Don't use Linux for that, silly. (Score:1)
Re:Get a Macintosh... (Score:1)
Re:Go SMP instead of distributed (Score:1)
Of course this would require Cubase to become SMP-aware on Win32, which probably is about as likely as the distributed thing.
Cart before horse? (Score:1)
There are already Windows-based realtime DSP effects boxes that use mLAN as the routing interface, currently in *development* by a number of large music companies (I can't drop names here, I'm under very tight NDA).
=
Hey, you should let these guys know that before they code support for ultra-niche hardware configurations, they should implement SMP support. Unless "number of large music companies" = Digidesign, because ProTools is the only audio software I can think of that already supports SMP.
badtz-maru
Great for video editing (Score:4)
I do editing using Adobe Premiere on my Macintosh. When I finish a project, or finish a "stage" of it, I have to render the project overnight. My next editing machine will likely be a dual processor PC, but for now I'm using my Mac.
In addition to my Mac, I have four other computers available, all of which will happily run Linux. Setting up a Linux cluster would be a good project, and is definitely feasible. But my Macintosh has no means to offload the rendering to a cluster...
I think this would be fantastic; I can see it as a wholly separate product. It would run apart from the actual editing application and distribute its rendering load to some sort of cluster (I'd assume a customized Linux cluster).
Computers are cheap now, but it is expensive to buy very fast machines every few months... why not allow for clustering of your old machines (even if you do replace the primary one every few months)? Then you could still use those extra CPU cycles, and maybe you could actually use your money effectively.
I'm no engineer but.. (Score:3)
Anyone have any feedback on using distributed computing for such "Real-time" things such as Video and Audio?
MPEG 4 structured audio... (Score:3)
If you did your effects in Mpeg 4 structured audio instead of MIDI, you might get considerably more performance.
Why? Because there is considerable research in compiling MP4-SA to C and then running the native code, to get greater performance out of arbitrary effects, filters, etc.
More info is available here [berkeley.edu]
Nicholas C Weaver
nweaver@cs.berkeley.edu
Re:YOU ARE A TOTAL IDIOT (Score:1)
My dual G4-400 *easily* handles 3-4 effects on 8 *stereo* tracks realtime, no lag!
Plus, any speed increase you gain by using the cluster (I've set up 128 node MPI clusters before) is going to be lost just *getting* the data there...
It could be a nice idea for a large job - I don't know what kind of audio you're processing - but for my everyday use (or other musicians I know) clustering just isn't practical (in the sense of time or money).
Ok, you're not a *total* idiot -- there are just better ways of doing it. A faster machine will take you much farther than a few clustered Pentiums.
Re:Here's an idea... (Score:1)
Re:Pipelining (Score:2)
This is a natural for parallel processing.
--Mike--
Re:Get a Macintosh... (Score:1)
Re:MPEG 4 structured audio... (Score:2)
Basically, MIDI contains commands such as "play an F#", "play quiet", "play loud", "stop playing any notes", "sound like a piano", "sound like an oboe". Synthesizers take those commands, interpret them, and send out an audio stream.
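To make "play an F#" concrete (the particular byte values are just an example): a MIDI note-on is only three bytes, which is why the control stream is so much lighter than the audio stream a synthesizer renders from it.

```python
# One MIDI "note on" message: status byte (note-on, channel 1), then note
# number, then velocity -- three bytes that the synth turns into audio.
NOTE_ON_CHANNEL_1 = 0x90
F_SHARP_ABOVE_MIDDLE_C = 66      # MIDI note number for F#4
VELOCITY = 100                   # roughly "how hard the key was struck"
message = bytes([NOTE_ON_CHANNEL_1, F_SHARP_ABOVE_MIDDLE_C, VELOCITY])
```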
Software effects, such as those in Cubase, operate on that resulting audio stream, either taking a file for input (I believe it is in
In any case, since he wants to use Cubase (and presumably other professional tools), rather than rolling his own, the format of the audio is not open to negotiation.
Does anybody know if VMWare benefits from being run on a cluster (Beowulf or other)? Perhaps you could run Windows and your entire music workstation environment over the cluster, rather than shipping off specific jobs? (I don't have the foggiest idea whether this is even remotely possible.)
On the other hand, what about using CSound to do your processing on the cluster, since CSound runs just fine under Linux. Again, I don't know if any of the clustering technologies would benefit the performance without being rewritten to take advantage.
Yeah, get a mac..... (Score:1)
Re:Pipelining (Score:1)
Realtime multiprocessing would be possible, I'm sure, but would need some pretty clever algorithms to get anything near realtime (maybe each processor takes care of a single sample at a time, round-robin fashion).
What effects are you running? (Score:1)
There are already much better solutions out there (Score:1)
I recommend DigiDesign ProTools. ProTools can be a bit expensive, but the time you'd save inventing a solution and actually making music would be worth it.
ProTools is really 2 things -- hardware AND software. ProTools the hardware is a few PCI cards that have "farms" of DSP processors, integrated SCSI, etc. It also consists of 24-bit analog-to-digital converter boxes.
ProTools the software is as good as Cubase, and better in some ways, plus it supports TDM plug-ins. These plug-ins can emulate amplifiers, run high-powered reverb (like LexiVerb), etc., and they do it on the ProTools hardware, which means you can run 8 times the effects, or some really really high-end effects, all in real-time (so you can tweak them up to the very last mixdown).
There are a few other software packages that support ProTools -- MOTU Digital Performer, Emagic Logic Audio, and Cakewalk Pro.
ProTools is the stuff that all the MTV-esque bands are using -- from Limp Bizkit to Britney Spears (who may suck but have great sound).
Check out http://www.digidesign.com
No I don't work for DigiDesign.. I don't even own any of their ProTools gear, I just know it's the best. I own a Digi001 - a nice intro to hard disk based recording (under $1000) http://www.digi001.com
write a vst hack (Score:1)
Why use a PC's CPU time in the first place? (Score:1)
What advantages are there to using a fully programmable processor to do nothing more than emulate a much less expensive piece of hardware?
-Poot
Re:Pipelining (Score:1)
If nodes can't achieve real time performance, it just frees up bandwidth. If your network goes through a good switch, you should see better numbers. If you don't need 5.1 channels of audio, you'll scale better. In any case, a 100Mbps network will not be the bottleneck in an audio application.
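Quick back-of-the-envelope check (assuming uncompressed 96 kHz / 24-bit mono channels and ignoring protocol overhead):

```python
# How many uncompressed audio channels fit in a 100 Mbit/s link?
rate_hz, bits_per_sample = 96000, 24
per_channel_mbps = rate_hz * bits_per_sample / 1e6     # ~2.3 Mbit/s each
print(per_channel_mbps, int(100 / per_channel_mbps))   # ~2.3, ~43 channels
```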
Re:Pipelining (Score:1)
If a pipe-like solution were used, the audio data would effectively have to travel N times. Now, if the nodes were set up serially, each with two network cards, daisy-chaining along, this may work nicely. Each node would have to specialize in a particular effect, some with dual CPUs if their effect parallelizes well, and with CPU speeds chosen such that each effect takes the same amount of time.
The system would be like lining up a bunch of colored glass plates (filters) and shining light (audio data) through them to see what comes out. It would be rather interesting to play with, but also a lot of work.
One word: LATENCY (Score:1)
Well, for the parallel side of things... (Score:2)
Re:Pipelining (Score:1)
Ideally, a "clustered" solution should be as symetrical as possible. The daisy-chain system would give the "simple delay" processor a vacation, while the "stereo chorus" processor bottlenecks the whole system. This is where the clever algorithm comes in.
All in all, I agree most with the previous posts that specialized DSP hardware could handle this much better than some hacked-up Linux boxes.
Re:MPEG 4 structured audio... (Score:1)
It doesn't. Software has to be written specifically for parallelism (this even includes the software running on the virtual Windows machine).
It would conceivably be possible to design VMWare to emulate a multiprocessor environment, then use different Linux boxes to emulate the different processors, but this approach would have two problems:
Re:MOSIX before Beowulf? (Score:1)
http://www.byte.com/column/BYT20000925S0005 [byte.com]
Re:What effects are you running? (Score:1)
This is common for 3d renders. (Score:1)
---
Problem of Parallelization (Score:1)
DSP functions can't be easily parallelized
That is to say: suppose we have a 10-second sound we are trying to render. Let's suppose you want to break that up into ten 1-second slices to farm out to your DSP cluster. The problem is that these are not discrete items. The sound of slice 4 depends on what was happening in slices 1, 2, and 3 in a cumulative fashion. Therefore, it becomes very difficult to render slice 4 until you know what waveforms should be present from the previous slices...
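A minimal sketch of the dependency (the feedback echo here is just a stand-in effect): the delay line carries state across slice boundaries, so slice N can't be rendered correctly unless it starts from the state left behind by slices 1 through N-1, and passing that state along serializes the work again.

```python
# Illustrative: a feedback echo is stateful, so rendering slice N correctly
# requires the delay-line contents left over from slices 1..N-1.

DELAY, FEEDBACK = 4800, 0.4      # example parameters (~50 ms at 96 kHz)

def feedback_echo(slice_samples, state=None):
    """Process one slice; return (output, state) so the next slice can continue."""
    delay_line = list(state) if state is not None else [0.0] * DELAY
    out = []
    for s in slice_samples:
        echoed = delay_line.pop(0)          # oldest sample in the delay line
        y = s + FEEDBACK * echoed
        delay_line.append(y)                # feed the output back in
        out.append(y)
    return out, delay_line
```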
I've thought of the same thing for Text-To-Speech conversion - but I ran into the same kind of problem, except it was a question of punctuation. For instance:
This sentence, my friends, is hard to parallelize, right?
(It's hard because of the speech breaks that would occur at the punctuation marks.)
-DM
Re:Still got a *LONG* way to go ... mLAN instead. (Score:2)
Stupid assumptions, made liberally.
specs are here... (Score:2)
http://www.yamaha.co.jp/tech/1394mLAN/mlan.html
Can't do much better than that, honestly. And mLAN hardware is out there right now, there's no reason enterprising developers can't get started as of today.
I'd do it, but I'm working on stuff that sits on top of mLAN first on Mac/PC
Re:MOSIX before Beowulf? (Score:1)
Re:YOU ARE A TOTAL IDIOT (Score:1)
How about 5-7 effects on 24 stereo tracks realtime no lag? 48 tracks?
Re:Go SMP instead of distributed (Score:2)
Why not... (Score:1)
Or perhaps see if there are some DSP cards that would work with the VST plugins you're using. I think farming it out in real time is not something that's going to happen for a while. I mean, the latency will definitely be a problem....
Harmony Central [harmony-central.com] had an article [harmony-central.com] (old, but still interesting) about a DSP card. And here [yamaha.co.jp] is some info on a Yamaha DSP farm.
I think going that route would be better....
--
Re:Pipelining (Score:2)
For effects on a file the machine would be able to wait. But what happens when this theory gets applied to real-time things like Video Editing?
Without a fast enough hard drive, you lose frames. Without a fast enough backbone/backplane between machines, would a similar effect happen?
Still got a *LONG* way to go ... mLAN instead. (Score:5)
There's a lot to say about MIDI - sure, it's slow, sure it's limited, but as far as serial control protocols go, it has *definitely* weathered the test of time.
Anyway, this slashdot article is not a problem of MIDI, but a problem of i/o architectures.
The problem with realtime Cubase effects is that they're still very processor/architecture/OS bound. There's no functional way of getting data in and out of a Cubase effects module other than by using the strict architecture path that the effects software has been written to use - this problem exists for Cubase plugins, ProTools/TDM plugins, DirectX plugins, etc.
A better alternative to investing energy into the MP4-SA side of things (which is not a new technology - for a similar example, check out Thomas Dolby's Beatnik, which is a fairly similar implementation) is to instead investigate Linux support for mLAN, Yamaha's "musical LAN" protocol that sits on top of Firewire/IEEE 1394.
mLAN is the solution to this article's stated problem.
By adding really good support for mLAN in Linux, it's really *NOT* unreasonable to say that clustered Linux boxes could be used for REALTIME DSP processing tasks. mLAN represents audio, video and media data as individual streams on the same wire - a very good implementation of mLAN/Firewire in the Linux kernel would allow you to separate these streams, route them off to different processes/processors, and stick them back on the mLAN wire in realtime.
This will happen in the PC world.
There are already Windows-based realtime DSP effects boxes that use mLAN as the routing interface, currently in *development* by a number of large music companies (I can't drop names here, I'm under very tight NDA).
With good Firewire support, Linux (and other OSS OSes) can keep up with these developments, which will hit the market, I'd predict, by Q3 2001. Put the audio out on the wire, route it wherever you want to route it, and get it back off the wire, and we'll never have these limited 'plugin' problems again.
The audio world has *ALWAYS* been about interoperability, open specifications, and shared protocols. MIDI would never have come so far, and given us so much, if it weren't for the fact that hardware manufacturers took the time to figure out good ways to work together, in spite of competition. Audio effects architectures have always been very well documented - the average professional mixing desk has been designed from the ground up to be able to work with gear from *all* sorts of manufacturers, using simple protocols and specifications for audio signal routing. (XLR, balanced, etc)
This changed recently with the advent of software plugins, where hardly *ANY* companies are truly working with others to build a common platform - Oh sure, Digidesign have their TDM system, as do Microsoft, as do Steinberg, but pretty much all of these systems were developed from the ground up to be proprietary and to be used in giving 'market edge' to the respective owners.
Next year, software plugins won't be as badly implemented, and the audio world will actually be able to work cooperatively again... because of mLAN.
So, we should work on getting good mLAN/Firewire support in Linux, and develop open APIs to solve the problem. Then you can build your Linux audio-rendering farm, hook it all up with mLAN, and forget about proprietary, crappy interfaces for your audio work in the future...
DSP Farm (Score:2)
Regrettably, I think the host program (CubaseVST in your case) would have to be heavily modified to allow it to understand that Cubase itself will not be processing the requests.
Perhaps someone currently beginning development on a Linux multitrack environment from scratch would be interested in this idea. I think it would be much easier to implement from the start than to try to convince existing manufacturers to change their philosophy.
Another multitrack function which is badly needed is global file management. (More like a document management system, but for sound events.)
I have roughly 30 GB of WAV material ranging from small loops (50K) to whole mixdowns (50MB). It gets hard to find the sound one is after.
How do I find an example of your current Cubase material?
Real time is a hard problem (Score:2)
If the effects are being done in real time, Beowulf probably won't work very well. The inter-node latency would be a killer.
If you're doing non-real time (from a score/cue sheets etc.), a Beowulf could be helpful. The best bet would be to lay the local network out as a ring (two NICs in each node, with crossover cables interconnecting the nodes in a ring). Each time segment could get processed by each node in turn, like an assembly line, until it ends up back at the master, where it goes to the output file. The software would be 'interesting' to write, but doable.
who knows one day it might happen..... (Score:1)
Linux sound support really sucks (Score:1)
I can't get a simple Crystal sound card to work properly - it's Sound Blaster compatible, but the driver isn't. Frickin notebooks with their frickin top-secret chipset crap designs.
But I mean, even if I wanted to get Cubase VST for Linux and chain it through a Darla or some other medium- to high-end soundboard, where's the support in Linux? There's precious little; OSS is almost the ONLY viable solution to most problems...
What I would like to know is how to develop sound card drivers because, trust me, I'd be off and running. I bought the Linux Device Drivers book, which was cool; it explains how Linux uses drivers. What I need is a good article on how to make the SOUND CARD part work.
Like it (Score:2)
The only problem would be setting up for road shows, but for studio work it would be great.
Maybe I can build me a wall of sound [raymondscott.com] like Raymond Scott [raymondscott.com] did.
(side note: Raymond Scott invented the sequencer, and was a teacher of Robert Moog. His novelty tunes were used widely by Carl Stalling for themes in many Warner Bros cartoons.)
GStreamer and distributed processing (Score:1)
GStreamer [gstreamer.net] is a project I and another person are currently working on pretty much full time (I'm between jobs and he's on vacation) at the moment. It's been around for a bit over a year, and has grown considerably in that time. It's a pipeline-based architecture somewhat similar to M$'s DirectShow, allowing arbitrary graphs of plugin filters, processing just about any kind of streamable media you can think of.
This lends itself quite well to distributed processing, since you can simply (for now) code up an element pair that will enable you to join two pipelines over the network, via TCP or somesuch. Eventually we plan to have CORBA interfaces wrapped around everything, which, while slowing down data transfers, has the potential to make everything even easier.
A release is planned for the beginning of next year (midnight, Jan 1st, Millennium Release), which should provide people with a stable enough API to start writing apps with it. There are still going to be some major changes like a shift to GObject (currently uses GtkObject, so it's tied to X, bleagh), and some major feature enhancements like the graphical pipeline editor. Changes to the system should affect plugin writers mostly, as the "user-level" API should remain basically the same.
The two of us are interested in audio and video respectively. I want to build a live mixing system, he wants to build an NLE. The two have much overlap even ignoring the GStreamer core, so things should get interesting. There are some other people with some pretty cool ideas that we'll try to incorporate, one of which is distributed processing.
Anyone interested in this project should head over to http://www.gstreamer.net/ [gstreamer.net] and sign onto the mailing list (gstreamer-devel). We'll be busy coding through the end of the year, but we welcome anyone who would like to use the system. The scheduling system is currently being re-written for the 4th or 5th (and hopefully last) time, so anyone with specific use cases can help the process along by enumerating them to make sure the scheduler can deal with even the most bizarre cases.
As far as VST and other plugins, there's a project called LADSPA [ladspa.org] that's building a plugin interface similar to VST. Quite a few plugins are already available, including FreeVerb. The problem with VST is that it embeds the Windows-based GUI into the plugin. This might be shimmed with libwine or something, but it is a tough problem. If someone would like to tackle that, please step right up; we'll help you as much as we possibly can.
Go SMP instead of distributed (Score:3)
Re:Still got a *LONG* way to go ... mLAN instead. (Score:1)
Latency (Score:1)
-Daniel
Re:I'm no engineer but.. (Score:1)
If you need SCSI speed, take Gigabit or Myrinet (yes, there's still the protocol overhead, I know - but just count the bytes - it's probably fast enough).
I was thinking of something similar... (Score:1)
(I was told this text is hard to understand if you don't know that sound is nothing more than waves, and that waves can be 'drawn' digitally by stacking together a bunch of numbers representing the amplitude of the wave at exact timed moments. All musical effects are math on those 'strings' of numbers.)
I was thinking of starting a musical project after my current gig [eevolved.com] is over.
It would be cool if we could create a musical tool (maybe a replacement for proprietary VST, but completely VST compatible...) that was completely free and ultra powerful.
Python is a language I've come upon recently and it seems to be very apt for massively paralleled computational tasks such as Digital Signal Processing (DSP) and VST effects (and musical effects in general).
All that would have to be coded is an agent app that does work on incoming packets and outputs to another specific agent (eventually the signal data would return, completely processed, to the output computer).
Live shows using the software could consist of one controller computer that changes parameters for each effect according to the musician's input (knobs and/or instruments plugged into the computer). That computer's agent would grab the input and send constructed commands to the processing cluster. The processing cluster would run all the math, then output the resulting signal to the output computer (which is running an agent whose 'job', or 'effect', is to play the signal), resulting in crackling beats and such.
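A very rough sketch of one such agent in Python (the addresses, packet format, and placeholder effect are all invented for illustration): receive a packet of samples, apply this agent's effect, and forward the result to the next agent in the chain.

```python
# Hypothetical processing agent: receive a UDP packet of 32-bit float
# samples, apply this agent's effect, and forward the result to the next
# agent.  Addresses, packet size, and the effect itself are placeholders.
import socket
import struct

LISTEN = ("", 5000)
NEXT_AGENT = ("output-box", 5000)     # hypothetical downstream machine
FRAME = struct.Struct("<256f")        # 256 samples per packet

def effect(samples):
    """Stand-in effect: a crude soft clip."""
    return [max(-1.0, min(1.0, 1.5 * s)) for s in samples]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN)
while True:
    packet, _ = sock.recvfrom(FRAME.size)
    if len(packet) != FRAME.size:
        continue                           # ignore malformed packets
    processed = effect(FRAME.unpack(packet))
    sock.sendto(FRAME.pack(*processed), NEXT_AGENT)
```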
Python [linuxjournal.com]. Anybody want this to exist?
real-time vs batch (Score:1)
However, you would almost certainly never get real-time performance - you could imagine a fractal "reverb" program, maybe, where the local processor provides a coarse version, farms out the detail to a render farm and moves on, but performance would be seriously variable - a single network glitch like a request for new tasks could cause the effect to coarsen as performance degrades.
Most of the time, you would rather have no effects at all than randomly variable ones.
However, clusters would be great for apps like winchimes etc. which generate random music based on complex math - although the law of diminishing returns may set in rather early....
Here's an idea... (Score:1)
This has to happen sooner or later, with some software or other! :-)
Quasimodo author PBD has some pretty good arguments on his site that it is worth doing DSP audio processing on generic PCs, BTW. You can take a look at some dedicated (non-networked, and non-GNU/Linux) hardware solutions at Creamware's site [creamware.com] - in particular their high-end-studio-in-a-PC, Scope [creamware.de]
Latency will be a problem - not because of delays audible to the human ear, but because of the difficulty of synchronising different tracks that have arrived by different network paths. Net-latency-caused discrepancies of a couple of milliseconds could introduce phase problems in audible ranges. Perhaps a "broadcast clicktrack" plugin of some kind could solve that..?
Re:DSP Farm (Score:1)
Then that plugin transmits audio and configuration data to the cluster, which in turn can run any VST plugin that exists, returning the computation result back to the VST network client.
You could do it using UDP, or over 1000baseT to get around the network latency issues.
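Very roughly, the host side of that might look like this (the port, block size, and blocking round trip are all assumptions; a real plugin shim would have to hide the round trip inside the host's own buffering):

```python
# Hypothetical host-side shim: ship one block of samples to a cluster node
# over UDP and wait for the processed block to come back.
import socket
import struct

NODE = ("cluster-node", 6000)      # hypothetical effects node
BLOCK = struct.Struct("<512f")     # 512 samples per round trip

def process_remote(samples, sock):
    """Send one block to the node and block until the processed block returns."""
    sock.sendto(BLOCK.pack(*samples), NODE)
    reply, _ = sock.recvfrom(BLOCK.size)
    return list(BLOCK.unpack(reply))

# usage (illustrative):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# wet_block = process_remote(dry_block, sock)
```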
Just a thought...
Re:Wouldn't custom hardware be better than a clust (Score:1)
The big problem is that Digi has a lock on the hardware and prices, and it's too goddamned expensive ($8000 or so for 16 tracks).
Linux machines would be about $1000+ each, so I guess it evens out after awhile.
Re:Math (Score:1)
Re:There are already much better solutions out the (Score:1)
"ProTools" is a software digital audio editing package. It will work with many digital audio cards (well, at least those that run with DAE; mostly Digidesign cards.)
The realtime effects processing comes in when you use ProTools TDM, a MixFarm card, and a protools interface (something like the 888/16 will do.)
When you're done buying all this stuff, you'll have a good system. If you want an excellent system, then you're looking at purchasing a better D/A / A/D box (Apogee is excellent) and Preamplifiers (Focusrite, etc.)
Nearly $50K later you'll have a system that will be better than most low-end SSL consoles ;)
Csound (Score:1)
Re:I'm no engineer but.. (Score:1)
What's really needed is an easier interface... (Score:3)
Where you can use any other type of OS and send a work order to a clustered network for processing.
Of course, for this to become reality, it would require software manufacturers' support. Which is where things fall apart - not because of ignorance, but because it just wouldn't be cost effective (why spend extra money on a feature few users would be able to use?).
This then presents another problem: in-house development by individual companies based on need, with no outlook for distribution of this work. It's cool that some companies would be willing to sacrifice resources to develop a distributed solution for a product that they need, but there's no guarantee that they'll release it to the public afterwards.
Cheers,
leroy.