
Linux Cluster For Processing DSP Effects?

SpLiFFoRd asks: "I'm a MIDI musician and songwriter who seems to be constantly running out of processing power for my VST effects when I'm working in Cubase VST. Coincidentally, I've also been exploring Linux clusters and parallel processing. As I watched my CPU meter in Cubase hit 95% one night, I began thinking... 'What if there was a way to farm out the work of my single processor to an outside Linux cluster with multiple processors, to speed things up as well as let me run more simultaneous effects without straining the system?' I've been looking around, but I haven't seen anyone else who's had this thought. This would be a tremendous real-world application for me, and probably for others as well. Do you know anyone who might want to tackle such a project?" Sounds like a worthy project, but sound processing is still in its infancy under Linux, and while the horizon looks good with projects like GLAME and ecasound, I'm wondering if it may be a while before something like this is feasible. That issue aside, however, I think something like this would be really cool. What about you?

"As a side note, when I say 'VST effects', I mean both the original and third party plugins for audio effects such as digital delay, reverb, tapped delay, pitch shifting etc. that work with Cubase."

This discussion has been archived. No new comments can be posted.

Linux Cluster For Processing VST DSP Effects?

  • by torpor ( 458 ) <ibisumNO@SPAMgmail.com> on Thursday December 21, 2000 @12:18PM (#1410849) Homepage Journal
    I already brought this up in an earlier response on this article, but mLAN is the way to solve this problem.

    See here [slashdot.org] for my earlier comment/response.

    Some links to mLAN resources, by no means complete; please follow up with more URLs if you find them (I don't have time):

    Pavo (leader in 3rd party mLAN development): http://www.pavo.com/ [pavo.com]

    Yamaha's mLAN (pretty pictures): http://www.harmony-central.com/Newp/2000/mLAN.html [harmony-central.com]

    Details on mLAN products (Japanese): http://www.miroc.co.jp/Tango/qrys/NEWS/yamaha/mlan.html [miroc.co.jp]

    mLAN stands for "musical LAN", which is exactly what it sounds like. There is no point in re-engineering this for Linux ourselves; we simply need to work on making Linux a good platform for mLAN development, and we'll get to the point where we can route audio just like we route IP, and thus do things to it just like we do with the content fields of IP packets...

    If you want realtime DSP-rendering farms based on Linux, then keep an eye on the work being done to add Firewire/IEEE1394 support to the Kernel. This is where it will start.

    -- Jay Vaughan - jv@teklab.com

  • Yes, I was going to post something along these lines myself.

    Your main problem is going to be latency. Even a few milliseconds of latency is glaringly noticeable when working with audio. Many of the effects you want to use rely on delays of exactly this magnitude to achieve their goals (phasing uses delays of around 3 ms, flanging slightly longer ones, chorusing longer still, etc.) - see the back-of-the-envelope numbers at the end of this comment.

    The second issue I was going to bring up is: why use a cluster of CPUs at all? You do realize there are _MANY_ DSP PCI boards available that can handle the tasks you want. They aren't cheap, but neither is a cluster...

    Creamware, Korg, and Digidesign all make boards, with several DSPs per board, that perform the tasks you're talking about. Going with a Pro Tools system from Digidesign allows you to add more boards as your processing needs increase.

    Going with a computer cluster is the wrong way; if you need DSPs, by god use DSPs!
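    Back-of-the-envelope numbers for the latency point above (a rough sketch, assuming 44.1 kHz audio and switched 100 Mbit Ethernet; the figures are illustrative, not measurements):

        # How many samples fit in the delay times mentioned above, and how much
        # of a realtime budget a single network hop would eat.
        SAMPLE_RATE = 44100          # samples per second, CD quality

        def ms_to_samples(ms, rate=SAMPLE_RATE):
            """Convert a delay in milliseconds to a whole number of samples."""
            return round(ms * rate / 1000)

        print(ms_to_samples(3))      # ~132 samples -- a typical phaser-style delay
        print(ms_to_samples(10))     # ~441 samples -- roughly where monitoring latency gets obvious

        # A round trip over switched 100 Mbit Ethernet is commonly put at a few tenths
        # of a millisecond up to a millisecond, so a networked effect has spent a
        # noticeable slice of a 3 ms budget before it does any DSP work at all.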
  • Okay, I get the difference you're pointing out.

    This doesn't make any sense in the context of audio, though. In the context of audio work, as *long* as things are being done in realtime, at a minimum, you're alright. I'm speaking purely from a professional musician's perspective - not a compsci/compmus type researcher or engineer - because I believe that is the stance from which this article was originally posited.

    Offline faster-than-realtime rendering of audio is typically not all that useful, since audio is a linear medium that must be perceived serially.

    It is useful as a backend process - calculating reverb tails and filters when designing a realtime DSP engine really does benefit from faster-than-realtime processing - but it doesn't matter to the end user how fast it's done, as long as it sounds the way it's supposed to sound when we listen to it in realtime.

    You very rarely get a scenario where you're listening to how your compression sounds by 'skipping through the audio' at non-1:1 ratios - at least, in my experience, this hasn't been very useful... :)

    As *long* as the results are done in realtime, we're cool. And as long as the architecture supports realtime results, we're also cool.

    So in my scenario, having the Linux "DSP FX boxes" accessible with realtime mLAN-type networking is acceptable and workable.

    The typical musician usually just wants to add *more* effects to his mix, not more *complicated* effects (at least not in the midst of production of a track - in experimental mode, yes, sure, give me all the weird granular effects I can handle, but when the time comes to refine a track, I'll stick with what I know so far...), and thus being able to quickly and easily do this by adding a $600 P3 box with mLAN/Linux support any time he wants would be a productive architecture.

    And this is what mLAN will give us, ultimately. The day may come when the average musician points at a rack of blinking lights and says "that's my PC-based effects farm; I add to it by buying cheap PCs, and I get more effects busses instantly over mLAN"... I look forward to it.

  • First of all, MIDI audio data? No, MIDI doesn't carry audio; it carries SysEx and note information. This has been discussed several times before. And if you are working with DSPs, compressed audio is exactly what you DON'T want, especially if you are attempting to do it in realtime. Not only does the computer have to apply algorithms to the audio, but it has to uncompress it first? How does that make sense? Anyone in graphic/audio/video work knows you compress AFTER everything is DONE. And if you are doing anything professionally, compressed audio isn't good enough. Every bit counts. bortbox
  • "This seems like an expensive and inefficient solution for the problem you describe. PCs are actually pretty bad at doing signal processing; there's a whole class of chips (DSPs) that are optimized for it. There are almost certainly DSP-based hardware boards/widgets that will handle a lot of these effects for you, and it wouldn't surprise me to learn that there was a general-purpose programmable DSP card on the market too. Researching this before you spend money on a cluster would be a Good Idea."


    Yup. Here's some links:
    Symbolic Sound's Kyma system, a DSP-based programmable hardware solution for audio. [symbolicsound.com]
    Details on the Kyma system (drool!):
    http://namm.harmony-central.com/SNAMM00/Content/Symbolic_Sound/PR/Kyma5.html [harmony-central.com]

    Hardware DSP hacking tools:
    Analog Devices EZ-SHARC/EZ-KIT DSP experimentation board (which screams for Linux support, incidentally) [analog.com]
  • In the end, the algorithm being processed is what determines which platform is best - SMP, clustering, or NUMA - based on whether it is I/O-, bandwidth-, or processor-limited. In any event, if your software doesn't support multiple processes or threads, the issue is moot, and a UP box will be only nominally slower than an SMP box of the same speed. Ask most gamers whether they would rather have a dual 600 MHz machine or a UP 800 MHz one. SMP does have some disadvantages over a cluster architecture if, say, the algorithm is extremely partitionable but memory-bandwidth limited. SMP boxes tend to share memory bandwidth, so processor utilization goes down once memory is saturated. If you can send half the job off to another system and have two UP machines run a bandwidth-limited application, it will run much faster than on the same processors in one box. Now, this is a rare situation, but it can still occur depending on the algorithm. I am not sure how parallelizable MIDI effects are, but an earlier post indicated they were I/O-limited. Of course, getting applications to support DSP/vector-processor nodes of some sort would be the best solution.
  • You said "mLAN" something like 16 times in your two posts. (yes, I counted. :)

    Sounded like a product announcement or infomercial.

  • A little harsher than I would have put it, but pretty much correct.

    Once you're happy with the effects on your track, you should definitely apply them, so as to free up processor power for more effects and tracks.

    And if you don't want to lose your original recording, you simply make a copy of the original track before applying the effects and leave it muted.

    You should only be using realtime effects to allow you to adjust the settings on the fly to get the sound you're looking for.

  • I normally use Cubase for MIDI only. I don't use the score editing, just the normal Tetris-like interface. Is there anything similar to Cubase out there for Linux?
  • Right off the bat, I _do_ know of the SHARC 210xx series DSPs, which can be programmed using the Csound opcodes to perform DSP operations in realtime. I'm just getting into Csound, so I really can't provide much more info than that. Check out www.csound.org and www.csounds.com for more info on Csound.
  • Two years ago the horizon looked good with projects like Audiotechque and Koobase. Today the horizon looks good with projects like Glame and Ecasound. Who knows what will make the horizon look good two years from now. The horizon always looks good, even if nothing materializes.
    Unless the purpose is to study clustering methods it's usually cheaper to get a faster computer.
  • I use Cakewalk Pro Audio in my studio, and instead of running real-time FX, I'll test them out and then write them to disk. Then you don't have to worry about your CPU being used up.

    Maybe if it's just MIDI it's different, but it's worth looking into if you have hard disk space but are running out of clock cycles.
    -----
    Kenny Sabarese
    Left Ear Music
    AIM: kfs27
    irc.openprojects.net #windowmaker
  • A Linux cluster is an expensive, ridiculously overpowered and basically silly way to provide what you need. Professional solutions already exist for massive audio processing.

    Symbolic Sound [symbolicsound.com], for example, makes a box called the Capybara, which is composed of 4 DSPs (expandable to 28) and a bunch of RAM, all of it specifically designed for sound computation. This is the box that sound designers for Star Wars, etc. use. Why bother spending tens of thousands of dollars on a Linux cluster when one Capybara will probably offer more effects power than you'll ever need?

    Or, get Pro Tools and use TDMs. Those sound better anyway. :)
  • I'd really like to make some techno style stuff (like ATB etc.), but I'm not sure what software to use, or even where to start to find out. Any suggestions?
  • "Sounds like a worthy project, but sound processing is still in its infancy under Linux, ..."

    I think the situation is much better than that. For a long time now, Linux has offered a lot to musicians and other audio enthusiasts. Just take a look at Dave Phillips' Sound&Midi page at http://www.linuxsound.at [linuxsound.at] ! Linux audio apps might not have the same polished neat-knobs-and-stuff look as the popular Windows+Mac apps have, but they are very powerful tools for working with audio. Check Ardour, Quasimodo, aRts, GDAM, terminatorX, Snd, Sweep, Slab, SAOL, Csound,etc, etc... and not forgetting my own project, ecasound.

    As for the clustering discussion, this is an old topic for linux-audio-dev folks. See for instance these threads: http://eca.cx/lad/2000/19991108-0307/0575.html [eca.cx] and http://eca.cx/lad/2000/19991108-0307/0580.html [eca.cx]. As a quick summary, I can say it's not easy. At the moment, even getting solid rt performance for running one userspace audio app on a Linux box requires a patched kernel (Ingo's 2.2.x ll-patches that never made it into the mainstream kernel are probably the most well-known). Running multiple rt-processes is even more difficult. From this, it's easy to see how far we are from realtime DSP-clustering. But nevertheless, problems can be solved, and Linux is a very good platform for experimenting. If you are interested in these matters, be sure to check the linux-audio-dev resources at http://www.linuxaudiodev.com [linuxaudiodev.com].

  • Why not just lay out the cash for one of those? That will give you more than enough raw DSP power to do pretty much whatever you want, since you can simply slot extra 4-DSP-chip cards into available PCI slots.

    Why try to do this kind of thing with x86 machines hooked up over Ethernet? For low-latency stuff like audio, I'd guess it's way more trouble than it's worth.

  • OK. There are a couple of possibilities here.

    One - you're using real time effects processing. In this case, the latencies across a network would kill you. It is generally accepted that the human sense of hearing can distinguish sounds within 3 milliseconds of one another (Bracewell's Sound For Theatre.) 3 milliseconds latency is hard enough for software on the host PC to meet as a goal - keeping the response of all the computers within a cluster to under 3 milliseconds distance from one another would be difficult. A trained musician (I live with one and have tested this, FWIW) will get very confused if monitoring latency causes them to hear what they're playing more than about 10 ms from when they play it.

    In the second case, non-real-time post-processing, a cluster would be more useful. Essentially, it would be a parallelization process - each computer in the cluster gets a chunk (perhaps even allocated by some measure of speed), they all grind away separately, and they return their results to the host, which reintegrates the data into a continuous whole. However, problems arise when the effect you're using is a time-domain effect. Frequency effects (filters, EQ) and amplitude effects (compressors, envelopes) won't have problems with being split, but time-domain effects like reverb and echo will have an issue. What if the echo from one segment spills over into the next? I suppose you could extend the echo beyond the given data, and the host machine could mix it with the next machine's data for a result (see the sketch at the end of this comment). But what about multitapped echoes, where the echoes can themselves be echoed? Problems arise even with non-realtime processing.

    The ideal solution is dedicated hardware, either internal (programmable DSPs with VST plugins for them) or external (banks of I/O with external reverb and such). Both probably cost more money than software, but you pay a price for realtime. mLAN is just an implementation of the latter idea, using FireWire for the I/O and dedicated DSPs for the effects.

    Finally, for all you Linux weenies who are crying "Why Windows for audio?" - I hate to break it to you, but Linux isn't very good for real-time. In fact, in some ways it's worse than Windows. I'm not a programmer - these are the words of a friend who is, and manufactures timing-critical show-control hardware and software. For a long time (as long as it was practical) they used Amigas, because the RT support was better. Eventually, they moved over to NT, because it was the best available.

    So, for the foreseeable future, it looks like the best investment is still decent analog gear - after all, the EQ on my old RAMSA analog mixer doesn't add 5 ms of delay to my signal.
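    On the chunk-boundary problem above: for purely linear effects (EQ, a simple echo or reverb without feedback), farming out chunks and overlap-adding the tails back together does work; here is a minimal numpy sketch of the idea. The toy effect and sizes are invented for illustration, and feedback-style effects that re-echo their own output would still need the previous chunk's result first.

        import numpy as np

        def simple_echo(x, delay=2000, gain=0.5, tail=8000):
            """A toy time-domain effect: one echo tap, padded so its tail is kept."""
            y = np.zeros(len(x) + tail)
            y[:len(x)] += x
            y[delay:delay + len(x)] += gain * x
            return y

        def process_in_chunks(signal, chunk=44100, tail=8000):
            """Process fixed-size chunks independently (as cluster nodes would),
            then overlap-add each chunk's tail into the following region."""
            out = np.zeros(len(signal) + tail)
            for start in range(0, len(signal), chunk):
                piece = simple_echo(signal[start:start + chunk], tail=tail)
                out[start:start + len(piece)] += piece   # tails spill into the next chunk
            return out

        signal = np.random.randn(5 * 44100)              # five seconds of noise
        whole = simple_echo(signal)
        chunked = process_in_chunks(signal)
        print(np.allclose(whole, chunked[:len(whole)]))  # True for a linear effect like this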
  • Ok... definitely not the answer for the music problem (I think), but JINI goes a long way toward making distributed computing much easier. Whenever you have a task to run, you request a "worker" from the lookup service that can perform your particular kind of task. You then submit the task to the worker. Workers can dynamically join and leave the registry as they become available.

    Yeah, yeah...java is so slow. Well, I'm definitely not an expert in compiler theory, etc. but I have had good experience when using properly designed Java applications. There is also a new component available for JINI to allow non-Java applications to join a JINI federation, allowing for native processing performance.
  • If I really had the time to investigate this subject, I'd probably use Bell Labs' Plan 9 operating system. After installing it on my laptop and running a CPU server, you can easily add more and more CPU processing power to your application in a totally transparent way. The use of resources is incredibly efficient, and it is much easier to program than Unix/Linux.

    DSP processing is just one more task that you could assign to a CPU.

    Just my 2 cents.
  • by heiho1 ( 63352 ) on Thursday December 21, 2000 @12:19PM (#1410868)
    I'm not an expert on clusters but it may be simpler to use something like MOSIX

    http://www.mosix.cs.huji.ac.il/

    MOSIX uses transparent migration of processing amongst other nodes on an Ethernet network [read the site for more particulars]. It's also pretty easy to set up [basically just run the install script and make sure the nodes can locate each other].

    Might be helpful if Beowulf is a bit much for your needs.

  • Isn't this exactly what Mosix [mosix.org] is all about? Dumb cpu time clustering for threaded applications?
    Some day, I'll try it so I know, I promise :-)
  • Forgot to add this one, sorry:


    http://www.yamaha.co.jp/tech/1394mLAN/mlan.html [yamaha.co.jp] -- detailed mLAN specs.

  • This seems like an expensive and inefficient solution for the problem you describe. PCs are actually pretty bad at doing signal processing; there's a whole class of chips (DSPs) that are optimized for it. There are almost certainly DSP-based hardware boards/widgets that will handle a lot of these effects for you, and it wouldn't surprise me to learn that there was a general-purpose programmable DSP card on the market too. Researching this before you spend money on a cluster would be a Good Idea.

    Assuming that the software's there, dedicated DSP hardware should by far be the cheapest and easiest solution.

    OTOH, good luck finding the software.

    Addendum: Actually, I think that the effects generators on most modern sound cards are just DSPs, RAM, and some firmware. You might be able to find (or write or contract) hacked drivers that would let you arbitrarily reprogram this for your own purposes.

    Just a few thoughts.
  • by sPaKr ( 116314 )
    All these effects boil down to math on a stream. All we need to do is write this as a simple Turing machine: give each machine a chunk of the stream (tape) to work on, add the proper function (engine), and simply splice the stream (tape) back together, and you're cool.
  • I've used both a Mac and a PC for MPEG-4 structured audio instead of MIDI... the Mac has by far worse performance.
  • I have recently accepted an internship offer from IBM. We will be developing a 64-bit SMP Linux kernel as well as some clustering stuff. Should be pretty interesting. AFAIK most of the development in these areas is done by full-time paid programmers at companies such as IBM.

    P.S. It was mentioned awhile back on Slashdot that there doesn't seem to be many internships relating to UNIX/Linux. A few phone calls and emails landed me this 9 month internship with IBM. For any aspiring Linux enthusiasts: The jobs are out there :-)
  • I don't know what they were thinking at Digidesign, but there's a Windows & Mac version that supports 8 tracks and is totally *free* (i.e., not crippleware or a demo). I've been using it for about a month, and you can do some awesome things, from music to audio soundtracks to whatever. Some plugins are included, like pitch-shifting and basic equalization. No special hardware needed.

    It's at:

    http://www.digidesign.com/ptfree/ [digidesign.com]

    I don't wanna press my luck, but how about a free linux version, Digidesign?!! ;) W
    -------------------

  • I use MOSIX [mosix.org] on my home-lan for various things, but would it work in this situation? It claims to work on programs not specifically written for clustering.

    It's really quite easy to send processes to other nodes of the cluster... and I'm thinking that if the program this guy is using performs well on an SMP system, it might-possibly-i-reckon-could work on a MOSIX cluster.

    Any ideas people?

    Mike

  • Limp Bizkit (and Kid Rock) do have great sound, but Britney has appalling sound :) Mind you, that's not Digidesign's fault; it's because Britney Spears' music is _appallingly_ overproduced and overcompressed. Peak levels are like half a dB over main levels... it's quite horrible, but by God is it ever loud. Compare it to Bizkit or Kid Rock and the rockers' main levels are more like 3-6 dB down from peak.

    I do recommend Digi, though I don't actually use it- I use hardware analog mixing and limiting (heard on my most recent album [besonic.com], the tracks named after airplanes). I do think that with enough skill and dedicated analog gear you can top the quality level Pro Tools will give you (though if I had Pro Tools- I could use _that_ as well and do even better. So even then, Pro Tools is desirable). However, I have to seriously confirm all that Funkwater says here: you don't want a Linux cluster for DSP. Maybe you want Linux support _for_ the hardware DSP you can already get, so you don't have to run a Mac or Windows. But you don't want a Linux cluster, unless you have some sort of non-realtime arrangement that can make use of insanely demanding 128-bit calculations to slowly 'render' a final track far better than even modern DSP allows. However, we're talking audio- that's hard to even imagine, and the DSP _is_ out there and very capable.

  • But I mean, even if I wanted to get Cubase VST Linux and chain it through a Darla or some other medium to high end soundboard, where's the support in Linux? There's precious little- OSS is almost the ONLY viable solution to most problems...
    check out www.alsa-project.org [alsa-project.org], you will see that Echo refuses to release product specs to the open source community. grrr.
  • Software is usually optimized for single-computer multiprocessor threads, which is definitely different from using processes. Windows software is very often based on threading instead of multiple processes, which are more popular on Unix (because of large multiprocessor machines with memory on the processor board). Threading software that uses shared memory to pass data between threads does not work well in a distributed environment. If the software started every filter as its own process and piped their output (extend your definition of a pipe if multiple input/output pipes with timecodes seem complicated), the OS could easily make a distribution plan on the fly when it noticed that a processor was getting overloaded.
  • As it just so happens, I'm the chief tech at a service [sigma.net] that does basically this. I think we're within a month of being able to launch our public beta. Contact info's on the site if you wish to use our services. We're doing Maya rendering first, but we would be more than happy to add support for other formats (possibly including processing music - we hadn't thought of that one - and definitely including stuff like Premiere) on request, though of course we'll need a bit of time (hopefully not more than a week or two, but no promises yet) to add more formats.

    Yeah, so this post is a shameless commercial plug. It's also what was requested.
  • The reason why is that outboard effects units require you to use analog signals for the input. When you are working in digital, each pass through D/A -> A/D degrades the signal, to the point where most reasonably affordable DSP units would just sound crappy. Now, if you had one with digital I/O, it would definitely be a better choice, but these are few if not nonexistent. (I am not aware of any myself.) Pro Tools and similar integrated solutions do just that - they use separate DSP processing on extra cards for effects, as does the company [klotzdigital.com] I work for. (Ours is aimed at the broadcast market, so we don't do reverb, etc., but we do have various EQ and dynamics processing modules. Unfortunately for me, it's a Windows-based system. :( Oh, well - at least I get to use it for some programming.)
  • Think again. It would work just fine. Each of the processors or nodes takes care of a single effect and then forwards the results to the next processor. Sure, it would introduce some lag, but you could still utilise N processors or nodes for N effects. This is what you speculate would need a "clever" algorithm, but it really doesn't need to be clever at all.

    I'd say that the best analogy is a Unix pipe. To put several processors to work, you would run all of the effects as separate processes, piping the data from one process to another, and have the operating system take care of the nasty details.

    Splitting one effect across several processors would need some pretty clever algorithms, but then again, just buying a DSP card or dedicating a fast enough processor to the job might be enough.
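    A minimal sketch of that pipe idea, assuming raw signed 16-bit mono samples on stdin/stdout so each effect can run as its own process (the file names and the echo/compress stages in the example command are hypothetical):

        #!/usr/bin/env python3
        # gain.py -- one stage in a chain of effect processes.  Reads raw signed 16-bit
        # little-endian mono samples on stdin, scales them, writes them to stdout.
        # Stages chain with ordinary pipes, e.g.:
        #   cat dry.raw | ./gain.py 0.8 | ./echo.py | ./compress.py > wet.raw
        # (echo.py and compress.py would be sibling stages; each stage could just as
        # well run on another machine behind ssh or netcat.)
        import struct
        import sys

        gain = float(sys.argv[1]) if len(sys.argv) > 1 else 1.0
        BYTES_PER_SAMPLE = 2

        while True:
            data = sys.stdin.buffer.read(4096 * BYTES_PER_SAMPLE)
            if not data:
                break
            count = len(data) // BYTES_PER_SAMPLE
            samples = struct.unpack("<%dh" % count, data)
            scaled = [max(-32768, min(32767, int(s * gain))) for s in samples]
            sys.stdout.buffer.write(struct.pack("<%dh" % count, *scaled))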

  • SO - the answer is to use Mac or Windows, huh? Thanks, but no thanks.
  • OK, say you're going to process a stereo track you've got recorded on your PC. Maybe you want to do some sort of chorus feeding into a reverb and then compress/limit the whole thing, a pretty simple process.

    Now, with all the external cheap boxes, you can do this by patching everything together, aux sends/returns, etc, making sure all the input levels are set correctly (may need adjusting for each track you do). You'd have to play the track on the PC, convert it to analog, send it through all the boxes (each adding to the noise floor) and then convert back to digital, recording on an additional track. Might sound OK but there would definitely be HISSSSS and it wouldn't be professional quality, that's for certain.

    Or... use PC software to do it all, keeping it purely in the digital domain, with a much cleaner result. Keep in mind he's probably working with 24-bit, 96kHz audio, which has a MUCH higher S/N ratio than any analog gear you can pick up at the local music store. It also requires a helluva lotta power to process.

    The only time I can see using the dedicated reverbs is when you want to record a wet sound in the first place, and you happen to like the color that a particular box adds (which does happen often)... as opposed to recording the dry sound and then processing via software later.

    Another thing the CPU-based DSP is used for is mixing down. You finally get all the tracks sounding good, now you have to get 8 or whatever tracks of 96/24 audio mixed down into 2 tracks of 44.1kHz 16-bit that can be burned to CD audio.

    I'm just barely scratching the surface here, but now you can see the limited (but still useful) function of the external DSP boxes.
  • What network bandwidth? 10 Mbps maybe, but consider this: CD-quality audio is 44.1 kHz * 16 bits * 2 channels = 1.4 Mbit/second. A 100 Mbit network should do just fine for, say, 10-20 streams running back and forth.
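    The same arithmetic in a form that's easy to re-run with different assumptions (the 50% usable-bandwidth figure is a deliberately pessimistic guess, not a measurement):

        # Raw PCM stream bandwidth vs. a 100 Mbit link, using the numbers from the post above.
        def stream_mbps(rate_hz, bits, channels):
            """Uncompressed PCM bandwidth in megabits per second."""
            return rate_hz * bits * channels / 1e6

        cd = stream_mbps(44100, 16, 2)
        print(cd)                        # ~1.41 Mbit/s per CD-quality stereo stream
        print(int(100 * 0.5 // cd))      # ~35 streams even if only half the wire is usable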
  • Having experience with this subject matter (digital recordings), I can say a) no commercial-quality software supports such a thing, and b) the output quality would be very poor. At any rate, MIDI takes much, much less processing power than the digital audio I process... My recommendation: set up a cluster, because you CAN cluster with NT/NT and NT/Linux, and use SMP systems, because even though Cubase itself isn't designed for SMP, it will still improve performance.

    Derek
  • Do you know anyone who might want to tackle such a project?

    The correct question is "Would I be better off writing this with vi or Emacs". In the open source world, the idea is not to get people to give you free software, it's that people want something done so they do it. If your idea is interesting enough, other people will get involved.

    Rich

  • I work for a Post Production (special FX, etc.) company, and the way we do render farming is we have a network of rendering machines and a render-queue server.

    The animator would, for example, want to render a 3D scene, so he would tell the render-queue manager to put his job in the farm.

    If it's a 75-frame animation sequence, the queue manager will send one frame to each machine in the farm instead of all of them working on one frame simultaneously (which is what you are suggesting).

    The way you would have to go about doing what you want to do, in my opinion, is to write an interfacing app which would take the sound, break it up into lots of little bits, send each little bit to a machine with sufficient resources, and put them back together as one whole sequence once they're all done (a rough sketch follows at the end of this comment).

    This would work in the same way as an existing and very reliable model.

    Hope I've helped out.
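    A sketch of that queue model applied to audio chunks, using multiprocessing as a stand-in for the farm. This assumes the chunks really are independent (which, as noted elsewhere in this discussion, is only true for effects without long tails or feedback), and the "effect" here is just a placeholder:

        from multiprocessing import Pool

        def render_chunk(job):
            """Stand-in for one farm node rendering one chunk.  'job' carries the chunk
            index and its samples; a real queue manager would ship this over the network."""
            index, samples = job
            processed = [s * 0.5 for s in samples]       # placeholder "effect"
            return index, processed

        def render(samples, chunk=44100, workers=4):
            jobs = [(i, samples[i:i + chunk]) for i in range(0, len(samples), chunk)]
            with Pool(workers) as farm:                  # one process per "machine"
                results = farm.map(render_chunk, jobs)
            out = []
            for _, processed in sorted(results):         # reassemble in original order
                out.extend(processed)
            return out

        if __name__ == "__main__":
            print(len(render([0.0] * 441000)))           # ten seconds of silence, rendered in parallel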

  • I don't know if you could do real-time processing using the Beowulf model, but you could probably chop the audio up into bite-sized chunks, do the processing, and reassemble later...

    That kind of thing would parallelize well...


    tagline

  • Um. The person said he was using Cubase VST, which as far as I know is Mac/Windows only.
  • Too bad you don't always have mod points =)

  • Of course this would require Cubase to become SMP-aware on Win32, which probably is about as likely as the distributed thing.

  • =
    There are already Windows-based realtime DSP effects boxes that use mLAN as the routing interface, currently in *development* by a number of large music companies (I can't drop names here, I'm under very tight NDA).
    =

    Hey, you should let these guys know that before they code support for ultra-niche hardware configurations they should implement SMP support. Unless "number of large music companies" = digidesign; because protools is the only audio software I can think of that already supports SMP.

    badtz-maru
  • by doorbot.com ( 184378 ) on Thursday December 21, 2000 @11:53AM (#1410895) Journal
    I've been wanting something exactly like this for a long long time.

    I do editing using Adobe Premiere on my Macintosh. When I finish a project, or finish a "stage" of it, I have to render the project overnight. My next editing machine will likely be a dual processor PC, but for now I'm using my Mac.

    In addition to my Mac, I have four other computers available, all of which will happily run Linux. Setting up a Linux cluster would be a good project, and is definitely feasible. But my Macintosh has no means to offload the rendering to a cluster...

    I think this would be a fantastic product; I can see it as a wholly separate application. It would run separately from the actual editing application and distribute its rendering load to some sort of cluster (I'd assume a customized Linux cluster).

    Computers are cheap now, but it is expensive to buy very fast machines every few months... why not allow for clustering of your old machines (even if you do replace the primary one every few months)? Then you could still use those extra CPU cycles, and maybe you could actually use your money effectively.
  • by Splat ( 9175 ) on Thursday December 21, 2000 @11:57AM (#1410896)
    Isn't the problem faced by distributed computing such as this the limitation of the network bandwidth between machines? Assuming you connected the machines at 100 Mbit/s, wouldn't you top out real quick? The processing speed compared to what can be sent over the wire would make for a few delays, I would think.

    Anyone have any feedback on using distributed computing for such "Real-time" things such as Video and Audio?
  • by nweaver ( 113078 ) on Thursday December 21, 2000 @11:52AM (#1410897) Homepage

    If you did your effects in Mpeg 4 structured audio instead of MIDI, you might get considerably more performance.

    Why? Because there is considerable research in compiling MP4-SA to C and then running the native code, to get greater performance out of arbitrary effects, filters, etc.

    More info is available here [berkeley.edu]


    Nicholas C Weaver
    nweaver@cs.berkeley.edu

  • Absolutely!

    My dual G4-400 *easily* handles 3-4 effects on 8 *stereo* tracks in realtime, no lag!
    Plus, any speed increase you gain by using the cluster (I've set up 128 node MPI clusters before) is going to be lost just *getting* the data there...
    It could be a nice idea for a large job - I don't know what kind of audio you're processing - but for my everyday use (or other musicians I know) clustering just isn't practical (in the sense of time or money).

    Ok, you're not a *total* idiot -- there's just better ways of doing it. A faster machine will take you much farther than a few clustered Pentiums.
  • Quasimodo is also SMP-enabled.
  • It seems to me that if you are doing N effects on a file, you should be able to pipeline the data through N processors. The code may be tricky (isn't this just the type of thing Beowulf clusters are actually for?), but it shouldn't be hard to do.

    This is a natural for parallel processing.

    --Mike--

  • ROTFLMAO
  • MIDI is not an audio format. MIDI is a protocol for communication (generally between musical instruments, although it has been co-opted for other purposes, such as controlling lighting etc.).

    Basically, MIDI contains commands such as "play an F#", "play quiet", "play loud", "stop playing any notes", "sound like a piano", "sound like an oboe". Synthesizers take those commands, interpret them, and send out an audio stream.

    Software effects, such as those in Cubase, operate on that resulting audio stream, either taking a file for input (I believe it is in .wav format, but I could be wrong), or by operating on live audio coming in through A/D converters.

    In any case, since he wants to use Cubase (and presumably other professional tools), rather than rolling his own, the format of the audio is not open to negotiation.

    Does anybody know if VMWare benefits from being run on a cluster (Beowulf or other)? Perhaps you could run Windows and your entire music workstation environment over the cluster, rather than shipping off specific jobs? (I don't have the foggiest idea whether this is even remotely possible.)

    On the other hand, what about using CSound to do your processing on the cluster, since CSound runs just fine under Linux. Again, I don't know if any of the clustering technologies would benefit the performance without being rewritten to take advantage.
  • You'll get good sound, even virtual 3d sound. So please buy one so my spatializer stock will go up! LOL
  • Interesting idea, but you couldn't split it up quite so simply. Effects on a signal are not applied in parallel, but in series (cumulatively). For example, if you take a signal and put reverb and then delay on it, each tap of the delay has its own reverb. In other words, if you processed the effects separately and then mixed the signals, you would not get the same overall effect at all.

    Realtime multiprocessing would be possible, I'm sure, but would need some pretty clever algorithms to get anything near realtime (maybe each processor takes care of a single sample at a time, round-robin fashion).
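    A tiny numerical illustration of that point, using two invented single-tap echoes: running them in series creates an "echo of the echo" term that mixing their separate outputs never produces (and the parallel mix also doubles the dry signal):

        import numpy as np

        def echo(x, delay, gain):
            """Toy effect: the dry signal plus one delayed, attenuated copy."""
            y = np.copy(x)
            y[delay:] += gain * x[:-delay]
            return y

        x = np.zeros(32)
        x[0] = 1.0                                   # a single impulse

        serial = echo(echo(x, delay=4, gain=0.5), delay=7, gain=0.5)
        parallel = echo(x, delay=4, gain=0.5) + echo(x, delay=7, gain=0.5)

        print(serial[11])    # 0.25 -- the echo-of-an-echo at 4+7 samples
        print(parallel[11])  # 0.0  -- missing when the effects are run separately and mixed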

  • Sounds like you need to buy some real equipment. What types of effects do you use in VST? If you need realtime effects, you need to buy hardware that was designed for it. There really is no such thing as quality cheap effects in the professional music biz.
  • I'd recommend investing in some good audio hardware for your existing computer. I'm referring to audio processing cards that do the processing off of your PC's CPU on their own dedicated DSP chips.

    I recommend DigiDesign ProTools. ProTools can be a bit expensive, but the time you'd save inventing a solution and actually making music would be worth it.

    ProTools is really 2 things -- hardware AND software. ProTools the hardware is a few PCI cards that have "farms" of DSP processors, integrated SCSI, etc. It also consists of 24-bit analog-to-digital converter boxes.

    ProTools the software is as good as, and in some ways better than, Cubase, plus it supports TDM plug-ins. These plug-ins can emulate amplifiers, run high-powered reverb (like LexiVerb), etc., and they do it on the ProTools hardware, which means you can run 8 times the effects, or some really, really high-end effects, all in real time (so you can tweak them up to the very last mixdown).

    There are a few other software packages that support ProTools -- MOTU Digital Performer, Emagic Logic Audio, and Cakewalk Pro.

    ProTools is the stuff that all the MTV-esque bands are using -- from Limp Bizkit to Britney Spears (who may suck but have great sound).

    Check out http://www.digidesign.com

    No I don't work for DigiDesign.. I don't even own any of their ProTools gear, I just know it's the best. I own a Digi001 - a nice intro to hard disk based recording (under $1000) http://www.digi001.com
  • I wonder how difficult it would be to come up with a wrapper for VST plugins that allowed them to be used under Linux? VST plugs are DLLs. I have seen a package for Linux that uses Win32 DLLs wrapped in something (aviplay). Perhaps if someone could write a VST plugin implementation under Linux, allow it to read the VST plugins through a wrapper, then write a VST plug to proxy the data over to the Linux box (cluster), it could work. Don't know about latency, though. VST plugs are already bad enough in this regard. It would be a neat hack though :)
  • Standalone, dedicated DSP units are cheap and available at every music store in a variety of shapes and sizes - rackmount units, stomp boxes, from all-inclusive multi-effects systems to units that do only one thing but do it incredibly well.

    What advantages are there to using a fully programmable processor to do nothing more than emulate a much less expensive piece of hardware?

    -Poot
  • Do the math. High-quality audio (5.1 channels of 24-bit audio at 48000 samples per second, for instance) uses ~5.87 Mbps, so let's say you can stream 8 audio streams simultaneously. Since nodes have to return results to the main node, we're talking about a very conservative estimate of 4 nodes, if the nodes are fast enough to do their work in real time (a dodgy assumption, since if they could, we wouldn't need all this complexity in the first place).

    If nodes can't achieve real time performance, it just frees up bandwidth. If your network goes through a good switch, you should see better numbers. If you don't need 5.1 channels of audio, you'll scale better. In any case, a 100Mbps network will not be the bottleneck in an audio application.

  • If a pipe-like solution were used, the audio data would effectively have to travel N times. Now, if the nodes were set up serially, each with two network cards, daisy-chaining along, this might work nicely. Each node would have to specialize in a particular effect, some with dual CPUs if their effect parallelizes well, and with CPU speeds chosen such that each effect takes the same amount of time.

    The system would be like lining up a bunch of colored glass plates (filters), shining light (audio data) through them, and seeing what comes out. It would be rather interesting to play with, but also a lot of work.

  • Need for near-real-time responsiveness + distributed processing = no, I don't think so.
  • ...you could always use Mosix [mosix.org]. It's a clustering implementation that doesn't require the special optimization of code or a recompile like PVM and other tools do.

  • I see your point about assigning one effect to each processor, essentially running a serial chain as an "almost parallel, but with tiny lag" system. However, some effects take much more processing time than others, so this wouldn't distribute resources very well.

    Ideally, a "clustered" solution should be as symmetrical as possible. The daisy-chain system would give the "simple delay" processor a vacation, while the "stereo chorus" processor bottlenecks the whole system. This is where the clever algorithm comes in.

    All in all, I agree most with the previous posts that specialized DSP hardware could handle this much better than some hacked-up Linux boxes.

  • Does anybody know if VMWare benefits from being run on a cluster (Beowulf or other)?

    It doesn't. Software has to be written specifically for parallelism (this even includes the software running on the virtual Windows machine).

    It would be conceivably possible to design VMWare to emulate a multiprocessor environment, then use some different Linux boxes to emulate the different processors, but this approach would have two problems:

    • The latency caused by the cluster would be unbearable, and would probably negate any benefits gained. No network is going to match the speed of an internal PC bus.
    • The Windows software still would only run on one of the virtual processors, unless it was designed to run on more than one. Even if it is designed for a multiprocess environment, you would benefit more from just buying a nice multiprocessor box.
  • Moshe Bar wrote an article on setting up Mosix a while back for Byte magazine. Very well written and easy to follow - I even set up a mini-cluster of three K6's in under an hour following the steps he laid out:

    http://www.byte.com/column/BYT20000925S0005 [byte.com]

  • Amen. You can get some incredible effects through software post-processing, but if you want realtime, you gotta lay down some bucks.
  • Many 3D modellers have the option to send the project to network or batch renderers, which are basically the beefy machines that do the number crunching so you can continue to work on your project. I've never had the chance to try it, but 3D Studio Max and Maya both support it; no doubt many other 3D programs do as well. This developed from a need; if you made your desire well known, I'm sure software designers could design something to meet it.

    ---
  • DSP farms are a great idea in theory (notwithstanding the network latency issues already mentioned). The real problem is:

    DSP functions can't be easily parallelized

    That is to say: suppose we have a 10-second sound we are trying to render. Let's suppose you want to break that up into 10 1-second slices to farm out to your DSP cluster. The problem becomes that these are not discrete items. The sound of slice 4 depends on what was happening in slices 1, 2, and 3 in a cumulative fashion. Therefore, it becomes very difficult to render slice 4 until you know what waveforms should be present from the previous slices...

    I've thought of the same thing for Text-To-Speech conversion - but I ran into the same kind of problem, except it was a question of punctuation. For instance:

    This sentence, my friends, is hard to parallelize, right?

    (It's hard because of the speech breaks that would occur at the punctuation marks.)

    -DM

  • Yeah? Well, you use "420" in your domain name, so I guess you must be a stoner.

    Stupid assumptions, made liberally.
  • ... all it takes is someone willing to devote the time to add mLAN support.

    http://www.yamaha.co.jp/tech/1394mLAN/mlan.html

    Can't do much better than that, honestly. And mLAN hardware is out there right now, there's no reason enterprising developers can't get started as of today.

    I'd do it, but I'm working on stuff that sits on top of mLAN first on Mac/PC ... once this works, I'll consider doing mLAN driver work for Linux.
  • I agree. Installing a Mosix cluster is simple. All that is required is 2 RPMs, a 100 Mbit switched network, and some spare systems. The RPMs contain the Mosix kernel, so you don't even have to compile a new one. I built a small cluster with 3 CPUs (133-166 MHz) in a matter of a couple of hours. The process migration is efficient; commands are submitted to the Mosix process table by prefixing them with 'mosrun'. Process monitoring is accomplished by the 'mon' command, which displays a simple ANSI graph. The hardest part was configuring /etc/mosix.map. It needs to be in the format NODE #, NODE IP, TOTAL NODES #. I have been working on bootable Mosix CDs, so when you want an additional node, you just throw your cluster CD into your desktop machine.
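    For illustration, a hypothetical /etc/mosix.map following the format the poster describes (node number, node IP, total node count); the addresses are invented:

        # /etc/mosix.map -- NODE #, NODE IP, TOTAL NODES #
        1   192.168.1.11   3
        2   192.168.1.12   3
        3   192.168.1.13   3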
  • You said "My dual G4-400 *easily* handles 3-4 effect on 8 *stereo* tracks realtime, no lag!"

    How about 5-7 effects on 24 stereo tracks realtime no lag? 48 tracks?
  • It's more efficient but not more cost-effective on a large scale. 16- or 32-processor computers like SGI Origins cost about $1,000,000. A 128-node Beowulf cluster costs a mere $700,000 or so. So which one's more cost-effective?
  • Why not maybe take the plunge and get a quad-processor PC?

    Or perhaps see if there are some DSP cards that would work with the VST plugins you're using. I think farming it out in real time is not something that's going to happen for a while. I mean, the latency will definitely be a problem...

    Harmony Central [harmony-central.com] had an article [harmony-central.com] (old, but still interesting), about a DSP card. And here [yamaha.co.jp] is some info on a yamaha DSP Farm.

    I think going that route would be better....
    --

  • That clears it up a bit, but I would still be worried about the network performance between machines. If you're maxing out your 100 Mbit link, doesn't that create some sort of delay as the machines synchronize themselves? Or is the delay negligible given the performance gains?

    For effects on a file the machine would be able to wait. But what happens when this theory gets applied to real-time things like Video Editing?

    Without a fast enough hard drive, you lose frames. Without a fast enough backbone/backplane between machines, would a similar effect happen?
  • ... before MP4-SA can begin to function, and I mean *function* in a professional studio in lieu of MIDI.

    There's a lot to say about MIDI - sure, it's slow, sure it's limited, but as far as serial control protocols go, it has *definitely* weathered the test of time.

    Anyway, this slashdot article is not a problem of MIDI, but a problem of i/o architectures.

    The problem with realtime Cubase effects is that they're still very processor/architecture/OS bound. There's no functional way of getting data in and out of a Cubase effects module other than by using the strict architecture path that the effects software has been written to use - this problem exists for Cubase plugins, ProTools/TDM plugins, DirectX plugins, etc.

    A better alternative to investing energy into the MP4-SA side of things (which is not a new technology - for a similar example, check out Thomas Dolby's Beatnik, which is a fairly similar implementation) is to instead investigate Linux support for mLAN, which is Yamaha's "musical LAN" protocol that sits on top of FireWire/IEEE 1394.

    mLAN is the solution to this article's stated problem.

    By adding really good support for mLAN in Linux, it's really *NOT* unreasonable to say that clustered Linux boxes could be used for REALTIME DSP processing tasks. mLAN represents audio, video, and media data as individual streams on the same wire - a very good implementation of mLAN/FireWire in the Linux kernel would allow you to separate these streams, route them off to different processes/processors, and stick them back on the mLAN wire in realtime.

    This will happen in the PC world.

    There are already Windows-based realtime DSP effects boxes that use mLAN as the routing interface, currently in *development* by a number of large music companies (I can't drop names here, I'm under very tight NDA).

    With good Firewire support, Linux (and other OSS OS'es) can keep up with these developments, which will hit the market, I'd predict, by Q3 2001. Put the audio out on the wire, route it wherever you want to route it, and get it back off the wire, and we'll never have these limited 'plugin' problems again ... which, by the way, is one of the reasons I'm a very strong advocate of musicians *staying* in the hardware realm for now, and leaving the software plugins alone.

    The audio world has *ALWAYS* been about interoperability, open specifications, and shared protocols. MIDI would never have come so far, and given us so much, if it weren't for the fact that hardware manufacturers took the time to figure out good ways to work together, in spite of competition. Audio effects architectures have always been very well documented - the average professional mixing desk has been designed from the ground up to be able to work with gear from *all* sorts of manufacturers, using simple protocols and specifications for audio signal routing. (XLR, balanced, etc)

    This changed recently with the advent of software plugins, where hardly *ANY* companies are truly working with others to build a common platform - Oh sure, Digidesign have their TDM system, as do Microsoft, as do Steinberg, but pretty much all of these systems were developed from the ground up to be proprietary and to be used in giving 'market edge' to the respective owners.

    Next year, software plugins won't be as badly implemented, and the audio world will actually be able to work cooperatively again... because of mLAN.

    So, we should work on getting good mLAN/FireWire support in Linux, and develop open APIs to solve the problem. Then you can build your Linux audio-rendering farm, hook it all up with mLAN, and forget about proprietary crappy interfaces for your audio work in the future...
  • I think that such a thing would be awesome. I'd like to see *all* processing done by the cluster, using the host recording PC simply for I/O and possibly as the storage locale. (Perhaps file storage could be sent to the cluster as well...)

    Regrettably, I think the host program (CubaseVST in your case) would have to be heavily modified to allow it to understand that Cubase itself will not be processing the requests.

    Perhaps someone currently beginning development on a Linux multitrack environment from scratch would be interested in this idea. I think it would be much easier to implement from the start than to try to convince existing manufacturers to change their philosophy.

    Another multitrack function which is badly needed is global file management. (More like a document management system, but for sound events.)
    I have roughly 30 GB of WAV material ranging from small loops (50K) to whole mixdowns (50MB). It gets hard to find the sound one is after.

    How do I find an example of your current Cubase material?

  • If the effects are being done in real time, Beowulf probably won't work very well. The inter-node latency would be a killer.

    If you're doing non-real-time processing (from a score/cue sheets, etc.), a Beowulf cluster could be helpful. The best bet would be to lay the local network out as a ring (two NICs in each node, crossover cables interconnecting the nodes in a ring). Each time segment could get processed by each node in turn, like an assembly line, until it ends up back at the master, where it goes to the output file. The software would be 'interesting' to write, but doable.

  • I have heard similar concerns about lack of processing power from many different fields of work, like computer-aided drafting (CAD), and from many graphic designers who cannot bring themselves to use Macs (which would be a smart thing to do in their situation). They have all brought up the idea of running a Linux cluster that would be able to provide them with enough CPU power to easily accomplish whatever their tasks are (rendering, etc.). But the fact still remains that poor software and bug-riddled code seem to be hindering one of Linux's key strengths, one it should be capitalizing on.
  • Unfortunately.

    I can't get a simple Crystal Sound card to work properly - Sound Blaster Compatible, but the driver isn't. Frickin notebooks with their frickin top secret chip set crap designs.

    But I mean, even if I wanted to get Cubase VST Linux and chain it through a Darla or some other medium to high end soundboard, where's the support in Linux? There's precious little- OSS is almost the ONLY viable solution to most problems...

    What I would like to know is how to develop sound card drivers, because trust me, I'd be off and running. I bought the Linux Device Drivers book, which was cool; it explains how Linux uses drivers. What I need is a good article on how to make the SOUND CARD part work.
  • I would love the idea.

    The only problem would be setting up for road shows, but for studio work, it would be great.

    Maybe I can build me a wall of sound [raymondscott.com] like Raymond Scott [raymondscott.com] did.

    (side note: Raymond Scott invented the sequencer, and was a teacher of Robert Moog. His novelty tunes were used widely by Carl Stalling for themes in many Warner Bros cartoons.)

  • GStreamer [gstreamer.net] is a project I and another person are currently working on pretty much full time (I'm between jobs and he's on vacation) at the moment. It's been around for a bit over a year and has grown considerably in that time. It's a pipeline-based architecture somewhat similar to M$'s DirectShow, allowing arbitrary graphs of plugin filters, processing just about any kind of streamable media you can think of.

    This lends itself quite a bit to distributed processing, since you simply (for now) code up an element pair that enables you to join two pipelines over the network, via TCP or somesuch. Eventually we plan to have CORBA interfaces wrapped around everything, which, while slowing down data transfers, has the potential to make everything even easier.

    A release is planned for the beginning of next year (midnight, Jan 1st, Millennium Release), which should provide people with a stable enough API to start writing apps with it. There are still going to be some major changes like a shift to GObject (currently uses GtkObject, so it's tied to X, bleagh), and some major feature enhancements like the graphical pipeline editor. Changes to the system should affect plugin writers mostly, as the "user-level" API should remain basically the same.

    The two of us are interested in audio and video respectively. I want to build a live mixing system, he wants to build an NLE. The two have much overlap even ignoring the GStreamer core, so things should get interesting. There are some other people with some pretty cool ideas that we'll try to incorporate, one of which is distributed processing.

    Anyone interested in this project should head over to http://www.gstreamer.net/ [gstreamer.net] and sign onto the mailing list (gstreamer-devel). We'll be busy coding through the end of the year, but we welcome anyone who would like to use the system. The scheduling system is currently being re-written for the 4th or 5th (and hopefully last) time, so anyone with specific use cases can help the process along by enumerating them to make sure the scheduler can deal with even the most bizarre cases.

    As far as VST and other plugins go, there's a project called LADSPA [ladspa.org] that's building a plugin interface similar to VST. Quite a few plugins are already available, including FreeVerb. The problem with VST is that it embeds the Windows-based GUI into the plugin. This might be shimmed with libwine or something, but it is a tough problem. If someone would like to tackle that, please, step right up; we'll help you as much as we possibly can.

  • by kirkb ( 158552 ) on Thursday December 21, 2000 @12:53PM (#1410933) Homepage
    Given that network bandwidth would quickly become your bottleneck, it seems that going SMP would be a *much* more efficient (and cost-effective) solution.
  • What you say is all true (if perhaps a little optimistic), but has nothing to do with clustering. What you describe would allow you to daisy-chain individual Linux boxes which would serve as dedicated effects boxes. I imagine you could do this sort of thing right now with SPDIF. For those who don't know, SPDIF is the format used for moving around digital audio - for example between a CD transport and an outboard D/A converter in your home stereo system, or between components in a digital studio - such as a digital mixing board and a digital effects unit. The problem is, this daisy-chaining approach really isn't at all the same as clustering. While you might be able to render a 5 minute song in real time (5 minutes), it will still take 5 minutes, no matter how many computers you add to your "farm". But if you were able to do the processing on a cluster, you might be able to render the song in 30 seconds using eight computers.
  • I don't think standard clustering would work for this. Considering there are no VST solutions for clustering and you'd have to write your own anyway, I'd definitely not suggest trying to do the processing in a loosely clustered environment. The problem is that your processing would pick up a lot of latency and couldn't be done in real time. A better idea would be to have one machine per effect stage and then pass digital audio between them - preferably over an IEEE1394 interface. Of course, with a tightly coupled cluster, the latency shouldn't be a problem for the human ear.

    -Daniel

  • 100 Mbit/sec isn't slow at all - at least not (much) slower than your IDE hard disk. For raw audio it's plenty: a CD-quality stereo stream is only about 1.4 Mbit/sec (44100 samples/sec x 16 bits x 2 channels), so Fast Ethernet can carry dozens of channels.

    If you need SCSI-class speed, go with Gigabit Ethernet or Myrinet (yes, there's still the protocol overhead, I know - but just count the bytes - it's probably fast enough).
  • (I was told this text is hard to understand if you don't know that sound is nothing more than waves, and that a wave can be 'drawn' digitally by stringing together a bunch of numbers representing its amplitude at exactly timed moments. All musical effects are math on those 'strings' of numbers.)

    I was thinking of starting a musical project after my current gig [eevolved.com] is over.

    It would be cool if we could create a musical tool (maybe a replacement for proprietary VST, but completely VST-compatible...) that was completely free and ultra powerful.

    Python is a language I've come across recently, and it seems well suited to massively parallel computational tasks such as Digital Signal Processing (DSP) and VST-style effects (and musical effects in general).

    All that would have to be coded is an agent app that does work on incoming packets and outputs to another specific agent (eventually the signal data would return, completely processed, to the output computer).

    Live shows using the software could consist of one controller computer that changes parameters for each effect according to the musician's input (knobs and/or instruments plugged into the computer). That computer's agent would grab the input and send constructed commands to the processing cluster. The cluster would run all the math, then hand the resulting signal to the output computer, which runs an agent whose 'job' (or 'effect') is simply to play the signal - resulting in crackling beats and such. (A rough sketch of such an agent is below.)

    Python [linuxjournal.com] - anybody want this to exist?
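
    For what it's worth, here is a rough sketch of what one of those agents could look like - written in C rather than Python, just to show the shape of the thing. The hosts, ports, block size and the trivial "halve the signal" effect are all made up, and error handling, framing of partial blocks and controller commands are left out:

    /* agent.c - hypothetical DSP "agent": read float blocks from the upstream
     * agent, apply an effect, forward them to the downstream agent. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define BLOCK 1024                    /* samples per network block (made up) */

    /* Read exactly len bytes (or stop at EOF/error). */
    static ssize_t read_full(int fd, void *buf, size_t len) {
        size_t got = 0;
        while (got < len) {
            ssize_t n = read(fd, (char *)buf + got, len - got);
            if (n <= 0)
                return (ssize_t)got;
            got += (size_t)n;
        }
        return (ssize_t)len;
    }

    int main(void) {
        /* Accept a connection from the upstream agent on port 9000
           (error checking omitted for brevity). */
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in listen_addr = {0};
        listen_addr.sin_family = AF_INET;
        listen_addr.sin_addr.s_addr = htonl(INADDR_ANY);
        listen_addr.sin_port = htons(9000);
        bind(srv, (struct sockaddr *)&listen_addr, sizeof listen_addr);
        listen(srv, 1);
        int upstream = accept(srv, NULL, NULL);

        /* Connect to the downstream agent (e.g. the output computer). */
        int downstream = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in next_addr = {0};
        next_addr.sin_family = AF_INET;
        next_addr.sin_port = htons(9001);
        inet_pton(AF_INET, "192.168.0.2", &next_addr.sin_addr);  /* made-up host */
        connect(downstream, (struct sockaddr *)&next_addr, sizeof next_addr);

        float buf[BLOCK];
        while (read_full(upstream, buf, sizeof buf) == (ssize_t)sizeof buf) {
            /* The "effect": halve the signal.  A real agent would run a reverb,
               delay, etc. here, with parameters set by the controller computer. */
            for (unsigned i = 0; i < BLOCK; i++)
                buf[i] *= 0.5f;
            write(downstream, buf, sizeof buf);
        }
        close(upstream);
        close(downstream);
        close(srv);
        return 0;
    }

    Each box in the cluster would run one of these with a different effect in the processing loop, chained together by their upstream/downstream addresses.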

  • In principle, it should be relatively easy to write a VST plug-in which uses TCP/IP to farm out work to a music rendering farm (just as 3D graphics apps use render farms).
    However, you would almost certainly never get real-time performance. You could imagine a fractal "reverb" program, maybe, where the local processor provides a coarse version, farms out the detail to the render farm and moves on - but performance would be seriously variable: a single network glitch, or a delayed request for new tasks, could cause the effect to coarsen as performance degrades.
    Most of the time, you would rather have no effects at all than randomly variable ones.
    However, clusters would be great for apps like windchimes, etc., which generate random music based on complex math - although the law of diminishing returns may set in rather early....
  • ... check out Quasimodo [quasimodo.org], and then someone could write a network audio stream plugin for it (or petition the author ;). You could then run a copy of Quasimodo on each "node" and route the audio signal(s) between them.

    This has to happen sooner or later, with some software or other! :-)

    Quasimodo author PBD has some pretty good arguments on his site that it is worth doing DSP audio processing on generic PCs, BTW. You can take a look at some dedicated (non-networked, and non-GNU/Linux) hardware solutions at Creamware's site [creamware.com] - in particular their high-end-studio-in-a-PC, Scope [creamware.de]

    Latency will be a problem - not because of delays audible to the human ear, but because of the difficulty of synchronising different tracks that have arrived by different network paths. Net-latency-caused discrepancies of a couple of milliseconds could introduce phase problems in audible ranges: mix a signal with a copy of itself skewed by a time t and you get a comb filter whose first notch sits at 1/(2t) - about 250 Hz for a 2 ms skew. Perhaps a "broadcast clicktrack" plugin of some kind could solve that..?
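
    A quick back-of-the-envelope check of that claim (all numbers made up for illustration): mix a 250 Hz sine with a copy of itself skewed by 2 ms and measure how much of it survives.

    /* comb.c - mix a sine with a copy of itself delayed by a small network
     * skew and measure what's left.  All numbers are made up for illustration. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    int main(void) {
        const double rate = 44100.0;   /* sample rate                        */
        const double skew = 0.002;     /* 2 ms of network-induced skew       */
        const double freq = 250.0;     /* tone at the predicted first notch  */
        const int delay = (int)(skew * rate + 0.5);

        double peak = 0.0;
        for (int n = delay; n < (int)rate; n++) {
            double direct  = sin(2.0 * M_PI * freq * n / rate);
            double delayed = sin(2.0 * M_PI * freq * (n - delay) / rate);
            double mixed   = 0.5 * (direct + delayed);
            if (fabs(mixed) > peak)
                peak = fabs(mixed);
        }
        printf("first comb-filter notch predicted at %.0f Hz\n", 1.0 / (2.0 * skew));
        printf("peak level of the %.0f Hz tone after mixing: %.4f\n", freq, peak);
        return 0;
    }

    Compile with -lm and run it: the skewed mix leaves only a tiny fraction of the un-skewed level at 250 Hz, i.e. the tone is essentially cancelled.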

  • I think the best thing to do, if it were fast enough, would be to write some sort of VST network plugin: you load the plugin into any program that supports VST.

    That plugin then transmits audio and configuration data to the cluster, which in turn can run any VST plugin that exists, returning the computed result back to the VST network client.

    You could do it using UDP, or over 1000baseT, to get around the network latency issues (a rough sketch of the client side is below).

    Just a thought...
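
    For illustration, here is roughly what the client end of such a plugin might do once per audio block. The host, port, block size, the single "wet/dry" parameter and the framing are all assumptions, and a real plugin would also have to hide the round-trip latency from the host:

    /* netfx_client.c - hypothetical client side of a "network effect" plugin. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define BLOCK 512                     /* samples per block (made up) */

    /* Ship one block to the processing box and read the processed block back
     * into the same buffer.  Returns 0 on success, -1 on any error. */
    int process_block_remote(const char *host, unsigned short port,
                             float wet_dry, float *samples) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, host, &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            close(fd);
            return -1;
        }

        /* Send the effect parameter, then the raw audio block. */
        if (write(fd, &wet_dry, sizeof wet_dry) != (ssize_t)sizeof wet_dry ||
            write(fd, samples, BLOCK * sizeof(float)) != (ssize_t)(BLOCK * sizeof(float))) {
            close(fd);
            return -1;
        }

        /* Read the processed block back, handling short reads. */
        size_t got = 0;
        char *out = (char *)samples;
        while (got < BLOCK * sizeof(float)) {
            ssize_t n = read(fd, out + got, BLOCK * sizeof(float) - got);
            if (n <= 0) {
                close(fd);
                return -1;
            }
            got += (size_t)n;
        }
        close(fd);
        return 0;
    }

    The box on the other end would read the parameter and the block, run whatever effect it hosts, and write the processed block straight back. A real implementation would keep the connection open rather than reconnecting for every block.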

  • Argh, come to think of it, why bother with a cluster at all? Protools was designed to solve this problem, giving you a cluster of Motorola 56000's accessible via TDM.

    The big problem is that Digi has a lock on the hardware and prices, and it's too goddamned expensive ($8000 or so for 16 tracks).

    Linux machines would be about $1000+ each, so I guess it evens out after a while.

  • a TOURING machine? Is that like a travel agency? I think you mean Turing ;)
  • Many of you are confused about what Digi offers.

    "ProTools" is a software digital audio editing package. It will work with many digital audio cards (well, at least those that run with DAE; mostly Digidesign cards.)

    The realtime effects processing comes in when you use ProTools TDM, a MixFarm card, and a ProTools interface (something like the 888/16 will do).

    When you're done buying all this stuff, you'll have a good system. If you want an excellent system, then you're looking at purchasing a better A/D-D/A box (Apogee is excellent) and preamplifiers (Focusrite, etc.).

    Nearly $50K later you'll have a system that will be better than most low-end SSL consoles ;)

  • Have you considered csound [csound.org]? It's fast, simple, powerful, and all text, so you won't have to worry about a wimpy GUI, and you can interface with it quite easily. It's not multithreaded, so you won't benefit from multiple processors, but I found it pretty scalable (something like stacking 50000 sine waves - just make sure you have plenty of RAM; I once brought an SGI box to its knees doing that ;)
  • From what I have seen of Trinity's station, the signal gets passed serially through effects processors. So as process one completes chunk A, it passes chunk A to process two, while process one starts working on chunk B. This is all accomplished on a special 10MB backbone.

  • by leroy152 ( 260029 ) on Thursday December 21, 2000 @12:17PM (#1410946) Homepage
    ..for distributed computing.

    Where you could use any type of OS and send a work order off to a clustered network for processing.

    Of course, for this to become reality, it would require software manufacturers' support. Which is where things fall apart - not because of ignorance, but because it just wouldn't be cost effective (why spend extra money on a feature few users would be able to use?).

    This then presents another problem: in-house development by individual companies based on need, with no prospect of that work ever being distributed. It's cool that some companies would be willing to sacrifice resources to develop a distributed solution for a product they need, but there's no guarantee that they'll release it to the public afterwards.

    Cheers,

    leroy.
