Linux On Alpha To Power Streaming Media Boxes

This CNET article, found at the new and itchin'-for-eyeballs AlphaNews site run by the AlphaLinux folks, says that Alpha Processor International (API) will be producing specialty boxes delivering streaming audio and video starting this Fall. "Alpha Processor's new machine will consist of a collection of smaller, two-processor Alpha computers, each running its own copy of Linux but the whole system acting essentially as a single large server," says the article, pointing out too that there already are Alpha-based dedicated streaming boxes from Network Appliances. Anything that increases Alpha production seems good to me, if it drives down the price of being 64-bit. Wonder what distro all those CPUs will be running ...
  • Because it draws stereotypes from the author's limited personal encounters and tries to make it sound like journalism?

    I thought that's what journalism was. Well, you've left out "hypocrisy". But that's covered by opening your article saying you were at a sex party, then moaning about other people's unconventional sexual morals towards the end.
  • I run shout with icecast, with pre-encoded mp3s. I wrote a small perl script to re-encode mp3s specifically for this.

    it's at my page www.bigfreakinserver.com [bigfreakinserver.com]

    -Jon

  • RAM will have been replaced before we are using 256G of it.
  • As my MP3 archive is 13 gig, I'd rather not almost double it by keeping 24kbit and 64kbit versions around as well.

    What I'm doing is using a perl script to re-encode on the fly and broadcast it, sort of like liveice, except mine supports multiple bitstreams (I assume it's like liveice; I've never been able to get liveice to work correctly).

    I do like the live365mon however, playing with it right now. Of course, it thinks my server needs to be reset when it doesn't, but I think that's because I'm timing out to cgi.live365.com right now. :)

    You can check my perl script to do the on-the-fly thing at defiance.dyndns.org [dyndns.org]. It's not as small tho'. :)

    Of course, we're veering off topic a tad. Oh well.

    :wq!

  • Surprise, surprise! Linux also has support for 64-bit file offsets. You failed to quote this comment I made though: "I know that there are ways around this with some filesystems, but there is a speed penalty when you get away from native cpu data sizes for variables."

    I have indeed run into this speed difference myself when I wrote some code recently to assist data recovery on large (20gig+) hard drives. I ended up with a much simpler (and faster) program by doing my own buffering and using only relative 32-bit lseeks and block reads instead of the 64bit llseek offset functions. Since I wasn't trying to read a filesystem, but instead rebuild data chains and recover information, 32bit-ness wasn't a problem aside from speed (this process did involve *much* seeking). Granted, I didn't use FreeBSD, which may not have had a speed penalty (and my speed differences could have been entirely caused by my crappy code ;o} ). Bottom line here is that for *my* application, 64bit-ness would have helped quite a bit in terms of getting the program written quickly and simply.
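
    Roughly, the idea looks like this (a minimal sketch of the approach described above, not the actual recovery code; the device name and chunk sizes are made up):

    /* Scan a large raw device using only relative 32-bit lseek()s and
       block reads, doing our own buffering, instead of computing
       absolute 64-bit positions with llseek(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    #define BLK 512                            /* assumed sector size */

    int main(void)
    {
        int fd = open("/dev/hdb", O_RDONLY);   /* hypothetical drive */
        if (fd < 0) { perror("open"); return 1; }

        unsigned char buf[64 * BLK];           /* our own read buffer */
        long skip = 1024L * BLK;               /* relative jump, fits in 32 bits */
        ssize_t n;

        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* ... scan buf for data-chain signatures here ... */
            /* seek forward relative to the current position, so the
               program never has to hold an absolute 64-bit offset */
            if (lseek(fd, skip, SEEK_CUR) == (off_t)-1)
                break;
        }
        close(fd);
        return 0;
    }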

    BTW, Linux 2.2.xx supports 64bit llseek, but has a 32bit vfs. The 2.3.99 series, and more importantly the 2.4 release series, has a 64bit llseek and a 64bit vfs. Go here [deja.com] for a small, current discussion thread.
  • Oh. I was trying to bring up the idea of a binary compiled for 32bit operations being run on a 64bit CPU. My point was that it isn't aware of the new registers and instruction sets. For example, when Intel releases its version of a 64bit CPU, Linux will undoubtedly be updated for the new architecture, and someone will probably build a compatibility library to run binaries from previous versions of Linux that are limited to their 32bit methodology. I have a DOS program that was built for a 286 CPU, which is 16bit and has the typical 80x86 instructions plus a few 80286 instructions. It is a 16bit program that uses 80286 instructions, but it runs on an 80486, which is a 32bit CPU. I mean that it runs correctly because it is all designed to be downward compatible, but this 16bit program isn't optimized to take advantage of the new EAX, EBX, etc. registers. Sure it'll run faster, but it still isn't optimized for the CPU it is running on. I compare this with commercial programs that try to remain platform independent, where the developers may choose not to use platform-specific operations that would increase performance on a given platform. Oh well... :o)
  • Even though you may not need 64-bit addressing for only one gigabyte of physical RAM, it still helps. With a 32-bit virtual address space, all of the following would need to be mapped at some point in time: the virtual memory of the process, the physical RAM in the system, and I/O space. If they can all be mapped simultaneously, that will provide a speedup as mappings do not have to be changed nearly as often. Suppose a process is using around 1GB of virtual memory on a machine with 1GB of physical RAM. That's half the address space already. Now add, say, a 2GB memory mapped file. Oops, the address space is all gone. Now, suppose this program is multithreaded, with very unpredictable stack sizes. Wouldn't it be nice to have lots of virtual address space to allocate to each thread's stack just in case it needs it? With a virtual address space substantially larger than 4GB, this is no longer an issue.
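
    To make the squeeze concrete, here is a small sketch (the file name and size are hypothetical): on a 32-bit platform, mapping a 2GB file into a process that is already using about 1GB of virtual memory has nowhere to fit, while a 64-bit address space absorbs it easily.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        int fd = open("bigdata.bin", O_RDONLY);  /* hypothetical 2GB+ file */
        if (fd < 0) { perror("open"); return 1; }

        size_t len = 2UL * 1024 * 1024 * 1024;   /* 2GB: half a 32-bit space */
        void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            perror("mmap");   /* likely ENOMEM on a 32-bit machine */
        else
            munmap(p, len);
        close(fd);
        return 0;
    }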

    --

  • Hopefully Slackware will make it there soon.
  • ROFL!
    --
  • With 32bit registers, a signed long is what limits file sizes to 2gigs. This is getting to be a bit of a limit for many applications (some of my custom apps and datafiles anyway). I know that there are ways around this with some filesystems, but there is a speed penalty when you get away from native cpu data sizes for variables. That is why a long on i386 is 32bit, and 64bit on 64bit cpus.

    Data can also be handled quite nicely in a 64bit register, and the Alpha architecture has many registers for compilers and programmers to play with, which decreases the need for memory bandwidth by just a bit. I'm still hoping for more mainstream pricing on the Alpha family of processors and motherboards.
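
    The ceiling is visible from the type sizes alone; a trivial check (nothing Alpha-specific assumed here):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("sizeof(long) = %u bytes, LONG_MAX = %ld\n",
               (unsigned)sizeof(long), LONG_MAX);
        /* i386:  4 bytes, LONG_MAX = 2147483647 (the 2-gig offset limit);
           Alpha: 8 bytes, LONG_MAX = 9223372036854775807 (limit gone) */
        return 0;
    }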
  • by X ( 1235 ) <x@xman.org> on Friday May 26, 2000 @08:23PM (#1043968) Homepage Journal
    64-bits is not just about addressing memory.

    64-bit matters if you want to do integer arithmetic with large numbers. For example, factoring large numbers with a 64-bit processor can be 4x as fast as with a 32-bit processor. Additionally, the Alpha and various other ISAs have very cool support for manipulating each of the bytes in a 64-bit register independently. This is of course 2x as fast as doing it with a 32-bit register. ;-)
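
    As an illustration of that byte-level trick, here is a generic C sketch of the idea (not Alpha-specific code): the classic "does this word contain a zero byte?" test touches all eight bytes of a 64-bit value in three ALU operations instead of a loop.

    #include <stdio.h>

    typedef unsigned long long u64;   /* assume a 64-bit integer type */

    /* nonzero iff some byte of v is 0x00 */
    static u64 has_zero_byte(u64 v)
    {
        return (v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL;
    }

    int main(void)
    {
        u64 w = 0x4142004344454647ULL;   /* one byte is zero */
        printf("zero byte present: %s\n", has_zero_byte(w) ? "yes" : "no");
        return 0;
    }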

    A good chunk of what Intel did with MMX was to allow for faster operations on 64-bit integer values. While that didn't make MS Word go faster, you will recall that various multimedia routines (like Photoshop filters) got a pretty big benefit from it.
  • I made that up. I've never owned an alpha machine. But I'm sure if i see one anywhere I'll buy it and add it to the eth3rnet that wishes it was a high speed cluster (=!! I'm actually just posting here to compliment the moderators. For karma's sake have pity on me! ok here we go. N4tAl|e P0r7M4n 4Ev3r!!!!!!
  • by Anonymous Coward
    I'll tell you what's wrong with japenese people. The spelling. Japanese has 2 'L's.
  • I'm not a regular Salon reader, but I've read a few articles linked to from Slashdot. They all have the same structure. The journalist notices a common element among geeks and gives examples. Then they notice that the opposite element is also present and give examples. Now, I'm all for balanced reporting, but I gotta ask: If some members of group Y have property X and others don't, where's the story? How does that differentiate group Y from the rest of society?
    --
  • If anyone's ever run a shoutcast stream with winamp/dsp, you will know that compressing audio to 128kbit in real time is also a major hit.

    I'm not running shoutcast and winamp (icecast and libshout), but yeah, LAME pretty much destroys my machine if I try to push it too hard.

    I've got a 24kbps and a 64kbps stream right now on a celeron 466, and my load average is at about 0.5. If I try to kick the 64kbps stream up to 96, it gets breakups because I peak over 1.0 at times, even though the average stays under 1 (and this is with --fast going to LAME). As it is, I can still compile and do other general system stuff without breaking my stream (hrm... sounds perverted :)

    I hope to alleviate this somewhat by decoding once and encoding each stream off that (right now I'm just doing lame --mp3input -b [bitrate] for each stream), but I haven't gotten to that yet.
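
    For what it's worth, a rough sketch of that decode-once fan-out (the file names, bitrates, and exact LAME flags here are guesses; check lame --help before trusting them):

    #include <stdio.h>

    int main(void)
    {
        /* decode once; -t is assumed to suppress the WAV header so the
           output is raw PCM on stdout */
        FILE *dec = popen("lame --decode -t source.mp3 -", "r");
        /* encode the same PCM at each target bitrate (-r = raw PCM in) */
        FILE *e24 = popen("lame -r -b 24 - stream24.mp3", "w");
        FILE *e64 = popen("lame -r -b 64 - stream64.mp3", "w");
        if (!dec || !e24 || !e64) return 1;

        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, dec)) > 0) {
            fwrite(buf, 1, n, e24);   /* fan the PCM out to both encoders */
            fwrite(buf, 1, n, e64);
        }
        pclose(dec); pclose(e24); pclose(e64);
        return 0;
    }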

    Maybe this weekend. :)

    :wq!

  • I'm not sure what you're talking about but...

    The biggest and easiest optimization to perform on straight ansi C on an alpha is to simply use its 30 (!!) general purpose registers instead of heap storage. Compare this to intel's *6* general purpose registers, most of which are not actually general purpose. From a compiling/optimization standpoint, the number of registers is extremely important, since instructions can be performed on values in registers immediately, while loading from cache/store can be a 1-30 cycle penalty per word. So, don't worry about the 64-bitness (which is also killer, though it takes some program changes) so much as the fact that the Alpha is a vastly superior chip architecture.

    Second, 32 bit applications will migrate to 64 bit just like 16 bit applications did. No doubt the key operating systems will have emulation modes for 32 bit programs, just as Win 95 does for 16 bit. (Alpha NT even came with a program to emulate 32 bit Intel programs!)

    Finally, it's not that hard to write portable C code if you know what you're doing. And as we move to higher-level languages (and I don't mean Java) because of all the hassles of using C/C++, this will be taken care of for us automatically.
  • by marcus ( 1916 )
    The Digital/Compaq compiler for Tru64 Unix, called ccc, is MUCH better and has been released, along with its libraries, for use on Linux boxes. There are still some problems compiling code that was written for gcc, but there are workarounds.

    Read some of the alpha/axp newsgroups for more info.
  • Linux also has support for 64-bit file offsets.

    (Presumably meaning "support for 64-bit file offsets on 32-bit platforms.") And so do the other BSDs, Solaris 2.6 and later, and, I think, Windows NT, and probably other OSes.

    I ended up with a much simpler (and faster) program by doing my own buffering and using only relative 32-bit lseeks and blockreads instead of the llseek 64bit offset functions.

    How much of the "and faster" was due to "doing [your] own buffering" rather than "using only relative 32-bit lseeks"? Many file systems probably turn 64-bit file offsets into 32-bit block numbers and 32-bit offsets within blocks fairly early in the process of doing an I/O operation.
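
    In other words (a toy sketch; the block size and offset are made up), the 64-bit offset gets decomposed almost immediately:

    #include <stdio.h>

    int main(void)
    {
        long long off = 5368709123LL;   /* an offset past 4GB (example) */
        const long blksz = 1024;        /* assumed filesystem block size */
        unsigned long block  = (unsigned long)(off / blksz);
        unsigned long within = (unsigned long)(off % blksz);
        printf("offset %lld -> block %lu, offset in block %lu\n",
               off, block, within);     /* block 5242880, offset 3 */
        return 0;
    }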

  • Linux can serve streaming media just fine AFAIK, but it still can't capture, author, or play it as well as some other systems.

    So does that make Linux a "media OS" or not? I don't really care about the labels, just what it can do.
  • It's great to see Linux taking on more powerful jobs. Watch out, MS.
  • Woo-hoo! I love it when the speed of the internet manifests itself. It took under five minutes and a single e-mail.

    The link listed in the article [alphanews.net] works now.

    Massive props for being on the ball, guys!
    _____________
  • What's the big deal?

    Very few machines actually push the 32bit limits. I have ~20 boxes from Sun running Solaris 2.7 that are running in 32bit mode for one reason or another.

    I understand the push to move infrastructure forward; the day will soon come when 256G of RAM makes sense for something. I can't guess what at the moment, but it will.

    I see no speed benefit at the low (less than 1G RAM) end of things. What's the point, from a business standpoint, right now?

    -j
  • If you want to see Alphanews [alphanews.net], click the link.

    No wonder it's itching for eyeballs. Their site is not available at http://www.alphanews.net as the Slashdot article stated. I presume they advertise that URL, but their second-level domain nameserver doesn't seem to resolve it yet. A simple http://alphanews.net [alphanews.net] works just fine, though.

    -----------------
    Jes' doin' what I can to encourage people to read the links before commenting on them...
    _____________
  • by Anonymous Coward
    If I remember correctly, GCC on Alpha has serious optimization issues... it seems somewhat illogical to use Alpha/Linux.

    That is, unless it's going to be compiled with a high-performance commercial compiler, which I guess they're going to have to do.

    Anyone know if GCC for Alpha has improved much?
  • More Linux on Alpha is good. More exposure for Linux is good (I guess). But what I'm hoping is that these catch on really well, and that the Windows Media streaming format will take a[nother] hit.

    Timothy was wondering what distro they'll be running? My guess is none. A "specialty", single-purpose, single-hardware-setup box doesn't need all of the crap that comes with most Linux distros. Just compile exactly what you need into a new kernel, include only the software and apps you need, and you'll end up with a streamlined system that takes up little disk space, and you won't have to pay the distro overhead.

    And I'm not aware of any distro that has special Alpha optimizations; if any did, wouldn't the code be absorbed into the official kernel, if it were really an improvement?

    But it depends on who's providing software support, right?

    ... I've heard that Red Hat supports Alphas well ... [steps aside as vat of boiling oil is tossed from the Slashdot battlements] ;-)

    But as long as you could install a 'normal' distro on it, this could be very cool, as they'll probably sell for less than the usual Alpha systems.

    Getting offtopic, I'm seeing a Helix GNOME banner-ad on my screen as I write this. Something about "packaged" Gnome 1.2, represented by a stick of butter with the Helix logo on the wrapper. Okay. Is that how they want us to see GNOME, as slippery, yellow, and bad for your health in large quantities?

    LOL!! I just noticed the AlphaLinux logo! The penguin with the shock of fire-red hair? Shit, I want that on a T-shirt! We should name him Lars. He looks like a Lars.

    ---------///----------
    This post is not redundant, please don't moderate it as such. I repeat, this post is not redundant.

  • Will it run QTSS and handle the load adequately?
  • by Anonymous Coward

    With 32bit registers, a signed long is what limits file sizes to 2gigs

    Nonsense. FreeBSD has had 64bit file offsets for years.

    there is a speed penalty when you get away from native cpu data sizes for variables.

    Is ext2fs so blisteringly fast that arithmetic operations get in its way?

  • by Anonymous Coward

    dear penthouse,

    i have always enjoyed "letters to penthouse" but never thought i'd have anything to write about. well, my luck has changed! i couldn't wait to write this out and send it in!

    i am a greatly-respected open source software advocate and literary genius. i am able to cite my one known work, the cathedral and the bazaar, whenever i am interviewed by obscure online 'zines. i am constantly at odds with a free software advocate. i must admit, this professional competitiveness is merely a mask to cover my strong sexual feelings toward this man, who i'll call richie.

    recently, i had been discussing open source sexuality with an author of salon magazine. i realized that my feelings for richie were a natural part of my superior mentality. richie, being a free software advocate, must have the same urges! i couldn't wait to test my theory out!

    i didn't have to wait long! the staff of a popular online news site had planned a love-in which would take place at a rented convention center. i wasn't quite ready for a group encounter, but i knew richie would be there and i thought i could probably figure out a way to get him alone.

    so, there we were in the middle of a giant cluster-fuck. richie and i were the only two not participating, since we are both rather nasty-looking and shy. i sat there looking at richie, without his knowing. suddenly, i felt a huge cramp in my stomach. all of the whiskey and nervousness had gotten to me!

    i ran to the bathroom and sat in the stall. a steady jet of sulfuric fluid shot out of my rectum. suddenly, i heard the door open. i tried to stop the stream, out of embarrassment, but couldn't. i heard a shuffling sound. the door to the stall next to me opened.

    a few moments later, my colon had been emptied. i knelt down on the floor, without even wiping or pulling my pants up, to see who was next door... maybe it was richie!

    suddenly, my stall door flung open! it smacked me in the buttocks and i started to jump up. i was pushed back down. suddenly, i felt a rock hard love-iron shoved into my manhole. i turned around to look... it was richie, completely naked except for a pair of foam chicken feet! he pumped my ass until it burned from the residual diarrhea. i couldn't believe the pain! the joy! the ecstasy!

    suddenly he stopped... NO! he pulled me up by what little hair i have left and shoved his diarrhea stained manhood into my mouth. i licked it clean and stroked it until he climaxed all over my hard nipples. together, we rubbed the ejaculate into my chest-hair.

    next, richie took his infamous flute and shoved it into his rectum. he squatted over my face and told me to blow into the flute while he masturbated me. i quickly climaxed to the note of d-flat radiating from his anus. richie licked the pearly-white goodness from my throbbing manhood. we sat back in the stall, holding each other in our exhaustion. he kissed me deeply and said, "hey baby, wanna go grab a malted?!"

    richie and i have had numerous encounters since then. we shy away from the orgies, because we are into more intimate lovemaking.

    yours truly,
    Erect Schlong Rammer


  • Anyone know if GCC for Alpha has improved much?

    It's improving at a pace that generates code that runs twice as fast every 18 months.
  • If we're lucky, this new application may actually help dispel the notion that "Linux is not a media OS". These streaming media boxes might help propel Linux into the mainstream...

    Or not.
  • I run a NetShow server on a win98 box and it constantly runs at 100 percent processor usage (not to mention crashing all the time). If anyone's ever run a shoutcast stream with winamp/dsp, you will know that compressing audio to 128kbit in real time is also a major hit.

    One reason it uses so many cycles is the sheer amount of data being processed. For instance, one minute of raw CD-quality PCM audio is roughly 10 megs; compressing that much data, even with something as simple as LZW, isn't trivial. Formats like MPEG-4 with audio at high resolutions hand the processor quite a chunk of data.
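
    (The arithmetic, for anyone who wants to check it; this assumes CD-quality PCM at 44.1kHz, 16-bit, stereo:)

    #include <stdio.h>

    int main(void)
    {
        long bytes_per_min = 44100L   /* samples per second */
                           * 2       /* bytes per sample (16-bit) */
                           * 2       /* channels (stereo) */
                           * 60;     /* seconds */
        printf("%ld bytes/min (~%.1f megs)\n",
               bytes_per_min, bytes_per_min / (1024.0 * 1024.0));
        /* prints: 10584000 bytes/min (~10.1 megs) */
        return 0;
    }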

    Another hit is variable-bitrate streamcasts. For instance, NetShow offers the ability to dish out one stream that ranges from 2k a second to 10k a second. To offer variable qualities of data, it has to compress to the different rates in parallel.

    My concern with this Linux machine is that the only viable software for audio+video streaming is Real Networks'. Real, in my opinion, is worse than Microsoft in the "evil" department. They charge ludicrous licensing fees and harbor one of the most obnoxious marketing departments I have ever seen.

    There are Open Source alternatives to Real, but from what I can tell they do not compare in quality or availability.

    -Jon
  • The really interesting thing is that www.alphanews.net is a different page than alphanews.net!
  • I know for sure that no operating system is 100% efficient, and in light of the 64 bit CPUs I ask:

    Besides the operating system's own processes, how will you see a performance increase when *COMMERCIAL* user programs are not aware of the 64 bit CPU registers? The programs simply don't know they are running on a 64 bit CPU. This problem arises when developers stay within ANSI C for easy compatibility and compilation across many operating systems. I don't actually know why this is a problem. Don't the compilers "optimize" the code when it is preprocessed? For example, on two different operating systems on a 32bit CPU, MS VC optimizes its ANSI C differently than gcc on Linux optimizes its 32bit code. I know that the low-level "optimizations" are operating system dependent, but my point is that the code will be written to be platform *independent*, and each operating system's proprietary subroutines will most likely be ignored by the program's creator(s).

    What is the point of 64bit CPUs when data structures and definitions in a programming language are the same on all platforms? If that were all that mattered, a program should run the same on every operating system. Microsoft's Win95/98 is good for game performance simply because all of their daemons running in the background are half-assed and most programs have been optimized for their operating system. Even Loki Software, in Tustin, said that they only had to drop the Microsoft extensions in Quake3 to get it ported to Linux. Of course they did more than that, but what is the result? Quake3 runs about 2% to 10% slower on Linux than on Microsoft Windows. Linux is technically Unix (duhh), and Unix programs are meant to be platform independent; thus they are not 100% optimized, right? I am talking about *COMMERCIAL* programs like WordPerfect, Netscape, Quake 1/2/3, Hexen, HeavyGear, and other high-performance games.

    So, what will 64bit CPUs do for system performance in a 32bit world? I hypothesize that the operating system's daemons will run much faster, leaving more bandwidth for user programs. But should we actually expect a big performance increase in the programs we run? I have a dual Pentium Pro system now, and before I added the second CPU, I thought my system was fast. I read that I should expect an approximate performance increase of 35% from a second CPU. Well, the performance increase on my fresh dual-CPU system is so small that I would compare it with the "enhancement" of 64bit CPUs over 32bit ones. Then again, the 64bit CPUs in this article are of a completely different design than an Intel CPU. Is it even comparable? Excuse me while I enjoy my hot ice.
