
VIA Releases 800 Pages of Documentation For Linux

billybob2 writes "VIA has published three programming guides that total 800 pages in length and cover their PadLock, CX700, and VX800/820 technologies. The VIA PadLock provides a random number generator, an advanced cryptography engine, and RSA algorithm computations. The VX800 chipset was VIA's first Integrated Graphics Processor, while the CX700 is a System Media Processor designed for the mobile market. This is another step in VIA's strategy to support the development of Free and Open Source drivers under Linux, which comes pre-installed on VIA products such as the Sylvania NetBook, HP Mini-Note, 15.4" gBook, gPC, CloudBook, Zonbu, and VIA OpenBook. Earlier this week, VIA hired Linux kernel developer and GPL-Violations.org founder Harald Welte to be VIA's liaison to the Open Source community."

  • by Anonymous Coward

    Via (=Way) to go! :)

  • by Anonymous Coward

    Guh?

  • by Anonymous Coward on Sunday July 27, 2008 @07:31AM (#24357059)

    VIA has published three programming guides that total 800 pages in length

    How many pages in width?

    • by Drinking Bleach ( 975757 ) on Sunday July 27, 2008 @07:34AM (#24357075)

      700, with a depth of 300.

      • And increasing with time every passing second! But see, if we're just talking about the size of it, then Wikipedia beats it in all respects: It's larger, older, and gets larger as it gets older (at least so far). Let's get to the dimension of quality. Then we can really measure the colossality of the thing.
      • Actually, being PDFs, the width would be 8.75", with the depth being 8800", or, in a standard print, 3.04".
        • by arth1 ( 260657 )

          Nothing shows a company's intent not to have documents read and used more than publishing them in PDF.
          Scrolling through large PDFs is painful, no matter whether you use an open source program or Acrobat Reader, and the CPU fast enough to compensate hasn't been invented yet.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      How many pages in width?

      Only one, certainly.

    • by eddy ( 18759 ) on Sunday July 27, 2008 @09:43AM (#24357861) Homepage Journal

      The only thing disappointing is that we still don't have PadLock[esque] instructions in AMD's and Intel's mainstream CPUs. You need to max out a modern 2-core highly clocked CPU to match a fanless C7 1.2GHz CPU in SHA and AES performance. What the hell is the problem? NIHS?

      XSHA for teh wins already!

      • Re: (Score:1, Flamebait)

        by smallfries ( 601545 )

        A single core on a 3GHz Core2 can match the performance of Padlock. I can't provide a link as the figures are unpublished but it's not particularly hard to work out how.

        Offloading is a good idea for any heavily used operation. Special purpose hardware (like Padlock) is always more efficient than executing a program on general purpose hardware. There is nothing magical about this - overhead has been removed and the execution has been optimised for that specific case.

        The fact that the Core2 can keep up says v

        • by eddy ( 18759 ) on Sunday July 27, 2008 @02:41PM (#24360481) Homepage Journal

          >A single core on a 3GHz Core2 can match the performance of Padlock.

          Which of course, is pathetic, all things considered. It's like hammering in a screw. Yeah sure, with a big enough hammer it'll go in...

          >There is nothing magical about this

          Who mentioned magic? I sure as hell didn't.

          >The fact that the Core2 can keep up says volumes about the poor implementation of the C7.

          Yes, an Intel 3.0GHz Core2 CPU at 100% load keeping up with a ~12W fanless CPU. I can see how you'd consider that a loss for the VIA implementation, if you're on drugs.

          I don't know what the hell the point of your comment was; it seems to be argumentative just for the sake of it. The facts are simple: If Intel and AMD worked together on a cryptographic instruction set, we'd get FANTASTICALLY BETTER performance in these scenarios. We're talking 10-20x the performance of just bruteforcing it, spending CPU time that could be used for something better.

          If you want to argue against that, I suggest you visit the local bar. I believe its name is /dev/null

          • Re: (Score:1, Troll)

            by smallfries ( 601545 )

            Yikes, ranting, raving and selective quoting. You do go for the whole troll, don't you? I'm not surprised you didn't understand my point, as you quote all of my post but the part that explained it:

            Special purpose hardware (like Padlock) is always more efficient than executing a program on general purpose hardware....overhead has been removed and the execution has been optimised for that specific case

            So no, it is not pathetic that a 3GHz general-purpose processor can match the special-purpose extensions on the

            • by Makoss ( 660100 )

              So no, it is not pathetic that a 3GHz general-purpose processor can match the special-purpose extensions on the C7. Given that the achievable speedup is much larger than the ratio in clock speeds (let alone the extra work the Core2 is doing), it shows that the VIA performance is shit.

              If it were true that the Core2 could match the speed of the Padlock AES, adjusted for clock speed, then padlock would be disappointing. Not worthless, as a complete VIA based system including a pair of disks will run on ~30 watts, but disappointing.

              However, that's simply not the case. The best numbers I can find are around 125MB/sec for a 3GHz Core2; you list 250MB/sec and reference "unpublished results". For the sake of argument I'll accept your numbers.

              AES-256-CBC on a 1GHz Via part, an older board th

              • The figures that I found googling were about 45MB/sec for openssl. If it hits 511MB/sec then yes, that is much more impressive, but that is ten times higher than what the top few results in Google suggested. The unpublished figures that I mentioned will be released in a few months: they're not mine, but I can't really spoil another researcher's thunder, as it were. Even ignoring those results, there are published results for Crypto++ that show 20 clock cycles per byte for AES. That's 150MB/sec per core.

                Maybe I'm mi
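For reference, the arithmetic behind that 150MB/sec figure, assuming a 3GHz core and the quoted 20 cycles per byte:

3×10^9 cycles/s ÷ 20 cycles/byte = 1.5×10^8 bytes/s ≈ 150 MB/s (roughly 143 MiB/s)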

          • by 12357bd ( 686909 )

            I suggest you visit the local bar. I believe its name is /dev/null

            Fantastic name for a geek's bar!!....

            See ya at /dev/null! Great!

            Disclaimer!: No!, I! don't! work! for! Yahoo!

        • Re: (Score:2, Informative)

          by Makoss ( 660100 )

          A single core on a 3GHz Core2 can match the performance of Padlock. I can't provide a link as the figures are unpublished but it's not particularly hard to work out how.

          I don't suppose you could provide any numbers along with that claim? Because a non-padlock CPU matching the performance for AES-256 would be really useful sometimes.

          For reference, here are Padlock numbers on a moderate Padlock-equipped CPU:
          cpu family : 6
          model : 10
          model name : VIA Esther processor 1200MHz
          stepping : 9
          cpu MHz : 1197.115
          cache size : 128 KB

          Using "openssl speed":
          type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes
          aes-256-

          • Ah, I was talking about the 2Gb/s claim on Via's page. I would expect that to be 250MB/s of throughput for AES. The overhead of openssl is another matter although I would expect it to be less of a factor on a Core2. The single core of a Core2 should exceed 250MB/s, although not by much.
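For anyone who wants to try PadLock from their own code rather than through "openssl speed", here is a minimal sketch (error checking omitted) that pushes AES-256-CBC through OpenSSL's ENGINE/EVP API. It assumes an OpenSSL build that ships the "padlock" engine; the all-zero key/IV and the demo buffer are placeholders, not real keying material.

/* Build (roughly): gcc -std=c99 padlock_aes.c -lcrypto */
#include <stdio.h>
#include <openssl/engine.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char key[32] = {0}, iv[16] = {0};       /* placeholder key/IV */
    unsigned char in[64]  = "not very secret demo data";
    unsigned char out[64 + 16];                      /* room for CBC padding */
    int outl = 0, finl = 0;

    ENGINE_load_builtin_engines();
    ENGINE *padlock = ENGINE_by_id("padlock");       /* NULL if not built in */
    if (padlock && !ENGINE_init(padlock)) {
        ENGINE_free(padlock);
        padlock = NULL;
    }

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    /* With a NULL engine, OpenSSL quietly falls back to its software AES. */
    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), padlock, key, iv);
    EVP_EncryptUpdate(ctx, out, &outl, in, sizeof in);
    EVP_EncryptFinal_ex(ctx, out + outl, &finl);
    EVP_CIPHER_CTX_free(ctx);

    printf("encrypted %d bytes using %s\n", outl + finl,
           padlock ? "the padlock engine" : "software AES");
    return 0;
}

Wrapping that in a loop over a larger buffer and timing it is an easy way to compare the hardware and software paths on the same box.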

      • Intel *did* introduce AES instructions. In fact, the current development version of GCC has support for them already; the release notes for GCC 4.4 say "Support for Intel AES built-in functions and code generation are available via -maes.".
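A minimal sketch of what those built-ins look like in practice, using the -maes switch mentioned above. The round key here is a made-up placeholder rather than a real expanded AES key schedule, and there is no CPUID check, so it will only run on a CPU that actually has the AES instructions.

/* Build with: gcc -maes -std=c99 aesni_round.c */
#include <stdio.h>
#include <stdint.h>
#include <emmintrin.h>   /* SSE2 load/store/set */
#include <wmmintrin.h>   /* AES-NI intrinsics such as _mm_aesenc_si128 */

int main(void)
{
    uint8_t buf[16] = {0};                       /* all-zero 128-bit block */
    __m128i block = _mm_loadu_si128((const __m128i *)buf);
    __m128i rkey  = _mm_set1_epi32(0x01020304);  /* placeholder round key */

    /* One AES round: ShiftRows, SubBytes, MixColumns, then XOR round key. */
    block = _mm_aesenc_si128(block, rkey);
    _mm_storeu_si128((__m128i *)buf, block);

    for (int i = 0; i < 16; i++)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}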

  • font? (Score:1, Insightful)

    by B5_geek ( 638928 )

    I hope they used a very tiny font!

    I want to love Via, but they keep disappointing me.

    • Re:font? (Score:5, Insightful)

      by bloodninja ( 1291306 ) on Sunday July 27, 2008 @07:51AM (#24357133)

      I hope they used a very tiny font!

      I want to love Via, but they keep disappointing me.

      ATI started off with 800 pages as well. They kept adding to it, to the point where ATI graphics chipsets are almost as well supported as Intel's, and they even have budding 3D support in the free drivers. I have faith in VIA.

    • by Ilgaz ( 86384 )

      Disappointed about what? Do you expect a "10 PRINT HELLO WORLD"-like thing in the age of 2008?

      Chipsets are way too complex; they will indeed have 800-page documentation. People developing that kind of deep stuff actually use all those 800 pages of documents.

    • Re:font? (Score:4, Funny)

      by Asztal_ ( 914605 ) on Sunday July 27, 2008 @09:11AM (#24357615)
      They should have asked the OOXML people to help out.
    • Re:font? (Score:5, Insightful)

      by Jason Earl ( 1894 ) on Sunday July 27, 2008 @04:44PM (#24361341) Homepage Journal

      Personally, I would say that hiring Harald Welte is a better indication of VIA's intentions than the release of documentation. Nobody in their right mind is going to hire the owner of GPL-Violations.org [gpl-violations.org] unless they are absolutely serious about Free Software.

      Welte eats vendors for breakfast. Hiring him grants VIA instant credibility. If VIA drops the ball it is very likely to get crucified. Unless the executives at VIA have the intellect of fence posts this indicates a sea change for Free Software support from VIA.

      • by Nutria ( 679911 )

        Welte eats vendors for breakfast. Hiring him grants VIA instant credibility.

        It also distracts him from GPL-Violations.org.

  • by walshy007 ( 906710 ) on Sunday July 27, 2008 @07:58AM (#24357175)

    Makes you wonder, what with Intel, VIA and AMD/ATI opening their documentation etc., if it will get to the point in the near future where Nvidia will be the only binary blob with regard to video drivers.

    Come to think of it, this trend is similar to what happened with WiFi a few years back. Everyone was using binary blobs, then Atheros, Ralink etc. released specs and OSS drivers. Let us hope this pressures the remaining vendors to do the same.

    • Re: (Score:3, Informative)

      by DrSkwid ( 118965 )

      and also the only one with fully accelerated 3d

      • Intel has fully accelerated 3D for the features it supports. They may only have low-end cards, which don't have support for more advanced features, but I'd like some assurance that my video card works now, and can continue to be supported 10 years from now, when it's not hot stuff and the manufacturer has decided they don't care about a product they no longer produce.
        • by DrSkwid ( 118965 )

          Yeah, I'm so glad I can't use my $500 video card today, because it's much better to be able to use my $0.50 card in 10 years' time!

          Piss off you idiot.

          I've still got my ZX81 (with 16K RAM pack), BBC Micro, (the 386 & 486s I ditched), a P54C, a Pentium 75, a Pentium 150, a Pentium Pro Proliant 800 (I cba to find out what speed - probably 200MHz), another Proliant a bit later, 500MHz I think, then it's on to the Pentium IIIs - 10 all the same, 700MHz-ish (from a skip), an IBM E-Server dual PIII 733MHz (that ru

    • by Ilgaz ( 86384 ) on Sunday July 27, 2008 @09:03AM (#24357537) Homepage

      They need serious competition from ATI and Linux fans choosing ATI because of document availability.

      Same goes for Via too.

      The pressure can only be applied via the free market and people's reasons for choosing a product. Let's say a huge customer, like a country's army, chooses ATI for their computers over Nvidia just because ATI is documented. I tell you, count the days (not weeks!) before Nvidia makes a similar move. Just watch the governments after VIA's documentation release; the salesman will have a really hard-to-beat argument: "It is open!"

      Do the security agencies and armies still buy Nvidia while choosing Linux/BSD because the source is open? It really makes no sense to have a binary thing running in a supposedly open and secure OS.

      • Honestly, I don't think that most Linux users will be picking VIA. VIA is known for making really cheap hardware, but it often doesn't last as long or run as fast as Intel/AMD chips. Yes, the gPC and others use it, but for the average person who walks into a store and buys a computer, it will most likely have an AMD or Intel chipset.

        That said, I think it is great that VIA is opening up docs, and I can't wait for nVidia to do the same. Compiz cubes for everyone!
        • Re: (Score:3, Interesting)

          by Ilgaz ( 86384 )

          Now that it is open, some very advanced developer may pick it up and make that cheap integrated graphics hardware the fastest-performing integrated graphics in its class. They are very much tied to the software/driver, you know.

          Such things have happened; I heard some people fixed Creative's advanced sound drivers to work fine in Vista.
          I get the impression that even big-name guys like Nvidia and ATI aren't performing the way they should because of drivers. I notice it especially on OS X/Leopard. 70% of bad f

        • Re: (Score:3, Insightful)

          by Yfrwlf ( 998822 )
          Well, mobile and affordable/portable chips are a really big thing right now. Sure, users wanting powerful systems may not choose VIA, but the market for small and portable computers is quite large. ^^
          • Re: (Score:2, Informative)

            by Jorophose ( 1062218 )

            Not only that, VIA made a smart move with its Nano design.

            Realising that its C7 design, and by extension the Atom design, are not what people want, they moved towards an ultra-low voltage Core Solo design for their ITX motherboards. The Nano is strong enough to run Crysis with an 8600GT, all on miniITX hardware.

            Not to mention the VX800 can output at 1900x1200, and can do hardware playback of MPEG-2, MPEG-4, H.264 & ASF, along with a few other formats. This is all going to be documented for Linux. So, HTPC b

        • Many of the netbooks coming out have VIA chipsets. I haven't picked one up to date because I knew the hardware was not particularly well supported. I've bought VIA in the past, and been disappointed.

          VIA chipsets would be great for thin clients and X terminals as well, but once again, VIA's support has historically been sub-par. I'm actually a little surprised VIA hasn't done this earlier. VIA competes very well on the low-power and low-cost end of the scale, and Linux is fairly important in this marke

      • I'm sorry, but I'm not picking ATI just yet. I want to see results, not just "useless" specs, as I will NOT be writing a video driver just because there are some specs for it.

        Nvidia cards just work with any modern kernel. And they will work for the conceivable future.

        ATI cards circa 2000 had some dev(s) working on them and ATI was co-operating, or so was the word. That was when I chose the ATI 7200 card with 64MB, VIVO, etc. And guess what? The drivers sucked and continued to suck. Then ATI discontinued Linux suppo

        • "When was the last time you installed from source, AND reviewed all the code/changes AND compiled with compiler you trusted? Was it done on a 100% secure machine without any hardware trojans? If you haven't done all that, you may as well be running that binary blob. There is no difference."

          At some point you have to trust someone. Even if you review all the source, how do you know the compiler you're using isn't compromised? However, why trust many different sources with varying goals when you can trust one sou

      • I hope the Linux people will buy ATI/AMD; otherwise they will die soon, very soon.

  • by slimjim8094 ( 941042 ) on Sunday July 27, 2008 @08:06AM (#24357211)

    It's just a coincidence they came one after another, but I think companies are going to quickly realize that there's no benefit to keeping things locked up.

    Suddenly they won't need to pay to write drivers, just release the documentation to write them (of course, it would be nice if they gave us a base). The OSS community will make the drivers more stable, cleaner, and faster. We will use the drivers for things they didn't imagine. All of this will save them money and sell their hardware (features added for free? added incentive to buy my stuff? sign me up!)

    I think we may have reached critical mass, at least on the driver side.

    • by Paradigm_Complex ( 968558 ) on Sunday July 27, 2008 @11:02AM (#24358571)
      No, sadly we're nowhere near critical mass. Not yet. There are three main problems:

      (1) Companies lose control if they open source their drivers. Examples: Dell recently killed certain features from their sound drivers, and a ways back Creative was upset at someone who hacked up features into their Vista drivers which were purposefully absent (but present on their XP drivers). Both Dell and Creative "lost" here even with closed source drivers - they'd have never stood a chance to screw over their customers if the drivers were open.

      (2) Many companies, which should focus on hardware, still worry about others stealing their technology if they open source their drivers. Nvidia is the biggest example here.

      (3) Management people are stupid and can't seem to comprehend how giving away this information can benefit them.

      Slowly things are going in the right direction, but it'll be quite a while yet. For the time being the F/OSS community will just have to remain in the weird flux of having some things work better than their closed-source counterparts (rt2570 works sooo much better on Linux), while some things are worse (x264 acceleration).
      • Re: (Score:3, Informative)

        by Kjella ( 173770 )

        a ways back Creative was upset at someone who hacked up features into their Vista drivers which were purposefully absent (but present on their XP drivers). (...) Creative "lost" here even with closed source drivers - they'd have never stood a chance to screw over their customers if the drivers were open.

        Creative licensed certain features for XP that they didn't want to pay for in Vista. It wasn't that Creative was trying to force customers to buy more expensive cards; it was that Creative itself would have had to pay an obscene sum to a third party for Vista support. Not getting permanent rights sounds like short-term cost saving on Creative's part, but whoever cashed his bonus back in 2001 probably doesn't care. Whoever owned the rights probably knew they had Creative by the balls and got too greedy, so they

        • Perhaps accusing them of trying to screw their customers with that was overly harsh, I'll admit. Still, it stands as an example of where a company would lose control if they open sourced their drivers, evil or otherwise.
          • by MrZaius ( 321037 )

            I'm sure they couldn't have possibly open sourced that section. Like the man said, it was proprietary code implementing someone else's (presumably patented, or else why bother) feature. It's all but certainly just like the licensed code that had to be replaced when the id games were open sourced.

            That said, though, you were right to say that they screwed the customer. To pull an advertised feature for cards already sold because you're too cheap to pay up for a licensing fee or to pay to have it recoded in a

    • by Yfrwlf ( 998822 )
      I'd really like to see some standardized APIs created for more devices, so that you only need one driver per hardware class. Imagine if you could just remove the whole proprietary problem with devices, so that drivers were simple and new hardware was much easier to support. Perhaps some completely new interface standards could really help, just like how FireWire made media drivers obsolete. Any new features can simply be extended to the FireWire driver and that one driver updated to take advantage of ne
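A rough sketch of how that "one driver per hardware class" idea could look in C. Every name here is hypothetical, invented purely for illustration; this is not an existing kernel interface.

#include <stdio.h>

/* One operations table per hardware class; a vendor driver only fills it in. */
struct fb_class_ops {
    int  (*set_mode)(void *dev, int width, int height, int bpp);
    void (*power_off)(void *dev);
};

struct fb_device {
    const struct fb_class_ops *ops;   /* class-level interface */
    void *vendor_private;             /* vendor-specific state */
};

/* A trivial stand-in "vendor" implementation. */
static int dummy_set_mode(void *dev, int w, int h, int bpp)
{
    (void)dev;
    printf("dummy driver: mode set to %dx%d@%dbpp\n", w, h, bpp);
    return 0;
}
static void dummy_power_off(void *dev) { (void)dev; }

static const struct fb_class_ops dummy_ops = {
    .set_mode  = dummy_set_mode,
    .power_off = dummy_power_off,
};

int main(void)
{
    struct fb_device fb = { .ops = &dummy_ops, .vendor_private = NULL };
    /* OS-side code only ever talks to the class interface: */
    fb.ops->set_mode(fb.vendor_private, 1024, 768, 32);
    fb.ops->power_off(fb.vendor_private);
    return 0;
}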
        • by Yfrwlf ( 998822 )
          I'm in love. Thank you. ^^

          That's really great that some developers take modular and API-structured systems to heart. The division of work is a great feature that allows for much easier development, among other things. I hope the Linux kernel also becomes more modular and flexible in the future, and all software really. I think due to competition you'll see this kind of thing more and more, because the huge unmodular software stacks will become more ignored because they are more difficult and less
    • I see that one of the chips in question is for a random number generator. Providing documentation/specs on how this chip runs, to make it possible to write free drivers, is not the same as having the actual source code for the chip. With any other type of chip this would be well and good, but with random number generators, you can't really test them, and will need to rely on examination of the source code to prove that it works. Even then, it would not be that easy -- see the Underhanded C Contest [xcott.com]

      • You currently do have a choice. But the VIA RNG seems to work well.

      • random number generator, an advanced cryptography engine, and RSA algorithm computations

        Don't worry, thanks to Debian, Linux doesn't actually need these things; documentation is optional too.
      • Re: (Score:3, Informative)

        by Kz ( 4332 )

        According to them, it's quantum-effect based:

        http://www.via.com.tw/en/initiatives/padlock/hardware.jsp [via.com.tw]

        In short, it's a set of free-running oscillators, where the exact frequency of each is affected by thermal noise. The instabilities generate an easy-to-detect "beating", which is turned into bits and accumulated in hardware registers.

        There's very little 'source code for the chip' to read and validate, but there are several tools to statistically verify random distributions.

        (This one looks nice: http://csrc.nist.gov [nist.gov])
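Short of running the full NIST suite, a quick sanity check is a simple monobit frequency test over raw RNG output. The sketch below assumes the kernel's hw_random framework exposes the PadLock RNG at /dev/hwrng (older kernels used /dev/hw_random); the 1 MiB sample size is arbitrary.

/* Build with: gcc -std=c99 rngcheck.c */
#include <stdio.h>

int main(void)
{
    const char *path = "/dev/hwrng";             /* assumed device node */
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    unsigned char buf[4096];
    unsigned long ones = 0, bits = 0;
    size_t n;

    while (bits < 8UL * 1024 * 1024 && (n = fread(buf, 1, sizeof buf, f)) > 0) {
        for (size_t i = 0; i < n; i++)
            for (int b = 0; b < 8; b++)
                ones += (buf[i] >> b) & 1u;
        bits += 8 * n;
    }
    fclose(f);

    /* A healthy RNG should give a fraction of one-bits very close to 0.5. */
    printf("%lu bits read, fraction of ones = %.5f\n",
           bits, bits ? (double)ones / bits : 0.0);
    return 0;
}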

    • Your network driver, or even an encryption chip - well, that's not quite the same as a 3D driver.

      Only the ignorant will say that a company will not need to write their own OSS drivers for 3D cards just because they've released the specs. It doesn't work like that.

    • Not critical mass, no. The waters are just starting to rush a little faster now. The flood is coming to save you, Nvidia... Just don't be building up any more dikes while it's on its way.
  • by Anonymous Coward on Sunday July 27, 2008 @08:15AM (#24357253)

    Next step: Release the documentation for the display adapters please.

    The open source drivers mostly can't handle the MPEG-2/MPEG-4 acceleration, and without that the Epias collapse when you try to watch some higher-resolution video. That makes them quite unsuitable for living-room usage, which is a shame because they could make excellent HTPCs. With better drivers the better Epia boards could handle HD video just fine.

    • Amen!
      They may even experience a small sales boost when all the Linux enthusiasts and integrators can finally build their HTPCs around VIA boards.

    • Re: (Score:3, Interesting)

      by billybob2 ( 755512 )
      Even with the released documentation, we also need a good leader like Harald Welte to bring together the OpenChrome [openchrome.org] and UniChrome [sourceforge.net] developers to work on the same codebase. Right now the split effort is really wasteful.
    • by Two9A ( 866100 )

      Most definitely. I've just spent two days grappling with the Unichrome driver for X, trying to get it to play video. Of course, the first time I try to play a video file, X crashes and takes IRQ #11 with it, taking the Ethernet chip offline.

      So, I'd welcome better support for the CLE266, personally ;)

    • Re: (Score:3, Insightful)

      by Yfrwlf ( 998822 )
      I'd most like to see acceleration for the open source codecs like Vorbis, Snow, Dirac, and others, but MPEG is better than nothing.
    • Amen!! When I saw how much power I had to draw to do CPU decoding of HD and STILL have stuttering, I retired it in favor of having more time to spend with my newborn. That was 18 months ago. I was sure that the scene would be better by now, but it is not. It's very sad.

      When I saw the title of this article, I thought this was the breakthrough I've been waiting for... so sad.

      While you are at it... STOP PUTTING VGA ON YOUR MOBOs!!!! --> See: http://www.bronosky.com/?p=54 [bronosky.com]

  • by owlstead ( 636356 ) on Sunday July 27, 2008 @08:27AM (#24357315)

    I've been trying to run Ubuntu on a VIA Epia for some time now, but their graphics solution is as unstable as hell. There is either the binary driver from VIA itself, or the OSS one, but neither is quite what you would expect. Now the question for me is: will it also affect the CN400 chipset (and especially the graphics driver)? Because 5 minutes of average uptime before the machine freezes is not workable. I do think the UniChrome Pro support packages are most important for VIA; the rest already seems to work pretty well.

    It seems that each time a company is on the ropes, they pledge OSS support. It would be a good idea for companies to do something when they are not on the brink of extinction. VIA is in a tight spot. They're moving out of the chipset business, and since the eye of Intel is currently on the mobile CPU/chipset business, they can expect the Nazgul to come riding in pretty soon (I don't know too many Old Testament stories, which seem more appropriate for VIA).

    • by thsths ( 31372 )

      > I've been trying to run Ubuntu on a VIA Epia for some time now, but their graphics solution is as unstable as hell.

      Yep, same here. The rest of the board works quite well, but I had endless trouble with the built-in graphics. With Ubuntu 8.04 it seems to work OK now, but the picture is still a bit fuzzy.

    • Both of my newer VIA (C7) boards are being used as mini-servers right now, so I don't have much use for any graphics drivers, but prior to that I found that my Epia M and other UniChrome-based boards worked fine with the VIA-provided, and later in-kernel, drivers. They worked nicely for watching movies, some 3D games, etc., with no crashes. This was on a Debian-based system, but given that Ubuntu is Debian-based and the kernel is cross-distro, I'd imagine they should be similar.

    • I'm running a CN400 on Debian etch. I had the same problems with Ubuntu, but then I switched to plain old Debian + the latest openchrome driver, and the problem pretty much went away. I do still get the occasional freeze during rewinding in MythTV, but my average uptime is on the order of three or four months. Have you got the latest SVN checkout of the openchrome library?

      This tutorial should work on ubuntu too:
      http://wiki.openchrome.org/tikiwiki/tiki-index.php?page=Compiling+the+source+code+on+Debian [openchrome.org]

      My mai

      • Re: (Score:2, Informative)

        by guzzloid ( 597721 )

        P.S. Also worth checking your X config: here are my VIA video settings (tailored for TV-out)...

        Section "Device"
        Identifier "VIA Unichrome Pro II"

        Driver "via"
        Option "ActiveDevice" "TV"
        Option "TVType" "PAL"
        Option "TVOutput" "S-Video"
        Option "TVDeflicker" "0"
        #Option

  • ...for *Linux* (Score:1, Insightful)

    by Anonymous Coward

    So, all other non-Linux-ish systems are out of luck?

    How does one write hardware specs which are only usable for Linux?

    I'm mighty confused.

    • by BhaKi ( 1316335 ) on Sunday July 27, 2008 @09:45AM (#24357877)
      There is no such thing as OS-specific hardware documentation. The released documentation enables all interested OS-writers/driver-writers to write compatible software.
      • There is no such thing as OS-specific hardware documentation.

        For your next intellectual exercise, please define Winmodem [wikipedia.org] for the class...

        • by BhaKi ( 1316335 )
          A Winmodem is a hardware-plus-software suite for M$-windows whose modem functionality is done in software; the hardware part is just an ADC/DAC converter.
          • A Winmodem is a hardware-plus-software suite for M$-windows...

            ...and what would you call the documentation for this hardware, hardware that appears to be {in conjunction with software} designed for only one OS?

            • by BhaKi ( 1316335 )
              Although the hardware-plus-software suite is Windows-specific, the hardware itself can be controlled by any OS (given the required documentation) to the same extent as Windows. So the hardware, and hence its documentation, is not inherently Windows-specific. But yeah, I do agree that it's theoretically _possible_ to write hardware documentation in a very Windows-specific way although I can't imagine anyone who would willingly do that.
            • by Dahan ( 130247 )
              I'm not sure what it would even mean for the hardware to be designed for only one OS. Can you give an example of how a Winmodem is designed only for Windows? In any case, seeing that Winmodems work fine in Linux [linmodems.org], a non-Windows OS, whether they're "designed for Windows" or not is irrelevant.
  • I've been using VIA boards for a while in lower-power (as in watts) webservers and media machines. In terms of power return on low-consumption machines, they rock.

    One thing I've wondered is why the newer "lower power" rigs are using the Atom processor, which from what I can tell from the stats is inferior to - say - the C7 in terms of CPU power per watt.

    • The Atom delivers MORE performance/watt than Via's solutions.

      Atom destroys the C7 in performance/watt. The C7 has a relatively poor memory architecture and has piss-poor SSE performance in comparison to the Atom. The C7 also cannot match the low consumption of the Atom at a similar clock speed.

      If you really think the C7 has good performance/watt, just see how a DESKTOP Intel Celeron [mini-box.com] keeps pace with the C7 in terms of power consumption. The power consumption is within a few watts, and the Intel chip deliv

      • Do you have numbers that account for the Intel NB and SB power consumption? When you include those numbers, I think that the Nano and Atom will be close for system-level performance/watt.

        • You are of course referring to the D945GCLF with the desktop Atom and 945GC.

          Intel is practically dumping the 945GC because it's built in an old 130nm fab, a fab Intel finds no monetary reason to upgrade. The desktop Atom chip is much more frugal, with only a 4W TDP, so it is just the chipset holding it back.

          The intent of the D945GCLF is not to be an ultra-low-power board, but to be a CHEAP board to feed the $100-150 PC market, and find a use for old fab tech. There are much more efficient bridge chips ava
