USB 2.0 for Linux Coming Soon
itwerx writes "There's an article on MSNBC about USB 2.0 support in Linux. Interesting to see that the open source community is less than a year behind the most powerful software company in the world in supporting it. Does that make us the second most powerful now? :)"
Linux being mentioned on MSNBC (Score:2, Insightful)
Anyway, I'm glad to hear it. I look forward to replacing my USB 1.1 hard drive housing with a USB 2.0 one.
1 year behind? (Score:3, Insightful)
>less than a year behind the most powerful software
>company in the world in supporting it.
1 Year is interesting? Seems like maybe a couple months behind would be interesting.
Re:Linux being mentioned on MSNBC (Score:1)
Microsoft hasn't been bashing Linux so much anymore. It sees important opportunities there. What they really don't like is GPL-like licences.
Anyway, they've been changing their attitude towards Linux and Open Source. I just don't know if it'll get better or worse...
Re:Linux being mentioned on MSNBC (Score:5, Insightful)
> Anyway, they've been changing their attitude towards Linux and Open Source.
That's the biggest BS I've ever heard. Gates and Ballmer still run the company, and they are no more honest now than in the past.
It was just a few months ago that evidence came out showing how Microsoft pressured Dell into dropping support for desktop Linux:
http://www.theregister.co.uk/content/4/24478.html [theregister.co.uk]
If Microsoft is being quieter now, it's because they want something -- something that requires less hostility from Linux developers.
What Microsoft wants right now is for companies and developers to accept
This is consistent with Microsoft's earlier behaviour.
For example, once Microsoft had their polluted J++ version of Java in place, their strategy became the following:
> "At this point its [sic] not good to create MORE noise around our win32 java classes. Instead we should just quietly grow j++ share and assume that people will take advantage of our classes without ever realizing they are building win32-only java apps." http://java.sun.com/lawsuit/051498.unfair.html [sun.com]
Microsoft tried a similar "keep quiet and let everyone lock themselves in" strategy with Bristol's Wind/U (Windows APIs on Unix), which tended to lock Unix applications to Windows servers.
So of course Microsoft would like things to quiet down right now. It's because they've already set the traps that they hope will capture Linux and the Internet.
These traps include:
-
- Palladium
- Windows Media protocols over the Internet
- Palladium support for Apache
- MS Office lock-in on Linux (Crossover)
- ActiveX lock-in on Linux (Crossover)
-
- ActiveX support (lock-in) in Konqueror
- Windows Media lock-in on Linux (mplayer)
- Hardware partnership with AMD (kept API details secret, making Linux unstable)
- Hardware partnership with NVidia (closed source driver tied into Linux kernel)
- Hardware lock-in through NVidia (their new graphics language compiler)
- Attempted government-mandated IP-security-hardware lock-in
Actually, now that I think about it, that last one is a killer. In order for Microsoft to get Congressmen and Senators on their side, it is very important to reduce the political risk, by making Microsoft seem more benign. Thus, if Microsoft can succeed in keeping the Linux supporters quiet, then more government officials will be willing to accept the payoffs, excuse me, campaign contributions that Microsoft has offered, in exchange for selling out the American people. It would be a pretty sweet deal for Microsoft to have a law that requires the use of Microsoft technology in every computing device.
Re:Linux being mentioned on MSNBC (Score:2)
> So of course Microsoft would like things to quiet down right
> now. It's because they've already set the traps that they hope will
> capture Linux and the Internet.
So far, so good.
> These traps include:
>
> -
or IBM, would people be so upset about it? It's going to make web services
easier to implement. Let J2EE have a little competition and we'll all benefit.
> - Palladium
The only credible benefit of Palladium to the consumer is spam blocking.
Digital rights management is usually consumer-hostile and tends to be defeated.
This one has little chance of success; too big brother-ish. Keep your
congress-peops informed of your opinions. Meanwhile, there's a couple of
products out that already do spam blocking in a similar way
(ChoiceMail [digiportal.com], Mail Washer [mailwasher.net]), and more are coming.
> - Windows Media protocols over the Internet
WMP is a pretty good format. Let them pour money into improving this important
technology, and we'll all benefit. Anyway, with crossover I can now run
Windows Media in Linux, which is one less reason to run Windows--how does that
help Microsoft? Remember, the media player is a free download.
> - Palladium support for Apache
As above.
> - MS Office lock-in on Linux (Crossover)
As above--it's a lock-in, yes, but it's an unlocking of the operating system.
You don't need Windows to do "real" Office. However, this is almost a red
herring because Star/Open Office, Abi Word, etc. have gotten so good. Anyway,
the research to improve Crossover/Wine has a great side effect; it makes more
Win32 binaries run properly in Linux.
> - ActiveX lock-in on Linux (Crossover)
Hmm. For online banking it's handy but best is to scream at the bank, as a
customer, and demand platform independent banking or you'll move your accounts
elsewhere. Money talks. However, as above, it's a liberation of the OS.
> -
> - ActiveX support (lock-in) in Konqueror
> - Windows Media lock-in on Linux (mplayer)
Understand your point but this stuff is redundant.
> - Hardware partnership with AMD (kept API details secret, making Linux unstable)
> - Hardware partnership with NVidia (closed source driver tied into Linux kernel)
> - Hardware lock-in through NVidia (their new graphics language compiler)
Don't know anything about these. Tying a BIOS chipset to a particular OS
sounds dangerous and probably unworkable anyway. If it's that specific and
that secret, it'll certainly break something out there. Dongles failed a long
time ago and any attempt to revive them is a waste of time.
> - Attempted government-mandated IP-security-hardware lock-in
Palladium, in other words.
I'm more optimistic than you, though I agree with your concerns. Anyway my
strategy is to keep pushing for Linux wherever I work and certainly in my home
office. But, if someone builds a better widget well, you know it's still a market system; let the best product win.
Terry
Re:Linux being mentioned on MSNBC (Score:2)
- .Net support (lock-in) in Qt
Qt has .Net support?
Re:Linux being mentioned on MSNBC (Score:1, Interesting)
How's this for a conspiracy theory - Bill Gates, being a geek at heart, is secretly a supporter of Linux. Unfortunately, a public endorsement would devalue Microsoft's stock, leaving him liable to lawsuits from Microsoft's shareholders.
Hey, stranger things have happened!
Re:Linux being mentioned on MSNBC (Score:3, Interesting)
Gates' own operating system design was to be UNIX-based. However, he has long since stopped coding and started managing.
You should look at MSNBC's article less as support for open source, or a secret desire to support Linux, than as a desire to become a serious news source.
Microsoft has been trying for years to show that they are serious about the things they decide to pursue.
Messengers, game consoles, ISPs. All of these are places Microsoft didn't have to go and things people didn't expect from a software company. Microsoft is just trying to get away from people thinking "Windows", and nothing else, when they think of Microsoft.
Re:Linux being mentioned on MSNBC (Score:2)
You were almost making sense... It didn't quite work out, though.
I think you really lost credibility when you said "hella" and "IQ over 150" in the same sentence.
--
If you have something worthwhile to say, log in.
Re:Linux being mentioned on MSNBC (Score:4, Informative)
Re:Linux being mentioned on MSNBC (Score:4, Funny)
I'm just imagining this conversation between Stephen Shankland (the author) and his boss.
Boss: "Hi Steve, what did you want to see me about?"
Steve: "Well, um, Fox News offered me 2x what you're paying me, and they have neater graphics, neat DNB music between segues, and Greta Van Susteren is kind of cute."
Boss: "Steve, Steve, Steve, do I have to remind you that you signed a 5-year contract?"
Steve: "I know, boss, I was hoping you'd let me go... (trails off)"
Boss: "Fat chance!!"
Steve: "Fine then, we can do this the hard way!"
Boss: "Yeah, and what is that?"
Steve: "I'll start writing LINUX STORIES!" (just then the office goes dead silent and you hear the gasps and jaws dropping)
Boss: "You just try it, buddy!"
And this is the [speculative] story of how pro-Linux articles appear on MSNBC. Actually, if you read the article, praising Linux for being only a year behind really isn't high praise. Second of all, there was a time when journalists were supposed to have *ethics*, independence, and a responsibility to the truth.... Hopefully someone at MSNBC still thinks like that.
some articles about Linux USB: imply non-existence (Score:2)
Hey, Linux will soon have USB 2.0 support and with all the vendors/devices supporting USB, it's a good thing Linux is eventually getting USB support.
The average Joe is going to think you can't use your USB devices with Linux.
So watch out when you read those articles, because they can sometimes tell another story. The number one marketing company in the world isn't always doing in-your-face marketing. Actually, they very seldom do it that way. In that way, they are very unlike the Borg, who are so arrogant and powerful they just keep coming (in the clear) directly at their prey.
LoB
Shame on whoever modded me down. (Score:2)
Re:Good reporting shows both sides. (Score:2)
Oddly, MSNBC has also published quite a few articles that do not make Microsoft look all that appealing. Unless somebody has some hard evidence as to why MSNBC is biased, like, oh-I-don't-know one single clearly biased story, give them a break. Does everyone think that every honorable journalist was instantly corrupted by Microsoft's aura?
Re:Good reporting shows both sides. (Score:2)
This will help how (Score:2, Insightful)
Re:This will help how (Score:2)
Re:This will help how (Score:1)
Re:This will help how (Score:3, Informative)
Re:This will help how (Score:2)
NetBSD (Score:5, Interesting)
Re:NetBSD (Score:4, Funny)
That'll teach me to post on less than 2 pots of coffee.
No, it doesn't make it number 2 (Score:1)
In fact, this isn't even the first USB 2.0 support, since the article itself mentions the support that was already in the 2.4 -ac kernel. It's only announcing a patch for Linus' 2.4 kernel tree.
Second? (Score:4, Insightful)
No, it makes us a year behind. That isn't necessarily bad given the limited number of USB 2.0 devices to support, but it does show where it rates in the Linux priorities. (As a comparison, consider that Linux supported Itanium very early on - and I've yet to see one in the wild...)
Re:Second? (Score:2)
That has nothing to do with priorities. It has to do with the fact that Intel and HP were throwing money at the problem and loaning out Itanium machines semi-permanently to anyone who could really use one.
Re:Second? (Score:2)
You think Redhat paused for a second to consider priorities when Intel shoved a big pile of cash at them to get Linux working on Itanium?
Re:Second? (Score:1)
Re:Second? (Score:1)
I am not dissin' Linux, merely trying to be a realist.
Re:Second? (Score:5, Insightful)
There has been a stable USB 2.0 patch for well over a year; it has been in the 2.5 kernel since it forked, and it's been in 2.4 for a while, albeit under the "Experimental" heading or waiting for the final 2.4.19 kernel to be released.
Like you mentioned, the biggest problem with adding support for USB 2.0 was the lack of devices. The vast majority of development was done with one USB 2.0 controller and one USB 2.0 device. Both were prerelease versions with a whole slew of bugs to work around.
The reason you see Itanium support being so mature is the priorities of Intel, not of the community. Intel (and HP) sunk a significant amount of money into getting Linux ported to Itanium. Why? Because it's a billion times harder than USB 2.0 support, much more fundamental, and thus important to have supported as early as possible.
Re:Second? (Score:2)
It was not in 2.5 when 2.5 was forked. 2.5.0 was exactly like 2.4.15, which did not include USB 2.0. I don't even believe it was in 2.5.7 or 2.5.8.
USB 2.0 in the 2.4 series has yet to be in an actual release kernel, although it was added in 2.4.19-pre2, which came out back in February.
Next! (Score:2, Insightful)
What devices? (Score:1)
USB is good for keyboards and mice... that's about all.
Re:What devices? (Score:1)
Re:What devices? (Score:1)
Just not from Apple, but from third parties shipping the cards.
http://www.orangemicro.com/OrangeUSBPCI.html
USB 1.1 - Mac OS 8.6, 9.x or newer
USB 2.0 - Mac OS X or newer
"USB 2.0 Hi-Speed support is only available on Mac OS X at this time. When running on Mac OS X systems, USB 2.0 Hi-Speed will have a data transfer rate of up to 480 Mbits/s (Hi-Speed). When running on the Mac OS 8.6 and Mac OS 9.x USB will have data transfer rates of 12Mb/s (Full-Speed) and 1.5Mb/s (Low-Speed) peripherals."
Not necessarily Second (Score:1)
Excellent! (Score:1)
They basically said "USB on Linux is not there yet" but they had obviously looked at the possibility. I hope USB 2.0 will give them what they've been waiting for and in turn give consumers what we've been waiting for -- more bundled software that runs on Linux!
Re:Excellent! (Score:1)
Re:Excellent! (Score:2)
Its supported! (Score:2, Insightful)
Firewire vs USB support in Linux (Score:1)
CNET Story with details. (Score:3, Informative)
wtf? (Score:1, Informative)
Next time you want to say what Linux will support, please do a search on lkml, if you even know what that is.
I don't get it (Score:1)
Re:I don't get it (Score:2)
USB uses copper, so devices built for USB 2.0 will eventually be substantially less expensive than the ones built for Firewire.
Re:I don't get it (Score:2)
Re:I don't get it (Score:2)
Re:I don't get it (Score:2)
Re:I don't get it (Score:3, Informative)
INTEL HAS THE PATENTS ON USB, and they ain't shy about making money on it. And forcing Firewire OUT, and forcing their inferior product IN.
As for complexity, that would not be expensive if the technology could get better economies of scale.
But since Wintel does not want Apple to prosper, and also since Intel was mightily miffed about little Apple taking its USB thunder away when Firewire came out, they have FUDded, lied, blocked, inhibited, you name it, any attempt at getting Firewire into the mainstream.
Firewire is an amazing success story -- an overachiever that actually makes it big despite determined opposition trying to Voldemort it in the crib.
Expensive complexity in chipsets is nonsense. Much more complex circuitry exists for a song -- how much is an LCD desktop screen? A video card? A CPU, jeez! A Duron 1.3 is going for $54! I picked up my Shuttle FV-24 barebone PC with Firewire on the motherboard for $190! There is no reason why Firewire is not on the mobo other than cutthroat "free" marketers making damn sure crud gets sold to nuke the hated competitor.
Re:I don't get it (Score:2)
As for the get a life BS, as we say in the chatting biz,
"If what I say indicates that I should get a life, what does that say about the imbecile who sits around reading what I said?"
F&#k off.
Re:I don't get it (Score:2)
Coming? It's already here (Score:5, Informative)
Flawless.
works fine for me, too (Score:3, Interesting)
Long Device Rant. (Score:4, Informative)
So nice of M$ to draw attention to the mechanism that it keeps splintered. The article phrases the situation as a model for Linux device compatibility, as if there were no other options and Linux development will always be broken and lagging. This is true, if you are talking about chasing M$'s broken tail. CSS has demonstrated that any device can be made impossible to talk to, regardless of technical skill.
My experience with M$ USB has been less than advertised. Windows 2000 has managed to make USB 1 not hot-pluggable, and it manages to screw up one of my cameras' flash card formatting every time I plug it in at work! At home, I tried to print five plain text pages to a USB printer from Win98. I got four pages, five error messages for lack of communications, and one last message about "unknown system errors" requiring a reboot. Sometimes it works, sometimes it don't. That's what happens when you screw around with "standards" too much.
On the other hand, PCMCIA with a compact flash adaptor has worked very well. Compact flash registers itself as a new hard drive, /dev/hde in most cases, and this shows up in /var/log/messages when you plug it in. So long as your camera stores pictures unscrambled, you can get them without any silly interface software or device driver. Mount and copy. Canon S110 works great, SiPix has broken pictures. Yeah, PCMCIA only goes 64 mbps, sigh. Too bad someone out there wants to make sure that:
1. You must use a proprietary driver to talk to your devices. This will enable DRM of the pictures you take - eventually you will have to pay per play to view or print your own pictures. That's progress!
2. That driver will not work forever and you will have to replace your device. Bitrot! More progress. My place of work is filled with old devices that stopped working due to "software upgrades". The vendors recommend, shocker, that we replace the devices.
M$ will never support a "universal" device.
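For the curious, the mount-and-copy workflow described above boils down to a few commands. This is a minimal sketch, assuming the card shows up as /dev/hde1 with a VFAT filesystem (check /var/log/messages on your own machine); the helper function name is made up for illustration:

```shell
#!/bin/sh
# grab_photos SRC DST: copy every .jpg out of a mounted card's directory
# tree. (Function name is mine, not any standard tool; adjust to taste.)
grab_photos() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    # Cameras bury pictures under DCIM/<something>/, so search recursively;
    # -iname also catches the uppercase .JPG names many cameras use.
    find "$src" -iname '*.jpg' -exec cp {} "$dst" \;
}

# Typical session -- the device node is an assumption, check your logs:
#   mount -t vfat /dev/hde1 /mnt/flash
#   grab_photos /mnt/flash "$HOME/pictures"
#   umount /mnt/flash
```

No interface software, no per-camera driver: the card is just another disk.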
Re:Long Device Rant. (Score:2)
I use OS X. I have a digital camera (a Nikon), and I upload the pictures to my Mac with a USB cable and iPhoto. No drivers required.
I have a USB scanner and a USB printer, and with both of them the story is the same: just plug them in.
There's nothing fundamentally wrong with USB. It sounds, from what you said, like you've just had really bad experiences because you chose the wrong OS.
Huh? (Score:5, Informative)
FWIW, I've found USB 2.0 to be not as fast as Firewire for things like hard drives, a conclusion that Windows benchmarks have also shown. So it's not like the delay in releasing 2.4.19 is really hurting anything, especially since there aren't many USB 2.0 devices or ports around anyway.
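For what it's worth, a crude way to reproduce that kind of comparison yourself is to time a large sequential write to a file on each disk. A rough sketch (the function name and the 32 MB default are my own choices; this measures the whole path -- bus, drive, and filesystem -- so write the same amount to both disks):

```shell
#!/bin/sh
# throughput_mbps FILE [MB]: write MB megabytes of zeros to FILE and
# print approximate throughput in megabits per second.
throughput_mbps() {
    target=$1
    mb=${2:-32}
    start=$(date +%s)
    dd if=/dev/zero of="$target" bs=1048576 count="$mb" 2>/dev/null
    sync                          # make sure the data actually hits the disk
    end=$(date +%s)
    secs=$((end - start))
    [ "$secs" -eq 0 ] && secs=1   # avoid dividing by zero on fast disks
    echo $((mb * 8 / secs))
}

# e.g. compare:
#   throughput_mbps /mnt/usb2disk/testfile
#   throughput_mbps /mnt/fwdisk/testfile
```

Second-granularity timing is coarse, so use a file big enough to take several seconds on your hardware.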
Proud? (Score:2, Insightful)
Being a year behind in this industry is not something to be proud of. Rather, this is something to hang our heads about. MSNBC must have loved posting this article. Microsoft is notorious for innovation delays, yet they still kicked our butts by 12 months. If the Linux/OSS community hopes to be competitive in the desktop environment, it needs to stop being satisfied with second best. Granted, these accomplishments are noble in light of the skimpy development finances being poured into OSS, but funds are growing.
Success will come when we start forming hardware protocol standards based on technology that we've accelerated beyond the point where M$ can have much of a say in the standards. People will run Linux on their desktops when it can do really innovative, cool stuff that other closed-source companies have only started circulating memos about developing.
Linux can no longer live off the legacy of its stability. Say what you will, but the NT5 kernel is surprisingly stable, and new versions will likely continue to improve now that M$ home users have been exposed to stable kernels. Linux still has an upper hand in security, but M$ is spending a lot of $ and time on matching us there too. Our frontier needs to be usability, flexibility (open source media formats not restricted by heavy licensing), and innovative feature implementation. This, combined with the cornerstone of extremely low cost, will drive Linux/OSS above and beyond.
Re:Proud? (Score:2)
Oh, what's that? They don't exist?
yeah, exactly
Not to mention "earlier this year" (as in February) is NOT 1 year ago. All you trolls can go away, thank you.
Re:Proud? (Score:2)
Re:Proud? (Score:3, Informative)
Re:Proud? (Score:2)
Why was a new USB architecture even needed? (Score:2, Interesting)
If USB (the interface that hardware presents to core driver software) had been designed well in the first place, then speed would not matter, except for the content of data elements that describe speeds (e.g. a value that says this is running at 12 Mbps or this is running at 480 Mbps, or the argument to a command that says force this to run at such-and-such a speed). Maybe they needed to add speed information and speed control, but that wouldn't be a change that needs a whole new software architecture (that's something that could have been added in an overnight coding session). What you'd get is data being transferred 40 times faster at 480 Mbps.
Without looking at the specs to see, it's rather obvious that the hardware people just redesigned the interface all over again. Can't someone teach those people some things about reusability and refactoring? And USB isn't the only place this happens. Of course you do need to occasionally add something to an interface, so a tweaked driver will be needed to fully take advantage of new hardware ideas. But a whole redesign isn't called for... unless the old design was a POS. But was it the hardware or the software that was a POS? Looks to me like it was the hardware. We'll see when the next speed step occurs. Surely, the Firewire people won't stay 80 Mbps down for long. They'll probably aim for somewhere in the 800 to 1600 range next, I bet (if not already). Will the next generation be compatible while still running at the higher speed?
USB 2.0 is 99% hardware interface changes (Score:5, Informative)
The biggest amount of work was developing the driver for the new EHCI host controller. A new host controller was necessary for the USB wire interface changes to support the faster speeds.
The reason why development took a while for the EHCI controller was because of the lack of USB 2.0 devices. It's hard to test a driver when you have no hardware to test it against.
That being said, the article is VERY misleading. Linux has had USB 2.0 support for well over a year now, since before 2.5 was forked. It's just now being backported to 2.4. Even that's misleading, since it's been in the 2.4.19-pre tree since it was forked months ago.
Re:What was a new USB architecture even needed? (Score:3, Informative)
Well, here are the specs so you don't have to make stuff up:
USB 2.0 [usb.org]
USB 1.0 [usb.org]
The real difference is here:
OHCI (USB 1.0 host controller, this is the better one) [compaq.com]
UHCI (USB 1.0 host controller, the sucky one) [intel.com]
EHCI (USB 2.0 host controller spec, has more smarts like OHCI) [intel.com]
Re:What was a new USB architecture even needed? (Score:2)
So a whole new set of commands, just to be able to go faster?
Re:What was a new USB architecture even needed? (Score:2)
Re:What was a new USB architecture even needed? (Score:2)
So then maybe they made it too low-level to begin with. It should have been a basic message-passing-through-a-buffer kind of thing. I have not looked at the spec, and I think I don't even want to, in order to avoid polluting my mind. But the best approach would have been a higher-level message/packet kind of thing with a few parameters to give status, identify the device, carry device-specific parameters/status, and a chunk/window of data. Things like timing and signals should be handled by the controller.
Part of the trouble is that because vendors can supply Windows drivers, they feel free to change hardware interfaces, and so, this becomes a major problem. There need to be standards in this realm, as well as other places like system calls, libraries, and protocols.
powerful?? USB?? (Score:1, Flamebait)
Re:powerful?? USB?? (Score:1)
Not Apple, Orange Micro (Score:2)
Having said that, one has to commend Apple for the architecture inside OS X. A third-party company wouldn't have been able to create drivers that quickly if OS X never had good plumbing. I guess since it started getting designed around '98, they could see USB/Firewire becoming the standard for external I/O and designed the OS to allow for easy integration of such devices. I once read the docs about the OS X driver architecture and was impressed - many well-thought-out layers of abstraction - but that was a long time ago.
Re:powerful?? USB?? (Score:2)
Re:powerful?? USB?? (Score:2)
http://linux1394.sourceforge.net
The royalty is only on the NAME and use of the logo; that's why in Linux it's ieee1394 with a different logo...
What's this about Virtual LAN cards through USB2? (Score:2)
It sure sounds interesting to have something like that especially if this fabled memory pooling version of Mosix ever shows up.
Re:What's this about Virtual LAN cards through USB (Score:2)
Re:What's this about Virtual LAN cards through USB (Score:2)
I've seen some USB 1.0/Ethernet adaptors as well, so the same thing with 2.0 didn't sound all that far-fetched. It sounds quite intriguing, but I suppose there's quite a few details to be worked out. For one, I don't think I've seen USB 2.0/Ethernet adaptors yet. You'd probably need those to start. Oh wait, the MAC address would be in those. Hmm, I see: you'd need to buy these little adaptors -- once you could find 2.0 versions of them -- and then switch them using a regular fast Ethernet switch. Perhaps it's not as complicated as I thought, nor quite as virtual as I had assumed at first. That doesn't mean it ain't cool though. Those USB 1.0/Ethernet adaptors were right cheap as I recall.
For the wondering ones.... (Score:2, Informative)
USB 2.0 support has been around for a lot longer (Score:2, Informative)
From linux-usb.org:
People have been using USB 2.0 with usb-storage devices from Linux hosts since June 2001, but it was only in early winter of that year (a short while before Linus created the 2.5 development branch) that other USB 2.0 devices (notably, hubs) began to be available. So while some changes for USB 2.0 were rolling into 2.4 kernels through the year 2001, most of the tricky stuff (the ehci-hcd driver) was separate until the 2.5 kernel branched. Recently, some Linux distributions have begun to include this support.
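If you want to check whether your own kernel already has the high-speed driver loaded, grepping the module list for ehci-hcd is a quick test. A small sketch (the function name is mine, and note this won't catch a driver compiled directly into the kernel rather than built as a module):

```shell
#!/bin/sh
# has_ehci [MODULES_FILE]: succeed if the ehci-hcd (USB 2.0 host
# controller) driver appears in the kernel's module list.
# Defaults to /proc/modules; the file argument makes it testable.
has_ehci() {
    grep -q '^ehci[-_]hcd' "${1:-/proc/modules}"
}

# Typical use on a running system:
#   if has_ehci; then echo "USB 2.0 (EHCI) driver loaded"; fi
```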
Nope. (Score:4, Insightful)
No, it just validates Microsoft's FUD that Linux is a bad choice for a desktop OS because of poor hardware support.
Yes, but does MS support USB1.1? :-) (Score:2, Interesting)
He plugs it in, XP crashes. Every time the camera goes in XP goes out the Windows...
My friend remembers me saying that I think Linux can handle this easily and gives me a phone call. I'm away from my desk, so he decides to try on his own: he boots Linux, the camera gets detected automatically, and he grabs the photos easily and sends them to the newspaper.
When I called him there was nothing for me to do but say: "So, Linux saved the day once again."
Gilad.
Being first? (Score:2)
Actually, in software, the first version usually has the most bugs.
Rush, rush, rush. Debug later, sell first.
Has anyone? (Score:2)
Nomenclature (Score:2)
My favorite quote of the article is "SuSE is thinking of providing software that lets customers upgrade to the 2.4.19 kernel..." Last time I installed SuSE, gcc and ftp were part of the standard installation.
What about the "universal" serial standard for USB (Score:2)
For Windows, the manufacturers made some of the drivers available for their serial converters [but now even some of the early ones are no longer supported in XP, like the Entrega/Xircom/Intel ones and the Intel-based MCT ones].
The problem with USB is the need for a separate driver for MANY of the common devices.
Getting everyone together to come out with UNIVERSAL device specs that the manufacturers follow, and an easy way to update the device IDs for the OS, would greatly advance the use of USB on Linux.
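The device IDs in question are already visible today: every USB device announces a vendor/product ID pair, listed in /proc/bus/usb/devices on 2.4 kernels. A small sketch of pulling them out (the function name is mine; it takes a file argument so it can be pointed anywhere):

```shell
#!/bin/sh
# usb_ids [DEVICES_FILE]: print vendor:product ID pairs from a
# /proc/bus/usb/devices-style listing. These IDs are exactly what a
# driver or hotplug script matches on, so a shared, updatable ID table
# is what would map devices to drivers across distributions.
usb_ids() {
    sed -n 's/^P:.*Vendor=\([0-9a-f]*\).*ProdID=\([0-9a-f]*\).*/\1:\2/p' \
        "${1:-/proc/bus/usb/devices}"
}

# Typical use:
#   usb_ids    # prints one "vendor:product" pair per attached device
```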
Re:do you guys think (Score:4, Interesting)
Prototypes and emulators (Score:2, Informative)
How can you have support for a non-existing CPU?!
Just because it hasn't shown up on pricewatch.com yet doesn't mean it doesn't exist. There are prototypes, and before that, there were emulators.
While Microsoft talks, Linux innovates (Score:4, Insightful)
What the heck are you talking about?
Microsoft doesn't make advancements -- the PC hardware developers do.
Microsoft's primary role has been to hold the hardware developers back.
Do you remember, in the early nineties, when we had hardware-based Virtual Machine capabilities on the PC? Remember when, because of virtual memory and multitasking innovations from companies like Qualcomm, we were able to run multiple copies of DOS, DR-DOS, and other OSes, in parallel? What happened? Microsoft wanted users to only be able to run one OS -- DOS/Windows -- on their PCs. Thus, Microsoft tied memory management into Windows, thereby destroying further development of PC VM capabilities.
Do you remember when the 386 came out, with its new memory protection capabilities? Do you remember how many years it took for Microsoft to provide support for those capabilities? Even Windows 95 still wasn't using it correctly. In fact, it was Linux that, while new, provided support for 386 memory protection -- long before Windows.
Do you remember when Microsoft hired a group of VMS developers from Digital to develop a stable version of Windows? Remember when they succeeded with NT 3.51? Remember when Microsoft destroyed that stability by allowing video drivers to run in kernel mode, in NT 4.0? Microsoft's history is riddled with backward steps.
Remember when, in 1990, everyone had a capable GUI, that is, everyone but Microsoft? By the end of the eighties, we had the Macintosh, the Amiga, the Atari ST, and OS/2 and Geoworks for the PC. It wasn't until five years later that Microsoft came out with something even remotely similar, in Windows 95.
Remember when there were simple standards for LANs (SMB), security (Kerberos), printers (PCL), and video (VGA)? Microsoft didn't want open standards, because those might help another OS to compete with Windows. Now, because of Microsoft, we have polluted protocols and complex device drivers, tied closely into Windows. Further development of interface standards for PC hardware has slowed to a crawl.
Remember when Microsoft tried to sabotage the standards for Java and OpenGL? Remember the Halloween document where Microsoft stated their plans to "decommoditize" (i.e. destroy the openness of) Internet protocols? Have you noticed that Microsoft has been carrying through on that threat?
Were you paying attention to how long it took for Microsoft to provide a 64-bit version of Windows? The DEC Alpha version of Windows was a joke, because it was just a 32-bit version of Windows, slightly modified to be able to run on 64-bit hardware. Even now, there is doubt about Microsoft's claim of being 64-bit-ready. Meanwhile, Linux has been running on 64-bit platforms for years.
Have you noticed all of the hardware innovation that has been taking place with Linux? Just in the last few years, we have seen Linux based supercomputers, Linux-based clusters for movie graphics, Linux on IBM mainframes, Linux in car radios, Linux-based store kiosks, Linux-based digital video recorders, and so on. Many of those innovations could have taken place ten years ago, except for one thing -- they were being held back by Microsoft.
If there is one thing that has stood out about Microsoft and Windows, it is their _lack_ of innovation. Linux and Open Source are easily outstripping Windows.
Re:While Microsoft talks, Linux innovates (Score:3, Informative)
> Microsoft doesn't make advancements -- the PC hardware developers do.
Microsoft has never billed itself as an innovator until very recently. Microsoft's strategy was based on low price and high volume. In terms of volume sales, standardization, and low prices, they most certainly have advanced the market, as anyone who was around before their dominance will attest. The biggest area of innovation was the Microsoft, Western Digital, Intel arrangement that led to the IBM PC not incorporating an open standard for hardware, so that after Compaq cloned the IBM BIOS we had a multi-vendor market of compatible PCs. The reason you are running a PC today is because of that "innovation".
> Do you remember, in the early nineties, when we had hardware-based Virtual Machine capabilities on the > PC? Remember when, because of virtual memory and multitasking innovations from companies like
> Qualcomm, we were able to run multiple copies of DOS, DR-DOS, and other OSes, in parallel?
The company was Quarterdeck. You didn't have virtual machines prior to the 386, since the 8088 and 286 didn't offer protected memory. Quarterdeck's 286 task-sharing system (DESQview) was able to allow genuine multi-tasking once the 386 came out, about the same time that Microsoft offered multi-tasking in Windows. During the years of the 286 (the IBM AT), Microsoft did have a genuine multi-tasking operating system (OS/2) that they believed would be running on hardware sufficient to maintain multiple copies of a DOS program + heap + stack (i.e. ~4 megs of RAM). It was only when OS/2 faltered that it became clear that people wanted to run multiple DOS sessions and needed more reliability than the Windows/386 / 3.0 system provided. By Windows 3.1, Quarterdeck's products were only marginally better than what came with a generic Windows installation.
> What happened? Microsoft wanted users to only be able to run one OS -- DOS/Windows -- on their PCs.
> Thus, Microsoft tied memory management into Windows, thereby destroying further development on PC
> VM capabilities.
This is simply false. There was very little structural difference between QEMM, Quarterdeck's memory manager, and Microsoft's EMM (included in DOS 5.0); EMM had been purchased from a competitor of Quarterdeck's. QEMM was slightly superior, but might have created much greater long-term compatibility issues for Windows had it become the standard; getting 90% of the benefit for only 20% of the hassles wasn't a bad trade-off for Microsoft. I certainly can't see distributing memory managers free with the operating system as destroying the technology. In addition, OS/2 2.0 (the last OS/2 that Microsoft contributed to) outperformed QEMM/DESQview by a long shot in terms of 386 memory management for virtual 8086 sessions. People today don't run lots of "real mode" applications and thus don't need powerful memory managers.
> Do you remember when the 386 came out, with its new memory protection capabilities? Do you
> remember how many years it took for Microsoft to provide support for those capabilities? Even Windows
> 95 still wasn't using it correctly.
None; they offered them in their commercial operating system, OS/2, which was used in things like Microsoft LAN Manager. They didn't offer it in Windows for the reason we were just discussing: such protection would have caused large numbers of DOS applications to stop functioning. Memory protection could only become part of the standard operating system when the standard applications didn't violate memory. Microsoft employed a middle ground of moderate protection, and even this created enormous problems for a generation of software and software developers used to having dangling pointers all over their code.
> In fact, it was Linux that, while new, provided support for 386 memory protection -- long before
> Windows.
Yes, the 386 Unixes had it years before Windows, since they didn't have to support DOS applications.
> Do you remember when Microsoft hired a group of VMS developers from Digital to develop a stable
> version of Windows? Remember when they succeeded with NT 3.51? Remember when Microsoft
> destroyed that stability by allowing video drivers to run in kernel mode, in NT 4.0? Microsoft's history is
> riddled with backward steps.
I think "backwards" is too strong. Microsoft has competing interests: high compatibility vs. reliability. Originally they had planned on compatibility going with the Windows line and reliability with OS/2. Once OS/2 failed, they needed an NT product line, but 3.51 was seen as not compatible enough. Did they make the right choice in retrospect? Probably not; at the time, though, and still today, direct-mode video was being used by lots of Windows apps. What Microsoft did was offer a semi-safe solution with DirectX.
> Remember when, in 1990, everyone had a capable GUI, that is, everyone but Microsoft? By the end of
> the eighties, we had the Macintosh, the Amiga, the Atari ST, and OS/2 and Geoworks for the PC.
For a very long time the business community rejected GUIs in favor of menu systems, which Microsoft supported quite well via ANSI.SYS. In practice there had been GUIs long before the ones you mentioned (like the one for the Apple II); they just didn't take off. The Macintosh offered the only successful GUI, and GUIs were not a strong customer demand. By the time of OS/2 and Geoworks, that demand still hadn't materialized.
> It wasn't until five years later that Microsoft came out with something even remotely similar, in Windows
> 95.
Did the start menu rather than application groups make that much of a difference?
> Remember when there were simple standards for LANs (SMP),
Baloney. There were no used standards for LANs at all when NetBEUI came out. There were a dozen different vendors all offering different and incompatible systems. AppleTalk offered a standard but no way to use non-Macs; Novell offered a standard but it cost a bundle; Unix offered a standard that required you to run Unix; LANtastic offered a PC standard that didn't scale...
> security (Kerberos),
Again a Unix standard.
> printers (PCL),
Microsoft has never had any problems with PCL. I'm not even sure what you are talking about; if anything, Microsoft supported PCL. BTW, the printer standard at the time you are talking about was PostScript. Microsoft did have a problem with PostScript, believing it was too expensive to implement ever to become truly a printer standard. So what they tried to do was offer the major advantage of PostScript (high-quality fonts) on cheap printers by using the Bitstream system (today called TrueType). I can't say that didn't work out. BTW, even today it still costs a lot to get PostScript support in a printer.
> and video (VGA)?
Again, what did Microsoft ever do to hinder VGA? DOS supported open video drivers, so any video card within reason would work fine.
> Microsoft didn't want open standards, because that might help another OS to compete with Windows.
> Now, because of Microsoft, we have polluted protocols, and complex devices drivers, tied closely into
> Windows. Further development of interface standards for PC hardware has slowed to a crawl.
Again, compatibility vs. reliability. If you want good-quality hardware standards, buy a Mac or an RS/6000 or a machine from any number of other vendors. Microsoft has been the champion of open hardware, which makes standards difficult, to say the least. No one benefits more from easy, unified interfaces than Microsoft, but what they have refused to do is tie into particular vendors.
> Remember when Microsoft tried to sabotage the standards for Java and OpenGL? Remember the
> Halloween document where Microsoft stated their plans to "decommoditize" (i.e. destroy the openness
> of) Internet protocols? Have you noticed that Microsoft has been carrying through on that threat?
You are switching from crushing innovation to not being standards compliant. This is a different issue.
> Were you paying attention to how long it took for Microsoft to provide a 64-bit version of Windows? The
> DEC Alpha version of Windows was a joke, because it was just a 32-bit version of Windows, slightly
> modified to be able to run on 64-bit hardware. Even now, there is doubt about Microsoft's claim of being
> 64-bit-ready. Meanwhile, Linux has been running on 64-bit platforms for years.
And how many 64-bit CPUs do Microsoft's customers use? Again, Microsoft supports customer demand.
> If there is one thing that has stood out about Microsoft and Windows, it is their _lack_ of innovation.
It's funny. Above, you go on about standards. If there is one area where Microsoft has innovated more than any other company, it's creating a standard base for applications and the creation of standard applications.
Re:While Microsoft talks, Linux innovates (Score:2)
During the years of the 286 (the IBM AT) Microsoft however had a genuine multi-tasking operating system (OS/2)
HANG ON THERE! OS/2 was made by IBM, not Microsoft.
Re:While Microsoft talks, Linux innovates (Score:2)
>>>>>.
I hate MS as much as the next guy, but this isn't really true. If it hadn't been for Microsoft making DirectX, game developers would still be writing their own drivers for the dozen different soundcard types on the market. Or, Creative Labs would have just gotten a monopoly on the market and Sound Blaster compatibility would still be important. Plus, MS keeps adding features to the API, and hardware manufacturers rush ahead to support them. That helps innovation, not hurts it.
Thus, Microsoft tied memory management into Windows, thereby destroying further development on PC VM capabilities.
>>>>
Do you remember how much DR-DOS sucked? Every modern OS has had built-in virtual memory. Microsoft building it in was just them following CS theory. And remember, the PC VM hardware is still there, and still works if you care to use it.
Do you remember when the 386 came out, with its new memory protection capabilities?
>>>>
And do you realize that people are STILL bitching and moaning about Windows XP not running their DOS programs? Don't blame Microsoft, blame backwards compatibility. Or more aptly, closed-source software that can't be recompiled to be used when OSs evolve.
Remember when Microsoft destroyed that stability by allowing video drivers to run in kernel mode, in NT 4.0? Microsoft's history is riddled with backward steps.
>>>>>
So MS decided that microkernels were out. Along with everybody else! As for video drivers, those aren't the real problem. The real problem was moving the GDI into the kernel.
Remember when, in 1990, everyone had a capable GUI, that is, everyone but Microsoft? By the end of the eighties, we had the Macintosh, the Amiga, the Atari ST, and OS/2 and Geoworks for the PC. It wasn't until five years later that Microsoft came out with something even remotely similar, in Windows 95.
>>>>>
This holds back hardware developers how?
Remember when there were simple standards for LANs (SMP), security (Kerberos), printers (PCL), and video (VGA)? Microsoft didn't want open standards, because that might help another OS to compete with Windows. Now, because of Microsoft, we have polluted protocols, and complex devices drivers, tied closely into Windows.
>>>>
Windows 2000 uses TCP/IP for LANs, and supports Kerberos security. It fully supports PostScript printers, and the WinPrinters and WinModems came out because hardware developers were lazy, not Microsoft. As for the VGA bit, you're joking, right? (S)VGA is an anachronism that doesn't support linear memory, high-res/high-bandwidth modes, and most importantly, acceleration. And acceleration is hideously complex, which is why graphics drivers are complex. As for open standards, VESA had the potential to be one, but they screwed themselves when they tried to charge for VBE/AF. Besides, hardware these days cannot keep the same programming interface. Just look at ATI. They've been unable to keep their drivers compatible across even two generations (R200 -> R300) of cards. The real problem is a lack of register specs from graphics manufacturers, not great Microsoft technologies like DirectX (their one positive contribution to computing).
Further development of interface standards for PC hardware has slowed to a crawl.
>>>
Blame the HW manufacturers, who charge for even basic things like the PCI specs.
Remember when Microsoft tried to sabotage the standards for Java and OpenGL? Remember the Halloween document where Microsoft stated their plans to "decommoditize" (i.e. destroy the openness of) Internet protocols? Have you noticed that Microsoft has been carrying through on that threat?
>>>>
And have you noticed that OpenGL is more important than ever? And Java never really took off client-side anyway?
Were you paying attention to how long it took for Microsoft to provide a 64-bit version of Windows? The DEC Alpha version of Windows was a joke, because it was just a 32-bit version of Windows, slightly modified to be able to run on 64-bit hardware. Even now, there is doubt about Microsoft's claim of being 64-bit-ready. Meanwhile, Linux has been running on 64-bit platforms for years.
>>>
Yes, Linux runs on platforms nobody buys. Windows just runs on platforms people actually purchase. You can't blame them for being in a different business.
Have you noticed all of the hardware innovation that has been taking place with Linux? Just in the last few years, we have seen Linux based supercomputers, Linux-based clusters for movie graphics, Linux on IBM mainframes, Linux in car radios, Linux-based store kiosks, Linux-based digital video recorders, and so on. Many of those innovations could have taken place ten years ago, except for one thing -- they were being held back by Microsoft.
>>>>>
I see how Linux helps drive hardware innovation, because it's a free, open-source, embeddable solution, but I fail to see how Microsoft hurt it. It's like blaming your car for not being a helicopter.
Re:While Microsoft talks, Linux innovates (Score:2)
>>>>
You don't seem to understand. Hardware that is constrained to a single register standard cannot evolve as well as hardware where the standardization lies at a higher level. You seem to pine for the days of VGA when a few simple register tricks would work on all chips. Well, in this day of vertex shaders and multi-sample anti-aliasing, a single HW level standard is far too constraining. Hell, even something high-level like DirectX (which is more than just Direct3D!) is getting constraining.
Actually, with Linux, I can interface with any sound hardware that is SoundBlaster-compatible. I thank SB for providing the standard.
>>>>>>>
Wow. Boy, do you live in the '80s. SB compatibility is dead. Everyone and their mother has moved on to compatibility at the API level. HW manufacturers like it. End-users like it. Game developers like it. You can't seem to grok it.
> Do you remember how much DR-DOS sucked?
Point taken. I meant in comparison to modern OSs (like UNIX), not DOS itself.
Wrong. Mainframes have hardware assists for virtual memory. They don't waste the CPU's time transferring bytes in and out of real memory.
>>>>>>>
Mainframes are also really anachronistic. They get the job done, but at the cost of a ton of hardware that could be better spent on more modern designs. Are you claiming that every single modern OS (Linux, Solaris, IRIX, etc.) and every single modern CPU (MIPS, SPARC, Alpha), all of which use the traditional kernel-controlled on-CPU MMU virtual memory design, somehow missed the boat when they didn't catch on to DOS EMS cards?
Oh, please. Everyone else (e.g. OS/2, Linux) seems to be able to run DOS software without giving up on 386 memory protection.
>>>>>>>
Not like Win95 can. Try running programs that directly access certain hardware under Linux, and they *will* get killed. There is just no way around it. Win95 leaves that hardware open, so moldy old DOS programs still work.
All you are telling me is that Microsoft is too incompetent to do it. In fact, look back to the earlier quotes, where DR-DOS was able to achieve near-perfect backward compatibility with MS-DOS, while Microsoft could not.
>>>>>>>>
And DR-DOS was a fully preemptive, multithreaded, 32-bit, protected mode OS?
(S)VGA was a standard. And you're right, it's out-of-date. But that's exactly the point. If (S)VGA is out-of-date, then why isn't there a new standard to replace it? The answer is because Microsoft has sabotaged the standards process.
>>>>>
The answer is because hardware vendors realized that grand-unified hardware standards had gone the way of the dinosaur. Newer chips were reaching the complexity of CPUs, and simply could not adhere to one register standard. Try reading a modern graphics card register spec sometime (I suggest Matrox's; they're free online). Now tell me that these interfaces (which, mind you, are several generations behind current HW) would not be held back if they had to conform to something like a new VGA.
Now, in place of video standards, we have unique, complex drivers for each video card. These drivers are tied so closely to the OS, that they even have to be rewritten to go from one version of Windows to the next. And for some versions of Windows, such as NT, a lot of drivers don't even get ported.
>>>>>>>>.
These drivers don't have to be tied closely with the OS. Take a look at NVIDIA's kernel driver wrapper code. It's quite OS independent. A graphics driver needs only a few basic functions (allocating memory, managing interrupts, etc) and has no need to interface to complex internal OS structures. What is needed is not a hardware standard, but something like XFree86's binary module mechanism. It specifies an abstraction layer for graphics drivers. Any OS that supports this abstraction layer can theoretically load XFree86 drivers, even in binary form.
So Microsoft only succeeded in slowing down the spread of OpenGL. It doesn't change the fact that they tried to kill it. Of course, Microsoft hasn't given up yet, as demonstrated by Microsoft's claim that their recently-purchased SGI patents gives them control over parts of OpenGL.
>>>>
They're on the ARB; they have control anyway. To tell the truth, the ARB did more than anyone to hurt OpenGL. They stagnated it for so long that it really started looking pitiful next to Direct3D. Thank god 3DLabs had the balls to come forward with OpenGL 2.0.
> Yes, Linux runs on platforms nobody buys.
You mean like Intel-based PCs, 64-bit PCs (already being used, with Linux, in supercomputer clusters), IBM mainframes, Macintoshes, Sun boxes, HP boxes, Tivos, IBM RS6000s, and so on? Those sorts of "platforms that nobody buys?"
>>>>>>>>>
Yes, compared to the millions upon millions of Windows-based PCs. Microsoft (until only recently) wasn't in that business. Who cares if their products weren't suited for it?
Haha. You're a funny guy. The fact is that Microsoft tried to make the move to 64-bit servers, and they failed! They failed miserably! They failed because they weren't good enough!
>>>>>>>
I highly doubt Microsoft lacks the technical skill to make a 64-bit OS. The Alpha, PPC, and MIPS ports of Windows were never really intended to be blockbuster products. I'm sure they had some plans to target that market, but it was more to show off the cool new design of Windows NT (back when portability, object orientation, microkernels, and multiple OS personalities were all the rage).
- If Microsoft hadn't taken steps to prevent it, we would now be running PCs with mainframe-style hardware-assisted virtual memory.
>>>>>>>
Clarify this. As far as I can see, the virtual memory layout on my PC is (a) hardware-assisted (the CPU takes care of reading the page tables for me) and (b) the same style of virtual memory as in 64-proc Sun and SGI boxes. Now, if you're talking about multiple OS images running in fixed partitions of memory, maybe you are right, but that raises the question: how useful is that? My current machine (1.5GHz Athlon XP) barely has enough horsepower to run Linux/KDE 3, much less both Linux and WinXP simultaneously. Plus, if it's such a useful feature, why doesn't Sun or SGI have it?
- If Windows had been able to support the new capabilities of the 386, Intel could have been selling it years sooner.
>>>>>>>>
If people hadn't bitched about backwards compatibility, Windows 95, Windows 98, Windows 98 SE, and Windows ME wouldn't have been necessary!
- If Microsoft had been able to provide a stable-enough OS, then Intel and PC manufacturers could have been selling server hardware to compete with the Unix vendors.
>>>>>
Until recently (hell, even now) PC hardware is not enough to compete with real UNIX machines. MS provided a super-stable OS with NT 3.x. Why didn't PCs take the server market by storm? Cuz they weren't ready!
Re:You mean Linux DOESN'T support USB 2.0? (Score:3, Interesting)
OSS typically lags commercial software support, unless the hardware standards designers and hardware manufacturers work with Linux and/or Linux people right from the start. All too often, the first sample a Linux developer gets is bought retail the day a new product is released, often with no hardware specs to go on. I once contacted a hardware standards group by telephone to inquire about getting a copy of the standard for development purposes. If I wasn't a member of their organization, I'd have to pay $10,000 and sign a non-disclosure agreement. I was told membership was "very exclusive and expensive". The standard was eventually released when products came out. That was the I2O standard.
Re:You mean Linux DOESN'T support USB 2.0? (Score:2)
99% of drivers for Windows are written by device manufacturers. 99% of drivers for Linux are written by people working in their own spare time.
Re:You mean Linux DOESN'T support USB 2.0? (Score:2)
Not only that, drivers written by hardware vendors tend to be buggier than drivers written by the OS people for the OS they will be used in. Bill Gates has even blamed much of the bugginess of Windows on this. The bugs tend to be in interfacing with the OS itself. In fact, I've even seen it myself, where selecting certain options in the HP printer driver property menu on NT would instantly blue-screen. Linux tends to have more of the latter kind, since OS enthusiasts are doing most of the work.
USB root hub vs. USB devices (Score:2)
> The new version won't instantly enable USB 2.0 to work with Linux-based devices
*Still* having trouble getting their heads around this Linux thing
I know exactly what that part of the article means. It means that Linux now supports USB 2 controllers and hubs but does not yet support any USB 2 devices connected to a USB 2 tree.
I call bullshit (Score:3, Informative)
The host controller is the host side hardware which supports USB. For USB 1.1 (there was a 1.0 standard, but it's broken and hasn't been used in years) there was OHCI and UHCI.
For USB 2.0, there's EHCI.
You can't run USB 2.0 on an OHCI or UHCI HCD. You can't run USB 1.1 on an EHCI HCD.
So how does backward and forward compatibility work? Simple: your USB 2.0 card has both 1.1 and 2.0 HCDs on it. Most likely you have a couple of OHCI controllers and a couple of EHCI controllers on it.
That's why Linux saw the 1.1 controllers, because they need to exist to support 1.1 devices plugged into the root hub. Windows will also see the 1.1 controllers for the same reason.
Now, back to my subject. I call bullshit on devices working a hell of a lot faster in Windows. Why? Because the HCD is the bottleneck. If you plug a 1.1 device into your 2.0 card, it'll still be using the 1.1 controller that's on that card. The 1.1 controller is limited to 12Mbps.
The testing I've done (as well as other people) shows that Linux is consistently faster than Windows on almost all devices. For those devices where Linux is slower, it's only slower by an insignificant amount. Hardly "a HELL of a lot".
I won't even begin to explain the ignorance behind your assertion that there is nothing to sync your Palm with under Linux.
Re:No (Score:2)
I say such comparisons are irrelevant, and good job developers of USB 2.0.