USB 2.0 for Linux Coming Soon
itwerx writes "There's an article on MSNBC about USB 2.0 support in Linux. Interesting to see that the open source community is less than a year behind the most powerful software company in the world in supporting it. Does that make us the second most powerful now? :)"
CNET Story with details. (Score:3, Informative)
wtf? (Score:1, Informative)
Next time you want to say what Linux will support, please do a search on lkml, if you even know what that is.
Coming? It's already here (Score:5, Informative)
Flawless.
Huh? (Score:5, Informative)
FWIW, I've found USB2 isn't as fast as FireWire for things like hard drives, a conclusion Windows benchmarks have also reached. So it's not like the delay in releasing 2.4.19 is really hurting anything, especially since there aren't many USB2 devices or ports around anyway.
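For a rough sense of why the raw numbers don't settle this, here's a back-of-envelope comparison (a sketch only: these are raw signaling rates with protocol overhead ignored, and overhead is exactly where FireWire's bus-mastering design tends to win for disks; the 700 MB file size is hypothetical):

```shell
# Raw signaling rates in Mbit/s; mb is a hypothetical CD-sized file in MBytes
usb2=480; fw=400
mb=700
echo "USB 2.0: $(( mb * 8 / usb2 )) s   FireWire: $(( mb * 8 / fw )) s"
```

On paper USB 2.0 wins; the point above is that measured sustained throughput for drives often goes the other way.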
Re:Linux being mentioned on MSNBC (Score:4, Informative)
Prototypes and emulators (Score:2, Informative)
How can you have support for a non-existing CPU?!
Just because it hasn't showed up on pricewatch.com yet doesn't mean it doesn't exist. There are prototypes, and before that, there were emulators.
USB 2.0 is 99% hardware interface changes (Score:5, Informative)
The biggest amount of work was developing the driver for the new EHCI host controller. A new host controller was necessary for the USB wire interface changes to support the faster speeds.
The reason why development took a while for the EHCI controller was because of the lack of USB 2.0 devices. It's hard to test a driver when you have no hardware to test it against.
That being said, the article is VERY misleading. Linux has had USB 2.0 support for well over a year now, since before 2.5 was forked. It's just that it has now been backported to 2.4. Even that's misleading, since it's been in the 2.4.19-pre tree since it forked months ago.
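As a quick sanity check, a 2.4.19-pre tree with the backport can be spotted from its build config. A minimal sketch, assuming the stock option name CONFIG_USB_EHCI_HCD; the cfg line is a canned stand-in for grepping /usr/src/linux/.config:

```shell
# Stand-in for a line pulled out of /usr/src/linux/.config
cfg="CONFIG_USB_EHCI_HCD=m"
case "$cfg" in
  CONFIG_USB_EHCI_HCD=[ym]) status="ehci-hcd built (module or built-in)" ;;
  *)                        status="no EHCI support in this tree" ;;
esac
echo "$status"
```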
I call bullshit (Score:3, Informative)
The host controller is the host-side hardware which supports USB. For USB 1.1 (there was a 1.0 standard, but it's broken and hasn't been used in years) there were OHCI and UHCI.
For USB 2.0, there's EHCI.
You can't run USB 2.0 on an OHCI or UHCI HCD. You can't run USB 1.1 on an EHCI HCD.
So how does backward and forward compatibility work? Simple. Your USB 2.0 card has both 1.1 and 2.0 HCDs on it. Most likely you have a couple of OHCI controllers and a couple of EHCI controllers on it.
That's why Linux saw the 1.1 controllers, because they need to exist to support 1.1 devices plugged into the root hub. Windows will also see the 1.1 controllers for the same reason.
Now, back to my subject. I call bullshit on devices working a hell of a lot faster in Windows. Why? Because the HCD is the bottleneck. If you plug a 1.1 device into your 2.0 card, it'll still be using the 1.1 controller that's on that card. The 1.1 controller is limited to 12Mbps.
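The bottleneck claim is easy to put numbers on. A sketch, using the raw 12 Mbit/s signaling rate and a hypothetical 64 MB flash card, with protocol overhead ignored (real throughput would be worse on every OS):

```shell
usb11=12   # Mbit/s, the 1.1 controller's hard ceiling
mb=64      # MBytes on a hypothetical flash card
echo "$(( mb * 8 / usb11 )) s minimum to read the whole card, on any OS"
```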
The testing I've done (as well as other people) shows that Linux is consistently faster than Windows on almost all devices. For those devices where Linux is slower, it's only slower by an insignificant amount. Hardly "a HELL of a lot".
I won't even begin to explain the ignorance behind your assertion that there is nothing to sync your Palm with under Linux.
For the wondering ones.... (Score:2, Informative)
Re:This will help how (Score:3, Informative)
Long Device Rant. (Score:4, Informative)
So nice of M$ to draw attention to the mechanism that it keeps splintered. The article frames the situation as the model for Linux device compatibility, as if there were no other options and Linux development will always be broken and lagging. That's only true if you're chasing M$'s broken tail. CSS has demonstrated that any device can be made impossible to talk to, regardless of technical skill.
My experience with M$ USB has been less than advertised. Windows 2000 has managed to make USB 1 not hot-pluggable, and it manages to screw up one of my camera's flash card formatting every time I plug it in at work! At home, I tried to print five plain-text pages to a USB printer from Win98. I got four pages, five error messages about lost communications, and one last message about "unknown system errors" requiring a reboot. Sometimes it works, sometimes it doesn't. That's what happens when you screw around with "standards" too much.
On the other hand, PCMCIA with a CompactFlash adapter has worked very well. CompactFlash registers itself as a new hard drive, /dev/hde in most cases, and this shows up in /var/log/messages when you plug it in. So long as your camera stores pictures unscrambled, you can get them without any silly interface software or device driver. Mount and copy. The Canon S110 works great; the SiPix has broken pictures. Yeah, PCMCIA only goes 64 mbps, sigh. Too bad someone out there wants to make sure that:
1. You must use a proprietary driver to talk to your devices. This will enable DRM of the pictures you take - eventually you will have to pay per play to view or print your own pictures. That's progress!
2. That driver will not work forever and you will have to replace your device. Bitrot! More progress. My place of work is filled with old devices that stopped working due to "software upgrades". The vendors recommend, shocker, that we replace the devices.
M$ will never support a "universal" device.
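The mount-and-copy workflow above is short enough to script. A sketch, assuming a vfat filesystem and the /dev/hde name from the rant; the log line is a canned stand-in for what pcmcia-cs writes to /var/log/messages (exact wording and the SanDisk model string are made up, and vary by kernel):

```shell
# Canned stand-in for the hotplug line in /var/log/messages
logline="kernel: hde: SanDisk SDCFB-64, ATA DISK drive"
dev=$(echo "$logline" | sed -n 's/.*kernel: \(hd[a-z]\):.*/\1/p')
echo "/dev/$dev"
# Then, as root:
#   mount -t vfat /dev/${dev}1 /mnt/flash && cp /mnt/flash/dcim/*.jpg ~/pics/
```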
USB 2.0 support has been around for a lot longer (Score:2, Informative)
From linux-usb.org:
People have been using USB 2.0 with usb-storage devices from Linux hosts since June 2001, but it was only in early winter of that year (a short while before Linus created the 2.5 development branch) that other USB 2.0 devices (notably, hubs) began to be available. So while some changes for USB 2.0 were rolling into 2.4 kernels through the year 2001, most of the tricky stuff (the ehci-hcd driver) was separate until the 2.5 kernel branched. Recently, some Linux distributions have begun to include this support.
Re:What was a new USB architecture even needed? (Score:3, Informative)
>>>>>>>>>
Well, here are the specs so you don't have to make stuff up:
USB 2.0 [usb.org]
USB 1.0 [usb.org]
The real difference is here:
OHCI (USB 1.0 host controller, this is the better one) [compaq.com]
UHCI (USB 1.0 host controller, the sucky one) [intel.com]
EHCI (USB 2.0 host controller spec, has more smarts like OHCI) [intel.com]
Re:While Microsoft talks, Linux innovates (Score:3, Informative)
> Microsoft doesn't make advancements -- the PC hardware developers do.
Microsoft never billed itself as an innovator until very recently. Microsoft's strategy was based on low price and high volume. In terms of volume sales, standardization, and low prices they most certainly have advanced the market, as anyone who was around before their dominance will attest. The biggest area of innovation was the Microsoft, Western Digital, Intel arrangement that left the IBM PC without a proprietary hardware standard, so that after Compaq cloned the IBM BIOS we had a multi-vendor market of compatible PCs. The reason you are running a PC today is because of that "innovation".
> Do you remember, in the early nineties, when we had hardware-based Virtual Machine capabilities on the > PC? Remember when, because of virtual memory and multitasking innovations from companies like
> Qualcomm, we were able to run multiple copies of DOS, DR-DOS, and other OSes, in parallel?
The company was Quarterdeck. You didn't have virtual machines prior to the 386, since the 8088 and 286 didn't offer protected memory. Quarterdeck's 286 task-sharing system (DESQview) was able to allow genuine multitasking when the 386 came out. This was about the same time that Microsoft offered multitasking in Windows. During the years of the 286 (the IBM AT), however, Microsoft had a genuine multitasking operating system (OS/2) that they believed would be running on hardware sufficient to maintain multiple copies of a DOS program + heap + stack (i.e. ~4 MB of RAM). It was only when OS/2 faltered that it became clear that people wanted to run multiple DOS sessions and needed more reliability than the Windows 386 / 3.0 system provided. By Windows 3.1, Quarterdeck's products were only marginally better than what came with a generic Windows installation.
> What happened? Microsoft wanted users to only be able to run one OS -- DOS/Windows -- on their PCs. > Thus, Microsoft tied memory management into Windows, thereby destroying further developer on PC
> VM capabilities.
This is simply false. There was very little structural difference between QEMM, Quarterdeck's memory manager, and Microsoft's EMM (included in DOS 5.0); EMM had been purchased from a competitor of Quarterdeck's. QEMM was slightly superior but might have created much greater long-term compatibility issues for Windows had it become the standard; getting 90% of the benefit for only 20% of the hassles wasn't a bad trade-off for Microsoft. I certainly can't see distributing memory managers free with the operating system as destroying the technology. In addition, OS/2 2.0 (the last OS/2 that Microsoft contributed to) outperformed QEMM/DESQview by a long shot in terms of 386 memory management for virtual 8086s. People today don't run lots of "real mode" applications and thus don't need powerful memory managers.
> Do you remember when the 386 came out, with its new memory protection capabilties? Do you
> remember how many years it took for Microsoft to provide support for those capabilities? Even Windows
> 95 still wasn't using it correctly.
None; they offered them in their commercial operating system, OS/2, which was used in things like Microsoft LAN Manager. They didn't offer it in Windows for the reason we were just discussing: such protection would have caused large numbers of DOS applications to stop functioning. Memory protection could only become part of the standard operating system once the standard applications didn't violate memory. Microsoft employed a middle ground of moderate protection, and still this created enormous problems for a generation of software and software developers used to having dangling pointers all over their code.
> In fact, it was Linux that, while new, provided support for 386 memory protection -- long before
> Windows.
Yes, the 386 Unixes had it years before Windows, since they didn't have to support DOS applications.
> Do you remember when Microsoft hired a group of VMS developers from Digital to develop a stable
> version of Windows? Remember when they succeeded with NT 3.51? Remember when Microsoft
> destroyed that stability by allowing video drivers to run in kernel mode, in NT 4.0? Microsoft's history is
> riddled with backward steps.
I think backwards is too strong. Microsoft has competing interests: high compatibility vs. reliability. Originally they had planned on compatibility going with the Windows line and reliability with OS/2. Once OS/2 failed they needed an NT product line, but 3.51 was seen as not compatible enough. Did they make the right choice in retrospect? Probably not; at the time, though, and still today, direct-mode video was being used by lots of Windows apps. What Microsoft did was offer a semi-safe solution with DirectX.
> Remember when, in 1990, everyone had a capable GUI, that is, eveyone but Microsoft? By the end of the > eighties, we had the Macintosh, the Amiga, the Atari ST, and OS/2 and Geoworks for the PC.
For a very long time the business community rejected GUIs in favor of menu systems, which Microsoft supported quite well via ansi.sys. In practice there had been GUIs long before the ones you mentioned (like the one for the Apple II); they just didn't take off. The Macintosh offered the only successful GUI, and GUIs were not a strong customer demand at the time of OS/2 and Geoworks.
> It wasn't until five years later that Microsoft came out with something even remotely similar, in Windows
> 95.
Did the start menu rather than application groups make that much of a difference?
> Remember when there were simple standards for LANs (SMP),
Baloney. There were no widely used standards for LANs at all when NetBEUI came out. There were a dozen different vendors all offering different and incompatible systems. AppleTalk offered a standard but no way to use non-Macs; Novell offered a standard but it cost a bundle; Unix offered a standard that required you to run Unix; LANtastic offered a PC standard that didn't scale....
> security (Kerberos),
Again a Unix standard.
> printers (PCL),
Microsoft has never had any problems with PCL; I'm not even sure what you are talking about - if anything, Microsoft supported PCL. BTW, the printer standard at the time you are talking about was PostScript. Microsoft did have a problem with PostScript, believing it was too expensive to implement to ever become a true printer standard. So what they tried to do was offer the major advantage of PostScript (high-quality fonts) on cheap printers by using the Bitstream system (today called TrueType). I can't say that didn't work out. BTW, even today it still costs a lot to get PostScript support in a printer.
> and video (VGA)?
Again, what did Microsoft ever do to hinder VGA? DOS supported open video drivers, so any video card within reason would work fine.
> Microsoft didn't want open standards, because that might help another OS to compete with Windows.
> Now, because of Microsoft, we have polluted protocols, and complex devices drivers, tied closely into
> Windows. Further development of interface standards for PC hardware has slowed to a crawl.
Again, compatibility vs. reliability. If you want good-quality hardware standards, buy a Mac or an RS/6000 or a machine from any number of other vendors. Microsoft has been the champion of open hardware, which makes standards difficult to say the least. No one benefits more from easy unified interfaces than Microsoft, but what they have refused to do is tie themselves to particular vendors.
> Remember when Microsoft tried to sabotage the standards for Java and OpenGL? Remember the
> Halloween document where Microsoft stated their plans to "decommoditize" (i.e. destroy the openness
> of) Internet protocols? Have you noticed that Microsoft has been carrying through on that threat?
You are switching from crushing innovation to not being standards compliant. This is a different issue.
> Were you paying attention to how long it took for Microsoft to provide a 64-bit version of Windows? The
> DEC Alpha version of Windows was a joke, because it was just a 32-bit version of Windows, slightly
> modified to be able to run on 64-bit hardware. Even now, there is doubt about Microsoft's claim of being
> 64-bit-ready. Meanwhile, Linux has been running on 64-bit platforms for years.
And how many 64-bit CPUs do Microsoft's customers use? Again, Microsoft supports customer demand.
> If there is one thing that has stood out about Microsoft and Windows, it is their _lack_ of innovation.
It's funny. Above you go on about standards. If there is one area where Microsoft has innovated more than any other company, it's creating a standard base for applications and the creation of standard applications.
Re:Proud? (Score:3, Informative)
Re:Excellent! (Score:1, Informative)
Posting anonymously to protect my job... Sorry if that offends you.
Re:I don't get it (Score:3, Informative)
INTEL HAS THE PATENTS ON USB, and they ain't shy about making money on it. And forcing Firewire OUT, and forcing their inferior product IN.
As for complexity, that would not be expensive if the technology could get better economies of scale.
But since Wintel does not want Apple to prosper, and since Intel was mightily miffed about little Apple stealing its USB thunder when FireWire came out, they have FUDded, lied, blocked, inhibited, you name it, any attempt at getting FireWire into the mainstream.
FireWire is an amazing success story -- the overachiever actually makes it big despite a determined opposition trying to Voldemort it in the crib.
Expensive complexity in chipsets is nonsense. Much more complex circuitry exists for a song -- how much is an LCD desktop screen? A video card? A CPU, jeez! A Duron 1.3 is going for $54! I picked up my Shuttle FV-24 barebones PC with FireWire on the motherboard for $190! There is no reason why FireWire is not on the mobo other than cutthroat "free" marketers making damn sure crud gets sold to nuke the hated competitor.
Re:What's this about Virtual LAN cards through USB (Score:1, Informative)
You mean like this IOGear product announcement [iogear.com] for a USB 2.0 host-to-host link? Many such devices are already supported under Linux with the usbnet driver, though currently only at USB 1.1 speeds. (It should be easy to tell that driver how to handle one more device ... :)
I'd expect it to be 2-3 times as fast as a 100BaseT link, without too much trouble, even on early USB 2.0 implementations. Bridge it (Linux will do the spanning tree stuff for you!) and make it be a relatively cheap 480 MBit/sec Ethernet style LAN.
That product might be based on the NetChip TurboConnect2 device. For USB 1.1 speeds there are a bunch of such custom devices, resold by many companies. I'd be rather surprised if that didn't happen with USB2.
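The bridging setup mentioned above is only a few bridge-utils commands. A dry-run sketch: the interface names usb0/eth0 and these brctl invocations are assumptions based on the 2.4-era bridge-utils tools (swap the echo out of run() to actually apply them as root):

```shell
run() { echo "$@"; }      # dry run: print each command instead of executing it
run brctl addbr br0       # create the bridge device
run brctl stp br0 on      # have the kernel run spanning tree on it
run brctl addif br0 eth0  # enslave the real Ethernet NIC
run brctl addif br0 usb0  # enslave the usbnet host-to-host link
run ifconfig br0 up
```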