Linux Software

USB 2.0 for Linux Coming Soon

itwerx writes "There's an article on MSNBC about USB 2.0 support in Linux. Interesting to see that the open source community is less than a year behind the most powerful software company in the world in supporting it. Does that make us the second most powerful now? :)"
  • by nilstar ( 412094 ) on Sunday July 28, 2002 @11:34AM (#3967510) Homepage
    CNET ran this story before MSNBC. The story is here [com.com].
  • wtf? (Score:1, Informative)

    by Anonymous Coward on Sunday July 28, 2002 @11:34AM (#3967512)
    I don't mean to be a troll, but USB 2.0 support was in the kernel (2.5) a WHILE ago.

    Next time you want to say what Linux will support, please do a search on lkml, if you even know what that is.
  • by fire-eyes ( 522894 ) on Sunday July 28, 2002 @11:40AM (#3967531) Homepage
    Coming? I'm using it right now; it's an experimental option in 2.4.18 (maybe earlier too).

    Flawless.
  • Huh? (Score:5, Informative)

    by virtual_mps ( 62997 ) on Sunday July 28, 2002 @11:46AM (#3967549)
    I've been using USB2 on linux for a while now. Since the kernel has source available, it's possible to apply patches to add features without waiting on a vendor. It would be more accurate to say something like "mainstream usb2 support" or "usb2 in released 2.4 kernel".

    FWIW, I've found USB2 to be not as fast as FireWire for things like hard drives, a conclusion Windows benchmarks have also reached. So it's not like the delay in releasing 2.4.19 is really hurting anything, especially since there aren't many USB2 devices or ports around anyway.
  • by Cryptosporidium ( 145269 ) on Sunday July 28, 2002 @11:50AM (#3967562) Homepage
    The article is from CNET. It has just been reported again by MSNBC.
  • by yerricde ( 125198 ) on Sunday July 28, 2002 @12:03PM (#3967606) Homepage Journal

    How can you have support for a non-existing CPU?!

    Just because it hasn't shown up on pricewatch.com yet doesn't mean it doesn't exist. There are prototypes, and before that, there were emulators.

  • by Johannes ( 33283 ) on Sunday July 28, 2002 @12:08PM (#3967626)
    From a high-level software perspective, there wasn't that much to do.

    Most of the work was developing the driver for the new EHCI host controller. A new host controller was necessary because the USB wire interface changed to support the faster speeds.

    Development of the EHCI driver took a while because of the lack of USB 2.0 devices. It's hard to test a driver when you have no hardware to test it against.

    That being said, the article is VERY misleading. Linux has had USB 2.0 support for well over a year now, since before 2.5 was forked. It's just that it has now been backported to 2.4. Even that's misleading, since it's been in the 2.4.19-pre tree since that tree was forked months ago.
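
    The EHCI controller is a separate PCI function with its own programming-interface byte, so an HCD driver can bind purely by PCI class code. Here is a minimal, hypothetical sketch in 2.4-era kernel C; the class codes are from the PCI spec, but the driver name, probe body, and glue are illustrative, not the real ehci-hcd:

    ```c
    /* Hypothetical sketch, not the real ehci-hcd: binding to EHCI
     * hardware by PCI class code alone. Class 0x0c03xx is
     * "serial bus / USB"; the low byte is the programming interface:
     * 0x00 = UHCI, 0x10 = OHCI, 0x20 = EHCI. */
    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/pci.h>

    static struct pci_device_id ehci_sketch_ids[] = {
            { PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
              0x0c0320, ~0 },       /* any vendor; class must be USB/EHCI */
            { 0, }                  /* terminating entry */
    };
    MODULE_DEVICE_TABLE(pci, ehci_sketch_ids);

    static int ehci_sketch_probe(struct pci_dev *dev,
                                 const struct pci_device_id *id)
    {
            /* a real driver would map the registers and start the HC */
            printk(KERN_INFO "EHCI-class controller at %s\n", dev->slot_name);
            return 0;
    }

    static struct pci_driver ehci_sketch_driver = {
            name:           "ehci-sketch",
            id_table:       ehci_sketch_ids,
            probe:          ehci_sketch_probe,
    };

    static int __init ehci_sketch_init(void)
    {
            return pci_module_init(&ehci_sketch_driver);
    }
    module_init(ehci_sketch_init);
    MODULE_LICENSE("GPL");
    ```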
  • I call bullshit (Score:3, Informative)

    by Johannes ( 33283 ) on Sunday July 28, 2002 @12:15PM (#3967657)
    You're seeing a couple of different things happening here.

    The host controller is the host-side hardware which supports USB. For USB 1.1 (there was a 1.0 standard, but it's broken and hasn't been used in years) there were OHCI and UHCI.

    For USB 2.0, there's EHCI.

    You can't run USB 2.0 on an OHCI or UHCI HCD. You can't run USB 1.1 on an EHCI HCD.

    So how does backward and forward compatibility work? Simple. Your USB 2.0 card has both 1.1 and 2.0 HCDs on it. Most likely you have a couple of OHCI companion controllers and an EHCI controller on it.

    That's why Linux saw the 1.1 controllers, because they need to exist to support 1.1 devices plugged into the root hub. Windows will also see the 1.1 controllers for the same reason.

    Now, back to my subject. I call bullshit on devices working a hell of a lot faster in Windows. Why? Because the HCD is the bottleneck. If you plug a 1.1 device into your 2.0 card, it'll still be using the 1.1 controller that's on that card. The 1.1 controller is limited to 12 Mbps.
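
    For scale, a back-of-the-envelope conversion of the raw signaling rates (these are bus rates before protocol overhead, so sustained throughput is lower in every case):

    \[
    12\ \mathrm{Mbit/s} / 8 = 1.5\ \mathrm{MB/s}\ \text{(USB 1.1)}, \qquad
    480 / 8 = 60\ \mathrm{MB/s}\ \text{(USB 2.0)}, \qquad
    400 / 8 = 50\ \mathrm{MB/s}\ \text{(FireWire)}.
    \]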

    The testing I've done (as have other people) shows that Linux is consistently faster than Windows on almost all devices. For those devices where Linux is slower, it's only slower by an insignificant amount. Hardly "a HELL of a lot".

    I won't even begin to explain the ignorance behind your assertion that there is nothing to sync your Palm with under Linux.
  • by unixmaster ( 573907 ) on Sunday July 28, 2002 @12:34PM (#3967716) Journal
    You can read linux-usb news and reach the linux-usb team at http://www.linux-usb.org [linux-usb.org]

  • by JoeBlows ( 581471 ) on Sunday July 28, 2002 @12:51PM (#3967768)
    The USB spec defines generic device classes, with drivers available to everyone. The classes cover optical devices, block devices (mass storage), mice, and keyboards (HID); a sketch of why that works follows below.
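
    What makes those drivers generic is that they bind by interface class rather than by vendor ID, so any spec-compliant device matches. A minimal, hypothetical match table in kernel C (the class/subclass/protocol values are the standard USB mass-storage codes; the table name is made up for illustration):

    ```c
    /* Sketch: a class driver's match table. Any device exposing a
     * mass-storage interface (SCSI command set, bulk-only transport)
     * matches, regardless of vendor. */
    #include <linux/usb.h>

    static struct usb_device_id storage_sketch_ids[] = {
            { USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE,
                                 0x06,          /* subclass: SCSI transparent */
                                 0x50) },       /* protocol: bulk-only transport */
            { }                                 /* terminating entry */
    };
    MODULE_DEVICE_TABLE(usb, storage_sketch_ids);
    ```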
  • Long Device Rant. (Score:4, Informative)

    by twitter ( 104583 ) on Sunday July 28, 2002 @01:26PM (#3967882) Homepage Journal
    I hate USB. Born in 1993, USB I was about as fast and universal as the parallel port. While I can see my devices on USB I, I have no idea how to talk to them. I have all the respect in the world for people who heroically struggle to build interfaces to talk to old scanners, cameras, and whatnot, in the face of OEM indifference and hostility. I'm afraid that USB II and the far superior IEEE 1394 [sourceforge.net] (400 Mbps current, 800 Mbps planned, can have multiple PC hosts, already backported to 2.2 kernels) might suffer the same fate. Someone tell me it's not so.

    So nice of M$ to draw attention to the mechanism that it keeps splintered. The article phrases the situation as a model for Linux device compatibility, as if there were no other options and Linux development will always be broken and lagging. This is true, if you are talking about chasing M$'s broken tail. CSS has demonstrated that any device can be made impossible to talk to, regardless of technical skill.

    My experience with M$ USB has been less than advertised. Windows 2000 has managed to make USB I not hot-pluggable, and it manages to screw up one of my cameras' flash card formatting every time I plug it in at work! At home, I tried to print five plain-text pages to a USB printer from Win98. I got four pages, five error messages about lack of communications, and one last message about "unknown system errors" requiring a reboot. Sometimes it works, sometimes it don't. That's what happens when you screw around with "standards" too much.

    On the other hand, PCMCIA with a compact flash adaptor has worked very well. Compact flash registers itself as a new hard drive, /dev/hde in most cases, and this shows up in /var/log/messages when you plug it in. So long as your camera stores pictures unscrambled, you can get them without any silly interface software or device driver. Mount and copy. Canon S110 works great, SiPix has broken pictures. Yeah, PCMCIA only goes 64 mbps, sigh. Too bad someone out there wants to make sure that:
    1. You must use a proprietary driver to talk to your devices. This will enable DRM of the pictures you take; eventually you will have to pay per play to view or print your own pictures. That's progress!
    2. That driver will not work forever and you will have to replace your device. Bitrot! More progress. My place of work is filled with old devices that stopped working due to "software upgrades". The vendors recommend, shocker, that we replace the devices.

    M$ will never support a "universal" device.

  • by compwiz ( 21231 ) on Sunday July 28, 2002 @01:43PM (#3967932)
    Leave it to MSNBC and CNET to print totally uneducated articles about something they have no background in.
    From linux-usb.org:
    People have been using USB 2.0 with usb-storage devices from Linux hosts since June 2001, but it was only in early winter of that year (a short while before Linus created the 2.5 development branch) that other USB 2.0 devices (notably, hubs) began to be available. So while some changes for USB 2.0 were rolling into 2.4 kernels through the year 2001, most of the tricky stuff (the ehci-hcd driver) was separate until the 2.5 kernel branched. Recently, some Linux distributions have begun to include this support.
  • by be-fan ( 61476 ) on Sunday July 28, 2002 @02:05PM (#3968018)
    Without looking at the specs to see, it's rather obvious that the hardware people just redesigned the interface all over again.
    >>>>>>>>>
    Well, here are the specs so you don't have to make stuff up:
    USB 2.0 [usb.org]
    USB 1.0 [usb.org]
    The real difference is here:
    OHCI (USB 1.0 host controller, this is the better one) [compaq.com]
    UHCI (USB 1.0 host controller, the sucky one) [intel.com]
    EHCI (USB 2.0 host controller spec, has more smarts like OHCI) [intel.com]
  • by jbolden ( 176878 ) on Sunday July 28, 2002 @02:12PM (#3968044) Homepage
    I'm going to correct a few things. I'm not sure about this "do you remember" since it seems like you are quoting history you yourself didn't live through.

    > Microsoft doesn't make advancements -- the PC hardware developers do.

    Microsoft never billed itself as an innovator until very recently. Microsoft's strategy was based on low price and high volume. In terms of volume sales, standardization, and low prices they most certainly have advanced the market, as anyone who was around before their dominance will attest. The biggest area of innovation was the Microsoft/Western Digital/Intel arrangement that led to the IBM PC incorporating an open standard for hardware, so that after Compaq cloned the IBM BIOS we had a multi-vendor market of compatible PCs. The reason you are running a PC today is because of that "innovation".

    > Do you remember, in the early nineties, when we had hardware-based Virtual Machine capabilities on the > PC? Remember when, because of virtual memory and multitasking innovations from companies like
    > Qualcomm, we were able to run multiple copies of DOS, DR-DOS, and other OSes, in parallel?

    The company was Quarterdeck. You didn't have virtual machines prior to the 386, since the 8088 and 286 didn't offer protected memory. Quarterdeck's 286 task-sharing system (DESQview) was able to provide genuine multitasking when the 386 came out. This was about the same time that Microsoft offered multitasking in Windows. During the years of the 286 (the IBM AT), Microsoft did, however, have a genuine multitasking operating system (OS/2) that they believed would be running on hardware sufficient to maintain multiple copies of a DOS program + heap + stack (i.e. ~4 megs of RAM). It was only when OS/2 faltered that it became clear that people wanted to run multiple DOS sessions and needed more reliability than the Windows/386 / 3.0 system provided. By Windows 3.1, Quarterdeck's products were only marginally better than what came with a generic Windows installation.

    > What happened? Microsoft wanted users to only be able to run one OS -- DOS/Windows -- on their PCs. > Thus, Microsoft tied memory management into Windows, thereby destroying further developer on PC
    > VM capabilities.

    This is simply false. There was very little structural difference between QEMM, Quarterdeck's memory manager, and Microsoft's EMM (included in DOS 5.0); EMM had been purchased from a competitor of Quarterdeck's. QEMM was slightly superior, but might have created much greater long-term compatibility issues for Windows had it become the standard; getting 90% of the benefit for only 20% of the hassles wasn't a bad trade-off for Microsoft. I certainly can't see distributing memory managers free with the operating system as destroying the technology. In addition, OS/2 2.0 (the last OS/2 that Microsoft contributed to) outperformed QEMM/DESQview by a long shot in terms of 386 memory management for virtual 8086s. People today don't run lots of "real mode" applications and thus don't need powerful memory managers.

    > Do you remember when the 386 came out, with its new memory protection capabilties? Do you
    > remember how many years it took for Microsoft to provide support for those capabilities? Even Windows
    > 95 still wasn't using it correctly.

    None; they offered them in their commercial operating system, OS/2, which was used in things like Microsoft LAN Manager. They didn't offer it in Windows for the reason we were just discussing: such protection would have caused large numbers of DOS applications to stop functioning. Memory protection could only become part of the standard operating system once the standard applications didn't violate memory. Microsoft employed a middle ground of moderate protection, and even this created enormous problems for a generation of software and software developers used to having dangling pointers all over their code.

    > In fact, it was Linux that, while new, provided support for 386 memory protection -- long before
    > Windows.

    Yes, the 386 Unixes had it years before Windows, since they didn't have to support DOS applications.

    > Do you remember when Microsoft hired a group of VMS developers from Digital to develop a stable
    > version of Windows? Remember when they succeeded with NT 3.51? Remember when Microsoft
    > destroyed that stability by allowing video drivers to run in kernel mode, in NT 4.0? Microsoft's history is
    > riddled with backward steps.

    I think backwards is too strong. Microsoft has competing interests: high compatibility vs. reliability. Originally they had planned on compatibility going with the Windows line and reliability with OS/2. Once OS/2 failed, they needed an NT product line. But 3.51 was seen as not compatible enough. Did they make the right choice in retrospect? Probably not; at the time, though, and still today, direct-mode video was being used by lots of Windows apps. What Microsoft did was offer a semi-safe solution with DirectX.

    > Remember when, in 1990, everyone had a capable GUI, that is, eveyone but Microsoft? By the end of the > eighties, we had the Macintosh, the Amiga, the Atari ST, and OS/2 and Geoworks for the PC.

    For a very long time the business community rejected GUIs in favor of menu systems, which Microsoft supported quite well via ansi.sys. In practice there had been GUIs long before the ones you mentioned (like the one for the Apple II); they just didn't take off. The Macintosh offered the only successful GUI, and GUIs were not a strong customer demand. At the time of OS/2, Geoworks, etc., Windows was Microsoft's GUI. Notice that Geoworks (similar to the Quarterdeck example above) was not scalable, but rather was a niche product that filled a particular hardware gap in Microsoft's strategy, existed for a short period of time, and died. It was never meant to be a long-range platform design in the same sense as MacOS or Windows or OS/2. Finally, in terms of OS/2, the bulk of OS/2 applications were text mode. The GUI API wasn't really usable until 1.2, and people didn't mainly use the GUI OS until about 2.0. By that time Windows was in full swing.

    > It wasn't until five years later that Microsoft came out with something even remotely similar, in Windows
    > 95.

    Did the start menu rather than application groups make that much of a difference?

    > Remember when there were simple standards for LANs (SMP),

    Baloney. There were no widely used standards for LANs at all when NetBEUI came out. There were a dozen different vendors all offering different and incompatible systems. AppleTalk offered a standard but no way to use non-Macs; Novell offered a standard but it cost a bundle; Unix offered a standard that required you to run Unix; LANtastic offered a PC standard that didn't scale....

    > security (Kerberos),

    Again a Unix standard.

    > printers (PCL),

    Microsoft has never had any problems with PCL. I'm not even sure what you are talking about; if anything, Microsoft supported PCL. BTW, the printer standard at the time you are talking about was PostScript. Microsoft did have a problem with PostScript, believing that it was too expensive to implement ever to become truly a printer standard. So what they tried to do was offer the major advantage of PostScript (high-quality fonts) on cheap printers by using the Bitstream system (today called TrueType). I can't say that didn't work out. BTW, even today it still costs a lot to get PostScript support in a printer.

    > and video (VGA)?

    Again, what did Microsoft ever do to hinder VGA? DOS supported open video drivers, so any video card within reason would work fine.

    > Microsoft didn't want open standards, because that might help another OS to compete with Windows.
    > Now, because of Microsoft, we have polluted protocols, and complex devices drivers, tied closely into
    > Windows. Further development of interface standards for PC hardware has slowed to a crawl.

    Again, compatibility vs. reliability. If you want good-quality hardware standards, buy a Mac or an RS/6000 or a machine from any number of other vendors. Microsoft has been the champion of open hardware, which makes standards difficult, to say the least. No one benefits more from easy unified interfaces than Microsoft, but what they have refused to do is tie into particular vendors.

    > Remember when Microsoft tried to sabotage the standards for Java and OpenGL? Remember the
    > Halloween document where Microsoft stated their plans to "decommoditize" (i.e. destroy the openness
    > of) Internet protocols? Have you noticed that Microsoft has been carrying through on that threat?

    You are switching from crushing innovation to not being standards compliant. This is a different issue.

    > Were you paying attention to how long it took for Microsoft to provide a 64-bit version of Windows? The
    > DEC Alpha version of Windows was a joke, because it was just a 32-bit version of Windows, slightly
    > modified to be able to run on 64-bit hardware. Even now, there is doubt about Microsoft's claim of being
    > 64-bit-ready. Meanwhile, Linux has been running on 64-bit platforms for years.

    And how many 64-bit CPUs do Microsoft's customers use? Again, Microsoft supports customer demand.

    > If there is one thing that has stood out about Microsoft and Windows, it is their _lack_ of innovation.

    It's funny. Above, you go on about standards. If there is one area where Microsoft has innovated more than any other company, it's creating a standard base for applications and the creation of standard applications.

  • Re:Proud? (Score:3, Informative)

    by iabervon ( 1971 ) on Sunday July 28, 2002 @02:26PM (#3968085) Homepage Journal
    Linux developers have, in general, had better things to do, aside from the group of people actually working on it. Until there are devices that use it and machines that support it, there's no reason to have OS support. Microsoft shipped support a while ago because they're pushing its adoption. Linux developers just want all the devices people have to work; they're generally not pushing particular hardware. Keeping on top of all the standards which may or may not catch on is generally a waste of time which could be better spent working on any of the other things you mentioned.
  • Re:Excellent! (Score:1, Informative)

    by Anonymous Coward on Sunday July 28, 2002 @04:57PM (#3968551)
    Hopefully I don't get fired for this, but most (all?) of the Kodak DC and DX digital cameras show up on the USB bus as a generic storage device, just as an IDE disk would. If you can plug a CF reader into a USB port on your OS and have it recognized, Kodak cameras will work too, WITHOUT ANY VENDOR SOFTWARE. Works brilliantly in Windows, MacOS, and FreeBSD (I detest Linux, and use FreeBSD exclusively except on gaming machines...)

    Posting anonymously to protect my job... Sorry if that offends you.
  • Re:I don't get it (Score:3, Informative)

    by Catbeller ( 118204 ) on Sunday July 28, 2002 @05:23PM (#3968635) Homepage
    No, Apple and their evil patents didn't sink Firewire. They let the license go for a dollar a box, then soon waived the fee altogether.

    INTEL HAS THE PATENTS ON USB, and they ain't shy about making money on it. And forcing Firewire OUT, and forcing their inferior product IN.

    As for complexity, that would not be expensive if the technology could get better economies of scale.

    But since Wintel does not want Apple to prosper, and also since Intel was mightily miffed about little Apple taking its USB thunder away when FireWire came out, they have FUDded, lied, blocked, inhibited, you name it, any attempt at getting FireWire into the mainstream.

    FireWire is an amazing success story: an overachiever that actually made it big despite determined opposition trying to Voldemort it in the crib.

    Expensive complexity in chipsets is nonsense. Much more complex circuitry exists for a song: how much is an LCD desktop screen? A video card? A CPU, jeez! A Duron 1.3 is going for $54! I picked up my Shuttle FV-24 barebones PC with FireWire on the motherboard for $190! There is no reason why FireWire is not on the mobo, other than cutthroat "free" marketers making damn sure crud gets sold to nuke the hated competitor.
  • by Anonymous Coward on Sunday July 28, 2002 @08:38PM (#3969228)

    You mean like this IOGear product announcement [iogear.com] for a USB 2.0 host-to-host link? Many such devices are already supported under Linux with the usbnet driver, though currently only at USB 1.1 speeds. (It should be easy to tell that driver how to handle one more device ... :)

    I'd expect it to be 2-3 times as fast as a 100BaseT link, without too much trouble, even on early USB 2.0 implementations. Bridge it (Linux will do the spanning-tree stuff for you!) and you have a relatively cheap 480 Mbit/s Ethernet-style LAN.

    That product might be based on the NetChip TurboConnect2 device. For USB 1.1 speeds there are a bunch of such custom devices, resold by many companies. I'd be rather surprised if that didn't happen with USB2.
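
    For the curious, "telling the driver how to handle one more device" is mostly a matter of adding an entry to usbnet's USB ID match table. A hypothetical sketch: the vendor/product IDs below are placeholders (the real ones come from the actual hardware), and the table follows the usual usb_device_id convention rather than the exact usbnet source:

    ```c
    /* Sketch: adding one more host-to-host cable to a usbnet-style
     * match table. The USB core binds the driver to any device whose
     * vendor/product IDs appear here. */
    #include <linux/usb.h>

    static struct usb_device_id products[] = {
            /* ... existing entries elided ... */
            { USB_DEVICE(0x1234, 0x5678) },     /* placeholder IDs for the new cable */
            { }                                 /* terminating entry */
    };
    MODULE_DEVICE_TABLE(usb, products);
    ```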
