Misinterpretation of Standard Causing USB Disconnects On Resume In Linux
hypnosec writes "According to a new revelation by Sarah Sharp, misinterpretation of the USB 2.0 standard may have been the culprit behind USB disconnects on resume in Linux all along, rather than cheap and buggy devices. According to Sharp, the USB core is to blame for the disconnections rather than the devices themselves, as the core doesn't wait long enough for the devices to transition from a 'resume state to U0.' The USB 2.0 standard states that system software that handles USB must provide a 10 ms resume recovery time (TRSMRCY), during which it shouldn't attempt to communicate with any device connected to that particular bus segment."
A bug in Linux? (Score:5, Funny)
Clearly the whole thing is broken, and we should transition to a newer, more open and transparent system than even open source.
I will call it OPENER Source. You aren't just able to read the source, you're required to read it!
Update (Score:5, Informative)
"Update: Looks like this is an xHCI-specific issue, and probably not the cause of the USB device disconnects under EHCI."
linux has bugs? (Score:5, Interesting)
Could have fooled me. I end up spending upwards of 3 months a year fixing bugs in the base OS that we ship to run our appliance on. Some of the Linux subsystems still read like they were written in someone's basement, even after a decade of most of the maintainers being paid a yearly salary to maintain them. God forbid you actually fix some of the crap and post fixes that are more than ten lines long, though. It's a fine way to get blacklisted.
Re: (Score:3, Insightful)
Re:linux has bugs? (Score:4, Insightful)
Yet you still choose it as the base OS to run your appliance on. Presumably it's still better than any of the alternatives.
All software has bugs. I'm sure plenty of device drivers written in brightly lit offices by people with smart haircuts and shiny shoes have some absolutely horrific code too. I doubt which floor you work on has any significant effect.
Re: (Score:3)
There are some who write gobs of horrid code with sparse comments like // here be dragons.
What really gets to me is when I deal with code that is 70% comments, but it's all along the lines of:
// increment i and check it
IncrementAndCheckI() {
    // add one to i
    i++;
    // if i equals x
    if (i == x) {
        // call the other function with i
        CallOtherFunc(i);
    }
}
What's most disturbing is that I see this really quite regularly. Damn near every line is commented, and yet none of it tells me ANYTHING useful.
Re:linux has bugs? (Score:5, Insightful)
It's true. Linus has been quite vocal [lkml.org] about whose fault it is when a kernel change breaks an application...
Re:linux has bugs? (Score:4, Insightful)
I'm not a software dev, but I have never agreed with this:
If a change results in user programs breaking, it's a bug in the kernel. We never EVER blame the user programs.
What happens when some program has been using a privilege escalation bug to get around sudo, and it breaks when the kernel is patched to fix the vulnerability? Is that still "a kernel bug", should they not patch it? It seems to me that, yes, you try not to break applications, but this is why you have an official, supported API, and if bad developers want to rely on buggy kernel behavior for their programs, you have to choose between either screwing them, or screwing everyone else.
If anyone can enlighten me as to why that's wrong, I'd appreciate it.
Re:linux has bugs? (Score:5, Insightful)
For all practical purposes there's no way, I repeat, no way to "heat the whole apartment block" to eradicate bed bugs.
So you don't know what you are talking about.
If anyone can enlighten me as to why that's wrong, I'd appreciate it.
1. syscall returns -EFOO to report condition A
2. hmm, someone notices that -EFOO is too generic. That syscall should return the more specific -ECOND_A_ERROR instead. They change it.
3. ALL SOFTWARE that uses that syscall suddenly works *differently*, and perhaps does not work at all, on the modified kernel vs. an older kernel.
Capisce??
Do not change API internals. Fixing undocumented behavior (i.e. bugs, like overflows) is one thing. Modifying a documented and established API on a whim is a bad bad bad thing.
If you want to modify it like that, you do the following:
1. syscall returns -EFOO to report condition A
2. hmm, someone notices that -EFOO is too generic. That syscall should return the more specific -ECOND_A_ERROR instead. SO MAKE A NEW SYSCALL THAT RETURNS THE CORRECT CODE! Leave the old one as deprecated, for removal in some years.
3. ALL SOFTWARE continues to work.
If #2 is too much effort for the reward, then do nothing. But above all, do not break userland with kernel changes. Ever.
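The versioning approach the parent describes can be sketched in userspace C. The error names and numeric values below are hypothetical stand-ins (the original comment's -EFOO and -ECOND_A_ERROR are made-up codes, not real kernel errnos):

```c
/* Hypothetical error values standing in for -EFOO and -ECOND_A_ERROR. */
#define EFOO          5
#define ECOND_A_ERROR 95

/* Old syscall: its return value is frozen forever, so existing
 * binaries that check for -EFOO keep working unchanged. */
static int old_op(int condition_a) {
    return condition_a ? -EFOO : 0;
}

/* New syscall: reports the more specific code. Programs opt in by
 * calling it explicitly; nothing linked against old_op() changes. */
static int new_op(int condition_a) {
    return condition_a ? -ECOND_A_ERROR : 0;
}
```

Both entry points coexist, so an old binary and a recompiled one each get exactly the semantics they were built against.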
Re: (Score:2)
Define "some years?" What if the application developer doesn't change the behavior after those some years have passed? Isn't ensuring backwards compatibility one of the reasons that Windows has had so many security issues? If so then do you really want to have to be constantly on the lookout for those same issues in the Linux kernel? Finally, why does Linus insist on acting like a child?
I don't suppose you'll truly know the answer to the last question but I'd appreciate any speculation. Keep in mind that, w
Re: (Score:3)
it seems as if [Linus is] a petulant child at times. Okay, quite frequently... That's sad because having someone approachable would probably be a good thing for Linux. As for approachability, you have RMS and Linus... Yeah...
As opposed to Ballmer throwing his chairs about? In case you haven't noticed, the people that run all the world's largest companies are even bigger assholes than Linus. Besides, to me Linus just seems opinionated rather than immature, and having a strong sense of direction helps when doing something like overseeing the Linux kernel.
Re: (Score:2)
Sounds exactly like a petulant child to me.
Re: (Score:3)
Well, maybe Mauro really needed to STFU and listen to someone more experienced at that point? There's a stage where it's better to stop trying to be polite, and get people to wake up, if you want them to learn.
Re: (Score:3)
The chair wasn't thrown at anyone in particular. It was thrown because someone else was leaving MS to work at Google and Ballmer decided he was going to "fucking kill Google". He's doing an A1 job so far.
Re:linux has bugs? (Score:4, Interesting)
I'm not sure which world you live in, but leading the project which produces the OS kernel that is used in more computing devices than any other - well, that's not a bad result really.
Re: (Score:3)
I assume you would like security bugs specially marked so that you can prioritize fixing them and releasing the changes.
Look at the number of bugs found and fixed which have much later been discovered to be security bugs.
So now you didn't push "non-security" bug fixes to your production servers and they get owned by bad guys.
The lesson is that you should treat all bugs as security bugs.
Re:linux has bugs? (Score:5, Informative)
Basically, the kernel has an Application Binary Interface which is a bit like a contract. If the application gives the kernel something formatted in a specific way, the kernel promises to give it back something in a specific way, and the other way around. Any software that is written to respect the contract should never be broken by a change to the kernel, as the application has no knowledge of how the kernel performs its obligation.
Changes to the ABI are not supposed to be common events. They're supposed to be changed only when lesser changes can't work. FreeBSD handles it using compatibility libraries which maintain the ABI for various kernel revisions so that applications can continue to use older ones if need be. AFAIK, Linux doesn't do that, and as a result, the kernel maintainer and the developers writing the code have to be even more careful about changes made not messing up the ABI.
Also, because Linux is just a kernel without a userland, a change to the Linux kernel that was permitted to break the ABI could hose all of the distros all at once requiring the rewrite of hundreds of little bits of software that are cobbled together to make the distros function as complete OSes.
There's more to it, but that's basically why Linus takes the stance that the kernel is to blame and not the developer. But, he undoubtedly doesn't consider it to be the kernel's fault if a developer does things that don't comply with the normal ABI specifications.
Re: (Score:2)
Basically, the kernel has an Application Binary Interface which is a bit like a contract. If the application gives the kernel something formatted in a specific way, the kernel promises to give it back something in a specific way, and the other way around. Any software that is written to respect the contract should never be broken by a change to the kernel, as the application has no knowledge of how the kernel performs its obligation.
That was a very good explanation and describes it in a manner that I've never seen before but it does describe it very well. So, in short, thank you for that. Even if I had mod points I'd rather just thank you for having taken the time to come up with that description (I spent 'em already, they'll probably give me more tomorrow) because, frankly, that's the best description I've seen of an ABI and how it probably *should* be done. It brings to mind some security questions (example: What do you do, from the
Re:linux has bugs? (Score:5, Informative)
The ABI has been changed upon occasion. If a struct passed to a syscall (or ioctl) has some spare option bits that were reserved (and therefore zero), that can be the way to go (e.g. turning on the bit indicates that the program is aware of the new semantics).
Otherwise, a new syscall (or ioctl) number is assigned. For example, the stat syscall originally had a syscall number of 18. When the "struct stat" was modified [added some new fields and/or expanded the size of others], the syscall number was bumped to 106. Old programs that were not recompiled issued stat with 18 and worked unchanged. If you recompiled, you got syscall 106 and the new semantics.
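The renumbering described above can be sketched as a dispatch on the syscall number. The numbers 18 and 106 are the real old and new x86 stat syscall numbers mentioned in the comment, but the struct layouts here are simplified stand-ins, not the actual kernel definitions:

```c
#include <stddef.h>

#define SYS_stat_old 18   /* original stat */
#define SYS_stat_new 106  /* newer stat with a wider struct */

/* Simplified stand-ins for the two generations of "struct stat". */
struct stat_old { unsigned short st_size; };
struct stat_new { unsigned long st_size; unsigned long st_blocks; };

/* Each number keeps its own semantics forever: old binaries issue 18
 * and get the old layout; recompiled ones issue 106 and get the new
 * one. Neither number is ever reinterpreted. */
static size_t stat_result_size(int nr) {
    switch (nr) {
    case SYS_stat_old: return sizeof(struct stat_old);
    case SYS_stat_new: return sizeof(struct stat_new);
    default:           return 0;
    }
}
```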
Re: (Score:2)
Re: (Score:3)
I hate to admit it, since I don't use any form of Linux, but I'm beginning to like Linus despite his ego. His words are similar to the ones I would like to tell the multi-billion dollar corporations I have to deal with when I'm trying to figure out why their software works on everyone's machine one day, then on only one the next.
It's quite obviously a programming issue but no, I have to go round and round with people at various levels until they finally admit defeat and say, "Make the user a local admin."
not surprising (Score:2, Interesting)
Power Management has worked well on Windows for 15+ years. I'm still waiting for Linux's first year, so problems on Linux are with the kernel and/or the drivers.
Re:not surprising (Score:4, Informative)
You just said power management worked well on Windows 98 and 95.
I am calling you a liar.
Re: (Score:2)
You flipped the switch or hit the power button, it turned on, flip/hit it again, it went off. What didn't work right?
Re: (Score:3)
Which is easy for MS to achieve, as they're willing to ship a non-standard ACPI implementation rather than use the Intel implementation that everybody else uses, and to write workarounds for buggy implementations rather than kick them back to the manufacturer to do correctly.
Linux doesn't have that luxury, which means that DSDT changes and such have to be done by the end user rather than by the developer who should have implemented the standard correctly in the first place.
Re: (Score:2)
Which is easy for MS to achieve, as they're willing to ship a non-standard ACPI implementation rather than use the Intel implementation that everybody else uses, and to write workarounds for buggy implementations rather than kick them back to the manufacturer to do correctly.
What workarounds? Even if I install the original Windows 7 with no updates on top of it, ACPI works flawlessly on most machines, both old and brand new (which didn't even exist when Win7 was released).
Re: not surprising (Score:5, Funny)
... stubbornly refuses to sleep with win7. Works fine with Linux.
At least she has some standards.
Re:not surprising (Score:5, Informative)
15+ years is a stretch. Even in the 2006-07 era at the end of XP's development, there were brand new machines that couldn't return from sleep correctly. It was particularly vexing since a lot of them were laptops factory configured to sleep when left unattended. I will say that I haven't had any complaints with S3 sleep since the advent of Windows 7, however.
Re: (Score:3)
I've got an XP laptop that instantly BSODs the second you close the lid.
So yeah, myth busted.
DG
Re: (Score:3, Insightful)
If I remember correctly, a lot of the power management problems are due to the manufacturers not implementing the standards correctly; they implemented them to match the broken Windows implementation in order to keep Windows working.
Re: (Score:2)
Have a couple of Acer laptops running Windows 7 with displays that frequently (always?) don't come up out of sleep. The solution is to close the laptop and open it again, but this is obviously quite a pain. Linux actually performs better in this regard on the one. The other one is my wife's, so it isn't running Linux.
Re:not surprising (Score:5, Funny)
The other one is my wife's so it isn't running Linux.
My wife's laptop is running Debian 7. What's up with your wife? :)
Re: (Score:2)
The other one is my wife's so it isn't running Linux.
My wife's laptop is running Debian 7. What's up with your wife? :)
My guess is his is not imaginary.
Re: (Score:2, Offtopic)
Oh heavens, it must be happening again. I'm obviously experiencing a relapse of those terrible hallucinations that have plagued me for years. Oddly enough, they seem to be at their worst when I'm at home. I've had visions of a beautiful woman in my house, with two beautiful little girls running around as well. I know, I should seek medical attention immediately, as this could be a sign of a serious condition. Speaking of conditions, my sense of reality is so distorted that I've come to believe my fictitious
Re: (Score:2)
Wait... I thought you said she was beautiful? ;)
I'm just screwing with you - she's a wonderful looking woman and has good taste in operating systems as well assuming she uses it because she likes to and not because it pleases you. Then again, doing it for the latter reason isn't horrific or anything but I'm an idealist I guess. (Which is why I no longer have a wife.)
Re: (Score:2)
Yeah, she's a nerd, and she actually likes Debian :).
Re:not surprising (Score:5, Funny)
Oh heavens, it must be happening again. I'm obviously experiencing a relapse of those terrible hallucinations that have plagued me for years.
Oh, wait, she's real after all. [palegray.net]
Dude, you posted a photo of a laptop sitting on the armrest of an empty couch ;)
Re: not surprising (Score:5, Interesting)
FWIW, I now have a policy of avoiding Acer like the plague and advising my customers to do the same, owing to their appalling customer support when advised that an entire product line had a BIOS bug.
http://www.nexusuk.org/~steve/acer.xhtml [nexusuk.org]
TL;DR: one of their lines of laptops has a DSDT bug. I informed them; they weren't interested. I even sent them a patch, still not interested (and they decided that completely ignoring my emails was the best approach). To this day they haven't released an updated BIOS.
Re: (Score:2)
Re: (Score:2)
Sadly, most companies won't allow the customer to talk directly to engineering.
I didn't want to talk directly to engineering, I just wanted someone to pass the patch on to the right people and get a fixed BIOS out for a defective product line. Instead they thought it was better customer service to throw my emails into the bitbucket. That's the kind of customer service that gets a company onto my blacklist.
Re: (Score:3)
I share the same policy. I bought one and only one Acer in my life. It was the first Acer I bought and it is the last. Early on it would not sleep under Linux, so I took a look at the BIOS. That thing was so sloppily coded! The batteries that Acer shipped with the laptop also had a manufacturing flaw that caused them to fail prematurely. Batteries degrade over time progressively, but these would one day perform at 90% of original spec and the next day do their best brick impression. I owned two Acer-approved b
Re: not surprising (Score:2)
Re: (Score:2, Informative)
There's a big difference between Windows, where problems are a corner case, vs. Linux, where success is a corner case. But the point still remains that I've used sleep and hibernate on most of my Windows machines without really fearing problems or data loss (I'll still save any progress before initiating it, though thanks to Office 97 I'm in the habit of saving regularly regardless), but I can't imagine even bothering to try such a thing on Linux (nor can even the people I know who love Linux enough to a
Re:not surprising (Score:4, Informative)
Spoken like someone who's never had to reboot a computer from coma mode.
Re:not surprising, since there are few docs (Score:5, Informative)
Far too many vendors are only willing to provide chip documentation under a Non-Disclosure Agreement (NDA), which prevents a knowledge-based, as opposed to empirically-based, Linux driver. This allows them to kludge around chip deficiencies in a Windows driver without the user being aware of any issues. Even Intel has started making it harder to get the real manuals for their CPUs and bridges (they used to ALL be published on Intel's FTP and HTTP sites). Frequently, in System-on-Chip (SoC) implementations, even the CHIP vendors don't know anything; they just pass along whatever quick and dirty proof of concept the designers of some feature of the chip provided and call it a "working driver", while it is nothing that would pass even a cursory QA process.
The first Linux code I wrote was a "quirk" handler for a parallel ATA PCI chip that came up programmed to the same default I/O addresses as the South Bridge's internal ports, and a BIOS that didn't properly perform PCI enumeration on it, since it already had PCI addresses.
Re: (Score:2)
Re: not surprising (Score:3)
Power management ?
It's worked well for years on Linux. Only problems I ever had were involving nVidia workaround drivers and sleep.
Re: (Score:3)
My Windows 7 laptop will regularly not come out of sleep, requiring a battery removal to resolve. This occurs more often when I'm packing up and unplug something while it's going to sleep. Typically this happens about 3 weeks after the last reboot, which really sucks because it only has to make it to a month before it gets rebooted to install updates anyway.
The other annoying thing it does is when it wakes up it still thinks it has the external monitor attached (the VGA/DVI monitor only - the USB attached ones a
Maybe not all the disconnects? (Score:5, Informative)
Sarah's Google+ post has an update:
USB sucks (Score:5, Informative)
USB as a whole is already a silly design, having all these silly details and ambiguities. For example, where it has a minimum time (10ms in this case), it should also have a maximum time (for example 50ms). Devices should be able to communicate after that maximum time or they are broken. Actually, there should be a maximum time from power-up too... how is a minimum even useful for anything?
This only needs to specify controller communication, not device function. For example a hard drive might take several seconds to spin up and get in sync. But the controller should be able to do basic communication in 50ms, even if all it can say about the actual hard drive is "spinning up but not ready". USB has a lot of other stuff that is far from the KISS principle.
Welcome to EE (Score:5, Insightful)
The 10ms is for the software. The flip side of this is that the hardware has a maximum of 10ms to get its shit together so that it can be connected to. And 10ms is forever in hardware.
Re: (Score:3)
The 10ms is for the software. The flip side of this is that the hardware has a maximum of 10ms to get its shit together so that it can be connected to. And 10ms is forever in hardware.
Dear Linux kernel, i'll be ready when my disk is done spinning up. kthanksbye
Re:Welcome to EE (Score:5, Funny)
The 10ms is for the software. The flip side of this is that the hardware has a maximum of 10ms to get its shit together so that it can be connected to. And 10ms is forever in hardware.
Dear Linux kernel, i'll be ready when my disk is done spinning up. kthanksbye
Dear USB hard drive, that's fine, but don't go and disconnect from the USB bus in the meantime. Forever waiting, Linux kernel.
Re: (Score:2)
And 10ms is forever in hardware.
Not if the hardware is composed of a microprocessor, and the hardware holds the CPU in reset for 2 ms waiting for the crystal to stabilize before letting it run.
The 10 ms is for the DEVICE to get its shit together. Coming out of suspend, the host starts sending SOFs, and must not send anything else to the device for 10 ms. The DEVICE is required to be ready for communications from the host after this 10 ms period.
Re: (Score:2)
Re: (Score:3)
At least in the case of xHCI, the 10ms is actually a minimum for both -- the specs do not indicate a maximum for the hardware to resume at all.
That's not how specifications work. Both sides are required to obey the spec for things to work. A minimum for one side is a maximum for the other side.
It's like we have a lunch break specification. The specification says that the employer must wait a minimum of 30 minutes into a lunch break before sending the employee more work to do. This means the employee has a maximum of 30 minutes to finish lunch.
What is happening here is that the employer (the computer) is obeying the spec. It's waiting the required minimum time, then
Re: (Score:3)
You're pulling your punches. USB was a completely half-ass standard to start with, and then was continually modified with half-ass Frankenstein additions to provide features that were already in competing bus technologies, once its designers finally had to admit that those features were actually useful.
Anyway, it is not much of a surprise that things can slip in the USB area. There are only a few developers who are both talented enough to do it right and also have the patience for wasting their talent
Re: (Score:3)
well, you CAN plug a usb male connector into an rj45 jack.
it won't work, but it will fit just fine.
(stupid oversight. the last 15 or so years, cable and connector designers have been pretty idiotic. don't get me started on that rant..)
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
As someone whose computer experience predates the birth of USB by many years, I find all the criticism of USB to be a hoot. I mean, sure, it's a mess compared with an ideal system, but oh my lord it's so much better than the mess we had before I don't even know what to say. When peripheral interconnects are so good that we resort to complaining about USB, it's a better world than I could have dreamed of 25 years ago.
Re: (Score:2)
When peripheral interconnects are so good that we resort to complaining about USB, it's a better world than I could have dreamed of 25 years ago.
USB basically works now because a lot of very bright kernel developers have been beating it into submission since the late 90's.
I remember pre-USB, with my DIN keyboard and RS-232 mouse, parallel Epson-compatible printer, 10BASE-T Ethernet and floppy drive. They all worked just fine.
I also remember when USB started taking over and one had to keep PS2 keyboards an
Re: (Score:2)
Shh! You won't get burned at the stake around here - they'll burn you at the stack in these parts.
Re: (Score:2)
In the best of all worlds, devices should treat it as a maximum and hosts should treat it as a minimum.
Re: (Score:3)
In the SPEC, devices are required to treat it as a maximum and the host is required to treat it as a minimum. Any device which isn't ready for communications after 10 ms is broken, and any host that attempts communications before 10 ms is broken. This isn't an area of the spec that's in any way vague.
Re: (Score:2)
Do we think that Sarah will receive some trademarked swearing because of this?
http://news.techeye.net/chips/linus-torvalds-and-intel-woman-in-sweary-spat
http://linux.slashdot.org/story/13/07/15/2316219/kernel-dev-tells-linus-torvalds-to-stop-using-abusive-language
I'm beginning to wonder if Sharp is an MS or Apple plant, sent into Linux kernel work to sow seeds of antagonism and self-destruction.
Re: (Score:2)
USB as a whole is already a silly design, having all these silly details and ambiguities. For example, where it has a minimum time (10ms in this case), it should also have a maximum time (for example 50ms). Devices should be able to communicate after that maximum time or they are broken. Actually, there should be a maximum time when powered up ... how is a minimum even useful for anything.
There is a maximum time, which is 10ms. According to the spec, the kernel has to wait 10ms for the device to be ready. Say you have different kernels waiting 10, 20, 50 and 100 ms. A device that is ready after 15ms works on 3 kernels, but not on all 4, therefore it is broken. Actually, the device must be ready in 10ms, or it is broken.
"Minimum time the kernel has to wait" = "Maximum time the device is allowed to take to get ready".
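That minimum/maximum duality can be written out as two one-line compliance checks (a sketch; TRSMRCY is the spec's name for the 10 ms figure, and the function names here are made up for illustration):

```c
#define TRSMRCY_MS 10  /* USB 2.0 resume recovery time */

/* Host side: it may only start talking after waiting at least TRSMRCY. */
static int host_compliant(int waited_ms) {
    return waited_ms >= TRSMRCY_MS;
}

/* Device side: it must be ready within TRSMRCY of resume signalling. */
static int device_compliant(int ready_after_ms) {
    return ready_after_ms <= TRSMRCY_MS;
}
```

A device that needs 15 ms fails its check even though a generous host that waits 20 ms would happen to tolerate it, which is exactly the parent's point about the 15 ms device being broken.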
Misinterpretation *By Linux* (Score:3)
Unfortunately, the article refers to the "misinterpretation" passively, without saying directly who the author is asserting misinterpreted the spec, but from context it seems to be saying "misinterpreted by Linux", as opposed to "misinterpreted by lots of cheapo USB devices". It's bad that Linux does that, but it's certainly easier to fix in one place in Linux than to go out to lots of vendors putting out equipment with very low profit margins and hope they'll all do the right thing.
I was also a bit confused as to when the article was referring to microseconds (µs) vs. milliseconds (ms); I found it surprising that it seemed to be saying that most of the devices responded in under a microsecond, while others were over 10ms.
Re: (Score:2)
I have used Linux in the past as a rough guideline to determine what something ambiguous means. The assumption is that the devices work in Linux and have been tested somewhat, therefore how Linux does things is a better first approximation of what I should do than just guessing. I also hedged this by looking at xBSD code which is easier to understand.
A lot of times it's safest to just try to figure out what Windows does, because there are countless devices that only support the commands that Windows
Re:Misinterpretation *By Linux* (Score:5, Insightful)
There is no ambiguity in the USB spec, and Sarah has an incorrect interpretation. The spec requires that the host provide at least 10 ms of recovery time coming out of suspend; a device is required to be able to communicate after this minimum time. Any device which isn't ready for communications after 10 ms of resume recovery time is broken. A host is permitted to provide more than this, but isn't required to.
So, yes, it's perfectly valid for the host to blindly attempt to communicate with the device after 10 ms - presuming that the host KNOWS precisely when the recovery period began. If the host requested that the bus resume, set a timer for 10 ms, and then tried talking, the HOST is at fault because it didn't check with the hardware as to when the resume period began. I think the 17 ms that they reference in the article is related to this - there is a delay between the request to resume the bus and the actual time that the hardware does resume the bus, so they were trying to talk with devices before the 10 ms period was up.
The device is perfectly within the spec if it ignores communications prior to 10 ms, or if it responds to them - it has complete flexibility. After 10 ms, however, it MUST be ready to communicate.
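If that reading is right, the host's 10 ms timer has to be anchored to when the controller actually started signalling resume on the bus, not to when the driver asked for it. A sketch of the arithmetic (the 7 ms controller latency is a made-up number, chosen only because 10 + 7 matches the 17 ms the article mentions; whether that is the real cause is the commenter's speculation):

```c
#define TRSMRCY_MS 10  /* USB 2.0 resume recovery time */

/* Total time the driver must wait, measured from its own resume
 * request, given when the hardware actually began bus signalling.
 * Any latency between the request and the real resume must be added
 * on top of TRSMRCY, or the device gets poked before its 10 ms is up. */
static int safe_wait_from_request_ms(int request_ms, int hw_resume_ms) {
    int controller_latency = hw_resume_ms - request_ms;
    return controller_latency + TRSMRCY_MS;
}
```

With zero controller latency the answer is the bare 10 ms; with a 7 ms lag between request and actual signalling, the driver has to wait 17 ms from its own timestamp.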
Re: (Score:3)
Re: (Score:3)
Sorry, but that's not how it works. If you want to say that there's a consensus to operate out of spec, that's fine, but according to the spec 10ms is the minimum amount of time before you can communicate. Which is another way of saying it's the maximum amount of time you have to get ready for communication. The Intel engineer that claimed it meant that they could take longer than that is an idiot - reading the spec that way makes it meaningless.
I think it is probably wise that other systems give a longer d
Resume? What's that? (Score:5, Insightful)
Back in the mid-1990s to the mid-2000s when I used Windows, I realized sleep mode was a complete joke, unreliable, and just stopped using it by the time I upgraded to Windows XP or shortly after. In Linux, I am still not a fan of waiting for the damn thing to "wake up" for 5-10 seconds before it will even accept my password, so the only component that ever even enters standby on my machines is the monitor (and this has been the case for over a decade, even dating back to my last years in Windows). Windows, Linux--doesn't matter what the OS is, not putting the system into standby makes the whole experience much smoother, faster and hassle-free.
On the other hand, though--it is a good thing this was fixed for those laptop users out there.
Re: (Score:2)
My 8 year old laptop running Windows XP usually wakes up in under 5 seconds from sleep.
When it was brand new it would last around 5 hours with the lid shut and turned on (it consumed around 10 - 12W idle with the screen off, with a 65 Wh battery). On standby it would last weeks.
Standby support for a laptop is mandatory.
Re: (Score:2)
My several-year-old laptop running Xubuntu *boots* in about 5 seconds.
Even now it lasts weeks when powered down. It consumes about 0 watts when powered down.
Standby support is only mandatory for OSes that take a ridiculously long time to boot.
Re: (Score:3)
Not true.
Standby is there for when you want to leave your desktop applications open and still get the benefit of using less energy while you eat dinner or take a walk. Hibernation is great if you need that 0 watts of draw but don't want to close all your programs and start from scratch.
Re: (Score:3)
Re: (Score:2)
How long does it take Ubuntu to load all the applications you had previously open in the same state you left them?
Re: (Score:2)
My problem is that I'm impatient: 5 seconds is too long for me to wait for a machine plugged into a wall outlet to become responsive, and all of my computers to date have been desktop machines. But I agree with you that standby is very important on a laptop.
I have never owned a laptop myself, but I likely will end up buying one at some point and have been considering what I will be doing as far as power management goes. I'm considering running the system 24/7 without the battery (to conserve it) and plugged
Re: (Score:2)
To be honest, back in the 1990's - 2000's this was a problem. But so was a lot of other stuff.
I'm no Windows fan, but I have had Windows XP and Windows 7 laptops for the last 6-7 years. Both basically live their lives in standby or hibernate and a full boot or BIOS screen means I forgot to charge it.
They got turned on in the morning to check websites before work. Put into suspend. Taken to work. Opened and used for 8+ hours. Put into suspend. Taken home. Gamed/browsed on for the next 8 hours. Put i
pm-suspend-hybrid (Score:3)
Try pm-suspend-hybrid. This initiates a normal hibernate: it copies RAM to disk as usual, but at the end it doesn't shut down; it goes into suspend instead. Result: if you come back, it's instant on; but if the power ran out or was unplugged, your state is still saved and you return from hibernation.
People still unplug their stuff or let the batteries run out, so don't expect that scenario to go away until a memory technology that keeps its state without power (such as MRAM) becomes the norm.
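The fallback the comment describes can be sketched as a tiny shell wrapper. This is only a sketch: `choose_suspend_cmd` is a hypothetical helper name, and it assumes the pm-utils package (which provides pm-is-supported and the pm-suspend* commands) is installed.

```shell
# Hypothetical sketch, assuming pm-utils is installed: prefer hybrid
# suspend when the platform supports it, otherwise fall back to a
# plain suspend-to-RAM.
choose_suspend_cmd() {
    # pm-is-supported exits 0 when the given sleep method is available;
    # stderr is suppressed in case pm-utils isn't installed at all.
    if pm-is-supported --suspend-hybrid 2>/dev/null; then
        echo pm-suspend-hybrid
    else
        echo pm-suspend
    fi
}

# As root, one would then run the chosen command: "$(choose_suspend_cmd)"
echo "would use: $(choose_suspend_cmd)"
```

On a box without hybrid support (or without pm-utils at all) this simply falls back to pm-suspend rather than failing outright.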
Re: (Score:2)
There are also different suspend levels. I think they are labeled S3 and something else in the BIOS, with one being slower, more extreme in energy saving, and less reliable. Don't know (or care) about the difference, but again, I haven't messed with sleep mode in years (and will not again until I get a laptop). Anyway, this could be the reason for the speed difference (I do recall trying both modes, and one was much faster than the other). Also, maybe a bit less likely, over the years coming out of sleep
Re: (Score:2)
My two-year-old netbook (AMD C-50, so slower than anything other than an Intel Atom) running Debian Wheezy wakes up from sleep in under a second (suspending is just as fast).
On a side note, I have a netbook with an AMD C-60, and unfortunately the turbo core feature does not work on Linux. The CPU can lower its speed with cpufreq without problems, but I'm not sure the Bobcat platform has proper turbo core support in place. AMD does some OSS work these days, so I wonder if some smart guy there could actually fix this.
So I can close my laptop now? (Score:4, Informative)
So I can close my laptop now instead of carrying it around like a sort of open pizza box for fear of never having a working mouse until the next reboot? How annoying to start a meeting by rebooting a Linux laptop.
Re: (Score:2)
Re:So I can close my laptop now? (Score:5, Insightful)
Because most laptops generally have terrible pointing devices. If they have touchpads, they're usually far too tiny to be useful (Apple ones excluded - why can't others put big ass touchpads on their laptops?)
The rubber trackpoint ones are nice for PCs, though the rubber tips wear down way too quickly and you end up with a slippery lump in short order.
And practically all are pathetic at scrolling. Unless it's an Apple trackpad where the double finger scroll works (once you fix the ()#@% scroll direction).
Life's just generally easier with an external pointing device.
Re: (Score:2)
Re: (Score:2)
Not sure why you would fear never having a working mouse until the next reboot, but besides that, may I kindly suggest configuring your laptop not to go to sleep when the lid closes? It really doesn't have to do that if you don't wish it to.
Re: (Score:3)
That's the obvious solution, but, really, my computer should not lose the ability to be controlled by a USB device just because it went to sleep when the lid was closed. My Windows laptop has done this successfully for, like, fifteen years now?
Re: (Score:2)
No you may not. I want it to spin down the disk for shock resistance while it's in my bag, and cut its power consumption so I'll have a useful amount of battery left when I get to wherever I'm going. I'm not going to shut down, because I don't want to lose where I was in all my applications. This is as bad as Microsoft IIS "it's not a bug - it's a feature" spin.
Just try to boot from a USB drive that powers off (Score:2)
It explains a lot! (Score:4, Insightful)
Re: (Score:3)
Fat chance (Score:2)
You might as well try to tell the Sun not to set tonight.
Re: (Score:2)
Wow, ever heard of something called "comma"?
Wow, ever heard of something called a "comma"?
I also don't find that commas make that paragraph any more readable.
Re: (Score:3)
Re: (Score:3)
Parent is a troll, but s/he has a point. All this talk about sleep mode being useless sounds more than a little silly from the POV of a Mac user. Macs have had fast, reliable sleep/wake since the second generation G3s around 2000. Windows has kinda caught up. You really want this if you actually use a notebook computer. I don't want to have to go saving and closing everything then shutting down when it's time to get on the plane, then having to open it all again and get back to where I was once I'm on
Re: (Score:2)
Wake on LAN is a single checkbox, so you're flat out wrong on that one. Suppressing sleep with lid closed would be nice, I'll give you that. But what exactly is "hybrid sle
Re: (Score:2)
Wow, a massive block of text with no paragraph breaks spanning several topics and a few complete diversions... were you one of the USB spec authors by any chance?
Re: (Score:2)