Linux Software

Linux Handwriting Recognition

Dark Paladin writes "CNN.com is reporting that Communication Intelligence Corp. and Middlesoft are working on Linux-based handwriting software that will work in handhelds and Internet appliances, and will have GTK and Qt hooks. Now we can move from carpal tunnel to writer's cramp." Now I'm curious: what matters more to people, handwriting or voice recognition?
This discussion has been archived. No new comments can be posted.

  • What is this doing here? It never appeared on the main page even.

    But, all-in-all, this is pretty cool. Maybe we'll finally see Palm-style Open Source handhelds.

    And, if you're wondering, an open source handheld would be great. Why settle for what Palm (or Microsoft) thinks is good for you when you can write your own look and feel?! Just think, you could probably add compatibility for both Palm apps and WinCE apps. Now all we need is some good touchscreen support, and we're set! (Or is there already touchscreen support I don't know about?)
  • How would you install Linux on a ROM-based product?
  • by Mara the Dancer ( 90237 ) on Saturday December 25, 1999 @03:36AM (#1446320)
    The dynamics are simple: handwriting recognition has many contexts for use, whereas voice is context-free but immediately useful. But imagine if this stuff _wasn't_ just in an Embedix system somewhere, but on your desktop! I'd like to see some good OCR technology cooking on Linux; the fun part is that OCR doesn't have to run in realtime to do its job. I imagine that temporal gap would open the door to some creative uses, like scanning in all those old notes of yours from class.

    The other reason why handwriting is better than voice recognition, despite the possible and immediate use in handhelds, is that we seem to be fairly saturated with voice technologies, but nobody has created a decent OCR package. (Unless, of course, SANE has done it and I haven't noticed.) Getting not just OCR but handwriting recognition is something to be pushed. Hearing about the US post office using this stuff for sorting is one thing; it's another thing entirely to have advanced stuff on your own machine.

  • by matman ( 71405 ) on Saturday December 25, 1999 @03:40AM (#1446321)
    Ultimately, I think that voice recognition will become commonplace, but that there will always be a place for a good handwritten letter. But will anyone be able to tell the difference? There's a big difference between dictated tone (i.e. what you say) and what you want to be written. A letter usually sounds different when you read it than when you are spoken to by that person. Not only will voice recognition software need to become more accurate, but it'll also need some kind of filter to 'formalize' dictations. I think :)
  • Why are we writing 'writing recognition' software? I thought the whole reason for using computers (typing) was because they were faster...
    I type 120 wpm, I write like 5. Just my thoughts on Christmas morning ;p

    *3rd post*
  • Observe The LinuxCE Project [linuxce.org]. We already have it booting on several Windows CE devices.

    -- Give him Head? Be a Beacon?

  • Imagine, if you will, Voice recognition software that transcribes words into your own handwriting!

    Anyway, I think handwriting recognition is definitely appropriate for handheld computer applications, while voice recognition will evolve into regular applications.

    Just out of curiosity, how long will it be before everyone forgets how to write?

    -- Give him Head? Be a Beacon?

  • Voice and handwriting req
  • Now I'm curious, what matters more to people, handwriting or voice recognition?

    Well, to me it would be which is the most efficient. I know I can certainly type faster than I can write, so I would choose typing over handwriting.

    Voice recognition would be much more useful in situations where the hands are needed for delicate work that only the trained motor skills of a human can manage (fighter pilot, surgeon), with the voice controlling things such as selecting the right tool or weapons.

    So I would rank the three options in this order:

    1. Voice
    2. Typing
    3. Handwriting

    However, I have left one option out: thought control (you know what I mean). The reason I left this out is that I think it is such a long way off. Sure, we can train rats to think about pulling levers, but it will be a long time before we can create machines that can understand brain patterns. It must be even more complex than voice recognition...

  • Voice and natural language are great tools for communicating directly with other people. Computers are not people. Voice and natural language may not be an efficient / effective way of communicating with software elements in a computer.
  • This is cool, but it's probably a little different from OCR. Handwriting systems (like Jot, on which this is based) usually use movement as part of the interpretation, allowing a much wider range of characters to be recognized, but ruling out static character recognition. I for one would love to see a decent OCR package on Linux.

    A couple of free [python.net] or semi-free [umd.edu] efforts have been started, but they seem to have lost momentum. I'm still building up the gumption to start a project, but if anyone else is working on it, please drop me a line...

    Hmm... time to check on the turkey.

  • It would be nice to see this kind of software in linux...

    There are a fair number of projects out there. Heck, I even designed an HR system and implemented it in VB in half an hour (after 3 weeks of design work). That was when I thought I had money to buy a uCsimm and build my own handheld. Since the handheld didn't happen, I left the prototype where it stands, but if anyone wants the system, they are welcome to it. (Email me, and I will describe my method and send you the code.) Anyhow, I think Linux is ideal in many ways for handhelds: it can be (and has been) implemented on MMU-less systems, it's efficient (which is good for small microprocessors), and you can still get lots of full-fledged software for it. Keep up the good work everyone.
  • I never understood what all the fuss about those pen-based computers was. My normal handwriting is so incredibly garbled that I'm about the only one who's able to read it. In contrast, I can type extremely fast. When I'm able to, I type. It's just that those notebook keyboards are too loud to take to university, but if I could, I would take one with me to avoid writing. For OCR purposes it would probably come in handy, and maybe for signature recognition on all sorts of transaction devices, but for me a computer reacting to handwriting or voice control would be a step back. (Can you imagine somebody sitting behind his desk, having just finished his masterpiece, when somebody pops up behind him yelling "format slash why enter yes!!"?) Maybe not realistic, but it's got a point to it. Typing is fast, secure, and error-proof. Maybe I'm looking at it from the perspective of a techie, but voice and handwriting recognition add another layer between the user and the computer instead of removing one.
  • BOTH human-machine interfaces matter for handhelds.

    For most of the present users of unix, the machines are large and in charge, and have keyboards. So voice is a larger market for TODAY. (handwriting -> keyboard -> voice in order of ease)

    And, as voice can exist on handhelds (when handhelds become powerful enough), people will use the voice interface over writing. (because writing is more work than talking)

    Voice on handhelds exists as demo software. The Newton 2000 had voice software from Dragon Systems (20 or so words).
  • by jfunk ( 33224 ) <jfunk@roadrunner.nf.net> on Saturday December 25, 1999 @04:16AM (#1446334) Homepage
    I type much faster than I can write as well, and I've been a "hunt-and-pecker" since I was 5.

    There are three reasons for handwriting recognition I can see off the top of my head:

    • Only a small space is required for portable devices. Ever try typing on an ultra-small keyboard? How about a TI-85?
    • People who never typed very much may be able to write faster than type.
    • OCR of handwritten documents. Very useful.

  • All we need is a touchscreen handheld? Hell, I've had one for months. In June I picked up an Everex Freestyle and reflashed it with BSD, which is available for the MIPS Freestyle at www.freebsd.org. All this is available; it's just waiting for people to realize that Unix really is that portable.
  • While handwriting recognition often gets a lot of gee whiz value, I think I'd find truly resilient voice dictation to be more useful. The reason is that the technology applies to more situations. When one considers how much one communicates verbally (well, those of us that haven't locked ourselves into our cellars with our beloved machines), compared to how much people handwrite - the advantages are clear.

    Additionally:

    1) A *noise resistant* voice dictation technology would apply to /almost/ every situation where one would wish to write. While the two would certainly be complementary, I think voice stands best alone. For instance, PDAs could be made very effectively using only voice dictation. The reason this hasn't been done is the prohibitive hardware cost of wielding such processing power within restricted space, weight, and power budgets. This *will* be overcome, though perhaps gradually.

    2) It is readily apparent that in the desktop situation (i.e. non-portable) a voice technology is FAR *FAR* faster than a handwriting system. As voice improves it will /exceed/ most people's typing speeds; handwriting can NEVER do this. Therefore I would say that many forms of typing will become outdated (though for many applications the additional precision is desirable). I cannot see this happening with handwriting.

    So from my standpoint it seems that voice dictation addresses a true "holy grail" of computing accessibility and ease, whereas handwriting recognition fulfills a few important niche markets. This would be a tough choice to make, except that voice recognition also performs suitably well in 90% of the aforementioned niches.

    As much as I have enjoyed handwriting recognition (I'm a happy Newton owner), I would trade it in a second for a really slick and accurate voice interface.

    -nullity-
  • by Anonymous Coward
    1. Keyboards are too bulky, and not portable in a true sense. Pencil and paper is a very portable medium. (Someone [MIT?] was working on a project that was literally a pen computer -- a pen with accelerometers in the tip to record the letters you stroke out, with or without paper. A link to a PC downloads an entire document stored *in* the pen.)

    2. Pens are easier to use and less restricted in character composition than keyboards. (Think of Japanese, Chinese, etc. with a keyboard. A nightmare!)

    IMHO, the biggest current limitation to computers is their archaic interfaces: monitors, keyboards & mice.

    Besides, while most people can type faster than they write, they can *speak* even quicker. Choose the interface that is best for the application.

    (Think medical charts -- replace that clipboard on the bed with a pen-based computer [big business now] and a voice-recognition chip for doctor's notes.)
  • I can already imagine it.

    hash include lessthan stdio dot h greaterthan feed feed int main open void close feed curlyopen feed indent-one printf open quotes h e l l o comma space w o r l d backslash n quotes close semicolon feed back-one curlyclose feed

    Though I don't really think that editors would try to naively replace the keyboard with a voice recognition interface :-)
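
    Spelled out literally, that dictation is just the classic hello-world program. A quick sketch of the transcription (the line breaks and indentation are my reading of the "feed" and "indent-one" cues):

        /* transcribed from the dictation above */
        #include <stdio.h>

        int main(void)
        {
                printf("hello, world\n");
        }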

  • Also, to add to your list:
    • handwriting is a one-handed activity
    freeing your other hand to hold a handheld, the telephone, etc.

    For a desktop, I actually like having everything in parallel: touchscreen, mouse, trackball, stylus, trackpoint nubbin, and glidepad. I've never had them all at once, but I've had many of the combinations, and they're all useful at times throughout the day. If my hands are on the keyboard, I don't want to take them off, but if I'm standing, talking to someone, on the phone, or my workspace is cluttered, I like to reach for more convenient or demonstrative devices.

  • The problem is, my handwriting is horrible: I can barely read it. I doubt a computer would get very good accuracy.
  • The other fairly compelling reason why handwriting recognition has uses that voice recognition won't fit is:

    Sometimes you don't want to be blurting everything out for the world to hear.
  • All this technology is pretty cool, but I think it will prove impractical in time. I especially see talking to the computer as slowing things down. Sure, it's good in telephone systems and the like, where there is no keyboard, but when it comes to the average desktop computer it would only be a nuisance. Even in telephone applications it can be tiresome; here in Sweden the railroad has speech-recognition ticket ordering, which gets the craziest ideas about where one would like to go.

    Another thing about speaking is that, when the user wants a written report or something like that, she or he would generally produce better material writing than speaking (which gives no time to think about things...).

    Handwriting recognition should be more useful, but, as someone else said here, keyboard input is probably much faster (and uses fewer resources - plus you get the ability to backspace ;-) ).

    // Simon Kågström. Enters his text keyboard-wise.
  • Humans can't read my writing; I imagine software would have a rough time too (and how would it handle all those crossed-out words that I thoroughly messed up?). Plus, I deplore actually putting pen to paper. With a keyboard, people finally read what I have to say, after I fix the screw-ups(TM).

    Merry Christmas and Thanks to All for an informative/entertaining year.

    JM
  • There are several places where pre-set forms are used and filled out that could greatly benefit from handwriting recognition:

    Police Reports/Tickets
    Hospital/Doctor's Charts
    Signature Recognition (with a pressure-sensitive device, a signature could be used as a unique encryption key; it's basically a manual "key" now for banks, UPS, etc.)

    There are numerous applications that lend themselves to handwriting much better than voice (noisy environments, limited-but-varied response questionnaires, etc.).

    I think the best items will combine both (voice notes w/recognition).

    Several WinCE PDAs have the ability to record voice notes (w/o recognition).
  • A lot of you have forgotten: Newtons had really powerful handwriting recognition. Maybe we could see about getting some of their code. A friend of mine uses a Newton 2100 as a replacement for a laptop (117 MHz RISC) for checking his email. It has adapted to his handwriting pretty well. And it's VERY fast.

    just a thought
  • Years ago I installed OS/2 with VoiceType dictation. I was very excited about using my voice to control my computer. My wife was less excited. She all but threatened divorce. It's one thing to spend hours on the computer, but when I started talking to it... well, that was too much for her.

    After years of typing my handwriting has devolved so much I can't see handwriting recognition being any benefit to me.

    Let's skip both and move forward with research on thought recognition. You wear a headband, hat, wrist strap, ring, or something, and you control your computer (or at least a pointer) with thought commands.

    The goal is to have an input device that you could use in public without looking or sounding like an idiot.

  • In a portable device, it is often inconvenient to input data by speech. Suppose you are taking down an address or a note that someone is reading to you (most cases where one needs to take a note); speaking over the other person would be rude, but you don't always want to write down everything they are saying. Handwriting recognition is more useful for a PDA.

    For a specialized wearable computer or robot, one that executes a limited set of commands, voice recognition is more beneficial as it does not require physical contact with the device. Both have their places, but for using Linux on a PDA, handwriting is more useful.
    --
    Gregory J. Barlow
    fight bloat. use blackbox [themes.org].

  • I've never used 'true' handwriting recognition, but having used a Palm Pilot (not a Palm; I had an old USR PalmPilot Pro), I really liked Graffiti.

    I can write in Graffiti much faster than I can in any legible script, and it seems to me that, in the present state of processors and power drain, it is much better to train users to learn the computer's script than it is to train computers to learn the user's script.

    Other than that, I think this is good. Now if eventually I can get something like jot to work with my touchpad in X, that would be great.

  • People keep saying, "Why handwriting recognition? I can type faster than I can write."

    You are missing the point. In many situations, you don't have a keyboard! This is particularly true with PDAs, but any compact, handheld device will have the same need. Many embedded devices would be easier to use with some sort of handwriting recognition.

    If you are sitting in front of a computer, there is no doubt that the keyboard is a better input device, at least for most people, but computers are quickly growing out of the confined model of mouse, keyboard, CRT desktops that we have grown to know and love.

    Think of how we could use Star Trek-style data pads.

  • Imagine, if you will, Voice recognition software that transcribes words into your own handwriting!
    That's a bad idea; I can hardly read my own handwriting. How about having it transcribe to my wife's handwriting instead?
  • I think the question about how we will receive the data is irrelevant. It is more important to design software that can turn any input (text, voice etc.) into some general representation that can then be repeated with any available output mechanism like speech synthesis, text or braille. (And with translation software, it will not be necessary for everybody to speak English. :-) ) Remember, bits are bits.
  • by captaineo ( 87164 ) on Saturday December 25, 1999 @07:46AM (#1446361)
    Chinese, Japanese, Korean, etc are the REAL reasons we're going to need good handwriting and voice recognition. These languages don't map very well into English keyboards, and ones like Chinese simply aren't suited to "typing" at all.

    For English speakers, voice recognition and OCR might seem like neat gimmicks, but they're going to be *vital* to bring information technology to places like rural China, where people are lucky to be literate in their own language, never mind learning a foreign phonetic alphabet and awkward keyboard input methods.

    Check out China's up-and-coming domestic computer maker Legend at http://www.legend-holdings.com/eng/press_centre6.html [legend-holdings.com]... Their basic model includes a keyboard, but more centrally, a writing tablet.

    I'm glad to see Linux-based voice and writing recognition efforts. Imagine this - Linux bringing the Internet to 1+ billion more people...

  • by Ungrounded Lightning ( 62228 ) on Saturday December 25, 1999 @07:58AM (#1446362) Journal
    Using handwriting recognition rather than typing would slow me down enormously, and also raise the error rate. (My handwriting is so awful I often can't read it myself, while my typing is darned good.)

    The only thing a handwriting input device buys me is the possibility of a smaller machine. Voice recognition, on the other hand, gives me computer input in contexts where my hands are busy - such as while driving. It also allows for an even smaller machine, since the input interface can be a pinhole rather than a surface at least the size of a Post-it note. And I can usually speak faster than I can write.

    But while voice recognition may strike me as more enabling than handwriting recognition, it's not an either-or issue. They each add a unique capability. Pick either one, and I'm sure there will be a number of applications where it's a better fit than any other input mode.

    Handwriting recognition seems like the ideal input mode, for instance, for paper computers. Imagine a pad of Post-its, each running PalmOS. B-)

    Also: If they ever come up with a handwriting recognition program that can read my writing better than I can, I've got a lot of old notes around here that I wish I could read...

  • Let's look at the options:

    Handwriting recognition:

    Great, insofar as it helps with OCR on handwritten pages. Not so great as the primary interface to a handheld computer. As much as I prefer the Palm Pilot as a PDA, I'd much rather have a tiny, two-finger-typeable keyboard than use Graffiti (or the onscreen keyboard alternative). Graffiti just doesn't work as quickly for me and feels more awkward. And recognizing "natural" handwriting would be even worse. My handwriting goes from left to right; it doesn't stay within the writing area on a PDA. The "unnatural" Graffiti writing actually helps because it eliminates that instinct to move your hand while writing.

    Voice recognition:

    Sure, it would have been great to dictate homework when I was in the 6th grade and didn't know how to type. But today it would have to get punctuation, spelling of homonyms, etc. exactly the way I wanted just to be marginally better than typing long paragraphs. And as for voice commands? Forget it. It's been like three years since I've had my computer situated out of hearing range of any other computer, and I don't want to work within some wacky scheme to decide which computer is supposed to be listening to me.

    OCR:

    Now, this would be useful. I'd love to live a paperless life, but unfortunately I live within a society that doesn't see things my way. It would be nice for my "filing cabinet" to be completely electronic, but today that's not really a good option unless I save .tif files of every document I want to scan. I'd move to completely electronic filing (except for those documents where someone or some government agency would want the original copy, of course), if only I could throw a search engine and editing capability around it. I'm a pack rat - I've never deleted a nontrivial personal email. But to try and do that with every important paper doc that crosses my path would be impossible without some better way of organizing/indexing them, and that means OCR.
  • I think the real issue is not whether one should come first or the other, but rather who is going to write the defining application. There are already a few voice recognition programs for Linux: gvoice, kvoicecontrol, xvoice, FreeSpeech, and a few others. Handwriting recognition applications are a bit rarer, I'm sure. However, if it's a commercial product, does it really matter? Are we going to pay money for something we can get for free anyway?
  • I think that the MessagePad 2x00 had a 160MHz chip that was upgradable to a 240MHz chip... for a computer that is (at least) 2 years old, this is pretty impressive. As for getting their code, good luck! Apple has refused to allow people to see enough code to fix some fundamental problems with the NewtOS.
    There have been some rumours that Apple might be using some of the Newton technology in a PalmOS computer, but I would take this with a grain of salt, as this has been predicted for over a year with no solid info. If this were true, though, then I think there would be no chance that Apple would release the tech.
  • Voice and natural language may not be an efficient / effective way of communicating with software elements in a computer.

    Perhaps natural language may not be efficient for computer control. But that's a separate issue from voice recognition input.

    We've been programming and controlling computers using UNnatural, contrived languages for a long time - and directing other machines, domestic animals, and human work teams likewise since long before computers, or even recorded history. Think about driving a horse - or a car - military command-and-control, coordinating a crew of sailors, or calling a football play.

    Voice recognition technology gives us a way to capture, lex, and parse vocal gestures. Whether we try to decode the vocal gestures as a natural or a contrived language is an issue that doesn't arise until the parser level.
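
    To make that concrete, here is a toy sketch of my own (nothing to do with CIC's product): once the recognizer hands you a stream of words, deciding whether they form a contrived command language is ordinary lexing and parsing. The two commands here are made up for illustration.

        /* Hypothetical example: parse recognizer output as a tiny contrived
           command language ("open <name>", "delete line <n>"). Words read
           from stdin stand in for whatever the speech engine recognized. */
        #include <stdio.h>
        #include <string.h>

        static void dispatch(int n, char **w)
        {
                if (n == 2 && strcmp(w[0], "open") == 0)
                        printf("-> open document \"%s\"\n", w[1]);
                else if (n == 3 && strcmp(w[0], "delete") == 0 && strcmp(w[1], "line") == 0)
                        printf("-> delete line %s\n", w[2]);
                else
                        printf("-> not a command I know\n");
        }

        int main(void)
        {
                char line[256];
                while (fgets(line, sizeof line, stdin) != NULL) {
                        char *words[8];
                        int n = 0;
                        char *w = strtok(line, " \t\n");
                        while (w != NULL && n < 8) {
                                words[n++] = w;
                                w = strtok(NULL, " \t\n");
                        }
                        if (n > 0)
                                dispatch(n, words);
                }
                return 0;
        }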

  • I was just looking through the documentation for my wife's new Palm (and O'Reilly's book on getting the most out of it). It strikes me that it wouldn't work terribly well for languages other than those supported by ISO-8859-1 (Latin-1). That is an excellent default, but it won't support my I18N work completely.
  • Handwriting, voice recognition, sign language recognition, face recognition, special keyboards all good.

    Large & small screens, speech output, Braille, 3d all good.

    Different devices, different users, different situations, and different tasks call for software that can adapt its interface.

    There is no easy way for a programmer to take advantage of these different scenarios now, or easily write software that adapts to new environments.

    This sort of thing could be done at the component level in a toolkit. It would be a boon to mobile and disabled users. You could use the same software, and programmers could more easily write their software for several audiences.

  • Since IBM is another large factor pushing Linux's promotion, it can later decide to port its voice-recognition applications to the Linux platform. But the handwriting recognition was an excellent step, and will allow more Internet appliances to pop up supporting handwriting recognition, running Linux, and allowing anyone with the proper knowledge to tweak the hell out of the machine!
  • Handwriting recognition of Chinese characters would be useful for Chinese users, because most people aren't willing to invest the time to learn one of the many Chinese input methods. It's much easier to write the character out than to press a sequence of keys which represent the component parts of the character...
  • by 240 ( 120664 )
    Neither voice nor handwriting input would do much to change the 'human-computer communication bottleneck'. Which is a very big bottleneck: while we receive very large amounts of information from the computer through our visual system, we talk back to it (using a keyboard) at only ~5 bits per second - much less than you got with even the very first modems! [English text has around 1.5 bits per character, there are on average about 5 characters per word, and we type at 40 words per minute (for me, only when typing gibberish); the arithmetic is spelled out at the end of this comment.]

    The answer, I think, is to sacrifice some small part of our primary visual cortex (the bit of the brain your little finger is over when you're sitting back with your hands behind your head) to interfacing with the computer: surgically implant an electrode chip with a radio transmitter that has just enough power to transmit through bone, then let it heal over so there is no risk of infection (which would happen if you had wires hanging out of your skull).

    These electrodes would then be used to decode the nerve cell activity in a small patch of brain tissue. By visualising characters in that part of your visual field, you could communicate with the computer [the jury's still out on whether 'visual imagery' reaches all the way down to the primary visual cortex, but I hope it does].

    The end result might be that we could have a unix command-line interface in a small strip in the lower part of our visual field, where we just visualise command-words. Of course, English is a pretty inefficient language for communicating in this way, so performance would be improved by operating with a specifically optimised alphabet (even Chinese would be much more efficient for this, you'd think).
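
    (For the curious, the ~5 bits per second figure at the top of this comment is just the product of the three rough estimates given there; a throwaway calculation using those assumed numbers:)

        /* Back-of-the-envelope keyboard bandwidth, using the figures above. */
        #include <stdio.h>

        int main(void)
        {
                double bits_per_char  = 1.5;   /* rough entropy of English text */
                double chars_per_word = 5.0;   /* average word length */
                double words_per_min  = 40.0;  /* typing speed */

                double bits_per_sec = bits_per_char * chars_per_word * words_per_min / 60.0;
                printf("keyboard output: %.1f bits/s\n", bits_per_sec);  /* prints 5.0 */
                return 0;
        }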
  • by hatless ( 8275 ) on Saturday December 25, 1999 @11:21AM (#1446373)
    At this point, voice recognition and speech dictation have gotten good enough for certain narrow applications. Neither by itself is going to be accurate enough for general use, because both technologies need to be targeted to a specific context and vocabulary to work well.

    On the other hand, if you combine speech recognition with a system of gestures and written jottings for doing corrections on the fly and for "nudging" the interpretive engine in one direction or another, you can probably increase the speed at which fully-corrected speech or text gets input by several orders of magnitude. Such a system gets rid of the need for stopping dictation/writing to go into an "edit" mode.

    Current products show a strange myopia--designs that do handwriting or speech recognition as though users are unable to do both at once, partly an outgrowth of these technologies' origins as accessibility tools. That is, while it's terrific that someone with no mobility can use ViaVoice to fully operate any software other than perhaps a raster imaging package, this approach has made these technologies more tedious and linear than they need to be.

    Indeed, such a thing may not even need handwriting recognition to get most of the benefit. I'd love to see what could be done to speech dictation performance with a gesture interface implemented on a pen tablet.
  • Find it here [ibm.com] in beta.

    Even though it is still in beta, it is already being used as a foundation for things like GVoice. [ogi.edu]
  • Like many other posters here, I can type much faster than I can write - and my handwriting was distorted to caps-only during my punchcard years. So nowadays I sign checks and scrawl notes for my wife... the rest is typed.

    Unlike many others, I can also type faster than I can talk. Frankly, I think speech input should never be a universal option, as there are many people who have some sort of speech problem, a foreign accent, or who just plain mangle their language (just listen to any teenager :-).

  • For some reason, writing on a PDA seems less intrusive in public contexts (like a business meeting) than typing into a computer. For years I carried around a Newton for notetaking, and it was great for the quick notes and sketches that I used to use a notepad for. Now I use a Pilot in similar situations.

    If you use a laptop in a meeting, for some reason you seem like you are ignoring the other attendees.
  • by / ( 33804 )
    handwriting is a one-handed activity

    I can see a large market for hand-writing recognition among certain segments of the online population who use their computers for certain activities that use a certain other hand....
  • I wish you had explained your position that "a lot of people find handwriting recognition crucial". From what I took from reading the other articles, it seemed that more people were dismissing it than encouraging it. I also couldn't figure out whether you were attributing your "more like Graffiti than the Newton" comment to the many people who strongly desire handwriting recognition, or whether that was more your own opinion.

    The Newton's data input system is one of the biggest things that I miss when using my Pilot. When I tried "Jot" I realized that a Newton-like data input system doesn't work with screen real estate as small as a Pilot's, but if I wanted something as big as a Newton, I'd still be using one.

    As I said in another article in this thread, there are situations, like meetings, where jotting down notes seems less intrusive to others than typing. Voice recognition would be even worse. And for anyone who says they can type much quicker than they can write, I'd suggest they try a Psion PDA for a bit. (Although smaller than a Newton, even Psions are looking a bit hefty these days.) Once you get down to the size of a PDA, there is no data input system that is convenient - except maybe the desktop computer that you are synchronizing your data with.

    So, could you point me to all the people who were clamoring for "Graffiti-style" data input?

  • I wouldn't have thought the Hurd would run on it (yet?), or was stable enough for production use, anyway.

    Oh, you were talking about Linus' OS..

  • Been There, Done That.

    Come on folks!

    It's called the Newton MessagePad 2100.

    I don't know what Steve Jobs did with the technology, but these little honeys really did the job.


    I don't see that handwriting recognition is going to help Linux in any way; most things Unixy are designed for the keyboard with occasional mousing around. To really make efficient use of the technology you need a shell or operating system designed to support it.

    For references, check out anything related to the Newton, or for a good view of the damage Redmond has done to pen computing, check out the book Startup: A Silicon Valley Adventure by Jerry Kaplan.
  • Handwriting recognition could be useful for languages with too many characters to feasibly put on a keyboard (such as Chinese), and for OCR'ing handwritten documents. As an input method for languages using a latin-based alphabet, it's pretty useless, for me at least. I can type much faster than I can write, and I think this is true for pretty much anybody with more than rudimentary typing skills. I'll stick with typing, thanks.

    As for voice recognition, perhaps there are some limited uses, but I don't see this as being useful for me or most of the people I know either. I don't want to be talking at my computer - this would require either wanting everybody around to hear what I'm saying or being alone. I don't think it'd speed me up much either. I can type around 100-120 wpm, I doubt I can talk much more than 10% faster than this.
  • Just out of curiosity, how long will it be before everyone forgets how to write?

    That has already happened where I live. Instead of writing, people here just mope around clattering their teeth like beavers for communication.
    Dilbert: I have become one with my computer. It is a feeling of ecstasy... the blend of logic and emotion. I have reached...

  • > Just think, you could probably add compatibility for both Palm apps and Wince apps.

    What makes you think that implementing the WinCE API is going to be any easier than implementing the Win32 API, a project which WINE has been working on for some years now?

    I am positive that there is more than one aspect of the WinCE OS that is undocumented (I can tell this simply by looking at what is done, compared with what the documentation claims is possible).

    Shachar
  • This is only a problem if you really want to communicate with a computer. I like to think they are mere servants, not creatures with whom I want to communicate.
  • The whole point of me learning to touch type when I was 15 was because I hate handwriting! Why would I ever want to go backwards like that? I can write much faster on a keyboard than by hand. I can't handwrite as fast as I can think so I mess up and write the wrong letters and so on. No such problem with typing.
    It doesn't make sense to come up with backwards technology. Give us something that is more efficient than a keyboard and we're talking. Voice input? I think I want something that doesn't make me look like a lunatic, it's bad enough to see those idiots in town just standing there and babbling into their hands-free, invisible mobile phones.
    TA
  • Just want to add my vote strongly on the handwriting side. A few points:

    * I think most people have low expectations for handwriting recognition because the most they are familiar with is Graffiti. Graffiti is bunk. It's a highly circumscribed gesture system: special characters, one at a time, not 'in place' on the screen, no visible ink. There are some acceptable reasons that Palm chose it when they did, but real handwriting recognition is something else entirely. Expect it to read your writing with 99.9% ease and accuracy, and within a couple of years we'll be there.

    * HWR is not for desktops, it's for portable devices. Keyboards are easier for most everyone when they are practical. That said, there's at least one place where good HWR really excels, and that's with form based info. Sure you can tab between items with a keyboard easily enough, but there's nothing more intuitive than writing information _where it goes_. It's fluid.

    * Voice recognition has its place, but it's also a drag. I remember reading an interview with Jeff Hawkins (Palm/Handspring) where he describes trying out handheld voice recognition by talking to a dummy handheld for a day. Drove him crazy. I think that goes for most people. Try it for 15 minutes yourself, especially with other people around. Now compare that to writing with a stylus. Our technologies should be graceful, not vulgar (I say). That's how I'd characterize the two. (To elaborate: yes, there is nothing more natural than speech. But talking is not a data input device, and if a friend ever spoke to me that way I'd probably hit them.)

    The CIC news is great. Handhelds are probably _behind_ where you'd think they would be by now. I'm sure some Linux competition will get things moving again...
  • hash include lessthan stdio dot h greaterthan feed feed int main open void close feed curlyopen feed indent-one printf open quotes h e l l o comma space w o r l d backslash n quotes close semicolon feed back-one curlyclose feed

    hello.c:4:warning: main() declared as returning type 'int' but does not return expected type
  • > Just out of curiosity, how long will it be before everyone forgets how to write?

    Very good question... :)

    Palm and the Graffiti language are giving us a good start on that very possibility. Eventually, there will be further divisions between those who can write, those who can only write Graffiti, and those who cannot write at all.

    In Neal Stephenson's book "The Diamond Age", he makes reference to "mediaglyphs." I assume them to be iconic references for common items or concepts, sort of like the international symbols of today. While this can be beneficial, it sure represents a great opportunity for further "dumbing down."

    My Newton MessagePad taught me to write better cursive (BTW, with practice I was getting 98% accuracy); my Palm is teaching me to write Graffiti. (And don't forget, Graffiti was available as a third-party product for my Newton years ago, so I could use either back then.)

    How long before everyone forgets how to write? Just keep an eye on the handwritten work that teachers receive in school. When it starts turning up interspersed with Graffiti, we'll know we're on the way.

    Hopefully, the technology can catch up quickly enough to give us acceptable cursive recognition (sorry to be so Anglo-centric). I would love to see Apple GPL the code for their Newton recognition stuff. It's not perfect, but with the increase in handheld processor capability and speed in the last few years, and a bunch of talented Open Source programmers working on it, it could be whipped into shape in no time. :)

    Are you listening, Apple? Hey Steve, maybe you can write it off as a tax break...

    Russ
  • Tell that to my mother and the thousands of other secretaries who took dictation through the ages.

    They listened to the author, wrote what was said in a phonetic shorthand in real time, then went back to their desks and re-translated their shorthand into typed output.

    The only reason they used that system was because they couldn't type as fast as the person talked, but they could write (shorthand) as fast as the person talked...
  • Don't know if this will appear where I want; apologies if it doesn't (I'm a /. newbie). Anyway, I saw a Maclean's article saying some grotesque number like 40% (unsure of that) of Canadians would like an American passport. I'm not sure why; I think it's at least in part because we allow ourselves to have an inferiority complex toward the Americans, for no justifiable reason. There is nothing an American passport, aside from citizenship, gets you that a Canadian one doesn't. Canada needs to look more at itself; culture rules don't work, and I'm not sure what does, but solving a problem only works once you figure out what it is.
  • Actually, Korea mainly uses its phonetic script, Hangul, which types just fine. The Japanese phonetic scripts, Hiragana and Katakana, are especially easy to type, as you can basically just type the romanized version and get the script. Unfortunately, the Japanese have a penchant for kanji, the Chinese-derived characters, which makes things much more laborious. Voice recognition for Chinese will need a lot more contextual analysis, because there are so many different characters that all have the same sound. Without the analysis, the user would have to choose which character they mean from a list, which is what is currently required anyway.
  • by pne ( 93383 )
    Chinese, Japanese, Korean, etc are the REAL reasons we're going to need good handwriting and voice recognition. These languages don't map very well into English keyboards, and ones like Chinese simply aren't suited to "typing" at all.

    Actually, Korean isn't that bad since it does have an alphabet which maps fairly well onto an English keyboard (upper and lower case letters; 14 consonants + 21 vowels = 35 total letters which is less than 2*26 = 52). It only becomes a problem with hanja (Chinese characters), but they aren't used that much in modern Korean. For Chinese and Japanese you're right, though.

    Cheers,
    Philip
