Dasher meeting Fri 25/3/05

Progress reports

Phil

Phil is implementing several new language models. At present these are stand-alone models for evaluation, not yet integrated into Dasher. No new model has yet been found that beats standard PPM by more than 5%.
The new models include ones that use recent words as context, and ones that ignore context further back than the most recent word boundary (i.e. like Martin King's T9).
Phil has added bit-rate instrumentation.

Keith

Speech Dasher is working with Dragon.
A new button allows the user to confirm that the whole-utterance best guess is correct. (It was suggested that a dwell-based alternative be included for people who can't click.)
Code is instrumented and ready for user trials.

David M

Most alphabet files are now believed to be correct. Arabic works nicely, including cursive writing in text boxes.
Combining accents are also working well in some Linux fonts, but not all. (Which shows, I think, that we have done it right; it's a font defect.) For the time being all our European languages use the "composed" form (i.e. we do not use combining accents), but from time to time we should revisit this issue and ask Europeans whether they would prefer to use combining accents. In anticipation, I have created a single Latin ISO-8859 alphabet in which every Latin-script language can be written.

Chris

Groups and subgroups were completed in November 2004 but do not work with Kaburagi's Japanese groups.
Button modes
There is a working one-button static mode (i.e. click when the pointer is in the correct location). N-button direct mode and 2-button menu mode are also working. "Click mode" should be ready too.
Bugs include
  • problems when the user backs up rapidly
  • static one-button mode is not yet ready for testing because we need interpolation and time correction (e.g. subtract 100 ms).
  • the new modes do not yet all work in a single Dasher executable.
One-button dynamic mode (metronome mode) exists but is not currently integrated. Chris has invented another one-button dynamic mode in which Dasher zooms steadily and relatively slowly towards the point at the top right of the canvas; this means that the desired target will at some point fall off the bottom of the screen. The user clicks when the desired target enters a red zone, and clicking causes a zoom in on the red zone (as in two-button direct mode). A second button is used to stop; that stop button could also be used to back up (reverse).
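
As a rough illustration, here is a minimal sketch of the click logic for this mode. It collapses the canvas to a single vertical coordinate and uses made-up names (ModeState, frame, RED_ZONE_TOP); the real mode would of course act on Dasher's canvas coordinates and zooming machinery rather than a single scalar.

  from dataclasses import dataclass

  RED_ZONE_TOP = 0.8   # fraction of canvas height where the red zone starts
  STEADY_RATE  = 0.02  # slow, constant zoom towards the top-right point, per frame

  @dataclass
  class ModeState:
      target_y: float = 0.3   # target's vertical position: 0 = top, 1 = bottom
      stopped: bool = False

  def frame(state: ModeState, click: bool, stop_button: bool) -> ModeState:
      """One frame of the hypothetical one-button dynamic mode."""
      if stop_button:
          state.stopped = True      # second button stops (or could back up)
          return state
      if state.stopped:
          return state
      # Steady zoom towards the top-right point: everything drifts downwards,
      # so the desired target eventually falls off the bottom of the screen.
      state.target_y += STEADY_RATE
      if click and RED_ZONE_TOP <= state.target_y <= 1.0:
          # Click while the target is in the red zone: zoom in on the red zone
          # (two-button-direct-like), rescaling it to fill the whole canvas.
          state.target_y = (state.target_y - RED_ZONE_TOP) / (1.0 - RED_ZONE_TOP)
      return state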

Ingrid

Ingrid is testing N-button direct mode and 2-button menu mode.

Language model issues

  1. It would be nice to include recency. One possible way to handle this is to maintain two tries: one holding the bulk of the corpus and one holding, say, the last 1000 characters. Ideally a componential or mixture model would be used, which would identify which of the experts is contributing best to the current predictions (see the first sketch after this list).
  2. At what time should the PPM language model learn? Should we bother making individual nodes have slightly different language models from each other? (I think not, since it only makes a big difference for completely untrained models.)
  3. Idea: identify a special character that means "reset the context at this point". This would be used in training texts that include lists of unrelated phrases or unrelated words (e.g. dictionary lists).
  4. Storage of the learned trie directly to file, and reading of the trie back instead of re-reading and re-training. (The first two bytes of this stored structure could define the language-model protocol, so that new language models can be added.)
  5. A dictionary-based, T9-like language model: would it go faster if we stored the dictionary in a trie rather than a hash table?
  6. Is user speed well predicted by "just bits"? An experiment to test this: log an expert user's x-coordinate.
  7. Possible new parameters to replace "smoothing" (5%): maximum permitted character size (e.g. 90%) and smoothing (e.g. 2%). (Mick Donegan's request; see the second sketch below.)
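
For item 1, a minimal sketch of one way the mixture could work. It assumes two hypothetical expert objects (e.g. a bulk-corpus PPM and a recent-characters PPM), each exposing predict(context) returning a dictionary of character probabilities; the exponentially decayed log-loss weighting is just one illustrative choice, not something decided at the meeting.

  import math

  class Mixture:
      def __init__(self, experts, decay=0.99):
          self.experts = experts          # e.g. {"bulk": bulk_ppm, "recent": recent_ppm}
          self.log_scores = {name: 0.0 for name in experts}
          self.decay = decay              # gradually forget old evidence

      def predict(self, context):
          # Weight each expert by how well it has predicted recent characters;
          # subtract the maximum score before exponentiating for numerical safety.
          m = max(self.log_scores.values())
          w = {n: math.exp(s - m) for n, s in self.log_scores.items()}
          total = sum(w.values())
          dists = {n: e.predict(context) for n, e in self.experts.items()}
          chars = set().union(*dists.values())
          return {c: sum((w[n] / total) * dists[n].get(c, 0.0)
                         for n in self.experts) for c in chars}

      def observe(self, context, char):
          # Credit each expert with the log-probability it assigned to the
          # character that was actually written.
          for name, expert in self.experts.items():
              p = max(expert.predict(context).get(char, 0.0), 1e-12)
              self.log_scores[name] = self.decay * self.log_scores[name] + math.log(p)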
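
And for item 7, a tiny sketch of how the two proposed parameters might act on a predicted distribution. The function name and the redistribute-the-excess loop are assumptions about how the cap would be enforced, not an agreed design.

  def apply_limits(probs, max_size=0.90, smoothing=0.02):
      """probs: dict mapping character -> probability, summing to 1.
      Assumes max_size * len(probs) >= 1 so the excess always has somewhere to go."""
      n = len(probs)
      # Mix in a uniform floor so no character gets an impossibly small share.
      p = {c: (1 - smoothing) * q + smoothing / n for c, q in probs.items()}
      # Cap oversized characters and hand their excess to characters that still
      # have room; repeat until nothing exceeds the cap.
      capped = set()
      while True:
          over = {c for c, q in p.items() if q > max_size + 1e-12}
          if not over:
              return p
          capped |= over
          excess = sum(p[c] - max_size for c in over)
          free = [c for c in p if c not in capped]
          free_total = sum(p[c] for c in free)
          p.update({c: max_size for c in over})
          p.update({c: p[c] + excess * p[c] / free_total for c in free})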

Steering issues

  1. Automatic speed control. Hopefully a student called Chris will work on this during the summer.
  2. Tilt sensor: we have been lent a tilt-driven hand-held mouse. The mouse does not work! This would make a grand summer project for a student.
  3. Oliver Williams from CUED gave a great talk demonstrating live, video-driven use of Dasher by simple gestures (e.g. hand motion). It should be easy for him to make a free nose tracker too.
  4. Singing/humming as one-dimensional input: Keith will do.
  5. Button experiments: direct and menu modes are both under way (Ingrid). Other button modes need to be tested too. What buttons should we use? Pswitch? ACE Centre switches? (Both have latency/delay problems for fast users.) We could make our own modified keyboard.
  6. Can we/Ryan/Oliver make video-based switch events (e.g. an eyebrow raise)?
  7. Add a breath signal to breath Dasher to ensure that the user is forced to breathe.
  8. Eyetracker: it would be nice to have one that works in Linux. Headmouse: ditto. (A possible project for an expert programmer.)
  9. Brain Computer Interface? DJCM might go to a meeting in June.

Platform issues, usability, integration

  1. We need a meeting with DJW to discuss Tablet PC, Pocket PC, and .NET.
  2. The Palm Pilot is a challenging platform to work on: it is difficult to find a volunteer developer, but perhaps if money were available we could pay someone to handle the port. Chris will send out an email.
  3. Mac and Windows will in due course require a little work to add (1) the button-Dasher dialogs; (2) the language-model dialogs.
  4. We discussed the idea of generating the source code for the dialog windows automatically from a single universal specification.
  5. We discussed whether using GNOME's GConf and the Windows registry is the best way to store the user's Dasher settings. Doing it ourselves, with values stored in a dasherrc file, feels simpler; but it was agreed to continue with the registry approach.
  6. Integration of Dasher as an input mode.
    This is difficult on Windows because the desired accessibility features are not present.
    Talk to the GOK people about getting Dasher integrated as an input mode in GNOME. A good task for a dedicated expert programmer.
    Windows (Tablet PC): is there a problem with the (keyboard-shaped!) geometry of the area allowed for an input method?
  7. To discuss further:
    Dasher's control mode menus, speech production, and the driving of other computer functions (e.g. scripts) from within Dasher.

Japanese

  1. One possible idea is to get Kaburagi and Frederik (from Caltech) to work together on making the new language model that is needed.
  2. Hopefully we could get Martin King's colleague Cliff (Japanese T9 expert) to advise us too.
  3. We await the results of Itoh's gaze experiments with interest.

COGAIN

  1. We will go to the language-modelling camp on Saturday 28 May or Sunday 29 May and come back on the evening of Friday 3 June. We get to give about three hours of presentations about language models and Dasher.
  2. Tobii, the eyetracking company, is interested in integrating Dasher with their system, but are they going to do the required work? (It seems they would like us to, but we are too busy at present.)
  3. COGAIN funding has been used to send Ryan to interact with Tobi Delbruck.

Future projects

  1. Tasks for undergraduate projects:
    • Automatic speed control - Chris
    • Automatic eyetracker calibration (more than one scalar) - Chris
    • Button Dasher
    • Tilt Dasher
  2. Tasks for professionals
    • Interface design and polishing. Get expert advice on enhancing usability.
    • basher (i.e. a shell in which commands and filenames are all written in Dasher style) [Luke is working on source-code writing]
    • Get Dasher working on the PlayStation 2, with the aim of getting it used on all those new handheld PS machines. Good for publicity.
    • Make Dasher a fully integrated input method for Tablet PC and for Pocket PC.
    • Use of Dasher for non-writing tasks: searching for a string in a document, or in MANY documents (e.g. in all your files, like glimpse or Beagle).
      The idea is that the interface would show you which strings are legal and allow you to write them quickly. This requires Google-like indexing of all the data.
    • Peano Dasher
  3. Tasks for us to do:
    • Get scripts built into the control mode menus
    • Japanese project (as above) - make special language model
    • Chinese project: we probably need a similar special language model for Chinese; at present we have no active developers or researchers helping us.

Bug list

  • Should be able to back up as far as you want.
  • The cursor should be visible in the text box at all times.
  • Focus problem (we need to get focus back to the canvas).
  • From DJW:
    The main thing making it difficult to integrate new language models into Dasher (or to improve existing ones) is having all the control-node code inside the current PPM model. Another problem I recall is having the control-node characters added to the alphabet. I think a bit of redesign is necessary to decouple stuff like this.
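
As a rough sketch of the decoupling DJW is describing (class and method names here are assumptions, not Dasher's actual class hierarchy): the language model would predict only over the alphabet's symbols, and a separate layer would inject the control node on top of any model's predictions.

  from abc import ABC, abstractmethod

  class LanguageModel(ABC):
      """Knows nothing about control nodes; works purely on alphabet symbols."""
      @abstractmethod
      def predict(self, context: str) -> dict[str, float]:
          ...
      @abstractmethod
      def learn(self, context: str, symbol: str) -> None:
          ...

  class ControlNodeLayer:
      """Adds the control-node entry on top of any LanguageModel's predictions."""
      def __init__(self, model: LanguageModel, control_mass: float = 0.05):
          self.model = model
          self.control_mass = control_mass

      def predict(self, context: str) -> dict[str, float]:
          base = self.model.predict(context)
          scaled = {s: (1 - self.control_mass) * p for s, p in base.items()}
          scaled["<control>"] = self.control_mass   # reserved pseudo-symbol
          return scaled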
