How to use multimodal speech and touch for CAD modeling

    Motivation: 3D modeling has been revolutionized in recent years by the advent of computers. While computers have become much more affordable and accessible to the masses, computer modeling remains a complex task with a steep learning curve that demands extensive training.

    According to one survey, 80 percent of people want to model and create on their computers to visualize their imagination; however, the difficult UI of such tools prevents them from doing so. Even to model something simple, the user has to navigate an obtrusive set of icons, toolbars, and features that are rarely used.

    We propose a computer modeling interface that brings 3D modeling to laypeople who wish to rapidly visualize their imagination in 3D.

    This work is motivated by the goal of employing natural expression with the fewest restrictions, freeing CAD users from tedious command buttons and menu items. We have explored both the hardware and software aspects of the interface: specifically, the use of intuitive speech commands and multitouch gestures on an inclined interactive surface. The very initial touch-based integration was done at Google Summer of Code; the touch + speech multimodal fusion was developed with SriG at HP Labs. Thanks: SriG, NUI Group forums.
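To make the idea of touch + speech fusion concrete, here is a minimal, hypothetical sketch of one common approach: pairing each recognized speech command with the nearest touch event inside a small time window. All names (`Event`, `fuse`, the window size) are illustrative assumptions, not the actual MozArt implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # "speech" or "touch"
    payload: str    # recognized command, or id of the touched object
    t: float        # timestamp in seconds

def fuse(events, window=1.5):
    """Pair each speech command with the closest touch event
    occurring within `window` seconds of it."""
    speech = [e for e in events if e.modality == "speech"]
    touch = [e for e in events if e.modality == "touch"]
    commands = []
    for s in speech:
        nearby = [t for t in touch if abs(t.t - s.t) <= window]
        if nearby:
            target = min(nearby, key=lambda t: abs(t.t - s.t))
            commands.append((s.payload, target.payload))
    return commands

# Saying "extrude" while touching a face yields one fused command.
events = [
    Event("touch", "face_12", t=10.0),
    Event("speech", "extrude", t=10.4),
    Event("touch", "face_7", t=20.0),   # touch with no nearby speech: ignored
]
print(fuse(events))  # [('extrude', 'face_12')]
```

Time-window pairing like this is the simplest fusion strategy; real systems must also handle recognition confidence and commands that arrive slightly before or after the gesture they refer to.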

    [stextbox id="info"] Dec 2010: Our research paper on MozArt selected for publication at ACM IndiaHCI 2011 [/stextbox]

    [stextbox id="info"] Sept 2011: MozArt will be published and demoed at the 13th ACM International Conference on Multimodal Interfaces (ICMI 2011). [/stextbox]

    [stextbox id="download" float="true"] MozArt ACM ICMI Research Paper [/stextbox]