by touchaddict on Friday, 21 October 2011
[Work in progress]
Recently I visited a commercial doctor chain in India for a minor medical diagnosis. I was given a small plastic bottle to pee into and return for a diagnosis at a later date. On my way out, around five more people at the payment counter were holding similar bottles. That's where it all began.
Jaideep, my collaborator on the earlier shoe hack, and I began some research back home into how an interface could put urine content data to work for health reporting. The statement below was inspiring to us:
Most signs of an impending human ailment can be detected from urine.
A human transacts with a toilet almost daily; what better place to integrate a urinalysis kit? We started with an alcohol analysis kit that presents contextual, thought-provoking messages to the user in real time. Human urine is analyzed based on its chemical and molecular properties or by microscopic assessment. Urinalysis is a very useful test that may be ordered by your physician for particular reasons: it is commonly used to diagnose a urinary tract or kidney infection, to evaluate causes of kidney failure, and to screen for the progression of chronic conditions such as diabetes mellitus and high blood pressure (hypertension).
I am currently working with Jaideep on prototyping a simple application based on the above concept, initially with an alcohol sensor. We plan to extend the system with glucose, sugar, and salinity monitoring, and to build a recommendation engine on top of that.
Messages we plan to generate will follow this template:
String “You have VAR_X content, which is (High/Low). Generic_MSG + Recommendation”
“You have High alcohol content right now. Don’t drink and drive! Consume lots of water/juices tomorrow.”
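A minimal sketch of how such a message could be assembled (C++ here just for illustration; the function, analyte names, and thresholds are hypothetical placeholders, not medical reference values):

```cpp
#include <cstdio>
#include <string>

// Hypothetical sketch: fill the message template for one analyte reading.
// The threshold is a placeholder, not a medical reference range.
std::string buildMessage(const std::string& analyte, float reading,
                         float highThreshold,
                         const std::string& genericMsg,
                         const std::string& recommendation) {
    const char* level = (reading >= highThreshold) ? "High" : "Low";
    char buf[256];
    std::snprintf(buf, sizeof(buf),
                  "You have %s %s content right now. %s %s",
                  level, analyte.c_str(),
                  genericMsg.c_str(), recommendation.c_str());
    return std::string(buf);
}

// Example: buildMessage("alcohol", 0.09f, 0.05f,
//          "Don't drink and drive!",
//          "Consume lots of water/juices tomorrow.");
```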
At present, people with limited or no vision depend on walking canes, which help them detect obstructions, on assistance from friends and passers-by, or on voice-based navigation aids. Existing voice-based navigation aids can be very distracting for the blind, who depend heavily on their sense of hearing. Such devices are also prohibitively expensive.
Le Chal is a wayfinding aid for the visually impaired that uses a language of vibrations, complementing the user's natural adaptations and extending what they can do on their own.
The unobtrusive design of Le Chal is its most significant feature. The system comprises a mechanism that condenses complex geographical navigation information and lets the user feel direction and proximity through vibrations.
The user speaks the destination into a GPS-enabled Android device. After the location is confirmed, the device goes back in the pocket! There is no further direct interaction with the cellphone.
The shoe and phone maintain a wireless connection.
The GPS receiver in the cellphone gets the real-time location using Google Maps. The built-in compass in the GPS module calculates the direction the user is walking in. As a turning point approaches, a mild vibration in the shoe tells the user which way he or she needs to turn.
The strength of the vibration depends on the overall proximity to the destination: it is weak at the beginning and grows incrementally stronger toward the end of the navigation task.
The shoe's built-in proximity sensor can detect obstacles up to 10 feet away, informing the user about the surroundings and allowing him or her to make decisions and plan the next move.
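For the curious, here is a minimal Arduino-style sketch of the turn-and-proximity feedback described above. Pin numbers, thresholds, and distance ranges are assumptions for illustration, not the actual Le Chal firmware:

```cpp
// Minimal Arduino-style sketch of the feedback logic described above.
// Assumes a driver transistor between each PWM pin and its motor.

const int LEFT_MOTOR_PIN  = 5;   // PWM pin driving the left vibration motor
const int RIGHT_MOTOR_PIN = 6;   // PWM pin driving the right vibration motor

// Turn the heading difference (-180..180 degrees) into left/right buzzing,
// with intensity growing as the user gets closer to the destination.
void vibrateForTurn(float headingError, float metersToDestination) {
  // Closer to destination -> stronger vibration (weak start, strong finish).
  long strength = map((long)constrain(metersToDestination, 0, 1000),
                      1000, 0, 60, 255);
  if (headingError < -20) {            // turn left
    analogWrite(LEFT_MOTOR_PIN, strength);
    analogWrite(RIGHT_MOTOR_PIN, 0);
  } else if (headingError > 20) {      // turn right
    analogWrite(RIGHT_MOTOR_PIN, strength);
    analogWrite(LEFT_MOTOR_PIN, 0);
  } else {                             // on course: stay quiet
    analogWrite(LEFT_MOTOR_PIN, 0);
    analogWrite(RIGHT_MOTOR_PIN, 0);
  }
}

void setup() {
  pinMode(LEFT_MOTOR_PIN, OUTPUT);
  pinMode(RIGHT_MOTOR_PIN, OUTPUT);
}

void loop() {
  // In the real system, heading error and distance arrive over the
  // wireless link from the phone; these values are placeholders.
  vibrateForTurn(/*headingError=*/30.0f, /*metersToDestination=*/120.0f);
  delay(200);
}
```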
Visually impaired students trying Le Chal at TechShare 2012
Jan 2012: Le Chal just won the MIT Technology Review Grand Challenges 2012 prize
by touchaddict on Friday, 17 June 2011
With Rahul Motiyar, Aparajita Choudhary
Puppetrix is a tangible medium for creating stories out of imagination. It is intended for children, who are naturally adept at using objects to tell stories. ‘We all grew up playing with G.I. Joe and Spider-Man toys.’
We use the spatial orientation of the toys to modify the properties related to them. A tangible depicting the sun would insert sunlight into the scene, and rotating the sun would change factors like brightness and time of day.
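A rough sketch of how one such mapping could work, assuming a fiducial tracker reports the sun token's rotation angle; the ranges and names are illustrative, not from the actual system:

```cpp
#include <cmath>

// Hypothetical sketch: map the rotation angle of the "sun" tangible
// (0..360 degrees, as reported by a fiducial tracker) onto scene
// properties. One full turn of the token equals one full day.
struct SunState {
    float brightness;   // 0.0 (night) .. 1.0 (noon)
    float hourOfDay;    // 0 .. 24
};

SunState sunFromRotation(float angleDegrees) {
    float t = std::fmod(angleDegrees, 360.0f) / 360.0f;
    SunState s;
    s.hourOfDay  = 24.0f * t;
    // Brightness peaks at noon (t = 0.5) and falls toward midnight.
    s.brightness = std::sin(t * 3.14159265f);
    return s;
}
```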
Motivation: 3-D modeling has been revolutionized in recent years by the advent of computers. Yet while computers have become much more affordable and accessible to the masses, computer modeling remains a complex task involving a steep learning curve and extensive training.
According to a survey, 80 percent of people want to model and create on their computers to visualize their imagination; however, the difficult UI of such tools prevents them from doing so. Even to model something simple, the user has to wade through an obtrusive set of icons, toolbars, and features that are rarely used.
We propose a computer modeling interface that brings this ability to laypeople who wish to rapidly visualize their imagination in 3-D.
This work aims to employ natural expression with the fewest restrictions, freeing CAD users from tedious command buttons and menu items. We have explored both the hardware and software aspects of the interface, specifically the use of intuitive speech commands and multitouch gestures on an inclined interactive surface. Short research paper: goo.gl/zkarL. The very initial touch-based integration was done at Google SoC; the touch+speech multimodal fusion was done with SriG at HP Labs. Thanks: SriG, NUIGroup forums.
Dec 2010: Our research paper on MozArt was selected for publication at ACM IndiaHCI 2011.
Sept 2011: MozArt will be published and demoed at the 13th ACM International Conference on Multimodal Interfaces (ICMI 2011).
What if computer graphics of the present day could be physically touched and felt, in addition to being seen?
Actuation principle of Zixel
Virtually rendered graphics displayed on a regular 2-D screen provide rich visual feedback about an object. Present-day touchscreen devices employ direct manipulation of screen elements, which are essentially a 2-D projection of the real 3-D form. However, the feedback is limited by what the screen can render.
Think of it as interactive RGBZ (Red, Green, Blue, Z-axis) Pin Art that is controlled by the computer and can render coloured pixels.
This project aims at extending present 2-D displays into a 2.5-D form in which graphics as well as haptic sensation can be communicated directly to the end user without the need for a wearable accessory.
We also define a basic actuated addition to the present 2-D pixel: a physical Z-axis, which provides the physical manifestation of the virtual object for rich tactile and graphical feedback. We call the system Zixel.
Tools used: Arduino Mega, Micro linear actuators, RGB LEDs, Blender-Arduino
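To give an idea of the actuation, here is a minimal Arduino sketch for a single zixel; pin assignments and the 0–255 height convention are assumptions, and a driver stage between the pins and the actuator is taken for granted:

```cpp
// Minimal Arduino sketch for driving a single zixel: one micro linear
// actuator (the physical Z-axis) plus one RGB LED (the colour).
// Assumes a driver circuit between each pin and its load.

const int ACTUATOR_PIN = 9;   // PWM pin controlling actuator extension
const int RED_PIN   = 3;
const int GREEN_PIN = 5;
const int BLUE_PIN  = 6;

// Render one RGBZ value on the zixel: colour via the LED, Z via actuator.
void setZixel(byte r, byte g, byte b, byte z) {
  analogWrite(RED_PIN, r);
  analogWrite(GREEN_PIN, g);
  analogWrite(BLUE_PIN, b);
  analogWrite(ACTUATOR_PIN, z);  // 0 = fully retracted, 255 = fully raised
}

void setup() {
  pinMode(ACTUATOR_PIN, OUTPUT);
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
}

void loop() {
  setZixel(255, 80, 0, 200);   // an orange, mostly raised pixel
  delay(1000);
  setZixel(0, 0, 255, 30);     // a blue, nearly flat pixel
  delay(1000);
}
```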
by touchaddict on Sunday, 13 March 2011
‘Fluid in-fluid’ explores the mingling of liquid colours in water and tries to capture the mixture in an interactive light form. We wanted to develop a computer-less, tangible version of the piece (though computers were used for the prototype). The prototype was demoed at a non-profit exhibit in Bangalore.
Water is the base, transparent like a PNG's background. Colours mix according to Munsell colour theory and are visualized in the form of artificially connected lights.
The work can be replicated by:
Using RGB lights.
Mounting an RGB camera below the table.
Using a Munsell colour library in VVVV/Processing.
The precision of mixing can be improved by using a Point Grey camera.
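For a quick prototype you could even skip the Munsell library and approximate the pigment mixing with simple per-channel averaging, as in this C++ stand-in (not what the original used, just a crude approximation):

```cpp
// A much simpler stand-in for the Munsell-based mixing used in the
// prototype: per-channel averaging of the two pigment colours. Crude,
// but close enough to drive the lights; swap in a proper Munsell
// library for faithful hues.
struct Rgb { float r, g, b; };  // components in 0..1

Rgb mixPigments(const Rgb& a, const Rgb& b) {
    return { (a.r + b.r) / 2.0f,
             (a.g + b.g) / 2.0f,
             (a.b + b.b) / 2.0f };
}

// Example: red {1,0,0} mixed with yellow {1,1,0} gives orange {1,0.5,0}.
```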
If you come up with your own version using the above idea, do pingback!
by touchaddict on Wednesday, 2 February 2011
I’ve been toying with the idea of heartbeat monitoring and the possibilities it opens up.
Hence, I modified an old stethoscope, interfaced it to a microphone, and fed it into the machine through the line-in input. At present I am trying to get a real-time visual interpretation of what's beating inside our bodies. Maybe this could be extended to the cloud, where real-time analysis is possible.
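The beat-detection step I am experimenting with boils down to simple threshold peak counting. A minimal sketch, assuming the line-in signal has already been captured as mono float samples; the threshold and refractory period are tuning guesses:

```cpp
#include <vector>
#include <cmath>

// Sketch of the beat-detection step, assuming the stethoscope signal
// has already been captured from line-in as mono floats in -1..1.
float estimateBpm(const std::vector<float>& samples, float sampleRate) {
    const float threshold = 0.4f;          // amplitude that counts as a beat
    const float refractorySec = 0.25f;     // ignore re-triggers within 250 ms
    const int refractory = (int)(refractorySec * sampleRate);
    int lastBeat = -refractory;
    int beats = 0;
    for (int i = 0; i < (int)samples.size(); ++i) {
        if (std::fabs(samples[i]) > threshold && i - lastBeat >= refractory) {
            ++beats;
            lastBeat = i;
        }
    }
    float seconds = samples.size() / sampleRate;
    return (seconds > 0) ? beats * 60.0f / seconds : 0.0f;
}
```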
At present, electronic stethoscopes cost anywhere between $200 and $400 and are limited in functionality (they just display a graph/BPM).
The aim is to create an interface between our bodies and the visual and networked power of computers.
What if music devices could change genres based on our heart rate? A soothing song to relax us when the heart beats faster.
DIY instructions are coming soon; if you're interested in collaborating, ping me!
What if the classic Space Invaders, designed by Tomohiro Nishikado, could be played collaboratively with physical objects? The throwing gesture captures the gut feeling of shooting down aliens with tangibles.
The project explores how kids could be drawn into a collaborative activity through a fun tangible game, which we demoed at the fest. The game is written in AS3. A group of children throw softballs at the screen to shoot down the space invaders. In a way, we wanted to relive the '80s experience of playing the good old Space Invaders on an Atari. Team Sparsh demoed it on a 100″ wall at IIT Bombay Techfest 2011.
The setup is a clever implementation of the LLP (Laser Light Plane) technique; the game was developed in-house and got the eye-candy treatment at the festival.
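The game itself is in AS3; the core impact-to-invader logic looks roughly like this C++ sketch (structure and function names are mine, for illustration):

```cpp
#include <vector>

// The LLP tracker reports each ball impact as an (x, y) screen point,
// which is tested against the invaders' bounding boxes.
struct Rect { float x, y, w, h; bool alive = true; };

// Returns the index of the invader hit by the impact, or -1 for a miss.
int hitInvader(std::vector<Rect>& invaders, float ix, float iy) {
    for (int i = 0; i < (int)invaders.size(); ++i) {
        Rect& inv = invaders[i];
        if (inv.alive &&
            ix >= inv.x && ix <= inv.x + inv.w &&
            iy >= inv.y && iy <= inv.y + inv.h) {
            inv.alive = false;   // the softball shot this one down
            return i;
        }
    }
    return -1;
}
```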
Thanks to Rahul(Team Sparsh), Rajdeep, Deepak, Dhaumya, Sachin & Jyotirmoy.
Special thanks to Konark, Asutosh, Aman Xaxa.
Almost all gesture-based installations are bulky, expensive, and difficult to carry around. Even the locally available multitouch overlays cost a lot! Back in 2007, when there were no Kinects or iPhones, this turned out to be a poor man's DIY gesture setup.
The prototype mounted on top of a laptop
We started this project with the aim of designing a low-cost multitouch overlay that could fit almost any computer display. We home-brewed an IR sensor using a TSOP1738 and later modified a Sony PS3 camera to handle the sensing. Most of it is based on the LLP technique.
FoldyTouch is a foldable, low-cost, pocketable gesture-sensing system that fits onto a regular monitor or laptop and folds away into your bag. The system worked fine with TUIO/OSC-based applications; within minutes of installation, Windows 7 and Ubuntu could be controlled via nesher's MultitouchVista wrapper.
The opening arm contains an IR sensor, and the base plate carries a 120-degree infrared laser whose beam covers the surface; the frame is made from balsa wood, which is strong and lightweight. The approximate component cost was INR 300 (USD 6) for the TSOP prototype (slow FPS) and INR 700 (USD 14) for the camera prototype (fast FPS and tracking).
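The sensing core is classic LLP blob tracking: a finger breaking the laser plane shows up as a bright blob in the IR camera image. A single-touch centroid sketch, with the frame layout and threshold as assumptions:

```cpp
#include <vector>

// Single-touch LLP sensing sketch: threshold the IR frame and take the
// centroid of the bright pixels. Frame layout and threshold value are
// assumptions; real trackers handle multiple blobs and calibration.
struct Touch { float x, y; bool found; };

// frame: grayscale pixels, row-major, values 0..255.
Touch findBrightestBlob(const std::vector<unsigned char>& frame,
                        int width, int height,
                        unsigned char threshold = 200) {
    long sumX = 0, sumY = 0, count = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (frame[y * width + x] > threshold) {
                sumX += x; sumY += y; ++count;   // accumulate the centroid
            }
    if (count == 0) return {0, 0, false};
    return {(float)sumX / count, (float)sumY / count, true};
}
```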
An Instructable with the DIY schematics and instructions is now online.
The project aimed at creating simple interactions for kids and for people unfamiliar with the keyboard and mouse. We developed a graphical Zoomable User Interface (ZUI) that is easy, fun, and approachable for this target group.
The project was done using a 100′ transparent projection map. The major media-related tasks were grouped into loadable on-screen modules that could be browsed using gestures. We also made immersive apps that kids could easily interact with. The new projection techniques developed in this research were patented by Prof. Atrheya, the project guide.
The working installation was demoed during the IIT Delhi Design Degree Show, the IIT Exhibit, and the PanIIT meetup 2008. Other awards to Concept-S:
1st prize at IIT Kanpur Techkriti 2009
1st prize at BITS Pilani’s Entrepreneurship Product Pitch Competition
Special Achievement Award from Industrial Design Center, IIT Delhi