Life is full of moments that come with obvious or subtle expressions of energy, and it is common for us as human beings to attach different emotions to such expressions. Here we present a concept and prototype that explores a novel physical-visual language of dynamic, emotionally expressive waveforms, designed to transform the way we perceive different forms of energy as we go about our daily lives. With the power of computation hidden within the physical materials of the interface, we create an interactive form that takes one form of energy and transmutes it into a waveform as its output: Wave Alchemy.
Design inspiration
Our primary design inspiration for Wave Alchemy came from an art installation called ‘Waves’ by Daniel Palacios. ‘Waves’ is a rope-based mechanical wave machine that detects the presence and motion of viewers around it. Depending on these contextual parameters, the frequency of the rope and the sound it generates are altered.
If the audience was mostly immobile or still, the waves and sound environment were more harmonious; in other situations, they were more irregular and chaotic. Natalie Jeremijenko’s Dangling String is similar in the way its cable responds to the Internet traffic around it. We found a number of other implementations of varying degrees of complexity, from the classic Shive Wave Machine to gigantic architectural robotic surfaces like HypoSurface. We chose to follow the simplest technique so that users would not be overwhelmed by the complexity of the interface itself.
The sensitivity of Wave Alchemy is reconfigurable. It can amplify, attenuate, or maintain the amplitude of the signal it is listening to. When placed next to a faucet, its amplitude rises to the maximum level and then gradually reduces to the minimum, following the water flow.
When a crowd cheers for their favorite team, they usually chant a number of words repeatedly and synchronously. The wave machine then assumes the shape of a standing wave.
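The reconfigurable sensitivity described above can be sketched as a simple mapping from sensed amplitude to drive amplitude. This is a minimal illustration, not the actual firmware; the gain value and the 0–1 amplitude range are assumptions.

```python
# Sketch of Wave Alchemy's reconfigurable sensitivity: the sensed amplitude
# is amplified, attenuated, or passed through, then clamped to the machine's
# physical range. Gain and range values are illustrative, not measured.
def shape_amplitude(sensed, mode="maintain", gain=2.0, max_amp=1.0):
    """Map a sensed signal amplitude (0..max_amp) to the rope's drive amplitude."""
    if mode == "amplify":
        out = sensed * gain
    elif mode == "attenuate":
        out = sensed / gain
    else:  # maintain
        out = sensed
    return max(0.0, min(max_amp, out))  # clamp to the physical limits
```

With this mapping, a faucet's fading water flow traces the gradual rise and fall the prototype exhibits.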
Transient events as pulses
Wave Alchemy, when coupled with sensors, can also be used to signal transient events: clapping, incoming email, temperature changes, the number of people present, and so on.
Use Case Scenario: Reminiscence through Dynamic Physical Querying
With the ability to capture and store events and memories across different energies and emotions, Wave Alchemy can transform the way we interact with the contextual information of a place’s past. A Wave Alchemy system can be used for reminiscing or for searching. Imagine our system situated in a city neighborhood on a quiet sunny morning in May. As the sun ascends into the sky, birds begin to chirp more loudly and our waveform ripples gently in response (Figure 7).
[Work in progress]
Recently I visited a commercial clinic chain in India for a minor medical diagnosis. I was given a small plastic bottle to urinate into and return for a diagnosis at a later date. On my way out, at the payment counter, around five more people were carrying similar bottles. That’s where it all began.
Jaideep, victim of the previous shoe hack, and I began some research back home on how an interface around urine-content data and health reporting might be built. The statement below inspired us:
Most signs of an impending human ailment can be detected from urine.
A human transacts with a toilet almost daily; what better place to integrate a urinalysis kit? We started with an alcohol-analysis kit that presents contextual, thought-provoking messages to the user in real time. Human urine can also be analyzed by its chemical and molecular properties or by microscopic assessment. Urinalysis is a very useful test that may be ordered by your physician for particular reasons: it is commonly used to diagnose a urinary tract or kidney infection, to evaluate causes of kidney failure, and to screen for the progression of chronic conditions such as diabetes mellitus and high blood pressure (hypertension).
I am currently working with Jaideep on prototyping a simple application based on the above concept, initially with an alcohol sensor. We plan to extend the system with glucose, sugar, and salinity monitoring, and to build a recommendation engine on top of that.
Messages we plan to generate will be of this type:
[stextbox id="alert"] String "You have VAR_X content, which is (High/Low). Generic_MSG + Recommendation" [/stextbox]
[stextbox id="warning"] "You have high alcohol content right now. Don't drink and drive! Consume lots of water/juices tomorrow." [/stextbox]
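The template above can be sketched as a tiny message builder. This is a hypothetical illustration: substance names, thresholds, and the recommendation table are made up, not the actual app logic.

```python
# Hypothetical sketch of the message template: fills in the
# "You have VAR_X content, which is (High/Low)" string plus a recommendation.
# Substances, thresholds, and recommendation text are illustrative only.
RECOMMENDATIONS = {
    ("alcohol", "High"): "Don't drink and drive! Consume lots of water/juices tomorrow.",
}

def build_message(substance, reading, threshold):
    """Return the filled-in template for one measured substance."""
    band = "High" if reading > threshold else "Low"
    msg = f"You have {band} {substance} content right now."
    tip = RECOMMENDATIONS.get((substance, band), "")
    return (msg + " " + tip).strip()
```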
Advanced versions of this project are being taken to real-world deployment through an MIT spinoff.
Black ink is one of the most heavily consumed products in industry, and most printing ink is produced in factories through complex chemical processes. Companies like HP and Canon make 70 percent of their profits by selling cartridges at a 400% margin. The same goes for other kinds of paints and pigments.
This is not an attempt to win over pollution, just a minor itch that led me to build something cool from observations rooted in nostalgia for my days back in India.
There’s so much soot and pollution around us, especially in crowded cities. What if it could be repurposed to generate ink for printers?
Rubbing alcohol + oil substrate + soot = low-tech, non-uniform ink
For printing we assembled Nicolas’ InkShield on an Arduino, interfaced with our soot-catcher pump design. The shield lets you connect an HP C6602 inkjet cartridge to your Arduino, turning it into a 96 dpi print platform. It uses only 5 pins, which can be jumper-selected to avoid clashing with other shields.
For this project we had to widen the holes of the cartridge to let the ink out, since the particles in our ink are much larger than those in fine industrial ink.
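The driving idea behind the shield is simple: each printed column becomes a bitmask of which nozzles fire. The sketch below shows that packing step; the 12-nozzle count matches the HP C6602, but the bit ordering is an assumption, and the real shield's timing and API live in its Arduino library, not here.

```python
# Hedged sketch of packing a 1-bit image column into the 12-nozzle firing
# mask a C6602-style cartridge needs. Bit ordering (nozzle 0 = top of the
# swath) is an assumption for illustration.
def column_to_mask(column_bits):
    """Pack a top-to-bottom list of 12 booleans into a 12-bit nozzle mask."""
    assert len(column_bits) == 12
    mask = 0
    for row, on in enumerate(column_bits):
        if on:
            mask |= 1 << row
    return mask
```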
Nostalgia 0 – Just after you’re born (suggested by Kunal)
If you’re born in India, chances are that your grandma will take some steps to protect you from wicked spirits by smudging kajal onto your eyes. The technique for making kajal is old: it is carbon deposited from burning a low-fidelity oil (mustard, etc.).
Nostalgia 1 – Getting around
I was once daydreaming about the wonderful days we spent back in Bikaner, a small city in western Rajasthan. It reminded me of the heat, of travelling in sweat-inducing autorickshaws while we ran our experiments building a multitouch table with low-tech techniques. June there was full of sweat, with unburnt smoke rising from the unending tur-tur of autorickshaws, blackening our skin.
Next phase (thanks to Rahul Motiyar and dirtydevil for the tipoff)
– Design the carbon separator using capacitive plates. Incoming air carries a lot of dust and other particles; the powdery black soot, separated from the rest, is what we’re interested in. This principle is used in chimneys to reduce the carbon particles injected into the atmosphere.
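The capacitive-plate separator works on the same principle as an electrostatic precipitator, whose collection efficiency is commonly estimated with the Deutsch-Anderson equation. The sketch below applies that textbook formula; the input values are illustrative, not measurements from our pump.

```python
import math

# The soot separator behaves like an electrostatic precipitator; its
# collection efficiency is commonly estimated with the Deutsch-Anderson
# equation. Inputs below are illustrative, not measured from the prototype.
def collection_efficiency(drift_velocity, plate_area, gas_flow):
    """eta = 1 - exp(-w * A / Q): w in m/s, A in m^2, Q in m^3/s."""
    return 1.0 - math.exp(-drift_velocity * plate_area / gas_flow)
```

Larger plates or slower airflow drive the exponent up and the efficiency toward 1, which is why chimney-scale precipitators use long plate stacks.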
Sevenbyfive is an interactive architectural surface that can scale and bend out of plane at the same time. We created a “second-order” four-way linkage mechanism to achieve this; generic ball-and-socket joint connectors allow the out-of-plane movements. We hand-cast 800 components over a period of 5 days and motorized them.
Team: Phillip Ewing, Anirudh Sharma, George Samartpolous, Eric Demaine, Chuck Hobermann
We introduce Glassified, a modified ruler with a transparent display to supplement physical strokes made on paper with virtual graphics. Because the display is transparent, both the physical strokes and the virtual graphics are visible in the same plane.
A digitizer captures the pen strokes in order to update the graphical overlay, fusing the traditional function of a ruler with the added advantages of a digital, display-based system. We describe use-cases of Glassified in the areas of math and physics and discuss its advantages over traditional systems.
The ruler is embedded with a transparent OLED (TOLED) display and a digitizer to track pen input.
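The math and physics overlays boil down to geometry on the captured strokes. The sketch below shows the kind of computation involved: stroke length and angle from raw digitizer points. The point format, (x, y) in display units, is an assumption.

```python
import math

# Sketch of the geometry Glassified can overlay on paper: given raw
# digitizer points, measure a stroke's length and its angle from the
# horizontal. Point format (x, y) in display units is an assumption.
def stroke_length(points):
    """Total polyline length over consecutive digitizer samples."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def stroke_angle_deg(points):
    """Angle of the stroke's endpoints relative to the horizontal axis."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```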
Source code for Glassified 0.1 is released. It’s kind of messy right now, but you should be able to compile it with VS2010. Instructions to replicate the system will be up soon.
Moving forward we explored the LittleAR, paper bookmark form factor.
LittleAR was motivated by the observation of users who have held on to paper, despite powerful incentives to adopt electronic devices such as Kindle, tablets, etc.
LittleAR is a 2-inch transparent OLED and a mouse sensor combined in a form similar to a paper bookmark. The design naturally affords translation on paper, so the user can move it along the page like a magnifying glass.
The advantage of using a thin OLED display is that it allows graphical augmentations to be almost in the same plane as the paper, which creates a seamless fusion of the virtual and real world. This preserves the aesthetics of the printed books while giving the user an option to see the printed text/image augmented with dynamic text and graphics.
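For the overlay to know which printed region sits under the display, the mouse sensor's movement deltas must be integrated into a page position. A minimal dead-reckoning sketch follows; the counts-per-inch value is an assumption, not LittleAR's actual sensor spec.

```python
# Sketch of LittleAR's position tracking: accumulate (dx, dy) deltas from
# the mouse sensor into a page position, so the overlay knows what printed
# content it is sitting over. The 400 counts-per-inch value is assumed.
def track(deltas, cpi=400):
    """Accumulate raw sensor deltas into a page position in inches."""
    x = y = 0
    for dx, dy in deltas:
        x += dx
        y += dy
    return (x / cpi, y / cpi)
```

Like any dead reckoning, drift accumulates, which is why lifting the bookmark off the page would require re-registration against a known anchor.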
I started Lechal as a DIY project in 2010. Five years on, Lechal has evolved into a full-fledged lifestyle product.
Lechal was started in 2010 with a simple sketch and an Arduino LilyPad prototype. It led me to quit my job at HP Labs, a move HP itself mentioned.
After 6 months of initial demos, Krispian and I decided to pursue Lechal further as a product 🙂 We were introduced to each other, almost magically, by a visually impaired person.
Lechal started as a navigation aid for the visually impaired. At present, people with limited or no vision depend on walking canes, which help them detect obstructions; seek assistance from friends and passers-by; or use voice-based navigation aids. Voice-based aids can be very distracting for the blind, who depend heavily on their sense of hearing, and such devices are prohibitively expensive to buy.
Lechal’s initial prototypes and design won the MIT TR35 Award and drew more than 1 million social media hits.
The unobtrusive design of Lechal is its most significant feature. The system comprises a mechanism that condenses complex geographical navigation information and lets the user feel directional and proximity cues through vibrations.
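The condensation step can be illustrated with a toy mapping: the bearing to the next turn selects the left or right shoe, and proximity scales vibration intensity. This is a minimal sketch under assumed thresholds, not Lechal's actual firmware.

```python
# Minimal sketch (not Lechal's firmware) of condensing navigation into
# haptics: bearing sign picks the shoe, proximity scales intensity.
# The 50 m ramp distance is an illustrative assumption.
def haptic_cue(relative_bearing_deg, distance_m, max_dist=50.0):
    """Return (shoe, intensity 0..1) for a turn at the given bearing/distance."""
    shoe = "left" if relative_bearing_deg < 0 else "right"
    intensity = max(0.0, min(1.0, 1.0 - distance_m / max_dist))
    return shoe, round(intensity, 2)
```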
With Rahul Motiyar, Aparajita Choudhary
Puppetrix is an educational tool designed for schools to enable kids to play, learn, and experiment with different sets of toys and objects using animation. It aims to use the physicality of a toy, and the animated outcome, to bridge the gap between the abstract concepts of kids’ imaginative narratives and reality.
It is a tangible medium for creating stories out of imagination, intended for children, who are naturally adept at using objects to tell stories. ‘We all grew up playing with G.I. Joe and Spiderman toys.’
We use the spatial orientation of the toys to modify their associated properties. A tangible depicting the sun would insert sunlight into the scene, and rotating the sun would change factors like brightness or time of day.
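The sun-tangible mapping can be sketched in a few lines: the toy's rotation (as a tracker would report it, in degrees) maps to time of day, which in turn sets scene brightness. The linear mapping and the full-turn-equals-one-day convention are assumed simplifications, not the deployed logic.

```python
# Sketch of Puppetrix's orientation mapping: the sun tangible's rotation
# drives time of day and brightness. Full turn = one day and the linear
# brightness ramp are illustrative assumptions.
def sun_properties(rotation_deg):
    """Return (hour_of_day, brightness 0..1) for a given tangible rotation."""
    hour = (rotation_deg % 360) / 360 * 24
    brightness = max(0.0, 1.0 - abs(hour - 12) / 12)  # peaks at noon
    return round(hour, 2), round(brightness, 2)
```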
I think the ability of children to make play out of anything and everything is fascinating. Not only is it a process of narration, it also plays a role in shaping a child’s understanding and triggering his or her curiosity about the surrounding world.
A contextual study of media was made to gauge the impact of television, video games, and the Internet on the routines of middle-class family children. Throughout the project my main interest remained the Indian middle-class family, which is among the largest and fastest-growing socio-economic groups by both numbers and percentage.
Motivation – 3D modeling has been revolutionized in recent years by the advent of computers. While computers have become much more affordable and accessible to the masses, computer modeling remains a complex task involving a steep learning curve and extensive training.
According to a survey, 80 percent of people would like to model and create on their computers to visualize their imagination; however, the difficult UI of such tools prevents them from doing so. Even to model something simple, the user has to wade through an obtrusive set of icons, toolbars, and rarely used features.
We propose a computer modeling interface that brings 3D modeling to the layman who wishes to rapidly visualize his or her imagination.
This work is motivated by the wish to employ natural expression with the fewest restrictions, freeing CAD users from tedious command buttons and menu items. We have explored both the hardware and software aspects of the interface: specifically, the use of intuitive speech commands and multitouch gestures on an inclined interactive surface. Short research paper: goo.gl/zkarL. The very initial touch-based integration was done at Google SoC; the touch+speech multimodal fusion with SriG at HP Labs. Thanks: SriG, NUIgroup forums.
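One common way to fuse the two modalities is to pair a recognized speech command with the touch gesture closest to it in time. The sketch below illustrates that idea; the event shapes and the one-second pairing window are assumptions for illustration, not MozArt's actual fusion algorithm.

```python
# Hedged sketch of touch+speech fusion: each recognized speech command is
# paired with the touch gesture nearest in time, within a small window.
# Event tuples (timestamp, label) and the 1 s window are assumptions.
def fuse(speech_events, touch_events, window=1.0):
    """Pair each speech command with its closest-in-time touch gesture."""
    fused = []
    for ts, command in speech_events:
        nearby = [(abs(ts - tt), gesture) for tt, gesture in touch_events
                  if abs(ts - tt) <= window]
        if nearby:
            fused.append((command, min(nearby)[1]))
    return fused
```

A fused pair such as ("extrude", "tap") can then be dispatched as a single CAD action on the touched object.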
[stextbox id=”info”] Dec 2010: Our research paper on Mozart selected for publication at ACM IndiaHCI 2011[/stextbox]
[stextbox id="info"] Sept 2011: MozArt will be published and demoed at the 13th ACM International Conference on Multimodal Interfaces (ICMI 2011). [/stextbox]
[stextbox id=”download” float=”true”] Mozart ACM ICMI Research Paper [/stextbox]
What if computer graphics of the present day could be physically touched and felt, in addition to being seen?
Classically, if we zoom into present-day displays, the pixels are nothing but a 2-D array of RGB clusters arranged in the X and Y plane.
Virtually rendered graphics displayed on a regular 2-D screen provide rich visual feedback about an object. Present-day touchscreen devices employ direct manipulation of screen elements, which are essentially a 2-D extension of the real 3-D form. However, the feedback is limited by what the screen can render.
Think of it as an interactive RGBZ (red, green, blue, Z-axis) Pin Art, controlled by the computer and able to render coloured pixels.
This project aims at extending present 2-D displays into a 2.5-D form where graphics as well as haptic sensations can be communicated directly to the end user without the need for a wearable accessory.
We also define a basic actuated addition to the present 2-D pixel: a physical Z-axis that caters to the physical manifestation of the virtual object for rich tactile and graphical feedback. We call the system the Zixel.
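The RGBZ idea maps naturally onto a framebuffer where each element carries a height alongside its colour. The sketch below is a data-structure illustration only; the normalized 0–1 Z range is an assumption about the actuation travel.

```python
# Sketch of a "zixel" framebuffer: each pixel carries an actuated height (Z)
# in addition to colour. Z as a normalized 0..1 pin height is an assumption
# about the actuator's travel range.
class ZixelFrame:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.buf = [[(0, 0, 0, 0.0)] * width for _ in range(height)]

    def set(self, x, y, r, g, b, z):
        assert 0.0 <= z <= 1.0, "z is a normalized pin height"
        self.buf[y][x] = (r, g, b, z)

    def get(self, x, y):
        return self.buf[y][x]
```

Rendering a frame then means driving each pin to its Z value while the colour channels go to the display layer above it.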
‘Fluid in fluid’ explores the mingling of liquid colours into water and tries to capture the mixture in an interactive light form. We wanted to develop a computer-less, tangible form of the same (though computers were used for the prototype). The prototype was demoed at a non-profit exhibit in Bangalore.
Water is the base, transparent like a PNG’s background. Colors mix according to Munsell color theory and are visualized in the form of artificially connected lights.
The work can be replicated by:
If you come up with your own version using the above idea, do pingback!
Credits: Sarthak for the video, Tabla(SoundCloud)
I’ve been toying around with the idea of heartbeat monitoring and the possibilities it has.
Hence, I modified an old stethoscope, interfaced it with a microphone, and fed it into the machine’s line-in input. At present I am trying to get a real-time visual interpretation of what’s beating inside our bodies. Maybe this could be extended to the cloud, where real-time analysis is possible.
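The core of that real-time interpretation is simple: detect heartbeat peaks in the microphone signal's envelope and turn their spacing into a BPM figure. The sketch below covers the second half; peak detection on the raw audio is assumed to happen upstream.

```python
# Sketch of the analysis step: estimate BPM from timestamps of detected
# heartbeat peaks in the stethoscope audio envelope. Peak detection itself
# (thresholding the mic signal) is assumed done upstream.
def bpm_from_peaks(peak_times_s):
    """Average beats per minute from a list of peak timestamps in seconds."""
    if len(peak_times_s) < 2:
        return 0.0
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```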
Present electronic stethoscopes cost anywhere between 200 and 400 USD and are limited in functionality (they just display a graph and BPM).
The aim is to create an interface between our bodies and visual/network power of computers.
[idea] What if music devices could change genres based on our heart rate: a soothing song to relax us when the heart beats faster. [/idea]
DIY instructions coming soon; if you’re interested in collaborating, ping me!
What if the classic Space Invaders, designed by Tomohiro Nishikado, could be played collaboratively with objects, the throw expressing the gut feeling of shooting down aliens with tangibles?
The project explores how kids can be drawn into a collaborative activity through a fun tangible game. The game is written in AS3: a group of children throw softballs at the screen to kill the space invaders. In a way, we wanted to re-live the 80s experience of playing the good old Space Invaders on Ataris. Team Sparsh demoed it on a 100″ wall at IIT Bombay Techfest 2011.
It is a clever implementation of the LLP technique; the game was developed in-house and got the eye-candy treatment at the festival.
Thanks to Rahul(Team Sparsh), Rajdeep, Deepak, Dhaumya, Sachin & Jyotirmoy.
Special thanks to Konark, Asutosh, Aman Xaxa.
Almost all gesture-based installations are bulky, expensive, and difficult to carry around. Even the locally available multitouch overlays cost a lot! Back in 2007, when there were no Kinects or iPhones, this turned out to be the poor man’s DIY gesture setup.
We started this project with the aim of designing a low-cost multitouch overlay that could fit almost any computer display. We home-brewed an IR sensor using a TSOP1738 and later modified a Sony PS3 camera to handle the sensing. Most of it is based on the LLP technique.
FoldyTouch is a foldable, low-cost, pocketable gesture-sensing system that can be fitted onto a regular monitor or laptop and then folded away into your bag. The system worked fine with TUIO/OSC-based applications; within minutes of installation, Windows 7 and Ubuntu could be controlled via nesher’s MultitouchVista wrapper.
The opening arm contains an IR sensor, with the base plate carrying a 120-degree infrared laser beam that covers the surface; the frame is made from balsa wood, which is strong and lightweight. The approximate component cost of the TSOP prototype turned out to be INR 300 / 6 USD (slow FPS); the camera prototype cost INR 700 / 14 USD (fast FPS and tracking).
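In LLP, a touching finger scatters the laser plane and shows up as a bright blob in the camera image; the tracker's job is to find the blob's centroid and normalize it to the 0..1 coordinate space TUIO clients expect. The sketch below illustrates both steps on a toy grayscale frame; the threshold and camera resolution are illustrative.

```python
# Sketch of the LLP tracking back-end: find the centroid of bright pixels
# (a finger scattering the laser plane) and normalize it to TUIO's 0..1
# space. Threshold and camera resolution values are illustrative.
def blob_centroid(gray, threshold=200):
    """Centroid (x, y) of pixels brighter than threshold in a 2-D list."""
    pts = [(x, y) for y, row in enumerate(gray)
                  for x, v in enumerate(row) if v >= threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def to_tuio(cx_px, cy_px, cam_w=640, cam_h=480):
    """Normalize a blob centroid from camera pixels to TUIO's 0..1 space."""
    return (cx_px / cam_w, cy_px / cam_h)
```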
[idea] An instructable with DIY schematics and instructions is now online. [/idea]
The project aimed at creating simple interactions for kids and for people unfamiliar with the keyboard and mouse. We developed a graphical Zoomable User Interface (ZUI) that could be easy, fun, and approachable for this target group.
The project was done using a 100′ transparent projection map. The major media-related tasks were grouped into loadable on-screen modules that could be browsed using gestures. We also made immersive apps that kids could easily interact with. The research was patented, in the form of new projection techniques, by our guide Prof. Atrheya.
The working installation was demoed at the IIT Delhi Design Degree Show, the IIT Exhibit, and the PanIIT meetup 2008. Other awards to Concept-S:
[stextbox id=”info” caption=”Awards”] 1st prize at IIT Kanpur Techkriti 2009 [/stextbox]
[stextbox id=”info”] 1st prize at BITS Pilani’s Entrepreneurship Product Pitch Competition [/stextbox]
[stextbox id=”info”] Special Achievement Award from Industrial Design Center, IIT Delhi [/stextbox]
[stextbox id=”info”]UPDATE: Initial ConceptS work covered by Times Of India [/stextbox]