Touché, and Water as an Interface

After experimenting with how Touché could be used to interact with plants, I wanted to see how I could use it to interact with water.

In the Touché paper, the authors demonstrate that the sensor is capable of detecting how a user is touching the surface of, or submerging their hand in, water. The system is able to distinguish among a variety of touch interactions: no hand, one finger, three fingers, and hand submerged:

I decided to see if I could replicate the same results using the ESP system.

This experiment was not as successful as the previous one using plants as an interface (I wasn't able to exactly replicate all of the types of interactions that the authors of the Touché paper did), but what I learned may provide some clues about how to improve a Touché-based water interface in future, longer-term projects. Additionally, I was able to train the system to distinguish between two types of touches, plus the default no-touch state:

First, the default state of not touching the water:

Touching the water with one fingertip:

Touché One Finger

Touching the water with all five fingertips:

Touché Five Fingers

Setup Configurations

In this still from the Touché paper video, you can see what looks like a metal plate on the bottom of the water tank, connected to the circuit via a wire. The narration in the video stated that the electrode was placed under the container. I attempted this, but my sensor readings did not appear as strong as when the electrodes I used (shown below) had some direct contact with the water.

Touché Electrode

I ended up trying a variety of different configurations for working with the sensor.

My first thought was to try using aluminum foil, since it was cheap and easy to work with:

Attempting to use aluminum as a conductor

I went to Lowe's, got a sheet of steel, and crudely hacked off a small square that would fit in my water container. I first tried having the plate submerged, but this seemed to make the system less responsive than when the plate was placed upright against a corner of the Tupperware container, as shown below:

Upright steel plate

This seemed to work *OK*, but I wanted to experiment further with electrodes and water containers. I tried wrapping the steel plate in solder, and then connecting the alligator clip to one of the ends of the solder:

Steel plate with solder

That configuration did not prove to be very effective. 


Finally, I took a glass cooking dish and placed the steel electrode horizontally, as shown below:

Glass cooking container

Again, I wasn't able to train the gesture recognition system to distinguish touches beyond whether a hand was placed in the water or not. 

The combination of the Tupperware container and the steel plate in a vertical position gave the best results of any of the configurations I tried - I was able to train the gesture recognizer to discriminate between one finger and five fingers touching the surface of the water, and to detect whether the hand was in contact with the water at all.

Here's a short clip of the system in use:

Code

The Processing sketch + ESP model are on GitHub here. If you're trying to replicate the experiments shown here, you'll most likely need to train your own model, as the accuracy will vary based on individual setups.

Further Goals

Though I wasn't able to get all the results I wanted with this experiment, I believe there is a lot of room for further research and exploration that may lead to a deeper understanding of how the Touché system works, and of how it can be applied to create a water-based interface.

Test out a greater variety of containers and electrodes - I have a feeling that this is probably the key to building a better water-based interface. It will be important to keep in mind that the results of this work will determine how the interface is used in context, whether it's an installation, a game, or a VR scenario.

Better understand Support Vector Machines - This would be useful so that I can understand why the Touché developers chose this particular algorithm. Currently, I'd say I have a fairly shallow knowledge of the algorithms used in interactive machine learning contexts...oftentimes, the parameters used in these situations still feel a bit like magic numbers. I think that diving a bit deeper into the nature of the algorithms will be useful in creating more accurate gesture-recognition systems.

Experiment with pre- and post-processing modules for the machine learning pipeline - The GRT has several modules for processing incoming data before and after it's sent through a classification or regression module. Some of these modules may be useful in filtering the data from the sensor, potentially providing for more accurate gesture classification; a rough sketch of this idea is shown below.
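To give a sense of what that might look like, here's a minimal sketch (assuming the GRT is installed as a C++ library) that adds a moving-average pre-processing filter in front of the classifier. The window size and input dimensionality are placeholder values to tune for a real sensor:

```cpp
// A minimal sketch of adding a pre-processing module to a GRT pipeline to
// smooth noisy sensor data before classification. Filter size and sensor
// dimensionality are placeholders.
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    GestureRecognitionPipeline pipeline;

    // Smooth each incoming sample with a moving-average filter
    // (window of 5 samples, 1-dimensional input as a placeholder).
    pipeline.addPreProcessingModule( MovingAverageFilter(5, 1) );

    // The classifier itself stays the same; only its input is now filtered.
    pipeline.setClassifier( SVM() );

    return 0;
}
```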

I still consider this prototyping experiment a success, as I was able to use everyday objects (a Tupperware container, a sheet of steel, and off-the-shelf electronics) to build a system that could distinguish whether I was touching the surface of the water with one finger, five fingers, or not at all.

Implications

Given its multi-sensory characteristics and emotional resonance, water can be an interesting choice for an interface in a game, installation, or in an AR/VR context.

Pier and Goldberg, in their paper "Using water as interface media in VR applications", make a compelling case for why water may be an interesting and engaging digital interface. They state:

"In our experience, touching real water has proven to be rather dramatic as a tactile interface since it involves much more than the mere skin contact with the water, the effect is magnified by the temperature and wetness which affect the perception of the user."

The aspects of the water that the authors measured were pressure, flux, and movement. It could be interesting to combine these with a Touché sensor in a sensor-fusion setup, whereby several different types of sensors are combined into one data stream that's fed to a machine learning classifier, which then detects meaningful patterns based on the combination of inputs. A rough sketch of that idea follows below.
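Here's a hedged sketch of the sensor-fusion idea, assuming the GRT as the classifier back end. The specific readings (pressure, flux, movement) are stand-ins for whatever additional sensors are available; the point is simply that they get concatenated with the Touché capacitive profile into one feature vector:

```cpp
// A rough sketch of sensor fusion: readings from different sensors are
// concatenated into a single feature vector before being added to the
// training data or passed to the classifier. The inputs here are placeholders.
#include <GRT/GRT.h>
using namespace GRT;

// Concatenate a Touché capacitive profile with other water measurements
// (pressure, flux, movement) into one feature vector for the classifier.
VectorFloat buildFusedSample( const VectorFloat &toucheProfile,
                              Float pressure, Float flux, Float movement ) {
    VectorFloat fused = toucheProfile;   // swept-frequency profile
    fused.push_back( pressure );         // extra water measurements appended
    fused.push_back( flux );
    fused.push_back( movement );
    return fused;
}

int main() {
    VectorFloat profile( 160, 0.0 );     // placeholder capacitive profile
    VectorFloat sample = buildFusedSample( profile, 0.5, 0.1, 0.2 );
    // 'sample' would then be added to ClassificationData for training,
    // or passed to pipeline.predict() at runtime.
    return 0;
}
```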

In "Capturing Water and Sound Waves to Interact with Virtual Nature", Díaz et al. propose interacting with water (and wind, in their specific experiments) as a way to "....give the user the sensation that his/her presence affects the virtual world and then to let the user perceive that the actions he/she takes on the real world can change the virtual one in a smooth nature way in order to achieve virtual biofeedback"...in other words, using a water-based interface as a bridge between the physical and virtual worlds.

There is still a lot of work to be done in experimenting with water-based interfaces - sensors like Touché can enable interactions that make use of our everyday environment.

Talking to Plants: Touché Experiments

As I mentioned in a previous post, I was really pleased to see that the ESP-Sensors project had included code for working with a circuit based on Touché.

I had earlier come across other implementations of Touché for the Arduino, but unlike the ESP project, none of them utilized machine learning for classifying gesture types.

Touché is a project developed at Disney Research that uses swept-frequency capacitive sensing to "...not only detect a touch event, but simultaneously recognize complex configurations of the human hands and body during touch interaction." 

In other words, it's able to not just tell if a touch event occurred, but what type of touch event occurred. This differs from most capacitive sensors, which are only able to detect whether a touch event occurred, and possibly how far away from a sensor the user's hand is. 

Traditional capacitive sensing works by generating an electrical signal at a single frequency. This signal is applied to a conductive surface, such as a metal plate. When a hand is either close to or touching the surface, the value of capacitance changes - signifying that a touch event has occurred, or that a hand is close to the surface of the sensor. The CapSense library allows for traditional capacitive sensing to be implemented on an Arduino.
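For a concrete sense of the single-frequency approach, here is a minimal Arduino sketch using the CapSense (CapacitiveSensor) library: one value per reading, compared against a threshold. The pin numbers and threshold are assumptions that depend on your circuit:

```cpp
// A minimal Arduino sketch illustrating traditional (single-frequency)
// capacitive sensing with the CapacitiveSensor library: one reading,
// one threshold. Pins and threshold are circuit-dependent placeholders.
#include <CapacitiveSensor.h>

// Send pin 4, receive pin 2, with a high-value resistor between them.
CapacitiveSensor sensor = CapacitiveSensor(4, 2);

void setup() {
  Serial.begin(9600);
}

void loop() {
  long reading = sensor.capacitiveSensor(30);  // 30 samples per reading
  if (reading > 1000) {                        // placeholder threshold
    Serial.println("touch");
  }
  delay(10);
}
```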

Swept-frequency capacitive sensing makes use of multiple frequencies. In their CHI 2012 paper, the Touché developers state the reason for using multiple frequencies: "Objects excited by an electrical signal respond differently at different frequencies, therefore, the changes in the return signal will also be frequency dependent." Rather than using a single data point generated by an electrical signal at a single frequency, as in traditional capacitive sensing, Touché utilizes multiple data points from multiple generated frequencies.
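Conceptually, the sweep looks something like the sketch below. This is not the Touché or ESP firmware, just an illustration of the idea: drive the electrode at a series of frequencies and take a reading at each step, producing a feature vector rather than a single value. The setDriveFrequency() helper is a hypothetical placeholder for however the drive signal is generated in a real circuit (typically via PWM/timer configuration):

```cpp
// Conceptual Arduino-style sketch of swept-frequency sensing: a reading is
// taken at each of many drive frequencies, producing a "capacitive profile".
// setDriveFrequency() is a hypothetical placeholder, not a real library call.
const int NUM_STEPS = 160;      // number of frequency steps in the sweep
float profile[NUM_STEPS];       // one reading per frequency = the profile

void setDriveFrequency(int step) {
  // Placeholder: configure the signal generator / PWM timer for this step.
}

void setup() {
}

void loop() {
  for (int i = 0; i < NUM_STEPS; i++) {
    setDriveFrequency(i);
    profile[i] = analogRead(A0);   // sample the return signal at this frequency
  }
  // 'profile' is then handed to the machine learning pipeline as one sample.
}
```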

This capacitive profile is used to train a machine learning pipeline to differentiate between various touch interactions. The pipeline is based around a Support Vector Machine - specifically, it uses the SVM module from the Gesture Recognition Toolkit.
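Here's a minimal sketch (not the ESP project's actual code) of how a capacitive profile could be fed into a GRT pipeline built around an SVM. The number of dimensions and the class labels are placeholders:

```cpp
// A minimal sketch of a GRT pipeline with an SVM classifier, trained on
// capacitive profiles. Dimensions and labels are placeholders.
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    const UINT numDimensions = 160;       // one value per swept frequency

    // Collect labeled examples of each touch type.
    ClassificationData trainingData;
    trainingData.setNumDimensions( numDimensions );
    // trainingData.addSample( label, profile );  // repeat for many samples

    // Build a pipeline with an SVM classifier.
    GestureRecognitionPipeline pipeline;
    pipeline.setClassifier( SVM() );

    if( pipeline.train( trainingData ) ) {
        VectorFloat newProfile( numDimensions );  // a live capacitive profile
        pipeline.predict( newProfile );
        UINT predictedLabel = pipeline.getPredictedClassLabel();
        (void)predictedLabel;  // use the label to trigger an interaction
    }
    return 0;
}
```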

Check out the video that accompanied the Touché CHI 2012 paper: 

There have been a few other open-source implementations of touch sensing based on Touché that I've come across before, but the one provided in the ESP project seemed to be the easiest to set up, and the most usable to work with.

The authors continued their work by showing how Touché could be used to interact with plants in the Botanicus Interacticus project, which was displayed in an exhibition at SIGGRAPH 2012:


I have two plants on my desk: a fern and an air plant. I really enjoy the way they add some color to my work area, and am grateful for their presence.

I wanted to see if they could talk to me. 

Touché connected to a fern

Touché connected to an air plant

The fern

I first experimented with the fern. As suggested in the Botanicus Interacticus paper, I inserted a simple wire lead into the soil of the plant. This allowed the ESP system to measure the conductive profile of the plant as I touched it.

After doing some experiments, I was able to easily train the ESP system to recognize whether I was touching a single leaf

or whether I was lightly caressing the tops of the leaves with the palm of my hand.


Here's what that looked like in action:

 

I also experimented with whether the system could detect which individual leaf I was touching, but was not able to get consistent results. I discuss my theory on why this may be the case at the end of this post.

 

The air plant

I experimented with the air plant next, and successfully trained the ESP system to discriminate between having my hand at rest on top of the plant:


and "tickling" the top of the plant:


Here's what the system looked like in use:

What I was not successful at was training the ESP to discriminate between the act of touching a leaf, and rubbing my finger on a leaf as shown below:


I tried moving the alligator clip from one of the leaves to the root - my theory being that perhaps the capacitance wasn't being spread evenly throughout the plant.

Connecting the electrode on a leaf

Connecting the electrode near the root

This appeared to have no effect, however.

I was a bit surprised at this - given the subtlety of touch that Touché seemed capable of measuring, I had thought the system would be able to discriminate between touching and rubbing a single leaf. That said, there could be some missing factor (such as the amount of training data or sessions) that I'm not yet aware of.

Further Learning Goals

Using Regression

In the Botanicus Interacticus video, the authors show that they are able to determine where along a long plant stem it is being touched, and to interact with it in a way that resembles a slider moving continuously between two points.


The Touché system uses a Support Vector Machine, a machine learning algorithm capable of both classification and regression - two types of machine learning tasks. In classification, a machine learning system detects what type of event has occurred - in this case, the type of touch. In regression tasks, a machine learning system maps inputs to a continuous value - so, for instance, you could map the position of a hand between two points on a plant stem to the value of a volume slider.

In the ESP system, classification is currently supported; regression is not. In order to use Touché to control a continuous stream of values between one point and another, the ESP system would need to be modified to support regression. A sketch of what the regression side could look like in the GRT is shown below.
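To make that concrete, here's a hedged sketch of a regression pipeline in the GRT. This is not something the ESP currently supports; it only illustrates the GRT side of such a modification. The dimensions and the choice of regression module are placeholder assumptions:

```cpp
// A sketch of a GRT regression pipeline: instead of a class label, the
// pipeline outputs a continuous value (e.g. position along a plant stem).
// Dimensions and the regression module are placeholder choices.
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    const UINT numInputDimensions = 160;   // the capacitive profile
    const UINT numTargetDimensions = 1;    // e.g. position along the stem, 0..1

    RegressionData trainingData;
    trainingData.setInputAndTargetDimensions( numInputDimensions, numTargetDimensions );
    // trainingData.addSample( profile, target );  // labeled with a continuous value

    GestureRecognitionPipeline pipeline;
    pipeline.setRegressifier( LinearRegression() );

    if( pipeline.train( trainingData ) ) {
        VectorFloat newProfile( numInputDimensions );
        pipeline.predict( newProfile );
        // A continuous value (e.g. to drive a volume slider) instead of a class label:
        VectorFloat output = pipeline.getRegressionData();
        (void)output;
    }
    return 0;
}
```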

(For more info on the principles behind classification and regression, check Rebecca Fiebrink's Kadenze course, Machine Learning for Musicians and Artists). 

Finer Discrimination

Determine whether it is possible to detect touches of individual leaves, as opposed to detecting that "a leaf" has been touched. It may be that this is possible, but dependent on the type of plant involved - a plant with thicker, more "solid" leaves may return a conductive pattern that's better at discriminating between individual leaf touches than the thin, loose leaves of the fern.

Gesture Timeouts

If you watch the videos of the Touché system in action above, you may have noticed that there are occasional instances in which there is a short "bounce" during the transition between one gesture class and another. In the air plant example, when the hand moves from the "Resting Hand" position to the "No Hand" position, the ESP system briefly and falsely recognizes a "Tickling" gesture.

A potential remedy for this situation is to add a Class Label Filter to the ESP's gesture recognition pipeline. This module allows the system to filter out erroneously recognized gesture labels. Adding this filter to the ESP pipeline is something I'm planning on exploring in future experiments; a rough sketch of the idea is shown below.
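Here is a minimal sketch of what adding the filter could look like in GRT code (not the ESP's actual pipeline). The filter only lets a predicted label through once it has appeared a minimum number of times within a small buffer, which smooths out short "bounces" between gesture classes. The parameter values are placeholders to tune:

```cpp
// A minimal sketch of adding a Class Label Filter as a post-processing module
// to a GRT pipeline. The minimum count and buffer size are placeholders.
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    GestureRecognitionPipeline pipeline;
    pipeline.setClassifier( SVM() );

    // Require a label to appear at least 4 times in the last 5 predictions
    // before it is reported, filtering out brief misclassifications.
    pipeline.addPostProcessingModule( ClassLabelFilter(4, 5) );

    return 0;
}
```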

 

Code

The Processing sketches shown in the videos, and the ESP session data, can be found here. You'll need to have Processing installed on your system to run the sketches, and to have followed the installation guide for setting up the ESP if you want to run the included sessions.

On NDA's and vague Non-Compete Agreements as a junior employee

When I was working at a small software company in the Midwest as a junior developer, I was asked to sign an NDA / Non-Compete agreement. It was an amended version of the agreement I had signed upon first starting my employment. I had signed that first agreement without asking any questions - because I simply hadn't known any better.

At the time of the second agreement, the company had been dealing with some IP infringement issues, so it was logical and understandable that they would ask their employees to sign an updated agreement.

However, upon reading this agreement, I felt a bit uncomfortable with how vague it was.

A redacted version of the agreement is shown below - as you can see, it is as vague as possible in defining any of its terms, giving full advantage to the company. In particular, Section 6, the Non-Compete section, is troublingly vague - "direct competition" is an incredibly nebulous term, and the agreement purports to cover several different industries...all of which span an extremely broad range of activities.

 

After reading the agreement, I met with the COO of the company to express my concerns and ask questions intended for clarifying the scope of the document.

During this process I was given verbal assurances that the company would only seek to enforce it if I went to work for one of the company's direct competitors - even though this was not spelled out in the agreement.

I still felt uncomfortable with this, but in the end I signed the agreement for the rather simple reason that...I needed to keep my job.

Roughly a year after I had left employment with this particular company, I was in another process of job searching, and was interviewing with a startup that was in one of the fields listed in Section 6, the Non-Competition section.

I decided to ask a lawyer to review the agreement, to be on the safe side, and because I was curious as to whether or not the agreement would even be enforceable. In addition to their own law practice, the lawyer I talked with also had experience working with startups in the Venture Capital industry.

After I sent over the NDA for them to review, they responded with this assessment:

This NDA would not be enforceable, as there is no clear scope on the limitation of enforceability. That does not mean that they may not try to enforce it, but this would not stand under Ohio law (assuming that is the law governing this doc). This is clearly a form off the internet and the person copy and pasting it together clearly had no idea what they were doing.

I couldn’t help but laugh upon reading this...simply put, it had been more or less my gut reaction, as well.

So, a few lessons from this experience:

1) It’s normal to feel pressure to sign these types of agreements...as I had mentioned, I signed it out of the very normal fear of not wanting to lose my job in the event that I refused to sign it.

2) That being said, do not be afraid to ask questions regarding NDAs / Non-Compete Agreements.

3) If possible (especially in regards to a full-time position), have a lawyer review the agreement!

4) Be willing to say no to signing an agreement that's either too restrictive, or makes you feel uncomfortable.

5) If you're an employer, and you absolutely feel the need to have your employees sign an NDA / Non-Compete, be sure to actually take the time to hire a lawyer to write it for you, rather than trying to cobble it together yourself.

 

In closing, I want to clarify one thing - this isn't to say that all NDAs are bad - far from it. There is definitely a time and place for them. However, a well-written NDA will be clear in its scope and enforceability. Just always be sure you read something before you sign it!

Tools for Machine Learning and Sensor Inputs for Gesture Recognition

The past several years have seen an explosion in machine learning, including in creative contexts - from hallucinating puppyslugs to generating new melodies, machine learning is already beginning to revolutionize the way artists and musicians execute their craft.

My personal interest in the area of machine learning relates to using it to recognize human gestural input via sensors. This interest was sparked from working with Project Soli as a member of the Alpha Developer program.

Sensors offer a bridge between the physical world and the digital. Rich sensor input combined with machine learning allows for new interfaces to be developed that are novel, expressive, and can be configured to a specialized creative task.

In order to make sense of the potential that sensors offer technologists and artists, it's often necessary to use machine learning to build a system that can take advantage of a sensor's capabilities. Raw sensor data isn't always easy to make sense of right away.

With something like an infrared sensor, you can get a simple interactive system working with some if-then-else logic. Something like LIDAR, or a depth-field camera, on the other hand, will have a much larger data footprint - the only way to make sense of it is to use machine learning to recognize the patterns in the real-time data the sensor is gathering. It's important to be aware of what type of data a sensor is capable of providing, and whether it will be appropriate for the interactive system you are trying to build.
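For the simple end of that spectrum, a single distance sensor really does need nothing more than a threshold. The sketch below is a quick illustration of that if-then-else style of interactivity; the pin and threshold values are assumptions about the particular sensor and circuit:

```cpp
// A quick illustration of threshold-based interactivity: an analog infrared
// distance sensor drives an LED with plain if-then-else logic.
// Pin and threshold values are circuit-dependent placeholders.
const int irPin = A0;
const int ledPin = 13;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int reading = analogRead(irPin);   // value rises as an object gets closer (sensor-dependent)
  if (reading > 400) {               // placeholder threshold
    digitalWrite(ledPin, HIGH);      // something is near: react
  } else {
    digitalWrite(ledPin, LOW);
  }
  delay(20);
}
```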

Often, the more complex the data is that the sensor provides, the more interesting things you can do with it.

I wanted to go over some of the open-source tools that are currently available to the creative technology community to leverage the power of sensors for adding gestural interactivity to creative coding projects:

  • Wekinator

  • Gesture Recognition Toolkit / ofxGrt

  • ESP

 

Wekinator

The Wekinator is a machine-learning middleware program developed by Dr. Rebecca Fiebrink of Goldsmiths, University of London. The basic idea is that it receives data via OSC (Open Sound Control) from a program that's acquiring the data from the sensor, such as an Arduino or a Processing sketch. Wekinator is used to train a machine learning system on this incoming data to recognize which gesture has occurred, or to map the progress of a gesture between its start and end to a value that can be used to control a parameter range. These values are sent out from Wekinator via OSC, and can then be received by a program that maps them to control audio/visual elements. A sketch of the sending side of this pattern is shown below.
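Here's a small openFrameworks/ofxOsc sketch of that sending side: a sensor value is packed into an OSC message and sent to Wekinator. The port (6448) and address (/wek/inputs) are Wekinator's usual defaults, but check them against your own Wekinator configuration; the mouse position stands in for a real sensor reading:

```cpp
// A small openFrameworks sketch that sends one input value to Wekinator over
// OSC. Port/address are Wekinator's usual defaults; verify in your setup.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscSender sender;

    void setup() {
        sender.setup("127.0.0.1", 6448);   // Wekinator running locally
    }

    void update() {
        float sensorValue = ofGetMouseX(); // stand-in for a real sensor reading

        ofxOscMessage m;
        m.setAddress("/wek/inputs");
        m.addFloatArg(sensorValue);        // one float per input Wekinator expects
        sender.sendMessage(m, false);
    }
};

int main() {
    ofSetupOpenGL(320, 240, OF_WINDOW);
    ofRunApp(new ofApp());
}
```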

Lots of examples are provided showing how to use a plethora of sensors with the Wekinator, such as a Wiimote, Kinect, or Leap Motion, and how to map them to parameters in various receiver programs.

What's great about the Wekinator is that you don't have to know much about how machine learning works in order to use it - its strength is in the fact that it's user friendly and easy to experiment with quickly.

If you're interested in exploring how you can use Wekinator to add interactivity to your projects, I highly recommend Dr. Fiebrink's Machine Learning for Musicians and Artists course on Kadenze.

 

Gesture Recognition Toolkit

The GRT is a cross-platform toolkit for interfacing with sensors for gesture-recognition systems. It's developed and maintained by Nick Gillian, who is currently a lead machine learning researcher for Google's Project Soli.

The GRT can be used as a GUI-based application that acts as middleware between your sensor input via OSC, and a receiver program that reacts to the gesture events detected by your sensor. Its usage follows the same pattern as the Wekinator.

However, the real benefit of the GRT is that you can write your gesture-recognition pipeline, train your model, and use your code on multiple platforms. This is useful if you want to prototype a gesture-recognition system on your desktop that may need to be deployed on custom, embedded hardware. Additionally, you can write a custom module for the GRT in order to customize your pipeline based on some unique characteristic of the sensor you're using. A rough sketch of this workflow is shown below.
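The sketch below illustrates the train-on-desktop, deploy-elsewhere idea: the pipeline is trained and saved to a file, which the same GRT code can later load on another platform. It assumes a reasonably recent version of the GRT (which provides save()/load() on pipelines and datasets); file names and data are placeholders:

```cpp
// A sketch of training a GRT pipeline on the desktop, saving it, and loading
// it again elsewhere. Assumes a recent GRT; file names are placeholders.
#include <GRT/GRT.h>
using namespace GRT;

int main() {
    GestureRecognitionPipeline pipeline;
    pipeline.setClassifier( SVM() );

    ClassificationData trainingData;
    if( trainingData.load("training-data.grt") ) {    // recorded on the desktop
        if( pipeline.train( trainingData ) ) {
            pipeline.save("trained-pipeline.grt");     // ship this file with the app
        }
    }

    // Later, on the target device, the same pipeline is restored:
    GestureRecognitionPipeline deployed;
    deployed.load("trained-pipeline.grt");
    return 0;
}
```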

Be sure to read Nick’s paper on the toolkit in the Journal of Machine Learning Research.

ofxGrt

The GRT also comes wrapped as an addon, ofxGrt, for openFrameworks, a C++ toolkit for creative coding. This makes it extremely easy to integrate into new or existing openFrameworks projects. Combined with the numerous other openFrameworks addons, ofxGrt allows coders to integrate various types of sensors, adding an interface to the physical world to their creative projects.

 

Example-based Sensor Predictions

The Example-based Sensor Predictions project by David Mellis and Ben Zhang was created to make sensor-based machine-learning systems accessible to the Maker and Arduino community. Though the maker community was very familiar with how to work with sensors, fully utilizing their potential for rich interactions really wasn't possible in the Arduino ecosystem until this project was developed.

Here’s a short video overview of the project from the creators:

 

The project is built so that users can interface with sensors via Processing, and the gesture recognition pipeline is built using the GRT. It contains four examples:

  • Audio Detection
  • Color Detection
  • Gesture Recognition
  • Touché touch detection

Developers are also given API documentation for how to write their own sensor-based gesture recognition examples as part of the ESP environment.

I've recently been doing some experiments with the ESP project, and hope to share some of those examples soon.

There’s still a lot of work to be done exploring how new sensing technologies can be leveraged to improve the way that people interact with the devices around them. In order to understand how these technologies may be useful, artists and musicians need to be at the forefront of using them...pushing sensors and machine learning systems to the limits of how they can be used in expressive contexts. When artists push interfaces to their full potential, everyone benefits as a result - as Bill Buxton said in Artists and the Art of the Luthier, “I also discovered that in the grand scheme of things, there are three levels of design: standard spec., military spec., and artist spec. Most significantly, I learned that the third was the hardest (and most important), but if you could nail it, then everything else was easy.”

The tools described above should encourage artists, musicians, designers, and technologists to explore using sensor-based gesture recognition systems in their creative practice, and imagine ways in which interactivity can be added to new products, pieces, designs, and compositions.

EYEO 2016: Observations on Toolmaking

I'm writing this having returned from the 2016 EYEO Festival, a gathering of creative technologists, designers, and artists from all over the world. It was an amazing experience, and I highly recommend going if you ever have the chance to do so. There were many things I enjoyed about it...the excellent talks, getting to meet people I've only talked to on Twitter for the first time, and the late night dancing at Prince's nightclub. 

By the end of the week, I noticed an underlying theme to several of the talks I went to and conversations I was part of, which was that of designing and creating tools. 

The first talk I noticed this theme in was Hannah Perner-Wilson's talk, "The More I make, the more I wonder why". She said there was a point in her e-textiles practice where she started to "make tools and not parts". This was specifically reflected in her OHMHOOK  (ohm meter / crochet hook), which lets her measure the resistance value of the circuits she creates with conductive thread. 

This tool is able to help Hannah work much more quickly...instead of having to put down her crochet hooks, pick up the leads of her ohm meter to test the circuit's resistance, put down the leads, and pick up her hooks to start crocheting again, she could test the resistance value of the circuit with the same tool she uses to create the circuit in the first place. 

The topic of tools also came up in conversation. Derek Kinsman and I were talking about the tendency of some programmers to fetishize certain languages, frameworks, and development patterns at the expense of whatever languages, frameworks, and patterns are the most appropriate for the task you are trying to accomplish. (This is something Derek is particularly passionate about...on his website, he states that he  "...believes finding the right solution is more important than finding the right problem and that said solution determines what tools will be used.")

Since creative technologists continually push the boundary of what technology can do (which often involves using it in ways it was never intended for), we frequently run into instances where the tools we're trying to use just can't do the thing we're trying to use them for. The solution is often to modify that tool, or to create a new one. In his talk, Ben Fry said that the Processing project exists because of the frustration he experienced watching students struggle with programming while working on design projects --- they would spend so much time trying to work out how to do something in their code that they would lose the bigger picture of what they were working on. As a result, Processing has introduced countless people to creative coding.

During their practices, Hannah, Ben, and many others pushed up against the limits of what their tools could do while trying to accomplish their creative or pedagogical goals. As a result, they had to create their own tools that would serve their needs better than any existing ones. By open-sourcing them, their work has benefited the creative practice of many others as well.

Thoughts on Swift, first pull-requests, and the FOSS community.

Since Swift was open-sourced, there have been many pull requests on the project. Many of these have been grammatical or spelling fixes. Unfortunately, both on Twitter and on GitHub, there have been snide comments dismissing these pull requests as insubstantial.

Chris Lattner, the Swift project architect, thinks otherwise: 

(You can find the specific tweet here).

For those of us who are already active in the open-source software community, it can be easy to forget this. It may be a good idea to take a look and see what your first pull request was on GitHub. You can find out what that was on First Pull Request. Chances are good that for many people, their first pull request was a typo fix in a README file, or fixing some formatting in a header file.

It's important to remember that all types of improvements to a project are important, even if they're just typo fixes. There's always a chance that the people making these "small" pull-requests on the newly open-sourced Swift project will make improvements in the future in more in-depth areas of the project as they continue to explore the code base. 

On a final note, if you are looking to get involved in the open-source community but aren't sure where to start, take a look at Your First Pull Request. They post GitHub issues that would be appropriate for "first timers" to tackle. 

Happy coding!

Arduino + AudioKit Demo

I've added a new OS X project on GitHub that shows how a simple oscillator created with AudioKit can be controlled with a physical interface.  It's written in Swift, and uses the ORSSerialPort library to interface with an Arduino Uno. I've published a demo video of the project on Vimeo. 

A detailed description of how the app works can be found in the project's README file. 
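For a sense of the Arduino side of this kind of setup, here's a hedged sketch of the general pattern such projects use - this is not the project's actual sketch (see the README for that), just an illustration of streaming a sensor value over serial for the desktop app to pick up. The pin, baud rate, and message format are assumptions:

```cpp
// A hypothetical Arduino sketch for a serial-controlled oscillator setup:
// read a potentiometer and stream its value over the serial port.
// Pin, baud rate, and message format are assumptions, not the project's code.
const int potPin = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int value = analogRead(potPin);   // 0-1023 from a potentiometer
  Serial.println(value);            // one reading per line for easy parsing
  delay(20);
}
```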

I hope you find it useful! If you have any questions, or use it as part of your project, please let me know! 

Nick

nick (at) audiokit (dot) io

AudioKit 1.2 and the future of Audio Development

Recently, we launched version 1.2 of AudioKit. We've included what we like to call sensible defaults for most operations. With sensible defaults, you can create an instance of an AudioKit object without having to initialize the object's parameters.  

Additionally, most operations now include tests. These tests let you hear what an individual operation is capable of doing, so it's easier for you to figure out what operations you need to get the sound you want!

I'm extremely excited for this release, as I think it's going to go a long way in helping iOS and OSX developers use audio in their apps in new and exciting ways. 

If you're developing for iOS or OS X, for example, you're probably at least aware of Core Audio. While Core Audio is extremely powerful, its low-level nature makes it difficult to prototype and deploy quickly...you end up spending so much time working with low-level samples and buffers that you're not able to focus on making great sounds!

Before AudioKit, there were other 3rd-party audio development solutions, such as libpd and Csound for iOS. However, both of these required developers to use other environments: Pure Data and Csound. If you were a developer and wanted to make interesting audio, you were stuck either trying to decipher Core Audio's cryptic nature, or learning how to use another environment.

That's why AudioKit is so special...it allows developers to implement audio in a high-level way using Objective-C or Swift, without having to learn another environment such as Pure Data or Csound. 

In order for audio to be integrated into more and more applications, it has to be easier for software developers to work with. Csound for iOS started to solve this problem, but for the actual audio implementation you still had to use Csound...which, at its best, can be described as having a rather cryptic syntax, even if the audio engine is extremely powerful.

And I'm not saying that there is no need for visual programming environments such as Pure Data or Max/MSP (I for one am extremely excited about Max 7). What I am saying is that software developers in the 'traditional' sense who want to write apps for iOS or OS X will be able to leverage the power of AudioKit.

I believe that with AudioKit, developers will be able to create new and interesting experiences for users through high-level audio. And we're going to keep on improving AudioKit as we go.

Tactile Interactions with Multi-Touch Apps

As much as multi-touch devices have enabled new forms of music creation and performance, they still lack one thing that traditional, acoustic instruments have: tactile feedback. However, Dutch designer Samuel Verburg of Tweetonig has created a solution to this problem with Tuna Knobs: rotary knobs that attach to any capacitive surface (Wired UK has a great write-up on the project).



(Image taken from Tweetonig website)

The video on the project's Kickstarter page humorously shows the problem that musicians and DJs have with many multi-touch music apps: they're somewhat clumsy to interact with during a performance. 

Tuna Knobs are not the first example of this idea. Filip Visnjic has an article on The Creative Applications Network about a similar project from designers at TEAGUE. In my opinion, Visnjic is spot on when he says "I think a lot of people fail to acknowledge that the future are NOT touch screen devices but those that combine both the physical and touch input...". Interestingly enough, the only comment on the article is from someone saying how much better these knobs would be than a virtual pan knob in an audio app.

Most multi-touch devices only enable what Bret Victor calls 'picture under glass' interaction. In other words, while these devices are capable of allowing complex gestural interactions, we're generally only manipulating images on a glass screen. Our fingers only have the feeling of touching glass. Tuna Knobs seems to provide one niche-specific use for this problem: instead of turning an image of a knob with one finger, you're turning an actual, physical rotary knob in the same manner as if you were using an analog synthesizer. 

Given that Tuna Knobs reached its Kickstarter goal in 21 hours, it's safe to say that there is a demand for additional modes of interaction in consumer computing devices. My prediction is that more and more work will be done to extend and augment the experience of multi-touch computing, but (for now) it will continue to come from designers such as Verburg, not consumer electronics manufacturers.

Workshop with the Fuse Factory

This past Saturday, I had the pleasure of giving an Introductory Workshop on Pure Data at the Fuse Factory in Columbus, Ohio. We covered a variety of topics, ranging from how to install and set up PD to basic synthesis techniques, video effects with GEM, and interacting with an Arduino.

I've put the presentation and patches on GitHub. If you have any questions, feel free to get in touch!

Many thanks to Alison Colman for organizing the event. 

 

Intro to Pure Data Workshop

Next week, I'll be giving an Introduction to Pure Data workshop at the Fuse Factory in Columbus, Ohio. You can sign up for it here. I'm going to be talking about:

-Installing and getting started with Pure Data
-Making a basic synthesizer
-User interaction
-GEM and visuals
-PD with other programs
-PD on your phone
 

Hope to see you there!