Individual Research – Community/Online Storytelling

•December 13, 2007 • Leave a Comment

With the “Web 2.0” revolution – which is more a collection of buzzwords than a truly life-altering online experience – there have been a number of new attempts at applying new technologies to user-generated content. Of particular note are attempts at creating a non-linear, web-based storytelling experience in which multiple users form a collective narrative combining all of their experiences. This sort of experience may be viewed as stemming from popular pastimes such as blogging, which enables users to keep online journals and share their thoughts, feelings, and ramblings. However, these new collective storytelling outlets are far more involved than simple blogs and offer considerably greater potential.

One such experience is a product of Drexel University’s Digital Media program, Philbert and Dodge. The site presents an animation about the two characters for which it is named, but the true value of P&D lies not in the example story it tells, but in the potential it provides for a network of storytellers all contributing to the same experience. It supports multi-user log-in and asset management, by which anyone connected to the story with a user name and password may cut their own animation with existing assets or upload new ones. Additionally, it automatically packages new animations as Flash video for immediate streaming playback, as well as in downloadable formats. Its asset management system is also advanced, providing the ability to tag assets as belonging to a clip and to immediately retrieve all of the assets that make up a particular section of animation.
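The tagging-and-retrieval behavior described above can be sketched as a simple index. This is a hypothetical illustration only – the class, method, and asset names are invented, not P&D’s actual code:

```python
# Hypothetical sketch of a clip-based asset index like the one the
# Philbert and Dodge site is described as providing.
from collections import defaultdict

class AssetLibrary:
    def __init__(self):
        # clip name -> set of asset identifiers tagged as belonging to it
        self._clips = defaultdict(set)

    def tag(self, asset_id, clip):
        """Tag an asset as belonging to a clip."""
        self._clips[clip].add(asset_id)

    def assets_for(self, clip):
        """Immediately retrieve every asset that makes up a clip."""
        return sorted(self._clips[clip])

lib = AssetLibrary()
lib.tag("philbert_walk.swf", "opening")
lib.tag("dodge_jump.swf", "opening")
print(lib.assets_for("opening"))  # ['dodge_jump.swf', 'philbert_walk.swf']
```

A real system would also have to track uploads and rendered output, but the clip-to-assets mapping is the core of the retrieval feature described.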

An interesting Java-based exploration in storytelling is We Feel Fine.

This is a fascinating experience to discover. Essentially, users may connect via the We Feel Fine API and upload their own feelings. Feelings can be tagged with metadata that allows them to be sorted according to the gender of the person sharing the feeling, the city or location, the weather at the time of posting, and other traits that may help others understand where the feeling – shared in the form of a brief comment – is coming from. The site also presents these feelings in “movements”: animated views in which each feeling, represented by a bouncing dot, is sorted and displayed according to the chosen movement. It is quite fun to manipulate the feelings, throw them about, and experience them, and at the same time the site provides a surprisingly deep feeling of connection to the collective intelligence shared by humanity as a whole. Though the identities of those posting the feelings are not shared, there are a considerable number of heartfelt confessions and ruminations that are surprisingly affecting.
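As a rough illustration of that kind of metadata sorting, here is a minimal sketch; the field names and sample data are invented for the example and do not reflect the actual We Feel Fine API:

```python
# Invented sketch of filtering We Feel Fine-style entries by metadata.
feelings = [
    {"feeling": "hopeful", "gender": "female", "city": "Boston", "weather": "snow"},
    {"feeling": "tired",   "gender": "male",   "city": "Austin", "weather": "sunny"},
    {"feeling": "calm",    "gender": "female", "city": "Boston", "weather": "rain"},
]

def matching(entries, **criteria):
    """Return entries whose metadata matches every given criterion."""
    return [e for e in entries
            if all(e.get(k) == v for k, v in criteria.items())]

print([e["feeling"] for e in matching(feelings, gender="female", city="Boston")])
# ['hopeful', 'calm']
```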

While blogs and similar means of storytelling are less technologically advanced, or perhaps less visually interesting, than the aforementioned means of sharing, there are occasions on which a blog serves as a peculiar or special example of community storytelling. One such blog is My Baby Monsters. The blog is centered around stories told by a seven-year-old girl. The website is built and maintained by her father and serves as a locus for children’s stories written not only by the blog’s main contributor (seven-year-old Josie), but also linked from other children’s story sites. Moreover, the blog features several story topics or beginnings and, through the comment system common on blogs, encourages other children to add to the stories in their own words. It is a digital example of an old method of storytelling by which one individual adds a piece to the end of the tale and passes it on. The only difference is that this version draws on potentially an entire world of kids for source material.
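The add-a-piece-and-pass-it-on structure is simple enough to sketch in a few lines; the names and sample text here are invented for illustration:

```python
# Sketch of the pass-it-on story structure the blog digitizes: each
# contributor appends a piece, and attribution is kept per piece.
class Story:
    def __init__(self, author, opening):
        self.pieces = [(author, opening)]

    def add(self, author, text):
        """Append the next piece of the story and record who wrote it."""
        self.pieces.append((author, text))

    def full_text(self):
        return " ".join(text for _, text in self.pieces)

story = Story("Josie", "Once there was a baby monster.")
story.add("another child", "It lived under a very squeaky bed.")
print(story.full_text())
# Once there was a baby monster. It lived under a very squeaky bed.
```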

Interface Design – Touch-Screen Interfaces

•December 13, 2007 • Leave a Comment

Another interesting area of innovation in interface design is touch-screen interfaces. By now the news of Apple’s iPhone and iPod Touch products is no longer news, but the wave of touch-screen interfaces has only just begun to strike popular culture and gain media attention.

Perhaps the most well-known development in touch-screen technology intended for heavy use is Microsoft’s Surface. As a combined display/interface technology, it does a great deal to bring a new facet of interactivity to human-computer interaction. Though it may not be as dramatic a departure from the typical mouse/keyboard desktop metaphor as brain-computer interface technologies and the non-analog methods of control they provide, the touch-based interaction that Surface offers is certainly a step forward. And with some of the specific interface designs the Surface development team has implemented – such as the seamless transfer of images or songs to portable devices like cellular phones or cameras placed directly on the surface – they have made tremendous strides in interface design. Of particular note is that manipulation of cell phones and cameras, and the tremendous number of possibilities Surface could provide. It may offer up interesting new ways to deal with data as well as serve as an entertaining centerpiece; just imagine playing board games or painting virtually on a surface capable of saving the state of a work of art or game without requiring set-up or clean-up. It is a tremendous innovation – and hopefully one that will see widespread use before long. Below are some videos illustrating some of the uses Surface’s development team has imagined for the technology.

Possibilities promo piece:

Commercial Surface preview:

Popular Mechanics look at Surface:

Lastly, an interesting look at Surface’s use of physical objects to create a sort of video jigsaw puzzle:

Similar to Surface is another touch-screen technology made by KsanLab – a company dedicated to creating User-Generated Content tools. Their prototype interface, Touch Me Tender, is very similar to Surface but perhaps just slightly less robust. It does provide photo manipulation and painting tools, however, and seems like it could well be marketed as a cheaper alternative to Surface.

Finally, another similar interface for music creation and mixing is reacTable – a touch-screen table that incorporates physical objects into its interface design, allowing individuals to interact intuitively with music given visual feedback and physical tools for the interaction. It lacks the overall commercial viability, and the scope to impact daily life, to the extent that Surface and Touch Me Tender do, but it serves as an encouraging example of the widespread adoption of this new technology and a very innovative new use and design.

Interface Design – Brain-Computer Interfacing

•December 13, 2007 • Leave a Comment

In poring over random interface design ideas, schemes, and notions, the general focus of interface design seems to rest largely in the field of human-computer interfaces. This field is obviously an important one, given the widespread use of personal computers today, and it is a field that – thanks to a few innovative and challenging notions and concepts – I believe will soon undergo some major revolutionary changes that forever alter the way we use computers.

Brain-Computer Interfacing is an emerging concept that allows for computers to process the data and signals emitted by the human brain. Tackled in a few different ways by various experimental groups, the potential for these technologies is enormous. With brain-computer interfacing, we may be able to interact with data and computers simply by issuing commands via thought – and eventually companies like Sony aim to raise BCI to a level with which computers can communicate with the brain in its native language of electrical impulses in such a way that computers can induce sensations in the brain just as it receives commands by thought.

Here are some examples of this innovative new technology:

University of Washington’s Neural Systems Group

At the University of Washington, the Computer Science department has created the Neural Systems Group. In addition to its brain-computer interface work, the group has also made interesting discoveries and breakthroughs in the realm of robotics. Their non-invasive BCI, pictured below, is an interesting solution to the problem of effectively connecting computers to the human brain.

Neural Systems Group - BCI Headgear

Another very similar product that may see even more widespread use is the OCZ Neural Impulse Actuator. Simply put, it is a headband that picks up on signals from the brain in much the same way that an electrocardiograph monitors the human heart. Of note is its specific target market of gamers – it may be that this sort of technology first sees widespread commercial use in the field of games before it spreads to other (potentially more societally useful) markets.
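To illustrate the general idea only – OCZ has published no algorithmic details, so everything below is an invented sketch – a controller like this might smooth a noisy bio-signal and fire a game action each time the signal crosses a threshold:

```python
# Invented sketch: smooth a noisy bio-signal and emit a game action on
# each upward threshold crossing (not OCZ's actual algorithm).
def smooth(samples, window=2):
    """Moving average over the last `window` samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def to_actions(samples, threshold=0.6):
    """Emit 'fire' each time the smoothed signal rises past the threshold."""
    actions, above = [], False
    for value in smooth(samples):
        if value > threshold and not above:
            actions.append("fire")
        above = value > threshold
    return actions

signal = [0.1, 0.2, 0.9, 0.95, 0.3, 0.1, 0.8, 0.85]
print(to_actions(signal))  # ['fire', 'fire']
```

A real device would of course calibrate per user and per signal source, but the threshold-to-keypress mapping is the essence of turning a raw bio-signal into game input.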

OCZ's Neural Impulse Actuator in action

Unfortunately, few details are available on this product, though some articles provide a cursory glance at what it is expected to look like and how it is likely to perform:

Bit-Tech: OCZ Controls Games With Your Mind

Gizmodo: OCZ’s Neural Impulse Actuator Lets You Play Games With Your Mind

Legit Reviews: OCZ’s Neural Impulse Actuator at CeBIT 2007

BCIs have been studied in various locations abroad; another university program that has found results similar to those of the University of Washington’s Neural Systems Group is the Cognitive and Social Systems Group at the Laboratory of Computational Engineering, part of the Helsinki University of Technology. Their feedback and results are somewhat similar to those of the other products mentioned (current BCI products seem somewhat limited in the scope and accuracy of their signal interpretation), but of particular note is their combination of a BCI with haptic feedback, giving the user not only the freedom to think their commands but also physical, tactile feedback.

CSS Group, Helsinki University of Technology

One final group that has achieved a similar device, from a physical and functional standpoint, is the Ushida & Tomita Laboratory. Their BCI appears very similar to the other products listed above but differs in its application. Though the site for the Ushida & Tomita Laboratory is not in English, it does provide videos showcasing their BCI being used to let players of the online game Second Life control their virtual avatars. According to the descriptions on the site, the user imagines moving their legs to walk forward or backward and imagines moving their arms to turn right or left, while the research team is hard at work developing other functions and movements.

Computational Sensing – Robot Interaction and Brain-Computer Interface

•December 13, 2007 • Leave a Comment

Android Science (Scientific American)
Japanese develop ‘female’ android (BBC News)

These articles describe an android – a robot designed to mimic and resemble a human – designed by Professor Hiroshi Ishiguro of Osaka University. Named Repliee Q1Expo, “she” has been programmed with many subtle motions, such as breathing, blinking, and hand gestures. According to Ishiguro, those who interact with her sometimes even forget that she is not a human. He stresses the importance of a robot’s appearance, explaining that making one which looks like a human “gives the robot a strong feeling of presence.”

The physique and facial features of this robot were based on Japanese Newscaster Ayako Fujii.

Under Repliee Q1Expo’s 5mm thick silicone skin lies a network of piezoelectric pressure sensors, giving her the ability to differentiate various touch sensations and react appropriately. She is also equipped with tiny video cameras to record environmental stimuli and observe human facial expressions. Her senses are not fully contained within her body, however, as she requires floor sensors to aid in detecting human proximity and following (with eyes and head rotation only; she cannot walk) human movement.
She currently uses only motion detection and facial recognition to gauge a person’s emotional state. Perhaps in future iterations, humanoid robots such as Ishiguro’s could have all of their mechanics and devices contained within their “bodies” and utilize more sophisticated technologies, such as a wireless neural sensor interface, in addition to their current methods.
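As a toy illustration of how pressure readings might be turned into distinct touch sensations – the thresholds and categories below are invented, not Repliee Q1Expo’s actual software:

```python
# Invented toy sketch: turn a piezoelectric sensor's peak pressure and
# contact duration into a touch label the robot could react to.
def classify_touch(peak_pressure, duration_s):
    """Crude rules distinguishing a slap, a stroke, and a pat."""
    if peak_pressure > 8.0 and duration_s < 0.2:
        return "slap"    # hard and brief
    if duration_s > 1.0:
        return "stroke"  # light, sustained contact
    return "pat"         # everything in between

print(classify_touch(9.5, 0.1))  # slap
print(classify_touch(2.0, 1.5))  # stroke
print(classify_touch(3.0, 0.4))  # pat
```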

This type of technology is already emerging in the field of medical science. Many companies are working toward improving the quality of life for disabled individuals, such as those who are sound of mind but confined to a wheelchair due to paralysis or another disability. By developing wearable devices which can control other machines and communications by converting human thoughts into electrical signals, these companies hope to fulfill the human desire for independence.

BrainGate Neural Interface Systems (Cyberkinetics)

The BrainGate Neural Interface System is such a device. It senses the brain’s electrical activity directly, and it is not merely a theoretical ideal: it actually exists and is undergoing clinical trials at select rehabilitation centers. The tests focus mainly on controlling a cursor on a computer screen via a surgically implanted sensor attached to the part of the brain responsible for movement. This sensor protrudes through the skull and skin and is connected by a wire, through an interpretive device, to the computer. By combining cutting-edge knowledge from the fields of neurology and computer science, the very serious issues of freedom and independence for people with disabilities may one day be addressed.
Many modern homes have integrated electronic systems such as security alarms and thermostats that can be controlled from a single wall panel interface. A device such as the BrainGate could enable a person with physical limitations to manage not just a personal computer, but an entire household of devices and functions with their mind. Although the device currently has some issues with the inconvenient size and weight of the wearable device and the need for frequent adjustments, future advances resulting in refinement of its receivers and miniaturization of its parts could produce a practical tool with a huge potential customer base and social benefit.
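A household controller along those lines could be as simple as a mapping from decoded intents to device changes. This is purely a hypothetical sketch – in the trials described above, BrainGate drives only a cursor:

```python
# Purely hypothetical sketch: route decoded intents to household devices.
class Home:
    def __init__(self):
        self.state = {"thermostat": 20, "alarm": "off", "lights": "off"}

    def handle(self, intent):
        """Map a decoded intent string onto a device change."""
        if intent == "warmer":
            self.state["thermostat"] += 1
        elif intent == "lights on":
            self.state["lights"] = "on"
        elif intent == "arm alarm":
            self.state["alarm"] = "armed"
        return self.state

home = Home()
home.handle("warmer")
print(home.handle("lights on"))
# {'thermostat': 21, 'alarm': 'off', 'lights': 'on'}
```

The hard part, of course, is the decoding itself; once intents are reliably decoded, dispatching them to devices is ordinary home-automation plumbing.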
A more in-depth explanation of the technology behind this device is available in the following articles about the NeuroPort System, a commercially available device based on the same methods as the experimental BrainGate.

Cyberkinetics Receives Neuroport 510K Clearance (The Healthcare Sales & Marketing Network)

510(k) Summary (Official FDA document, PDF format)

Predictably, such technologies are being adopted by the entertainment industry. The trickle-down of useful devices and technology from the military or medical realms toward the common consumer is a regular occurrence, with popular examples such as GPS, the Internet, and cellular phones.

OCZ controls games with your mind (Bit-Tech)

OCZ’s mind-control system examined (The Tech Report)

OCZ Press Release – CeBIT 2007 (OCZ Technology)

OCZ, a small firm which deals mostly in computer memory (flash and RAM), power supplies, and cooling mechanisms for the private consumer, has jumped onto the neuro-interface bandwagon with a brain-operated gaming controller for the PC dubbed the “Neural Impulse Actuator”. At first glance, this looks like a marvelous leap forward for the interface, yet it remains believable because it vaguely resembles some of the medical technology discussed earlier. It is, of course, non-intrusive, meaning the user would not need brain surgery in order to use it, unlike the BrainGate. It has three metal plates on the inside of the headband, which supposedly read brainwaves without any direct contact with the brain, or even the electrode cream that EEGs require. It could simply be a case of over-hyped vaporware: the product has failed to appear by the time the articles claimed it would (which is now, the end of 2007), and it apparently requires the use of facial movements. With that detail, it strongly resembles the Atari Mindlink, a failed concept that never made it to the mass market.

Atari Mindlink (Atari Museum)

“Although never released, feedback from Atari engineers and people who tested the Mindlink have commented that the time and effort put into the Mindlink system was wasted because the controllers did not perform well and gave people headaches from over concentration and constantly moving their eyebrows around to control the onscreen activities.”

Another company developing a similar product is Emotiv Systems. Their headset, Project Epoc, which has many more sensors than OCZ’s, has been in development for slightly over three years and is hoped for release in 2008. The company’s website has more information than OCZ’s, as well as a detailed section for developers explaining their development kits. As with the OCZ device, whether its functionality will be true to its marketing remains to be seen.

Emotiv Systems

Despite such doubts, non-invasive Brain-Computer Interfaces (BCIs), as they have come to be called, do exist. The Neural Systems Group at the University of Washington works across many scientific disciplines with the aim of fusing a deep understanding of the human mind with computer systems in order to create intelligent robots capable of learning, as well as BCI approaches to controlling computers and robots.

Neural Systems Group, University of Washington (further press links are available on this page)

The BCI they have now is something of a halfway point relative to their stated goal. A humanoid robot walks around in its own space while the human user, wearing a cap covered in electrodes, sees on a computer screen what the robot sees. The robot already has the ability to distinguish objects from their surroundings, but it borrows the human observer’s mental abilities to “decide” to pick up an object. When an object flashes on screen, the cap’s sensors pick up the brain’s “surprise” signal, which the robot interprets as an instruction to pick up that object. Because the electrodes pick up these signals from outside the head, not deep within the brain where they originate, the instruments are only able to register “high level” commands for the robot. As the BrainGate trials illustrate, using a BCI with implanted sensors is feasible; on the other hand, perhaps future progress along the NSG path will yield non-invasive solutions for receiving and decoding these brain signals.
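The “surprise”-signal selection step can be sketched as epoch averaging: record a short window of signal after each object’s flash, average each object’s windows, and pick the object with the largest response. The numbers below are invented, and real detection of this kind of response is considerably more involved:

```python
# Sketch of the "surprise"-signal idea: average the signal windows
# (epochs) recorded after each object's flash and pick the object whose
# averaged response peaks highest.
def average_epoch(epochs):
    """Element-wise mean of equal-length signal windows."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

def pick_target(epochs_by_object):
    """Choose the object with the largest averaged post-flash peak."""
    return max(epochs_by_object,
               key=lambda obj: max(average_epoch(epochs_by_object[obj])))

epochs = {
    "cup":  [[0.1, 0.2, 0.1], [0.0, 0.3, 0.1]],  # background responses
    "ball": [[0.2, 1.1, 0.4], [0.1, 0.9, 0.3]],  # larger "surprise" peak
}
print(pick_target(epochs))  # ball
```

Averaging across repeated flashes is what makes the faint response stand out from noise, which is why such systems flash each object more than once.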
With innovations such as these, it is apparent that modern science is capable of decoding the workings of the brain and translating its signals into signals legible to man-made machines. There have even been successful experiments in using the brain to control robotic limbs, which has enormous potential for amputees and people living with birth defects.

Monkey’s Brain Runs Robotic Arm (LiveScience)

With the technological foundations for mind-controlled computation – brain-computer interfacing – already established, it is apparent that a real, practical system is well on its way to the popular market. The possible applications, from gaming to business to home care, cover a wide range and represent potentially huge sales. The momentum behind such innovations, coupled with competition for such financial opportunities, will attract resources and drive development for years to come.

– Emily

Robotics in relation to computational sensing and toys

•December 13, 2007 • Leave a Comment

This article, written in 2005 for the BBC, describes the development of a human-like android. At the time of the article, the android’s reactive abilities were impressive yet limited. One line was particularly interesting: “More importantly, we have found that people forget she is an android while interacting with her. Consciously, it is easy to see that she is an android, but unconsciously, we react to the android as if she were a woman.” –Professor Ishiguro
Similar to Ishiguro’s progress, Honda (the automotive company) has been exploring and developing a humanoid robot to assist humans in day-to-day activities. Honda’s robot, ASIMO, has been designed with a number of joints in order to more closely replicate human movement (such as climbing stairs and walking). A few days ago, on December 11th, Honda announced that ASIMO is now capable of recharging its battery on its own and of working cooperatively with another ASIMO unit.
Recently, in a review of the book “Love and Sex with Robots” by David Levy, New York Times writer Robin Henig touches on her own humanoid crush from a visit to an MIT robotics laboratory. Henig goes on to describe Levy’s theories and discoveries about the philosophy of love and human relationships. She later points out Levy’s notion of “reciprocal liking” and raises the possibility of programming it into robots, so that robots would like you back. While the article is centered on robots and sex, this notion could prove interesting and beneficial for robots well beyond sexual functions. “Reciprocal liking” could, for instance, be applied to toys or robotic pets designed to interact with children or with patients suffering from depression.
This article discussed robots made out of a collection of cubes. Each cube was programmed with building instructions, and by working together the cubes could assemble a copy of the robot in minutes. This process of self-replication, if used in the ways the authors mention, may have the ability to save countless lives and prove an invaluable technique for the medical field, including assisting in microsurgery through computational sensing of biological material.
Dr. Wilson, the author of “How to Survive a Robot Uprising” – a humorous read – points out various flaws of the robots of today. In this review of his book, I was struck by the paragraph:

“And his thesis describes a version of the smart house, a dwelling so rich in sensors that it would monitor people’s activities and raise an alarm if their movements changed or stopped. He said he was inspired to investigate the possibilities of such ”assisted intelligent environments” by his mother, a nurse who organizes care for elderly people who want to remain in their own homes — or ”age in place,” as Dr. Wilson put it.”

Dr. Wilson, like other robotics designers of today, is focusing some of his energy on devising ways to assist the elderly. One further possibility for integrating artificial intelligence to assist the expanding senior population (as well as others) would be to create robotic cars with the knowledge of how to drive as well as the ability to sense other cars and environmental conditions. Seniors who suffer from diminished senses – and everyone else – could then have a safer, more relaxing experience on the road. This would also allow passengers to simply enter a destination and let the artificial autopilot do the rest.
In an interview conducted last year, Colin Angle, chief executive of iRobot Corp. (the makers of the Roomba), talked about the earnings and products designed by iRobot. Toward the end of the article, Angle addressed the company’s plans to expand into other areas of development, including assistance for the elderly. Also mentioned in the interview were iRobot’s PackBot robots, which were in use in Iraq to deal with improvised explosive device threats. Angle also mentioned a then newly released variant of PackBot called Fido, which is able to detect bombs without endangering human lives.

Welcome to our research hub.

•November 28, 2007 • 1 Comment

Welcome to the Team Red Research Hub. Our research is focused on new and upcoming innovations in the fields of interface design, robotics, and toys/games, with an emphasis on the relations between all of these advancements. Stay tuned as we update this space with all of our findings.