Jessica Barfield


Jess Barfield is a student and ceramics artist who will be attending Dartmouth College in the fall of 2012. She can normally be found on a field hockey field or in front of a computer.


23.12 Commentary by Woodrow Barfield and Jessica Barfield

Steve Mann has written a comprehensive and informative chapter on the general topic of wearable computing (which Steve describes as miniature body-borne computational and sensory devices). We use the phrase “general topic” because Steve expands his discussion of wearable computing to include the more expansive term “bearable” computing, essentially wearable computing technology that is on or in the body. In the chapter, Steve also discusses how wearable computers may be used to augment, mediate, or diminish reality. As background for this commentary, I first met Steve many years ago when I attended a meeting at MIT concerning the first conference to be held on wearable computers; Steve was then a PhD student at the MIT Media Laboratory. (At the conference I made the statement: “Are we wearing the computers, or are they wearing us?”) As the faculty gathered to discuss the aims and direction of the conference, I thought then that Steve had done more to develop the field of wearable computers than the faculty who had gathered to organize it. Since that first meeting, Steve has continued his innovative work on wearable computing, and he has published extensively on the subject. I particularly enjoyed reading Steve’s anecdotes about his experiences as a “cyborg” in the book he wrote for the general public, “Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer” (2001). While much of Steve’s current chapter is historical in content, he also discusses many of the wearable computing applications he has created, often with insight into the rationale behind his inventions.

When we think of the different types of computing technology that may be worn on or in the body, we can envision a continuum that starts with the most basic of wearable computing technology (Steve mentions a wearable abacus) and ends with wearable computing that is actually connected to a person’s central nervous system. In fact, as humans become more and more equipped with wearable (and bearable) computing technology, the distinction as to what counts as a “prosthesis” is blurring as we integrate more computing into human anatomy and physiology. On this very topic, I co-authored a chapter about the use of computing technology to control feedback systems in human physiology (“Computing Under the Skin,” published in Barfield and Caudell, “Fundamentals of Wearable Computing and Augmented Reality,” 2001). I agree with Steve that computing integrated into a person’s brain could radically enhance human sensory and cognitive capabilities and alter the direction of human evolution; in fact, in my view, we are just now at the cusp of this development, and experimental systems (computing technology integrated into a person’s brain) are already in the field, helping those with severe physical disabilities. For example, consider people with debilitating diseases who are essentially “locked in” their own body. With the appropriate wearable computing technology, consisting of a microchip implanted onto the surface of the brain (where it monitors electrical “thought” pulses), such people may use a computer by thought alone, allowing them to communicate with their family, their caregivers, and, through the internet, the world at large. Sadly, in the U.S. alone about 5,000 people are diagnosed each year with just such a disease, one that ultimately shuts down the motor control capabilities of the body: amyotrophic lateral sclerosis (ALS), sometimes called Lou Gehrig’s disease. ALS is a rapidly progressive, invariably fatal neurological disease that attacks the nerve cells responsible for controlling voluntary muscles. Much of the work on control theory and supervisory control of remote robots, along with digital technology, is applicable to the design and use of wearable computing for such individuals.
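To make that feedback loop concrete, the sketch below shows one minimal, entirely hypothetical way such a thought-controlled speller might be structured: letters are scanned one at a time, and a monitored neural signal crossing a calibrated threshold is treated as a binary “select.” The read_neural_sample function and the threshold value are placeholders for whatever the actual acquisition hardware and per-user calibration would provide; this is our illustration, not a system from Steve’s chapter.

```python
import random
import string
import time

THRESHOLD = 40.0  # hypothetical calibrated amplitude; a real system calibrates per user

def read_neural_sample() -> float:
    """Placeholder for one filtered sample from the monitoring electrode.
    Simulated here as low-amplitude noise; a real system would read from
    the acquisition hardware's driver instead."""
    return random.gauss(0.0, 5.0)

def spell_one_letter(dwell_seconds: float = 0.25) -> str:
    """Highlight letters one at a time; a supra-threshold signal during a
    letter's window is interpreted as the user selecting that letter."""
    for letter in string.ascii_uppercase + " ":
        deadline = time.time() + dwell_seconds
        while time.time() < deadline:
            if read_neural_sample() > THRESHOLD:
                return letter  # selection detected in this letter's window
            time.sleep(0.01)  # sample at roughly 100 Hz
    return ""  # no selection this pass; the scan would simply repeat

if __name__ == "__main__":
    # With noise alone (well below threshold), no selection should occur.
    print("selected:", spell_one_letter() or "(nothing)")
```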

In our view, anyone at the cutting edge of a discipline is not only pushing that field further but, by the nature of the work, is also at the forefront of other academic disciplines. For example, particle physicists in search of the ultimate building blocks of the Universe often find themselves debating those who hold a nonsecular view of the origins and structure of the Universe. Similarly, Steve’s work, albeit in a less dramatic fashion, has raised many important issues of public policy and law. For example, Steve presents the idea that wearable computers can be used to film newsworthy events as they happen, or people in positions of authority as they perform their duties. This brings up the question of whether a person has a legal right to film other people in public (answer: generally they do). In the chapter, Steve refers to an interesting case on just this topic decided by the U.S. First Circuit Court of Appeals. In that case, Simon Glik was arrested for using his cell phone’s digital video camera (a wearable computer) to film several police officers arresting a young man on the Boston Common. The charges against Glik, which included violation of Massachusetts’s wiretap statute and two other state-law offenses, were subsequently judged baseless and were dismissed. Glik then brought suit under a U.S. federal statute (42 U.S.C. § 1983), claiming that his arrest for filming the officers violated his rights under the First (free speech) and Fourth (unlawful arrest) Amendments to the U.S. Constitution. The court held that, based on the facts alleged, Glik was exercising clearly established First Amendment rights in filming the officers in a public space, and that his clearly established Fourth Amendment rights were violated by his arrest without probable cause. However, readers of this commentary should know that in the U.S. the right to film is not without limitations; it may be subject to reasonable time, place, and manner restrictions, a topic on which a substantial body of case law has developed.

Steve also discusses privacy issues that may occur when an individual wearing a computer/camera films and records people in public places. While Steve emphasizes the example in which state actors, or people generally in positions of authority, are filmed, we worry about the potential to abuse people’s privacy using the technology of wearable computing. For example, video voyeurism, the act of filming or disseminating images of a person’s “private areas” under circumstances in which the person had a reasonable expectation of privacy, regardless of whether the person is in a private or public location, is possible using the technology of wearable computers. In the U.S. such conduct is prohibited under state and federal law (see, for example, the Video Voyeurism Prevention Act of 2004, 18 U.S.C.A. § 1801). And what about the privacy issues associated with other wearable computing technology, such as the ability to recognize a person’s face, search the internet for personal information about the individual (e.g., a police record or credit report), and “tack” that information onto the person as they move through the environment? Could digital “scarlet letters” be far off?

Steve’s concept of “diminished reality,” in which a wearable computer can be used to replace or remove clutter, say, an unwanted advertisement on the side of a building, is also of interest to those in law and public policy. On this topic, I published an article in the UCLA Entertainment Law Review in 2006, titled “Commercial Speech, Intellectual Property Rights, and Advertising Using Virtual Images Inserted in TV, Film, and the Real World.” In the article, I discussed the legal ramifications of placing ads consisting of virtual images projected into the real world. We can think of virtual advertising as a form of digital technology that allows advertisers to insert computer-generated brand names, logos, or animated images into television programs or movies, or, with Steve’s wearable computing technology, into the real world. In the case of TV, a reported benefit of virtual advertising is that it allows the action on the screen to continue while displaying an ad viewable only by the home audience. What is worrisome about the use of virtual images to replace portions of the real world is that corporations and government officials may be able to alter what people see based on political or economic considerations; an altered reality may then become the accepted norm, a consequence that calls to mind the dystopian society described in Huxley’s “Brave New World.”
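As a concrete illustration of the image operation behind diminishing or mediating a scene, the sketch below, our own gloss rather than anything from Steve’s chapter, uses OpenCV to warp substitute content over a tracked quadrilateral (say, a billboard) in a camera frame. Detecting and tracking the billboard’s corners is assumed to happen elsewhere in the pipeline.

```python
import cv2
import numpy as np

def replace_region(frame: np.ndarray, corners: np.ndarray,
                   substitute: np.ndarray) -> np.ndarray:
    """Warp `substitute` into the quadrilateral `corners` of `frame`.

    `corners` is a 4x2 array of pixel coordinates in the order
    top-left, top-right, bottom-right, bottom-left.
    """
    h, w = substitute.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Homography mapping the substitute image onto the target quadrilateral.
    H = cv2.getPerspectiveTransform(src, corners.astype(np.float32))
    warped = cv2.warpPerspective(substitute, H,
                                 (frame.shape[1], frame.shape[0]))
    # Mask the target quadrilateral and composite the warped content over it.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, corners.astype(np.int32), 255)
    out = frame.copy()
    out[mask == 255] = warped[mask == 255]
    return out
```

The same composite step works for removal as well as replacement: passing a blurred or inpainted patch as the substitute “diminishes” the region instead of re-advertising it.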

As a final comment, one often hears people discuss the need for “theory” to provide an intellectual framework for the work done in virtual and augmented reality. When I was on the faculty at the University of Washington, my students and I built a head-tracked augmented reality system in which, as one looked around the space of the laboratory, one saw a corresponding computer-generated image rendered so that it occluded real objects in that space. We noticed that some attributes of the virtual images allowed the person to more easily view the virtual object and the real world in a seamless manner. Later, I became interested in how people perform cognitive operations on computer-generated images. With Jim Foley, now at Georgia Tech, I performed experiments to determine how people mentally rotated images rendered with different lighting models. This led to thinking about how virtual images could be seamlessly integrated into the real world. I asked whether there was any theory to explain how different characteristics of virtual images combine to form a “seamless whole” with the environment they are projected into, or whether virtual images projected into the real world appear separate from the surrounding space (floating and disembodied from the real-world scene).

I recalled a paper I had read while in college by Garner and Felfoldy, published in Cognitive Psychology in 1970, on the integrality of stimulus dimensions in various types of information processing. The authors noted that “separable” dimensions remain psychologically distinct when in combination, an example being forms varying in shape and color. A vast amount of converging evidence suggests that people are highly efficient at selectively attending to separable dimensions. By contrast, “integral” dimensions combine into relatively unanalyzable, unitary wholes, an example being colors varying in hue, brightness, and saturation. Although people can selectively attend to integral dimensions to some degree, the process is far less efficient than it is for separable-dimension stimuli (see also Shepard, R. N., Attention and the metric structure of the stimulus space, Journal of Mathematical Psychology, 1964). I think that much can be done to develop a theory of augmented, mediated, or diminished reality using the approach discussed by Garner and Felfoldy, and by Shepard, and I encourage readers of this commentary to do so. Such research would have to extend the past work, which was done on single images, to virtual images projected into the real world.
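One way to make the separable/integral distinction concrete (our illustration, not the commentary’s) is Shepard’s observation about metric structure: perceived differences along separable dimensions tend to add like a city-block (L1) distance, while integral dimensions behave more like a Euclidean (L2) distance in which the dimensions fuse. The toy computation below contrasts the two for a pair of stimuli in a two-dimensional stimulus space.

```python
import math

def city_block(a, b):
    """L1 distance: each dimension contributes independently (separable)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    """L2 distance: dimensions fuse into a unitary whole (integral)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two stimuli differing by one unit on each of two dimensions,
# e.g., (shape, color) if separable, (hue, brightness) if integral.
a, b = (0.0, 0.0), (1.0, 1.0)
print(city_block(a, b))  # 2.0   -- the two differences add up
print(euclidean(a, b))   # ~1.414 -- the differences combine sub-additively
```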

Returning to Steve’s chapter, it is an excellent source for those interested in learning about the historical context of wearable computing, and about the numerous applications Steve has developed to design a world in which the human’s signal processing capabilities and the wearable computing system’s functions form a feedback loop; the thought being, two brains are better than one! We also see Steve’s work evolving, in the not too distant future, to the point where humans and wearable computing technology “live” in a mutually symbiotic manner, which implies, of course, that the primary thinker, the wearable computer, is in some way benefiting from having a human in the loop. So, returning to what I said at the first conference held on wearable computers: Are we wearing them, or are they wearing us?