

ECDVRAT 1996 - Full Papers Download

The First European Conference on Disability, Virtual Reality and Associated Technologies

8th to 10th July 1996 - Maidenhead, UK


ICDVRAT Online Archive - Part I


Keynote Addresses

Chair: Paul Sharkey

Virtual reality and persons with disabilities

J Murphy, California State University, Northridge, USA

There are many exciting developments in the field of virtual reality. This presentation will use slides and videotapes to demonstrate three important applications of the technologies of virtual reality: one with people with movement disorders, another with people who experience height anxiety, and a third that deals with the treatment of injuries on the battlefield. More people in the field of disability need to become involved in exploring the potential of virtual reality. Conferences such as this one play an important role in stimulating ideas that may result in more applications.

PDF Full Paper

Virtual reality in rehabilitation following traumatic brain injury

F D Rose, University of East London, UK

The potential use of virtual reality (VR) in neurological rehabilitation has frequently been discussed. This paper relates current thinking on the subject to the clinically defined concepts of impairment, disability and handicap. It is concluded that VR has a contribution to make to reducing all three, as well as in the initial assessment of the consequences of traumatic brain injury.

PDF Full Paper

Virtual computer monitor for visually impaired users

A L Zwern & G L Goodrich, General Reality Company, San Jose, California, USA

Conventional computer display products for the visually impaired are limited by the amount of enlarged imagery that can be displayed at any one time, and by awkward methods for navigating about the scene. This paper describes a prototype system designed to address these problems by providing a head-mounted display interface which allows the user to position a cursor anywhere on an enlarged virtual page by turning to view that location in space. Also discussed are human subject trials underway at SRI International and the Veterans Administration under a US Department of Health and Human Services research grant, and technical issues requiring further investigation.
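
The central interaction described above, turning the head to bring a different region of an enlarged virtual page into view, can be illustrated with a short sketch. The mapping, page size, display resolution and head-movement ranges below are assumptions for illustration only; the paper's actual implementation is not specified in this abstract.

    # Illustrative sketch: map head yaw/pitch (degrees) to a viewport on an
    # enlarged virtual page, in the spirit of the head-mounted magnifier above.
    # All ranges and sizes are assumed values, not taken from the paper.

    PAGE_W, PAGE_H = 4096, 3072      # enlarged virtual page, in pixels (assumed)
    VIEW_W, VIEW_H = 640, 480        # portion visible in the head-mounted display
    YAW_RANGE, PITCH_RANGE = 60.0, 40.0   # comfortable head movement (assumed)

    def viewport_for_head_pose(yaw_deg, pitch_deg):
        """Return (left, top) of the visible window for a given head pose."""
        # Normalise angles to [0, 1] across the assumed head-movement range.
        u = (max(-YAW_RANGE, min(YAW_RANGE, yaw_deg)) + YAW_RANGE) / (2 * YAW_RANGE)
        v = (max(-PITCH_RANGE, min(PITCH_RANGE, pitch_deg)) + PITCH_RANGE) / (2 * PITCH_RANGE)
        left = int(u * (PAGE_W - VIEW_W))
        top = int((1.0 - v) * (PAGE_H - VIEW_H))   # looking up moves the window up
        return left, top

    print(viewport_for_head_pose(0.0, 0.0))     # centre of the page
    print(viewport_for_head_pose(30.0, -20.0))  # right of centre, lower half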

PDF Full Paper


Session I - Communication and Language

Session Chair: Paul Sharkey

Recognition of sign language gestures using neural networks

P Vamplew, University of Tasmania, Australia

This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian Sign Language (Auslan) hand gestures.
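
As a rough illustration of the modular architecture just described (separate feature-recognition networks whose outputs feed a nearest-neighbour classifier), the sketch below wires together placeholder feature extractors and a 1-NN stage. The feature groupings, stand-in extractors and example signs are hypothetical; SLARTI's actual networks, feature set and vocabulary are described in the paper.

    import numpy as np

    # Hypothetical stand-ins for SLARTI's feature-recognition networks: each maps
    # raw glove/tracker input to a small feature vector (handshape, orientation,
    # location, motion). Real versions would be trained neural networks.
    def handshape_features(x):   return x[:4]
    def orientation_features(x): return x[4:7]
    def location_features(x):    return x[7:10]
    def motion_features(x):      return x[10:13]

    def feature_vector(raw_input):
        """Concatenate the outputs of the modular feature recognisers."""
        return np.concatenate([f(raw_input) for f in
                               (handshape_features, orientation_features,
                                location_features, motion_features)])

    def classify_sign(raw_input, exemplars, labels):
        """Nearest-neighbour classification over stored exemplar feature vectors."""
        fv = feature_vector(raw_input)
        dists = np.linalg.norm(exemplars - fv, axis=1)
        return labels[int(np.argmin(dists))]

    # Toy usage with random exemplars for three hypothetical signs.
    rng = np.random.default_rng(0)
    exemplars = rng.normal(size=(3, 13))
    labels = ["hello", "thanks", "yes"]
    print(classify_sign(rng.normal(size=13), exemplars, labels))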

PDF Full Paper

Tackling isolation and the expression of emotion in a virtual medium

D J Roberts, C Wood & A Gibbens, University of Reading, UK / Project DISCOVER, London, UK

This paper discusses how virtual reality may be used as a medium for expressive communication and thus aid in tackling the problems of isolation faced by many people with disabilities. The principal causes and effects of isolation are explained, along with the strengths of the virtual medium that make it applicable to this problem area. The concept of a virtual meeting place is introduced, where users can overcome the physical problems of mobility and communication. The more esoteric problem of communicating emotion is addressed and a number of ways in which this may be tackled in virtual reality are proposed.

PDF Full Paper


Session II - Virtual and Enhanced Environments

Session Chair: William Harwin

Applications of virtual reality technology to wheelchair remote steering systems

R W Gunderson, S J Smith & B A Abbott, Utah State University, USA.

The Center for Self-Organizing and Intelligent Systems at Utah State University has been engaged in a two-year project to investigate the application of virtual reality and associated technologies as a means for assisting the disabled to steer and control motorized wheelchairs. There have already been several interesting investigations aimed at steering virtual wheelchairs in virtual, computer-generated environments. This paper, however, reports on how this technology may be used to assist, or even completely take over, the task of steering and navigating a real wheelchair in real environments. The basic objective is to arrive at affordable and effective systems that can be used to improve the independence and quality of life of the disabled.

PDF Full Paper

Integrating augmented reality with intelligent home systems

J C Hammond, P M Sharkey & G T Foster, University of Reading, UK.

Augmented Reality (AR) systems overlay computer-generated information onto a user's natural senses. Where this additional information is visual, it is overlaid on the user's visual field of view through a head-mounted (or "head-up") display device. Integrated Home Systems provides a network that links every electrical device in the home, giving users control over and information on each device across the network. This paper discusses the integration of AR techniques with a real-world infrastructure and demonstrates this through the integration of a visual AR prototype with an existing Home Systems technology test rig. The integration of these two technologies provides the basis of a specialised information/control device that allows the control and diagnostics of Home Systems devices from the basic action of looking at them.
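
A minimal sketch of the "control by looking at it" idea follows: select the networked device whose direction lies closest to the user's gaze and then control or query it. The device names, positions and selection cone are invented for illustration; the Home Systems network protocol itself is not described in this abstract.

    import numpy as np

    # Hypothetical registry of Home Systems devices with known 3-D positions (metres).
    DEVICES = {
        "lamp":   np.array([2.0, 0.0, 1.5]),
        "heater": np.array([0.0, 3.0, 0.5]),
        "tv":     np.array([-1.5, 2.0, 1.0]),
    }

    def device_in_gaze(head_pos, gaze_dir, max_angle_deg=10.0):
        """Return the device whose direction lies within a cone about the gaze."""
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        best, best_angle = None, max_angle_deg
        for name, pos in DEVICES.items():
            to_dev = pos - head_pos
            cosang = np.dot(gaze_dir, to_dev / np.linalg.norm(to_dev))
            angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            if angle < best_angle:
                best, best_angle = name, angle
        return best

    # Looking roughly towards the lamp selects it for control/diagnostics.
    target = device_in_gaze(np.array([0.0, 0.0, 1.6]), np.array([1.0, 0.0, 0.0]))
    print("selected device:", target)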

PDF Full Paper

Using virtual reality in the adaptation of environments for disabled people

R C Davies & J Eriksson, University of Lund, Sweden.

This paper describes work on a computer aided design and visualisation toolkit for the adaptation of homes and workplaces for disabled people. The basis for this work comes from a number of case studies which used a prototype planning tool based on ordinary 3D modelling and drawing packages. These case studies highlighted the need for a 3D object library containing office furniture, mobility aids, building construction elements and so forth. Three further prototype tools have also been developed: a user-friendly design and visualisation tool for non-computists based on a 3D graphics API; a mannequin modeller; and a VR based design and visualisation tool.

PDF Full Paper

An emergent methodology for the development of virtual learning environments

D J Brown & D S Stewart, University of Nottingham/Sheppard School, Nottingham, UK.

As any research field graduates from infancy to youth (or, it may be argued, merely survives in this case), there is a need to order and structure the methods that have been used to answer questions. This should allow us to rationalise these methods and select the ones that have given good and practical answers. Successful methods can then be embedded within an overall methodology for the design, development and implementation of the Learning In Virtual Environments (LIVE) programme. This programme consists of virtual environments (VEs) to teach basic life, communication, and personal and social skills, developed in conjunction with the staff and pupils at the Sheppard School.

This overall methodology will be presented in a series of five stages that exert a contingent influence on the final nature of any virtual learning environment. Stage one seeks to embed any development in this field in contemporary educational theory. Stage two looks at the role of parents, teachers, pupils and caregivers in the development of these VEs. Stage three considers the role of testing in refining any developments we make, whilst stage four considers the ethical implementation of these learning aids in the classroom. Finally, stage five looks at the development of guidelines for the optimal design of virtual learning environments for various types of learning.

PDF Full Paper

Virtual partners in cyberspace - a proposal

M Magnusson, University of Karlstad, Sweden.

People with aphasia can benefit greatly from computer support in their rehabilitation, aided by different types of software. Most of this software consists of sets of exercises with a clear language content. This paper, however, presents ideas around a more general communication situation in which a computer-simulated environment could offer people with aphasia a broader experience of interaction and communication. Examples are given from work with videotelephony and different types of simulation software.

PDF Full Paper


Session III - Rehabilitation I

Session Chair: John Wann

Using virtual reality environments to aid spatial awareness in disabled children

D Stanton, P Wilson & N Foreman, University of Leicester, UK.

Several spatial tasks were presented to subjects in computer-simulated environments to ascertain whether spatial skills could be trained and enhanced using this medium. Studies carried out at Leicester University have demonstrated that exploration of a simulated building by disabled and able-bodied subjects allows them to acquire considerable information about the spatial layout of that specific building. The present studies extended these earlier findings. In Experiment 1, transfer of spatial skills between different virtual environments was investigated. The results confirmed that the skills disabled children acquired using computer-simulated environments improved with exposure to successive environments. To eliminate the possibility that learning was non-specific, Experiment 2 compared 3-D exploration and 2-D (control) exploration, finding the former to be superior. Thus the interactivity and three-dimensionality of virtual environments seem to be crucial to spatial learning. Further research is being carried out to establish the nature and extent of the improvement in spatial skills of physically disabled children after intensive exploration of complex virtual environments, and thus the value and limitations of VR as a training tool for children with mobility problems.

PDF Full Paper

Successful transfer to the real world of skills practised in a virtual environment by students with severe learning disabilities

J J Cromby, P J Standen, J Newman & H Tasker, University of Nottingham, UK.

Nineteen students with severe learning difficulties aged between 14 and 19 years completed a shopping task in a real supermarket before 9 students, the experimental group, had twice-weekly sessions carrying out a similar task in a virtual supermarket. The remaining 10 students formed the control group, matched with the experimental group for ability, age and sex. They had the same number of sessions using other virtual environments. Although there was no difference between the two groups at baseline, on repeating the task in the real supermarket the experimental group were significantly faster and more accurate than the control group.

PDF Full Paper

Providing motor-impaired users with access to standard graphical user interface (GUI) software via eye-based interaction

H O Istance, C Spinner & P A Howarth, De Montfort University, UK / Loughborough University, UK.

We have designed an on-screen keyboard, operated by eye-gaze, for use by motor-impaired users. It enables interaction with unmodified standard graphical user interface (GUI) software written for able-bodied users, and it is not solely designed around the need to enter text. The keyboard will adapt automatically to the application context by, for example, loading a specific set of keys designed for use with particular menus whenever a menu is displayed in the target application. Results of initial evaluation trials are presented and the implications for improvements in design are discussed.
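
The context-sensitive behaviour described above, loading a key set suited to the target application's current state and selecting keys by dwelling on them with the eyes, can be sketched as a simple lookup plus dwell timer. The contexts, key sets and dwell threshold are assumptions for illustration, not the authors' design.

    import time

    # Hypothetical key sets for different application contexts.
    KEY_SETS = {
        "text_entry": ["a", "b", "c", "space", "backspace"],
        "menu_open":  ["up", "down", "select", "escape"],
    }

    DWELL_SECONDS = 0.8   # assumed dwell time needed to "press" a key by gaze

    class GazeKeyboard:
        def __init__(self):
            self.context = "text_entry"
            self._fixated_key = None
            self._fixation_start = 0.0

        def set_context(self, context):
            """Load the key set appropriate to the target application's state."""
            self.context = context
            self._fixated_key = None

        def on_gaze(self, key, now=None):
            """Return the key once gaze has dwelt on it long enough, else None."""
            now = time.monotonic() if now is None else now
            if key not in KEY_SETS[self.context]:
                return None
            if key != self._fixated_key:
                self._fixated_key, self._fixation_start = key, now
                return None
            if now - self._fixation_start >= DWELL_SECONDS:
                self._fixated_key = None
                return key
            return None

    kb = GazeKeyboard()
    kb.set_context("menu_open")
    kb.on_gaze("down", now=0.0)
    print(kb.on_gaze("down", now=1.0))   # dwell satisfied, so "down" is selected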

PDF Full Paper

Memory processes and virtual environments: I can't remember what was there, but I can remember how I got there. Implications for people with disabilities

E A Attree, B M Brooks, F D Rose, T K Andrews, A G Leadbetter & B R Clifford, University of East London, UK.

Memory deterioration is often a consequence of brain damage. Successful memory rehabilitation programmes depend upon effective methods of cognitive assessment. This paper considers the potential value of virtual reality (VR) in this context. The effect of active or passive participation in a virtual environment on specific aspects of memory was investigated. It was found that active participation enhanced memory for spatial layout whereas passive observation enhanced object memory. No differences were found for object location memory. These findings are discussed in terms of how VR may provide a means of measuring memory which combines ecological validity with experimental rigour.

PDF Full Paper

Do virtual environments promote self directed activity? A study of students with severe learning difficulties learning Makaton sign language

P J Standen & H L Low, University of Nottingham, UK.

Eighteen students with severe learning difficulties and their teachers were videoed while using an educational virtual environment. Teachers' activity was coded into eight categories (e.g. instruction, suggestion, pointing) and the students' into three (e.g. moves in three-dimensional space), and intra-rater reliability was established. Significant (p<0.0001) decreases in rate over repeated sessions were found for all the teachers' categories, with the more didactic (e.g. instruction and physical guidance) decreasing at a faster rate than suggestion and pointing. For the students' categories there was a significant increase (p<0.05) in rate between the first and last sessions for two of the three categories.

PDF Full Paper


Session IV - Technology

Session Chair: Penny Probert

A virtual reality training tool for the arthroscopic treatment of knee disabilities

R J Hollands & E A Trowbridge, University of Sheffield, UK.

Knee injuries are a common form of disability, but many can be treated using surgery. The minimally invasive approach of arthroscopy means faster recovery times for the patient than open surgery, but the skills required of the surgeon are radically different. Although a number of arthroscopic training techniques are available, all have problems with cost, maintenance or availability. Through the medium of virtual reality, a computer-based system can recreate the three-dimensional geometry inside the knee and allow the trainee surgeon to practise on it using replica instruments. In order to provide this facility at a feasible price, this paper describes work under way at Sheffield to develop a PC-based virtual reality arthroscopic training system. The resulting trainer has been tested by surgeons and, despite compromises made to accommodate the PC platform, has been found to be extremely realistic at replicating some of the standard tasks of arthroscopy.

PDF Full Paper

Adaptive multi-media interfaces in PolyMestra

E P Glinert & G B Wise, Rensselaer Polytechnic Institute, New York, USA.

An architecture for a new generation of multimedia systems is presented based on the concept of metawidgets, which are collections of alternative representations for information, both within and across sensory modalities, along with user-transparent mechanisms for choosing among them. The proposed architecture allows us to overcome certain drawbacks of today's systems, where the designer typically must assign each component of the display to a specific modality in a fixed and inflexible manner. The design of the PolyMestra environment based on our architecture is next described in detail, with particular emphasis on the layered development approach, core software tools and inter-application communication. Finally, we discuss the current status of the implementation, and outline plans for distribution of the prototype later this year to get user feedback.
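
To make the metawidget concept more concrete, here is a small sketch in which one piece of information carries alternative representations across sensory modalities and the system chooses among them according to the modalities available to the user. The class, modality names and selection rule are ours for illustration and are not PolyMestra's API.

    # Illustrative metawidget: alternative representations of one piece of
    # information across modalities, with a simple selection mechanism.
    class MetaWidget:
        def __init__(self, representations):
            # representations: dict mapping modality name -> render callable
            self.representations = representations

        def render(self, available_modalities):
            """Pick the first representation whose modality the user can use."""
            for modality in available_modalities:
                if modality in self.representations:
                    return self.representations[modality]()
            raise RuntimeError("no usable representation for this user")

    alert = MetaWidget({
        "visual":   lambda: "[flashing icon in the corner of the screen]",
        "auditory": lambda: "[spoken message: 'new mail has arrived']",
        "tactile":  lambda: "[two short pulses on the tactile display]",
    })

    # A blind user's profile might list auditory and tactile modalities only.
    print(alert.render(["auditory", "tactile"]))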

PDF Full Paper

Myoelectric signals pattern recognition for intelligent functional operation of upper-limb prosthesis

N Chaiyaratana, A M S Zalzala & D Datta, University of Sheffield, UK / Northern General Hospital, Sheffield, UK.

This paper presents a comparative study of the classification accuracy of myoelectric signals using a multilayer perceptron trained with the back-propagation algorithm and radial-basis function networks. The myoelectric signals considered are used to classify four upper-limb movements: elbow bending, elbow extension, wrist pronation and wrist supination. The network structure for the multilayer perceptron is fully connected, while the structures used in the radial-basis function networks are both fully connected and partially connected. Two learning strategies are used for training the radial-basis networks, namely supervised selection of centres and fixed centres selected at random. The results suggest that the radial-basis function network with fixed centres can generalise better than the others without requiring extra computational effort.
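
For readers unfamiliar with the second network type, the sketch below implements a radial-basis function classifier with centres fixed at randomly selected training examples and output weights fitted by least squares, applied to toy four-class data. The data, dimensions and network size are invented; the paper's networks operate on real myoelectric features.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for myoelectric feature vectors labelled with the four movements
    # (elbow bending, elbow extension, wrist pronation, wrist supination).
    X = rng.normal(size=(200, 8)) + np.repeat(np.eye(4, 8) * 3.0, 50, axis=0)
    y = np.repeat(np.arange(4), 50)

    # RBF network with fixed centres selected at random from the training data.
    centres = X[rng.choice(len(X), size=20, replace=False)]
    width = np.mean(np.linalg.norm(X[:, None] - centres[None], axis=2))

    def hidden(data):
        d = np.linalg.norm(data[:, None] - centres[None], axis=2)
        return np.exp(-(d / width) ** 2)          # Gaussian basis activations

    H = hidden(X)
    T = np.eye(4)[y]                              # one-hot targets
    W, *_ = np.linalg.lstsq(H, T, rcond=None)     # linear output weights

    pred = np.argmax(hidden(X) @ W, axis=1)
    print("training accuracy:", np.mean(pred == y))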

PDF Full Paper

CCD-camera based optical tracking for human computer interaction

F Madritsch, Graz University of Technology, Austria.

We are investigating the application of CCD-camera based optical beacon-tracking systems in 3-D interactive environments. An optical tracking system has been developed which serves as a testbed for tracking algorithms and accuracy investigations. The 3-D interactive environment features tracking of the observer's viewpoint, stereoscopic visualisation and directed 3-D pointing. The focus is on high accuracy of both tracking and stereoscopic visualisation. Algorithms which can track beacons with sub-pixel resolution at low noise help to reduce hardware expenditure in terms of camera resolution and computing power. Theoretical considerations about resolution are made and practical experience is presented. Furthermore, user studies are performed to test the created interface environment with regard to immersive interfaces and direct interaction.
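
Sub-pixel beacon localisation of the kind mentioned above is commonly achieved with an intensity-weighted centroid over a thresholded image patch; the sketch below demonstrates that general technique on a synthetic blob. It illustrates the idea only and is not the specific algorithm evaluated in the paper.

    import numpy as np

    def subpixel_centroid(image, threshold=0.1):
        """Intensity-weighted centroid (row, col) of pixels above threshold."""
        img = np.where(image > threshold, image, 0.0)
        rows, cols = np.indices(img.shape)
        total = img.sum()
        return (rows * img).sum() / total, (cols * img).sum() / total

    # Synthetic beacon: a Gaussian blob centred at a non-integer position.
    rows, cols = np.indices((32, 32))
    true_r, true_c = 15.3, 12.7
    blob = np.exp(-((rows - true_r) ** 2 + (cols - true_c) ** 2) / (2 * 2.0 ** 2))

    est = subpixel_centroid(blob)
    print("true:", (true_r, true_c), "estimated:", est)   # agrees to a fraction of a pixel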

PDF Full Paper

Analysis of force-reflecting telerobotic systems for rehabilitation applications

W S Harwin & T Rahman, University of Reading, UK / University of Delaware & Alfred I duPont Institute, Delaware, USA.

There is interest in a class of assistive technology devices for people with physical disabilities where a person's existing strength and movement have a direct relation to the force and position of the tool used to manipulate an environment. In this paper we explore the use of a head-controlled, force-reflecting master-slave telemanipulator for rehabilitation applications. A suitable interface philosophy is to allow the system to function in a way that is conceptually similar to a head-stick or a mouth-stick. The result is an intuitive method to operate a rehabilitation robot that is readily learned, and has the ability to provide the person with added strength, range of movement and degrees of freedom. This approach is further expanded for a similar class of assistive devices, power-assisted orthoses, that support and move the person's arm in a programmed way. The techniques developed for power orthoses and telemanipulators can also be applied to haptic displays that allow an individual to feel a virtual environment. The so-called two-port model is used to predict the behaviour of telemanipulators, power orthoses, and haptic interfaces, and issues relating to stability are discussed.
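
For orientation, one standard hybrid-matrix statement of a two-port model from the teleoperation literature is sketched below in LaTeX; the paper's own parameterisation and notation may well differ.

    \begin{bmatrix} F_h \\ -v_e \end{bmatrix}
      = \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix}
        \begin{bmatrix} v_h \\ F_e \end{bmatrix},
    \qquad
    Z_{to} \;=\; \frac{F_h}{v_h} \;=\; h_{11} - \frac{h_{12}\,h_{21}\,Z_e}{1 + h_{22}\,Z_e}

Here F_h and v_h are the force and velocity at the operator's hand, F_e and v_e those at the environment, and Z_e the environment impedance; ideal transparency corresponds to h_11 = h_22 = 0 and h_12 = -h_21 = 1, which gives Z_to = Z_e, so the operator feels the environment directly.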

PDF Full Paper

Designing rehabilitation robots using virtual work platforms

J Ibañez-Guzmán, Ecole Supérieure Atlantique d'Ingénieurs en Génie Electrique, Saint Nazaire, France.

The design, implementation and testing of rehabilitation robot-like devices is expensive in both time and resources. The use of virtual workspaces based on graphical kinematic simulation and computer-aided design techniques yields a design technology in which a full system can be tested and evaluated virtually, in advance of actual prototyping. Modelling the mechanisms together with their work environments means that a virtual test-rig can be built, allowing the preliminary verification of concepts and the generation of useful feedback to designers. The entire cell layout can be changed easily, creating complex situations and scenarios.

This paper describes the use of virtual work platforms based on kinematic graphics-robotic simulation for the design of novel rehabilitation devices and other aids, as well as the evaluation and personalisation of existing ones. The objective is to present the techniques available for the design of rehabilitation robots using a software-based approach.

PDF Full Paper


Session V - Visual Impairment, Ambisonics and Mobility

Session Chair: Paul Wilson

A 3D sound hypermedial system for the blind

M Lumbreras, M Barcia & J Sánchez, Universidad Nacional de La Plata, Argentina / University of Chile, Santiago, Chile.

It has often been said that a hypermedial application running over a GUI is somehow inappropriate or unusable; this is the case for end-users with little or no visual capability. In this paper we present a conversational metaphor to ameliorate this problem. We propose a framework in which the interaction is rendered using 3D sound: several voices, used as media to convey information, are placed in the space through this technology. Using a glove, the user controls the system by manipulating a special version of 3D auditory icons, called audicons. Moreover, we introduce a technique called grab-and-drop, equivalent to the visual drag-and-drop. We show that this framework permits the building and adaptation of current hypermedia interfaces in a way that can be used without visual cues.
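
A tiny sketch of the interaction objects described above follows: audicons placed at 3-D positions in the auditory scene, and a grab-and-drop operation driven by the glove. The class names and fields are illustrative only; the paper defines the actual framework.

    from dataclasses import dataclass

    @dataclass
    class Audicon:
        """An auditory icon: a spoken or recorded message rendered at a 3-D position."""
        name: str
        message: str
        position: tuple          # (x, y, z) in the listener-centred sound space

    @dataclass
    class GloveCursor:
        position: tuple = (0.0, 0.0, 0.0)
        held: Audicon = None

        def grab(self, audicon):
            """Grab-and-drop, step 1: attach an audicon to the glove cursor."""
            self.held = audicon

        def drop(self, new_position):
            """Grab-and-drop, step 2: release the audicon at a new 3-D position."""
            if self.held is not None:
                self.held.position = new_position
                self.held = None

    link = Audicon("next_chapter", "Go to the next chapter", (1.0, 0.5, 0.0))
    glove = GloveCursor()
    glove.grab(link)
    glove.drop((0.0, 1.0, 0.0))
    print(link.name, "now rendered at", link.position)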

PDF Full Paper

Animated tactile sensations in sensory substitution systems

D A Eves & M M Novak, Kingston University, UK.

We have designed and made a computer-controlled, electrocutaneously stimulated tactile display to assist in the interpretation of complex graphical data. This paper covers the initial experiments with the tactile display, complementing the standard VDU. An animated vector was generated and the absolute tactile directional acuity of the observers was measured. Consideration is given as to how the use of animated tactile data would enhance graphical user interfaces for the visually impaired.

PDF Full Paper

The generation of virtual acoustic environments for blind people

D A Keating, University of Reading, UK.

VR systems, like the cinema, tend to concentrate on the visual image, leaving the audible image as garnishing. If blind people are to make any use of virtual environments then the audible image needs to be greatly improved. An ideal system would encode the amplitude and direction of sounds in a way that is independent of the means used to portray the image to the user. The Ambisonic B-format describes four signals: a reference and three vectors. These are simple to generate and manipulate and may easily be converted into loudspeaker feeds for one-, two- or three-dimensional arrays of speakers. The generation of binaural signals for headset-based systems is made difficult by the complexity of the head-related transfer function (HRTF). Steering the virtual image to take account of head movements is computationally expensive and requires detailed knowledge of the user's HRTF if the image is to be at all realistic. An alternative is to make use of virtual loudspeakers around which the image is moved but which have a fixed location relative to the user's head. If the speaker feeds are derived from a B-format signal, this may easily be panned and tilted by manipulating the relative amplitudes of the three vectors according to simple mathematical rules. The HRTF of the user can be measured at the positions of the virtual speakers, and the resultant relationship between the combination of all these signals and the two headphone signals need only be computed once.

This paper gives an introduction to the Ambisonic system and shows how B-format signals can be used to encode a virtual acoustic environment. It concludes by describing how the B-format signals may be manipulated and used to generate almost any number of speaker or headphone feeds.
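
As a worked illustration of the manipulations described above, the sketch below pans a first-order B-format signal about the vertical axis to compensate for a head turn and then derives feeds for two fixed virtual loudspeakers. The encode and decode weightings follow a simple first-order convention and are illustrative only; a practical decoder and the subsequent HRTF stage involve more care.

    import numpy as np

    def pan_b_format(W, X, Y, Z, yaw_rad):
        """Rotate a first-order B-format signal about the vertical axis."""
        Xr = X * np.cos(yaw_rad) - Y * np.sin(yaw_rad)
        Yr = X * np.sin(yaw_rad) + Y * np.cos(yaw_rad)
        return W, Xr, Yr, Z           # W (pressure) and Z are unaffected by yaw

    def virtual_speaker_feed(W, X, Y, Z, azimuth, elevation):
        """Simple first-order decode for one virtual speaker direction."""
        return (W / np.sqrt(2.0)
                + X * np.cos(azimuth) * np.cos(elevation)
                + Y * np.sin(azimuth) * np.cos(elevation)
                + Z * np.sin(elevation))

    # Encode a mono source (a short noise burst) straight ahead of the listener.
    rng = np.random.default_rng(2)
    s = rng.normal(size=1024)
    W, X, Y, Z = s / np.sqrt(2.0), s, np.zeros_like(s), np.zeros_like(s)

    # The listener turns the head 90 degrees to the left; pan the scene to
    # compensate, so the source now lies to the listener's right.
    W, X, Y, Z = pan_b_format(W, X, Y, Z, yaw_rad=-np.pi / 2)

    for az in (np.pi / 4, -np.pi / 4):            # two frontal virtual speakers
        feed = virtual_speaker_feed(W, X, Y, Z, az, 0.0)
        print(f"speaker at {np.degrees(az):+.0f} deg, rms = {np.sqrt(np.mean(feed**2)):.2f}")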

PDF Full Paper

Interfaces for multi-sensor systems for navigation for the blind

P J Probert, D Lee & G Kao, University of Oxford, UK.

This paper describes work towards a multi-sensor system to assist blind people to move around an urban or indoor environment. The communication of navigation information is constrained both by the type of information imparted and, in many ways more crucially, by the type of information which real sensors can extract. In this paper we describe the use of ultrasound to provide two types of information: first, the low-level directional information which may currently be provided by a guide dog, and then information on features to allow the blind person to locate and direct themselves within the environment. A directional system with a simple vibrator interface is described and its performance assessed by a group of users. Finally we discuss the possibility of feature extraction for navigation by recognition and give some results using frequency-modulated sonar.

PDF Full Paper


Session VI - Rehabilitation II

Session Chair: Stuart Neilson

An overview of rehabilitation engineering research at the Delaware Applied Science and Engineering Laboratories

W S Harwin & R Foulds, University of Reading, UK / University of Delaware & Alfred I duPont Institute, Delaware, USA.

The Applied Science and Engineering Laboratories of the University of Delaware and the Alfred I. duPont Institute have an ongoing research program on human-machine interaction with a special emphasis on applications for people with disabilities. The Laboratories house two Rehabilitation Engineering Research Centers, the first on Augmentative and Alternative Communication Systems and the second on Applications of Robotics to Enhance the Functioning of Individuals with Disabilities. This paper reviews several projects within these two Centers as well as a project within a program on Science, Engineering and Math Education for Individuals with Disabilities. Consumer involvement has always been central to the research philosophy of the Laboratories and is achieved at a variety of levels. This paper includes a discussion on how effective user interaction is achieved.

PDF Full Paper

Virtual reality technology in the assessment and rehabilitation of unilateral visual neglect

S K Rushton, K L Coles & J P Wann, University of Edinburgh, UK / University of Reading, UK.

Unilateral visual neglect affects a very large proportion of patients immediately after stroke. The presence of neglect has been found to be the major determinant of recovery of everyday function (Denes, Semenza, Stoppa & Lis, 1982). Interventions for the rehabilitation of neglect are not very effective. Picking up from the suggestions of Robertson, Halligan & Marshall (1993), potential ways forward employing VR technology are discussed.

PDF Full Paper

Virtual reality environments for rehabilitation of perceptual-motor disorders following stroke

J P Wann, University of Reading, UK.

The incidence of perceptual-motor disorders arising from stroke is steadily increasing in the population of Europe and the USA. This paper outlines the potential role that virtual environments (VE) may play in designing remedial programmes for rehabilitation following stroke. Key principles for the structure of guided learning are identified, but emphasis is also placed on the need to identify when, and how, VE technology can introduce added value to the therapy situation.

PDF Full Paper

Nervous system correlates of virtual reality experience

L Pugnetti, L Mendozzi, E Barberi, F D Rose & E A Attree, Scientific Institute S. Maria Nascente, Don Gnocchi Foundation, Milan, Italy / University of East London, UK.

In recent years several papers have been published in which the effects of exposure to virtual reality (VR) on the activity of the nervous system have been discussed. This area of knowledge is of importance to those interested in VR applications, and especially to those who seek to apply VR to cognitive rehabilitation following brain damage. This paper reviews what is known about the effects on nervous system activity of interaction with virtual environments, comments on the authors' experience with both normal and neurologically impaired subjects, and outlines a suggested programme for future research.

PDF Full Paper

Virtual reality enriched environments, physical exercise and neuropsychological rehabilitation

D A Johnson, S K Rushton & J H Shaw, Astley Ainslie Hospital, Edinburgh, UK / University of Edinburgh, UK.

This paper presents preliminary data on the effects of physical exercise and virtual reality upon mood and cognition in severely brain injured adults. The work draws upon two established fields of experimental psychology, enriched environments and physical exercise, in order to propose a new orientation for neurological rehabilitation. The results will be discussed in relation to organisation and delivery of clinical services, and to the theoretical model of von Steinbuchel and Poppel (1993).

PDF Full Paper
