L Meakin, L Wilkins, C Gent, S Brown, D Moreledge, C Gretton, M Carlisle, C McClean, J Scott, J Constance & A Mallett, The Shepherd School, Nottingham, ENGLAND
In April 1997 the Shepherd School, the Virtual Reality Applications Research Team (VIRART) and the Metropolitan Housing Trust joined together to develop a Virtual City, which would help people with learning difficulties learn independent living skills. The Virtual City would be made up of many parts, each of which would be a Virtual Learning Environment.
At the end of the first year the Virtual City includes a house, a cafe, a supermarket and a transport system. People with learning difficulties can use a computer in a game-playing way to learn life skills in these four learning environments.
Often when computer programmes or learning schemes are devised for people with learning difficulties, no one asks them what they would like included. In the Virtual City, however, the views of people who might use the programmes guided the whole project from the very beginning. This paper will examine in detail the User Group's involvement in the development of the Virtual City.
D J Brown, S J Kerr & V Bayou, Nottingham Trent University/University of Nottingham, ENGLAND
This paper will develop the theme of the importance of community-based involvement in the development of virtual learning environments (VLEs) for people with a learning disability. It is being presented alongside two other papers, one by the User Group and the other by the Testing Group, describing the design, testing and distribution of the Virtual City. This set of VLEs comprises a computer aided learning (CAL) tool to teach independent living skills to people with a learning disability. Our presentation will demonstrate the involvement of users in each of the stages of development of the Virtual City, and the benefits of this partnership as opposed to a more tokenistic involvement.
Alongside the development of this methodology, the presentation will concentrate on a demonstration of the Virtual City to show how users can learn skills such as the use of public transport, road safety, safety within a home environment, the use of public facilities within a cafe, and shopping within a large supermarket. Video will also be shown of users involving themselves in the various stages of production of this CAL tool.
S V G Cobb, H R Neale & H Reynolds, University of Nottingham/Metropolitan Housing Trust Nottingham, ENGLAND
The Virtual Life Skills project describes a user-centred design approach to building virtual environments intended to provide a practice arena for skill learning in children and adults with learning disabilities. In the first year of the project four modules of a Virtual City have been developed: a house, a supermarket, a café and a transport system (see Brown et al, this issue, for a description of the project). Evaluation of the project has been concerned as much with the design of the virtual learning environments (VLEs) and issues of usability and access as with monitoring skill learning and transfer to the real world. Two approaches were taken to the evaluation of the four virtual learning environments. For three of the VLEs, the Supermarket, Café and Transport, a test-retest experimental design was used, comparing user performance in real world tasks with the same tasks presented in the VLE. Expert assessment was used to evaluate the Virtual House, looking at usability and the appropriateness of the learning scenarios. It was found that VLEs can provide interesting, motivating learning environments which are accessible to users with special needs. However, individuals differed in the amount of support required to use the input devices and achieve task objectives in the VLE. Expert and user review methods indicate that the VLEs are seen to be representative of real world tasks and that users are able to learn some basic skills. However, it would be unrealistic to expect transfer of skill over the short learning period used in this project. Further testing is needed to establish longitudinal learning effects and to develop more reliable techniques that allow users to express their own opinions by themselves.
R J McCrindle & R M Adams, The University of Reading, UK
The Multimedia Interface for the Disabled (MIND) project is concerned with developing a set of guidelines and authoring tools, for use by multimedia developers, to enable them to augment their products to encompass the specific needs of sensory impaired users. This paper presents the ethos behind the project and describes the MIND software prototype developed. The MIND prototype maximises the effectiveness of multimedia information delivery, through the provision of an adaptable and easily navigated user interface, which incorporates access to the augmented multimedia information created with the authoring tools.
E D Coyle, M Farrell, R White & B Stewart, Dublin Institute of Technology, IRELAND
A non-invasive scanning mechanism has been designed which is microprocessor controlled to locate and follow head movement in a defined zone about a person's head. The resulting head movements may then be mapped onto a computer screen and used to control the cursor, thus removing the need for a standard PC mouse. To demonstrate the concept, a Graphical User Interface (GUI) of a push-button telephone has been designed, and a procedure is outlined by which one may select and highlight a digit or group of digits, and in turn dial the chosen number.
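As a rough illustration of the cursor-mapping step, the sketch below linearly scales head pan/tilt angles into screen coordinates. The angle ranges, screen size and function names are assumptions for illustration, not details of the authors' system.

```python
# Hypothetical mapping from sensed head angles to a screen cursor;
# the ranges and screen size are assumed, not taken from the paper.
def head_to_cursor(pan_deg, tilt_deg, screen=(1024, 768),
                   pan_range=(-30.0, 30.0), tilt_range=(-20.0, 20.0)):
    """Clamp the head angles to the working zone, then scale linearly
    to pixel coordinates."""
    def scale(value, lo, hi, size):
        value = min(max(value, lo), hi)          # clamp to the zone
        return int((value - lo) / (hi - lo) * (size - 1))
    x = scale(pan_deg, pan_range[0], pan_range[1], screen[0])
    y = scale(tilt_deg, tilt_range[0], tilt_range[1], screen[1])
    return x, y
```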
P E Jones, University of Western Australia, AUSTRALIA
All our teenage users are confined to electric wheelchairs and are unable to speak or make any voluntary movements much beyond either moving their head against one of three switches mounted in the chair's headrest or to hit a large "banger" switch. Real-world devices are beyond reach, only devices in a virtual world are attainable. This virtual keyboard project was designed to meet their needs for interacting with commercial off-the-shelf software such as word processors, spreadsheets, electronic mail and Internet tools. The virtual keyboard uses scanning augmented by character prediction and word completion.
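The scanning-plus-prediction approach can be illustrated with a minimal sketch. The row layout, the tiny stand-in lexicon and all names below are hypothetical; the abstract does not specify the real system's scanning order or prediction model.

```python
# A toy single-switch scanning keyboard with word completion.
ROWS = ["abcdef", "ghijkl", "mnopqr", "stuvwx", "yz ,.?"]
WORDS = ["hello", "help", "home", "house"]      # stand-in lexicon

def scan_select(rows, row_press, col_press):
    """Simulate two switch presses: the first while the highlight
    cycles over the rows, the second while it cycles within the row."""
    row = rows[row_press % len(rows)]
    return row[col_press % len(row)]

def completions(prefix, limit=3):
    """Offer dictionary words completing what has been typed so far."""
    return [w for w in WORDS if w.startswith(prefix)][:limit]

# e.g. scan_select(ROWS, 1, 2) -> 'i'; completions("he") -> ['hello', 'help']
```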
T Swingler, Norwich, UK
Soundbeam is a 'virtual' musical instrument, an invisible, elastic keyboard in space which allows sound and music to be created without the need for physical contact with any equipment. The system utilises ultrasonic ranging technology coupled to a processor which converts distance and movement information into MIDI. Originally developed for dancers - giving them a redefined relationship with music - Soundbeam has proved to have dramatic significance in the field of disability and special education, because even those with profound levels of impairment are, even with the most minimal movements, able to compose and to instigate and shape interesting, exciting and beautiful sounds. Individuals who may be especially difficult to stimulate can benefit from what may for them be a first experience of initiation and control. A continuum of applications - ranging from the fundamentals of 'sound therapy' (posture, balance, cause-and-effect) through to more creative and experimental explorations in which disabled children and adults become the composers and performers of enthralling musical collaborations, and beyond to interactive installations and multimedia performance - can be described.
Soundbeam first appeared in prototype form in 1984 and was taken up in a serious way in special schools in the UK and subsequently in Scandinavia, the Netherlands and elsewhere following its launch in 1990. A totally redesigned version of the machine will be ready in the Autumn of 1998.
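For readers unfamiliar with the technique, the sketch below shows one plausible distance-to-MIDI mapping of the kind such an instrument performs: divide the beam into zones and assign each zone a note of a scale. The scale, beam length and note numbers are assumptions, not details of the actual product.

```python
# Illustrative distance-to-note mapping for an ultrasonic beam.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]    # semitone offsets within an octave

def distance_to_note(distance_cm, beam_length_cm=200, base_note=48):
    """Divide the beam into equal zones and map each zone to the next
    degree of a C major scale, starting at `base_note` (C3)."""
    zones = len(C_MAJOR) * 2                          # two octaves
    zone = min(int(distance_cm / beam_length_cm * zones), zones - 1)
    octave, degree = divmod(zone, len(C_MAJOR))
    return base_note + 12 * octave + C_MAJOR[degree]

def note_on(note, velocity=100, channel=0):
    """Raw three-byte MIDI note-on message for the chosen note."""
    return bytes([0x90 | channel, note, velocity])
```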
R C Davies, G Johansson, K Boschian, A Lindén, U Minör & B Sonesson, University of Lund/Lund University Hospital, Höör, SWEDEN
Virtual Reality (VR) as a complementary tool for medical practitioners in the assessment and rehabilitation of people who have suffered a brain injury is discussed. A pilot study has been undertaken on a prototype VR assessment tool. The design involved nine occupational therapists with expertise in the care of traumatic brain-injured patients and one (computer-experienced) patient. The aim was to begin a dialogue and to ascertain the potential of a VR system. A common method for occupational therapists to assess function and ability is to ask a patient to brew coffee. From the performance of such a task, an individual's "functional signature" can be determined. The prototype was built using Superscape®, a personal computer based VR system, to be close to the real coffee making task, including the effects of making mistakes, realistic graphics and sound effects. The world was designed to be as easy to use and intuitive as possible, though problems of mental abstraction level, transfer of training and realistic interaction have yet to be resolved. The comments from the test participants have highlighted problem areas, given positive insight and pointed out other scenarios where VR may be of use in the rehabilitation of people with a traumatic brain injury.
F D Rose, E A Attree, B M Brooks, D M Parslow, P R Penn & N Ambihaipahan, University of East London, UK
Training is one of the most rapidly expanding areas of application of Virtual Reality (VR) technology, with virtual training being developed in industry, commerce, the military, medical and other areas of education, and in a variety of types of rehabilitation. In all cases such training rests upon the assumption that what is learned in the virtual environment transfers to the equivalent real world task. Whilst there is much anecdotal evidence, there have been few systematic empirical studies, and those that have been carried out do not lead to clear conclusions. This paper reports preliminary findings from a study, using a simple sensorimotor task, which seeks to establish not only the extent of transfer but also the reliability and robustness of whatever transfers. The findings demonstrate a clear positive transfer effect from virtual to real training and suggest that the cognitive strategy elements and cognitive loads of the two types of training are broadly equivalent. However, caution is advised in the interpretation of these findings. The results are discussed in the wider context of models of transfer of training.
L Pugnetti, L Mendozzi, E Barbieri, A Motta, D Alpini, E A Attree, B M Brooks & F D Rose, Scientific Institute S. Maria Nascente, Don Gnocchi Foundation, Milan, ITALY/Casa di Cura "Policlinico di Monza", Monza, ITALY/University of East London, UK
The collaboration between our two scientific institutions is making significant contributions to VR research in several fields of clinical application. Concerning the important issue of side-effects, future studies will clarify whether the encouraging results obtained in the recent past with patients with neurological diseases can be confirmed, and whether specific recommendations for the use of immersive VR in selected clinical populations can be made. Recent collaborative studies on the application of non-immersive VR to improve clinical testing of spatial memory provided evidence of good replicability of results in both healthy and neurologically affected groups. The development of retraining applications for spatial memory impairments, and future studies aimed at assessing the impact of ambulatory disability on spatial cognitive abilities, will be based on these findings. Finally, a newly approved transnational project will lead our groups into the field of assistive technology to improve the working skills and employment opportunities of people with mental disabilities who seek a job.
M Magnusson, University College of Karlstad, SWEDEN
All speech pathologists in one Swedish county will develop methods for supervision, therapy and professional development using ISDN-based videotelephony and the Internet. The project is the first of its kind and has initiated follow-up projects. It is primarily based upon research developed at the Department of Disability and Language, University of Karlstad.
R Berka & P Slavík, Czech Technical University Prague, CZECH REPUBLIC
The paper describes a solution to the problem of how blind users can work with the three-dimensional information available in the computer environment. This serious problem gains more and more importance as the use of three-dimensional information steadily grows. An experimental system that allows such communication has been designed and implemented. This system can be used as a kernel for applications of various kinds that deal with the problem of communication between a blind user and a three-dimensional model.
C Colwell, H Petrie, D Kornbrot, A Hardwick & S Furner, University of Hertfordshire/British Telecommunications plc Ipswich, UK
This paper describes two studies concerning the use of haptic virtual environments for blind people. The studies investigated the perception of virtual textures and the perception of the size and angles of virtual objects. In both studies, differences in perception by blind and sighted people were also explored. The results have implications for the future design of VEs in that it cannot be assumed that virtual textures and objects will feel to users, whether blind or sighted, as the designer intends.
G Jansson, Uppsala University, SWEDEN
The aim was to investigate the usefulness of a haptic force feedback device (the PHANToM) for presenting information without visual guidance. Blindfolded sighted observers judged the roughness of real and virtual sandpapers to be very nearly the same. The 3D forms of virtual objects could be judged accurately, and with short exploration times, down to a size of 5 mm. It is concluded that the haptic device can present useful information without vision under the conditions of the experiments. The results can be expected to be similar for severely visually impaired observers, but this will be checked in a separate experiment.
M Cooper & M E Taylor, Open University Milton Keynes, UK
To date, much more effort has been directed at generating credible presentations of virtual worlds in the visual medium than at reproducing corresponding, or even self-contained, worlds with synthesised or recorded sound. There is thus today a disparity in the relative fidelity of these two modes in most VR systems. While much work has been done on high-fidelity and three-dimensional sound reproduction in psycho-acoustic research and in applications for audiophiles, this has rarely been taken on board by the VR community.
This paper describes work ongoing to apply Ambisonic techniques to the generation of audio virtual worlds and environments. Firstly Ambisonics is briefly outlined then principles behind its implementation in this context described. The design of the implementations to date is described and the results of trials discussed. The strengths and limitations of the approach are discussed in the light of this practical experience.
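For context, first-order Ambisonic (B-format) encoding of a mono source is textbook material and can be stated compactly; the sketch below is that standard formulation, not the authors' implementation.

```python
# Standard first-order B-format encoding of a mono sample arriving
# from a given direction (angles in radians, Ambisonic conventions).
import math

def encode_b_format(sample, azimuth, elevation):
    """Return the (W, X, Y, Z) B-format components for one sample."""
    w = sample / math.sqrt(2)        # omnidirectional component
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z
```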
A range of applications is envisaged for this technique that would be of particular benefit to disabled people. The two principal areas under consideration by the authors are the extended use of the audio mode in HCI for people with disabilities, and VR applications for blind and partially sighted people.
Both of these areas are described and specific examples of applications given for each. The limitations imposed by available low-cost technology are highlighted, and the technological evolution required to make such applications widespread is commented on.
The results of the development work and subsequent user trials undertaken to date are given and discussed. Lines of further exploration of this technique and its application are outlined.
M Lumbreras & J Sánchez, University of Chile, Santiago, CHILE
Interactive stories are commonly used for learning and entertainment, enhancing the development of several perceptual and cognitive skills. These experiences are not very common among blind children because most computer games and electronic toys do not have interfaces that make them accessible.
This study introduces the idea of interactive Hyperstories performed in a 3D acoustic virtual world. The hyperstory model enables us to build an application to help blind children enrich their early world experiences through the exploration of interactive virtual worlds, using 3D aural representations of the space. We have produced AudioDoom, interactive model-based software for blind children. The prototype was qualitatively and quantitatively field-tested with several blind children in a Chilean school setting.
Our preliminary results indicate that when acoustic-based entertainment applications are carefully applied with an appropriate methodology, they can stimulate diminished cognitive skills. We also found that spatial sound experiences can create navigable spatial structures in the minds of blind children. Methodology and usability evaluation procedures and results appear to be critical to the effectiveness of interactive Hyperstories performed in a 3D acoustic virtual world.
O Losson & J-M Vannobel, Université des Sciences et Technologies de Lille, FRANCE
The special needs of deaf people now call for sign language synthesis. The system presented here is based on a hierarchical description of signs that attempts to take the different grammatical processes into account. Emphasis is placed on the specification of hand configurations through finger-shape primitives and global hand properties, and on issues of location and orientation computation. We then present the results achieved from the corresponding written form of signs, leading to their virtual animation by computer.
F García-Ugalde, D Gatica-Pérez & V García-Garduño, FI-UNAM, MÉXICO / University of Washington, Seattle, USA
Natural man-machine communication, that is, without cumbersome gloves, is still an open problem. Keeping in mind the need for friendly tools that help people with disabilities use the computer as a support for reinforced methods of learning to read, or for other applications, in this work we have addressed the problem of communicating with a computer through the recognition of very basic hand gestures. From an engineering point of view, our system is based on a video camera which captures image sequences; first, the hand gestures are segmented in order to provide information for their subsequent classification and recognition. To classify the segmented gesture fields (for instance, hand #1 and hand #2; see figures 5a and 6a), we first obtain a binary version of these fields by comparing them with a threshold, rendering the classification faster; then, based on the Radon transform (Lim, 1990), the projected sum of the binary intensity of the gestures is computed along two directions (see figures 1 and 2). To reduce the number of data to be processed, a wavelet decomposition of the projected sum of the binary intensity for each of the two orientations is carried out using Daubechies d4 filters (Daubechies, 1988). This projected and wavelet-decomposed information is used to classify the gestures: after training the system with our dictionary, the computer is able to recognize these very simple gestures by computing the correlation coefficient between the wavelet coefficients of the trained sequences and those captured and computed in continuous operation. The region segmentation uses a dense motion vector field as its main information source, and each region is matched to a four-parameter motion model (Gatica-Pérez et al, 1997). Based on Markov Random Fields, the segmentation model detects moving parts of the human body with different apparent displacements, such as the hands (García-Ugalde et al, 1997). The motion vector field is estimated by a Baaziz pel-recursive method (Baaziz, 1991) and considered, together with other sources of information such as intensity contours, intensity values and non-compensated pixels, as input to the Markov Random Field model. The maximum a posteriori (MAP) criterion is used to optimize the solution, performed with a deterministic method: iterated conditional modes (ICM). The complete segmentation algorithm includes initialization, region numbering and labeling, parameter estimation of the motion model in each region, and optimization of the segmentation field. Our probabilistic approach thus takes into account the fact that an exact displacement field does not exist (errors usually occur at or around motion boundaries), and that better results can be attained if an indicator of the quality of the vector field is known; this indicator is obtained from the non-compensated pixels as well as the intensity contours (García-Ugalde et al, 1997).
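A schematic of the classification pipeline (binarise, project, wavelet-decompose, correlate) might look as follows. The NumPy and PyWavelets calls are real ('db2' is PyWavelets' name for the four-tap Daubechies d4 filter), but the threshold, the choice of two orthogonal projection directions and the template format are illustrative assumptions.

```python
import numpy as np
import pywt

def gesture_signature(gray_region, threshold=128):
    """Binarise the segmented hand region, project it along two
    orthogonal directions (the Radon transform at two angles), and
    keep the approximation coefficients of a Daubechies-4 wavelet
    decomposition of each projection."""
    binary = (gray_region > threshold).astype(float)
    projections = [binary.sum(axis=0), binary.sum(axis=1)]
    coeffs = [pywt.wavedec(p, "db2", level=1)[0] for p in projections]
    return np.concatenate(coeffs)

def classify(signature, templates):
    """Pick the trained gesture whose signature correlates best;
    templates must have the same length as `signature`."""
    scores = {name: np.corrcoef(signature, t)[0, 1]
              for name, t in templates.items()}
    return max(scores, key=scores.get)
```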
H Sawada, T Notsu & S Hashimoto, Waseda University, Tokyo, JAPAN
This paper proposes a Japanese sign-language recognition system using acceleration sensors, position sensors and datagloves to capture human dynamic motion and finger geometry. The sensor integration method achieves more robust gesture recognition than a single-sensor approach. Sign-language recognition is performed by reference to a Japanese sign-language database in which words are written as sequences of gesture primitives. Two recognition algorithms, an automaton-based algorithm and a Hidden Markov Model (HMM), are introduced and tested in practical experiments.
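The database lookup over gesture-primitive sequences can be sketched as a simple in-order subsequence matcher; the primitive names and two-word lexicon below are invented for illustration, and the HMM stage is not reproduced.

```python
# Toy lexicon: each word is a sequence of gesture primitives.
LEXICON = {
    "hello": ["raise_hand", "wave"],
    "thanks": ["flat_hand", "move_down"],
}

def match_word(primitives):
    """Return the first lexicon word whose primitive sequence occurs
    in order within the recognised primitive stream."""
    for word, pattern in LEXICON.items():
        stream = iter(primitives)
        if all(p in stream for p in pattern):   # in-order subsequence
            return word
    return None

# e.g. match_word(["rest", "raise_hand", "wave"]) -> 'hello'
```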
T Kuroda, K Sato & K Chihara, Nara Institute of Science and Technology, JAPAN
Although modern telecommunications have changed our daily lives drastically, deaf people cannot benefit from those based on phonetic media. This paper introduces a new telecommunication system for sign language utilizing VR technology, which enables natural signed conversation over an analogue telephone line.
With this method, a person converses with his/her party's avatar instead of live video of the party. As the speaker's actions are transmitted as kinematic data, the transmitted data is highly compressed without losing the linguistic and non-linguistic information of the spoken signs.
A prototype system, S-TEL, implementing this method on UDP/IP, proved the effectiveness of avatar-based communication for sign conversation via a real lossy channel.
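The bandwidth argument is easy to make concrete: one posture frame of kinematic data fits in tens of bytes. The joint count, packing format and port in this sketch are assumptions, not S-TEL's actual protocol.

```python
import socket
import struct

JOINTS = 20                      # assumed number of tracked angles

def send_posture(angles, host="127.0.0.1", port=5005):
    """Pack one frame of joint angles (floats, radians) into a single
    UDP datagram: 80 bytes per frame instead of a video stream."""
    assert len(angles) == JOINTS
    payload = struct.pack(f"<{JOINTS}f", *angles)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))
    sock.close()
```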
P Peussa, A Virtanen & T Johansson, VTT Automation, Tampere/Adaptation Training Centre for Disabled, Launeenkatu, FINLAND
A modular mobility aid system is presented which can be attached to most commercial electric wheelchairs to increase the independence and safety of the wheelchair user. The system consists of falling-avoidance, obstacle-avoidance and beaconless navigation functions. Experiences from an evaluation period are presented. An idea is also presented for combining map information with environmental perception in order to give the wheelchair user better awareness of his/her whereabouts. The navigation system can also be applied to other mobile robot applications.
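Beaconless navigation of this kind typically rests on dead reckoning from wheel odometry. The sketch below is the standard differential-drive update with an assumed wheel base, not the authors' implementation.

```python
import math

def dead_reckon(x, y, heading, d_left, d_right, wheel_base=0.6):
    """Advance the pose estimate (metres, radians) given the distances
    travelled by the left and right wheels since the last update."""
    d_centre = (d_left + d_right) / 2
    d_theta = (d_right - d_left) / wheel_base
    x += d_centre * math.cos(heading + d_theta / 2)
    y += d_centre * math.sin(heading + d_theta / 2)
    return x, y, heading + d_theta
```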
M Desbonnet, S L Cox & A Rahman, University of Limerick, IRELAND
Children need mobility for normal development. An electric wheelchair can provide mobility for severely disabled children, but this requires training, which may be difficult. The virtual reality-based training system described here aims to solve these difficulties in a practical, cost-effective way. The project involved the construction of two virtual environments for training these children, achieved by developing a software solution using WorldToolKit and AutoCAD.
M R Everingham, B T Thomas, T Troscianko & D Easty, University of Bristol, UK
This paper describes a new approach to image enhancement for people with severe visual impairments to enable mobility in an urban environment. A neural-network classifier is used to identify objects in a scene so that image content specifically important for mobility may be made more visible. Enhanced images are displayed to the user using a high saturation colour scheme where each type of object has a different colour, resulting in images which are highly visible and easy to interpret. The object classifier achieves a level of accuracy over 90%. Results from a pilot study conducted using people with a range of visual impairments are presented in which performance on a difficult mobility-related task was improved by over 100% using the system.
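The final display stage, recolouring the image by object class, can be sketched in a few lines; the palette and class labels are assumptions for illustration.

```python
import numpy as np

# Hypothetical class palette: one high-saturation colour per class.
PALETTE = {0: (255, 0, 0),      # e.g. obstacle -> red
           1: (0, 255, 0),      # e.g. pavement -> green
           2: (0, 0, 255)}      # e.g. road -> blue

def enhance(labels):
    """Map an H x W array of class labels to an H x W x 3 RGB image."""
    out = np.zeros(labels.shape + (3,), dtype=np.uint8)
    for cls, colour in PALETTE.items():
        out[labels == cls] = colour
    return out
```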
H Mori & S Kotani, Yamanashi University, JAPAN
We have been developing the Robotic Travel Aid (RoTA) "HARUNOBU" to guide the visually impaired on sidewalks or around a campus. RoTA is a motorized wheelchair equipped with a vision system, sonar, a differential GPS system, a dead-reckoning system and a portable GIS. We assess the performance of RoTA from two viewpoints: guidance and safety. RoTA is superior to the guide dog in its navigation function but inferior in mobility: it can show the route from the current location to the destination, but it cannot walk up and down stairs. RoTA is superior to a portable navigation system in orientation, obstacle avoidance and the physical support it gives to keep the walker balanced, but it is inferior in portability.
A Rizzo, J G Buckwalter, P Larson, A van Rooyen, K Kratz, U. Neumann, C Kesselman & M Thiebaux, University of Southern California, Los Angeles, CA/Fuller Graduate School of Psychology, Pasadena CA, USA
Virtual Reality technology offers the potential to create sophisticated new tools which could be applied in the areas of neuropsychological assessment and cognitive rehabilitation. If empirical studies demonstrate effectiveness, virtual environments (VEs) could be of considerable benefit to persons with cognitive and functional impairments due to acquired brain injury, neurological disorders, and learning disabilities. Testing and training scenarios that would be difficult, if not impossible, to deliver using conventional neuropsychological methods are being developed which take advantage of the attributes of virtual environments. VE technology allows for the precise presentation and control of dynamic 3D stimulus environments, in which all behavioral responding can be recorded. One cognitive domain for which the specific advantages of a virtual environment are particularly well suited is human visuospatial ability. Our paper outlines the application of a virtual environment for the study, assessment, and possible rehabilitation of a visuospatial ability referred to as mental rotation. The rationale for the Virtual Reality Spatial Rotation (VRSR) system is discussed, and the experimental design being used to collect data from a normal population aged 18 to 40 is presented. Our research questions are then outlined, and we discuss some preliminary observations on the data collected thus far with the system.
D Alpini, L Pugnetti, L Mendozzi, E Barbieri, B Monti & A Cesarani, Scientific Institute S. Maria Nascente, Fondazione don Gnocchi, Milan/University of Sassari, ITALY
While vestibulo-oculomotor and vestibulo-spinal functions are usually investigated by means of electronystagmography and stabilometry, environmental exploration and navigation cannot be easily studied in the laboratory. We propose that virtual reality (VR) can provide a solution, especially for those interested in the assessment of vestibular influence over spatial cognitive activities. Subjects exposed to immersive VR show rotatory behaviors during exploration that are the result both of a lateralized vestibular dominance and of the interplay with ongoing cognitive activity. The effect of vestibular dominance over exploratory behavior disappears in non-immersive VR conditions, but certain patterns of exploratory movements still seem to be associated with cognitive performance. On these grounds, we propose the use of VR to improve current techniques of vestibular rehabilitation based on visual feedback. We describe new equipment that combines the functions of a digital stepping analyzer with those of a PC VR workstation. The patient controls the navigation of virtual environments by means of appropriate displacements of the center of gravity. The combination of closed-loop feedback to control displacements of the center of gravity and active exploration of the environments makes an otherwise static exercise contingent on a veridical representation of spatial navigation.
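The closed-loop mapping from centre-of-gravity displacement to virtual navigation could plausibly take the following form; the gains and dead zone are invented, and the abstract does not describe the real control law.

```python
def cog_to_motion(dx, dy, dead_zone=0.02, gain_turn=2.0, gain_speed=1.5):
    """Map normalised centre-of-gravity displacement (lateral dx,
    forward dy) to a (forward_speed, turn_rate) navigation command,
    ignoring sway smaller than the dead zone."""
    forward = gain_speed * dy if abs(dy) > dead_zone else 0.0
    turn = gain_turn * dx if abs(dx) > dead_zone else 0.0
    return forward, turn
```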
E Ahlsén & V Geroimenko, Göteborg University, SWEDEN
The paper presents a prototype using virtual reality as the basis for a picture communication aid for persons with aphasia (acquired language disorder after a focal brain injury). Many persons with aphasia have severe word-finding problems and can produce no speech or very little speech. This problem is often connected to problems of abstraction, which makes it problematic to use picture communication based on classifying pictures by semantically superordinate categories and searching for them via these superordinate categories. The use of virtual reality makes it possible to use purely visuo-spatial orientation as a search strategy in a picture database. In the Virtual Communicator for Aphasics, we are exploring these possibilities. By "moving around" in a virtual environment, based on video-filmed panoramas, the user can search for pictures essentially in the same way as he/she would search for objects in the real world and communicate by pointing to the pictures. Pointing for communication can be used directly in the panoramas, in overview pictures accessed by hotspots, and in arrays of pictures of objects also accessed by pointing to hotspots in panoramas or overview pictures. Speech output is possible. Some of the potential advantages and limitations of using virtual reality based on photorealistic panoramas and pictures in this type of application are discussed.
G T Foster, D E N Wenn & F O'Hart, University of Reading, UK/University of Dublin, Trinity College, IRELAND
This paper describes the generation of virtual models of the built environment based on the control network infrastructures currently utilised in intelligent building applications for such things as lighting, heating and access control. The use of control network architectures facilitates the creation of distributed models that closely mirror both the physical and control properties of the environment. The model of the environment is kept local to the installation, which allows the virtual representation of a large building to be decomposed into an interconnecting series of smaller models. This paper describes two methods of interacting with the virtual model: firstly, a two-dimensional representation that can be used as the basis of a portable navigational device; secondly, an augmented reality system called DAMOCLES that overlays additional information on a user's normal field of view. The provision of virtual environments offers new possibilities in the man-machine interface, allowing a user intuitive access to network-based services and control functions.
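The decomposition of a large building into small interconnected models suggests a zone graph that a portable navigator can search; the three-zone building and plain breadth-first search below are purely illustrative.

```python
# Invented zone graph: each zone keeps only its own exits.
BUILDING = {
    "lobby": {"corridor_a": "east"},
    "corridor_a": {"lobby": "west", "office_12": "north"},
    "office_12": {"corridor_a": "south"},
}

def route(start, goal):
    """Breadth-first search over the zone graph; returns zone names."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in BUILDING[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# e.g. route("lobby", "office_12") -> ['lobby', 'corridor_a', 'office_12']
```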
A W Joyce, III & A C Phalangas, Alfred I. DuPont Hospital for Children/University of Delaware, USA
Virtual Interaction refers to a technique for interacting with computer-generated graphics. Graphical objects are overlaid on live video of the user. A chromakey separates the user (foreground) from the background, resulting in a silhouette of the user. The computer moves the graphical objects in relation to the silhouette so that they appear to interact with the user. This paper presents the implementation of the system and some techniques for interaction, and discusses using the system as a tool for physical therapy.
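The two core operations, extracting the silhouette by chromakey and testing object-silhouette overlap, can be sketched as follows; the key colour, tolerance and circular object are assumptions.

```python
import numpy as np

def silhouette(frame, key_colour=(0, 255, 0), tol=60):
    """Boolean mask of pixels differing from the key colour, i.e. the
    user's silhouette in front of a chromakey backdrop."""
    diff = np.abs(frame.astype(int) - np.array(key_colour)).sum(axis=2)
    return diff > tol

def touches(mask, x, y, radius=10):
    """True if a circular graphical object at (x, y) overlaps the
    silhouette, so the object should react to the user."""
    h, w = mask.shape
    ys, xs = np.ogrid[:h, :w]
    inside = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    return bool(mask[inside].any())
```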
A Osorio, LIMSI-CNRS, FRANCE
The purpose of this publication is to present a 3D graphic modeler with interactive capabilities operating in real time. It may be used on both Unix workstations and PCs. The modeler can be used to make and edit 3D reconstructions of articulated objects, a facility that is particularly useful in processing medical images generated by a CT scanner or magnetic resonance units. The modeler takes account of the physical characteristics of the objects manipulated and, in developing it, we have assumed that the prospective user will have the necessary expertise to interact with and interpret the model.
M T Beitler & R A Foulds, University of Pennsylvania/duPont Hospital for Children, Wilmington USA
The objective of the work presented in this paper is to create a complete database of Anthropometric Models of Children which describes not only the anthropometric attributes of the individuals but also their functional abilities and growth behaviors. The work also includes a prototype system which is used during an assessment session to automatically gather the subject's anthropometric and functional information and incorporate it into the population data of the model database.