
Portable Bioelectronic System for Real-Time Motion Tracking in Virtual Reality: Integrating Movella Sensors with Vizard for Neurorehabilitation and Sports Applications

Written By

Wangdo Kim

Submitted: 31 October 2024 Reviewed: 13 November 2024 Published: 09 January 2025

DOI: 10.5772/intechopen.1008680


From the Edited Volume

Current Developments in Biosensor Applications and Smart Strategies [Working Title]

Associate Prof. Selcan Karakuş


Abstract

This study presents a portable bioelectronic system designed for real-time motion tracking in virtual reality (VR) environments, with a focus on applications in neurorehabilitation and sports performance analysis. By integrating Movella wearable sensors with the Vizard VR platform, the system offers a cost-effective and flexible solution for capturing and analyzing human motion. Leveraging Bluetooth Low Energy (BLE), it connects multiple Inertial Measurement Units (IMUs) to a computer, enabling precise kinematic computations essential for therapeutic exercises, biomechanical research, and performance optimization in sports. The integration of Python scripting within Vizard allows for the development of interactive three-dimensional (3D) content and VR applications that dynamically respond to live motion data. In addition, the system incorporates Laban’s A Scale from Laban Movement Analysis (LMA) to guide upper arm movement training, enhancing user engagement and rehabilitation outcomes. Validation through experiments using soft exoskeletons demonstrated high accuracy and reliability, making this system a robust tool for telemedicine, healthcare, and sports applications. The open-source availability of our code supports further innovation in wearable bioelectronic device technology and personalized therapy.

Keywords

  • real-time motion tracking
  • virtual reality (VR)
  • inertial measurement units (IMUs)
  • quaternion-based orientation
  • Python scripting for VR
  • kinesphere

1. Introduction

The rapid advancement of wearable bioelectronic devices has revolutionized telemedicine, healthcare, and sports applications by enabling real-time, non-invasive monitoring and analysis of physiological signals [1]. Devices, such as Holter electrocardiographs (ECGs), wearable electroencephalogram (EEG) monitors, and fitness bands, have become integral tools for continuous health monitoring and performance optimization [2, 3]. These portable technologies, when combined with robust data analytics and digital signal processing, offer unprecedented opportunities for personalized care, remote diagnostics, and advanced biomechanical analysis.

In the context of human movement research and neurorehabilitation strategies, wearable sensors have emerged as powerful tools for tracking and analyzing human motion. Inertial Measurement Units (IMUs), in particular, provide a portable, cost-effective alternative to traditional optical motion capture systems [4]. Their ability to capture kinematic data in real-time has made them indispensable not only in rehabilitation protocols, but also for sports training and in interactive virtual environments.

Simultaneously, the growth of virtual reality (VR) technology has opened new frontiers for immersive therapy, training, and biomechanical research [5, 6]. VR environments allow for the creation of interactive scenarios that can be tailored to the needs of individual users [7], making them particularly effective in neurorehabilitation. Specifically, VR-based neurorehabilitation is recognized for promoting motor learning and recovery by providing task-specific and interactive environments critical for restoring functional abilities, particularly following stroke [8]. The use of VR technology facilitates high-resolution monitoring and real-time assessment of motor deficits, enabling therapists to tailor interventions according to individual performance needs [9].

Research on VR rehabilitation highlights the importance of engaging patients through gamified exercises, leading to improved motivation and adherence to therapeutic regimens. For example, virtual environments integrated with feedback mechanisms (both haptic and visual) have proven effective in enhancing motor performance during upper extremity rehabilitation exercises [9], leading to better functional recovery.

Motivated by these considerations, the primary objective of this study is to bridge the gap between wearable bioelectronic devices and VR applications by integrating research-grade Movella DOT IMU sensors (Movella, Henderson, NV, USA) with the Vizard VR platform using Python scripting [10]. Our approach builds upon existing tools like OpenSim [11], which models musculoskeletal systems, and OpenSense, which focuses on IMU-based kinematic measurements [12]. We extend these concepts by implementing a real-time data streaming solution that seamlessly integrates with Vizard’s Python-centric environment, enabling the development of interactive 3D content and VR applications for healthcare and sports performance analyses [13].

As a secondary objective, this work also incorporates Laban’s Space Harmony principles [14], specifically the “A” scale, into the tested rehabilitation exercises, offering a structured approach to guiding upper limb movements [15]. By visualizing the user’s kinesphere within an icosahedron, we enhance the perception of spatial relationships and movement patterns, making exercises more engaging and effective. This integration not only facilitates improved motor function recovery, but also provides clinicians with detailed movement data for personalized therapy planning [16].

Overall, this study contributes to the development of portable bioelectronic systems that can be used for telemedicine, rehabilitation, and sports applications. By providing an open-source framework, we aim to lower the barriers to entry for VR development in healthcare and sports science, paving the way for innovative approaches to personalized therapy and movement analysis.


2. Materials and methods

2.1 Hardware setup, software implementation, and kinematic verification

We utilized Movella DOT sensors (Movella, Henderson, NV, USA), which are IMUs capable of quantifying three-dimensional (3D) angular kinematics [17]. The sensors were connected to a computer via Bluetooth Low Energy (BLE) technology, allowing for wireless data transmission [18]. The hardware configuration consisted of multiple Movella DOT sensors for upper arm, forearm, and hand tracking, a Bluetooth-enabled computer capable of running the Vizard software, and optionally a virtual reality headset for immersive visualization.

The software implementation was primarily based on Python scripting within the Vizard environment (Figure 1). Vizard was chosen for its Python-centric approach and its capabilities in 3D content generation and virtual reality simulations [19]. The key components developed include:

  • BLE communication module: we utilized the “bleak” library for BLE communication, implemented asynchronous functions for connecting to and streaming data from the Movella DOT sensors, and incorporated a notification callback system to handle incoming sensor data.

  • Data processing module: we created functions to encode and decode the custom data format from the Movella DOT sensors and employed quaternion-based calculations for orientation tracking [4].

  • Avatar control system: we utilized Vizard’s built-in avatar system, developed scripts to map sensor data to avatar bone orientations [20], and implemented real-time updating of avatar postures based on incoming sensor data.

  • Virtual environment: we created a 3D virtual environment using Vizard’s scene creation tools and implemented a camera system for the user’s perspective in the virtual environment.
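
To make the BLE module concrete, the following minimal sketch shows the connect-and-notify pattern with the bleak library. The characteristic UUID and device address below are placeholders; the real values come from the Movella DOT BLE service specification.

import asyncio
from bleak import BleakClient

# Placeholder UUID and address; substitute the measurement characteristic
# UUID and device address from the Movella DOT BLE specification.
MEASUREMENT_UUID = "00000000-0000-1000-8000-00805f9b34fb"

def notification_callback(sender, data):
    # Called by bleak for each incoming packet; decoding is delegated
    # to the data processing module.
    print(f"received {len(data)} bytes")

async def connect_and_stream(address, seconds=30.0):
    # Connect to one sensor and subscribe to its measurement notifications.
    async with BleakClient(address) as client:
        await client.start_notify(MEASUREMENT_UUID, notification_callback)
        await asyncio.sleep(seconds)
        await client.stop_notify(MEASUREMENT_UUID)

asyncio.run(connect_and_stream("D4:22:CD:00:00:01"))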

Figure 1.

Conceptual diagram of hardware setup and software implementation. This diagram illustrates the flow of data and processing within the system. The Movella DOT sensors provide motion data, which are transmitted via Bluetooth to the computer where BLE communication and data processing modules handle the data. The processed data are then used by the Vizard software for real-time avatar control and visualization within the virtual reality environment, ensuring accurate and responsive representation of the user’s movements.

To verify the proposed motion tracking system, we conducted a series of experiments comparing two-dimensional (2D) angular measurements obtained from the IMUs with those recorded using a traditional goniometer system. The goal was to assess the accuracy of the IMU system in capturing real-time angular changes during arm movements and compare its performance against that of a well-established reference method.

In the experimental setup, the IMUs and the goniometer were both attached to the participant’s upper and lower arms to measure flexion-extension angles of the elbow joint. The goniometer’s vertical arm was fixed to the upper arm, whereas the horizontal arm was fixed to the lower arm, allowing us to measure the angle between the two arm segments (Figure 2). The IMU data were then compared to the angular measurements derived from the goniometer.

Figure 2.

Experimental setup for 2D angular measurements comparing goniometer readings with those obtained from the IMU system. The vertical arm of the goniometer was fixed to the lateral side of the upper arm, whereas the horizontal arm was fixed to the lower arm, allowing us to measure the angle between the upper and lower arm segments.

2.2 Icosahedron creation and integration with gamification elements

In this study, the icosahedron is used as a spatial framework to represent and analyze the avatar’s kinesphere within the VR environment. The icosahedron serves as a structured, 3D model that enhances the understanding and guidance of movement, particularly in neurorehabilitation and movement research.

2.2.1 Perception of 3D space using body-fixed frames

The perception and understanding of 3D space are fundamentally linked to the body’s orientation and interaction with its surroundings [9]. The human body perceives space relative to its internal axes: right-left, anterior-posterior, and superior-inferior [21]. These axes are not merely abstract dimensions but are embodied in the environment, which has its own orientations, such as north-south, east-west, and up-down. The interplay between these external and internal orientations is critical for spatial navigation and movement planning.

In our study, the icosahedron functions as a fluid, adaptable vector space that conforms to various body poses. This dynamic model allows for the effective analysis and guidance of upper arm movements in the virtual environment, facilitating precise spatial interactions within the kinesphere. By mapping these interactions to the structured geometry of the icosahedron, users can better understand and navigate their spatial relationships during movement exercises.

2.2.2 Icosahedron implementation in Vizard

To implement this concept within the Vizard platform, we developed a Python script that generates a 3D icosahedron structure within the VR environment. The vertices and edges of the icosahedron are defined using mathematical principles, including the golden ratio, ensuring a symmetrical and precise framework. The structure is then animated to provide real-time feedback on spatial positioning and movement.

The script uses Vizard’s scene creation tools to display the icosahedron, and user navigation is enabled through adjustable camera controls. This setup allows users and clinicians to observe and interact with the avatar’s movements, offering a clear representation of how the body’s kinesphere is structured and navigated in virtual space.

2.2.3 Integration with gamification elements

To enhance user engagement, gamification elements were incorporated into the tested rehabilitation exercises within the VR environment. These elements include challenges that motivate users to improve their performance based on movement accuracy and task completion time. Visual and auditory feedback mechanisms were integrated to provide immediate responses to user actions, ensuring exercises remain engaging and effective.

A progression system was also developed to adjust the difficulty of exercises based on user performance. This adaptive approach ensures that tasks remain challenging yet achievable, promoting continuous improvement and sustained motivation.
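
As an illustration of such a progression rule, the sketch below adjusts the difficulty level from movement accuracy and completion time; the thresholds and level bounds are hypothetical, not the values used in our exercises.

def next_difficulty(level, accuracy, completion_time, target_time):
    # Raise difficulty when the user is both accurate and fast;
    # lower it when accuracy drops, keeping tasks achievable.
    if accuracy >= 0.90 and completion_time <= target_time:
        return min(level + 1, 10)   # cap at the hardest level
    if accuracy < 0.60:
        return max(level - 1, 1)    # never drop below the easiest level
    return level                    # otherwise hold the current level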

2.2.4 Application of Laban’s “A” scale and icosahedron implementation in Vizard

The icosahedron framework (Figure 3) is further enhanced by integrating Laban’s “A” scale inclinations [15]. Originally designed for analyzing fencing movements, as detailed in Laban’s Choreutics [15], the “A” scale [22] is adapted here to guide upper arm movements in neurorehabilitation (Figure 4). The structured movement patterns provided by the “A” scale help improve spatial awareness and movement quality, allowing individuals to fully explore their kinesphere and develop better coordination and proprioception. The script that allowed us to bring this concept to life can be found in Supplementary Material M1.

Figure 3.

Visualization of an avatar inside an icosahedron kinesphere. The figure illustrates the avatar positioned at the center of an icosahedron, representing the kinesphere within the virtual reality environment. The icosahedron provides a structured framework for guiding and analyzing the avatar’s movements in three-dimensional space, particularly useful in applications, such as movement training and neurorehabilitation. The red arrow indicates the direction of movement or orientation, helping users understand spatial relationships and improve their coordination within the VR system.

Figure 4.

The “A” scale inclinations illustrate the specific body alignments and movements associated with various phases of the upper arm training. These inclinations represent the movements across the three spatial planes—frontal, sagittal, and transverse. The “A” scale integrates these planes through a series of 12 transversal units, comprising six on the right and six on the left, capturing a range of motions from flat and steep to flowing. This structure effectively links the angular dimensions of the three planes to the dynamic movements of the avatar.

2.3 Embodied cognition and Laban’s icosahedron

Laban’s icosahedron is closely related to the principles of embodied cognition [23], particularly in how it shapes spatial perception and guides action. The icosahedron is a geometric framework used in Laban Movement Analysis (LMA) that represents the spatial dimensions in which the body moves. In Laban’s work, the icosahedron serves as a model to help dancers and movement practitioners explore the full range of spatial possibilities available to them. The concept is deeply tied to embodied cognition (see Supplementary Material M2) and has applications in movement training and neurorehabilitation. In therapeutic contexts, the icosahedron can be used to guide patients through specific movement patterns, helping them regain motor control while simultaneously improving their spatial cognition. This is especially relevant in neurorehabilitation, where patients benefit from exercises that engage both their physical and cognitive systems in a coordinated manner. By using the icosahedron as a framework, therapists can design exercises that enhance spatial perception through embodied interaction, facilitating recovery of motor skills.

Laban’s icosahedron is an excellent example of how embodied cognition operates in practice. By using the body’s interaction with a structured spatial framework, the icosahedron not only guides movement, but also shapes how space is perceived and understood. This interrelationship between space, action, and perception reflects the core principles of embodied cognition, where thinking is inextricably linked to the body’s experiences and interactions with the world.

Laban’s Space Harmony offers a unique perspective on neurorehabilitation and movement research through the lens of the icosahedron and the “A” scale: As shown in Figure 4, the “A” scale is a specific sequence that captures the first and second halves of the movement, with a focus on coordinated, flowing movement patterns that are crucial for rehabilitation and motor learning. Originally designed for analyzing fencing movements, the “A” scale can be adapted to neurorehabilitation to better understand and guide therapeutic exercises. The respective elements can be found in Supplementary Material M3.

The A scale, in this context, provides a framework for understanding and potentially improving the spatial aspects of therapeutic exercises. It serves as a tool for gaining insights into the complex spatial relationships involved in restoring efficient and coordinated movement patterns—an essential focus in neurorehabilitation and movement research.

2.4 Data integration and real-time tracking via object-oriented approach

The integration of the hardware and software components was achieved through sensor initialization, data streaming and processing, avatar animation, and visualization. Sensor initialization involved connecting to multiple Movella DOT sensors simultaneously and configuring them to stream data in the required format. Data streaming and processing involved setting up a continuous data stream from the sensors using BLE notifications and processing the incoming data in real-time, converting raw sensor data into quaternion representations. Avatar animation involved mapping processed sensor data to corresponding avatar bone orientations and implementing a high-frequency update loop to ensure smooth avatar animation. Visualization involved rendering the animated avatar within the virtual environment and implementing user controls for camera movement and interaction within the virtual space.

The entire system was designed using object-oriented programming principles, including encapsulation, abstraction, inheritance, and polymorphism. Sensor data and processing methods were encapsulated within relevant classes. High-level interfaces were created for sensor communication and avatar control. Vizard’s built-in classes were extended for custom functionality, and generic interfaces were implemented for different types of sensors and avatar parts.
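
A condensed sketch of how these four principles might map onto the system’s classes follows; the class names are illustrative rather than the actual hierarchy.

class MotionSensor:
    # Abstraction: a high-level interface that hides communication details.
    def latest_quaternion(self):
        raise NotImplementedError

class MovellaDotSensor(MotionSensor):
    # Inheritance: a concrete BLE-backed sensor type.
    def __init__(self, address):
        self._address = address            # Encapsulation: BLE state stays private
        self._quat = [0.0, 0.0, 0.0, 1.0]  # (x, y, z, w), updated by the BLE callback
    def latest_quaternion(self):
        return self._quat

class AvatarSegment:
    # Polymorphism: any MotionSensor subclass can drive an avatar bone.
    def __init__(self, bone, sensor):
        self.bone = bone
        self.sensor = sensor
    def update(self):
        self.bone.setQuat(self.sensor.latest_quaternion())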

2.5 Validation and testing

To ensure the accuracy and reliability of our motion tracking system, we conducted three distinct validation processes: sensor accuracy validation, latency measurements, and user experience testing. Three male and four female students, aged 20–24 years, were recruited from the Universidad de Ingeniería y Tecnología (UTEC) for these tests, and informed consent was obtained from all participants following ethics board approval.

Sensor accuracy validation: The accuracy of the Movella DOT sensors was validated by analyzing the measured joint angles during various upper limb movements. The sensors were attached to the participants’ upper arm, lower arm, and hand to track flexion-extension and other arm movements.

Latency measurements: Latency, defined as the delay between physical movement and its representation in the VR environment, was measured by recording the time elapsed from sensor data acquisition to avatar movement in Vizard.
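
A minimal sketch of this timing measurement is shown below; it captures the software path from BLE callback to avatar update, so the sensor’s own acquisition delay must be added separately, and the function names are ours.

import time

arrival_time = {}

def on_sensor_packet(sensor_id):
    # Stamp the moment a packet reaches the BLE notification callback.
    arrival_time[sensor_id] = time.perf_counter()

def on_avatar_update(sensor_id):
    # Stamp the moment the corresponding bone is updated and report the gap.
    latency_ms = (time.perf_counter() - arrival_time[sensor_id]) * 1000.0
    print(f"end-to-end latency: {latency_ms:.1f} ms")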

User experience testing: User experience was evaluated through a series of tests where participants interacted with the VR environment using the motion tracking system. The participants provided feedback on aspects, such as the smoothness of the avatar’s movements, responsiveness to their physical gestures, and the overall immersion of the VR system. No visible jitter was reported, and participants found the system engaging and easy to use.


3. Results

3.1 Kinematic verification

As shown in Figure 5, the IMU measurements (blue line) generally followed the trend of the goniometer’s readings (black circles), but there was a slight discrepancy between the two methods. The IMU system exhibited an error rate of approximately 11% compared to the goniometer measurements. The results demonstrate that, while the IMU system captures the general movement pattern, there are small errors in angle estimation that may be due to resolution and calibration errors and/or sensor drift over time. Despite these discrepancies, the IMU system represents a robust and portable tool for real-time motion tracking with a margin of error that is acceptable for most biomechanical and rehabilitation applications.

Figure 5.

Comparison of IMU-measured angles (blue line) with goniometer readings (black circles). The results indicate an average difference of approximately 11% between the two measurement systems.

3.2 Quaternion transformation and initial posture handling

In our system, the Movella DOT sensors provide orientation data in the form of quaternions, which are based on a right-handed coordinate system and follow the format (w, x, y, z), where:

  • w is the scalar component (representing the rotation angle) and

  • x, y, z are the vector components (representing the axis of rotation).

However, the Vizard VR environment uses a left-handed coordinate system, with quaternions formatted as (x, y, z, w). This difference in both the coordinate system and the quaternion component order necessitates careful reassignment of quaternion components to ensure accurate transformation and alignment between the sensor data and the avatar’s movement within the VR space (Figure 6). This was done in Python using the command:

q_avatar = [data['quaty'], data['quatz'], data['quatx'], data['quatw']] (E1)

Figure 6.

Quaternion vector parallelism in left-handed (Vizard) and right-handed (Xsens) coordinate systems. The X-axis of the right-handed system (Xsens) is aligned parallel to the Z’-axis of the left-handed system (Vizard), both oriented toward the right view. The standardized color convention shows the X-axis in red, the Y-axis in green, and the Z-axis in blue, with a dashed purple line representing the invariant quaternion axis. This alignment simplifies real-time quaternion mapping, enhancing compatibility and responsiveness in virtual reality applications.

To align the avatar’s initial posture with the user’s physical pose, we implemented a calibration step using quaternion multiplication. This calibration step ensures that the avatar starts from a posture that matches the user’s initial real-world orientation. For each body segment (upper arm, forearm, hand), we calculated a displacement quaternion during the initial calibration pose using the Python command:

q_disp = q_current_sensor * q_initial_sensor.inverse() (E2)

where q_initial_sensor is the quaternion from the sensor during the initial calibration, after being reordered and transformed to match Vizard’s left-handed coordinate system, and q_current_sensor is the real-time quaternion for the user’s physical pose. The displacement quaternion is applied to move the avatar bone in subsequent movements, compensating for any discrepancies between the sensor’s initial orientation and the avatar’s initial posture, thereby ensuring accurate representation in the VR space.

3.3 Real-time avatar animation and overall calibration process

For each frame update in the Vizard environment, the following quaternion operation is applied to animate the avatar based on the sensor data:

q_final = q_disp * q_initial_avatar (E3)

where q_disp is the real-time displacement quaternion derived from the sensor reading after being transformed to match Vizard’s coordinate system and format; q_disp adjusts for any initial misalignment captured during the calibration step, and q_initial_avatar ensures that the animation starts from the correct posture. This operation effectively transforms the sensor data into Vizard’s left-handed coordinate system while accounting for the initial calibration, ensuring that the avatar’s movements are a faithful representation of the user’s physical actions.

Our overall calibration process involved the following steps:

  • T-pose calibration: Users are instructed to stand in a T-pose for initial calibration. During this pose, we record the quaternions from each sensor (q_initial_sensor) and the corresponding desired avatar bone orientations (q_initial_avatar).

  • Displacement calculation: We calculate the displacement quaternions as described earlier (q_disp). These offset quaternions are stored and applied to all subsequent sensor readings.

  • Dynamic recalibration: This feature allows users to recalibrate at any time by holding a specific pose for 3 seconds. This dynamic recalibration helps account for potential sensor drift or sensor placement adjustments.

The combination of these steps ensures that the quaternion data from the Movella sensors are accurately transformed and applied within the Vizard environment, enabling precise and responsive avatar animation in real-time.
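
Putting E1 to E3 together, the following self-contained numpy sketch reproduces the calibration and per-frame update. The helper functions are ours (quaternions stored as (w, x, y, z)), not the production code.

import numpy as np

def quat_mul(a, b):
    # Hamilton product of two quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_inverse(q):
    # Inverse of a quaternion: conjugate divided by squared norm.
    w, x, y, z = q
    return np.array([w, -x, -y, -z]) / np.dot(q, q)

# T-pose calibration: record the sensor and avatar baselines once.
q_initial_sensor = np.array([1.0, 0.0, 0.0, 0.0])  # captured during T-pose
q_initial_avatar = np.array([1.0, 0.0, 0.0, 0.0])  # desired bone orientation

def frame_update(q_current_sensor):
    # Apply E2 and E3 for one frame of avatar animation.
    q_disp = quat_mul(q_current_sensor, quat_inverse(q_initial_sensor))  # E2
    return quat_mul(q_disp, q_initial_avatar)                            # E3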

3.4 Avatar animation and tracking accuracy

Upon running the script, we successfully established connections with three Movella DOT sensors capturing the upper arm, forearm, and hand kinematics. The system demonstrated the following performance characteristics:

  • Connection stability: All three sensors maintained stable connections throughout the testing period, demonstrating reliable communication with the Vizard environment.

  • Data streaming: We successfully received real-time data streams from all three sensors at an average rate of 60 Hz, ensuring consistent and accurate motion tracking.

  • Latency: The system demonstrated an average end-to-end latency of 50 ± 10 ms from physical movement to visual representation in the Vizard environment.

In line with Figure 7, the virtual avatar in the Vizard environment responded to the sensor data with the following observations:

  • Hand tracking: We achieved high accuracy in reproducing hand movements with an estimated error of less than 5 degrees in each rotational axis. The animation was smooth and responsive, with no visible jitter.

  • Forearm tracking: Forearm movements were reproduced with high accuracy, estimated at 5–10 degrees error in each rotational axis, ensuring reliable tracking.

  • Upper arm tracking: Upper arm movements were also accurately tracked, with an estimated error of 5–10 degrees in each rotational axis.

  • Overall pose estimation: The avatar successfully reproduced complex arm gestures with all sensors functioning optimally.

Figure 7.

Real-time avatar animation and tracking accuracy in the Vizard environment. The figure demonstrates the accuracy of avatar movements corresponding to real-time data streaming from the Movella DOT sensors. It highlights the tracking accuracy for the hand, forearm, and upper arm, with acceptable angular errors observed for each rotational axis.

These results indicate a high degree of accuracy in our quaternion-based alignment approach, demonstrating that the transformation process effectively reconciles the differences between the sensor’s right-handed system and Vizard’s left-handed system, as well as the differing quaternion formats.

The script also demonstrated robust error handling for BLE connection issues, allowing the system to continue functioning with partial data. The Vizard environment remained stable throughout the test, maintaining a consistent frame rate of 60 frames per second. The system successfully recovered from temporary sensor disconnections, automatically attempting to reconnect without user intervention. Qualitative observations of the user experience revealed intuitive mapping of physical movements to avatar animations, particularly for hand gestures. The virtual environment in Vizard provided a clear and responsive backdrop for observing the avatar, enhancing overall user immersion and engagement.
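
The reconnection behavior can be sketched as a simple retry loop around bleak’s client context; the retry count and delay below are illustrative, not the production values.

import asyncio
from bleak import BleakClient
from bleak.exc import BleakError

async def stream_with_retry(address, run, retries=5, delay=2.0):
    # Re-establish a dropped BLE link without user intervention.
    for attempt in range(1, retries + 1):
        try:
            async with BleakClient(address) as client:
                await run(client)   # stream until the link drops
                return
        except BleakError:
            print(f"{address}: attempt {attempt} failed, retrying...")
            await asyncio.sleep(delay)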

A comprehensive explanation of the Vizard script that was developed to integrate the Movella wearable sensors with the Vizard VR environment is provided in Supplementary Material M4. This script is critical for real-time motion tracking, visualization, and interaction within the VR space. Our code is freely available on GitHub under an open-source license.

3.5 Direct quaternion mapping from Xsens to Vizard

In contrast to conventional motion capture (MOCAP) systems, which typically rely on coordinate transformations for positional data, our approach implements a direct quaternion mapping from Xsens’s right-handed coordinate system to Vizard’s left-handed system. This real-time method leverages quaternions’ inherent rotational properties, enabling a straightforward component mapping across coordinate systems without the need for complex transformations. Specifically, the quaternion components from Xsens (W, X, Y, Z) are adapted to Vizard’s format by negating the W and Y components, resulting in (-Y, Z, X, -W), as shown in Figure 6. This optimized right-to-left-handed mapping reduces processing demands significantly, enhancing real-time responsiveness and system stability.
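
In code, this mapping reduces to a single component reassignment, sketched here from the rule stated above.

def xsens_to_vizard(w, x, y, z):
    # Xsens right-handed (W, X, Y, Z) -> Vizard left-handed (x, y, z, w):
    # negate the W and Y components and reorder to (-Y, Z, X, -W).
    return [-y, z, x, -w]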

This approach proves particularly advantageous within Vizard’s left-handed coordinate system, which follows historical graphics standards (e.g., DirectX) and VR conventions that prioritize user-centric orientation, with the positive Z-axis pointing forward. Although Vizard’s left-handed design diverges from traditional right-handed graphics systems, it facilitates VR visualization by aligning with user-focused conventions, especially in Windows-based environments. The quaternion mapping thus enables seamless integration with Vizard’s architecture, effectively bypassing traditional positional adjustments and fully capitalizing on the platform’s VR strengths.

Our system also incorporates an efficient algorithm that enables immediate transfer and processing of quaternion data from the sensors to the virtual avatar in Vizard, maintaining low latency essential for neurorehabilitation applications. This direct mapping, when paired with Vizard’s left-handed framework, ensures a continuous and smooth representation of user movements in the virtual space, which is crucial for therapies that depend on precise, real-time feedback to support effective motor learning, rehabilitation, and patient engagement.

Furthermore, this direct quaternion mapping addresses a common challenge in systems integrating different coordinate conventions, by mitigating potential confusion in coordinate alignment. This reduction in computational load allows for faster data handling and ensures that the user experience aligns with VR standards, where depth perception and orientation are consistent with user expectations within the VR environment.

Together, these innovations contribute to a robust and versatile system highly suitable for applications requiring immediate feedback, such as rehabilitation and training. We believe this approach not only enhances operational efficiency but also aligns with platform-specific VR design philosophies, ultimately elevating the user experience in immersive environments.



4. Discussion

4.1 Significance and application to neurorehabilitation and movement research

Our study demonstrates the potential of integrating Movella sensors with the Vizard platform for real-time motion tracking in neurorehabilitation. By combining the portability and real-time data acquisition capabilities of IMUs with the immersive qualities of virtual reality (VR), our system offers a viable alternative to traditional rehabilitation methods, particularly in the context of upper extremity recovery.

Recent studies emphasize the effectiveness of task-oriented, repetitive exercises in VR-based rehabilitation for motor recovery, particularly poststroke. Such approaches align with our system’s design, where structured, task-specific movements are visualized in a 3D space using an icosahedron framework to guide upper limb exercises. Research published in recent years highlights how VR can create highly engaging environments that promote neuroplasticity through repetitive, goal-directed movements, which are crucial for regaining functional independence [24, 25].

Incorporating real-time feedback is a key feature of successful VR rehabilitation systems. Studies from the past decade have shown that systems providing both visual and haptic feedback lead to more significant improvements in motor function. For example, recent work demonstrates that combining VR with wearable haptic devices can significantly enhance proprioception and movement accuracy in patients recovering from upper limb injuries [26]. Although our system currently focuses on visual feedback, the gamified elements and real-time avatar guidance appear to be effective in maintaining engagement and improving exercise adherence, similar to findings in recent VR-based rehabilitation studies [27].

4.2 Incorporation of the mirror neuron concept into gamification

In terms of methodology, mirror therapy and imitation-based exercises continue to show promise in VR rehabilitation, leveraging mirror neurons to facilitate motor learning. Studies conducted in the last 5 years have demonstrated the efficacy of avatar-based guidance for replicating complex movements, resulting in improved motor function and increased motivation during therapy [28, 29]. Our system’s integration of Laban’s Space Harmony principles provides an innovative approach to spatial orientation and movement sequencing, which complements the mirror therapy approach by structuring movements within a kinesphere.

Looking forward, our system could benefit from enhancements that have been gaining traction in the latest research, such as multisensory feedback and adaptive learning algorithms. Recent developments in VR rehabilitation emphasize the value of integrating artificial intelligence (AI)-driven customization to adjust difficulty levels in real-time, optimizing the therapy for each patient’s progress [30]. These advancements, combined with the flexibility and scalability of our platform, could lead to more personalized and effective rehabilitation interventions.

In our approach, the avatar’s movement is not just a representation, but serves as an interactive tool for motor learning and rehabilitation. The alignment of the virtual avatar’s actions with the user’s own motor intentions can engage the mirror neuron system, a neural network known for its role in both the observation and execution of movements [28]. Research shows that this engagement can lead to enhanced motor learning, as the brain simulates the observed movements as if it were performing them. By integrating the avatar within a neurorehabilitation framework, we aim to harness these principles of embodied cognition and mirror neuron activation to improve the effectiveness of motor recovery strategies.

The concept of mirror neurons, which are activated during both the observation and execution of actions, can be directly related to the virtual motion of an avatar in our study. When users observe the avatar moving in a virtual environment, especially when the movement is similar to their own, the mirror neuron system is engaged [28]. This process has been shown to activate brain regions that are crucial for motor learning and imitation, supporting the idea that interacting with a virtual avatar could enhance rehabilitation and motor skill training.

Action observation and motor learning: Research shows that mirror neurons are involved in both the observation and execution of movements. In the context of VR and this study, when a user observes the avatar performing specific actions, their mirror neuron system may be stimulated in a similar way as when they perform the action themselves. This could enhance learning and rehabilitation by reinforcing motor pathways.

Enhancing engagement through virtual avatars: The use of a realistic avatar that mirrors the user’s movements can engage the mirror neuron system more effectively. When the avatar accurately reflects the user’s intended actions, this enhances the sense of ownership and embodiment, which are key in activating the mirror neuron system. This is particularly important for neurorehabilitation, where the goal is to reestablish lost motor functions through repetitive and immersive practice.

Neurorehabilitation and the mirror neuron system: As noted in studies like that of Brihmat et al. [28], imitation of virtual hand movements engages areas within the mirror neuron system and the default mode network. This suggests that virtual environments can be designed to activate these neural circuits, improving outcomes in motor rehabilitation by leveraging the brain’s natural learning mechanisms.

4.3 Conclusions and future directions

In conclusion, the results of this study underscore the potential of portable bioelectronic devices integrated with VR for neurorehabilitation. While further clinical validation is needed, the combination of real-time motion tracking and interactive 3D environments presents a promising direction for advancing therapy. Future research should explore incorporating additional sensory feedback and expanding the system’s applicability across different rehabilitation scenarios and sports performance contexts.

Current challenges include occasional drift in IMU sensors and potential communication delays in multisensor systems. To address these, advanced calibration techniques and robust communication protocols are under consideration for future versions. We propose incorporating adaptive algorithms to handle diverse motion profiles, which would allow the system to scale to a wider variety of use cases, such as advanced sports training.

Supplementary materials and nomenclature

1. Supplementary material M1: Vizard script for creating and integrating the icosahedron

To bring the icosahedron concept to life, we implemented the following Vizard script to create and integrate the icosahedron:

import viz
import vizact
import vizshape
import math

# Initialize Vizard
viz.go()

# Create a group to hold the icosahedron's edges
group = viz.addGroup()

# Define the vertices of the icosahedron
phi = (1 + 5**0.5) / 2  # Golden ratio
vertices = [[-1, phi, 0], [1, phi, 0], [-1, -phi, 0], [1, -phi, 0],
            [0, -1, phi], [0, 1, phi], [0, -1, -phi], [0, 1, -phi],
            [phi, 0, -1], [phi, 0, 1], [-phi, 0, -1], [-phi, 0, 1]]

# Define a small subset of edges for testing
edges = [(0, 1), (0, 5), (0, 11)]

# Create a cylinder for each edge
for edge in edges:
    start = vertices[edge[0]]
    end = vertices[edge[1]]

    # Calculate the midpoint, direction, and length of the edge
    midpoint = [(start[i] + end[i]) / 2 for i in range(3)]
    direction = [end[i] - start[i] for i in range(3)]
    length = math.sqrt(sum(d**2 for d in direction))

    # Create a cylinder to represent the edge
    cylinder = vizshape.addCylinder(height=length, radius=0.01, slices=20,
                                    parent=group)
    cylinder.setPosition(midpoint)

    # Calculate the yaw and pitch angles
    dx, dy, dz = direction
    yaw = math.degrees(math.atan2(dy, dx))
    pitch = math.degrees(math.atan2(dz, math.sqrt(dx**2 + dy**2)))

    # Apply the rotations
    cylinder.setEuler([pitch, yaw, 0])

# Adjust the camera position to view the icosahedron
viz.MainView.setPosition(0, 0, 5)

# Define a function to spin the icosahedron
def spin():
    group.setEuler([0, viz.getFrameElapsed() * 20, 0], viz.REL_PARENT)

vizact.ontimer(0, spin)  # Call spin() every frame

2. Supplementary material M2: Connection between embodied cognition and Laban’s icosahedron

2.1 Perception of space through the body

Laban’s icosahedron emphasizes that our understanding of space is directly linked to the body’s movement within that space. Rather than seeing space as an abstract, external entity, the icosahedron situates spatial dimensions within the reach and orientation of the body. This aligns with embodied cognition, where the body’s interaction with the environment shapes our perception of space. For instance, the directions (e.g., up, down, forward, backward) in the icosahedron are not merely mental constructs, but are rooted in how our bodies are oriented and how we physically navigate space.

2.2 Action shapes perception

In Laban’s model, movement is not simply a reaction to pre-existing space; rather, movement actively shapes and defines spatial perception. The icosahedron provides a structured method for the body to explore space, guiding movements through specific paths and inclinations. This mirrors the embodied cognition principle that action and perception are intertwined. For example, as a dancer moves within the icosahedron, their perception of space is shaped by the angles and pathways they traverse, influencing how they think about spatial relationships.

2.3 Cognition distributed across body and environment

The icosahedron is not just a theoretical model; it serves as a physical tool that helps dancers and movement therapists conceptualize and perform spatially harmonious actions. This reflects the idea in embodied cognition that cognitive processes are distributed across the body and environment. The icosahedron essentially becomes an external cognitive aid that helps individuals plan, execute, and refine movements, making it a direct extension of the embodied cognitive system.

2.4 Structured movement for improved spatial awareness

The use of Laban’s icosahedron in movement training and rehabilitation helps individuals develop better spatial awareness by engaging their bodies in structured movement patterns. This structured exploration of space enhances the cognitive maps that individuals create, enabling more fluid and coordinated actions. In this sense, the icosahedron acts as a bridge between spatial perception and action, showing how the body’s movement through space directly informs cognitive understanding.

3. Supplementary material M3: Elements of the “A” scale

3.1 “A” scale structure

  • The “A” scale consists of 12 movement inclinations.

  • These are divided into two sets of 6 movements each.

  • The second set of 6 movements mirrors or parallels the first set, promoting bilateral coordination.

3.2 Volute phrasing

  • “Volute” refers to a spiral or scroll-like form.

  • In movement terms, it implies a cyclical, flowing sequence that returns to its starting point.

  • Each volute in the “A” scale comprises 6 movements, encouraging continuous, fluid motion.

3.3 Application to neurorehabilitation

  • The first volute (6 movements) can be associated with preparatory phases in movement therapy, such as the initiation of a reach or step.

  • The second volute (next 6 movements) represents the completion and return phases, critical for tasks like reaching for an object and bringing it back.

3.4 Movement progression

  • Each inclination in the scale represents a specific direction and level in space.

  • The movements flow from one to another in a predetermined sequence.

  • This sequence ensures a harmonious transition between different spatial pulls, aiding in the retraining of movement patterns.

3.5 Body coordination

  • In the context of rehabilitation, the first inclination of each volute might involve controlled movements of the upper or lower limb in specific directions.

  • The second inclination could involve transitioning the limb to another target area, reinforcing motor planning and execution.

3.6 Fluidity and symmetry

  • The “A” scale promotes smooth, flowing movement as the limb follows coordinated trace forms, which is essential in retraining motor skills.

  • Both the right and left sides of the body can be trained using symmetrical “A” scale patterns, ensuring balanced motor recovery.

3.7 Spatial harmony in movement rehabilitation

  • The “A” scale helps illustrate that therapeutic movements are not just linear, but complex, three-dimensional patterns.

  • It shows how different parts of the body coordinate in space to create efficient, functional movements necessary for daily activities.

3.8 Analytical tool

  • By mapping therapeutic exercises to the “A” scale, therapists can analyze the spatial efficiency and coordination of the patient’s movements.

  • It can help identify areas where movement patterns deviate from the ideal, allowing for targeted interventions to improve spatial and motor coordination.

4. Supplementary material M4: Detailed explanation of Vizard code

Below is a breakdown of the key components of the script, detailing how each part contributes to the overall system. Our code is freely available on GitHub under an open-source license.

4.1 Environment initialization

  • The script starts by importing essential libraries, including viz, vizfx, vizshape, vizcam, and viztask from Vizard, which are necessary for creating and managing the three-dimensional virtual environment, avatars, and camera systems. Additionally, external libraries such as numpy for numerical operations and bleak for Bluetooth Low Energy (BLE) communication are imported.

  • viz.setMultiSample(4) and viz.go() initialize the Vizard environment, enabling anti-aliasing for smoother graphics and starting the virtual reality (VR) environment, respectively. The field of view (FOV) for the main window is set to 60 degrees using viz.MainWindow.fov(60), providing an optimal viewing angle.
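
In code, this initialization reduces to three calls:

import viz

viz.setMultiSample(4)    # enable 4x anti-aliasing for smoother graphics
viz.go()                 # start the VR environment
viz.MainWindow.fov(60)   # set a 60-degree field of view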

4.2 Loading avatar and environment

  • An avatar is loaded using viz.addAvatar() with a predefined configuration file, and the environment is set up by loading an icosahedron model. The avatar’s orientation and position are adjusted to align it within the VR space appropriately.

  • The script employs the vizcam.PivotNavigate method to allow the user to navigate around the avatar, ensuring that the avatar remains centered while the user can view it from different angles.

4.3 Bone locking and device initialization

  • The script identifies and locks specific segments, or bones, of the avatar, such as the left upper arm, forearm, and hand, using man.getBone() and man.lock(). Locking the bones ensures that they are ready to receive and apply the quaternion data from the sensors.

  • A dictionary named devices is defined, which maps each body segment (upper arm, forearm, and hand) to its respective BLE address and corresponding bone on the avatar. This mapping is crucial for associating incoming sensor data with the correct part of the avatar.

4.4 BLE communication setup

  • The script defines global variables for BLE communication, including universally unique identifiers (UUIDs) for the measurement and payload characteristics. It also sets a custom message to configure the sensors.

  • The callback and notification_callback functions are defined to handle incoming BLE data. The encode_custommode5 function decodes the sensor data into a structured format using numpy, extracting critical information such as quaternion values, free acceleration, and angular velocity.
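
The decoding idea can be sketched with a numpy structured dtype. The field order and byte layout below are assumptions for illustration only and must be replaced by the layout given in the Movella DOT custom mode 5 specification.

import numpy as np

# Hypothetical payload layout, for illustration only.
PAYLOAD_DTYPE = np.dtype([
    ('timestamp', '<u4'),
    ('quatw', '<f4'), ('quatx', '<f4'), ('quaty', '<f4'), ('quatz', '<f4'),
    ('freeaccx', '<f4'), ('freeaccy', '<f4'), ('freeaccz', '<f4'),
    ('gyrx', '<f4'), ('gyry', '<f4'), ('gyrz', '<f4'),
])

def encode_custommode5(payload):
    # Interpret the raw notification bytes as one structured record.
    record = np.frombuffer(payload, dtype=PAYLOAD_DTYPE, count=1)[0]
    return {name: record[name] for name in PAYLOAD_DTYPE.names}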

4.5 Asynchronous BLE connection and data streaming

  • The connect_and_stream function is an asynchronous co-routine responsible for connecting to each BLE device and starting the data stream. It handles connection retries, verifies available characteristics, and initiates notifications to receive data continuously.

  • The main function gathers all BLE connection tasks and runs them concurrently using asyncio.gather(). The BLE operations are executed in a separate thread, ensuring that they do not block the main Vizard loop, which is responsible for rendering and updating the avatar.
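
A minimal sketch of this threading pattern follows; connect_and_stream is the co-routine described above, and the helper name is ours.

import asyncio
import threading

def start_ble_thread(addresses, connect_and_stream):
    # Run all sensor connections concurrently on a dedicated event loop
    # so that Vizard's render loop is never blocked.
    async def main():
        await asyncio.gather(*(connect_and_stream(a) for a in addresses))
    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_until_complete,
                     args=(main(),), daemon=True).start()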

4.6 Avatar update with real-time sensor data

  • The update_avatar function is a generator that continuously updates the avatar’s bone orientations based on the latest quaternion data received from the sensors. This function runs in a loop, applying the quaternion rotations to the corresponding bones of the avatar.

  • Initial quaternions from the sensors are recorded to establish a baseline, which is then used to calculate the final orientation of the bones. The quaternions are adjusted to match the avatar’s coordinate system and are applied globally to the avatar bones.

4.7 Continuous execution and frame rate management

  • The viztask.schedule(update_avatar) function schedules the avatar updating, running at approximately 60 frames per second. This ensures smooth and responsive animation within the VR environment.

  • The script maintains a loop that updates the avatar’s pose in real-time, providing immediate feedback to the user based on their physical movements captured by the sensors.

References

  1. De Fazio R et al. Wearable sensors and smart devices to monitor rehabilitation parameters and sports performance: An overview. Sensors. 2023;23(4):1856
  2. Scheffler M, Hirt E. Wearable devices for telemedicine applications. Journal of Telemedicine and Telecare. 2005;11(Suppl 1):11-14
  3. Neri L et al. Electrocardiogram monitoring wearable devices and artificial-intelligence-enabled diagnostic capabilities: A review. Sensors. 2023;23(10):4805
  4. Kim W et al. Validation of a biomechanical injury and disease assessment platform applying an inertial-based biosensor and axis vector computation. Electronics. 2023;12(17):3694
  5. Guo QF et al. Virtual reality for neurorehabilitation: A bibliometric analysis of knowledge structure and theme trends. Frontiers in Public Health. 2022;10:1042618
  6. Jackson II T. Immersive virtual reality in sports: Coaching and training. In: Russell D, editor. Implementing Augmented Reality into Immersive Virtual Learning Environments. Hershey, PA, USA: IGI Global; 2021. pp. 135-150
  7. Kim W et al. Algorithmic implementation of visually guided interceptive actions: Harmonic ratios and stimulation invariants. Algorithms. 2024;17(7):277
  8. Elor A, Kurniawan S. The ultimate display for physical rehabilitation: A bridging review on immersive virtual reality. Frontiers in Virtual Reality. 2020;1:1-20
  9. Riva G. From virtual to real body: Virtual reality as embodied technology. Journal of Cyber Therapy and Rehabilitation. 2008;1:7-22
  10. Schuetz I, Karimpur H, Fiehler K. Vexptoolbox: A software toolbox for human behavior studies using the Vizard virtual reality platform. Behavior Research Methods. 2023;55(2):570-582
  11. Seth A et al. OpenSim: Simulating musculoskeletal dynamics and neuromuscular control to study human and animal movement. PLoS Computational Biology. 2018;14(7):e1006223
  12. Al Borno M et al. OpenSense: An open-source toolbox for inertial-measurement-unit-based measurement of lower extremity kinematics over long durations. Journal of Neuroengineering and Rehabilitation. 2022;19(1):22
  13. Xanthidis D et al. Handbook of Computer Programming with Python. Boca Raton, FL, USA: CRC Press; 2022
  14. Laban RV, Ullmann L. The Language of Movement: A Guidebook to Choreutics. London, UK: MacDonald & Evans; 1966
  15. Longstaff JS. Rudolf Laban’s Dream: Re-Envisioning and Re-Scoring Ballet, Choreutics, and Simple Functional Movements with Vector Signs for Deflecting Diagonal Inclinations. New York, NY, USA: MoveScape Center; 2018
  16. Moore C-L. Meaning in Motion: Introducing Laban Movement Analysis. New York, NY, USA: MoveScape Center; 2014
  17. Ghattas J, Jarvis DN. Validity of inertial measurement units for tracking human motion: A systematic review. Sports Biomechanics. 2021;23:1-14
  18. Barua A et al. Security and privacy threats for Bluetooth low energy in IoT and wearable devices: A comprehensive survey. IEEE Open Journal of the Communications Society. 2022;3:251-281
  19. Manolas C, Xanthidou OK, Xanthidis D. Virtual reality application development with Python. In: Handbook of Computer Programming with Python. Boca Raton, FL, USA: Chapman and Hall/CRC; 2022. pp. 485-526
  20. Caserman P et al. Real-time body tracking in virtual reality using a Vive tracker. Virtual Reality. 2019;23:155-168
  21. Gibson JJ. The Senses Considered as Perceptual Systems. Boston, MA, USA: Houghton Mifflin; 1966
  22. von Laban R. The Mastery of Movement on the Stage. London, UK: MacDonald & Evans; 1950
  23. Anderson ML. Embodied cognition: A field guide. Artificial Intelligence. 2003;149(1):91-130
  24. Karamians R et al. Effectiveness of virtual reality- and gaming-based interventions for upper extremity rehabilitation poststroke: A meta-analysis. Archives of Physical Medicine and Rehabilitation. 2020;101(5):885-896
  25. Fong KNK et al. Task-specific virtual reality training on hemiparetic upper extremity in patients with stroke. Virtual Reality. 2022;26(2):453-464
  26. Andaluz VH et al. Virtual reality integration with force feedback in upper limb rehabilitation. In: Advances in Visual Computing. Cham, Switzerland: Springer International Publishing; 2016
  27. Adlakha S, Chhabra D, Shukla P. Effectiveness of gamification for the rehabilitation of neurodegenerative disorders. Chaos, Solitons and Fractals. 2020;140:110192
  28. Brihmat N et al. Action, observation or imitation of virtual hand movement affect differently regions of the mirror neuron system and the default mode network. Brain Imaging and Behavior. 2018;12(5):1363-1378
  29. Westwood JD, editor. Medicine Meets Virtual Reality 20: NextMed/MMVR20. Vol. 184. Amsterdam, Netherlands: IOS Press; 2013
  30. Swarnakar R, Yadav SL. Artificial intelligence and machine learning in motor recovery: A rehabilitation medicine perspective. World Journal of Clinical Cases. 2023;11(29):7258-7260
