Research Papers

Biometric System for Measuring Gait and Fall Characteristics Captured on Video

Author and Article Information
Flavio Firmani

Department of Mechanical Engineering,
University of Victoria,
Victoria, BC V8W 3P6, Canada
e-mail: ffirmani@uvic.ca

Stephen N. Robinovitch

Professor
Department of Biomedical
Physiology and Kinesiology,
Simon Fraser University,
Burnaby, BC V5A 1S6, Canada
e-mail: stever@sfu.ca

Edward J. Park

Associate Professor
School of Mechatronic Systems Engineering,
Simon Fraser University,
Surrey, BC V3T 0A3, Canada
e-mail: ed_park@sfu.ca

1 Corresponding author.

Manuscript received January 3, 2014; final manuscript received April 8, 2014; accepted manuscript posted April 23, 2014; published online May 12, 2014. Assoc. Editor: Paul Rullkoetter.

J Biomech Eng 136(7), 071005 (May 12, 2014) (10 pages). Paper No: BIO-14-1002; doi: 10.1115/1.4027469

This paper presents a methodology that quantifies gait and fall characteristics from video of real-life fall events. The method consists of selecting on-screen the points where the feet contact the ground. The essence of the method lies in establishing a transformation from the video frames to the “real world.” In projected images, geometric properties such as lengths, angles, and parallelism are not preserved; thus, concepts of projective geometry, namely homography, are applied. Because the ground is an invariant plane, using it as the reference plane for the homography yields a constant transformation. The accuracy of the homographic transformation depends on how accurately the on-screen points are selected; an optimization algorithm that minimizes the errors caused by inaccurate point selection improves the results of the transformation. Experimental trials are conducted at three walking velocities (slow, preferred, and fast) using two video cameras and a GAITRite walkway system. Spatial parameters from two independent video analyses are compared with the GAITRite system, yielding limits of agreement for step length of −2.12 cm to 2.03 cm. Temporal parameters are less reliable because of dropped frames in the video footage. This method is then used to analyze two real fall events as demonstrative cases. First, the gait characteristics before imbalance are analyzed, and subsequently the stepping characteristics during the fall. In particular, we propose the stepping/impact angle as a metric that quantifies how much stepping affected the direction of the fall.
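As an illustrative aside, the core operation described in the abstract — mapping clicked on-screen floor points to metric floor coordinates through a plane-to-plane homography — can be sketched with a standard direct linear transformation (DLT). This is a minimal sketch, not the authors' implementation; the correspondence points, metric coordinates, and heel positions below are hypothetical.

```python
import numpy as np

def homography_from_points(img_pts, floor_pts):
    """Estimate the 3x3 homography H mapping image points (pixels) to floor
    coordinates (e.g., cm) from >= 4 point correspondences via the DLT."""
    A = []
    for (u, v), (x, y) in zip(img_pts, floor_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def map_to_floor(H, img_pts):
    """Apply H to on-screen points and dehomogenize to metric floor coordinates."""
    pts = np.hstack([img_pts, np.ones((len(img_pts), 1))])
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical calibration: four clicked corners of a floor rectangle (pixels)
# and their known metric positions on the ground plane (cm).
img_corners = np.array([[412., 603.], [978., 588.], [1105., 842.], [301., 861.]])
floor_corners = np.array([[0., 0.], [120., 0.], [120., 90.], [0., 90.]])

H = homography_from_points(img_corners, floor_corners)

# Two clicked heel contacts; their separation approximates a step length.
heels_img = np.array([[640., 720.], [702., 694.]])
heels_floor = map_to_floor(H, heels_img)
step_length = np.linalg.norm(heels_floor[1] - heels_floor[0])
print(f"step length = {step_length:.1f} cm")
```

Because the ground plane does not move, this single matrix H can be reused for every frame of the same camera view.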

Copyright © 2014 by ASME


Figures

Fig. 1

The left picture is the original video frame. Notice that the lines of the door frames and corridor baseboards are curved away from the picture centre (barrel distortion). The right picture has been corrected and now the same lines appear straight.
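As a hedged illustration only (not necessarily the authors' pipeline), barrel distortion of the kind shown in Fig. 1 is commonly removed with a calibrated radial distortion model, for example via OpenCV. The camera matrix and distortion coefficients below are placeholders that would normally come from a prior checkerboard calibration.

```python
import cv2
import numpy as np

# Placeholder intrinsics and radial/tangential distortion coefficients;
# in practice these come from a camera calibration (e.g., cv2.calibrateCamera).
camera_matrix = np.array([[900.0,   0.0, 640.0],
                          [  0.0, 900.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.32, 0.12, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

frame = cv2.imread("corridor_frame.png")              # hypothetical video frame
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
cv2.imwrite("corridor_frame_undistorted.png", undistorted)
```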

Fig. 2

A schematic representation of the three vanishing points of three sets of orthogonal lines. The two vanishing points of the floor plane (vp) are the points where lines parallel to each of the two axes of the reference plane intersect. The vanishing line is the line that passes through these two points. The vertical vanishing point (vpv) is the point where all vertical lines intersect. The lines supporting the vertical segments (href and h) pass through four points: b, the base point (contacting the floor); t, the top end of the segment; vl, the intersection with the vanishing line; and vpv, the vertical vanishing point.
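The configuration in Fig. 2 is the standard single-view metrology setup. As a hedged aside (the paper's exact formulation is not reproduced here), the metric height of a vertical segment follows from the cross ratio of the four collinear image points b, t, vl, and vpv:

\[
\frac{\lVert \mathbf{t}-\mathbf{b}\rVert \,\lVert \mathbf{vp}_v-\mathbf{vl}\rVert}
     {\lVert \mathbf{vl}-\mathbf{b}\rVert \,\lVert \mathbf{vp}_v-\mathbf{t}\rVert}
  \;=\; \frac{h}{H_z},
\]

where h is the metric height of the segment and H_z (as labelled in Fig. 3) is the height of the camera above the floor. With a segment of known height (h = href), the relation is first solved for H_z; the height of any other vertical segment standing on the floor then follows directly.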

Fig. 3

Optimizing the location of the points selected on the screen. The left picture shows the floor reference lines (formed by four floor correspondence points), two vertical lines (two points each), and two reference heights (two points each). The reference heights (carpenter levels) were moved twice to give six reference heights. The right picture shows the optimized positions of the coordinates (floor, vertical, and reference heights). The initial estimate yielded the heights H = [60.96 60.35 59.80 60.17 62.51 63.23] cm and Hz = 217.17 cm; after correction, H = [60.96 60.96 60.96 60.96 60.96 60.96] cm (the length of the carpenter levels is 24 in., i.e., 60.96 cm) and Hz = 234 cm (the actual height of the camera). The green line (farthest right) can be dragged around; it assists the user in checking the direction of the vertical lines, as one end is anchored to the vertical vanishing point (vpz).
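A minimal sketch of the kind of point refinement Fig. 3 describes, under simplifying assumptions of our own: the clicked endpoints of the reference segments and the camera height are adjusted by nonlinear least squares so that every recovered reference height matches the known 60.96 cm while the points stay close to where they were clicked. The vanishing line and vertical vanishing point are treated as fixed here, which is a simplification of the full procedure; the names and weighting are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def segment_height(b, t, vline, vpv, h_cam):
    """Cross-ratio height of a vertical segment from its image endpoints,
    the floor vanishing line (homogeneous), the vertical vanishing point,
    and the camera height above the floor."""
    line_bt = np.cross(np.append(b, 1.0), np.append(t, 1.0))
    vl = np.cross(line_bt, vline)            # intersection with the vanishing line
    vl = vl[:2] / vl[2]
    num = np.linalg.norm(t - b) * np.linalg.norm(vpv - vl)
    den = np.linalg.norm(vl - b) * np.linalg.norm(vpv - t)
    return h_cam * num / den

def refine(clicked, vline, vpv, h_ref, h_cam0, pixel_sigma=2.0):
    """clicked: (n, 2, 2) array of [base, top] image points for n reference segments."""
    n = clicked.shape[0]

    def residuals(x):
        pts = x[:-1].reshape(n, 2, 2)
        h_cam = x[-1]
        res_h = [segment_height(p[0], p[1], vline, vpv, h_cam) - h_ref for p in pts]
        res_p = (pts - clicked).ravel() / pixel_sigma   # stay close to the clicks
        return np.concatenate([np.asarray(res_h), res_p])

    x0 = np.concatenate([clicked.ravel(), [h_cam0]])
    sol = least_squares(residuals, x0)
    return sol.x[:-1].reshape(n, 2, 2), sol.x[-1]     # refined points, camera height
```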

Fig. 4

Implementation of the biometric system. Upper left: The points of contact of the hallux and heel with the ground are selected. Upper right: The position of the hidden anatomical landmark is corrected with the length of the foot. Bottom: The footprints of a sequence of five steps that were captured using this technique.
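As a hedged sketch of the correction shown in Fig. 4 (upper right), one plausible implementation places the occluded landmark by stepping back from the visible hallux contact along the foot's direction of progression by the subject's known foot length; the direction vector and foot length below are illustrative values, not data from the paper.

```python
import numpy as np

def hidden_heel_from_hallux(hallux_xy, progression_dir, foot_length_cm):
    """Estimate the floor position of an occluded heel from the visible hallux
    contact, a unit vector along the foot's direction of progression, and the
    measured foot length (all in metric floor coordinates)."""
    d = np.asarray(progression_dir, dtype=float)
    d = d / np.linalg.norm(d)
    return np.asarray(hallux_xy, dtype=float) - foot_length_cm * d

# Illustrative values: hallux mapped to the floor, walking roughly along +x.
heel = hidden_heel_from_hallux([182.0, 44.5], [0.97, 0.24], 26.0)
```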

Fig. 5

Bland–Altman plots of step length (upper) and step time (lower) between two video-based methods (solid and empty markers) and the GAITRite system
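The limits of agreement quoted in the abstract (−2.12 cm to 2.03 cm for step length) follow the usual Bland-Altman convention of mean difference ± 1.96 standard deviations of the paired differences. A minimal sketch with made-up data:

```python
import numpy as np

def bland_altman(a, b):
    """Return the mean difference (bias) and 95% limits of agreement between
    two paired measurement series (e.g., video-based vs. GAITRite step lengths)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical paired step lengths (cm) from the video method and the walkway.
video = [61.2, 58.7, 64.1, 59.9, 62.5]
gaitrite = [61.0, 59.4, 63.5, 60.4, 62.3]
bias, lo, hi = bland_altman(video, gaitrite)
print(f"bias {bias:+.2f} cm, limits of agreement [{lo:.2f}, {hi:.2f}] cm")
```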

Fig. 6

Comparison of stride time and stride velocity between the two methods. Although stride velocity is computed using stride time, it proved to be a much more strongly correlated parameter.

Fig. 7

Video images of the captured falls with the superimposed correspondence points. For the left video, the camera view had been altered and the calibration was conducted with visible points, whereas for the right video, the calibration protocol was conducted with a rectangular mat placed in front of the camera.

Fig. 8

Footprint mapping illustrating the sequence of steps during the falls. While the first two foot positions (left and right feet) show the subjects still balanced, the third foot position illustrates the first step after imbalance. The lines show the angle between the first step after imbalance and the point of major impact, measured with respect to the last balanced position of the stepping foot.

Fig. 9

Video sequence of the real-life falls of the two case studies. White dots mark the foot placements and the major point of impact (last frame). The first two frames show the subjects still balanced; imbalance occurred during weight shifting in the third frame (first step); the fourth frame shows inadequate stepping (a short step and a self-step, respectively); and the last frame shows the moment of impact.
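A hedged sketch of how the stepping/impact angle described for Fig. 8 might be computed from the mapped floor coordinates: the angle at the last balanced position of the stepping foot, between the vector to the first post-imbalance step and the vector to the major point of impact. The three points below are illustrative, not data from the paper.

```python
import numpy as np

def stepping_impact_angle(last_balanced_xy, first_step_xy, impact_xy):
    """Angle (degrees) at the last balanced foot position between the first
    step taken after imbalance and the major point of impact."""
    u = np.asarray(first_step_xy, dtype=float) - np.asarray(last_balanced_xy, dtype=float)
    v = np.asarray(impact_xy, dtype=float) - np.asarray(last_balanced_xy, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative floor coordinates (cm).
angle = stepping_impact_angle([0.0, 0.0], [35.0, 12.0], [80.0, 95.0])
```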
