# BIOMECHANICAL DATA CHARACTERIZING FOR MULTIFACTOR AND MULTIVARIATE STATISTICAL ANALYSIS

• P. Loslever
• J.J. Flahaut

## Abstract

Empirical studies in sport biomechanics are mainly organized as experimental designs that combine factor modalities. The n0 individuals are the modalities of the first factor (F0), and the other controlled factors are F1, F2, ... with n1, n2, ... modalities respectively. The data structure therefore looks like a hyperparallelepiped (HP) with n0×n1×n2×... modality factor combinations (MFCs), or cells. All of the cells, or only some of them, may be tested. For each cell of the HP, the same variables are considered: V1, V2, ... (movements, forces, pressures...).

Whatever statistical method is used to obtain results from this HP, one stage is essential: going from the empirical raw data to the data characterizing stage, i.e. to data that are compatible with a statistical method. Our contribution is to state the problem of this stage so that monodimensional, and more particularly multidimensional, statistical approaches can be used. An example from rock-climbing is considered. Data characterizing means data reduction, but does not involve a structural simplification: the phenomena observed in each cell c of the HP are described without sacrificing interesting information, and the output of the characterizing stage remains an HP. The variables V1, V2, ... can be characterized into specific new variables according to two points of view: the space variable coding level and the time integration level.

Illustration: climbing. The data structure is a parallelepiped whose three directions correspond to n0 individuals, n1 expertise levels, and n2 climbing situations. The variables are the vertical (V) and horizontal (H) positions of the left and right hands and feet (segment number s = 1 to 4) and of the gravity centre (s = 5). The characterizing stage aims at describing how an individual moves, i.e. the changes within the signals X(c,s,t) and Y(c,s,t), where c is a cell of the parallelepiped (c = 1, ..., nc), s is a segment, and t is the time sample (t = 1, ..., ntc).
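The HP data structure described above can be pictured as a multidimensional array. A minimal sketch, with all sizes hypothetical:

```python
import numpy as np

# Assumed sizes for illustration: n0 individuals, n1 expertise levels,
# n2 climbing situations; every cell stores the X and Y signals for
# ns segments over nt time samples.
n0, n1, n2, ns, nt = 10, 2, 3, 5, 100
X = np.zeros((n0, n1, n2, ns, nt))  # horizontal positions X(c, s, t)
Y = np.zeros((n0, n1, n2, ns, nt))  # vertical positions Y(c, s, t)

# One cell c of the HP is one (individual, expertise level, situation) triple:
cell_X = X[0, 0, 0]                 # shape (ns, nt): every segment over time
```

The characterizing stage then maps each cell's (ns, nt) signal block onto one or a few new variables, so the overall HP shape is preserved.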
There are many methods for achieving this aim; here are three of them.

Method 1: entropy of the gravity-centre trajectory (s = 5). The trajectory entropy can be computed for the trajectory Y = f(X), for instance. This indicator, named H(c,s), involves no space coding and an integration over time. The ANOVA shows that H decreases from the first to the last trial and is larger with experts than with beginners.

Method 2: chronology of the motor actions of a climber, i.e. a succession of static phases (grabbing, magnesia taking, segment displacement, equilibrium reaching) and dynamic phases (body motion). Here the space variable is qualitative and there is an integration over time. The statistical analysis shows that the experts optimize the number of movements for each of the n2 climbing situations, while the beginners do not achieve this optimization even in the last situation.

Method 3: inter-segment transitions between two successive stable postures p and p+1, a stable posture p being reached when the four limbs are at four grabbing positions (p = 1, ..., npc). The signals X(c,s,t) and Y(c,s,t), s = 1 to 4, are first coded into a qualitative signal S(c,s,p): S = 0 if the four segments are stable and held, S = 1 otherwise. Then the transition matrix T is computed: T(s,s') contains the number of times a segment s' moves after a segment s between two postures p and p+1. This indicator represents a qualitative variable (with 4×4 = 16 categories) and is obtained by integrating over the npc postures; nevertheless, the notion of chronology is partly kept. The statistical analysis shows that the experts prefer diagonal transitions (hand to foot) and the beginners lateral transitions (hand to hand or foot to foot).

These three methods, among many others, show that there are many ways to characterize biomechanical data. It is therefore necessary to build standardized propositions for assessing the characterizing stage. One of them must take the data reduction level into account.
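The entropy estimator behind Method 1 is not detailed in the text; a common way to estimate the entropy of a planar trajectory is to discretise the (X, Y) plane into bins and take the Shannon entropy of the cell-occupancy distribution. A sketch under that assumption (the bin count is an arbitrary choice):

```python
import numpy as np

def trajectory_entropy(x, y, bins=10):
    """Shannon entropy H of the trajectory Y = f(X) (Method 1 sketch).

    Assumption: H is estimated by discretising the plane into a bins x bins
    grid and computing the entropy of the occupancy probabilities.
    A lower H means a more regular, less scattered trajectory.
    """
    hist, _, _ = np.histogram2d(x, y, bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

With this estimator, a trajectory concentrated in one region gives H near 0, while a trajectory spread over many grid cells gives a larger H, matching the reading that H decreases as climbers become more regular.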
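The transition matrix of Method 3 reduces to a simple count over the ordered list of which segment moves between successive stable postures. A minimal sketch (the segment numbering and example sequence are hypothetical):

```python
import numpy as np

# Assumed segment numbering: 0 = left hand, 1 = right hand,
# 2 = left foot, 3 = right foot.
def transition_matrix(moving_segments):
    """T[s, s2] = number of times segment s2 moves right after segment s
    between successive stable postures p and p+1 (Method 3 sketch)."""
    T = np.zeros((4, 4), dtype=int)
    for s, s2 in zip(moving_segments, moving_segments[1:]):
        T[s, s2] += 1
    return T

# Hypothetical climb alternating hand -> foot moves, i.e. the diagonal
# transitions the analysis attributes to experts:
T = transition_matrix([0, 2, 1, 3, 0, 2])
```

Large counts in the hand-to-foot entries of T then flag the "diagonal" style of the experts, and large hand-to-hand or foot-to-foot counts the "lateral" style of the beginners.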
Another one consists in stating and checking data reduction hypotheses (e.g. a symmetric and monomodal distribution before considering a time average).
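Such a hypothesis check might be sketched as follows; the symmetry test and its threshold are illustrative assumptions, not choices prescribed by the text:

```python
import numpy as np

def safe_time_average(signal, skew_tol=0.5):
    """Return the time average only if the distribution looks symmetric.

    Sketch of a data-reduction hypothesis check: if the sample skewness
    exceeds skew_tol (an assumed threshold), the mean is a poor summary
    of the signal, so fall back to the median instead.
    """
    x = np.asarray(signal, dtype=float)
    m, sd = x.mean(), x.std()
    skew = float(np.mean(((x - m) / sd) ** 3)) if sd > 0 else 0.0
    return float(np.median(x)) if abs(skew) > skew_tol else float(m)
```

A monomodality check (e.g. inspecting the histogram for a single peak) would be applied in the same spirit before accepting a one-number summary of a whole time signal.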