A number of studies have highlighted the coordination of gesture and intonation (Bolinger, 1983; Darwin, 1872; Cruttenden, 1997; Balog & Brentari, 2008; Roustan & Dohen, 2010), but technological set-ups have so far been unable to couple acoustic and gestural data in sufficient detail. In this paper, we present the MODALISA platform, which enables language specialists to integrate gesture, intonation, speech production and content. The methods of data acquisition, annotation and analysis are detailed. The preliminary results of our pilot study show strong correlations between gestures and intonation when they are performed simultaneously by the speaker; the correlations are particularly strong for proximal segments. Our aim is to expand these results and to analyse typical and atypical populations across the lifespan.