Learning movement sequences with music changes brain connectivity. Behavioural relevance pending…

Research Spotlight #2

Moore, E., Schaefer, R., Bastin, M., Roberts, N., & Overy, K. (2017). Diffusion tensor MRI tractography reveals increased fractional anisotropy (FA) in arcuate fasciculus following music-cued motor training. Brain and Cognition, 116, 40-46.

A couple of weeks ago my mum emailed me a link to a BBC news article which she rightly knew would be of interest to me. It reported on a study by researchers mainly from the University of Edinburgh, and the title of the BBC article was: “Learning with music can change brain structure, says study”. This of course grabbed my attention, as I thought it could provide a neuroscientific counterpart to my PhD student John Dyer’s recent work, which had found that triggering a melody with one’s movements helped in learning and recalling a complex bimanual coordination task (tracing a diamond with one hand and a triangle with the other to create a 4:3 rhythm). The study by Moore et al. on which the BBC article was based is indeed interesting, but probably creates more questions than it answers. It also ties in neatly with an ongoing issue in neuroscience – trying to draw conclusions from neural measures when the behavioural framework within which they should fit is incomplete.

The study involved people learning different patterns of finger movements on their left (non-dominant) hand, e.g. index-ring-middle-pinkie-ring-index-ring-middle. You can try it yourself and see that it is not that easy – hence room for some learning to happen. Everyone in the study learned the task with a kind of Guitar Hero-style video to show them the order in which the fingers had to be pressed. Importantly, one group practised with just the video (‘control group’), while the other additionally heard musical pitches (presumably like keys on a piano) indicating the finger order of the sequence to press. This was the key difference between the groups – one had a ‘musical’ version of the task while the other didn’t. They learned the sequences at home, and the task sped up as they got more confident at producing the sequences. They were tested on performing the rehearsed sequences, as well as some sequences that they hadn’t rehearsed, at three stages: before training, midway through training and after training. MRI scans of their brains were also recorded before and after training. These were used to measure changes in the organisation of the fibres which connect neural cells between the auditory and motor regions of the right hemisphere of the brain (i.e. the side of the brain responsible for sending muscle signals to the left hand). The hypothesis was that learning with the musical sounds might produce greater auditory-motor linkages in the brain hemisphere involved in coordinating the task-to-be-learned.

The results of the study showed that both groups got better at the task; that is, they completed a greater number of correct sequences as a result of training. However, there were no differences in how much the music and control groups improved – both groups were producing pretty much the same number of correct sequences by the end of training. This is rightly taken to show that the musical tones did not influence learning (at least in terms of correct sequences). There was also evidence of a training-dependent change in the organisational structure of the axons connecting auditory and motor regions of the brain for the musical group in the right hemisphere, but not in the left hemisphere, and not at all in the control group. It could be noted that the changes in neural organisation were small (a 4% change in the main measure, with all measures hovering around the 0.05 p-value), but for such a ‘light’ intervention, this may be all the more compelling – if just adding a little music to the task can change the brain, imagine what you could do if you really went to town with musical movement training!

As I said earlier, these results are certainly interesting. The lack of difference in the learning measure between the groups, when one might expect the sound to be helpful, points to a need to better understand if and how sound could lead to enhanced learning of the task (e.g. through movement sonification rather than just as a guide). Also, the apparent change in neural fibres in the music group, although small, may represent a fairly surprising amount of neural plasticity in such a short space of time (although I cannot comment on whether a 4% difference is that surprising or not, as I am not up on what counts as a small or large change in neuroplasticity measures).

In spite of these interesting aspects, there are some limitations on what can be taken from this paper, and a number of further questions raised. Firstly, it is a shame that performance of the task was only examined using the number of correct sequences performed. While it is always a good idea to have a primary learning measure, other measures like timing accuracy or movement kinematics may have revealed group differences which would help with understanding what the neural changes correspond to. If the neural changes are not functional for the movements involved, but rather underpin some audio-motor mapping or association which is simultaneously being acquired by the learners, then this might have been examined by looking for disruptions in performance for the music group when an incongruous finger-tone mapping is introduced following training. It is also possible that using movement sonification (fingers trigger the sounds), rather than sounds as mere cues, may have led to enhanced learning for the music group. Each of these additional measures or experimental conditions might have led to observable behavioural effects, through which the neural changes could be interpreted. As it is, however, it cannot yet be determined whether the observed changes in the brain have some functional significance, or are merely an artifact, comparable to an indentation in one’s middle finger following a period of intensive handwriting.

To be fair, the authors do acknowledge that the lack of measured learning differences between the two groups limits the conclusions that can be made with regard to the neural data. Nevertheless, they do propose that the study may be valuable for clinical practice, e.g. for movement rehabilitation in stroke survivors. The mismatch between the excitement over the seeming neural transformation (less in the paper itself, more in its media coverage) and the lack of corresponding behavioural differences between the learning conditions is telling. Recently, a group of pretty eminent neuroscientists published an excellent critique of a trend in the neurosciences to focus disproportionately on the activity and structures of individual neural cells, or networks of cells, without giving due attention to the situated task and behaviour of the organism that such neural activity is supposed to subserve. Studying the properties of stomach acid molecules is supposed to help us understand digestion; studying the brain should help us understand behaviour.

One might ask – if the researchers had carried out the study as a purely behavioural experiment, would they have then been motivated to repeat it with MRI analysis? My guess is that the behavioural component would need to have been more convincing, in which case the cart may be in front of the horse on this one.


Different kinds of ‘skill transfer’

When I was a PhD student, I learned that there are three main topics of investigation in skill learning research: skill acquisition, skill retention, and skill transfer. The goal of researchers interested in skill ought to be to understand the processes by which these things happen, and to develop ways to enhance them.

The first two are fairly easy to define. Skill acquisition is the improvement of performance at a previously unlearned task, usually through repetition and training. Researchers might ask how to accelerate this process, or how to design training/practice conditions that produce the most adaptive and long-lasting learning. Skill retention is the ability to perform well at the now-learned task after extended time periods, or following interference from another task/skill. ‘You never forget how to ride a bike’ = skill retention. In the research literature, retention is generally used as a yardstick to evaluate how successful the acquisition phase has been. Both acquisition and retention are (reasonably) well-defined and have been extensively researched.

Skill transfer is far less easy to define than acquisition or retention, and, while the most elusive, it is perhaps the most desirable phenomenon. It is something like the ability to perform an unrehearsed skill as the result of having previously acquired and retained a different-but-related skill. This definition has many problems and does not capture the idea of transfer fully. Examples might be better. Imagine a proficient Gaelic football player switching to rugby – we might expect that many of the sub-skills required to do well in Gaelic football would transfer to rugby, even if not all. Or a cello player taking up the viola. While the new instrument is a fraction of the size of the former, and played in a completely different posture, we might expect some of the learned skill to carry over. Of more general relevance, how can practice of one thing transfer to unpractised situations? How can set-piece drills transfer to a real match against another team? How can rehearsing jazz scales transfer to a live improvisation during a gig? These are the kinds of phenomena and questions that skill transfer as a concept is supposed to capture.

An idea that I have been wondering about recently (and haven’t yet had the time to fully research) is about the different ways we could conceptualise skill transfer. Typically, the idea is that ‘acquisition of the skill to perform Task A will reliably result in better performance in Task B’ (assuming that Task A and B are similar enough in the relevant ways¹). However, I think there is another way of thinking about transfer. This is that ‘acquisition of the skill to perform Task A will reliably result in faster/better acquisition of the skill to perform Task B’. This may reveal itself independently of initial performance at Task B. Rather, by having tuned into the information that allowed Task A to be learned, and assuming that such information is meaningfully present in Task B, the learner will more readily be able to attune their attention to this in learning Task B and show accelerated acquisition. I am inspired here by the common anecdote² that having learned one instrument to a proficient level, it is easier to learn a second, and then easier still to learn a third, and so on. Thus, one view of transfer is improved performance of the unpractised task, while another would be improved learning of the unpractised task. The distinction may relate in part to a contrast between ‘pick-up of information for action’ and ‘pick-up of information for learning’, but I am not yet in a position to formulate this idea fully.

I am sure that there is research relevant to this question out there. However, I have certainly not come across an explicit distinction between these two possible (and not mutually exclusive) ways that learning one skill could benefit another skill. I hope to look into this question further soon and report back with what I find.


  1. What counts as ‘the relevant ways’ is hugely important, and probably at the heart of the question of skill transfer, but something I will leave alone for now.
  2. I will look for proper evidence of this idea.

Our senses limit our actions, and this is a good thing

It can be so helpful sometimes to revisit older texts that were part of your intellectual trail, but which haven’t been retread for a while. Today, I met with my PhD student Alannah to discuss a book chapter by Karl Newell, ‘Constraints on the Development of Coordination’. The last time I thought about this paper properly was when Johann Issartel and I set out to write a critique of it 10 years ago (this has yet to materialise, but may happen yet), and I haven’t looked at it since then. Alannah’s project is about motor development in children with visual-impairment, and so it seemed like a relevant source of theoretical ideas for her thesis, and something that would be worth discussing. I’m very glad we did.

The paper sets out a theory of the development of coordination, essentially the principles by which children come to acquire skilful control of their movements. A central idea is as follows. There are too many ways to move. All the possible ways of rotating joints, contracting or relaxing muscles, and shifting limb parts through space mean that there is a huge mathematical problem for the developing brain to solve: how to reduce these possibilities from an infinite set to a workable set for controlling intentional behaviour (this is a crude summary of Bernstein’s Degrees of Freedom problem).

Part of the answer to this problem lies in the concept of ‘constraints’. Constraints are limits on how physical things can move. Gravity. Limb mass. The material springiness of connections between muscles, ligaments, tendons, and bones. Boundaries of frequencies of signals to and from the central nervous system. The properties of structures, objects and events in the immediate environment. All these things reduce the degrees of freedom available. Thus, coordinated behaviour emerges from how different constraints force organisation of the component parts involved. As a somewhat removed illustration, a murmuration of starlings emerges from the combined constraints of gravity, air-flow, wing shape, and a few simple (though as yet undiscovered) rules governing how each bird responds to motions of other birds in their visual field. Rather than thousands of birds all flying around at random, these constraints limit their possible paths of motion to a smaller subset of codependent trajectories. The result is a beautiful, coordinated complex system (see video below). The idea is that human (and other animal) movement obeys similar natural laws, whatever they may turn out to be. Thus, the concept of constraints on coordination provides a starting point to a solution to the Degrees of Freedom problem. This idea is summarised nicely in a line quoted from another paper by Kugler, Kelso and Turvey: “it is not that actions are caused by constraints, it is rather that some actions are excluded by them”.
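The murmuration idea can be made concrete in a toy simulation (my own illustrative sketch – the specific rule and parameters are invented for the example, not drawn from Newell’s chapter or from any model of real starlings). Each simulated ‘bird’ follows a single local rule: turn part-way toward the average heading of its nearest neighbours. That one constraint is enough to collapse initially random headings into collective alignment:

```python
import numpy as np

rng = np.random.default_rng(1)

n_birds, n_steps, k = 50, 100, 10   # flock size, time steps, neighbours considered

pos = rng.uniform(0, 50, size=(n_birds, 2))     # random starting positions
heading = rng.uniform(0, 2 * np.pi, n_birds)    # random starting headings

def spread(theta):
    """Circular spread of headings: 0 = perfectly aligned, ~1 = random."""
    return 1 - np.abs(np.mean(np.exp(1j * theta)))

spread_before = spread(heading)

for _ in range(n_steps):
    # the one local rule: each bird turns part-way toward the mean
    # heading of its k nearest neighbours
    dists = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    nearest = np.argsort(dists, axis=1)[:, 1:k + 1]           # exclude self
    local_mean = np.mean(np.exp(1j * heading[nearest]), axis=1)
    heading = np.angle(0.8 * np.exp(1j * heading) + 0.2 * local_mean)
    pos += 0.5 * np.column_stack((np.cos(heading), np.sin(heading)))

spread_after = spread(heading)
print(spread_before, spread_after)  # the local rule pulls random headings toward alignment
```

The point is not the particular rule, but that the constraint excludes most of the possible trajectories: coordination emerges without any individual bird ‘knowing’ the global pattern.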

Importantly, information picked up through our senses can also constrain movement. That is, when functioning to guide action, vision/audition/proprioception/etc. all limit the range of movements that can/should be made. We tested this idea (informally) today by having me close my eyes and draw a figure-of-eight in the air. When I made the same movements with eyes open, the pattern was more accurate and consistent. The set of finger movement possibilities was reduced by the visual constraint of how my limb moved in relation to the intended pattern. Perception limits action. This brings me to a ‘Eureka’ moment I had when re-reading the paper, and which Alannah and I discussed in earnest today.

Visual-impairment is not a constraint on coordination, but rather a reduction in constraints. Having limited or no visual access to one’s own limbs, or to objects/structures/events in the environment, does not limit movement but rather removes a limit on movement. Thus, movement development is affected by having fewer informational stabilisers and contours to follow. Of course, other modalities (audition, proprioception, etc.) can and do impose constraints on movement, and optimal patterns of coordination may be discovered by someone with visual-impairment through these limiters. The goal now becomes identifying the best ways to organise task and environmental constraints to help the children uncover these solutions, rather than trying to replace visual ‘input’ through other channels. Thinking about vision and the other senses as limitations on movement will really shift the way Alannah and I have been viewing perceptual-motor development in children with visual-impairment.

(Re-)reading older papers is a good idea!

Musicians keeping together in time

I gave a guest lecture yesterday on the topic of ‘Action’ in Music Psychology. This was for a colleague/friend, Trevor Agus, who runs a course called Music Psychology for students enrolled on Music programmes in the School of Arts, English and Languages. We amuse ourselves that he teaches Music Psychology to music students, while I teach Psychology of Music to psychology students. This was the second time I have given this class.

It is an odd thing for me to teach a class on the Psychology of Action to music students, not least because I could almost have become a music student myself at one point in my life. Instead, I became a student of philosophy and psychology, and then movement, and then movement in music, etc. Ah, well. It feels very different trying to impart a message about motor coordination and skill acquisition to musicians than to impart the same message to psychologists. The things that need emphasis differ, and the ideas that capture the room differ too.

One idea from the class that I was happily reminded of in preparing for it is the complex challenge of musicians coordinating with each other in ensemble performance. It is miraculous enough that one nervous-muscular-skeletal system can coordinate its own behaviour to give rise to musical performance, but it is even more miraculous that many of these systems can not only coordinate their own sounding actions, but also coordinate with each other’s actions. Much of the research into this phenomenon is focussed either on measuring timing between musicians (e.g. the correlations of note interval variations between musicians), or on identifying the perceptual signals that might support musicians in the task of interpersonal musical coordination. In the latter case, the visual cues from body movements and gestures (both intentional and unintentional) seem to play a pretty big part in helping musicians to stay coordinated with each other while enacting a performance.

An example of this that I used in the class is from a concert by the Penguin Cafe Orchestra filmed for the BBC in the mid-80s. In the performance of Air à Danser, a section of the piece involves the group slowing down together a couple of times, then speeding back up to resume the flow of the music. Simon Jeffes, the leader of the group, conducts this process through a combination of head movements, eye contact and body gestures, with the result that around a dozen separate musicians are able to control the timing of their actions as a single unified system. The video clip of the whole track is embedded below, and the section in particular begins at around 1:05. It’s a lovely example of multisensory interpersonal coordination in musical performance, as well as being a very charming piece of music (in my opinion, at least).

Movements that cause sound are different from those that don’t

Research Spotlight #1

Neszmélyi, B. & Horváth, J. (in press). Consequences matter: Self‐induced tones are used as feedback to optimize tone‐eliciting actions. Psychophysiology. doi: 10.1111/psyp.12845*

This paper addresses a conceptual and methodological issue in research on the attenuated perception of sensory feedback from one’s own actions. Research has indicated that neural responses to sensations caused by one’s own actions (e.g. the sound of a bell you ring yourself) are reduced compared to responses to sensations that are not self-generated (e.g. the sound of a bell you didn’t ring). This idea makes intuitive sense: you are likely to be more sensitive to things in your environment that you did not immediately cause to happen, since things you did cause can be anticipated. This is typically studied by recording EEG activity (electrical signals on the scalp) when someone makes a movement that causes a sound, subtracting the activity due to the movement itself (without sound), and comparing the result to activity from just listening to the sound with no movement. The big assumption here, which this paper tackles, is that the muscle movements which cause a sound and the same movements without auditory consequences are controlled in essentially the same way.
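The subtraction logic – and how it can mislead when the movements actually differ between conditions – can be sketched with toy signals (the waveforms below are entirely made up, purely to illustrate the arithmetic, not to model real EEG):

```python
import numpy as np

t = np.linspace(0, 0.5, 500)                 # a 500 ms epoch
auditory = np.exp(-((t - 0.1) / 0.02) ** 2)  # toy auditory evoked response

def motor(force):
    """Toy motor-related activity that scales with pinch force."""
    return force * np.sin(2 * np.pi * 4 * t)

# The standard logic assumes the motor component is identical with and without sound:
am = auditory + motor(0.5)   # pinch elicits a tone -> lighter pinch (as the paper found)
m = motor(1.0)               # silent pinch -> harder pinch
a = auditory                 # listening only, no movement

estimated_auditory = am - m                   # the usual (AM - M) estimate
apparent_attenuation = a - estimated_auditory

# This "attenuation" is nonzero purely because the movements differed
# between conditions, not because the auditory response was suppressed:
print(np.max(np.abs(apparent_attenuation)))
```

If the pinches in AM and M are not the same movement, the residual motor activity survives the subtraction and masquerades as auditory attenuation – which is exactly the assumption the paper set out to test.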

In the experiment reported, participants had to pinch a Force Sensing Resistor (FSR) with a pre-set amount of force. This constituted the movement of the task. After some training with a visual display to get the required amount of force right, participants then performed the task under two conditions. In one condition, their pinch was effectively sonified: when they applied the right level of force, a 1000 Hz sine tone sounded (audio-motor condition: AM). In another condition, they had to pinch the FSR without any sound feedback (motor only: M). A final condition involved listening to recordings of the sounds created by their previous pinches, during which they made no finger movements (audio only: A). The AM condition always came first, while the order of the A and M conditions was varied between participants. Participants did 300 repetitions of each condition (to get good EEG data, lots of repetitions of a given event are necessary). Pinch force was recorded in the AM and M conditions, and EEG activity was recorded in all three conditions.

The first striking result was that the pinch force applied differed dramatically between the condition in which pinches caused the tone (AM) and the no-tone condition (M): force was much lighter when the pinch produced a sound than when it didn’t. The second result of interest was that these differences in movement between the conditions could account for the attenuated EEG response to self-generated sounds when applying the subtraction approach described above. While this second result is of interest to people working on the neuroscience of agency, for me the first finding is the most interesting.

The difference in force applied when expecting a tone to be caused by your action versus silent pinching suggests that auditory feedback from movement can materially alter the nature of coordination with the task at hand. When sound feedback from movement is present, the movements change because the interaction has changed. This needs to be considered when comparing, for example, behaviour during movement sonification with non-sonified movement interactions (something I would routinely do in my own research on sonification in skill learning). The auditory feedback may not only change the way that movement is guided, but also the entire nature of the task as experienced by the agent involved (actor, learner, etc.). A shift from an experimenter’s-eye-view to a participant’s-eye-view is, as always, essential.

Another important idea which this study highlights was masterfully laid out in John Dewey’s 1896 critique of the reflex arc concept in psychology (essentially, the fixation on stimulus-response causality). In the AM condition, the tones may be the result of the pinches, but they also seem to modulate the coordination of subsequent pinch movements. Essentially, participants pinched more softly when they knew a sound would result. Thus, the sensory consequences of one action modified the control of subsequent actions across the block of trials. Behaviour is here best characterised as an ongoing coordination between action and perception, not a set of isolated stimulus-response pairs.

A final idea that this study highlights, and which I want to mention, is the potential folly of assuming that one can study coordination and the brain by adding and subtracting perceptual and motor elements of a task. In this study, neural activity in the AM condition was not simply the sum of activity in A and M, as had previously been assumed. If the brain is a non-linear dynamical system (and it surely is), then the different ways that elements of a task inter-relate and affect the overall state of the system need to be very carefully considered.

Of course, there are limitations in the experiment (as there are in every individual scientific study). The sound used (1000 Hz tone) is typical of a lab-based experiment, but is frustratingly un-ecological. A more ‘pinch-relevant’ auditory consequence might have revealed further interesting motor and neural behaviours. The movement itself (pinch force) is rather limited and one-dimensional, so it is not clear how far this effect could generalise to more complex actions. Also, having the A condition after the AM condition for everyone is problematic as this might mean that neural responses to the tones in the listening condition are affected by the newly formed action-sound mapping from the previous 300 AM trials. Still, this paper did address a potentially pernicious assumption in the neural agency literature, and in so doing highlighted important concepts for thinking about auditory-motor coordination processes.


*If you cannot access this article, use the DOI in Sci-Hub to get it. Science should be open to everyone!

Instruments as (complex) landscapes

Today, I attended a very interesting seminar by Dr Scott McLaughlin at the Sonic Arts Research Centre, titled ‘Material Cartographies as Composition’. Scott discussed his approach to musical composition, which involves using multi-stable/chaotic properties of instruments or sound-producing materials (e.g. mics and speaker feedback) as the basis of his compositions. The complex interactions between performers and such chaotic or multi-stable systems form the topology/cartography/landscape* over which each piece unfolds. A key idea is that once a landscape of interaction is set up, e.g. by bringing a skilled musician to an ‘instrument’ that responds in a determinate-but-chaotic way, unpredicted sonic things can happen within a reasonably constrained set of conditions. A nice example presented was of a guitar player controlling feedback by moving the guitar around in front of an amplifier. This has the effect of changing the pitches of the feedback in ways that are constrained by the laws of physics, but which are not straightforward to control, given all the complex parameters involved (guitar string tension, resonant frequencies, millimetre distance between guitar and speaker, etc.).

I liked Scott’s approach, particularly the fact that he seemed to have a genuine interest and respect for the ideas he imports from complex systems theory, rather than merely applying the terminology in its colloquial use. However, one question which I had was the degree to which these instruments or landscapes are learn-able to a performer. That is, are the compositions simply the accidents of imparting energy into a complex responding system, or could a user acquire some degree of mastery of the systems? This is an important question for me, coming at this from the perspective of a psychologist interested in skill acquisition. Understanding if and how musicians can learn to perceive and manipulate the regularities (invariant patterns) in complex systems may help in understanding skilful adaptability in lots of areas of life, not just in experimental music.

Part of Scott’s answer to this question, which I think is a useful starting point, is to think of games. Take card games. Even though the rules of a game of poker might be constant, no two poker matches are identical**. Hence, individual poker matches are deterministic but chaotic systems (in some description). When people first start to learn the game, play is often faltering, clumsy, and unsatisfying. With practice, as the patterns and outcomes become perceivable and players learn to detect and act on them, more fluid and satisfying play ensues. In a sense, skilled poker players are experts in managing chaotic, complex systems. While this is a useful analogy, card games generally do have a number of explicit rules which can anchor the learner in discovering the more complex patterns that emerge during different iterations of game-play. With highly indeterminate, complex systems – like some of the instruments Scott described – it is not quite as clear what would anchor a musician’s behaviour in order to get the process of learning off the ground.

Another point Scott made was that the cultural context of practice in which the musician is skilled, and in which the performance occurs, may also constrain this process. This is a very interesting idea, and one that I have become increasingly interested in. However, my challenge, as a psychologist, is going to be figuring out how to scientifically research such a multifaceted problem. This is something I will probably devote quite some space to on this blog, in the hope of getting this research question off the ground.

A final point from today’s seminar that I want to mention here is Scott’s response to a question from Paul Stapleton. As Scott was talking, Paul and I were sat next to each other, and a number of times we spotted ideas that resonated with our own discussions around musical improvisation and skilful adaptability. Paul asked why Scott had not mentioned ‘improvisation’, given how relevant it seemed to the ideas in the seminar. Scott replied that he had found that when he tried to explicitly instruct musicians to improvise in his topology/landscape compositions, the results had generally been ‘shit’. This raises a very interesting question about improvisation as something that interaction within a complex system requires (to manage all the unpredictable fluctuations and chaotic behaviours) versus improvisation as intentional variation within a relatively stable, predictable set of constraints. I cannot yet decide whether these are two separate senses of ‘improvisation’ or two flavours of a common concept. Hmmm….


*these terms were used semi-interchangeably, with acknowledgment of important conceptual distinctions

**this is true, at least in part, because the likelihood of two shuffles of a deck of cards producing the same ordered deck is astronomically small.
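(For the curious, the number of orderings in question is 52 factorial – quick to check:)

```python
import math

orderings = math.factorial(52)  # distinct orderings of a 52-card deck
print(orderings)                # a 68-digit number, roughly 8.07 * 10**67
```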



This is the first post on Coordinating with Sound.

I want to explain the grey waveform image that forms the header to this site. The image is of the transition between an audio signal (a short snippet of Fela Kuti’s Open and Close) and a motion capture signal (taken from an experiment I ran with Mihalis Doumas in which participants had to move their finger side-to-side between two targets in time with different rhythmic sounds). I created the image in Matlab by normalising the two signals (audio and mocap) and phasing from one to the other. The bottom signal is just the reverse of the top.
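For anyone curious, the gist of the procedure (which I originally did in Matlab) can be sketched in Python. The signals below are toy stand-ins – the real audio and mocap data are not reproduced here – but the normalise-then-crossfade logic is the same:

```python
import numpy as np

def normalise(x):
    """Scale a signal to the range [-1, 1]."""
    return x / np.max(np.abs(x))

def crossfade(a, b):
    """Linearly phase from signal a into signal b over their (equal) length."""
    w = np.linspace(0.0, 1.0, len(a))
    return (1 - w) * normalise(a) + w * normalise(b)

# toy stand-ins for the two signals
t = np.linspace(0, 1, 1000)
audio = np.sin(2 * np.pi * 50 * t)          # stand-in for the music snippet
mocap = np.sign(np.sin(2 * np.pi * 2 * t))  # stand-in for side-to-side finger motion

top = crossfade(audio, mocap)
bottom = top[::-1]  # the bottom trace is just the top one reversed
```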

The reason I mention this is that I sometimes found it useful to think of motion and sound as signals, that is, quantifiable patterns of fluctuation over time. Of course, there are many different ways that both sound and movement can be represented as signals, and lots of theoretical issues in the choices that go into these representations (topics for future discussions no doubt). Nevertheless, by thinking of sound and movement as signals, we can analyse similarities and differences in their forms, and look for connections between these. Hence this image as the header for the site.