Chaining/Compressing: Some Thoughts About Partial Synthesis, Part 2

From Partial Synthesis to Chaining/Compressing

            What’s wrong with the name “partial synthesis”?

            Twenty-three years ago, I wrote that MLTers should change the name “partial synthesis” to something else. I gave three reasons. First, it scares music teachers new to MLT, turns them off to it, makes them feel stupid, drives them away. Second, it reveals nothing, because every level in the skill-learning sequence is partial, and every level is a synthesis. And third, it fails to reveal what actually happens at this skill level.

            While we’re on the subject, what does happen at this skill level? MLTers are likely to give an answer something like the following, which appears on the GIML website:

At the aural/oral and verbal association levels, students learn tonal and rhythm patterns individually. Although the teacher always establishes tonal or rhythm context, syntactical relationships among patterns are not emphasized. At partial synthesis, students learn to give syntax to a series of tonal or rhythm patterns. The teacher performs a series of familiar tonal or rhythm patterns without solfege and without first establishing tonality, and students are able to identify the tonality or meter of the series. The purpose is to assist them in recognizing for themselves familiar tonalities and meters. As a result of acquiring partial synthesis skill, a student is able to listen to music in a sophisticated, musically intelligent manner.

            I like this explanation; in fact, you’ll find something similar in my book (2000), where I suggested we use the word “chaining,” a term Robert Gagne used in his book The Conditions of Learning (1965). (The term “verbal association” also comes from Gagne.) In his hierarchy of the various types of learning, Gagne describes the process of chaining this way: “[Children] respond to multiple stimuli in a sequence that accomplishes a more complex task than encountered in stimulus/response learning.”

            I wrote (2000) that “chaining,” in musical terms, means that tonal or rhythm patterns in a series reveal a tonality or meter; individual patterns do not. 

            So why not just call this level “chaining,” or perhaps “contextualization,” a term MLTer Andy Mullen put forth?[i] Because these words are not enough; they give us only half the story. They’re all about the context each series of patterns reveals: major tonality, duple meter, etc., which is fine, as a start. But at this level, students learn a subtler skill: how to compress a long, unwieldy piece of music into shorter forms that are both pliant and bare-boned — series of tonal and rhythm patterns. (If you’d like more information about this, please consult my book The Ways Children Learn Music, pp. 131-141.)

            So then. The word “chaining” is good but incomplete. It gets at the what — the context embodied in a series of patterns. But it doesn’t explain the how: the process of shrinking music down to a series of patterns.

            What’s wrong with the term “contextualization”? It has the word “context” in it, I hear you saying, so people should understand its meaning right away. “Chaining” calls for an explanation; “contextualization” speaks for itself. But is this really true? Let’s take a closer look at the two options — chaining and contextualization.

            I imagine the following two scenes:

_________________________________________________________________________________________

SCENE 1: A new MLTer asks what “chaining” means.

NEW MLTer: I get what happens at the aural/oral and verbal association levels. What’s the next level?

OLD-TIME MLTer: Chaining.

NEW MLTer: Chaining? What does that mean? I get that kids sing and chant individual patterns during the aural/oral and verbal association levels. But at those lower levels, don’t several students create a chain of patterns when the teacher calls on one student after another to sing? Chaining happens at those lower levels too, doesn’t it?

OLD-TIME MLTer: Yes, but at this level, the various series, or “chains,” of tonal or rhythm patterns have syntactic meaning they would not have if you sang or chanted them in isolation. That’s why, at this level, you don’t have to establish tonality or meter at the outset of the LSA, and you don’t have to sing or chant with syllables. The series of patterns itself reveals the tonal or metrical context.

SCENE 2: A new MLTer asks what “contextualization” means.

NEW MLTer: I get what happens at the aural/oral and verbal association levels. What’s the next level?

OLD-TIME MLTer: Contextualization.

NEW MLTer: Contextualization? What does that mean? I get that kids sing and chant individual patterns during the aural/oral and verbal association levels. But doesn’t the teacher establish the tonal and metrical context at the outset of each LSA? And doesn’t the teacher continue to reestablish tonality and meter throughout the LSA? Contextualization happens at those lower levels too, doesn’t it?

OLD-TIME MLTer: Yes, but at this level, the various series of tonal or rhythm patterns are contextualized. They have syntactic meaning they would not have if you sang or chanted them in isolation. That’s why, at this level, you don’t have to establish tonality or meter at the outset of the LSA, and you don’t have to sing or chant with syllables. The series of patterns itself reveals the tonal or metrical context.

_________________________________________________________________________________________

            What do these dialogues show? First, the two words get at basically the same thing: the chain of patterns reveals the context. Second, no term explains itself! No matter what term you use — “partial synthesis,” “chaining,” or “contextualization” — you still must explain what you mean.

            Unlike “chaining,” the word “contextualization” brings with it a great disadvantage: it’s a bloated, academic-sounding, 7-syllable monster that will frighten newcomers. (And I should add that it’s an aberration for Andy Mullen, whose prose style on his website is always clear, straightforward, fun, and engaging, as you’ll see when you read his material at https://theimprovingmusician.com/.)

            In general, music teachers are turned off to Gordon’s MLT — and this has been the case for more than 40 years — because of the language. Actually, I believe it’s the density of the language, not the jargon itself, that drives people away.

            One immediate way writers can make their prose less densely packed (and more readable) is to go slow when choosing polysyllabic words. In short, MLT writers should think long and hard before they expand perfectly good root-words with needless prefixes and suffixes. The word “audiate,” for instance, is not a problem; but it grows into a problem when we add suffixes to it. What about audiation? Or audiational? Or (heaven help us) audiationally? Do you feel the difference? The more syllables writers stuff into their words, and the more polysyllabic words they cram into their sentences, the more they poison their writing style. (I’m not, by the way, a fan of the suggestion Strunk and White [1999] offer: omit needless words. Omitting needless syllables strikes me as a better way for writers to improve their style.)

            But let’s get back to the main thrust of this blogpost: chaining vs. contextualization. To be fair, I’m satisfied with neither term. Why? For the reasons I mentioned above: those words fail to account for how students learn to understand tonal and metrical contexts. In short, those words reveal nothing about how our students learn to compress music into series of patterns.

            In Part 3, the final installment, I’ll show with musical examples how chaining/compressing works in practice.

_________________________________________________________________________________________

PS. In each of my posts, I aim for a Flesch Reading-Ease score in the low 60s, with an average sentence length of fewer than 17 words, and an average word length of roughly 1.5 syllables per word.

Flesch constructed his readability test so that the average word length and the average sentence length interact. I knew this post would be shot through with polysyllabic words; so I compensated for that sad fact by splitting sentences whenever I could, and by truncating gratuitously tautological and grandiloquent polysyllabic linguistic units — pardon me, by hacking away at syllables until only the roots of words were left.

The average sentence length of this post is 14 words per sentence, and the average word length is 1.56 syllables per word.

This post has a Flesch Reading-Ease score of 63 (placing it on an 8th grade reading level).
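
            For anyone curious how those two averages turn into a single score, here is a minimal sketch of the standard Flesch Reading-Ease formula, written in Python. The function name is my own, and the averages plugged in are the rounded figures reported above; the result lands a couple of points below the 63 reported for this post, because readability checkers count sentences and syllables (and round their averages) in slightly different ways.

```python
# A minimal sketch of the standard Flesch Reading-Ease formula:
#     score = 206.835 - 1.015 * (average words per sentence)
#                     - 84.6  * (average syllables per word)
# The rounded averages below are the ones reported in this post; any real
# readability checker will count its own way and report a slightly different score.

def flesch_reading_ease(words_per_sentence: float, syllables_per_word: float) -> float:
    """Higher scores mean easier reading; roughly 60-70 corresponds to an
    8th/9th-grade reading level."""
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

print(round(flesch_reading_ease(14, 1.56), 1))  # about 60.6 with these rounded averages
```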

_________________________________________________________________________________________

Bluestine, Eric. 2000. The Ways Children Learn Music: An Introduction and Practical Guide to Music Learning Theory. Chicago: GIA.

Flesch, Rudolf. 1951. How to Test Readability. New York: Harper & Brothers.

Gagne, Robert M. 1965. The Conditions of Learning. New York: Holt, Rinehart and Winston.

Mullen, Andrew: https://theimprovingmusician.com/

Strunk, W. and White, E. B. 1999. The Elements of Style. 4th ed. Upper Saddle River, NJ: Pearson.


[i] https://theimprovingmusician.com/partial-synthesis-the-enigma-of-the-skill-learning-sequence-part-1/

Chaining/Compressing: Some Thoughts About Partial Synthesis, Part 3

            This post is an abridged version of a post I wrote a few years ago in which I compared music and linguistic terms. You can find the whole post here: https://thewayschildrenlearnmusic.wordpress.com/2018/07/20/music-language-analogies-part-5-deep-structure-and-surface-structure/

_________________________________________________________________________________________

            Consider Series A below.

Series A (no subtitle)

            We can take Series A and transform it, toy with it, add pitches and rhythm patterns to it, until it takes on a distinct melodic profile, as in Melody #1 shown below.  In other words, we can transform Series A into art.

Melody #1

            Suppose we transform Series A once more by creating Melody #2—a variation on the first melody.

Melody #2

            After hearing Melodies 1 and 2, we can, as astute listeners, generalize that they grew out of the same creative inspiration.  How do we do this?  We audiate the essential pitches of the two melodies; then we realize that those melodies come from the same well-spring, namely Series A (or a series of patterns very much like it).  In short, we compress—this is a crucial point!—we compress the two melodies into a mental structure, a series of patterns, that reveals a vital truth:  though superficially different, the two melodies are, in a deeper sense, the same.
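
            If it helps to see that idea outside of musical notation, here is a small, purely hypothetical sketch of “compression.” Since Series A and the two melodies exist only as notation in the examples above, the pitch lists below are invented placeholders; the function simply strips away the tones marked as ornamental. The point is only that two different surfaces can reduce to the same skeleton.

```python
# A hypothetical illustration of "compression": each note is (pitch, is_structural).
# The pitches below are placeholders, not the actual contents of Series A or
# Melodies 1 and 2.

def compress(melody):
    """Keep only the structural tones -- a crude stand-in for audiating the
    essential pitches of a melody."""
    return [pitch for pitch, is_structural in melody if is_structural]

melody_1 = [("D", True), ("E", False), ("F#", True), ("G", False), ("A", True), ("D", True)]
melody_2 = [("D", True), ("C#", False), ("F#", True), ("E", False), ("A", True), ("B", False), ("D", True)]

print(compress(melody_1))                        # ['D', 'F#', 'A', 'D']
print(compress(melody_2))                        # ['D', 'F#', 'A', 'D']
print(compress(melody_1) == compress(melody_2))  # True: different surfaces, same deep structure
```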

            Here is one final thought:  I want to challenge you to see generalization and creativity not as two distinct levels of learning, but as processes that work in continuous, mutual reversal.  By that, I mean something very simple:  when we generalize, we contract; when we create, we expand.

            Please take one more look at melodies 1 and 2.  When you generalize that those melodies have a deep sameness, you do so by compressing them to a bare-boned series of patterns like Series A.  When you create, you start by audiating essential pitches and durations; then you combine them into patterns in various series—a skeletal structure; and then, through creativity, you add to the skeletal structure.  You expand.  (Of course, a large part of creativity is not expansion, but deletion.  A handy example of that is this blogpost:  I’ve cut dozens of words out of it so far.  Mainly, though, you delete words not when you write, but when you rewrite.)

_________________________________________________________________________________________

            I composed a two-part Invention and a Sinfonia (at least up to the first modulation, because that’s all I had time to compose). I chose the following deep structures to work with:  Series A and B.

Series A (Confirms A major)
Series B (Modulates from A major to E major)

            Conjoining Series A and B was easy.  After all, Series A suggests some kind of exposition in A major, while Series B suggests some kind of modulating bridge to the dominant, E major. Embedding (that is, adding filler) was the challenging and fun part.

            Suppose you’re a listener who has not learned to audiate tonality, keyality, or chord changes. How would you hear the two pieces below? Would you hear them as the same or different? You might pick up that they start with the same motives, but you’d still hear them as different from each other. And you’d be right. As surface structures, they are different: the Invention is in duple meter, and the Sinfonia is in triple; the Invention is written in two-part counterpoint, while the Sinfonia is written in three-part counterpoint. And there are many more surface-structure differences I won’t bother mentioning.

            But if you listen deeply, you’ll notice they modulate the same way. Beneath the surface, these pieces are tonal siblings! Yes, this kind of listening is a challenge for our students, but isn’t it, after all, the way we want them to listen to music?

            I will leave you with these two compositions: an Invention in A major and a Sinfonia in A major that I created from (and can generalize to) the same deep-structure series of patterns: Series A and B.

Invention in A major (up to the modulation to E) by Eric Bluestine
Sinfonia in A major (up to the modulation to E major) by Eric Bluestine

Chaining/Compressing: Some Thoughts About Partial Synthesis, Part 1

            Recently at a GIML[i] conference, I chatted with a colleague who had read my book and asked what I meant by “chunking.”

MLT Colleague: When I think of chunking, I think of practicing a piece of music in small units, and then putting those units together. Is that what you mean?

Me: (surprised) No. Not at all. What I had in mind was taking an unwieldy amount of musical information, and compressing it into a short, manageable form — not to memorize the music, but to understand its essence.

MLT Colleague: Compression?

Me: Yeah.

MLT Colleague: Then you should call it that: Chaining/Compressing.

Me: Okay, I will.

            It’s not as slick and pithy sounding as chaining/chunking, but it might be clearer. Essentializing is really what I’m reaching for — but no. I refuse to use such a bloated word.

_________________________________________________________________________________________

            Not long ago, I wrote the following message (quoted below) to my MLT colleagues on Facebook. I wanted to find out why the partial synthesis skill level is so difficult for music teachers — especially those new to MLT — to understand. What makes partial synthesis so daunting?

            I’m curious about what people think of partial synthesis. I’m especially interested in finding out what makes this skill level so confusing for people new to MLT. What do MLT folks out there think?

            The name doesn’t help, certainly. I think the confusion might also be that it’s placed in the skill-learning sequence as a direct readiness for reading and writing, which makes sense up to a point, but this skill level means more than that.

            Another point of confusion: When I hear MLTers talk about partial synthesis, they usually talk about stage 3 of audiation. What’s the tonality of the piece (or series of patterns) I’m hearing? What’s the meter? And again, that makes sense up to a point.

            But for me, partial synthesis is more about stage 4 — the stage when we keep patterns in our heads so that we can discern things like whether a modulation has taken place, or whether we’re hearing imitative counterpoint; and it’s also the stage when we try to wrap our arms around the whole piece by audiating form, timbre, texture, dynamics, and how these elements interact in cooperation, or perhaps in conflict with each other…

            The responses to this post were fascinating. I won’t print all of them (certainly not without my colleagues’ permission), but four respondents stand out, and they deserve mention. Kudos to Eric Rasmussen, Beau Taillefer, Andy Mullen, and Jennifer Bailey for their thoughtful (and sometimes funky) insights into partial synthesis.

            If you haven’t yet heard Beau Taillefer and Eric Rasmussen’s podcast Audiation in the Wild, you’re in for a treat. Beau and Eric ramble, digress, pontificate, laugh at each other’s jokes, laugh at their own jokes, and hack their way through a forest of unsifted ideas about music and music education. And what often emerges are deep insights into music education and MLT. Could it be that children find it easier to audiate and discriminate among tonalities and harmonic functions (presented as homophonic chords) before they get into the weeds of labeling pattern functions, or even singing functional patterns on a neutral syllable? Maybe Beau and Eric are onto something.

            Andy Mullen also raised a good point about the name “partial synthesis”: he suggested the name be changed to “contextualization.” He wrote:

At this level, students are able to recognize the difference between contexts (tonalities and meters) of a series of familiar patterns. The teacher explains how to tell the difference between, for example, major and minor tonalities (by recognizing the resting tone or quality of the tonic chord) or between duple and triple meters (by pairing the patterns with the correct microbeats).

So, if that is Partial Synthesis, then I’ve often wondered if the level was unfortunately named. Eric Bluestine … thought the same thing. He proposed that the level be called Chaining/Chunking.

After much thought, I’m not sure that this name really cuts to the core of this level. For me, the essence of this level is that students bring context (either tonal or rhythm) to the music they are hearing. So, maybe an appropriate name for this level might be contextualization.

           You can read more of Andy Mullen’s thoughts on partial synthesis here. I should add that his videos on how to teach partial synthesis, and how to teach LSAs in general, are an excellent resource.

            Jennifer Bailey offered a nifty comparison between partial synthesis and a scene from The Wizard of Oz: When kids learn to understand the context of a series of patterns, it’s like the moment when Dorothy landed in Munchkin Land, and then opened the door from black and white into color! (Incidentally, Jennifer Bailey’s website SingtoKids is a treasure trove of music education insights and resources.)

            I should also mention that another colleague, Heather Shouldice, maintains a podcast in which she sheds light on various aspects of MLT. Her podcast is ideal for teachers who are new to MLT and are looking for a spoken introduction to it. In each episode, Dr. Shouldice tackles a different topic, and her presentations are always lucid and accessible. In this episode, she discusses all the MLT skill levels, including partial synthesis.

            Please join me for Part 2 of this series, in which I’ll explain more about the term “chaining/compressing” and make the case that “contextualization” may not be a better choice.

_________________________________________________________________________________________

Bailey, Jennifer: https://singtokids.com/

Bluestine, Eric. 2000. The Ways Children Learn Music: An Introduction and Practical Guide to Music Learning Theory. Chicago: GIA.

Mullen, Andrew: https://theimprovingmusician.com/

Shouldice, Heather: https://everydaymusicality.com/podcast/

Taillefer, Beau and Rasmussen, Eric: https://audiation-in-the-wild.simplecast.com/


[i] The acronym GIML stands for the Gordon Institute for Music Learning.