An Unnecessary Introduction
Back in 1943, the linguist and all-around genius Rudolf Flesch created a readability formula. Basically, it’s a mathematical formula you can apply to a piece of writing to measure how readable it is. In 1948, Flesch revised the formula, and you can still find this revised version in Microsoft Word if you 1) click on “Tools” at the top of your screen, 2) scroll down to “Spelling and Grammar,” 3) click on “Editor,” and then 4) click on “Document Stats” at the bottom of the page. A window called “Readability Statistics” will pop up telling you, among other things, your document’s Flesch Reading Ease score, which is a number between 0 and 100.
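For the curious, Flesch’s revised 1948 formula is simple enough to sketch in a few lines of Python. (The word, sentence, and syllable counts below are invented, purely for illustration.)

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch's revised (1948) Reading Ease formula.

    Higher scores mean easier reading; the formula rewards
    short sentences and short words.
    """
    return (206.835
            - 1.015 * (words / sentences)    # average sentence length
            - 84.6 * (syllables / words))    # average syllables per word

# A hypothetical 100-word sample with 7 sentences and 154 syllables
# lands right in the low 60s.
score = flesch_reading_ease(100, 7, 154)
print(round(score, 1))  # → 62.1
```

Counting syllables by machine is the hard part in practice; Word does it for you.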
I won’t go into too much detail about what the reading ease score means—mainly because no one writes about Flesch’s work better than Flesch himself, and I hope all of you will seek out his books and read them. For now, I’ll simply say that if your piece of writing scores around 60, then it’s readable without being flippant or sophomoric. Is a higher score closer to 100 automatically better? No. If samples of your writing consistently score above 70, then your writing may not sound mature enough to be taken seriously by the adult audience you want to reach. If, on the other hand, your writing dips below 50 or, heaven forbid, creeps into the low 40s or (shudder) the 30s, then few readers will persevere through your prose. A score in the low 60s is, for me, the Goldilocks ideal.
Random samples of my book The Ways Children Learn Music average a Flesch Reading Ease score of 62—a home run into the left-field bleachers! This was not by chance. I revised my book dozens of times and tested dozens of 100-word samples until I got the Reading Ease score I wanted.
What’s the readability score of this blog post? Read the whole post to find out.
Will Flesch’s readability formula, by itself, make you a good writer? Is it a shortcut that can replace years of writing practice? Of course not. Flesch (1954, p. 18) had this to say: “If you feel that your writing or speaking is not up to par and you apply my formula, it won’t make you feel better like a drug; but it will measure how sick you are—like a thermometer.”
Why am I going into this? To show that you can love writing, love music, love the arts, but still use measuring tools to improve your work. I see no contradiction there.
The same thing applies to measurement in the music classroom. We can measure our students’ music aptitudes; we can chart their growth with achievement tests and rating scales. And we can do so without turning into data-driven robots who treat kids like cogs in a machine in some dystopian nightmare.
What got me started on all this was a thread about assessment on a music education Facebook page. I will not mention the name of the page or the names of any respondents, of course. But most of the colleagues who weighed in were flatly against measuring student growth. A few of them suggested, perhaps half-jokingly, that music teachers should simply walk around the classroom with a clipboard and pretend to make notes when the principal is watching.
I wasn’t amused. I have devoted my professional life to measuring student achievement with tests and rating scales—tools I use to help students grow. Some colleagues were, it seemed to me, making light of my life’s work.
At one point, I thanked the music teacher who started the thread, and then I went on to say the following:
“Outside of Gordon MLT circles, music education measurement is a tough sell. I never quite got why many music education colleagues are turned off by measuring and evaluating student progress. It seems like we want to have it both ways. We hate it when we’re treated as if we’re not ‘real’ teachers, as if we’re somehow beneath the homeroom teachers. Still, we refuse to put our grown-up pants on by measuring student growth with rigorous assessment. If we’re not going to measure and chart student achievement, then we can’t complain when we’re not taken seriously by our administration, by parents, and by non-music colleagues. The good news is that we can assess rigorously and still make our classes fun!”
My words fell on deaf ears. And soon the anti-assessment excuses (shown in italics below) started to pour in. My responses are in plain text. Incidentally, I did not respond to these statements on Facebook. Not my venue. But this blog is mine, and I’ll say anything I damn please!
- “I’ve seen Gordon’s tests administered a thousand times. They’re just not for me.”
Have you really seen them administered a thousand times? By whom? Have you seen music teachers use the test results to improve instruction? If you’re so against measuring student aptitude, then why did you waste your time watching another teacher administer aptitude tests a thousand times?
- “I excelled in music classes. I didn’t become a music teacher or fall in love with music because of my growth in tonal or rhythmic knowledge or increase in music literacy though. I just loved how I FELT in the music room. That’s the only assessment I need or want to give.”
You loved how you felt in the music room. Well, good for you. Now, what about your students? Don’t they deserve to feel good in the music room too? Surely you will acknowledge that if kids have poor tonal and rhythm skills, then they won’t feel good about their music ability in your classroom for very long. Perhaps you find it easier just to ignore the kids who struggle; instead, you choose to focus, selectively, on the kids who love participating. Did you stop to think that there may be a connection between 1) how often kids participate in music, and 2) whether their performance skills are improving?
- “I do assess student growth. I just don’t feel a need to formally write it down.”
Since this music teacher clearly went down the rabbit hole, I’ll respond by quoting from Lewis Carroll’s Through the Looking Glass:
“The horror of that moment,” the King went on. “I shall never, never forget!”
“You will though,” the Queen said, “if you don’t make a memorandum of it.”
Take a lesson from the Queen. If you’re serious about assessment, don’t rely on your memory. Write it down!
- “We have so little time for assessment.”
You and me both, my friend! My belief is that we have time for what we value. Back in the 1980s, when Joe Biden was a senator, he said to his colleagues, “Don’t tell me your values. Show me your budget.” I say something similar to my colleagues and administrators: Don’t tell me your values. Show me your schedule. If you value measurement and assessment, you’ll make time; and you’ll also learn to assess your students’ music growth efficiently. And you’ll chart the growth of every child, not just the ones who are eager to participate.
Is measuring growth the same as grading? Let me put it this way. I love assessment; I hate grading. Charting individual student growth over weeks, months, and years (!) helps to make me a better teacher. Grading, on the other hand, is a necessary evil. I wish I didn’t have to grade students at all. In fact, I don’t believe in grades. I think it’s immoral for one human being to “grade” another. When we grade, we essentially tell a student what they’re worth. And I find that abhorrent. But, of course, I grade students (with mostly As and Bs) because I must, to keep my job. If grades were completely abolished—and I hope, one day, the human race will be enlightened enough to do that—I’d still measure and evaluate each student’s progress.
So where are we? Some teachers say they don’t have time to assess student growth; others believe assessment interferes with classroom fun. Are other factors at play? While I was writing this post, a colleague, Heather Shouldice, sent me a link to a study she recently conducted (2022). Her findings showed a provocative correlation: those music teachers who don’t believe assessment is important are more likely to believe in the idea of innate musical talent—the notion that some students have it, while others don’t. And by extension, if some students lack innate musical talent (and therefore cannot grow tonally and rhythmically no matter how hard they try), then why should music teachers bother with assessment?
Here is my response to her: “Hi Heather, I never thought of it before, but you’re raising a good point: it could be that the MLT folk are at odds with a high percentage of the rest of the music education community. We MLTers take as a given that all students can achieve musically; no one has zero music aptitude; and because all students can grow, we have a professional obligation to chart the growth of each student.”
MLT folks do something most music teachers don’t do: We ask every child 1) to sing by themselves, and 2) to perform rhythm patterns by themselves. We shift from a group activity to individual assessment. Then group, then individual, then group, and on and on.
And how should we communicate with administrators about assessment? My experience is that administrators care more about what we do than about how we measure growth. To that end, I make audio and video documents of students’ music activities, and I share them with my principal. If my principal knows that my students 1) perform group folk dances with consistent tempo, 2) sing with gorgeous in-tune head voice, and 3) love doing it, then the administration is more inclined to listen to me about assessment. But even then, I never delve too deeply into it. I tell my principal something like the following:
“In my classes, I ask every child to sing and to perform rhythms individually. I go first; they go second. If kids are good at singing and keeping a beat, then they get to perform the tough melodies and rhythms; if a child can’t do the tough melodies and rhythms, I ask them to perform the easy ones. Sometimes 2 or 3 kids will perform together if a student is shy about performing alone. It’s all about meeting the individual needs of each child. And I keep careful records of each child’s achievement so I can meet their individual needs. This way, the high achievers won’t grow bored, and the low achievers won’t grow frustrated.”
No reasonable administrator can argue with that.
We music teachers seem to differ from each other in our definitions of success. There may be music teachers reading this post who don’t put much stock in assessment. I’ll leave them with this thought:
Music teachers who say that a lesson was a success based on student engagement are looking only at the kids who are happy and engaged. Those teachers, I believe, turn a blind eye to the kids who are unengaged; they focus too much attention on the forest, with hardly any attention devoted to the trees—the students as individuals. Group engagement in a music class is often an illusion. Those teachers committed to educational measurement and evaluation know that individual assessment plus whole-class involvement is the only way to gauge how well a lesson worked.
P.S. The Flesch Reading Ease score of this blog post is 66, which puts it at an 8th grade reading level. It also means that this post was written in a style serious enough to be taken seriously, but not so densely packed as to cause most readers to struggle.
Bluestine, Eric. (2000) The Ways Children Learn Music: An Introduction and Practical Guide to Music Learning Theory. Chicago: GIA.
Flesch, Rudolf. (1943) Marks of Readable Style: A Study in Adult Education. New York: Bureau of Publications, Teachers College, Columbia University.
Flesch, Rudolf. (1948) “A New Readability Yardstick.” Journal of Applied Psychology 32(3): 221–233.
Flesch, Rudolf. (1954) How to Make Sense. New York: Harper and Brothers.
Shouldice, Heather N. (2022) “An Exploratory Study of the Relationships Between Teachers’ Beliefs About Musical Ability, Assessment, and the Purpose of Elementary General Music.” Visions of Research in Music Education 39, Article 6.