Voice Roles, Part I

This module explores the roles of the different parts in four-voice Baroque counterpoint, using the music21 library.

J. S. Bach’s chorales are often used as a model for musical textures with four voices. The voices are perceived as independent from one another through the unique roles they take on in the composition. By comparing the voices within a single composition, we can get a better idea of these roles.

We’ll use one of J. S. Bach’s harmonizations of the melody Jesu, meine Freude (BWV 87/7).

Let’s import this chorale from the music21 corpus:

from music21 import *

chorale = corpus.parse('bach/bwv87.7.mxl')

chorale.show()
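
If no notation program is configured on your system, show() may not be able to display the score. In that case, music21’s plain-text rendering of the score’s contents is a convenient fallback:

chorale.show('text')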

Next, we can divide the chorale up by part:

soprano = chorale.parts[0]

alto = chorale.parts[1]

tenor = chorale.parts[2]

bass = chorale.parts[3]

Concise code is often easier to read. When we need to assign multiple variables at once, we can use commas to separate the variable names and values within a single line, like this:

soprano, alto, tenor, bass = chorale.parts[0], chorale.parts[1], chorale.parts[2], chorale.parts[3]
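
Since chorale.parts is itself iterable, the same assignment can usually be written even more compactly by unpacking it directly. This is a small sketch; it assumes the parts appear in soprano-alto-tenor-bass order, as they do in this chorale:

soprano, alto, tenor, bass = chorale.parts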

First, let’s analyze the range of each part, using music21’s built-in analyze() method:

soprano.analyze('range')
> <music21.interval.Interval m10>

alto.analyze('range')
> <music21.interval.Interval m9>

tenor.analyze('range')
> <music21.interval.Interval m9>

bass.analyze('range')
> <music21.interval.Interval m13>

The bass covers the widest range, while the alto and tenor cover the narrowest ranges, at just over an octave. The soprano lies in between.
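
To see exactly which pitches bound each part, one option is a small sketch like the following, which uses each pitch’s .ps value (its position in pitch space) to find the lowest and highest notes:

for name, part in [('soprano', soprano), ('alto', alto), ('tenor', tenor), ('bass', bass)]:
	low = min(part.pitches, key=lambda p: p.ps)  # lowest pitch in the part
	high = max(part.pitches, key=lambda p: p.ps)  # highest pitch in the part
	print(name + ":", low, "to", high)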

In four-voice textures, the bass often covers an especially wide range: it is typically responsible for leaping between harmonically important notes, which may be separated by wide intervals. The bass also often makes octave leaps when repeating a given note, as you can see in several places in the score.

Let’s gather the melodic intervals for all of the parts into four new variables (again, using a single line of code):

s_int, a_int, t_int, b_int = soprano.melodicIntervals(), alto.melodicIntervals(), tenor.melodicIntervals(), bass.melodicIntervals()
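
These interval streams also give us a quick way to verify the bass’s octave leaps mentioned above. This sketch assumes that melodicIntervals() attaches the notes forming each interval as .noteStart and .noteEnd, as current versions of music21 do:

for itv in b_int:
	if itv.name == 'P8':  # perfect octave, ascending or descending
		print(itv.noteStart.nameWithOctave, "to", itv.noteEnd.nameWithOctave)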

We can compare the prevalence of melodic intervals in each part by using another built-in analysis feature of music21. Let’s analyze the soprano part first:

analysis.discrete.MelodicIntervalDiversity().countMelodicIntervals(soprano)

> {'M2': [<music21.interval.Interval M2>, 15], 'm2': [<music21.interval.Interval m2>, 12], 'P5': [<music21.interval.Interval P5>, 2], 'P4': [<music21.interval.Interval P4>, 3], 'm3': [<music21.interval.Interval m3>, 2], 'M3': [<music21.interval.Interval M3>, 1]}

This single (if slightly long-winded) statement gives us the frequency of each melodic interval in the soprano part. The results are given in the form of a dictionary, which organizes data into “key” and “value” pairs. Here, each key is an interval name, and each value is a two-element list: an Interval object and the number of times that interval appears.
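
For example, indexing the dictionary by an interval name returns its [Interval, count] pair, so the count itself lives at index 1 (using a throwaway variable, result, for illustration):

result = analysis.discrete.MelodicIntervalDiversity().countMelodicIntervals(soprano)

print(result['M2'][1])
> 15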

For legibility, we can use a for loop to print just each interval name and its count:

s_analysis = analysis.discrete.MelodicIntervalDiversity().countMelodicIntervals(soprano)

for itv in s_analysis:
	print(itv + ":", s_analysis[itv][1])
> M2: 15
> m2: 12
> P5: 2
> P4: 3
> m3: 2
> M3: 1

It’s clear that smaller intervals, seconds in particular, are heavily favored.
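
We can put a rough number on that claim by reusing the dictionary from above: sum the counts of the two kinds of second and divide by the total number of melodic intervals.

steps = s_analysis['m2'][1] + s_analysis['M2'][1]

total = sum(entry[1] for entry in s_analysis.values())

print(round(steps / total, 2))
> 0.77

About three quarters of the soprano’s melodic motion is stepwise.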

Let’s compare the other parts:

diversity = analysis.discrete.MelodicIntervalDiversity()  # reuse one analyzer for all three parts

a_analysis, t_analysis, b_analysis = diversity.countMelodicIntervals(alto), diversity.countMelodicIntervals(tenor), diversity.countMelodicIntervals(bass)

for itv in a_analysis:
	print(itv + ":", a_analysis[itv][1])
> m2: 14
> M2: 14
> M3: 3
> m6: 1
> m3: 2
> P4: 4

for itv in t_analysis:
	print(itv + ":", t_analysis[itv][1])
> m2: 14
> P4: 6
> M2: 16
> M6: 1
> m3: 6
> P5: 1
> M3: 4

for itv in b_analysis:
	print(itv + ":", b_analysis[itv][1])
> M2: 20
> m2: 16
> m3: 8
> m6: 2
> P8: 5
> P4: 9
> M3: 1
> P5: 1

Overall, seconds are heavily favored in all parts, though the bass clearly has more of the wider intervals, including five octave leaps (P8).
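
One way to quantify this difference is to average the size of each part’s melodic intervals in semitones. Here’s a sketch reusing the interval streams we stored earlier; the .semitones property is signed, so we take absolute values (run it to see the figures for this chorale):

for name, part_int in [('soprano', s_int), ('alto', a_int), ('tenor', t_int), ('bass', b_int)]:
	sizes = [abs(itv.semitones) for itv in part_int]
	print(name + ":", round(sum(sizes) / len(sizes), 2))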

Continue this analysis with the next module in the sequence.

Extensions

  1. How would we get the normalized prevalence of each melodic interval (from 0 to 1), rather than the raw count?
  2. The order in which intervals are output from the dictionaries reflects the order in which the key was first assigned (i.e. the order in which the intervals first appeared). This can be confusing when trying to parse the data. How could we display the intervals from smallest to largest? (Hint: try using the .semitones property.)
  3. What other properties might we analyze to better understand the role of each voice?