All posts by Bill

The Extended Secondary Dominant Principle

If the Secondary Dominant Principle ended there, it would still be a neat trick, but it would only give you one new chord to try at any particular point. In fact it goes much deeper. What Piston calls the Extended Secondary Dominant Principle I like to think of as the Secondary Cadence Principle. The Secondary Dominant Principle states that you can precede any chord of a key with its dominant. The Extended Secondary Dominant Principle says that you can precede any chord of a key with a cadential sequence that targets it!

This means that everything you know about cadences can be applied, in principle, to these secondary tonics. Not all of the results will sound good, but the options become almost limitless. Here are just a few examples.

False Cadences

A false cadence occurs when a dominant resolves irregularly, that is, to an unexpected chord. The most common false cadence is from V7 to vi:

False Cadence

The V65 (V7 in first inversion) sets up a strong expectation that the next chord will be I, but instead chord vi follows. Now secondary dominants also set up a strong expectation that the following chord will be their secondary tonic, and we can pull the same trick there:

False Cadence Submediant

Chord V/vi7 expects to be followed by vi but instead we follow with VI/vi (which is also IV—remember that because vi is minor, its submediant in the natural minor scale is major).

Plagal Cadences

You could call these Secondary Subdominants and they work pretty well when they introduce a note outside of the key:

IV of IV

Cadential Sequences

You are not restricted to just a single chord in a tonicisation. All the common cadential formulae can be applied. For example the cadential six-four:

I of vi 6-4

The common ii, V, I sequence, applied to vi:

ii of vi, V of vi, vi

Secondary Dominants

Yes, that’s not a misprint. The secondary dominant principle can be applied to sequences derived from the extended secondary dominant principle, so we can write sequences like V/V/vi (five of five of six) going to V/vi:

V of V of vi

I can only provide a few random examples of the possibilities here. You really should experiment on your own, finding out what kinds of things work well for you.

Tonicisation

Although this section is supposed to be about modulation, I’m going to start with a much more interesting and immediately useful idea called tonicisation.

This is probably the big payoff from music theory. I always used to wonder why, and more importantly how, a composer chose to sharpen or flatten a particular note in a harmony or a melody. That sharp or flat was obviously not a member of the key that the composition was in, nor did the piece obviously modulate (change key) at that point. The answer in almost all cases was a technique variously known as tonicisation, borrowed chords, or what Piston calls the Secondary Dominant Principle. Rather than waffle on any further, let’s take a slightly modified example from Piston to see how it works.

Before Tonicisation

This short sequence is very much in the key of F major. Note that in the two boxes, the first chord stands in dominant relation to (a fourth below) the second, but in both cases the first chords are minor chords. If we make those chords major then they can act as true dominants to the following chord:

After Tonicisation

These chords are secondary dominants. They function as dominants of the following chord. In fact the Secondary Dominant Principle states that any major or minor chord of a key may be preceded by its dominant.

Notice that the names of the chords have changed. Instead of the major iii6 being written III6, which would be correct, it is written V/vi6, which gives a better idea of its function. The / is read as “of”, e.g. V6 of vi (the 6 suffix applies to the whole chord, so it can be written last).

Now these are not real modulations, since in both cases the piece continues in F major as if nothing had happened. In fact the chord IV in bar four has an F♮ and a B♭ that return the key to F before the B♮ of V/V sets off in a new direction to the key of C. I have a rather colourful analogy that might help to visualise what is happening. It’s rather like a speed boat going very fast. It is both pointing in, and travelling in, one direction (key). If the captain turns the wheel, the boat will start to point in a new direction but will not actually start to change direction yet. If the wheel is righted so that it points in the old direction again, then the captain has just made the journey a little more exciting without actually changing course.

In many cases it is desirable to right the wheel very quickly after a tonicisation, by sounding the flattened leading note (third of the secondary dominant) as soon as possible. For example if the progression is V/V going to V followed by I:

V of V to V

then that final I sounds like IV/V. One common technique to rectify this is to replace chord V with chord V7, because the 7th of V is the flattened leading tone in the key of V:

V of V to V7

Note that in this example the temporary leading tone, F♯, does not rise to the temporary tonic G but instead falls to the flattened seventh F♮ as part of a familiar chromatic sequence known as a Barber Shop progression.

Relationships Between Chords

Any two different chords in sequence constitute a harmonic progression. The relationship between the two chosen chords can be quantified according to its strength, where stronger progressions sound more “convincing”. However a progression containing only strong relationships can sound boring and predictable, so you should not treat the terms “strong” and “weak” as pejorative; you need to balance both types in your compositions.

Given that a triad has only three notes, there are only three basic relationships between any two triads:

  1. The triads have no notes in common.
  2. The triads have one note in common.
  3. The triads have two notes in common.

Of course if the triads have all three notes in common they are the same so that doesn’t count.

In any given key, triads with no notes in common can only occur when the roots are a second (or a seventh) apart. They are said to be in second relation:

Second Relation

These are the strongest progressions, but not the most important, since they are somewhat too strong for continuous use.

Chords in any given key with only one note in common can only occur if the roots are a fourth or a fifth apart. They are said to be in quartan (4th) or quintal (5th) relation:

These progressions are the most common, because they include both the authentic and plagal cadences.

Chords with two notes in common, in a given key, can only occur if the roots are a third (sixth) apart, in which case they are said to be in tertian relation:

Tertian Relation

These progressions are considered relatively weak, but remember that weak does not mean undesirable.

Beyond the basic classification above, we can further categorise progressions into whether they are ascending or descending. Simply put, an ascending progression is one in which the root of the second chord is not in the first chord. Ascending progressions are considered stronger. Descending progressions, then, are those where the first chord contains the root of the second, and are considered relatively weak.

We can therefore describe the relationships between two chords any interval apart as follows:

second: second relation, ascending (super strong)
third: tertian relation, descending (weak)
fourth: quartan relation, ascending (strong)
fifth: quintal relation, descending (relatively strong)
sixth: tertian relation, ascending (relatively weak)
seventh: second relation, ascending (super strong)

Voice Leading

The idea that certain notes have a natural tendency to move to others is called voice leading. So far we have only seen the leading note rising to the tonic and the seventh of the Dominant Seventh falling to the third of the tonic. There are a few others, and if you want your music to flow, you would be wise to let these notes do what they want to do.

Four Part Harmony

Music theory, for the most part, deals with the interaction of four separate “voices” called Soprano, Alto, Tenor and Bass (S, A, T, B). They are written on two staves, the Soprano and Alto in the upper stave with a treble clef, and the Tenor and Bass in the lower stave with a bass clef. Their ranges are roughly as follows:

Voice Ranges

Although the ranges overlap, it is conventional that for any given chord the four voices stack with the Soprano on top, the Alto below the Soprano, the Tenor below the Alto and the Bass on the bottom. Here’s an example of four part harmony:

Four Part Harmony

That very simple progression demonstrates quite a lot of the rules of harmony, rules that you need to know even if you intend to break them later on. I need to lay down one really important rule first, then we can examine that progression in more detail. The rule is:

  • Avoid parallel octaves and fifths at all costs.

Parallel octaves and fifths occur between two voices, an octave or a fifth apart, when they both move in the same direction by the same interval. They are especially evil when they occur between the Soprano and the Bass, but should be avoided in all cases.

Now back to our progression. First of all notice that there is often quite a gap between the Bass and the Tenor. This is normal and desirable to get a convincing bass line. Let’s discuss each chord in turn, and consider its relation to its neighbours.

 

I
The tonic chord in root position. Notice that the Soprano, Alto and Tenor notes are spread out rather than close together. This is called open position. Their positions are not arbitrary though: they occur as E, G and C (top to bottom). This is the same order as they would appear in a first inversion triad read bottom to top. Put another way, you can get from open to close position (where the notes are as close together as possible) by transposing the Tenor up an octave. Another thing to note is that it is only the Bass of the chord that counts when identifying the inversion of the chord – the S, A and T can appear in any order.
V64 (open position)
While root position and first inversion chords can be used pretty much anywhere, the second inversion has more limited uses, and this is one of them. It is called a passing 6-4 and is placed so that its bass is on the note between the bass of a root position chord (I) and its first inversion (I6).
I6
Here’s a basic rule: don’t double the third in a major 6-3 chord. There is a reason for this. Any major chord really wants to be a dominant, in which case the third acts as the leading note and wants to rise by step to the imagined tonic in the next chord. This tendency is most pronounced if the third is in the Bass. If you were to double the third, you would have two thirds both wanting to rise by step and parallel octaves would result. For that reason what might have been an E in the Tenor has been moved down to the C, so the chord is neither in open nor close position.
IV (close position)
This chord is often used to prepare an authentic cadence (V I, or I64 V I).
I64 (close position)
This is the other main use of a 6-4 chord, as a cadential 6-4 where it precedes the dominant in an authentic cadence. This highlights why 6-4 chords are more awkward to deal with. They behave like a double appoggiatura where both the 6 and the 4 want very much to resolve downwards by step to the 5 and the 3 on the same bass. I64 V does just this. Unless cleverly hidden (such as in a passing 6-4) almost any other chord following a 6-4 will sound wrong in some way. Also notice how the progression I V64 I6 IV I64 forms a scale in the Bass. A smooth Bass is a sign of good four part writing.
V7 (close position)
Since it follows a cadential 6-4 it has the same bass note, but here the bass drops an octave for a more final effect. Notice that the third (B) rises to the tonic (C) while the seventh (F) falls to the mediant (E) in the next chord. Again, to allow these natural progressions, the following chord is in neither open nor close position.
I
As discussed, the progression V I at the end of a phrase constitutes an authentic cadence. If, furthermore, both chords are in root position, and the Soprano of the I is the tonic, then this is a perfect cadence. Many people mistakenly refer to any authentic cadence as a perfect cadence, but there is a difference.

Don’t be fooled into thinking that this is just theory – good four part harmony can stand on its own as a finished piece of music, and it forms the skeleton of most if not all tonal music that has ever been written.

Seventh Chords and their Inversions

If you add the seventh from the root to a triad, you get a seventh chord:

C Major Seventh Root Position

Different chords of the scale have different sevenths. For the major scale, they are

Chord Seventh
I7 Major
ii7 Minor
iii7 Minor
IV7 Major
V7 Minor
vi7 Minor
vii7 Minor or Diminished

Notice that V7 is the only major chord of the key with a minor seventh. This means it is a defining chord of the key. In fact since it contains both the fourth and the seventh degree of the scale, this so-called Dominant Seventh unambiguously defines the key. For example in the key of C, the Dominant seventh consists of the notes G, B, D and F. The B♮ means we cannot be in a flat key, because B♭ is the first flat, and the F♮ means we cannot be in a sharp key, because F♯ is the first sharp.

The Dominant Seventh is the most common seventh, and the progression of dominant seventh to tonic is the strongest progression in Western tonal music:

Dominant Seventh progressing to the Tonic

You may be thinking that chord vii also contains those notes, and you’re right; however, if we choose the diminished seventh of vii (which usually sounds better than the minor seventh) then we have a chord completely composed of consecutive minor thirds, and that chord can be the viio7 of four separate keys (this is useful in modulation, discussed later).

Because there are four distinct notes in a seventh chord, there are three inversions, as well as root position:


Using I as an example, the root position is just written I7, as the 3 and the 5 are taken as read. Likewise the first inversion is I65, the second inversion is I43, and the third inversion just I2. There is a useful mnemonic sequence to remember these: 7, 6-5, 4-3, 2.

Inversions

The notes of a triad can be rearranged so that different notes are in the bass. These are called inversions.

When the root of the triad is in the bass, as in all the chords we have seen so far, the chord is said to be in root position:

C Major Root Position

This chord, if it is in the key of C major, can be written I53, meaning it is the notes of the tonic chord arranged so that there is a fifth and a third from the bass. But since the 5 and the 3 are taken as read, it is more usually written as just I.

If the third of the triad is in the bass, we have what is called a first inversion:

C Major First Inversion

Because this chord has a sixth and a third above the bass, it can be written in full as I63, but again thirds are taken for granted so it can be abbreviated to just I6.

Unsurprisingly, if the fifth of the chord is in the bass, we have a second inversion:

C Major Second Inversion

 

This chord has a sixth and a fourth above the bass and is written I64, no shorthand allowed. I guess it could be written I4 with the third above the fourth implied, but I’ve never seen that.

This odd convention of a stack of numbers alongside the roman numeral of the root may seem unnecessarily complicated, since it is saying “the notes of such and such a root triad rearranged so that these intervals are above the bass”. They actually come from an old style of musical notation called figured bass, where the composer would write out the melody and the bass line in full, but would then write these numbers under the bass line to indicate to the keyboard player which chords they should be playing. In that case the numbers (without roman numerals) did indicate the intervals above the given bass note. But when that numbering was carried over into harmonic theory, the root of the chord was considered (rightly) much more important than the bass note of the chord, and so we have this rather counter-intuitive naming system that you just have to get used to.

The Y-Combinator

I’ve struggled a bit in the past to explain why letrec was necessary to allow recursion in a language with first class functions. All we’re trying to achieve is:
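
That is, something like an ordinary, self-referencing definition, sketched here in Perl for the sake of argument:

    sub factorial {
        my ($n) = @_;
        # the sub refers to itself by its (global) name
        return $n == 0 ? 1 : $n * factorial($n - 1);
    }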

But without the use of a global subroutine name, or in fact any environment assignments. If you remember, letrec created a recursive function by first creating a symbol naming the function, bound to a dummy value, then evaluating the function in the environment where its name was already present, and finally assigning the resulting closure to the symbol so the function could “see itself”. But in a purely functional setting, assignment is bad, right?

There is a little bit of programming language magic called the “Y-Combinator” that does the job. It’s very succinctly expressed in the λ calculus as:
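
In its standard form:

    Y = λf.(λx.f (x x)) (λx.f (x x))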

That is to say, a function taking a function as argument, applying that function to itself, and given (a copy of) itself as argument.

In case this seems all a bit too esoteric, here it is in F♮:

And if that’s still too esoteric here it is in Perl:
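
A direct transliteration might look something like this (note that nothing below is given a name):

    sub {
        my ($f) = @_;
        return sub { my ($x) = @_; $f->($x->($x)) }->(
               sub { my ($x) = @_; $f->($x->($x)) });
    }->(
        # the function we would like to make recursive; it never gets
        # the chance, because perl eagerly evaluates $x->($x) first
        sub {
            my ($fact) = @_;
            sub { my ($n) = @_; $n == 0 ? 1 : $n * $fact->($n - 1) };
        }
    );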

Notice that we haven’t named any subroutine, so on the face of it recursion is impossible, but nonetheless, if you give the above code to perl it will very slowly rattle your discs until it dies with an out of memory exception, without even a deep recursion error, because there’s no function name for perl to attribute the recursion to.

Before going any further I should point out that none of this is of any value to you whatsoever, other than to assuage your curiosity. Almost all modern languages allow recursion, if not actively support and encourage it (supporting, as opposed to just allowing, recursion is a fine but important point: Scheme supports recursion, Perl and its ilk merely allow it). Anyway, we can use the Y-combinator to calculate a factorial:
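
A sketch of the kind of code I mean, with the inner and outer subs arranged as described below:

    my $result = sub {
        # the outer sub captures the inner sub in its own $factorial...
        my ($factorial) = @_;
        # ...then both calls $factorial and passes $factorial to it,
        # along with 5, the number we want the factorial of
        $factorial->($factorial, 5);
    }->(sub {
        my ($factorial, $n) = @_;
        # having been handed itself, the inner sub can recurse by
        # calling $factorial as a subref
        $n == 0 ? 1 : $n * $factorial->($factorial, $n - 1);
    });

    print "$result\n";    # prints 120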

Once the inner sub has got hold of itself in $factorial, it can call $factorial as a subref. The outer anonymous sub bootstraps the whole thing by:

  1. Capturing the inner sub in its $factorial
  2. Both calling  $factorial  and passing  $factorial  to it
  3. Passing an extra argument, 5, the number we require the factorial of.

(Off-Topic Rant) Dependency Injection Catalogues

I’m actually quite annoyed, for once. I remember reading a completely lucid description of Dependency Injection some time ago, but recently I’ve done a brief search of the web for documents on the subject and they’re unanimously impenetrable, at least for someone with my attention span. So here’s my explanation of DI Catalogues in as few words as I can.

Firstly we need a catalogue:
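
A minimal sketch in Perl will do (the package and method names are only illustrative):

    package Catalogue;

    use strict;
    use warnings;

    sub new {
        my ($class) = @_;
        return bless { builders => {} }, $class;
    }

    # register a named builder: a code ref that will be handed the
    # catalogue itself, so it can look up its own dependencies
    sub register {
        my ($self, $name, $builder) = @_;
        $self->{builders}{$name} = $builder;
        return $self;
    }

    # resolve a name by running its builder
    sub get {
        my ($self, $name) = @_;
        return $self->{builders}{$name}->($self);
    }

    1;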

Next we need to populate it:
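
Populating it means registering a builder against each name; My::Logger and My::Database here are just stand-ins for whatever classes you actually use:

    my $catalogue = Catalogue->new;

    $catalogue->register(logger => sub {
        My::Logger->new;                        # stand-in class
    });

    $catalogue->register(database => sub {
        my ($cat) = @_;
        # the database does not construct its own logger,
        # it asks the catalogue for one: that is the injection
        My::Database->new(logger => $cat->get('logger'));
    });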

Finally we get to use it:
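
And the pay-off: the caller asks for the top-level component and everything it depends on is wired up behind the scenes:

    my $database = $catalogue->get('database');
    # $database arrives fully built, logger and all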

That is all there is to it! Of course this omits all error checking, but you can add that yourself once you understand the principles.

Algebraic Data Types and Pattern Matching

What may not be clear to readers in a lot of the previous discussions is the use of Algebraic Data Types in combination with pattern matching to define functions. It’s really quite simple, conceptually (implementation may be a different matter, we’ll see). Here’s an example we’ve seen before; I’ll just be more descriptive this time:

This declaration achieves two things:

  1. It defines a type  list(t)  (list of t) where  t is a type variable that can stand for any type.
  2. It creates two constructor functions, called cons and null, that accept arguments of the specified types (none in the case of null) and return data of type list(t).

Reading it aloud, it says: define a type list of some unspecified type t which is either a cons of a t and a list of t, or a null.

Once defined, we use these type constructors to create lists of a concrete type:

After the above definition, a has type list(bool). The following, on the other hand, would fail to type check:

It fails because:

  • cons('x', null) is of type list(char).
  • The outer cons expects arguments <t> and list(<t>), but it gets bool and list(char).
  • The outer cons cannot reconcile <t> = bool with <t> = char, so the type check fails.

That’s all very nice, but how can we use Algebraic Data Types? It turns out that they become very useful in combination with pattern matching in case statements. Consider:

In that case statement, a must match either cons(head, tail) or null. Now if it matches cons(head, tail), the (normal) variables head and tail are automatically created and instantiated as the relevant components of the cons in the body of the case statement. This kind of behaviour is so commonplace in languages like ML that special syntax for functions has evolved, which I’m borrowing for F♮:

This version of length, instead of having a single formal argument list outside the body, has alternative formal argument lists inside the body, with mini bodies of their own, just like a case statement. It’s functionally identical to the previous version, but a good deal more concise and readable.

One thing to bear in mind, in both versions, is that length has type list(t) → int. That is to say, each of the formal argument lists inside the body of a function, or the alternative cases in a case statement, must agree in the number and types of the arguments, and must return the same type of result.

Now, it becomes obvious that, just as we can rewrite a  let to be a lambda, this case statement is in fact just syntactic sugar for an anonymous function call. The earlier definition of  length  above, using a case statement, can be re-written as:

so we get case statements: powerful, pattern matching ones, allowing more than one argument, for free if we take this approach.

length is polymorphic. It does not do anything to the value of head, so it does not care about its type. Therefore the type of length, namely list(t) → int, actually contains a type variable t.
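
Perl has neither algebraic data types nor pattern matching, but the run-time shape of the idea can be faked with tagged hashes, dispatching on the tag where F♮ would match a pattern. A rough sketch (all the names are mine; list_length avoids clashing with Perl’s built-in length):

    # constructors build tagged hashes...
    sub null { return { tag => 'null' } }
    sub cons {
        my ($head, $tail) = @_;
        return { tag => 'cons', head => $head, tail => $tail };
    }

    # ...and the "pattern match" becomes a dispatch on the tag
    sub list_length {
        my ($list) = @_;
        return $list->{tag} eq 'null'
            ? 0
            : 1 + list_length($list->{tail});
    }

    my $list = cons(1, cons(0, null()));
    print list_length($list), "\n";    # prints 2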

Here’s a function that does care about the type of the list:

Assuming strlen has type string → int, that would constrain sum_strlen to have type list(string) → int. Of course that’s a rather silly function; we would be better off passing in a function, like this:

That would give sum a type:

and we could call it like:

or even, with a Curried application:

This is starting to look like map-reduce. More on that later.
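
In the same tagged-hash style, here’s a Perl sketch of a sum that takes the measuring function as an argument, plus a Curried flavour built from a closure (again, every name is invented for the example):

    sub null { return { tag => 'null' } }
    sub cons {
        my ($head, $tail) = @_;
        return { tag => 'cons', head => $head, tail => $tail };
    }

    # the function applied to each element is passed in
    sub sum {
        my ($fn, $list) = @_;
        return $list->{tag} eq 'null'
            ? 0
            : $fn->($list->{head}) + sum($fn, $list->{tail});
    }

    # a "Curried" application: fix the function, get back a one-argument sub
    sub curry_sum {
        my ($fn) = @_;
        return sub { my ($list) = @_; sum($fn, $list) };
    }

    my $strings    = cons("foo", cons("barbaz", null()));
    my $sum_strlen = curry_sum(sub { length $_[0] });
    print $sum_strlen->($strings), "\n";    # prints 9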

Real-World Applications

Algebraic Data Types really come into their own when it comes to tree walking. Consider the following definitions:

Given that, we can write an evaluator for arithmetic expressions very easily:

So eval has type expr(int) → int. We can call it like:

to get 17.
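
To make this concrete, here is a Perl sketch in the same tagged-hash style, with an explicit dispatch standing in for the pattern match (the names are mine, evaluate rather than eval because eval is a Perl built-in, and 2 + 3 × 5 is just a convenient way of reaching 17):

    # constructors for a tiny expression tree
    sub num { return { tag => 'num', value => $_[0] } }
    sub bin {
        my ($op, $left, $right) = @_;
        return { tag => $op, left => $left, right => $right };
    }

    # the evaluator recurses into the sub-trees and dispatches on the tag
    sub evaluate {
        my ($expr) = @_;
        return $expr->{value} if $expr->{tag} eq 'num';
        my $left  = evaluate($expr->{left});
        my $right = evaluate($expr->{right});
        return $left + $right if $expr->{tag} eq 'add';
        return $left - $right if $expr->{tag} eq 'sub';
        return $left * $right if $expr->{tag} eq 'mul';
        return $left / $right if $expr->{tag} eq 'div';
        die "unknown node: $expr->{tag}";
    }

    my $tree = bin('add', num(2), bin('mul', num(3), num(5)));
    print evaluate($tree), "\n";    # prints 17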

Pattern matching not only covers variables and type constructors, it can also cope with constants. For example here’s a definition of factorial:

For this and other examples to work, the cases must be checked in order, and the first case that matches is selected. So the argument to factorial would only match n if it failed to match 0.

As another example, here’s member:

Here I’m using F♮’s built-in list type constructors @ (pronounced cons) and [] (pronounced null), and a wildcard _ to indicate a don’t-care variable that always unifies, but apart from that it’s just the same as the cons and null constructors. Anyway, the cases say:

  • member(item, list) is true if item is at the head of the list.
  • member(item, list) is true if item is a member of the tail of the list.
  • item is not a member of the empty list.
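
Those three cases translate quite naturally into a Perl sketch, using an ordinary array reference for the list and 1/0 for true and false:

    sub member {
        my ($item, $list) = @_;
        return 0 unless @$list;              # not a member of the empty list
        my ($head, @tail) = @$list;
        return 1 if $head eq $item;          # found at the head
        return member($item, \@tail);        # otherwise look in the tail
    }

    print member('b', ['a', 'b', 'c']), "\n";    # prints 1
    print member('z', ['a', 'b', 'c']), "\n";    # prints 0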

Problems and Solutions

You’ve probably realised that given a type like  list(t) above, it’s not possible to directly create lists of mixed type. That is because it is usually a very bad idea to do so. However if you need to do so, you can get around the restriction without breaking any rules, as follows:

  1. Create a container type for your mixed types:
  2. Create lists of that type:

After the above definition, a has type list(either(string, int)), and you can’t get at the data without knowing its type:

Here, sum_numbers has type [either(<t>, int)] → int, i.e. it doesn’t care what type first holds. We could have written first(s) instead of first(_), but the use of a wildcard _ explicitly says we don’t care, stops any potential warnings about unused variables, and is more efficient.
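
As a final Perl sketch of the idea, the container type becomes a pair of tagging constructors, and sum_numbers only looks inside the entries tagged as numbers (every name here is invented for the example):

    # each constructor wraps a value and remembers which kind it is
    sub first  { return { tag => 'first',  value => $_[0] } }
    sub second { return { tag => 'second', value => $_[0] } }

    # add up only the 'second' (integer) entries, ignoring the rest
    sub sum_numbers {
        my (@list) = @_;
        my $total = 0;
        for my $item (@list) {
            $total += $item->{value} if $item->{tag} eq 'second';
        }
        return $total;
    }

    my @mixed = (first('width'), second(10), first('height'), second(32));
    print sum_numbers(@mixed), "\n";    # prints 42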