

Chapter 3:
Compositional Aspects
of Generative Music


Nowhere else do musicality and (electronic) technique go so tightly hand in hand as in making music with modular synthesizers, and nowhere else does the artist need such a deep understanding of both music and physics.


Therefore we need to talk about both again, even here in chapter 3, which is dedicated to composing rather than to the mere act of patching and using certain modules.


Nevertheless it's important to ask "Why?":
Why do I want a certain sound at a certain moment?
Why should I increase randomness here and predictability there?
Why should there be a certain relation between completely different parameters of music and sound (e.g. timbre and note length)? And so on.


Only our compositional will can give satisfying answers to these (and similar) questions. Let's sharpen this will, then.


Chapter 3.1:
General Thoughts, Strategies
And Basic Compositional Decisions


Are the terms “composing“ and “generative“ mutually exclusive antipodes?
No, they are not.


Composing in classic, neoclassic, pseudo classic, contemporary modern classic etc. ways is like walking a dog on a short leash.

Composing generative music means walking this dog on a quite long leash – but not without a leash at all.



There are some general decisions to make, the very first of which reads:
 

Decision 1:
Shall our piece have a "spline" – a certain kind/category of sound, or rhythm (or no rhythm at all), or melody (pitch development), or (chord) progression, etc.?


But let´s be conscious about the following:
Even if we want the piece to have something like a musical spline, even if there is something (audibly) characteristic in our piece, that does not mean at all that our piece has to develop in a more or less classical way, walking "along this sonic spline". It can, but it doesn't need to.
Just a simplified example:
I may have decided to use mainly plucked FM sounds in my piece. This "family" of sounds shall serve as the spline of the whole piece, together with the fact that I won't use any noticeable rhythmic structures in the piece.
Now I can give the pitch development an overall direction (by changing the range of randomness), so that the sonic events reach higher and higher frequencies on average, then go down again, then jump up and down, etc.

By the way: what I'm saying about pitch in this example is valid for any other sonic parameter as well.
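
To make the idea of "an overall direction made of random events" concrete, here is a minimal sketch in plain Python (not part of any preset; the step count, the MIDI-style note numbers and the range of ±5 semitones are arbitrary assumptions): the individual pitches stay random, only the centre of the random range rises and then falls again.

```python
import random

# A sketch of a random pitch development with an overall direction:
# the centre of the random range rises for the first half of the piece,
# then falls again, while every single pitch remains random.

def random_pitches_with_direction(num_steps=32):
    pitches = []
    for step in range(num_steps):
        progress = step / (num_steps - 1)                # 0.0 .. 1.0
        centre = 60 + 24 * (1 - abs(2 * progress - 1))   # rises from 60 to 84, then back
        pitches.append(centre + random.uniform(-5, 5))   # randomness around the centre
    return pitches

print([round(p) for p in random_pitches_with_direction()])
```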



Or I can use a certain sonic situation (pitch, phrase, timbre etc.) as an anchor point from which the piece moves away, only to return to it after some time (perhaps not completely, but in a quite similar shape), then walking away again, returning again, etc.



Or I don't want any noticeable continuity at all. I let the piece (randomly) jump from sonic situation to sonic situation, from one musical terrain to a (completely) different musical area, where the piece stays for a while, exploring the sonic potential of that terrain before it jumps somewhere else.



The third (last-mentioned) kind of structure denies the existence of a sonic spline more often than not, and in its purest form it answers the question of Decision 1 with "no, no spline at all".


This approach is inspired by Stockhausen's idea of moment form:
"a given moment is not merely regarded as the consequence of the previous one and the prelude to the coming one, but as something individual … capable of existing on its own."

(from: K. H. Stockhausen, "Momentform: Neue Beziehungen zwischen Aufführungsdauer, Werkdauer und Moment", in "Texte zur Musik 1", 1963, pp. 189-210)


Let me go a bit deeper here – otherwise it's going to get too theoretical. In the video behind the following link, based on the preset "MOMENTS.vcv", I demonstrate the last-mentioned approach.


There are two completely different sonic areas.


One of them is a plucked sound playing random notes from a certain scale, plus a bowed sound playing – also randomly – notes from the same scale. Both sounds are sent through a reverb to the main audio output.
There is a lot of – independent – modulation going on to explore the potential of these basic parameters: the rhythm at which both sounds produce notes changes permanently, the timbre of both sounds is modulated, and the bowed sound is sent through a low-pass filter whose cutoff frequency is modulated at changing speeds.


The other sonic area consists of a bass line and a gliding sound, both playing an arpeggio of one and the same chord, with both voices selecting their arpeggio notes independently of each other. The pulse width of the bass-line sound is modulated by an LFO. An LFO-inverter combination lets the whole patch jump between these two different sonic areas ("moments").
https://youtu.be/m6eL0lXkiKU



Contrasting these two different moments keeps the listener awake for a while! :-))


Let me just mention that everything I'm writing here about the structure of our piece is also valid for individual parts of it, in case we are working on a longer one. And each part may of course follow different decisions (some parts following a "spline", a certain sonic baseline, others not, etc.).


If we decide not to have a central sonic theme – I called it a "spline" – we have to make the next decision:


Decision 2:
Shall there be a noticeable development, a direction, at all, or do we prefer a succession of random and quite unstructured single sonic events?


A “noticeable sonic direction” is something else, something less binding than what I called a “spline”.

An example:

A spline may be a certain pitch development using a certain class of timbres. Or a certain class of envelopes (e.g. plucked, bowed etc.). Or a certain combination of sonic parameters like timbre, volume, scale, envelope, rhythm etc.

But even without such a central, basic, typical way of using certain sonic parameters, a piece can have a noticeable direction.
Let's say I had chosen an averagely rising and averagely falling pitch development (over different octaves, with different grades of probability and different ranges of probability) as my spline, and all other parameters were completely random.
When I give up this spline, when I set even the pitch parameter completely at random, without limitations, I can nevertheless give the piece a direction by, for example, consecutively adding more and more voices to the piece (one after the other, or two voices added, followed by one voice removed, followed by two more voices, etc.). The increasing number of voices is the direction. None of the voices develops along a spline though, nor does the whole piece.


The preset “direction.vcv” is a simple 7-voice example – more stylised than composed – and the video behind the following link demonstrates it without any talking.
https://youtu.be/8nvytYMGKXA


There are a lot of different ways to give a piece (or a part of a piece) direction. Instead of adding voice after voice we can add more and more different timbres to the same pitch development/melody, use different envelopes, etc., but the idea stays the same.


Now for the other side of Decision 1:

I do want to have a spline, but I prefer the third alternative mentioned above, the "jumpy" structure. The other two ways, which are based on more or less continuous movements, allow me to use nearly anything as my sonic spline, because I can slowly and consecutively modify any sonic parameter without causing the impression that the piece has left one sonic world and entered a completely different one (e.g. second 5 of the piece resembles second 1, and second 5 develops into second 10, which resembles second 5, etc.).


But when I want jumps instead of continuous developments I have a problem: I want the piece to leave one sonic world and enter another, but there shall be something similar, something familiar in both. Now I cannot use just any sonic parameter as my spline any more. Timbre, for example, will in most cases be too strong, too determining, too specific to allow the piece to leave its current sonic world. The same goes for scales. A weak and not too pointed rhythm will do, but the parameter "no rhythm", on the other hand, is too weak to serve as a sonic spline. Pitch development (melodic or not) is too strong again.


So I will have to resort to combinations of quite strong parameters, one of which stays stable and only slightly changed (but perhaps attenuated, sent to the background), while the others change dramatically. When I choose pitch development as one of the strong and dramatically changing parameters (e.g. to contrast an unchanged timbre) I run into another problem: I have to tame randomness, because two random pitch developments will never be fundamentally different enough to contrast the unchanged timbre.


Another way – quite often an easier one – is to fade in remnants of the sonic "spline world" after the jump to another "sonic planet", like pale memories of bygone times. The preset "splinejump.vcv" and the video behind the following link show an example.
https://youtu.be/meXizM5iQJY


The "pale memories" are playing with a higher amount of reverb in this example, and the jump away from the "spline world" is emphasised by a cannon-shot-like sound generated by abruptly switching the reverb.

There are at least two other decisions to be made:
Decision 3:
Do I allow myself to intervene from time to time?

And
Decision 4:
Do I follow a clean, puristic approach, or do I allow myself to record single pieces and put them together in a later production process?

Besides pure "ideology", Decision 3 depends on the length of our piece and on what kind of listener we are addressing. The longer the piece is, the more often we will need to intervene and change the patching if we want to keep the attention of an active listener – a listener who really wants to notice, understand and savour the details of our music. A listener who would rather relax with a bit of sonic goings-on in the background, who doesn't want to get distracted from their own thoughts, flowing feelings or meditations, won't mind if the piece doesn't really develop and doesn't deliver peaks of sensation over quite a long time.


Decision 4 is mere ideology – and not really dependent on whether we are performing live or producing the piece in our studio, because even in live situations we can insert pre-recorded parts and samples.

Chapter 3.2:
Basic Compositional Techniques


There are three groups of basic compositional techniques which are not exclusive to generative music, but are taught at any school, high school or university that deals with musical matters. We just meet them here in generative music in a slightly different shape and form than we are used to seeing in other musical styles (classical, common pop etc.). These techniques are:

  • contrasting sonic events

  • repeating sonic events

  • inverting relations between sonic parameters

 

Let's have a look at the specific form they take in generative music.
 

Chapter 3.2.1:
Contrasting


Pre-recorded sound files vs. patch generated sonic events


We can contrast pitch developments (with or without a noticeable melody line) from parts of our patch with pre-recorded sounds, either from the “real world” (like in musique concrète), or from more or less conventionally composed pieces of music.



The preset "realworld.vcv" and the video behind the following link give an example. The sounds from the last two presets/videos alternate with a recording from a supermarket in the Czech Republic – an intentional choice. The Czech girl asks "What is the meaning of all of this?", pointing at the goings-on in the supermarket. I was lucky to catch this moment with my field recorder.
https://youtu.be/rP2eA4wkPBY


Classic and common ways to generate contrast


We can contrast long sounds (notes or other kinds of sonic phenomena) with short ones; we can limit our pitch development (random or not) to rather low frequency regions and contrast it from time to time with pitches from rather high frequency regions; we can contrast different timbre families; and last but not least we can contrast different scales and keys (in classical music theory this is called "modulation") – all of this is self-evident and common knowledge, I think. No further explanations or examples needed.


A bit less common may be the idea of contrasting different envelopes, not only the stereotypes of plucked versus bowed, hit versus blown etc. but also no release versus long release, multistage envelopes versus AR envelopes and others.


Melody vs. random pitches


Contrasting melody with random pitch developments is a method specific to generative music. The preset "contrasts.vcv" and the video behind the following link combine some of the last-mentioned contrasting methods.
https://youtu.be/ObIUn-tI--8


Chapter 3.2.2:
Repeating, Modifying and Inverting Relations


Repeating and modifying are bread-and-butter exercises in classical music, in pop (well, more repeating than modifying) and in other common fields of sonic manifestation.

But how can I repeat random goings-on?

And what shall I call a modification, a variation (as a compositional technique), if everything is randomly changing anyway?


Well, let me go back to the idea of musical “moments” from some pages earlier. Each of these moments is distinguished by a certain relation of a couple of sonic parameters or by a succession of those relations. Different relations of sonic parameters cause different sensual and emotional impressions.


It's these relations (combinations) of sonic parameters that we can repeat, modify and invert. Not a certain pitch development (or even melody) alone. Not a certain rhythm alone. Not certain timbres (= instruments in classical music) alone. But all of these parameters together in their specific relation, set by a compositional will at a certain (but not always fixed) point in the whole piece – that is what we can return to: in the same way, simply repeating, or in modified shapes, or mirrored along a sonic axis (e.g. high pitches plus bowed plus random, mirrored along the sonic axis of the envelope, becomes low pitches plus bowed plus a recognisable melody).


To make it absolutely clear: we don't change one parameter, but the relation of (all) the sonic parameters that make up the musical (sub-)moment in question.


The preset "mirror.vcv" and the video behind the following link give an example and further explanations and demonstrations. Here the inverting, the mirroring, doesn't happen along an axis but relative to a point – it's like a mathematical point reflection.

And there are two inversions going on: the pairs random pitch plus plucked vs. melody plus blown all run at the same BPM and with the same rhythm:
https://youtu.be/sIqFyA-c5og


Chapter 3.2.3:
Basic but Exclusively Generative Techniques


The above-mentioned aspect of relations leads us to the real power and meaning of networks of modulations (described in chapter 1): defining and setting relations between otherwise different and independent sonic parameters.

Please look at the following block diagram:


The left LFO modulates the frequency of the right LFO as well as the cutoff frequency of the filter.
The right LFO modulates the pitch of the VCO.
This means that when the frequency of the changes of the VCO's pitch (not the pitch itself) increases, the cutoff frequency of the filter increases too. The pitch of the VCO's sound and the cutoff frequency of the VCF (two musically tightly-knit parameters) stay completely independent of each other, but the rate of pitch changes and the cutoff frequency (two formerly completely independent and musically unrelated parameters) are now bound into a fixed relation.
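
To make this relation easier to follow, here is a minimal sketch in plain Python (not a patch file; the LFO rates, the cutoff range and the pitch range are arbitrary assumptions): the left LFO drives both the rate of the right LFO and the filter cutoff, while the right LFO alone drives the VCO pitch.

```python
import math

SAMPLE_RATE = 1000                 # control rate in Hz (assumption)

def sine_lfo(phase):
    """Simple sine LFO, output in the range -1..+1."""
    return math.sin(2 * math.pi * phase)

left_phase = 0.0                   # left LFO: slow, fixed rate
right_phase = 0.0                  # right LFO: its rate is modulated

for step in range(5000):
    left = sine_lfo(left_phase)

    # left LFO -> rate of the right LFO (0.5..4.5 Hz) and filter cutoff (200..2200 Hz)
    right_rate = 2.5 + 2.0 * left
    cutoff_hz = 1200 + 1000 * left

    # right LFO -> VCO pitch (around 220 Hz, +/- one octave)
    right = sine_lfo(right_phase)
    vco_pitch_hz = 220 * 2 ** right

    left_phase += 0.1 / SAMPLE_RATE           # left LFO runs at a fixed 0.1 Hz
    right_phase += right_rate / SAMPLE_RATE   # right LFO runs at the modulated rate

    if step % 1000 == 0:
        print(f"rate of pitch changes {right_rate:4.2f} Hz | "
              f"cutoff {cutoff_hz:6.1f} Hz | pitch {vco_pitch_hz:6.1f} Hz")
```

In the printed lines the rate of the pitch changes and the cutoff always move together, while the pitch value itself wanders on its own – exactly the relation described above.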


The preset “relations_1.vcv” and the video behind the following link show
this patch.
https://youtu.be/_GxhOo9sKmo


The next graphic shows a more complex example.

The modulation network containing 5 LFOs, two submixers and one so-called "main mixer" is one of the networks which I discussed in chapter 1.

I have added two voices, which are modulated by this network. Each voice contains a VCO, a VCA (in the video I have substituted this VCA with the CV-controlled channel of a mixer) and a quantizer.

The VCA in voice 1 (VCO 1) is modulated by an envelope, but the VCA in voice 2 (VCO 2) is modulated from within the modulation network (square wave of LFO 2) with "tamed" edges (slew limiter).

Voice 1 has got a sequencer, but the pitch CV of voice 2 is generated by the mixer called "main mixer".
The sequencer is clocked by LFO 5, its key changes according to LFO 1, and the timbre of VCO 1 is modulated by LFO 4.
The timbre of VCO 2 is modulated by submixer 1.


The preset “relations_2a.vcv” and the video behind the following link show this patch.
https://youtu.be/7B4dJPs81Jo


But now let me talk about the sonic relations that this patch establishes:


When the output level of LFO 1 rises, and the rise is not compensated by a decrease of the output of LFO 2, then the submixer's output level rises as well, which leads to a change of the waveform of VCO 2 – as if we had manually turned the waveform knob to the right.
At the same time the level of the main mixer rises, if it is not neutralised by what is coming out of LFO 5. This means that VCO 2 generates higher pitches.


The sonic relation reads (for voice 2):
higher pitches <---> more (inharmonic) partials
lower pitches <---> fewer partials



But this relation can be destroyed by the sub-net of LFOs 3, 4 and 5.

How often this relation is invalidated (= how often the CV level coming from submixer 1 is partly or completely cancelled out by LFOs 3-5) depends on the frequency relation of LFOs 3, 4 and 5.

With similar frequencies the phases of the CV from submixer 1 and submixer 2 are only very seldom opposite to each other.
The above-mentioned relation can also be invalidated by LFO 2 alone.
But we wouldn't hear that, because when the triangle wave of LFO 2 starts going down, the square wave switches to low, and therefore the VCA (mixer channel in the video) of VCO 2 is switched off.
And LFO 1 also modulates the key of the sequencer (= voice 1).


The second sonic relation therefore reads:
more inharmonic partials in voice 2 mean increasing key values (C → C sharp → D etc.) in voice 1 most of the time.

Let's look at the other modulation of voice 1 now.
When the level at the output of LFO 3 rises, the frequency of LFO 4 rises as well, and with it the rate of changes in the timbre, the spectrum, of VCO 1.
At the same time the rate of changes of the output level of submixer 2 increases, but around a rising base level (because the rising slope of LFO 3 is added to the faster-changing level of LFO 4 in submixer 2). The output level of LFO 5 follows this CV development – and so do the changes of the clock ("speed") of the sequencer.


The third sonic relation is:
Timbre changes and rhythm are related to each other.
Just watch the video behind the last link again to follow these explanations.


Well, let's leave the matter of networks and setting up relations for a moment.

There is another important compositional technique which is not specific to generative music – but which gains increased importance in generative music:

using stable elements: elements which can give the listener orientation, which return from time to time (perhaps slightly changed), or which are always audible, sometimes in the background, sometimes more prominent.

Those elements make the overwhelmingly big ocean of seemingly unstructured (and sometimes very small) changes, which dominates generative music, better digestible for the listener; they serve as a lighthouse giving direction.


And the last aspect of this sub-chapter is not new to us at all:

it's our good old "set limits to randomness" technique – limits which can of course change over time.

​

​

Chapter 3.3:
Specific Compositional Techniques

 


This chapter is not at all meant to deliver an exhaustive enumeration; rather it talks about and shows some especially remarkable and useful techniques. And as always: the mentioned VCV Rack presets are available only in the books. Well, let's start now.


Chapter 3.3.1:
Pitch Dependency


I talked about relations between normally independent sonic parameters in the last chapter.

A quite special relation is pitch dependency of parameters. There are a couple of ways to establish pitch dependency.

The preset "pitchdependency.vcv" and the video behind the following link show one of them – a way which you should be able to follow whatever system or modules you are using.


In the example the length of the notes as well as the amount of reverb depend on the pitch of the note: the higher the note's pitch, the shorter the note, but the more reverb is added.
The lower the pitch, the longer the note, and the less reverb is added.


The block diagram shows the principle, and the video behind the following link demonstrates the patch.
https://youtu.be/ZTqgzpbKVD0
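
The same mapping can be sketched in a few lines of plain Python (not part of the preset; the 0-4 V pitch range and the concrete decay and reverb values are arbitrary assumptions): the pitch CV is scaled and offset once with a falling slope for the note length and once with a rising slope for the reverb send.

```python
def pitch_dependent_controls(pitch_cv, cv_min=0.0, cv_max=4.0):
    """Derive envelope decay and reverb send from a 1 V/Oct pitch CV."""
    x = (pitch_cv - cv_min) / (cv_max - cv_min)   # normalise the pitch CV to 0..1
    x = min(max(x, 0.0), 1.0)

    decay_s     = 2.0 - 1.8 * x      # higher pitch -> shorter note (2.0 s .. 0.2 s)
    reverb_send = 0.1 + 0.8 * x      # higher pitch -> more reverb (10 % .. 90 %)
    return decay_s, reverb_send

for cv in (0.0, 1.0, 2.0, 3.0, 4.0):
    decay, rev = pitch_dependent_controls(cv)
    print(f"pitch CV {cv:.1f} V -> decay {decay:.2f} s, reverb send {rev:.2f}")
```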



Chapter 3.3.2:
Rhythm


An interesting way to deal with rhythm is to cross-fade two fundamentally different rhythms while sustaining the melodic parts of the piece (bass line, lead etc.).
Each of the two rhythms interprets these sustained melodic elements in its own way, and in the middle of the cross-fade, when both rhythms are audible, we get interesting rhythmical side effects.
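
Here is a minimal sketch of the idea in plain Python (the two patterns, the step grid and the probabilistic blending are my own assumptions, not taken from the preset): the cross-fade value weights how strongly each rhythm may trigger the unchanged melody voice.

```python
import random

rhythm_a = [1, 0, 1, 0, 1, 0, 1, 0]        # a straight 8-step pattern
rhythm_b = [1, 0, 0, 1, 0, 0]              # a 6-step pattern

def gate(step, xfade):
    """True if the melody voice fires on this step.
    xfade = 0.0 -> only rhythm A, xfade = 1.0 -> only rhythm B."""
    a = rhythm_a[step % len(rhythm_a)]
    b = rhythm_b[step % len(rhythm_b)]
    level = (1.0 - xfade) * a + xfade * b   # cross-faded gate "level"
    return random.random() < level

total_steps = 64
for step in range(total_steps):
    xfade = step / (total_steps - 1)        # slowly fade from rhythm A to rhythm B
    print("X" if gate(step, xfade) else ".", end="")
print()
```

In the middle of the run, where both patterns contribute, the printed trigger row shows the kind of hybrid rhythm the text describes.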


The patch "patternmerge.vcv" and the video behind the following link demonstrate this compositional technique. In this example the lead line changes its timbre in parallel with the changes of the underlying rhythm – its timbre, not the melody itself.
The patch could be made a lot smaller by using fewer but more versatile modules. But for the sake of system compatibility I have decided to set it up with a number of individual modules.
And there is another aspect of using more modules, each of which has only a few functions, instead of fewer but very versatile modules equipped with a bunch of functions:
you get more patch points at which to intervene!


With rather basic modules nearly every function is equipped with a CV input jack, whereas "larger" modules, which offer a whole lot of functions, commonly allow only some of them to be modulated by external CV; the parameters of the other functions have to be set with knobs plus menu diving.

Well, back to the patch now.
The two merging rhythms are a simple 8/8 and a simple 6/6 rhythm. The block schematic shows the general relation of all functional groups of modules.
https://youtu.be/gEyPAoH8v1Q



Another method to hold the listener's attention and add (sonic) colour to our rhythm is to replace (all or some of the) drum/percussion instruments with wooshes, glitches, clicks, pings etc., and even melodic elements. So we are not changing the rhythm but the sonic elements that build it.

It cannot be the task of this eBook (let alone of this chapter) to go through all the methods of producing interesting sounds of the kind mentioned.

That would be a matter for a whole new eBook about designing sounds (and probably will be – next year? Just follow my website and my social media; links in Appendix B).


The video behind the following link shows a little example in which wooshes, glitches and a melody line build a multi-rhythmical pattern. The rhythmical aspects of the melody are not generated by different note lengths and rests, but by rhythmical changes of the timbre of the instrument.
https://youtu.be/_ZrKIAI04Bo


I have already talked about adding randomness to (poly-)rhythms, so only this short reminder here.
And changing the rhythm of a melody is bread-and-butter practice when composing, and it is not more important in generative music than in any other musical style or way of producing music. So no further explanations are needed.


In a world which is still suffering from the loudness war, which has recently tried to devastate the last remnants of listeners' sensitivity to sonic details, it may be appropriate to mention that we can intentionally compose even volume and loudness and their development (not an aspect of rhythm, but not a matter for a chapter of its own either).


Chapter 3.3.3:
Tension and Layers


A special strategy to add and increase tension is adding more and more layers to a musical development.

We can do so by adding voices, e.g. simply doubling a succession of phrases and using different timbres for each new "clone", or by adding different voices, or by consecutively adding notes to a melody, e.g. starting with a simple phrase of long notes and adding more and more melody-building notes while making the note values shorter and shorter.


We can add different kinds of layers (leads, different bass lines with different timbres, percussion, chord progressions, a second, third etc. melody line and more).
The preset “layers.vcv” and the video behind the following link show an example of different percussion tracks, a bass line, and a lead.
https://youtu.be/xENFGFMHh_0


And the preset “layermelody.vcv” and the video behind the following link show an example only with different melody lines.
https://youtu.be/9I-OfLqInT4

Chapter 3.4:
Certain Patch Techniques And Examples



With this sub-chapter 3.4 we are approaching chapter 4, which is about the building blocks of generative patching. But whereas in chapter 4 the more technical and organisational aspects prevail, here in chapter 3.4 I'd like to talk first about a few patches of special musical and compositional significance.


Chapter 3.4.1:
Switching Voices and Larger Parts of the Patch


When we have a couple of voices – different percussion instruments, VCOs etc. – we can switch between different combinations of them. If each of these voices is fed into an individual mixer channel, we can easily do so by (randomly) fading the channels in and out (or by "hard switching" between them). This gives the whole piece a structure.



The preset "channelswitch.vcv" and the video behind the following link demonstrate an example. It is based on the preset "layers.vcv", with a Gray Code module added (and a second clock divider to make the switches happen less often).

The eight outputs of the Gray Code module switch different combinations of the 8 mixer channels on and off. Inserting slew limiters between the Gray Code module and the mixer channel CV inputs leads to cross-fades instead of "hard switches".

https://youtu.be/qxOGreuQ5R0
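
The switching principle itself fits into a few lines of plain Python (channel count and number of ticks are arbitrary assumptions): a Gray-code counter advances on every clock tick, and its bits act as on/off gates for the mixer channels. Because only one bit changes per tick, only one channel is switched at a time, which keeps the transitions gentle.

```python
NUM_CHANNELS = 8

def gray_code(n):
    """Return the n-th Gray code value."""
    return n ^ (n >> 1)

for tick in range(16):
    code = gray_code(tick)
    gates = [(code >> ch) & 1 for ch in range(NUM_CHANNELS)]   # one gate per channel
    active = [ch + 1 for ch, g in enumerate(gates) if g]
    print(f"tick {tick:2d}: channels on -> {active}")
```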


We can use the same principle (with or without slew limiters) to switch larger parts of our patch on and off – parts which are not voices but parts of modulation networks. Instead of audio mixer channels we switch VCAs or CV mixer channels, which open and close the paths to the sub-patches in question.

In case we want to switch only a single sub-patch on and off, we can use a Bernoulli gate instead of a Gray Code module.



The preset “subnet.vcv” and the video behind the following link show an example.
https://youtu.be/8Fy_I0APcj0


Chapter 3.4.2:
Sculpting Randomness and Setting Borders


By feeding a sample & hold module with regular waveforms instead of noise, I can sculpt the "random" output of the S&H module all the way down to no randomness at all. And instead of feeding in simple regular waves like sine, triangle or saw, I can of course produce my own waves (using an LFO network, a cycled envelope, a more versatile LFO etc.).

 

And so I can give the random voltages which come out of the S&H module a kind of structure, which I can even emphasise by modulating the pulses that trigger the sample & hold module.
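
A minimal sketch of this sculpting idea in plain Python (the triangle wave, its rate and the noise/wave blend are my own assumptions, not taken from the preset): the S&H samples a cross-fade between noise and a regular triangle wave, so the held values range from pure randomness to a fully deterministic staircase.

```python
import random

def triangle(phase):
    """Triangle wave, output -1..+1, one cycle per unit of phase."""
    return 4.0 * abs(phase % 1.0 - 0.5) - 1.0

def sample_and_hold(num_steps, blend, tri_freq=0.05):
    """Return held values; blend = share of noise in the sampled signal (0..1)."""
    values = []
    for step in range(num_steps):
        shaped = triangle(step * tri_freq)       # the regular "sculpting" wave
        noise = random.uniform(-1.0, 1.0)        # the usual random source
        values.append((1.0 - blend) * shaped + blend * noise)
    return values

for blend in (1.0, 0.5, 0.0):                    # pure noise, half and half, pure wave
    held = sample_and_hold(8, blend)
    print(f"blend {blend:.1f}:", " ".join(f"{v:+.2f}" for v in held))
```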


The preset “sculptureRandom.vcv” and the video behind the following link demonstrate this.
https://youtu.be/hAhIS4FUGNQ


Sometimes I need the level of the random voltage to stay quite precisely between certain upper and lower borders.

To achieve this I can patch a VCA (setting the upper border) and a CV offset module (setting the lower border) between the sample source and the S&H unit, as shown in the following graphic.
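
The arithmetic behind this is just "scale, then offset". Here is a minimal sketch in plain Python (assuming a raw sample source in the range 0..1 and purely hypothetical border values):

```python
import random

def bounded_random(lower, upper, raw=None):
    """Scale and offset a 0..1 random source so the held value lands in lower..upper."""
    if raw is None:
        raw = random.random()        # raw sample source, assumed to be 0..1 here
    gain = upper - lower             # the "VCA": sets the size of the range
    offset = lower                   # the offset module: sets the lower border
    return raw * gain + offset

samples = [bounded_random(lower=1.0, upper=3.0) for _ in range(5)]
print(["%.2f" % s for s in samples])   # every value stays between 1.00 and 3.00 (volts)
```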



And there is an example of course: the preset “randomborder.vcv” and the video behind the following link.
https://youtu.be/k0g20ma1utY


Chapter 3.4.3:
Jumping between Certain BPM Rates and Inverting Pitch Lines


There are dedicated BPM LFOs, and a lot (if not most) of the MIDI modules offer easy-to-handle BPM functionality, but there is also a rather basic way to adjust BPM and to jump between certain BPM rates.


We can do the latter with a switch, e.g. a Bernoulli gate, and a CV offset module, the latter changing the CV at the clock input of a sequencer or of other clock-generating units.


To choose the right CV offset – that is, the right clock frequency – we have to remember the following relation:
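
As a worked example (a sketch only, assuming the clock's CV input follows the 1 V/Oct convention, i.e. each additional volt doubles the clock rate – check your own clock module, some use a different scaling):

```python
import math

def offset_for_bpm_jump(bpm_from, bpm_to):
    """CV offset in volts needed to jump from bpm_from to bpm_to on a 1 V/Oct clock input."""
    return math.log2(bpm_to / bpm_from)

print(offset_for_bpm_jump(120, 240))   # 1.0 V: double speed = one "octave" up
print(offset_for_bpm_jump(120, 90))    # about -0.415 V: three quarters of the speed
```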



The presets “BPM.vcv” and “BPM2.vcv” as well as the video behind the following link demonstrate this technique.
https://youtu.be/Nz29MZa7pF4


A small oscilloscope comes in handy when you want to do things like that.
And even if you work exclusively with hardware systems, you can get some quite inexpensive ones. I use the one shown in the photograph (check my website for details: https://dev.rofilm-media.net). These little helpers cost somewhere around 80 dollars and offer a lot of useful functions, even if you don't integrate them as an ordinary part of your system. But integrating these handheld oscilloscopes is not a difficult task either.


I have made a video about that matter. If you want to know more about it, just send me a message via my website https://dev.rofilm-media.net.


When we want to invert a pitch development – e.g. a melody line, a bass line or an arpeggio – by simply using an inverter module, we have to understand that inverting this way means starting at the same note but going in different directions.

Most of the time it will be necessary to offset one of the lines by one or two octaves.
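
A minimal sketch of this in plain Python (the pitch CV values are hypothetical; only the 1 V/Oct logic matters): the inverter flips the sign of the pitch CV, and a fixed offset then shifts the mirrored line back into a useful register.

```python
melody_cv = [0.0, 0.25, 0.5833, 0.75, 0.5, 0.25]   # some 1 V/Oct pitch CVs

inverted = [-cv for cv in melody_cv]               # what a simple inverter does

# Without an offset the inverted line runs below the original (negative CVs).
# Adding +2 V shifts it up by two octaves, following the 1 V/Oct characteristic.
OCTAVE_OFFSET_V = 2.0
inverted_shifted = [cv + OCTAVE_OFFSET_V for cv in inverted]

print(inverted_shifted)   # the mirrored line, two octaves above the raw inversion
```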


Just follow the 1 V/Oct characteristic to choose the right CV offset. The preset “invert.vcv” and the video behind the following link demonstrate this.
https://youtu.be/ck7q31E2cwk

 


Chapter 3.4.4:
Mixing Stable and Random Elements


Let's talk about one example of how to mix stable elements with random elements of the piece. You will find the preset for this chapter under the name "mix.vcv", and as always there's also a video about it, which you can reach by clicking the next link (below the explanations).


The principle is switching between a regular, stable sequence generated by a sequencer and a random pitch development generated by an S&H module.
To make things a bit more interesting, the key of the sequence changes from time to time, as does the pitch range of the S&H unit.

 

Therefore we have 4 different situations:

  • stable + key 1 (C in the example)

  • stable + key 2 (F in the example)

  • random + quite low pitches and a small pitch range

  • random + a broad pitch range and rather high pitches

 

Instead of (random) switches I use LFOs to generate the changes from one of the four situations to the next. This way I have more control over the share which each of these situations shall have in the piece (or in the part of the piece in question).

The diagram shows the principle, and the preset (“mix.vcv”) and the video demonstrate the details.
https://youtu.be/ck7q31E2cwk
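
Reduced to its core, the principle looks like this in plain Python (a simplified sketch: only two of the four situations, and hypothetical values for the scale, the LFO rate and the random range):

```python
import math
import random

sequence = [0.0, 3/12, 7/12, 10/12]        # a fixed 1 V/Oct sequence (the stable element)

def next_pitch(step, lfo_rate=0.01):
    """Pick the pitch for this step from the stable or the random source."""
    mix = 0.5 + 0.5 * math.sin(2 * math.pi * lfo_rate * step)   # slow LFO, 0..1
    if random.random() < mix:
        return sequence[step % len(sequence)]      # stable: the sequencer
    return random.uniform(-1.0, 1.0)               # random: S&H-style pitch CV

pitches = [next_pitch(step) for step in range(16)]
print(["%.2f" % p for p in pitches])
```

The slow LFO sets the share of stable versus random pitches over time – the same kind of control the text describes, just without the additional key and range changes.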


... to be continued.

Rolf Kasten
