music package

Submodules

music.effects module

class music.effects.Chorus

Bases: object

class music.effects.Delays

Bases: object

class music.effects.LPFilter

Bases: object

class music.effects.Notch

Bases: object

class music.effects.Reverb

Bases: object

class music.effects.ScatterLocation

Bases: object

class music.effects.ShuffleSound

Bases: object

class music.effects.Vocoder

Bases: object

class music.effects.Wavelet

Bases: object

music.synths module

class music.synths.CanonicalSynth(s, **statevars)

Bases: object

Simple synth for sound synthesis with vibrato, tremolo and ADSR.

All functions but absorbState return a sonic array. You can parametrize the synth in any function call. If you want to keep some set of states for specific calls, clone your CanonicalSynth or create a new instance. You can also pass arbitrary variables to use later on.

f : scalar
The frequency of the note in Hertz.
d : scalar
The duration of the note in seconds.
fv : scalar
The frequency of the vibrato oscillations in Hertz.
nu : scalar
The maximum deviation of pitch in the vibrato in semitones.
tab : array_like
The table with the waveform to synthesize the sound.
tabv : array_like
The table with the waveform of the vibrato oscillatory pattern.
>>> cs = CanonicalSynth()
absorbState(s, **statevars)
adsrApply(audio_vec)
adsrSetup(A=100.0, D=40, S=-5.0, R=50, render_note=False, adsr_method='absolute')
rawRender(**statevars)
render(**statevars)

Render a note with fundamental frequency f0 Hertz and duration d seconds.

render2(**statevars)
synthSetup(table=None, vibrato_table=None, tremolo_table=None, vibrato_depth=0.1, vibrato_frequency=2.0, tremolo_depth=3.0, tremolo_frequency=0.2, duration=2, fundamental_frequency=220)

Set up the synth engine. ADSR is configured separately.

tremoloEnvelope(sonic_vector=None, **statevars)
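
A minimal usage sketch, assuming the state variables accept the names listed in synthSetup (the values below are illustrative):

>>> from music.synths import CanonicalSynth
>>> from music.utils import write
>>> cs = CanonicalSynth()
>>> note = cs.render(fundamental_frequency=440, duration=1.5, vibrato_frequency=6)
>>> write(note, 'a440.wav')
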
class music.synths.IteratorSynth(s, **statevars)

Bases: music.synths.CanonicalSynth

A synth that iterates through arbitrary lists of variables

Any variable used by the CanonicalSynth can be used. Just append the token _sequence to the variable name.

Example:

>>> isynth = M.IteratorSynth()
>>> isynth.fundamental_frequency_sequence = [220, 400, 100, 500]
>>> isynth.duration_sequence = [2, 1, 1.5]
>>> isynth.vibrato_frequency_sequence = [3, 6.5, 10]
>>> sounds = []
>>> for i in range(300):
...     sounds += [isynth.renderIterate(tremolo_frequency=.2*i)]
>>> M.utils.write(M.H(*sounds), "./example.wav")
iterateElements()
renderIterate(**statevars)
music.synths.makeGaussianNoise(self, mean, std, DUR=2)
music.synths.sequenceOfStretches(x, s=[1, 4, 8, 12], fs=44100)

Makes a sequence of squeezes of the fragment in x.

x : array_like
The samples to be repeated, as-is or squeezed. Assumed to be in the form (channels, samples), i.e. x[1][120] is the 120th sample of the second channel.
s : list of numbers
Durations in seconds for each repeat of x.
>>> asound = H(*[V(f=i, fv=j) for i, j in zip([220, 440, 330, 440, 330],
...                                           [.5, 15, 6, 5, 30])])
>>> s = sequenceOfStretches(asound)
>>> s = sequenceOfStretches(asound,s=[.2,.3]*10+[.1,.2,.3,.4]*8+[.5,1.5,.5,1.,5.,.5,.25,.25,.5, 1., .5]*2)
>>> W(s, 'stretches.wav')
Notes
-----
This function is useful to render musical sequences given any material.

music.tables module

class music.tables.Basic(size=2048)

Bases: object

Provide primary tables for lookup

Creates sine, triangle, square and saw wave periods with size samples.

drawTables()
makeTables(size)
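
A minimal usage sketch, assuming the tables are built on instantiation as described above (attribute names of the generated tables are not documented here):

>>> from music.tables import Basic
>>> b = Basic(size=4096)   # sine, triangle, square and saw periods with 4096 samples
>>> b.drawTables()         # plot the waveform tables, if a plotting backend is available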

music.utils module

music.utils.CF(s1, s2, dur=500, method='lin', fs=44100)

Cross-fade in dur milliseconds.
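
A minimal usage sketch, assuming V() renders a note as in the examples elsewhere in this module (values are illustrative):

>>> s = CF(V(f=220), V(f=330), dur=250, method='lin')  # 250 ms linear cross-fade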

music.utils.H(*args)
music.utils.J(s1, s2, d=0, nsamples=0, fs=44100)

Mix s1 and s2, placing the beginning of s2 after the end of s1 by d seconds

s1 : numeric array
A sequence of PCM samples.
s2 : numeric array
Another sequence of PCM samples.
d : numeric
The offset of the second sound, i.e. the displacement of the start of the second sound relative to the end of the first (the first sound starts at offset 0). Might be negative, meaning sound2 starts |d| seconds before s1 ends.

if d<0, it should satisfy -d*fs < s1.shape[-1]
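
A minimal usage sketch under the parameters above (the note values are illustrative):

>>> note1, note2 = V(f=220), V(f=330)
>>> s = J(note1, note2, d=0.5)    # note2 starts 0.5 s after note1 ends
>>> s = J(note1, note2, d=-0.5)   # note2 starts 0.5 s before note1 ends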

TODO: enhance/recycle J_ and mix2 or delete them. TTM

(.functions).mix2 : a better mixer

music.utils.J_(*args)

Mix sonic vectors with offsets.

J_ receives a sequence of sonic vectors, each a sequence of PCM samples, or a sequence alternating the sonic vectors and their offsets.

(.functions).mix2 : a better mixer

music.utils.V(*args)
music.utils.amp2Db(amp_difference)

Receives amplitude proportion, returns decibel difference

music.utils.db2Amp(db_difference)

Receives difference in decibels, returns amplitude proportion

music.utils.hz2Midi(hz_val)

Receives a Hertz value and returns the MIDI note value

music.utils.midi2Hz(midi_val)

Receives a MIDI note value and returns the corresponding frequency in Hertz
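
A minimal sketch of these conversions, assuming the standard formulas midi = 69 + 12*log2(hz/440) and amp = 10**(db/20); the commented results are approximate:

>>> hz2Midi(440)    # concert A, expected to be 69
>>> midi2Hz(60)     # middle C, about 261.63 Hz
>>> amp2Db(2)       # doubling the amplitude, about +6.02 dB
>>> db2Amp(-6)      # about half the amplitude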

music.utils.midi2HzInterval(midi_interval)
music.utils.mix(self, list1, list2)

(.functions).mix2 : a better mixer

music.utils.mix2(sonic_vectors, end=False, offset=0, fs=44100)

Mix sonic vectors. MALFUNCTION! TTM TODO

The operation consists of summing the vectors sample by sample [1]. This function helps when the sonic_vectors are not all of the same size.

sonic_vectors : list of sonic_arrays
The sonic vectors to be summed.
end : boolean
If True, sync the final samples. If False (default) sync the initial samples.
offset : list of scalars
A list of offsets, in seconds, one for each sonic vector.
fs : integer
The sample rate. Only used if offset is supplied.
S : ndarray
A numpy array where each value is a PCM sample of the resulting sound.
>>> W(mix2(sonic_vectors=[V(), N()]))  # writes a WAV file with the mixed sounds
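
Since mix2 is flagged as malfunctioning, the following is a minimal NumPy sketch of the summation described above, independent of the actual implementation (the name mix_sketch is illustrative):

>>> import numpy as np
>>> def mix_sketch(sonic_vectors, offset=None, fs=44100):
...     # sum mono sonic vectors sample by sample, honoring per-vector offsets in seconds
...     starts = [int(o * fs) for o in (offset or [0] * len(sonic_vectors))]
...     out = np.zeros(max(s + len(v) for s, v in zip(starts, sonic_vectors)))
...     for start, v in zip(starts, sonic_vectors):
...         out[start:start + len(v)] += v
...     return out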

Cite the following article whenever you use this function.

[1] Fabbri, Renato, et al. “Musical elements in the discrete-time representation of sound.” arXiv preprint arXiv:1412.6853 (2017).

music.utils.mixS(l1, l2=[], end=False)
music.utils.normalize(vector)
music.utils.normalizeRows(vector)

Normalize each row of a bidimensional vector to [0,1]

music.utils.normalize_(vector)
music.utils.p2f(f0=220.0, semitones=[0, 7, 7, 4, 7, 0])
music.utils.panTransitions(p=[(1, 1), (1, 0), (0, 1), (1, 1)], d=[2, 2, 2], method=['lin', 'circ', 'exp'], fs=44100, sonic_vector=None)

Each pan transition i starts and ends the amplitude envelope of channel c at p[i][c] and p[i+1][c].

Consider only one of such fades to understand the pan transition methods:

‘lin’ fades linearly in and out:
x*k_i + y*(1-k_i), or s1_i*x_i + s2_i*(1-x_i) = (s1_i - s2_i)*x_i + s2_i
‘circ’ keeps amplitude one using
cos(x)**2 + sin(x)**2 = 1

‘exp’ makes the cross-fade using exponentials.

‘exp’ entails a linear loudness variation for each channel, but total loudness is not preserved because the final amplitude’s range is not preserved. ‘lin’ and ‘circ’, on the other hand, preserve total loudness but do not provide a linear variation of loudness for each sound in the cross-fade.

For now, each channel’s signals are kept from mixing. One immediate possibility is to maintain the expected tessitura of the sample amplitudes. Say p = [.5, 1, 0, .5] ~ [(1,1), (1,0), (0,1), (1,1)]. Then p_i, p_j = .5, 1 might be performed as: s1 = s1*.5 -> 0, s2 = s1*.5 -> (s1+s2)*.5, or through sinusoids and exponentials.

Make fast and slow fades and parameter transitions using the Weber-Fechner and Stevens's laws. E.g.:

pitch_trans = [pitch0*X**(i/Y) for i in range(12)]
pitch_trans = [pitch0 + X*i**Y for i in range(12)]
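
A minimal sketch of the ‘lin’ and ‘circ’ gain curves described above, assuming NumPy (variable names are illustrative):

>>> import numpy as np
>>> n = 2 * 44100                              # one 2-second transition at 44.1 kHz
>>> k = np.linspace(0, 1, n)
>>> lin_l, lin_r = 1 - k, k                    # 'lin': linear amplitude cross-fade
>>> th = np.linspace(0, np.pi / 2, n)
>>> circ_l, circ_r = np.cos(th), np.sin(th)    # 'circ': cos**2 + sin**2 == 1, constant power
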
music.utils.profile(adict)

Should return a dictionary with the following structure:

d['type']['scalar'] should hold the names of all scalar variables as strings, i.e. all names bound to numeric, string, float or integer values; d['type']['collections'] should hold all names bound to dicts, lists, sets or ndarrays.

d['analyses']['ndarray'] should hold a general analysis of the ndarrays, including the size in seconds of each (considering fs), and mean and mean square values to give an idea of what is there. RMS values at different scales, and the overall RMS standard deviation on a scale, are helpful for grasping discontinuities. The overall RMS mean of a scale is a hint of whether the variable is meant to be used (or is usable) as PCM samples or as parametrization. E.g.:

  • Large arrays, i.e. with many elements, are usable as PCM samples. If the mean is zero and the values are bound to [-1, 1] or to some power of 2, especially [-2**15, 2**15-1], they are probably PCM samples, synthesized, sampled or derived. If the array has more than one or two dimensions where the many samples lie, it might be a collection of audio samples of the same size.

  • Arrays with an offset (abs(mean) >> 0) and a small number of elements are good candidates for parametrization. They might be used for repetition, yielding a clear rhythm. They might also be used to derive more elaborate patterns, such as by using the values of more than one array simultaneously, often creating patterns because of the different sizes of each array.

  • Values on the order of hundreds and thousands are candidates for frequencies. Values between zero and 150 are candidates for decibels, or for absolute pitches or pitch intervals through MIDI notes and semitone counts, respectively. If the values are integers or very close to integers, or many consecutive values deviate by less than 10, they are more likely related to pitches. If consecutive values deviate by tens to about a hundred, the array is akin to decibel notation.
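
A sketch of the resulting structure described above (the keys follow the description and are not verified against the implementation):

>>> p = profile(adict)
>>> p['type']['scalar']        # names of numeric and string variables
>>> p['type']['collections']   # names of dicts, lists, sets and ndarrays
>>> p['analyses']['ndarray']   # per-array stats: duration at fs, mean, RMS at several scales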

music.utils.read(fname)
music.utils.resolveStereo(afunction, argdict, stereovars=['sonic_vector'])
music.utils.stereo(sonic_vector)
music.utils.write(sonic_vector, filename='sound_music_name.wav', normalize=True, samplerate=44100)

Module contents