MUSICAL NOTATION

Hand-written musical notation by J. S. Bach: beginning of the Prelude from the Suite for Lute in G minor BWV 995 (transcription of Cello Suite No. 5, BWV 1011) BR Bruxelles II. 4805.
Music notation or musical notation is any system which represents aurally perceived music, through the use of written symbols.


Contents
1 History
1.1 Ancient Greece
1.2 Arab world
1.3 Early Europe
2 Modern notation
2.1 Variations
3 Notation in various countries
3.1 India
3.2 Russia
3.3 China
3.4 Japan
3.5 Indonesia
4 Other systems and practices
4.1 Cipher notation
4.2 Solfège
4.3 Letter notation
4.4 Tablature
4.5 Klavar notation
4.6 12-note non-equal temperament
4.7 Chromatic staff notations
4.8 Graphic notation
4.9 Simplified Music Notation
4.10 Parsons code
4.11 Braille music
4.12 Integer notation
4.13 Turntablist transcription methodology
5 Computer musical notation
6 Perspectives of musical notation in composition and musical performance
7 Patents


History
It has been suggested that the earliest form of musical notation can be found in a cuneiform tablet created at Nippur in about 2000 B.C. The tablet appears to contain fragmentary instructions for performing music, to indicate that the music was composed in harmonies of thirds, and to show that it was written using a diatonic scale. A tablet from about 1250 B.C. shows a more developed form of notation. Although the interpretation of the notation system is still controversial, it is clear that the notation indicates the names of strings on a lyre, the tuning of which is described in other tablets. Although they are fragmentary, these tablets represent the earliest recorded melodies found anywhere in the world.

Ancient Greece

Photograph of the original stone at Delphi containing the second of the two hymns to Apollo. The music notation is the line of occasional symbols above the main, uninterrupted line of Greek lettering.

Ancient Greek musical notation was capable of representing pitch and note-duration, and to a limited extent, harmony. It was in use from at least the 6th century BC until approximately the 4th century AD; several complete compositions and fragments of compositions using this notation survive. The notation consists of symbols placed above text syllables. An example of a complete composition is the Seikilos epitaph, which has been variously dated between the 2nd century BC and the 1st century AD. Three hymns by Mesomedes of Crete exist in manuscript. The Delphic Hymns, dated to the 2nd century BC, also use this notation, but they are not completely preserved. Ancient Greek notation appears to have fallen out of use around the time of the decline of the Roman Empire.

Arab world
Al-Kindi (801–873 AD) was the first great theoretician of Arabic music. He proposed adding a fifth string to the 'ud and discussed the cosmological connotations of music. He surpassed the achievement of the Greek musicians by using alphabetical notation for one eighth. He published fifteen treatises on music theory, but only five have survived. Al-Farabi (872–950) wrote a notable book on music theory entitled Kitab al-Musiqa al-Kabir (The Great Book of Music). His pure Arabian tone system is still used in Arabic music.
Arabic maqam is the system of melodic modes used in traditional Arabic music, which is mainly melodic. The word maqam in Arabic means place, location or rank. The Arabic maqam is a melody type. Each maqam is built on a scale, and carries a tradition that defines its habitual phrases, important notes, melodic development and modulation. Both compositions and improvisations in traditional Arabic music are based on the maqam system. Maqams can be realized with either vocal or instrumental music, and do not include a rhythmic component.
A theory on the origins of the Western solfège musical notation suggests that it may have had Arabic origins. It has been argued that the solfège syllables (do, re, mi, fa, sol, la, ti) may have been derived from the syllables of the Arabic solmization system Durr-i-Mufassal ("Separated Pearls") (dal, ra, mim, fa, sad, lam). This origin theory was first proposed by Francis Meninski in his Thesaurus Linguarum Orientalium (1680) and then by Jean Benjamin De Laborde in his Essai sur la Musique Ancienne et Moderne (1780).

Early Europe
Scholar and music theorist Isidore of Seville, writing in the early 7th century, remarked that it was impossible to notate music. By the middle of the 9th century, however, a form of notation began to develop in monasteries in Europe for Gregorian chant, using symbols known as neumes; the earliest surviving musical notation of this type is in the Musica disciplina of Aurelian of Réôme, from about 850. There are scattered survivals from the Iberian Peninsula before this time, of a type of notation known as Visigothic neumes, but its few surviving fragments have not yet been deciphered.
The ancestors of modern symbolic music notation originated in the Roman Catholic Church, as monks developed methods to put plainchant (sacred songs) to paper. The earliest of these ancestral systems, from the 8th century, did not originally utilise a staff, and used neum (or neuma or pneuma), a system of dots and strokes that were placed above the text. Although capable of expressing considerable musical complexity, they could not exactly express pitch or time and served mainly as a reminder to one who already knew the tune, rather than a means by which one who had never heard the tune could sing it exactly at sight.

Early Music Notation

To address the issue of exact pitch, a staff was introduced consisting originally of a single horizontal line, but this was progressively extended until a system of four parallel, horizontal lines was standardized. The vertical positions of each mark on the staff indicated which pitch or pitches it represented (pitches were derived from a musical mode, or key). Although the four-line staff has remained in use until the present day for plainchant, for other types of music, staffs with differing numbers of lines have been used at various times and places for various instruments. The modern five-line staff was first adopted in France and became almost universal by the 16th century (although the use of staffs with other numbers of lines was still widespread well into the 17th century).
Because the neum system arose from the need to notate songs, exact timing was initially not a particular issue because the music would generally follow the natural rhythms of the Latin language. However, by the 10th century a system of representing up to four note lengths had been developed. These lengths were relative rather than absolute and depended on the duration of the neighbouring notes. It was not until the 14th century that something like the present system of fixed note lengths arose. Starting in the 15th century, vertical bar lines were used to divide the staff into sections. These did not initially divide the music into measures (bars) of equal length (as most music then featured far fewer regular rhythmic patterns than in later periods), but appear to have been introduced as an aid to the eye for "lining up" notes on different staves that were to be played or sung at the same time. The use of regular measures (bars) became commonplace by the end of the 17th century.
The founder of what is now considered the standard music stave was Guido d'Arezzo, an Italian Benedictine monk who lived from 995 to 1050. His revolutionary method, combining a four-line stave with the first form of notes known as 'neumes', was the precursor to the five-line stave, which was introduced in the 14th century and is still in use today. Guido d'Arezzo's achievements paved the way for the modern form of written music, music books, and the modern concept of a composer.


Modern notation

An example of modern musical notation: Prelude, Op. 28, No. 7, by Frédéric Chopin

Modern music notation originated in European classical music and is now used by musicians of many different genres throughout the world.
The system uses a five-line staff. Pitch is shown by placement of notes on the staff (sometimes modified by accidentals), and duration is shown with different note values and additional symbols such as dots and ties. Notation is read from left to right, which makes setting music for right-to-left scripts difficult.
A staff of written music generally begins with a clef, which indicates the particular range of pitches encompassed by the staff. Notes representing a pitch outside of the scope of the five line staff can be represented using ledger lines, which provide a single note with additional lines and spaces.
Following the clef, the key signature on a staff indicates the key of the piece by specifying certain notes to be flat or sharp throughout the piece, unless otherwise indicated.
Following the key signature is the time signature. Measures (bars) divide the piece into regular groupings of beats, and the time signatures specify those groupings.
Directions to the player regarding matters such as tempo and dynamics are added above or below the staff. For vocal music, lyrics are written.
In music for ensembles, a "score" shows music for all players together, while "parts" contain only the music played by an individual musician. A score can be constructed (laboriously) from a complete set of parts and vice versa.

Variations
Percussion notation conventions are varied because of the wide range of percussion instruments. Percussion instruments are generally grouped into two categories: pitched and non-pitched. The notation of non-pitched percussion instruments is the more problematic and less standardized.
Figured bass notation originated in baroque basso continuo parts. It is also used extensively in accordion notation. The bass notes of the music are conventionally notated, along with numbers and other signs which determine the chords to be played. It does not, however, specify the exact pitches of the harmony, leaving that for the performer to improvise.

A lead sheet

A lead sheet specifies only the melody, lyrics and harmony, using one staff with chord symbols placed above and lyrics below. It is used to capture the essential elements of a popular song without specifying how the song should be arranged or performed.

A chord chart

A chord chart or "chart" contains little or no melodic information at all but provides detailed harmonic and rhythmic information, using slash notation and rhythmic notation. This is the most common kind of written music used by professional session musicians playing jazz or other forms of popular music and is intended primarily for the rhythm section (usually containing piano, guitar, bass and drums).
Simpler chord charts for songs may contain only the chord changes, placed above the lyrics where they occur. Such charts depend on prior knowledge of the melody, and are used as reminders in performance or informal group singing.
The shape note system is found in some church hymnals, sheet music, and song books, especially in the Southern United States. Instead of the customary elliptical note head, note heads of various shapes are used to show the position of the note on the major scale. Sacred Harp is one of the most popular tune books using shape notes.


Notation in various countries

India

Indian music, early 20th century

The Indian scholar and musical theorist Pingala (c. 200 BC), in his Chanda Sutra, used marks indicating long and short syllables to indicate meters in Sanskrit poetry.
In the notation of Indian rāga, a solfege-like system called sargam is used. As in Western solfege, there are names for the seven basic pitches of a major scale (Shadja, Rishabh, Gandhar, Madhyam, Pancham, Dhaivat and Nishad, usually shortened to Sa, Re, Ga, Ma, Pa, Dha, Ni). The tonic of any scale is named Sa, and the dominant Pa. Sa is fixed in any scale, and Pa is fixed at a fifth above it (a Pythagorean fifth rather than an equal-tempered fifth). These two notes are known as achala swar ('fixed notes'). Each of the other five notes, Re, Ga, Ma, Dha and Ni, can take a 'regular' (shuddha) pitch, which is equivalent to its pitch in a standard major scale (thus, shuddha Re, the second degree of the scale, is a whole step higher than Sa), or an altered pitch, either a half step above or a half step below the shuddha pitch. Re, Ga, Dha and Ni all have altered partners that are a half step lower (komal, "flat"); thus, komal Re is a half step higher than Sa. Ma has an altered partner that is a half step higher (tivra, "sharp"); thus, tivra Ma is an augmented fourth above Sa. Re, Ga, Ma, Dha and Ni are called vikrut swar ('movable notes'). In the written system of Indian notation devised by Ravi Shankar, the pitches are represented by Western letters: capital letters are used for the achala swar and for the higher variety of each vikrut swar, and lowercase letters for the lower variety of the vikrut swar.
Other systems exist for non-twelve-tone equal temperament and non-Western music, such as the Indian svar lippi. New systems, such as Ome Swarlipi, are also being developed to remove limitations of existing systems.

Russia
In ancient Byzantium and Russia, sacred music was notated with special 'hooks and banners'.

China

Chinese Qin notation, 1425

The earliest known examples of text referring to music in China are inscriptions on musical instruments found in the Tomb of Marquis Yi of Zeng (d. 433 B.C.E.). Sets of 41 chimestones and 65 bells bore lengthy inscriptions concerning pitches, scales, and transposition. The bells still sound the pitches that their inscriptions refer to. Although no notated musical compositions were found, the inscriptions indicate that the system was sufficiently advanced to allow for musical notation. Two systems of pitch nomenclature existed, one for relative pitch and one for absolute pitch. For relative pitch, a solmization system was used.
The tablature of the guqin is unique and complex. The older form consists of written words describing how to play a melody step by step in the plain language of the time, i.e. descriptive notation (in Classical Chinese); the newer form, composed of elements of Chinese characters combined to indicate the method of play, is called prescriptive notation. Rhythm is only vaguely indicated in terms of phrasing. Tablatures for the qin are collected in what is called qinpu.
The jianpu system of notation (probably an adaptation of the French Galin-Paris-Chevé system) had gained widespread acceptance by 1900 C.E. In this system, notes of the scale are numbered: for a typical pentatonic scale, the numbers 1, 2, 3, 5 and 6 represent notes, and 0 represents a rest. Dots above or below a numeral indicate the octave of the note it represents. Many symbols from Western standard notation, such as bar lines, time signatures, key signatures, accidentals, ties and slurs, and expression markings, are also used. The number of dashes following a numeral represents the number of crotchets (quarter notes) by which the note extends, while the number of underlines is analogous to the number of flags or beams on notes or rests in standard notation. In the present-day jianpu system, the melody is notated alone or with chords; harmonic and rhythmic elements are left to the discretion of the performers.
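The numeral-to-pitch idea behind jianpu is easy to sketch in code. The following is a minimal illustration, not any standard implementation: it assumes a major-key mapping, uses sharps-only spelling, and ignores octave dots and duration dashes.

```python
# A minimal sketch of the jianpu idea: scale-degree numbers mapped to
# pitches of a chosen key. Octave dots and duration dashes are omitted;
# the key-to-pitch table below is an illustrative assumption.

MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]          # semitone offsets of degrees 1-7
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def jianpu_to_notes(numerals, key="C"):
    """Translate a string of jianpu digits (0 = rest) into note names."""
    tonic = NOTE_NAMES.index(key)
    out = []
    for ch in numerals:
        if ch == "0":
            out.append("rest")                 # 0 marks a rest in jianpu
        elif ch.isdigit():
            out.append(NOTE_NAMES[(tonic + MAJOR_STEPS[int(ch) - 1]) % 12])
    return out

# Pentatonic figure 1 2 3 5 6 in C major
print(jianpu_to_notes("12356"))   # ['C', 'D', 'E', 'G', 'A']
```

A real jianpu reader would also track the octave dots and the dash/underline duration marks described above; this sketch covers only the core cipher.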

Japan
Japanese music is highly diversified, and therefore requires various systems of notation. In Japanese shakuhachi music, for example, glissandos and timbres are often more significant than distinct pitches, whereas taiko notation focuses on discrete strokes.

Indonesia
Notation plays a relatively minor role in the oral traditions of Indonesia. However, in Java and Bali, several systems were devised beginning at the end of the 19th century, initially for archival purposes. Today the most widespread are cipher notations ("not angka" in the broadest sense) in which the pitches are represented with some subset of the numbers 1 to 7, with 1 corresponding to either the highest note of a particular octave, as in Sundanese gamelan, or the lowest, as in the kepatihan notation of Javanese gamelan. Notes in the ranges outside the central octave are represented with one or more dots above or below each number. For the most part, these cipher notations are mainly used to notate the skeletal melody (the balungan) and vocal parts (gerongan), although transcriptions of the elaborating instrument variations are sometimes used for analysis and teaching. Drum parts are notated with a system of symbols largely based on letters representing the vocables used to learn and remember drumming patterns; these symbols are typically laid out in a grid underneath the skeletal melody for a specific or generic piece. The symbols used for drum notation (as well as the vocables represented) are highly variable from place to place and performer to performer. In addition to these current systems, two older notations used a kind of staff: the Solonese script could capture the flexible rhythms of the pesinden with a squiggle on a horizontal staff, while in Yogyakarta a ladder-like vertical staff allowed notation of the balungan by dots and also included important drum strokes. In Bali, there are a few books published of Gamelan gender wayang pieces, employing alphabetical notation in the old Balinese script.
Composers and scholars both Indonesian and foreign have also mapped the slendro and pelog tuning systems of gamelan onto the western staff, with and without various symbols for microtones. The Dutch composer Ton de Leeuw also invented a three line staff for his composition Gending. However, these systems do not enjoy widespread use.
In the second half of the twentieth century, Indonesian musicians and scholars extended cipher notation to other oral traditions, and a diatonic scale cipher notation has become common for notating Western-related genres (church hymns, popular songs, and so forth). Unlike the cipher notation for gamelan music, which uses a "fixed Do" (that is, 1 always corresponds to the same pitch, within the natural variability of gamelan tuning), Indonesian diatonic cipher notation is "moveable-Do" notation, so scores must indicate which pitch corresponds to the number 1 (for example, "1=C").

Other systems and practices

Cipher notation
In many cultures, including Chinese (jianpu or gongche), Indonesian (kepatihan), and Indian (sargam), the "sheet music" consists primarily of numbers, letters or native characters representing notes in order. These systems are collectively known as cipher notations. Numbered notation is one example; letter notation and solfège also qualify when written in musical sequence.

Solfège
Solfège is a way of assigning syllables to the degrees of the musical scale. In order, they are today: Do, Re, Mi, Fa, Sol, La, Ti, and Do (for the octave). The classic variation is: Do, Re, Mi, Fa, Sol, La, Si, Do. These functional names of the musical notes were introduced by Guido of Arezzo (c. 991 – after 1033) using the beginning syllables of the first six musical lines of the Latin hymn Ut queant laxis. The original sequence was Ut, Re, Mi, Fa, Sol, La, where each verse started a note higher; "Ut" later became "Do". The equivalent syllables used in Indian music are: Sa, Ri, Ga, Ma, Pa, Dha, and Ni, while the 'bilinear music notation' system offers a chromatic method: Li, (Je), Ja, (Bo), Baw, Zu, (Zer or Fer), Fee, (De), Da, (Go), and Gaw. See also: solfège, sargam, Kodály hand signs. In China, Qi is used instead of Ti (Qi for 七, Chinese 7).
Tonic sol-fa is a type of notation using the initial letters of solfège.

Letter notation
The notes of the 12-tone scale can be written by their letter names A–G, possibly with a trailing sharp or flat symbol, such as A♯ or B♭. This is the most common way of specifying a note in English speech or written text.
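Because a letter name plus any trailing accidentals identifies one of the 12 pitch classes, the mapping can be expressed compactly. The sketch below is illustrative; the `pitch_class` helper and its lookup tables are assumptions, not a standard API.

```python
# Resolve a letter-notation name with optional sharps/flats to a
# pitch class 0-11 (C = 0). Accidentals may stack, as in "F##".

BASE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pitch_class(name):
    """'A#' -> 10, 'Bb' -> 10, 'C' -> 0."""
    pc = BASE[name[0]]
    for acc in name[1:]:
        pc += {"#": 1, "♯": 1, "b": -1, "♭": -1}[acc]
    return pc % 12

# A♯ and B♭ are enharmonic equivalents: both reduce to pitch class 10.
print(pitch_class("A#"), pitch_class("Bb"))   # 10 10
```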

Tablature
Tablature was first used in the Renaissance for lute music. A staff is used, but instead of pitch values, the fret or frets to be fingered are written instead. Rhythm is written separately and durations are relative and indicated by horizontal space between notes. In later periods, lute and guitar music was written with standard notation. Tablature caught interest again in the late 20th century for popular guitar music and other fretted instruments, being easy to transcribe and share over the internet in ASCII format. Websites like OLGA.net (currently off-line pending legal disputes) have archives of text-based popular music tablature.

Klavar notation
Klavar notation (or "klavarskribo") is a chromatic system of notation geared mainly towards keyboard instruments, which transposes the usual "graph" of music. The pitches are indicated horizontally, with "staff" lines in twos and threes like the keyboard, and the sequence of music is read vertically from top to bottom. A considerable body of repertoire has been transcribed into Klavar notation. Klavar notation eliminates the need for accidentals and key signatures, and its advocates claim that this facilitates music-reading.

12-note non-equal temperament
Sometimes the pitches of music written in just intonation are notated with the frequency ratios, while Ben Johnston has devised a system for representing just intonation with traditional western notation and the addition of accidentals which indicate the cents a pitch is to be lowered or raised.
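The cent values used by such systems come from a standard logarithmic conversion: an interval's size in cents is 1200 times the base-2 logarithm of its frequency ratio, so an octave is exactly 1200 cents. A small worked example:

```python
import math

def cents(ratio):
    """Size of an interval in cents, given its frequency ratio."""
    return 1200 * math.log2(ratio)

# A just (pure) fifth has the ratio 3:2, about 701.96 cents, so it is
# roughly 2 cents wider than the 700-cent equal-tempered fifth.
just_fifth = cents(3 / 2)
print(round(just_fifth, 2))          # 701.96
print(round(just_fifth - 700, 2))    # 1.96
```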

Chromatic staff notations
Over the past three centuries, hundreds of music notation systems have been proposed as alternatives to traditional western music notation. Many of these systems seek to improve upon traditional notation by using a "chromatic staff" in which each of the 12 pitch classes has its own unique place on the staff. Examples are the Ailler-Brennink notation, Tom Reed's Twinline notation, John Keller's Express Stave, and José A. Sotorrio's Bilinear Music Notation. These notation systems do not require the use of standard key signatures, accidentals, or clef signs. They also represent interval relationships more consistently and accurately than traditional notation. The Music Notation Project (formerly known as the Music Notation Modernization Association) has a website with information on many of these notation systems.

Graphic notation
The term 'graphic notation' refers to the contemporary use of non-traditional symbols and text to convey information about the performance of a piece of music. It is used for experimental music, which in many cases is difficult to transcribe in standard notation. Practitioners include Christian Wolff, Earle Brown, John Cage, Morton Feldman, Krzysztof Penderecki, Cornelius Cardew, and Roger Reynolds. See Notations, edited by John Cage and Alison Knowles, ISBN 0-685-14864-5.

Simplified Music Notation
Simplified Music Notation is an alternative form of musical notation designed to make sight-reading easier. It is based on classical staff notation, but sharps and flats are incorporated into the shape of the noteheads. Notes such as double sharps and double flats are written at the pitch at which they are actually played, but preceded by symbols called 'History Signs' to show that they have been transposed. The notation was designed to help people who struggle with sight-reading, including those who suffer from working memory impairments, dyslexia and other learning difficulties.

Parsons code
Parsons code is used to encode music so that it can be easily searched. This style is designed to be used by individuals without any musical background.
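The encoding is simple enough to show directly: each note after the first is reduced to U (up), D (down) or R (repeat) relative to its predecessor, with an asterisk conventionally marking the first note. A minimal sketch:

```python
def parsons(pitches):
    """Encode a melody's contour as Parsons code ('*' then U/D/R)."""
    code = "*"
    for prev, cur in zip(pitches, pitches[1:]):
        code += "U" if cur > prev else "D" if cur < prev else "R"
    return code

# Opening of "Twinkle, Twinkle, Little Star" as MIDI note numbers:
# C C G G A A G
print(parsons([60, 60, 67, 67, 69, 69, 67]))   # *RURURD
```

Because only contour survives, two melodies in different keys (or even different scales) can share a Parsons code, which is exactly what makes it useful for searching by a half-remembered tune.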

Braille music
Braille music is a complete, well developed, and internationally accepted musical notation system that has symbols and notational conventions quite independent of print music notation. It is linear in nature, similar to a printed language and different from the two-dimensional nature of standard printed music notation. To a degree Braille music resembles musical markup languages such as XML for Music or NIFF.

Integer notation
In integer notation, or the integer model of pitch, all pitch classes and intervals between pitch classes are designated using the numbers 0 through 11. It is not used to notate music for performance, but is a common analytical and compositional tool when working with chromatic music, including twelve-tone technique, serial, or otherwise atonal music.
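As a brief illustration, the arithmetic behind integer notation is modular: pitch classes are the numbers 0 to 11 (with C conventionally 0), and intervals between them wrap around at the octave.

```python
def interval(pc_a, pc_b):
    """Directed interval from pc_a up to pc_b, mod 12."""
    return (pc_b - pc_a) % 12

c_major_triad = [0, 4, 7]                       # C, E, G as integers
print([interval(0, pc) for pc in c_major_triad])  # [0, 4, 7]
print(interval(11, 1))                            # B up to C sharp: 2
```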


Computer musical notation
Beside notations developed for human readers and performers, there are also many computer oriented representations of music designed to either be turned into conventional notation, or read directly by the computer.
There are a great many software programs designed to produce musical notation; these are called musical notation software, or scorewriters. In addition to this software, there are many file formats used to store musical information that this software and other programs can convert into notation, sound, or some other usable form. In a sense, these file formats are a "notation" for computers.
The most common musical file format is probably the MIDI file format, which stores pitch and timing information about music (as well as velocity, volume, pitch bend, and modulation) and can be used to control a MIDI instrument which will produce the specified sound.
There are also hybrid formats, such as ABC notation, GNU LilyPond and MusicXML, that are text files that can be read and edited by a capable human, but can also be manipulated by the computer. One notable system is the NEUMES standard, which is being used to form a computerized catalog of Medieval plainchant that can be searched by melody, text, or any encoded aspect of the music. Similarly, the Mutopia Project maintains a library of scores available in such formats (though they are not searchable by content).
Finally there are notational forms that are not intended to be processed by computer, but are nonetheless commonly used to transmit information via computer, such as text file guitar tablature which has become extremely popular following the growth of the World Wide Web.

Perspectives of musical notation in composition and musical performance
According to Richard Middleton (1990, p. 104–6), and also Philip Tagg (1979, p. 28–32), musicology, and to a degree European-influenced musical practice, suffers from a 'notational centricity', a methodology slanted by the characteristics of notation.
Notation-centric training induces particular forms of listening, and these then tend to be applied to all sorts of music, appropriately or not. Musicological methods tend to foreground those musical parameters which can be easily notated...they tend to neglect or have difficulty with widened parameters which are not easily notated. Examples include the unique vocal style of Joni Mitchell and the String Quartets of Elliott Sharp. Because of the limitations of conventional musical notation, many present-day composers of various genres prefer to compose music which is either not notated, or notated only through the computer language of digital recording.
A further perspective on musical notation is provided in the "Composer's Note" from Fredrick Pritchard's "Brushed With Blue", Op. 55 "(No. 2, performed by Fredrick Pritchard)", pub. Effel Publications, 2002.
"The written language of music is at once indispensable yet hopelessly inadequate in conveying every detail of a musical concept. While musical scores are static, music itself is a living art, and as such requires the freedom to change, not only from bar to bar but from day to day and from year to year, the elements of experience and spontaneity unleashing the various potentials of a given work. The composer therefore entrusts the performer as co-creator of his art."


Patents

Recent US patent 6987220 on a new color based musical notation scheme
In some countries, new musical notations can be patented. In the United States, for example, about 90 patents have been issued on new notation systems. The earliest, U.S. Patent 1,383, was issued in 1839.




MUSIC THEORY

Music theory is the field of study that deals with how music works. It examines the language and notation of music. It identifies patterns that govern composers' techniques. In a grand sense, music theory distills and analyzes the parameters or elements of music – rhythm, harmony (harmonic function), melody, structure, form, and texture. Broadly, theory may include any statement, belief, or conception of or about music (Boretz, 1995). People who study these properties are known as music theorists. Some have applied acoustics, human physiology, and psychology to the explanation of how and why music is perceived.


Elements of music
Music has many different elements. The main elements are: rhythm, melody, harmony, structure, timbre, dynamics, and texture. Each element—and each of its sub-elements, if any—is discussed below.

Melody
A melody is a series of notes sounding in succession. The notes of a melody are typically created with respect to pitch systems such as scales or modes. The rhythm of a melody is often based on the inflections of language, the physical rhythms of dance, or simply periodic pulsation. Melody is typically divided into phrases within a larger overarching structure. The elements of a melody are pitch, duration, dynamics, and timbre.
In the context of theory, a piece of music may be melodically based. In this instance, a composer will first take a melody, and use that to create his work. A harmonically based piece, on the contrary, will focus on a chord progression, with the melody as a secondary or incidental factor of composition.

Pitch
Pitch is determined by the sound's frequency of vibration. It refers to the relative highness or lowness of a given tone: the greater the frequency, the higher sounding the pitch.
The process of assigning note names to pitches is called tuning. In modern practice, concert A is assigned a frequency of 440 Hz.
The difference in frequency between two pitches is called an interval. The most basic interval is the octave, which indicates either a doubling or halving of the base frequency. In mathematical terms, every A can be expressed as:
2^n × 440 Hz, where n is an integer.
Thus, the list of As within the human hearing range (approximately 20 Hz - 20,000 Hz) is: A 27.5, A 55, A 110, A 220, A 440, A 880, A 1,760, A 3,520, A 7,040, A 14,080.
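That list follows mechanically from the doubling formula above; keeping only the exponents whose results fall inside the stated hearing range gives exactly ten pitches:

```python
# All As of the form 2**n * 440 Hz inside roughly 20 Hz - 20,000 Hz.
# n ranges from -4 (27.5 Hz) to 5 (14,080 Hz); n = -5 and n = 6 would
# fall outside the audible range.
a_pitches = [2 ** n * 440 for n in range(-4, 6)]
print(a_pitches)   # 27.5 Hz up through 14,080 Hz
```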

Scales and modes
Notes can be arranged into different scales and modes. Western music theory generally divides the octave into a series of 12 notes that might be included in a piece of music. This series of twelve notes is called a chromatic scale; adjacent notes in it are a half step, or semitone, apart. Patterns of half and whole steps (a whole step, or tone, being two half steps) can make up a scale in that octave. The scales most commonly encountered are the seven-toned major, the harmonic minor, the melodic minor, and the natural minor. Other examples of scales are the octatonic scale and the pentatonic or five-toned scale, which is common in but not limited to folk music. There are scales that do not follow the chromatic 12-note pattern, for example in classical Persian, Indian and Arabic music. These cultures often make use of quarter tones, half the size of a semitone, as the name suggests. However, most contemporary compositions use the Western system.
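The construction of a scale from a step pattern can be shown concretely. The sketch below uses sharps-only spelling for simplicity, so it does not distinguish enharmonic spellings such as F# versus Gb:

```python
# Build a major scale by applying the whole/half-step pattern
# W W H W W W H to the 12-note chromatic collection.

CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]
MAJOR_PATTERN = [2, 2, 1, 2, 2, 2, 1]          # semitones per step

def major_scale(tonic):
    idx = CHROMATIC.index(tonic)
    scale = [tonic]
    for step in MAJOR_PATTERN[:-1]:            # final step returns to the octave
        idx = (idx + step) % 12
        scale.append(CHROMATIC[idx])
    return scale

print(major_scale("C"))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(major_scale("G"))   # ['G', 'A', 'B', 'C', 'D', 'E', 'F#']
```

The same function with a different pattern (for example [2, 1, 2, 2, 1, 2, 2] for natural minor) yields the other diatonic scales mentioned above.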
In music written using the system of major-minor tonality, the key of a piece determines the scale used. Transposing a piece from C major to D major will make all the notes two semitones (or one full step) higher. Even in modern equal temperament, changing the key can change the feel of a piece of music, because it changes the relationship of the composition's pitches to the pitch range of the instruments on which the piece is being performed. This often affects the music's timbre, as well as having technical implications for the performers. However, performing a piece in one key rather than another may go unrecognized by the casual listener, since changing the key does not change the relationship of the individual pitches to each other. A key change, or modulation, may occur during a piece, which is more easily heard as a difference of intervals in sound.

Rhythm
Rhythm is the arrangement of sounds in time. Meter animates time in regular pulse groupings, called measures or bars. The time signature or meter signature specifies how many beats are in a measure, and which value of written note is counted and felt as a single beat. Through increased stress and attack (and subtle variations in duration), particular tones may be accented. There are conventions in most musical traditions for a regular and hierarchical accentuation of beats to reinforce the meter. Syncopated rhythms are rhythms that accent unexpected parts of the beat. Playing simultaneous rhythms in more than one time signature is called polymeter. See also polyrhythm.
In recent years, rhythm and meter have become an important area of research among music scholars. Recent work in these areas includes books by Bengt-Olov Palmqvist, Fred Lerdahl and Ray Jackendoff, Jonathan Kramer, Christopher Hasty, William Rothstein, and Joel Lester.
Rhythm is one of the most central features of many styles of music, especially jazz and hip-hop. Both of these styles of music involve an underlying repeated rhythm or beat into which more complex patterns are interwoven.

Harmony
Harmony is the study of vertical sonorities in music. Vertical sonority refers to considering the relationships between pitches that occur together; usually this means at the same time, although harmony can also be implied by a melody that outlines a harmonic structure.
The vertical relationship between two pitches is referred to as an interval. A larger structure involving multiple pitches is called a chord. In common-practice and popular music, harmonies are generally tertian, meaning that the chords are built from stacked thirds. Therefore, a root-position triad (with the root note in the lowest voice) consists of the root note, a note a third above, and a note a third above that (a fifth above the root). Seventh chords add a third above the top note of a triad (a seventh above the root). There are some notable exceptions: in 20th-century classical music, many alternative types of harmonic structure were explored. One way to analyze harmony in common-practice music is through a Roman numeral system; in popular music and jazz a system of chord symbols is used; and in post-tonal music, a variety of approaches are used, most frequently set theory.
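The stacking of thirds described above can be expressed directly in semitones (major third = 4, minor third = 3, so the fifth above the root is 7). A sketch, with note spellings simplified to sharps:

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

# Tertian chords as semitone offsets above the root:
MAJOR_TRIAD = [0, 4, 7]      # root, major third, perfect fifth
MINOR_TRIAD = [0, 3, 7]      # root, minor third, perfect fifth
DOM_SEVENTH = [0, 4, 7, 10]  # major triad plus a minor third on top

def build_chord(root, intervals):
    """Stack the given intervals (in semitones) above the root note."""
    r = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(r + i) % 12] for i in intervals]

print(build_chord('C', MAJOR_TRIAD))  # ['C', 'E', 'G']
print(build_chord('G', DOM_SEVENTH))  # ['G', 'B', 'D', 'F']
```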

Consonance and dissonance
Consonance can be roughly defined as harmonies whose tones complement and increase each other's resonance, and dissonance as those which create more complex acoustical interactions (called 'beats'). A simplistic example is that of "pleasant" sounds versus "unpleasant" ones. Another way of thinking about the relationship concerns stability: dissonant harmonies are sometimes considered unstable, and to "want to move" or "resolve" toward consonance. However, this is not to say that dissonance is undesirable. A composition made entirely of consonant harmonies may be pleasing to the ear and yet boring, because there are no instabilities to be resolved.
Melody is often organized so as to interact with changing harmonies (sometimes called a chord progression) that accompany it, setting up consonance and dissonance. The art of melody writing depends heavily upon the choices of tones for their nonharmonic or harmonic character.
"Harmony" as used by music theorists can refer to any kind of simultaneity without a value judgement, in contrast with a more common usage of "in harmony" or "harmonious", which in technical language might be described as consonance.

Dynamics
In music, dynamics normally refers to the softness or loudness of a sound or note, e.g. pianissimo or fortissimo. Traditionally, these markings were written in Italian, but they are increasingly written in, or translated into, English. More broadly, every aspect of the execution of a given piece, whether stylistic (staccato, legato, etc.) or functional (velocity), may also be described as dynamics. The term is also applied to the written or printed musical notation used to indicate dynamics.

Texture
Musical texture is the overall sound of a piece of music commonly described according to the number of and relationship between parts or lines of music: monophony, heterophony, polyphony, homophony, or monody. The perceived texture of a piece may also be affected by the timbre of the instruments, the number of instruments used, and the interval between each musical line, among other things.
Monophony is the texture of a melody heard only by itself. If a melody is accompanied by chords, the texture is homophony. In homophony, the melody is usually but not always voiced in the highest notes. A third texture, called polyphony, consists of several simultaneous melodies of equal importance.

Form or structure
Form is a facet of music theory that explores the concept of musical syntax on a local and global level. The syntax is often explained in terms of phrases and periods (for the local level) or sections or genre (for the global scale). Examples of common forms of Western music include the fugue, the invention, sonata-allegro, canon, strophic form, theme and variations, and rondo. Popular music often makes use of strophic form, frequently in conjunction with the twelve-bar blues.


Theories of harmonization

Four-part writing
Four-part chorale writing is used to teach and analyze the basic conventions of Common-Practice Period music. Johann Sebastian Bach's four-voice chorales, written for liturgical purposes, serve as a model for students. These chorales exhibit a fusion of linear and vertical thinking. In analysis, the harmonic function and rhythm are analyzed, as well as the shape and implications of each of the four lines. Students are then instructed to compose chorales, often using given melodies (as Bach would have done), over a given bass line, or to compose within a chord progression, following rules of voice leading. Though traditionally conceived as a vocal exercise for soprano, alto, tenor, and bass, other common four-part settings include a brass quartet (two trumpets, French horn, and trombone) or a string quartet (violin I, violin II, viola and cello).
There are seven chords used in four-part writing that are based upon each note of the scale. The chords are usually given Roman Numerals I, II, III, IV, V, VI and VII to refer to triadic (three-note) chords which are based upon each successive note of the major or minor scale which the piece is in. Chords may be analyzed in two ways. Case-sensitive harmonic analysis would state that major-mode chords (I, IV, V7, etc.), including augmented (for example, VII+), would be notated with upper-case Roman numerals, and minor-mode chords, including diminished (ii, iii, vi, and the diminished vii chord, viio), would be notated with lower-case Roman numerals. Schenkerian harmonic analysis, patterned after the theories of Heinrich Schenker, would state that the mode does not matter in the final analysis, and thus all harmonies are notated in upper-case.
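The case-sensitive convention can be illustrated by building the seven diatonic triads of a major key and labeling each according to its quality. This sketch works in pitch-class arithmetic rather than real notation, and the numeral spellings follow the convention described above:

```python
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the tonic
NUMERALS = ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII']

def diatonic_triads():
    """Label each scale-degree triad of a major key: upper case for
    major, lower case for minor, lower case + 'o' for diminished."""
    labels = []
    for degree in range(7):
        root = MAJOR_SCALE[degree]
        third = (MAJOR_SCALE[(degree + 2) % 7] - root) % 12
        fifth = (MAJOR_SCALE[(degree + 4) % 7] - root) % 12
        if (third, fifth) == (4, 7):      # major triad
            labels.append(NUMERALS[degree])
        elif (third, fifth) == (3, 7):    # minor triad
            labels.append(NUMERALS[degree].lower())
        else:                             # (3, 6): diminished triad
            labels.append(NUMERALS[degree].lower() + 'o')
    return labels

print(diatonic_triads())  # ['I', 'ii', 'iii', 'IV', 'V', 'vi', 'viio']
```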
The skill in harmonising a Bach chorale lies in being able to begin a phrase in one key and to modulate to another key, either at the end of the first phrase, at the beginning of the next one, or perhaps by the end of the second phrase. Each chorale often has the ability to modulate to various tonally related areas: the relative major (III) or minor (vi), the dominant (V) or its relative minor (iii), the subdominant (IV) or its relative minor (ii). Other chromatic chords may be used, like the diminished seventh (made up of minor thirds piled on top of each other) or the secondary dominant (the dominant's dominant, a kind of major version of chord II). Certain standard cadences are observed, most notably IIb7 – V7 – I. The standard collection of J. S. Bach's chorales, edited by Albert Riemenschneider, is readily available. The student is greatly rewarded by playing the chorales at the piano, singing the lines individually and in groups, analyzing them by writing out the key and the chords employed, and by taking the melody and bass line from any chorale and trying to fill in the inner alto and tenor parts. Once this has been accomplished, the student can begin to complete their own bass lines, carefully watching for modulations, and then fill in the inner alto and tenor parts. Parallel octave and fifth motion is forbidden, and this often proves to be the pons asinorum of the average music student.

Music perception and cognition
Jackendoff and Lerdahl attempt to develop a "musical grammar." Using Jackendoff's background as a linguist and Lerdahl's compositional and theoretical background, a series of generative rules are defined to explain the hierarchical structure of tonal music. The rules focus on musical grouping, or the ways in which rhythmic groups of notes, as well as formal hierarchies, are perceived by listeners. Three sets of rules are given: "Grouping Well-Formedness Rules," "Grouping Preference Rules," and "Transformational Rules." These rules are designed to interpret how listeners group structures in tonal music. These groupings then play into the segmentation of events by listeners, which in turn determines the hierarchical structure perceived by the listener. Although this theory is well developed and complete, it is far from the only system designed to discuss music in this manner, and there is no acceptance of it as the sole theory by which to discuss perception of music (see Jonathan Kramer).

Serial composition and set theory
Twelve-tone serialism is a technique developed by Arnold Schoenberg in which all 12 pitches of the chromatic scale are ordered and repeated in a specific sequence. An ordered row of the 12 pitches is created, then all possible transformations are explored. The analytic techniques involve writing a 12x12 matrix of the tone row and all of its forms (transposition, inversion, retrograde, retrograde inversion). This technique is strongly associated with the composers of the Second Viennese School, but has also been incorporated into the languages of many other composers. Serialism does not always appear in the strict 12-note form; many composers have explored serialism using fewer than 12 notes, repeating tones inside the row, or serializing microtonal scales. Composers such as Pierre Boulez and his teacher Olivier Messiaen explored integral serialism, the serialization of all possible musical parameters (pitch, rhythm, dynamics, etc.). Composers such as Igor Stravinsky and Milton Babbitt developed personal approaches to serialism, Stravinsky using a method of rotational arrays and Babbitt using combinatoriality of the rows. Set theory is another approach to understanding atonal music, which may or may not be serial. Although more akin to the mathematical field of group theory than to mathematical set theory, the nomenclature has become standard in the musical community. Set theory represents the pitch classes as numbers to allow a methodology for examining music without tonic or triadic functional harmony. This technique allows for exploration of the construction of a serial tone row as well as of less strict atonal works. It has been extended with a great deal of mathematical rigor to both tonal and atonal systems by David Lewin in his transformational approach utilizing networks of related sets.
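The 12x12 matrix mentioned above can be computed from pitch-class integers: each row of the matrix is a transposition of the prime row, and the left-hand column gives its inversion. The example uses the row commonly cited for Schoenberg's Suite für Klavier, Op. 25, written as pitch classes with C = 0:

```python
def twelve_tone_matrix(row):
    """Build the 12x12 matrix: row i is the prime form transposed by
    the i-th interval of the inversion, so the left column is the
    inversion of the row starting on its own first note."""
    inversion_intervals = [(row[0] - p) % 12 for p in row]
    return [[(p + i) % 12 for p in row] for i in inversion_intervals]

# Row of Schoenberg's Suite, Op. 25 (E F G Db Gb Eb Ab D B C A Bb):
op25 = [4, 5, 7, 1, 6, 3, 8, 2, 11, 0, 9, 10]
matrix = twelve_tone_matrix(op25)
print(matrix[0])  # the prime form itself
```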


Music subjects

Notation
Musical notation is the symbolic representation of music (not to be confused with audio recording). Historically, and in the narrow sense, this is achieved with graphic symbols. Computer file formats have become important as well. Spoken language systems and even hand signs or other body language, are also used to symbolically represent music, primarily in teaching.
In standard Western notation, pitches are represented on the vertical axis and time is represented by notation symbols on the horizontal axis. Thus, notes are properly placed on the musical staff with appropriate time values to show musicians what note to play and when to play it. Also added are directions indicating the key, tempo, dynamics, accents, rests, etc.

Mathematics
Music and mathematics are strongly intertwined. As noted above, our concepts of pitch and temperament are both strongly tied to mathematics, and to acoustics in particular. Analysis often takes a mathematical route: musical set theory and transformational theory are both steeped in mathematics.
Some methods of composition are mathematically based. Iannis Xenakis developed several compositional methods based on stochastic processes. The French school of spectral music uses mathematical analysis of sounds to develop compositional materials.

Analysis
Analysis is the effort to describe and explain music using only the music itself as a starting point. "Analysis" is at once a catch-all term for the process of describing any portion of the music and the name of specific fields such as formal analysis and stylistic analysis. Formal analysis attempts to answer questions of hierarchy and form, while stylistic analysis attempts to describe the style of the piece. These two distinct sub-fields often coincide.
Analysis of harmonic structures is typically presented through a Roman numeral analysis. However, over the years, as music and the theory of music have both grown, a multitude of methods of analyzing music have emerged. Two very popular methods, Schenkerian analysis and Neo-Riemannian analysis, have dominated much of the field. Schenkerian analysis attempts to "reduce" music through layers of foreground, middleground, and, finally and importantly, the background. Neo-Riemannian (or transformational) analysis began as an extension of Hugo Riemann's theories of music, expanding Riemann's concepts of pitch and transformation into a mathematically rich language of analysis. While both theories originated as methods of analysis for tonal music, both have been extended to non-tonal music as well.

Ear training
Aural skills — the ability to identify musical patterns by ear, as opposed to by the reading of notation — form a key part of a musician's craft and are usually taught alongside music theory. Most aural skills courses train the perception of relative pitch (the ability to determine pitch in an established context) and rhythm. Sight-singing — the ability to sing unfamiliar music without assistance — is generally an important component of aural skills courses.

Sources
-> Boretz, Benjamin (1995) Meta-Variations: Studies in the Foundations of Musical Thought. Red Hook, New York: Open Space.
-> Bent, Ian D. and Anthony Pople. "Analysis." Grove Dictionary of Music and Musicians. London: Oxford University Press.
-> Jackendoff, Ray and Fred Lerdahl. "Generative Music Theory and its Relation to Psychology." Journal of Music Theory, 1981. New Haven: Yale University Press.
-> Kramer, Jonathan. The Time of Music. New York: Schirmer Books, 1988.
-> Lerdahl, Fred. Tonal Pitch Space. Oxford: Oxford University Press, 2001.
-> Lewin, David. Generalized Musical Intervals and Transformations. New Haven: Yale University Press, 1987.




MUSIC COGNITION

Music cognition is an interdisciplinary approach to understanding the mental processes that support musical behaviors, including perception, comprehension, memory, attention, and performance. Originally arising in fields of psychoacoustics and sensation, cognitive theories of how people understand music more recently encompass neuroscience, music theory, computer science, philosophy, and linguistics.


Overview

Music cognition came to be clearly recognized as a discipline in the early 1980s, with the creation of the Society for Music Perception and Cognition, the European Society for the Cognitive Sciences of Music, and the journal Music Perception. The field of music cognition focuses on how the mind makes sense of music as it is heard. It also deals with the related question of the cognitive processes involved when musicians perform music. Like language, music is a uniquely human capacity that arguably played a central role in the origins of human cognition. The ways in which music can illuminate fundamental issues in cognition have been underexamined, or even dismissed as epiphenomenal. However, cognition in music is increasingly acknowledged as fundamental to our understanding of cognition as a whole; hence music cognition should be able to contribute both conceptually and methodologically to cognitive science. Topics in the field include the following and others:

-> A listener's perception of grouping structure (motives, phrases, sections, etc.)
-> Rhythm and meter (perception and production)
-> Key inference
-> Expectation (including melodic expectation)
-> Musical similarity
-> Emotional, affective, or arousal response
-> Expressive, musical performance

Some aspects of cognitive music theory describe how sound is perceived by a listener. While the study of human interpretations of sound is called psychoacoustics, the cognitive aspects of how listeners interpret sounds as musical events is commonly known as music cognition.
In the 1970s, music was studied in the sciences mainly for its acoustical and perceptual properties, in what were then relatively novel disciplines such as psychophysics and music psychology. Music scholars criticized much of this research for focusing too much on low-level issues of sensation and perception, often using impoverished stimuli (e.g., small rhythmic fragments) or music restricted to the Western classical repertoire, as well as a general unawareness of the role of music in its wider social and cultural context. However, the cognitive revolution made scientists more aware of the role and importance of these aspects. While twenty years ago, music was hardly mentioned in any handbook of psychology (or appeared only in a subsection on pitch or rhythm perception), it is now recognized, along with vision and language, as an important and informative domain in which to study a variety of aspects of cognition, including expectation, emotion, perception, and memory. The role of music scholars and scientists in this research seems to be greater than ever. It could well be that music cognition will evolve into a prominent discipline contributing to our understanding of cognition as a whole.




COMPUTER MUSIC

Computer music is a term that was originally used within academia to describe a field of study relating to the applications of computing technology in music composition; particularly that stemming from the Western art music tradition. It includes the theory and application of new and existing technologies in music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, and psychoacoustics. The field of computer music can trace its roots back to the origin of electronic music, and the very first experiments and innovations with electronic instruments at the turn of the 20th century. More recently, with the advent of personal computing, and the growth of home recording, the term computer music is now sometimes used to describe any music that has been created using computing technology.


History

Much of the work on computer music has drawn on the relationship between music theory and mathematics. The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard. Mathematician Geoff Hill programmed CSIRAC to play popular melodies from the very early 1950s. In 1951 it publicly played the Colonel Bogey March, of which no known recordings exist. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice, as is current computer music practice.
The oldest known recordings of computer generated music were played by the Ferranti Mark I computer, a commercial version of the Baby Machine from the University of Manchester in the autumn of 1951. The music program was written by Christopher Strachey. During a session recorded by the BBC, the machine managed to work its way through Baa Baa Black Sheep, God Save the King and part of In the Mood. Subsequently, Lejaren Hiller (e.g., the Illiac Suite) used a computer in the mid 1950s to compose works that were then played by conventional musicians. Later developments included the work of Max Mathews at Bell Laboratories, who developed the influential MUSIC I program. Vocoder technology was also a major development in this early era.
Early computer music programs typically did not run in real-time. Programs would run for hours or days, on multi-million dollar computers, in order to generate a few minutes of music. John Chowning's work on FM synthesis, in the early 70s, and the advent of inexpensive digital chips and microcomputers opened the door to real-time generation of computer music. By the early 90s, the performance of microprocessor-based computers reached the point that real-time generation of computer music using more general programs and algorithms became possible.


Advances

Advances in computing power have dramatically affected the way computer music is generated and performed. Current-generation micro-computers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception.


Research

Despite the ubiquity of computer music in contemporary culture, there is considerable activity in the field, as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the study of computer and electronic music, including the ICMA (International Computer Music Association), IRCAM, Princeton Sound Lab, GRAME, SEAMUS (Society for Electro Acoustic Music in the United States), and a great number of institutions of higher learning around the world.


Computer Generated music

Computer-generated music is music composed by, or with the extensive aid of, a computer. Although any music which uses computers in its composition or realisation is computer-generated to some extent, the use of computers is now so widespread (in the editing of pop songs, for instance) that the phrase computer-generated music is generally used to mean a kind of music which could not have been created without the use of computers.
We can distinguish two groups of computer-generated music: music in which a computer generated the score, which could be performed by humans, and music which is both composed and performed by computers. There is also a large genre of music that is organized, synthesized, and created on computers.

Computer-generated scores for performance by human players
Many systems for generating musical scores existed well before the time of computers. One of these was the Musikalisches Würfelspiel, a system which used throws of dice to randomly select measures from a large collection of small phrases. When patched together, these phrases combined to create musical pieces which could be performed by human players. Although these works were not actually composed with a computer in the modern sense, the system used a rudimentary form of the random combinatorial techniques sometimes found in computer-generated composition.
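A minimal sketch of such a dice game: a table holds one pre-written fragment for every (dice total, measure position) pair, and two dice choose a fragment for each measure in turn. The fragment labels here are placeholders, not taken from any historical table:

```python
import random

# Hypothetical lookup table: one fragment label per (dice total, slot).
measure_table = {
    total: [f"measure_{total}_{slot}" for slot in range(8)]
    for total in range(2, 13)  # possible totals of two six-sided dice
}

def roll_piece(n_measures=8, seed=None):
    """Assemble a piece by throwing two dice for each measure slot."""
    rng = random.Random(seed)
    piece = []
    for slot in range(n_measures):
        total = rng.randint(1, 6) + rng.randint(1, 6)  # two dice
        piece.append(measure_table[total][slot])
    return piece

print(roll_piece(seed=1))
```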
The world's first digital computer music was generated in Australia by programmer Geoff Hill on the CSIRAC computer which was designed and built by Trevor Pearcey and Maston Beard, although it was only used to play standard tunes of the day. Subsequently, one of the first composers to write music with a computer was Iannis Xenakis. He wrote programs in the FORTRAN language that generated numeric data that he transcribed into scores to be played by traditional musical instruments. An example is ST/48 of 1962. Although Xenakis could well have composed this music by hand, the intensity of the calculations needed to transform probabilistic mathematics into musical notation was best left to the number-crunching power of the computer.
Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present exponent of this technique is David Cope. He wrote computer programs that analyse works of other composers to produce new works in a similar style. He has used this program to great effect with composers such as Bach and Mozart (his program Experiments in Musical Intelligence is famous for creating "Mozart's 42nd Symphony"), and also within his own pieces, combining his own creations with that of the computer.

Music composed and performed by computers
Later, composers such as Gottfried Michael Koenig had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalisation of his own serial composition practice. His work was not exactly like that of Xenakis: Koenig used mathematical abstractions and examined how far they could be explored musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht, Holland, in the 1970s.
Procedures such as those used by Koenig and Xenakis are still in use today. Since the invention of the MIDI system in the early 1980s, for example, some people have worked on programs which map MIDI notes to an algorithm and then can either output sounds or music through the computer's sound card or write an audio file for other programs to play.
Some of these simple programs are based on fractal geometry, and can map MIDI notes to specific fractals or fractal equations. Although such programs are widely available and are sometimes seen as clever toys for the non-musician, some professional musicians have given them attention as well. The resulting 'music' can be more like noise, or can sound quite familiar and pleasant. As with much algorithmic music, and algorithmic art in general, more depends on the way in which the parameters are mapped to aspects of these equations than on the equations themselves. Thus, for example, the same equation can be made to produce both a lyrical and melodic piece of music in the style of the mid-nineteenth century and a fantastically dissonant cacophony more reminiscent of the avant-garde music of the 1950s and 1960s.
Other programs can map mathematical formulae and constants to produce sequences of notes. In this manner, an irrational number can give an infinite sequence of notes, where each note is a digit in the decimal expansion of that number. This sequence can in turn be a composition in itself, or simply the basis for further elaboration.
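For instance, the decimal digits of √2 can be folded onto a C major scale; the digit-to-degree mapping used here is an illustrative choice, not a standard one:

```python
from decimal import Decimal, getcontext

getcontext().prec = 30  # enough digits of the expansion for a short melody
digits = str(Decimal(2).sqrt()).replace('.', '')[:16]

C_MAJOR = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

# Each decimal digit of the irrational number becomes a scale degree.
melody = [C_MAJOR[int(d) % 7] for d in digits]
print(melody)
```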
Operations such as these, and even more elaborate operations can also be performed in computer music programming languages such as Max/MSP, SuperCollider, Csound, Pure Data (Pd), Keykit, and ChucK. These programs now easily run on most personal computers, and are often capable of more complex functions than those which would have necessitated the most powerful mainframe computers several decades ago.

Diagram illustrating the position of CAAC in relation to other Generative music Systems

There exist programs that generate "human-sounding" melodies by using a vast database of phrases. One example is Band-in-a-Box, which is capable of creating jazz, blues and rock instrumental solos with almost no user interaction. Another is Impro-Visor, which uses a stochastic context-free grammar to generate phrases and complete solos.
Another 'cybernetic' approach to computer composition uses specialized hardware to detect external stimuli, which are then mapped by the computer to realize the performance. Examples of this style of computer music can be found in the mid-1980s work of David Rokeby (Very Nervous System), where audience/performer motions are 'translated' to MIDI segments. Computer-controlled music is also found in the performance pieces by the Canadian composer Udo Kasemets (1919-), such as the Marce(ntennia)l Circus C(ag)elebrating Duchamp (1987), a realization of the Marcel Duchamp process piece Music Errata, which used an electric model train to collect a hopper-car of stones to be deposited on a drum wired to an analog-to-digital converter, mapping the stone impacts to a score display (performed in Toronto by pianist Gordon Monahan during the 1987 Duchamp Centennial); or his installations and performance works (e.g. Spectrascapes) based on his Geo(sono)scope (1986), a 15x4-channel computer-controlled audio mixer. In these latter works, the computer generates sound-scapes from tape-loop sound samples, live shortwave or sine-wave generators.

Computer-Aided Algorithmic Composition
Computer-Aided Algorithmic Composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label "computer-aided composition" lacks the specificity of using generative algorithms: music produced with notation or sequencing software could easily be considered computer-aided composition. The label "algorithmic composition" is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as in Computer-Aided Design.


Machine Improvisation

Machine Improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic reinjection. This differs from other improvisation methods with computers, which use algorithmic composition to generate new music without performing analysis of existing music examples.

Statistical style modeling
Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine Improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite in the 1950s and Xenakis' use of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, and string searching with the factor oracle algorithm.
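A minimal sketch of the statistical idea, using a first-order Markov chain over notes: the model records every observed continuation of each note in a corpus, then recombines those transitions to generate new material. Real systems use far richer representations (suffix trees, factor oracles), but the recombination principle is the same:

```python
import random
from collections import defaultdict

def train(corpus):
    """Record every observed continuation of each note."""
    transitions = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=None):
    """Recombine observed transitions into a new sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: no continuation was ever observed
        out.append(rng.choice(choices))
    return out

corpus = ['C', 'D', 'E', 'C', 'D', 'G', 'E', 'C']
model = train(corpus)
print(generate(model, 'C', 8, seed=0))
```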

Uses of Machine Improvisation
Machine Improvisation encourages musical creativity by providing automatic modeling and transformation structures for existing music. This creates a natural interface with the musician without the need for coding musical algorithms. In live performance, the system re-injects the musician's material in several different ways, allowing a semantics-level representation of the session and a smart recombination and transformation of this material in real time. In its offline version, Machine Improvisation can be used to achieve style mixing, an approach inspired by Vannevar Bush's imaginary memex machine.

Implementations
A Matlab implementation of factor oracle machine improvisation is available as part of the Computer Audition toolbox.
OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov, and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the IRCAM Music Representations group.

Musicians working with machine improvisation
Gerard Assayag (IRCAM, France), Tim Blackwell (Goldsmiths College, Great Britain), George Bloch (Composer, France), Marc Chemillier (IRCAM/CNRS, France), Shlomo Dubnov (Composer, Israel / USA), Mari Kimura (Juilliard, New York City), George Lewis (Columbia University, New York City), Bernard Lubat (Pianist, France), Joel Ryan (Institute of Sonology, Netherlands), Michel Waisvisz (STEIM, Netherlands), David Wessel (CNMAT, California), Michael Young (Goldsmiths College, Great Britain)


Live coding
Live coding (sometimes known as 'interactive programming', 'on-the-fly programming', or 'just-in-time programming') is the name given to the process of writing software in real time as part of a performance. Historically, the technique has been around since computers were first used to produce early computer art, but it has recently been explored as a more rigorous alternative to laptop DJs who, live coders often feel, lack the charisma and pizzazz of musicians performing live.
Generally, this practice stages a more general approach: interactive programming, the writing of (parts of) programs while they run. Traditionally, most computer music programs have tended toward the old write/compile/run model, which evolved when computers were much less powerful. This approach has locked out code-level innovation by people whose programming skills are more modest. Some programs have gradually integrated real-time controllers and gesturing (for example, MIDI-driven software synthesis and parameter control). Until recently, however, the musician/composer rarely had the capability of modifying program code itself in real time. This legacy distinction is somewhat erased by languages such as ChucK, SuperCollider, and Impromptu.
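The difference between write/compile/run and interactive programming can be shown with a toy sketch (names hypothetical; real live-coding environments such as SuperCollider or ChucK do this inside a running audio engine): the performance loop looks up its pattern function afresh on every beat, so redefining the function mid-run changes the music without stopping it.

```python
# The pattern is resolved indirectly through mutable state, so it can be
# replaced while the performance loop is still running.
state = {"pattern": lambda beat: ["C", "E", "G"][beat % 3]}

def perform(n_beats):
    played = []
    for beat in range(n_beats):
        note = state["pattern"](beat)   # re-resolved on every beat
        played.append(note)
        if beat == 3:                   # the performer edits the code mid-run
            state["pattern"] = lambda b: ["D", "F"][b % 2]
    return played

notes = perform(8)
```

In a compiled write/compile/run workflow the arpeggio would be fixed for the whole run; here the output switches pattern halfway through, which is the essence of modifying program code during performance.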
TOPLAP, an ad hoc conglomerate of artists interested in live coding, was set up in 2003 and promotes the use, proliferation and exploration of a range of software, languages and techniques to implement live coding. This is a parallel and collaborative effort with, for example, research at the Princeton Sound Lab, the University of Cologne, and the Computational Arts Research Group at Queensland University of Technology.


