Cover and article title page from SOURCE issue no. 8

The displays shown on these pages were generated electronically from audio–frequency information. The elements of sound, image, and color operate together in time, combining to become perceptible to both eye and ear over a broad range of kinetic interactions.

My involvement with visual displays produced from audio sources began in 1965 while I was working at the University of Toronto Electronic Music Studio. I had become increasingly aware of the lack of visual interest in most performances of electronic music, unless the music played an accompanying role to dance, films, or some form of theatre. While electronic music has provided unprecedented opportunities for total musical abstraction, its application in most cases has denied participants the visual and tactile stimuli that they have learned to associate with musical processes. My interest in this property of electronic music motivated an exploration into methods for generating electronic sounds and images simultaneously. I began a series of experimental works with the generic title Video, working at first with an old 17–inch monochrome television set that I converted into an x–y display device. The first pieces were composed on two–channel tape, using electronic generators mixed through a touch–sensitive keyboard, a ring modulator, a reverberation plate as an acoustical delay line, and other standard equipment in the Toronto studio. Development of techniques for z–axis modulation, RF signal injection, and color television followed shortly thereafter.
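The principle of the x–y conversion can be illustrated in a few lines: one audio channel drives the horizontal deflection and the other the vertical, so two tones in a small integer frequency ratio trace a stable Lissajous figure on the screen. The sketch below is illustrative only; the frequencies, phase, and sample rate are assumptions, not values from the original Toronto setup.

```python
import math

def xy_trace(freq_x, freq_y, phase=0.0, sample_rate=48000, duration=0.01):
    """Treat two audio-frequency sine tones as x and y deflection voltages.

    Returns a list of (x, y) beam positions; on an x-y display this
    traces a Lissajous figure when freq_x:freq_y is a small integer ratio.
    """
    n = int(sample_rate * duration)
    points = []
    for i in range(n):
        t = i / sample_rate
        x = math.sin(2 * math.pi * freq_x * t)          # channel 1 -> horizontal
        y = math.sin(2 * math.pi * freq_y * t + phase)  # channel 2 -> vertical
        points.append((x, y))
    return points

# A 2:3 frequency ratio with a 90-degree phase offset yields a closed figure.
trace = xy_trace(200.0, 300.0, phase=math.pi / 2)
print(f"{len(trace)} beam positions over 10 ms")
```

Any two-channel audio source, not just sine generators, can drive such a display; complex program material produces the unpredictable moving figures described here.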

In 1966 I composed Musica Instrumentalis for David Tudor. He has exploited the stereophonic, or x–y, properties of his bandoneon as an audio/video generating device in a number of memorable performances of this piece. A performance of Musica Instrumentalis requires constant manipulation of acoustical space, by varying microphone placement and mixing, producing audio feedback, and gradually moving the musical instrument about as it is played. Originally, the score consisted of colored drawings of images that the performer tries to reproduce on the color TV screen as a result of his sonic activities. A new version of the score being readied for publication will instead use photographs of color TV images recorded during one of our performances.

Later in 1966, Tudor asked me to produce the video images for his Bandoneon ! (Bandoneon Factorial), to be performed in New York during October as part of “9 Evenings: Theatre & Engineering” under the auspices of Experiments in Art and Technology, Inc. (E.A.T.). For the first time, I was able to work with television projectors that could display x–y images on a large scale. Unfortunately, the projectors available for Bandoneon ! were a type using small, high–intensity cathode–ray tubes and Schmidt optics, a design that becomes almost self–destructive when converted to this application. We were able to sustain the projected images only for relatively short periods during the two performances. In February 1968, Tudor and I gave a performance of Musica Instrumentalis in London, Ontario, with a monochrome Eidophor television projector. The sophisticated scanning system of the Eidophor, consisting of a high–intensity incandescent light source reflected off a modulated liquid surface, appeared in theory to be adaptable to my requirements. However, the operator in charge refused my request to modify the Eidophor internally. The images were ultimately projected for the audience during the performance, but in an unsatisfactory manner. The Eidophor simply became the final link in a closed–circuit television system, with a camera monitoring the displays on an auxiliary cathode–ray screen, unseen by the audience. The resulting 525–line scanned projections lacked the brilliance and resolution that I had hoped to achieve.

Video II (B)


On March 5, 1968, David Behrman, John Cage, Marcel Duchamp, Teeny Duchamp, Gordon Mumma, David Tudor, and I performed Reunion at Ryerson Polytechnical Institute in Toronto. During this occasion, Tudor asked if I could connect the electronic audio signals that he was generating into my television equipment. This successful interconnection of circuitry led to a joint work, Video III, which we performed at the invitation of Pauline Oliveros at the University of California, San Diego, on May 10, 1968. Another performance of Reunion, including Video III, was given at the Electric Circus in New York on May 27, 1968. A significant difference between the production of Video III and the techniques of the previous works is the interconnection of a totally electronic system. No acoustical devices are used except for the loudspeakers in the sound system. But as in Video I, Video II (B), Video II (C), and Musica Instrumentalis, all information required for x–y displays, monochrome z–axis modulation, chrominance, luminance, color shifting and synchronization, and two (or more) channels of audio is derived from signals having a common timebase.



A highly satisfactory technique for electronic x–y projection finally became accessible when I was asked to develop a laser display device for E.A.T. The ultimate destination of the system was to be Expo ’70 in Osaka as part of the art/technology environment that E.A.T. was commissioned to provide for the Pepsi–Cola Pavilion. My participation was requested by David Tudor (1926–1996), who had been chosen as one of four “core artists” in the E.A.T. project. I sought the collaboration of Carson Jeffries (1922–1995), a physicist, sculptor, and developer of a highly sophisticated group of kinetic systems. The laser deflection apparatus that we built for Expo ’70 became VIDEO/LASER II.


VIDEO/LASER II in deflection mode

The first stage in the laser project was the public performance of a new Cross–Jeffries–Tudor work, Audio/Video/Laser, using our VIDEO/LASER I, sponsored by the Tape Music Center at Mills College on May 9, 1969. Jeffries and I borrowed a collection of expensive electronic and electro–optical equipment for VIDEO/LASER I, the first multi–color x–y laser projection system, from several San Francisco Bay Area manufacturers:

  1. A Coherent Radiation Laboratories Model 52K krypton ion laser system, including laser head, power supply, and an accessory prism wavelength selector—since this laser produces several lines of output across the visible spectrum simultaneously, its normal output is essentially white. Individual colors may be selected, one at a time, with the installation of the wavelength selector; or, the total output may be separated into numerous red, yellow, yellow–green, green, blue–green, and blue beams by an external prism.
  2. Two lightbeam (or mirror) galvanometers with magnet banks and a multi–channel galvanometer amplifier from Honeywell Test Instruments Division—such “galvos” are transducers and usually operate from DC (0 Hz) into the audio range. Their most common application is the deflection of a finely focused light source in recording oscillographs.
  3. A wideband laser light modulator and Glan polarizer from Technical Operations, Inc.—these devices are often used for information transmission over laser beams, while our application was to produce color effects in the projected x–y displays. In VIDEO/LASER I, the light modulator was driven by the 120–volt output of a McIntosh MC–60 audio amplifier.

VIDEO/LASER II in diffraction mode

Other equipment for the performance included a two–channel sound system, Tudor’s and my audio devices, and my television equipment.

The performance of VIDEO/LASER I demonstrated the need for improved galvos, or scanners, for the Expo project, to allow simultaneous and independent control of separate red, yellow, green, and blue displays. Jeffries and I concluded that custom–made scanners with smaller, self–contained magnetic structures would be required. We ordered four x and four y scanners, to be built in accordance with our specifications, from the Transducer Division of Bell and Howell. The Data Instruments Division of the same company furnished two 4–channel scanner amplifiers. We completed the final version of the improved system for Expo ’70 in December 1969 and readied it for shipment in January and February 1970. Jeffries hand–tooled the precision machine work required for mounting the laser, the scanners, and the other optical and acoustical hardware in his Berkeley sculpture studio. To augment the functions of the scanner amplifiers, he and I each built electronic control panels.

Video II (C)


During the short time that we were able to work with our completed system, Jeffries, Tudor, and I investigated its application to the very special properties of laser light, and we all contributed additional audio/video/laser pieces in this new medium. The works for scanned projections took advantage of the sharpness, intensity, and pure spectrum colors of the rapidly moving collimated laser beams. Stationary scanned projections (usually produced with stable function generators) permitted the generation of intersecting colored surfaces through mist or smoke. The connection of photocells into the system led to regenerative conditions of optical feedback. With photocells, amplitude followers, or function generators driving the scanners at very slow scanning rates, the single–frequency nature of the laser’s coherent radiation produced large kinetic diffraction patterns as the beams swept through various translucent optical materials. These patterns, high in information content but not directly controllable, underwent totally unpredictable organic transformations in time as the scanners responded to the amplitudes of our input signals.
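The wavelength dependence underlying these diffraction effects can be illustrated with the grating relation m·λ = d·sin θ: the longer red krypton lines diffract at larger angles than the blue, which is why the white beam spreads into its component colors. The sketch below uses the principal visible krypton-ion laser lines; the 600 lines/mm pitch is an illustrative assumption and stands in for whatever periodic structure the translucent material presents, which in performance was neither regular nor controllable.

```python
import math

# Principal visible krypton-ion laser lines, in nanometres.
KRYPTON_LINES_NM = {"red": 647.1, "yellow": 568.2, "green": 520.8, "blue": 476.2}

def first_order_angle_deg(wavelength_nm, lines_per_mm=600):
    """First-order (m = 1) diffraction angle from m * lambda = d * sin(theta)."""
    d_nm = 1e6 / lines_per_mm  # grating period in nanometres
    return math.degrees(math.asin(wavelength_nm / d_nm))

for color, wl in KRYPTON_LINES_NM.items():
    print(f"{color:6s} {wl:6.1f} nm -> {first_order_angle_deg(wl):5.2f} deg")
```

Sweeping the beam slowly across such a structure shifts every order of every line at once, which accounts for the large, slowly transforming patterns described above.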


Tudor, Jeffries, Cross

At Expo ’70, the laser deflection system projected a shower of constantly changing coherent light onto visitors to the Pepsi Pavilion as they walked through the “clam room” (so named for its ellipsoidal shape and dark interior). Here, the people themselves formed the projection surface, encouraged to let their own movements and reactions become a part of the total system.