

The NessStretch: This Paul Stretch Goes to 9

Multi-mapped Neural Networks for Control of High Dimensional Synthesis Systems

This paper outlines NN Synths, a software instrument that uses multi-mapped, regression-based deep learning neural networks to control multiple high-dimensional synthesizers. The paper discusses the reasoning behind the use of high-dimensional synthesizer algorithms and then presents the designs of two of the individual software synthesizers in use in the NN Synths instrument.
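The general idea of a regression-based mapping from a low-dimensional controller to a high-dimensional synthesizer can be sketched as follows. This is only an illustration, not the NN Synths code: the paper describes deep neural networks, while this sketch substitutes a plain linear least-squares fit, and the controller points, preset values, and parameter count are all invented.

```python
import numpy as np

# Hypothetical training data: four XY-pad corner positions, each paired
# with a hand-designed preset of 16 synthesizer parameters.
controller_points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
presets = np.random.default_rng(0).uniform(size=(4, 16))

# Fit a linear regression (with a bias column) from the 2-D controller
# space to the 16-D parameter space.
X = np.hstack([controller_points, np.ones((4, 1))])
weights, *_ = np.linalg.lstsq(X, presets, rcond=None)

def params_for(x, y):
    """Interpolate a full 16-parameter preset from an XY position."""
    return np.array([x, y, 1.0]) @ weights
```

Once fitted, any position on the pad yields a complete parameter set, so a performer can sweep continuously between presets instead of setting each parameter by hand; a neural network replaces the linear map when the desired mapping is nonlinear.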
The source code for our project, implemented in Rust, Python, and SuperCollider, can be viewed on its GitHub page by following the link below:

Software - The NessStretch (with Alex Ness):

The NessStretch implements a phase-randomized rFFT time-stretch algorithm that splits the original sound file into 9 discrete frequency bands and uses a decreasing frame size to correspond to increasing frequency. It is a refinement of Paul Nasca's excellent PaulStretch algorithm: where PaulStretch uses a single frame size throughout the entire frequency range, the NessStretch's layered analysis bands are a better match for human frequency perception, and do a better job of resolving shorter, noisier high-frequency sounds (sibilance, snares, etc.).
Lastly, Mihály Csíkszentmihályi's concept of flow psychology is applied to the three stages of creation in the laptop performance process: software design, patch design, and performance.

A couple of years ago, I also made a short video outlining the main design features of the software.

The design philosophy for the Live Modular Instrument is outlined in my doctoral dissertation, Laptop Improvisation in a Multi-Dimensional Space, which can be found on Columbia's Academic Commons website:

Laptop Improvisation in a Multi-Dimensional Space

Using information theory as a foundation, this paper defines virtuosity in the context of laptop performance, outlines a number of challenges that face laptop performers and software designers, and provides solutions that have been implemented in the author's own software environment. A summary of the argument is that by creating a multi-dimensional environment of Sonic Vector Spaces (see page 17) and implementing a method for quickly traversing that environment, a performer is able to create enough information flow to achieve laptop virtuosity. At the same time, traversing this multi-dimensional environment produces a perceptible sonic language that can add structural signposts for the listener to latch on to in performance. Specifics of the author's personal approach to this problem, a software environment coded in SuperCollider, are then shared.

Randall is a composer, sound artist, software developer, and artistic director of the Chicago Composers Orchestra. He seeks to tell stories through music while exploring theatrics, decision-making processes, and technology. He uses technology to create interactive music, algorithmic compositions, and sound design. His works have been presented in concert halls, galleries, and abandoned warehouses.

Over the past decade-plus I have been developing a software instrument, written in SuperCollider, for live performance with instrumentalists. This software is the main focus of my research, and almost every piece of electronic music found on my website uses it in some way. I use it to perform composed and improvised music with groups like Wet Ink Ensemble, ICE, The Evan Parker Electro-Acoustic Ensemble, and The Peter Evans Quintet. The non-linear design of my software gives me a unique versatility as a performer, able to approach any musical situation with attentive sensitivity, and to lead and follow in any group. The source code for my project can be viewed on its GitHub page by following the link below:

Writing - Laptop Improvisation in a Multi-Dimensional Space:
