

WaveRiding consists of a feature extractor, a wave-table audio-visual synthesiser and a Wekinator machine learning model. The input and output devices have been developed in Max/MSP, implementing OSC, wavetable synthesis and Jitter for the visuals.

The input code is a feature extractor called WaveRider. It extracts the gyroscope data (pitch, roll, yaw) from the iPhone and sends it to Wekinator for machine learning training.

The output code is Wavetable-VisualSynth, a wave-table audio-visual synthesiser developed in Max/MSP using real-time DSP and Jitter (the visuals library). The synth is based on the th.polar.wave~ and th.wave.table objects by Timo Hoogland.

Machine learning is used as a sound exploration method: a Wekinator regression-based model controls 13 parameters of the synthesiser from the iPhone gyroscope (pitch, roll, yaw). The algorithm used to train the model is a neural network (NN) with 1 hidden layer.

Requirements and dependencies (operating system, hardware and external libraries needed to run the project):

- PC or Mac (developed and tested on Windows 10 and OSX 10.14.5)
- iOS handset (developed and tested on an iPhone 12 running iOS 14.8)
- Max/MSP (developed and tested on version 8.1)
- For WaveRider_input, the CNMAT external library is needed.
- For Wavetable-VisualSynth, the th.polar.wave~ and th.wave.table objects by Timo Hoogland are needed.

Instructions for how to run and use the project:

a. Ensure that the dependencies listed above are installed.

b. Open the GyrOSC app on your handset. Type your computer's IP address and make sure the port is set to 9999. From the bottom menu, select the second icon from the left, turn the Gyroscope on and switch everything else off; you should now see in yellow that only the Gyroscope has been selected. Return to the IP address page by selecting the first icon from the left (bottom menu). You should now see the gyroscope values within the WaveRider_input app in Max/MSP.

c. Open WekinatorProject_waverider.wekproj. By default this project listens on port 6448 and sends on port 12000.

d. Open the Wavetable-VisualSynth_output file. You should see the OSC-in LED turn green when you interact with your handset. Ensure that you select your audio and MIDI devices in the Max/MSP Audio/MIDI preferences, turn on the DSP from the bottom-right corner of the app, and click World On to start the rendering and synthesis on the Wavetable-VisualSynth. Adjust the Wekinator sliders to morph/explore audio-visual presets (you can also sound design presets directly on the synthesiser). When you are happy, interact with your handset to train your model/s.
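The OSC plumbing (handset gyroscope → WaveRider_input on port 9999 → Wekinator listening on port 6448) can be illustrated outside Max/MSP with a short sketch. This is not part of the project's patches: it is a stand-alone Python encoder showing how three gyroscope floats would be packed into an OSC message for Wekinator. The `/wek/inputs` address is Wekinator's documented default input address; the pitch/roll/yaw values are made up.

```python
import struct

def osc_message(address: str, floats: list) -> bytes:
    """Encode a minimal OSC message carrying float arguments.

    OSC strings are null-terminated and padded to a 4-byte boundary;
    floats are 32-bit big-endian.
    """
    def pad(s: bytes) -> bytes:
        return s + b"\x00" * (4 - len(s) % 4)  # always at least one null

    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for v in floats:
        msg += struct.pack(">f", v)
    return msg

# Hypothetical gyroscope readings (pitch, roll, yaw) sent as three floats
# to Wekinator's default input address and port.
pitch, roll, yaw = 0.1, -0.5, 1.2
packet = osc_message("/wek/inputs", [pitch, roll, yaw])

# To actually send it (commented out so the sketch stays side-effect free):
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 6448))
```

In the actual project this encoding and sending is handled inside Max/MSP (via the CNMAT externals) and by GyrOSC on the handset; the sketch only makes the wire format visible.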

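The shape of the Wekinator mapping described above (a regression model with one hidden layer, taking 3 gyroscope inputs and producing 13 synthesiser parameters) can be sketched in a few lines of NumPy. This is an illustration of the model architecture only, not Wekinator's internal implementation; the hidden-layer size and the random weights are placeholders, and in practice Wekinator learns the weights from the recorded training examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 3 gyroscope inputs (pitch, roll, yaw),
# one hidden layer, 13 synthesiser parameters as regression outputs.
# The hidden size (8) is an arbitrary placeholder.
N_IN, N_HIDDEN, N_OUT = 3, 8, 13

W1 = rng.normal(size=(N_IN, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(size=(N_HIDDEN, N_OUT))
b2 = np.zeros(N_OUT)

def predict(gyro: np.ndarray) -> np.ndarray:
    """One-hidden-layer NN regression: gyro (3,) -> synth parameters (13,)."""
    h = np.tanh(gyro @ W1 + b1)  # hidden layer with tanh nonlinearity
    return h @ W2 + b2           # linear output layer, as used for regression

params = predict(np.array([0.1, -0.5, 1.2]))
print(params.shape)  # (13,)
```

Each of the 13 outputs would then be routed (as an OSC message on port 12000) to one parameter of the Wavetable-VisualSynth.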