
IN PROGRESS
EA+IA
Artificial Intelligence Integration Project
for the creation of live musical-pictorial works
The conceptual basis of the ELECTRO-ACRYLIC project is an observation of, and reflection on, the “Low Tech” and “Hi Tech” of our time.
ELECTRO-ACRYLIC has always asked itself whether “Low Tech” influences “Hi Tech”, whether one needs the other, and vice versa.
With the advent of artificial intelligence, this question seems even more interesting to explore. In the case of ELECTRO-ACRYLIC, it is perfectly clear that there would be no interest in having AI do all the work. What would be the point of a performance such as ELECTRO-ACRYLIC offers, where artists take risks by creating a pictorial, sound and visual work live? The whole appeal of this performance is seeing a unique work created each time through the interaction of humans who bring their technical and artistic knowledge, their sensitivity and the complicity they have developed over time. There can be nothing artificial in this. This is where humanity reveals all its strength and creativity.
An artificial intelligence tool is nevertheless very useful to the ELECTRO-ACRYLIC concept, because the practice is complex and demands an operational technical speed that sometimes comes at the expense of creative spontaneity. This is where the “Low Tech” of the artistic dimension and the “Hi Tech” of the execution and production of the work enter into an interesting relationship.
The first exploration of ELECTRO-ACRYLIC then takes on its full meaning.


The aim is to integrate artificial intelligence into the creation of a live work, not merely as a random reactive composer of incoming data, but as a proactive intelligence responding to the creative intentions of the artists who perform with it live.
We want to open the collaborative, creative and intuitive potential of AI to react in real time with other creators.
AI: an intuitive and creative ally.
We demonstrated with the Electro-acrylic project that the interaction of Low Tech and Hi Tech could generate a creative loop in which it was no longer possible to determine which was the source.
With the intervention of artificial intelligence, we aim to show that AI is not confined to an efficient executive role but can, above all, realize its potential for creative and inter-reactive free will in a real-time context, and be a true ally, like a member of a band who brings their style, skills and knowledge while integrating into the whole.
APPROACH
Like an artist who builds a personality and style with and in relation to others, and with the elements that surround them, this specialized AI must be able to build its own artistic personality.
We want to break the misconception that AI is either master or slave. It is not “thanks to” or “because of” the AI, but “with” it that creation takes place.
It must be a full participant, according to its acquired skills.
We believe that the contribution of art, culture, experience and general knowledge can make AI a better intelligence.
DRAFT PROCESS
Machine learning
Audio automation and sequencing
The audio data is picked up by the sound designer. The AI already intervenes at this stage, easing the sound designer's work through a series of automations (recording, loop capture, etc.) that it will have integrated at the sound designer's request during the preparatory rehearsals.
The sound designer will modulate the audio data and build the first basic rhythm sequences, which he will play back immediately. He will then modulate other incoming data and deposit it in a database that the AI will use to establish its own sequences.
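As a rough illustration of this loop database, here is a minimal Python sketch; all names (LoopBank, deposit, build_sequence) are hypothetical and not part of any existing system. The sound designer deposits loops, and the AI assembles a sequence to fill the basic rhythmic pattern.

```python
import random

class LoopBank:
    """Hypothetical store for loops deposited by the sound designer."""

    def __init__(self):
        self.loops = []  # each loop: a name and a length in beats

    def deposit(self, name, beats):
        # The sound designer deposits a modulated loop into the database.
        self.loops.append({"name": name, "beats": beats})

    def build_sequence(self, total_beats, rng=random):
        # The AI assembles a sequence from stored loops until it fills
        # the requested number of beats of the basic rhythmic pattern.
        sequence, filled = [], 0
        while filled < total_beats and self.loops:
            loop = rng.choice(self.loops)
            sequence.append(loop["name"])
            filled += loop["beats"]
        return sequence

bank = LoopBank()
bank.deposit("kick_pattern", 4)
bank.deposit("hihat_shuffle", 2)
seq = bank.build_sequence(16, rng=random.Random(0))
```

In a real system the sequence would of course be driven by learned preferences rather than a random draw; the sketch only shows the data flow from the designer's deposits to the AI's assembled sequence.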
Automation and video sequencing
The video process is very similar to the audio one, but here the source is captured by a camera. The AI intervenes by automating color identification and the segmentation of shapes or spots on the canvas. Using audio and video algorithms, the AI will compose sequences from the material created by the two designers and arrange them according to the basic rhythmic pattern.
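A minimal sketch of the color-identification step, assuming the camera frame arrives as a grid of RGB pixels; the palette and function names are illustrative assumptions, and a real system would use a computer-vision library.

```python
# Hypothetical palette the AI might be trained to recognise on the canvas.
PALETTE = {
    "red": (255, 0, 0),
    "blue": (0, 0, 255),
    "yellow": (255, 255, 0),
    "white": (255, 255, 255),
}

def nearest_color(pixel):
    # Classify a pixel by the closest palette entry (squared RGB distance).
    return min(PALETTE,
               key=lambda name: sum((p - c) ** 2
                                    for p, c in zip(pixel, PALETTE[name])))

def identify_colors(frame):
    # Count how much of the frame each palette color covers.
    counts = {name: 0 for name in PALETTE}
    for row in frame:
        for pixel in row:
            counts[nearest_color(pixel)] += 1
    return counts

# A toy 2x3 "camera frame": red strokes on a white canvas.
frame = [
    [(250, 10, 10), (245, 20, 5), (255, 255, 250)],
    [(240, 0, 30), (250, 250, 245), (255, 255, 255)],
]
counts = identify_colors(frame)
```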
Supervised and unsupervised learning
Source and data
The source is the canvas and its sensors. The starting data are the impulses and vibrations produced on the canvas by the percussionist painter as his work develops.
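To make this starting data concrete, here is a hedged sketch of how impulses could be detected in the vibration-sensor signal; the threshold value and names are illustrative assumptions, not the project's actual method.

```python
def detect_impulses(samples, threshold=0.5):
    """Detect percussive impulses in a vibration-sensor signal.

    An impulse is counted each time the absolute amplitude rises
    above the threshold after having been below it.
    """
    impulses = []
    armed = True
    for i, s in enumerate(samples):
        if armed and abs(s) >= threshold:
            impulses.append(i)   # index where the stroke begins
            armed = False
        elif abs(s) < threshold:
            armed = True
    return impulses

# Quiet signal interrupted by two strokes of the percussionist painter.
signal = [0.02, 0.05, 0.9, 0.7, 0.1, 0.03, -0.8, -0.6, 0.04]
onsets = detect_impulses(signal)  # → [2, 6]
```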
Deep learning AI
The AI must first get to know the creative reflexes, artistic intentions and style of each of the three artists, and ultimately develop its own style that fits into the whole.
Musical style and structures
The AI will analyze a large set of musical pieces created by the sound designer, both pieces from Electro-acrylic performances and his own pieces.
The sound designer will also submit pieces of his choice that reflect his tastes and are close to what Electro-Acrylic does.
The AI will analyze the compositional structure of the many musical pieces the sound designer submits to it, creating neural networks specialized in musical structures.
Contextual learning - Jam sessions
The AI will learn to react during rehearsals by composing with its skills (via specialized neural networks), in deep learning and in reactive and generative modes.
Predictive mode
Analysis and reactivity of creative intensity
The AI will also analyze the variations in amplitude of the painter's percussive velocity on the canvas to adjust the intensity of the musical and visual composition.
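One simple way to realize this mapping, sketched here under assumed parameters, is an envelope follower: the intensity rises quickly when the painter strikes hard (attack) and decays slowly when he eases off (release).

```python
def follow_intensity(velocities, attack=0.5, release=0.1):
    """Map percussive velocities to a smoothed intensity curve.

    Hypothetical attack/release coefficients: intensity rises fast
    with strong strokes and decays slowly between them.
    """
    intensity, curve = 0.0, []
    for v in velocities:
        coeff = attack if v > intensity else release
        intensity += coeff * (v - intensity)
        curve.append(round(intensity, 3))
    return curve

# Two strong strokes followed by a pause.
curve = follow_intensity([0.0, 1.0, 1.0, 0.2, 0.0, 0.0])
```

The resulting curve could then drive any parameter of the musical and visual composition, e.g. a filter cutoff or the density of the video sequence.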
Predictiveness of musical structure
With the sum of the previous data, the AI will be able to establish its predictive patterns, based on recurrent, one-off and/or global stylistic patterns integrated through deep learning.
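As a stand-in for those deep-learning models, a first-order Markov chain over pattern labels shows the predictive idea in miniature; the corpus and labels below are invented purely for illustration.

```python
from collections import defaultdict

def train_transitions(pattern_sequences):
    """Learn first-order transition counts between rhythmic patterns.

    A toy substitute for the predictive models: guess the next
    pattern from what usually follows the current one.
    """
    transitions = defaultdict(lambda: defaultdict(int))
    for seq in pattern_sequences:
        for a, b in zip(seq, seq[1:]):
            transitions[a][b] += 1
    return transitions

def predict_next(transitions, current):
    # Pick the most frequently observed follower of the current pattern.
    followers = transitions.get(current)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Hypothetical pattern labels drawn from past performances.
corpus = [
    ["intro", "groove", "break", "groove", "outro"],
    ["intro", "groove", "groove", "break", "groove", "outro"],
]
model = train_transitions(corpus)
nxt = predict_next(model, "break")  # → "groove"
```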
Learning and predicting visual composition
We follow the same learning process as for audio, but this time for the visuals.
Image and texture bank determined by the visual designer.
The pictorial style
The AI will analyze a set of canvases created during Electro-Acrylic performances, including the details, textures and colors of these canvases.
Inter-reactive, intuitive and collaborative mode
Inter-reactivity of the AI
At any time, the sound or visual designer can add his own sequences, modify the parameters, in short: play live!
- The AI will have to take them into account and adjust the whole.
- It will also be influenced by the rhythm and the amplitude of the painter's reactions on his canvas.
AI: an intuitive conductor
GOAL 1: The AI becomes a fourth creator, but also the collaborator and partner of each of the artists during the live performance of the work.
- It also becomes a sort of conductor of a reactive orchestra of improvising musicians, a director of the ensemble.
- It must therefore react and adapt in real time.
Creative instinct and participative spirit.
GOAL 2: Allow AI to develop a creative instinct after learning.
- Learn to interact in real time with other artists.
- Integrate into a whole by bringing its distinctive potential, both executive and creative.








