Softcomputing Approach to Melody Generation Based on Harmonic Analysis

Authors

  • Jacek Mazurkiewicz, Wrocław University of Science and Technology, Faculty of Information and Communication Technology, Department of Computer Engineering, http://orcid.org/0000-0002-7708-907X

Abstract

This work aims to create an ANN-based system for a musical improviser: an artificial improviser that "hears" music and creates a melody. The improviser is supplied with MIDI-type musical data: the harmonic-rhythmic course, which serves as the background for improvisation, and the melody notes produced so far. The harmonic course is fed into the system as the currently sounding chord and the time remaining until the next chord, while the few dozen notes performed earlier indirectly carry information about the entire course, the musical context, and the style. Improvisation training is carried out to verify that the ANN works as a device producing correct-sounding musical improvisation. The improviser generates several hundred notes, which are laid over a looped rhythmic-harmonic accompaniment and examined for quality.
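The abstract describes the improviser's input as three components: the currently sounding chord, the time to the next chord change, and a window of previously played melody notes. A minimal sketch of how such an input vector could be assembled is shown below; the chord vocabulary, window length, normalization constants, and function names are illustrative assumptions, not the author's actual implementation.

```python
# Hypothetical input encoding for the improviser described in the abstract:
# chord one-hot + normalized time-to-next-chord + a window of recent pitches.

CHORD_VOCAB = ["Cmaj7", "Dm7", "G7", "Am7"]   # assumed chord dictionary
WINDOW = 8                                     # assumed context length (notes)
TICKS_PER_BAR = 1920                           # assumed MIDI resolution (4 beats x 480 ticks)

def one_hot(index, size):
    """Return a one-hot list of the given size."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec

def encode_step(chord, ticks_to_next_chord, previous_pitches):
    """Build one network input vector: chord one-hot, time to the next chord
    normalized by an assumed bar length, and the last WINDOW MIDI pitches
    (zero-padded on the left, scaled to 0-1 by the MIDI pitch range 0-127)."""
    chord_vec = one_hot(CHORD_VOCAB.index(chord), len(CHORD_VOCAB))
    time_vec = [ticks_to_next_chord / TICKS_PER_BAR]
    window = previous_pitches[-WINDOW:]
    padded = [0] * (WINDOW - len(window)) + list(window)
    pitch_vec = [p / 127.0 for p in padded]
    return chord_vec + time_vec + pitch_vec

# Half a bar remains on Dm7; three melody notes have been played so far.
x = encode_step("Dm7", 960, [60, 62, 64])
```

At each generation step the ANN would receive such a vector (or a sequence of them) and emit the next melody note; the note is appended to `previous_pitches` and the process repeats over the looped harmonic background.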

References

J. P. Briot, F. Pachet, “Deep learning for music generation: challenges and directions”, Neural Comput. Appl., vol. 32, no. 4, pp. 981–993, 2020.

J. P. Briot, G. Hadjeres, F. D. Pachet, “Deep Learning Techniques for Music Generation”, Springer Nature Switzerland AG, 2020.

D. Herremans, C. H. Chuan, E. Chew, “A functional taxonomy of music generation systems”, ACM Comput. Surv. (CSUR), vol. 50, no. 5, pp. 1–30, 2017.

C. F. Huang, C. Y. Huang, “Emotion-based AI music generation system with CVAE-GAN”, in 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE), pp. 220–222, 2020.

The MIDI Association, “Standard MIDI Files (SMF) specification”, https://www.midi.org/specifications-old/item/standard-midi-files-smf, 2020.

Y. Bengio, P. Simard, P. Frasconi, “Learning long-term dependencies with gradient descent is difficult”, IEEE Trans. Neural Networks, vol. 5, no. 2, pp. 157–166, 1994.

K. Zhao, S. Li, J. Cai, H. Wang, J. Wang, “An emotional symbolic music generation system based on LSTM networks”, in: 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), pp. 2039–2043, 2019.

A. Karpathy, “The unreasonable effectiveness of recurrent neural networks”, https://karpathy.github.io/2015/05/21/rnn-effectiveness/, 2015.

H. G. Zimmermann, R. Neuneier, “Neural network architectures for the modeling of dynamical systems”, in: A Field Guide to Dynamical Recurrent Networks, pp. 311–350, IEEE Press, Los Alamitos, 2001.

S. Mangal, R. Modak, P. Joshi, “LSTM based music generation system”, IARJSET, vol. 6, no. 5, 2019. https://doi.org/10.17148/IARJSET.2019.6508

S. Hochreiter, J. Schmidhuber, “Long short-term memory”, Neural Comput. vol. 9, no. 8, pp. 1735–1780, 1997.

K. Greff, R. K. Srivastava, J. Koutnik, B. R. Steunebrink, J. Schmidhuber, “LSTM: a search space odyssey”, IEEE Trans. Neural Netw. Learn. Syst., vol. 28, pp. 2222–2232, 2017.

A. Everest, K. Pohlmann, “Master Handbook of Acoustics”, 5th ed., New York: McGraw-Hill, 2009.

B. Thom, “Unsupervised Learning and Interactive Jazz/Blues Improvisation”, in American Association for Artificial Intelligence, 2000.

I. Simon, D. Morris, S. Basu, “Exposing Parameters of a Trained Dynamic Model for Interactive Music Creation”, in Association for the Advancement of Artificial Intelligence, 2008.

C. Schmidt-Jones, “Understanding Basic Music Theory”, Rice University, Houston, Texas: Connexions, 2007.

P. Ponce, J. Inesta, “Feature-Driven Recognition of Music Styles”, Lecture Notes in Computer Science 2652, pp. 773–781, 2003. https://doi.org/10.1007/978-3-540-44871-6_90

J. Mazurkiewicz, “Softcomputing Approach to Music Generation”, in: Dependable Computer Systems and Networks. DepCoS-RELCOMEX 2023. Lecture Notes in Networks and Systems, vol 737, pp. 149–161, Springer, Cham, 2023. https://doi.org/10.1007/978-3-031-37720-4_14

Published

2024-04-15

Section

ARTICLES / PAPERS / General