## Executing code in your markdown files
If you'd like to include computational content inside these markdown files, you can use MyST Markdown to define cells that will be executed when your book is built. Jupyter Book uses Jupytext to do this.
First, add Jupytext metadata to the file. For example, to add Jupytext metadata to this markdown page, run this command:

```
jupyter-book myst init markdown.md
```
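
Running this adds YAML front matter to the top of the file, along the lines of the sketch below. The exact Jupytext fields and kernel details depend on your environment, so treat the kernel name and display name here as placeholders:

```yaml
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
kernelspec:
  display_name: Python 3   # placeholder; matches whatever kernel you use
  language: python
  name: python3
---
```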
Once a markdown file has Jupytext metadata in it, you can add the following directive to run the code at build time:
```{code-cell}
print("Here is some code to execute")
```
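A `{code-cell}` can also carry directive options. As a minimal sketch, assuming a default Python kernel, the `:tags:` option below asks MyST-NB to collapse the cell's source while still showing its output in the built page:

```{code-cell}
:tags: [hide-input]

# Computed at build time; the hide-input tag hides this source
# in the rendered page, leaving only the printed output visible.
total = sum(range(1, 11))
print(f"1 + 2 + ... + 10 = {total}")
```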
When your book is built, the contents of any `{code-cell}` blocks will be executed with your default Jupyter kernel, and their outputs will be displayed in-line with the rest of your content.
For more information about executing computational content with Jupyter Book, see the [MyST-NB documentation](https://myst-nb.readthedocs.io).