Music Genre Classification Using Amplitude and Frequency Variants of MFCC

Ashish Sharma, Abhishek Tomar

Abstract


Music is a good domain for the computational recognition of auditory events because multiple instruments are usually played simultaneously. The difficulty in handling music lies in the fact that signals (events to be recognized) and noise (events to be ignored) are not uniquely defined; this is the main difference from studies of speech recognition in noisy environments. Musical instrument recognition is also important from an industrial standpoint. The recent development of digital audio and network technologies has enabled us to handle a tremendous number of musical pieces, so efficient music information retrieval (MIR) is required. Musical instrument recognition will serve as one of the key technologies for sophisticated MIR because the types of instruments played characterize musical pieces; some musical forms, in fact, are named after instruments, for example the "piano sonata" and the "string quartet." Despite the importance of musical instrument recognition, studies have until recently dealt mainly with monophonic sounds. Although the number of studies dealing with polyphonic music has been increasing, their techniques have not yet reached a level sufficient for MIR or other real applications. This paper investigates musical instrument recognition using two main techniques: Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction and a Back-Propagation Neural Network (BPNN) for classification. The BPNN offers a good learning rate in comparison with other classifiers. All experiments were carried out in MATLAB; the reported results are FAR = 0.017, FRR = 0.018, and an accuracy of approximately 100%.
KEYWORDS: MFCC; Music Recognition; Neural Network
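The MFCC front end described in the abstract can be sketched as follows. This is an illustrative NumPy implementation, not the authors' MATLAB code; all parameter values (sample rate, frame length, hop size, filter count, number of cepstral coefficients) are assumptions chosen as common defaults.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: pre-emphasis, framing, mel filterbank, log, DCT.
    All parameter defaults are illustrative assumptions."""
    # Pre-emphasis boosts high frequencies before analysis
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Slice the signal into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(sig) - n_fft) // hop
    frames = np.stack([sig[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0 Hz to the Nyquist frequency
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz2mel(0.0), hz2mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filterbank energies, then DCT-II to decorrelate them
    log_energy = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_energy @ dct.T  # shape: (n_frames, n_ceps)
```

The resulting per-frame coefficient matrix would then be fed (typically after averaging or pooling over frames) to a classifier such as the BPNN mentioned in the abstract.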





Copyright (c) 2015 Ashish Sharma, Abhishek Tomar

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


All published articles are Open Access at https://journals.pen2print.org/index.php/ijr/


Paper submission: ijr@pen2print.org