
How to do it...

The strategy that we defined previously is coded as follows (refer to the Audio classification.ipynb file on GitHub while implementing the code):

  1. Import the required libraries and the dataset:
import pandas as pd
import numpy as np
import librosa
data = pd.read_csv('/content/train.csv')
  2. Extract features for each audio input:
ids = data['ID'].values
def extract_feature(file_name):
    # Load the clip and average the 40 MFCCs over time to obtain
    # a fixed-length feature vector, regardless of clip duration
    X, sample_rate = librosa.load(file_name)
    mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)
    return mfccs

In the preceding code, we defined a function that takes file_name as input, extracts the 40 MFCCs corresponding to the audio file, and returns them.
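The pooling step inside the function is worth seeing in isolation. MFCCs come back as an (n_mfcc, n_frames) matrix whose frame count varies with clip length; averaging over the time axis collapses every clip to the same 40-dimensional vector. A minimal sketch with a stand-in matrix (the frame count of 173 is a hypothetical value, not taken from the dataset):

```python
import numpy as np

n_mfcc, n_frames = 40, 173                 # hypothetical frame count for one clip
mfcc_matrix = np.random.rand(n_mfcc, n_frames)

# Transpose to (n_frames, n_mfcc), then average across frames
features = np.mean(mfcc_matrix.T, axis=0)
print(features.shape)  # (40,)
```

This is why the model later expects input_shape=(40,): every clip, long or short, is reduced to one 40-value row.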

  3. Create the input and the output datasets:
x = []
y = []
for i in range(len(ids)):
    try:
        filename = '/content/Train/' + str(ids[i]) + '.wav'
        # Extract features first so x and y stay aligned
        # if a file fails to load
        x.append(extract_feature(filename))
        y.append(data[data['ID'] == ids[i]]['Class'].values)
    except Exception:
        continue
x = np.array(x)

In the preceding code, we loop through one audio file at a time, extracting its features and storing them in the input list. Similarly, we store the output class in the output list. Additionally, we convert the output list into a categorical value that is one-hot-encoded:

y2 = []
for i in range(len(y)):
    y2.append(y[i][0])
y3 = np.array(pd.get_dummies(y2))

The pd.get_dummies method works very similarly to the to_categorical method we used earlier; however, to_categorical does not work on text classes (it works only on numeric values, which it converts to one-hot-encoded values).
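The difference is easy to see on a small example. pd.get_dummies one-hot-encodes string labels directly, with one column per unique class; the labels below are hypothetical stand-ins, not the actual classes in train.csv:

```python
import numpy as np
import pandas as pd

labels = ['dog_bark', 'siren', 'dog_bark', 'drilling']  # hypothetical labels
one_hot = np.array(pd.get_dummies(labels))

# One row per sample, one column per unique class,
# exactly one "hot" entry per row
print(one_hot.shape)  # (4, 3)
```

to_categorical would raise an error on this list unless the strings were first mapped to integer indices; pd.get_dummies skips that step.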

  4. Build the model and compile it:
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(1000, input_shape=(40,), activation='relu'))
# softmax (rather than sigmoid) so the 10 class probabilities sum to 1,
# as categorical_crossentropy expects
model.add(Dense(10, activation='softmax'))
adam = Adam(lr=0.0001)
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['acc'])

The summary of the preceding model is as follows:
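As a sanity check on what model.summary() reports, the parameter counts of the two dense layers can be computed by hand from the layer shapes above:

```python
# Each Dense layer has (inputs x units) weights plus one bias per unit
dense1_params = 40 * 1000 + 1000   # Dense(1000) on 40 MFCC inputs
dense2_params = 1000 * 10 + 10     # Dense(10) output layer
total_params = dense1_params + dense2_params
print(dense1_params, dense2_params, total_params)  # 41000 10010 51010
```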

  5. Create the train and test datasets, and then fit the model:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y3, test_size=0.30, random_state=10)
model.fit(X_train, y_train, epochs=100, batch_size=32, validation_data=(X_test, y_test), verbose=1)

Once the model is fitted, you will notice that it classifies audio into the right class with approximately 91% accuracy.
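At prediction time, the model outputs a probability per class, so the columns of pd.get_dummies must be mapped back to class names via argmax. A sketch with stand-in values (the class list and probabilities below are hypothetical; in practice `probs` would come from model.predict(X_test) and `classes` from the columns of pd.get_dummies):

```python
import numpy as np

classes = ['dog_bark', 'drilling', 'siren']   # hypothetical column order
probs = np.array([[0.1, 0.2, 0.7],            # stand-in for model.predict output
                  [0.8, 0.1, 0.1]])

# Pick the highest-probability column for each sample
pred_labels = [classes[i] for i in np.argmax(probs, axis=1)]
print(pred_labels)  # ['siren', 'dog_bark']
```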
