CNN based classification of Cat and Dog WITHOUT transfer learning: 95% validation accuracy

Abstract

The purpose of this script is to understand the various stages of building a good CNN model. Using transfer learning, one can achieve nearly 100% accuracy on this task. Here, however, I build a model from scratch that reaches a reasonable accuracy: the final result is close to 95% on the validation dataset.

Key highlights:

  • Create a model from scratch, without transfer learning.
  • Gain insight into how the depth and width of the model affect the loss curve.
  • Create your own data feeder.
  • Write custom callbacks for saving the best model and for early stopping.
  • Predict on any online/downloaded image.
In [2]:
import os
import zipfile
import random
import shutil
from shutil import copyfile
from os import getcwd
import pathlib
import datetime

import cv2
from PIL import Image


import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing import image_dataset_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop, Adam, SGD
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.models import model_from_json
from sklearn.model_selection import train_test_split


import matplotlib.pylab as plt
import numpy as np

#Written twice, because the magic sometimes does not take effect the first time, especially when switching back to the notebook
%matplotlib notebook
%matplotlib notebook

Set path of the dataset

The images are under two folders, /Cat and /Dog. Rather than creating separate train and validation folders and copying images into them, I will use a data feeder that reads the images, performs preprocessing, and feeds them to the model. Augmentation is performed on the fly within the model itself (by adding extra layers at the start).

In [15]:
path_to_images = pathlib.Path('/CatsAndDogs/Pet_images_new')
Dogs_dir = os.path.join(path_to_images, 'Dog')
Cats_dir = os.path.join(path_to_images, 'Cat')

print(len(os.listdir(Dogs_dir)))
print(len(os.listdir(Cats_dir)))


dog_image_fnames = os.listdir(Dogs_dir)
cat_image_fnames = os.listdir(Cats_dir)
11663
11735

Delete corrupted images

The original dataset contains roughly ~1800 corrupt images. If they have already been removed (as in this run), the cell below deletes nothing.

In [16]:
num_skipped = 0
for folder_name in ("Cat", "Dog"):
    folder_path = os.path.join(path_to_images, folder_name)
    for fname in os.listdir(folder_path):
        fpath = os.path.join(folder_path, fname)
        with open(fpath, "rb") as fobj:
            is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)

        if not is_jfif:
            num_skipped += 1
            # Delete corrupted image
            os.remove(fpath)

print("Deleted %d images" % num_skipped)
Deleted 0 images
In [17]:
#Remove zero-size images and non-jpg files
j = 0
for category in os.listdir(path_to_images):
    sub_dir = os.path.join(path_to_images, category)
    for name in os.listdir(sub_dir):
        fpath = os.path.join(sub_dir, name)
        # os.path.splitext is safer than name.split('.') for names with multiple dots
        if os.path.splitext(name)[1].lower() == '.jpg':
            if os.path.getsize(fpath) <= 0:
                print(fpath)
                os.remove(fpath)
            else:
                j += 1
        else:
            print(fpath)
            os.remove(fpath)
print(j)
23398
In [18]:
#Count total images
image_count = len(list(path_to_images.glob('*/*.jpg')))
print(image_count)
23398

Remove non-cat/dog images

After carefully inspecting the images, I found some that are neither cats nor dogs. These images may cause problems during training; if they end up in the validation dataset, they can cause spikes in the validation loss curve. Some examples are plotted below.

In [13]:
path_bad_images = pathlib.Path("/CatsAndDogs/bad_images")
bad_images = os.listdir(path_bad_images)
print(len(bad_images))
11
In [14]:
plt.figure(figsize=(8,8))
for i in np.arange(9):
    img = Image.open(os.path.join(path_bad_images,bad_images[i]))
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(img)
    #plt.title(class_names[labels[i]])
    plt.axis("off")

Create data feeder

  • Resize the images
  • Define the train/validation split fraction
  • Set the batch size
  • Fix a seed for shuffling and randomly dividing the training and validation datasets
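For intuition, the seeded shuffle-then-split that `image_dataset_from_directory` performs can be sketched in plain Python. This is an illustrative sketch, not the Keras implementation; the helper name and the dummy file list are made up. The key point is that with the same seed, the training and validation subsets are disjoint and reproducible across runs.

```python
import random

def split_and_batch(files, val_fraction=0.2, batch_size=64, seed=42):
    """Illustrative sketch of a seeded shuffle, split, and batching step."""
    files = sorted(files)      # canonical order before shuffling
    rng = random.Random(seed)
    rng.shuffle(files)         # seeded shuffle -> reproducible split
    n_val = int(len(files) * val_fraction)
    val, train = files[:n_val], files[n_val:]
    # Group the training files into fixed-size batches (last may be smaller)
    batches = [train[i:i + batch_size] for i in range(0, len(train), batch_size)]
    return train, val, batches

train, val, batches = split_and_batch([f"img_{i}.jpg" for i in range(100)])
print(len(train), len(val), len(batches))  # 80 20 2
```

Because both the training and validation calls below pass the same `seed_value`, Keras produces complementary subsets in exactly this way.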
In [19]:
batch_size = 64  # max 1000 of these images can fit in gpu memory
new_img_size = (200,200)
data_split = 0.2 #0.0005
seed_value = 42

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    path_to_images,
    validation_split=data_split,
    subset="training",
    seed=seed_value,
    image_size=new_img_size,
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    path_to_images,
    validation_split=data_split,
    subset="validation",
    seed=seed_value,
    image_size=new_img_size,
    batch_size=batch_size)
Found 23398 files belonging to 2 classes.
Using 18719 files for training.
Found 23398 files belonging to 2 classes.
Using 4679 files for validation.
In [20]:
class_names = train_ds.class_names
print(class_names)
['Cat', 'Dog']

Inspect some images

Take 1 batch from the validation dataset and plot.

In [21]:
plt.figure(figsize=(8,8))
for images, labels in val_ds.take(1):
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")

Create model

Use at least 3 convolutional layers. We will try two models: first a shallow one, then one with more layers. For this task you should expect the shallow model to underfit, which is visible in the loss curve: the training loss and accuracy stop improving after a while.

Note also that we have included on-the-fly image augmentation in the first few layers of the model. Furthermore, we use dropout for regularization to avoid overfitting. We do not want high dropout in the initial layers; it is set to a relatively higher value after flattening the convolutional layer outputs.

And finally, since we are using the ReLU activation function, using kernel_initializer='he_uniform' is highly recommended.
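To see what he_uniform does: Keras draws each weight uniformly from [-limit, limit] with limit = sqrt(6 / fan_in), where fan_in for a conv layer is kernel height × kernel width × input channels. Scaling by fan_in keeps the variance of ReLU activations roughly constant from layer to layer. A quick check of the bound (the helper function is just for illustration):

```python
import math

def he_uniform_limit(kernel_h, kernel_w, in_channels):
    # He-uniform bound: weights ~ U(-limit, limit), limit = sqrt(6 / fan_in)
    fan_in = kernel_h * kernel_w * in_channels
    return math.sqrt(6.0 / fan_in)

# First Conv2D in the deeper model: 3x3 kernel over 3 RGB channels (fan_in = 27)
print(round(he_uniform_limit(3, 3, 3), 4))
# A later Conv2D: 3x3 kernel over 64 channels (fan_in = 576) -> tighter bound
print(round(he_uniform_limit(3, 3, 64), 4))
```

Deeper layers with more input channels get a proportionally tighter bound, which is exactly the compensation ReLU networks need to avoid exploding or vanishing activations at initialization.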

In [15]:
#A shallow model
# model = tf.keras.models.Sequential([
#     tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(new_img_size[0],new_img_size[1],3)),
#     tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
#     tf.keras.layers.MaxPool2D(2,2),
#     tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
#     tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
#     tf.keras.layers.MaxPool2D(2,2),
#     tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
#     tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
#     tf.keras.layers.MaxPool2D(2,2),
#     tf.keras.layers.Flatten(),
#     tf.keras.layers.Dropout(0.3),
#     tf.keras.layers.Dense(256, activation='relu'),
#     tf.keras.layers.Dropout(0.3),
#     tf.keras.layers.Dense(256, activation='relu'),
#     tf.keras.layers.Dense(1, activation='sigmoid')
# ])

#Second model with more layers
model = tf.keras.models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255,input_shape=(new_img_size[0],new_img_size[1],3)),
    tf.keras.layers.experimental.preprocessing.RandomFlip('horizontal'),
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.3),
    
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', kernel_initializer='he_uniform', padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.Conv2D(32, (3,3), activation='relu', kernel_initializer='he_uniform', padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.Conv2D(32, (3,3), activation='relu', kernel_initializer='he_uniform', padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.Conv2D(64, (3,3), activation='relu', kernel_initializer='he_uniform', padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.Conv2D(256, (3,3), activation='relu', kernel_initializer='he_uniform', padding='same'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(2,2),
    tf.keras.layers.Dropout(0.2),
    
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu', kernel_initializer='he_uniform'),
    tf.keras.layers.Dropout(0.35),
    #tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# model = tf.keras.models.Sequential([
#     layers.experimental.preprocessing.Rescaling(1./255),
#     layers.Conv2D(32, 3, activation='relu'),
#     layers.MaxPooling2D(),
#     layers.Conv2D(32, 3, activation='relu'),
#     layers.MaxPooling2D(),
#     layers.Conv2D(32, 3, activation='relu'),
#     layers.MaxPooling2D(),
#     layers.Flatten(),
#     layers.Dense(128, activation='relu'),
#     layers.Dense(num_classes)
# ])

model.compile(optimizer=Adam(learning_rate=0.001), loss='binary_crossentropy', metrics=['acc']) #alternatives tried: SGD(learning_rate=0.007, momentum=0.9), RMSprop(learning_rate=0.0001)
#model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

#print(model.summary())

Create callbacks

In [16]:
class MyThresholdCallback(tf.keras.callbacks.Callback):
    def __init__(self, threshold):
        super(MyThresholdCallback, self).__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None): 
        val_acc = logs["val_acc"]
        if val_acc >= self.threshold:
            self.model.stop_training = True

early_stopping_callback = MyThresholdCallback(threshold=0.951)

logdir = os.path.join(
    "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))

modelfname = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")+'.h5'
model_folder = pathlib.Path('/CatsAndDogs/model')
mcp_save = tf.keras.callbacks.ModelCheckpoint(os.path.join(model_folder, modelfname), save_best_only=True, monitor='val_loss', mode='min')

tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)

print(logdir)
# # ======= Go to the logs folder within the current project. ===========
# # ======= Open a terminal, activate correct env, run following command to launch tensorboard
# # tensorboard --port=6007 --logdir /CatsAndDogs/logs
# # =====================================================================

epochs = 300
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    verbose=1,
    callbacks=[tensorboard_callback, mcp_save, early_stopping_callback]
)

#model.save(os.path.join(model_folder, modelfname))

# model.fit(
#   train_ds,
#   validation_data=val_ds,
#   epochs=3
# )
logs/20201022-232355
Epoch 1/300
293/293 [==============================] - 77s 263ms/step - loss: 0.7119 - acc: 0.6132 - val_loss: 1.3321 - val_acc: 0.5005
Epoch 2/300
293/293 [==============================] - 77s 262ms/step - loss: 0.6044 - acc: 0.6681 - val_loss: 0.6647 - val_acc: 0.6692
Epoch 3/300
293/293 [==============================] - 76s 260ms/step - loss: 0.5756 - acc: 0.7049 - val_loss: 0.6830 - val_acc: 0.6653
Epoch 4/300
293/293 [==============================] - 77s 262ms/step - loss: 0.5408 - acc: 0.7302 - val_loss: 0.5560 - val_acc: 0.7309
Epoch 5/300
293/293 [==============================] - 77s 264ms/step - loss: 0.5225 - acc: 0.7489 - val_loss: 0.6917 - val_acc: 0.7023
Epoch 6/300
293/293 [==============================] - 77s 263ms/step - loss: 0.4960 - acc: 0.7607 - val_loss: 0.7278 - val_acc: 0.6739
Epoch 7/300
293/293 [==============================] - 77s 262ms/step - loss: 0.4792 - acc: 0.7766 - val_loss: 0.7240 - val_acc: 0.6984
Epoch 8/300
293/293 [==============================] - 77s 263ms/step - loss: 0.4530 - acc: 0.7960 - val_loss: 0.7870 - val_acc: 0.7068
Epoch 9/300
293/293 [==============================] - 77s 262ms/step - loss: 0.4457 - acc: 0.7987 - val_loss: 0.3901 - val_acc: 0.8271
Epoch 10/300
293/293 [==============================] - 77s 262ms/step - loss: 0.4288 - acc: 0.8069 - val_loss: 0.4218 - val_acc: 0.8134
Epoch 11/300
293/293 [==============================] - 77s 262ms/step - loss: 0.4143 - acc: 0.8203 - val_loss: 0.4129 - val_acc: 0.8235
Epoch 12/300
293/293 [==============================] - 77s 262ms/step - loss: 0.3965 - acc: 0.8263 - val_loss: 0.4337 - val_acc: 0.8147
Epoch 13/300
293/293 [==============================] - 76s 260ms/step - loss: 0.3745 - acc: 0.8369 - val_loss: 0.3171 - val_acc: 0.8696
Epoch 14/300
293/293 [==============================] - 76s 259ms/step - loss: 0.3645 - acc: 0.8419 - val_loss: 0.5352 - val_acc: 0.7707
Epoch 15/300
293/293 [==============================] - 76s 261ms/step - loss: 0.3586 - acc: 0.8454 - val_loss: 0.4533 - val_acc: 0.8115
Epoch 16/300
293/293 [==============================] - 76s 260ms/step - loss: 0.3458 - acc: 0.8511 - val_loss: 0.3757 - val_acc: 0.8412
Epoch 17/300
293/293 [==============================] - 76s 259ms/step - loss: 0.3354 - acc: 0.8562 - val_loss: 0.3638 - val_acc: 0.8431
Epoch 18/300
293/293 [==============================] - 76s 259ms/step - loss: 0.3309 - acc: 0.8583 - val_loss: 0.3281 - val_acc: 0.8602
Epoch 19/300
293/293 [==============================] - 76s 259ms/step - loss: 0.3222 - acc: 0.8645 - val_loss: 0.2886 - val_acc: 0.8737
Epoch 20/300
293/293 [==============================] - 77s 261ms/step - loss: 0.3091 - acc: 0.8699 - val_loss: 0.3438 - val_acc: 0.8517
Epoch 21/300
293/293 [==============================] - 76s 260ms/step - loss: 0.3104 - acc: 0.8683 - val_loss: 0.2896 - val_acc: 0.8850
Epoch 22/300
293/293 [==============================] - 76s 260ms/step - loss: 0.3035 - acc: 0.8722 - val_loss: 0.3070 - val_acc: 0.8666
Epoch 23/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2912 - acc: 0.8780 - val_loss: 0.4756 - val_acc: 0.8098
Epoch 24/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2920 - acc: 0.8789 - val_loss: 0.2707 - val_acc: 0.8831
Epoch 25/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2769 - acc: 0.8833 - val_loss: 0.2748 - val_acc: 0.8837
Epoch 26/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2799 - acc: 0.8845 - val_loss: 0.2957 - val_acc: 0.8716
Epoch 27/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2737 - acc: 0.8873 - val_loss: 0.2446 - val_acc: 0.9036
Epoch 28/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2751 - acc: 0.8858 - val_loss: 0.5135 - val_acc: 0.8049
Epoch 29/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2708 - acc: 0.8851 - val_loss: 0.2994 - val_acc: 0.8722
Epoch 30/300
293/293 [==============================] - 77s 261ms/step - loss: 0.2655 - acc: 0.8900 - val_loss: 0.2720 - val_acc: 0.8861
Epoch 31/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2697 - acc: 0.8873 - val_loss: 0.3357 - val_acc: 0.8730
Epoch 32/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2606 - acc: 0.8944 - val_loss: 0.4164 - val_acc: 0.8470
Epoch 33/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2583 - acc: 0.8947 - val_loss: 0.2674 - val_acc: 0.8869
Epoch 34/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2518 - acc: 0.8983 - val_loss: 0.3181 - val_acc: 0.8634
Epoch 35/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2526 - acc: 0.8957 - val_loss: 0.2641 - val_acc: 0.8906
Epoch 36/300
293/293 [==============================] - 76s 261ms/step - loss: 0.2485 - acc: 0.8964 - val_loss: 0.2659 - val_acc: 0.8863
Epoch 37/300
293/293 [==============================] - 77s 261ms/step - loss: 0.2455 - acc: 0.8996 - val_loss: 0.2559 - val_acc: 0.8901
Epoch 38/300
293/293 [==============================] - 76s 258ms/step - loss: 0.2454 - acc: 0.8992 - val_loss: 0.2760 - val_acc: 0.8878
Epoch 39/300
293/293 [==============================] - 77s 262ms/step - loss: 0.2422 - acc: 0.8978 - val_loss: 0.2720 - val_acc: 0.8803
Epoch 40/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2442 - acc: 0.9009 - val_loss: 0.2691 - val_acc: 0.8901
Epoch 41/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2383 - acc: 0.9031 - val_loss: 0.2554 - val_acc: 0.8966
Epoch 42/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2330 - acc: 0.9033 - val_loss: 0.2282 - val_acc: 0.9043
Epoch 43/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2353 - acc: 0.9034 - val_loss: 0.2683 - val_acc: 0.8831
Epoch 44/300
293/293 [==============================] - 76s 261ms/step - loss: 0.2264 - acc: 0.9088 - val_loss: 0.2665 - val_acc: 0.8908
Epoch 45/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2266 - acc: 0.9079 - val_loss: 0.1974 - val_acc: 0.9177
Epoch 46/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2241 - acc: 0.9070 - val_loss: 0.3849 - val_acc: 0.8440
Epoch 47/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2226 - acc: 0.9117 - val_loss: 0.2879 - val_acc: 0.8792
Epoch 48/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2256 - acc: 0.9092 - val_loss: 0.2245 - val_acc: 0.9013
Epoch 49/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2251 - acc: 0.9114 - val_loss: 0.2717 - val_acc: 0.8814
Epoch 50/300
293/293 [==============================] - 76s 258ms/step - loss: 0.2194 - acc: 0.9106 - val_loss: 0.3158 - val_acc: 0.8701
Epoch 51/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2182 - acc: 0.9091 - val_loss: 0.2414 - val_acc: 0.9004
Epoch 52/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2131 - acc: 0.9147 - val_loss: 0.2481 - val_acc: 0.8897
Epoch 53/300
293/293 [==============================] - 75s 258ms/step - loss: 0.2086 - acc: 0.9161 - val_loss: 0.3549 - val_acc: 0.8446
Epoch 54/300
293/293 [==============================] - 75s 257ms/step - loss: 0.2079 - acc: 0.9158 - val_loss: 0.4132 - val_acc: 0.8410
Epoch 55/300
293/293 [==============================] - 76s 261ms/step - loss: 0.2139 - acc: 0.9151 - val_loss: 0.3686 - val_acc: 0.8393
Epoch 56/300
293/293 [==============================] - 75s 258ms/step - loss: 0.2007 - acc: 0.9161 - val_loss: 0.2353 - val_acc: 0.9051
Epoch 57/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2040 - acc: 0.9162 - val_loss: 0.2546 - val_acc: 0.8940
Epoch 58/300
293/293 [==============================] - 76s 261ms/step - loss: 0.2085 - acc: 0.9144 - val_loss: 0.3143 - val_acc: 0.8814
Epoch 59/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2047 - acc: 0.9172 - val_loss: 0.3161 - val_acc: 0.8724
Epoch 60/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2023 - acc: 0.9185 - val_loss: 0.2271 - val_acc: 0.9132
Epoch 61/300
293/293 [==============================] - 77s 261ms/step - loss: 0.2013 - acc: 0.9173 - val_loss: 0.2620 - val_acc: 0.8842
Epoch 62/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1968 - acc: 0.9188 - val_loss: 0.2736 - val_acc: 0.8901
Epoch 63/300
293/293 [==============================] - 76s 258ms/step - loss: 0.2030 - acc: 0.9176 - val_loss: 0.2112 - val_acc: 0.9141
Epoch 64/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2004 - acc: 0.9160 - val_loss: 0.1876 - val_acc: 0.9265
Epoch 65/300
293/293 [==============================] - 76s 259ms/step - loss: 0.2037 - acc: 0.9190 - val_loss: 0.1938 - val_acc: 0.9235
Epoch 66/300
293/293 [==============================] - 77s 261ms/step - loss: 0.1986 - acc: 0.9203 - val_loss: 0.1933 - val_acc: 0.9254
Epoch 67/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1961 - acc: 0.9222 - val_loss: 0.2098 - val_acc: 0.9137
Epoch 68/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2014 - acc: 0.9199 - val_loss: 0.2221 - val_acc: 0.9113
Epoch 69/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1902 - acc: 0.9240 - val_loss: 0.2751 - val_acc: 0.8916
Epoch 70/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1865 - acc: 0.9234 - val_loss: 0.3912 - val_acc: 0.8386
Epoch 71/300
293/293 [==============================] - 76s 260ms/step - loss: 0.2000 - acc: 0.9197 - val_loss: 0.2293 - val_acc: 0.9100
Epoch 72/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1978 - acc: 0.9188 - val_loss: 0.1971 - val_acc: 0.9192
Epoch 73/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1942 - acc: 0.9223 - val_loss: 0.2065 - val_acc: 0.9156
Epoch 74/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1898 - acc: 0.9232 - val_loss: 0.3006 - val_acc: 0.8788
Epoch 75/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1878 - acc: 0.9253 - val_loss: 0.2039 - val_acc: 0.9160
Epoch 76/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1933 - acc: 0.9223 - val_loss: 0.2001 - val_acc: 0.9186
Epoch 77/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1880 - acc: 0.9228 - val_loss: 0.2255 - val_acc: 0.9060
Epoch 78/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1826 - acc: 0.9257 - val_loss: 0.2602 - val_acc: 0.8884
Epoch 79/300
293/293 [==============================] - 75s 258ms/step - loss: 0.1898 - acc: 0.9244 - val_loss: 0.2020 - val_acc: 0.9128
Epoch 80/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1913 - acc: 0.9217 - val_loss: 0.1921 - val_acc: 0.9224
Epoch 81/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1872 - acc: 0.9253 - val_loss: 0.2666 - val_acc: 0.8833
Epoch 82/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1864 - acc: 0.9242 - val_loss: 0.2036 - val_acc: 0.9181
Epoch 83/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1858 - acc: 0.9235 - val_loss: 0.2141 - val_acc: 0.9117
Epoch 84/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1858 - acc: 0.9248 - val_loss: 0.2011 - val_acc: 0.9237
Epoch 85/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1826 - acc: 0.9260 - val_loss: 0.1580 - val_acc: 0.9393
Epoch 86/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1800 - acc: 0.9270 - val_loss: 0.3713 - val_acc: 0.8442
Epoch 87/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1786 - acc: 0.9280 - val_loss: 0.2053 - val_acc: 0.9228
Epoch 88/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1743 - acc: 0.9307 - val_loss: 0.2717 - val_acc: 0.8872
Epoch 89/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1790 - acc: 0.9295 - val_loss: 0.2187 - val_acc: 0.9115
Epoch 90/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1761 - acc: 0.9294 - val_loss: 0.1949 - val_acc: 0.9252
Epoch 91/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1713 - acc: 0.9301 - val_loss: 0.1567 - val_acc: 0.9412
Epoch 92/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1668 - acc: 0.9331 - val_loss: 0.1797 - val_acc: 0.9320
Epoch 93/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1698 - acc: 0.9314 - val_loss: 0.2669 - val_acc: 0.8976
Epoch 94/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1783 - acc: 0.9283 - val_loss: 0.1752 - val_acc: 0.9308
Epoch 95/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1731 - acc: 0.9310 - val_loss: 0.1859 - val_acc: 0.9271
Epoch 96/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1708 - acc: 0.9319 - val_loss: 0.2554 - val_acc: 0.8989
Epoch 97/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1702 - acc: 0.9315 - val_loss: 0.2421 - val_acc: 0.8985
Epoch 98/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1660 - acc: 0.9323 - val_loss: 0.1730 - val_acc: 0.9323
Epoch 99/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1678 - acc: 0.9327 - val_loss: 0.2101 - val_acc: 0.9117
Epoch 100/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1684 - acc: 0.9323 - val_loss: 0.1702 - val_acc: 0.9320
Epoch 101/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1706 - acc: 0.9319 - val_loss: 0.2131 - val_acc: 0.9239
Epoch 102/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1656 - acc: 0.9327 - val_loss: 0.2228 - val_acc: 0.9102
Epoch 103/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1718 - acc: 0.9310 - val_loss: 0.1880 - val_acc: 0.9203
Epoch 104/300
293/293 [==============================] - 75s 258ms/step - loss: 0.1709 - acc: 0.9315 - val_loss: 0.2872 - val_acc: 0.9019
Epoch 105/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1699 - acc: 0.9341 - val_loss: 0.1845 - val_acc: 0.9224
Epoch 106/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1679 - acc: 0.9340 - val_loss: 0.1744 - val_acc: 0.9282
Epoch 107/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1669 - acc: 0.9358 - val_loss: 0.2357 - val_acc: 0.9113
Epoch 108/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1682 - acc: 0.9329 - val_loss: 0.1781 - val_acc: 0.9293
Epoch 109/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1602 - acc: 0.9361 - val_loss: 0.1846 - val_acc: 0.9256
Epoch 110/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1662 - acc: 0.9322 - val_loss: 0.1856 - val_acc: 0.9267
Epoch 111/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1650 - acc: 0.9323 - val_loss: 0.1701 - val_acc: 0.9340
Epoch 112/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1601 - acc: 0.9357 - val_loss: 0.2288 - val_acc: 0.9105
Epoch 113/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1640 - acc: 0.9348 - val_loss: 0.1802 - val_acc: 0.9314
Epoch 114/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1645 - acc: 0.9330 - val_loss: 0.2012 - val_acc: 0.9166
Epoch 115/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1614 - acc: 0.9351 - val_loss: 0.1955 - val_acc: 0.9209
Epoch 116/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1650 - acc: 0.9338 - val_loss: 0.1650 - val_acc: 0.9399
Epoch 117/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1635 - acc: 0.9349 - val_loss: 0.1815 - val_acc: 0.9267
Epoch 118/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1580 - acc: 0.9386 - val_loss: 0.1939 - val_acc: 0.9196
Epoch 119/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1591 - acc: 0.9357 - val_loss: 0.1709 - val_acc: 0.9320
Epoch 120/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1567 - acc: 0.9370 - val_loss: 0.1824 - val_acc: 0.9273
Epoch 121/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1586 - acc: 0.9384 - val_loss: 0.1999 - val_acc: 0.9243
Epoch 122/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1618 - acc: 0.9358 - val_loss: 0.1904 - val_acc: 0.9228
Epoch 123/300
293/293 [==============================] - 75s 258ms/step - loss: 0.1529 - acc: 0.9381 - val_loss: 0.3030 - val_acc: 0.8797
Epoch 124/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1567 - acc: 0.9382 - val_loss: 0.2064 - val_acc: 0.9126
Epoch 125/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1563 - acc: 0.9393 - val_loss: 0.2152 - val_acc: 0.9107
Epoch 126/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1521 - acc: 0.9388 - val_loss: 0.1585 - val_acc: 0.9376
Epoch 127/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1573 - acc: 0.9369 - val_loss: 0.2499 - val_acc: 0.9008
Epoch 128/300
293/293 [==============================] - 75s 258ms/step - loss: 0.1602 - acc: 0.9334 - val_loss: 0.2089 - val_acc: 0.9160
Epoch 129/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1534 - acc: 0.9385 - val_loss: 0.2071 - val_acc: 0.9130
Epoch 130/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1549 - acc: 0.9383 - val_loss: 0.1755 - val_acc: 0.9327
Epoch 131/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1551 - acc: 0.9377 - val_loss: 0.2053 - val_acc: 0.9190
Epoch 132/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1563 - acc: 0.9391 - val_loss: 0.1644 - val_acc: 0.9303
Epoch 133/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1551 - acc: 0.9359 - val_loss: 0.1901 - val_acc: 0.9284
Epoch 134/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1550 - acc: 0.9380 - val_loss: 0.1828 - val_acc: 0.9295
Epoch 135/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1569 - acc: 0.9369 - val_loss: 0.1569 - val_acc: 0.9346
Epoch 136/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1476 - acc: 0.9417 - val_loss: 0.1618 - val_acc: 0.9325
Epoch 137/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1547 - acc: 0.9366 - val_loss: 0.2211 - val_acc: 0.9179
Epoch 138/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1515 - acc: 0.9387 - val_loss: 0.2028 - val_acc: 0.9141
Epoch 139/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1515 - acc: 0.9373 - val_loss: 0.2424 - val_acc: 0.9030
Epoch 140/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1491 - acc: 0.9427 - val_loss: 0.1681 - val_acc: 0.9320
Epoch 141/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1462 - acc: 0.9418 - val_loss: 0.2242 - val_acc: 0.9109
Epoch 142/300
293/293 [==============================] - 75s 256ms/step - loss: 0.1487 - acc: 0.9407 - val_loss: 0.1866 - val_acc: 0.9263
Epoch 143/300
293/293 [==============================] - 75s 258ms/step - loss: 0.1494 - acc: 0.9425 - val_loss: 0.1675 - val_acc: 0.9346
Epoch 144/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1473 - acc: 0.9404 - val_loss: 0.2356 - val_acc: 0.9092
Epoch 145/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1522 - acc: 0.9375 - val_loss: 0.1819 - val_acc: 0.9280
Epoch 146/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1501 - acc: 0.9417 - val_loss: 0.1535 - val_acc: 0.9384
Epoch 147/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1491 - acc: 0.9398 - val_loss: 0.1694 - val_acc: 0.9312
Epoch 148/300
293/293 [==============================] - 75s 256ms/step - loss: 0.1506 - acc: 0.9395 - val_loss: 0.1923 - val_acc: 0.9235
Epoch 149/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1491 - acc: 0.9393 - val_loss: 0.1618 - val_acc: 0.9357
Epoch 150/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1445 - acc: 0.9428 - val_loss: 0.1715 - val_acc: 0.9327
Epoch 151/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1423 - acc: 0.9432 - val_loss: 0.1840 - val_acc: 0.9301
Epoch 152/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1472 - acc: 0.9416 - val_loss: 0.1756 - val_acc: 0.9318
Epoch 153/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1419 - acc: 0.9465 - val_loss: 0.2076 - val_acc: 0.9190
Epoch 154/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1407 - acc: 0.9423 - val_loss: 0.2134 - val_acc: 0.9188
Epoch 155/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1457 - acc: 0.9437 - val_loss: 0.1950 - val_acc: 0.9265
Epoch 156/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1442 - acc: 0.9425 - val_loss: 0.1774 - val_acc: 0.9284
Epoch 157/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1427 - acc: 0.9423 - val_loss: 0.1988 - val_acc: 0.9261
Epoch 158/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1431 - acc: 0.9417 - val_loss: 0.1666 - val_acc: 0.9414
Epoch 159/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1509 - acc: 0.9405 - val_loss: 0.2358 - val_acc: 0.9083
Epoch 160/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1429 - acc: 0.9449 - val_loss: 0.1835 - val_acc: 0.9308
Epoch 161/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1478 - acc: 0.9414 - val_loss: 0.1619 - val_acc: 0.9357
Epoch 162/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1434 - acc: 0.9426 - val_loss: 0.1771 - val_acc: 0.9316
Epoch 163/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1424 - acc: 0.9433 - val_loss: 0.1537 - val_acc: 0.9434
Epoch 164/300
293/293 [==============================] - 76s 260ms/step - loss: 0.1395 - acc: 0.9444 - val_loss: 0.1403 - val_acc: 0.9457
Epoch 165/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1397 - acc: 0.9444 - val_loss: 0.1803 - val_acc: 0.9239
Epoch 166/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1421 - acc: 0.9445 - val_loss: 0.1474 - val_acc: 0.9468
Epoch 167/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1393 - acc: 0.9437 - val_loss: 0.1431 - val_acc: 0.9410
Epoch 168/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1385 - acc: 0.9459 - val_loss: 0.1957 - val_acc: 0.9216
Epoch 169/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1392 - acc: 0.9427 - val_loss: 0.1415 - val_acc: 0.9440
Epoch 170/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1395 - acc: 0.9464 - val_loss: 0.2248 - val_acc: 0.9081
Epoch 171/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1404 - acc: 0.9437 - val_loss: 0.1734 - val_acc: 0.9337
Epoch 172/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1423 - acc: 0.9441 - val_loss: 0.5887 - val_acc: 0.7957
Epoch 173/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1379 - acc: 0.9447 - val_loss: 0.1548 - val_acc: 0.9367
Epoch 174/300
293/293 [==============================] - 76s 261ms/step - loss: 0.1432 - acc: 0.9438 - val_loss: 0.2156 - val_acc: 0.9207
Epoch 175/300
293/293 [==============================] - 75s 258ms/step - loss: 0.1399 - acc: 0.9452 - val_loss: 0.1518 - val_acc: 0.9440
Epoch 176/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1386 - acc: 0.9452 - val_loss: 0.1852 - val_acc: 0.9293
Epoch 177/300
293/293 [==============================] - 76s 258ms/step - loss: 0.1403 - acc: 0.9429 - val_loss: 0.1570 - val_acc: 0.9389
Epoch 178/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1408 - acc: 0.9443 - val_loss: 0.1806 - val_acc: 0.9320
Epoch 179/300
293/293 [==============================] - 75s 257ms/step - loss: 0.1390 - acc: 0.9448 - val_loss: 0.1997 - val_acc: 0.9201
Epoch 180/300
293/293 [==============================] - 76s 259ms/step - loss: 0.1375 - acc: 0.9464 - val_loss: 0.1227 - val_acc: 0.9530
In [17]:
# PLOT LOSS AND ACCURACY

#-----------------------------------------------------------
# Retrieve the per-epoch metrics recorded on the training
# and validation sets by model.fit()
#-----------------------------------------------------------
acc      = history.history['acc']
val_acc  = history.history['val_acc']
loss     = history.history['loss']
val_loss = history.history['val_loss']

epochs = np.arange(len(acc))  # Number of completed epochs

#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.figure()
plt.plot(epochs, acc, 'r', label="Training Accuracy")
plt.plot(epochs, val_acc, 'b', label="Validation Accuracy")
plt.title('Training and validation accuracy')
plt.legend()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.figure()
plt.plot(epochs, loss, 'r', label="Training Loss")
plt.plot(epochs, val_loss, 'b', label="Validation Loss")
plt.legend()

plt.title('Training and validation loss')

# Desired output. Charts with training and validation metrics. No crash :)
Out[17]:
Text(0.5, 1.0, 'Training and validation loss')
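
The noisy validation curve above makes it easy to misjudge where the model actually peaked. A small sketch of picking the best epoch straight from the history dictionary, using a hypothetical excerpt of `history.history` (the key names match Keras' `History` object; the sample values are taken from the last few epochs of the log above):

```python
import numpy as np

# Hypothetical excerpt of history.history (values from epochs 139, 140, 141, 164, 180 above)
history_dict = {
    "val_acc":  [0.9030, 0.9320, 0.9109, 0.9457, 0.9530],
    "val_loss": [0.2424, 0.1681, 0.2242, 0.1403, 0.1227],
}

best_epoch = int(np.argmax(history_dict["val_acc"]))  # 0-indexed position of the peak
print(f"Best epoch (1-indexed): {best_epoch + 1}, "
      f"val_acc={history_dict['val_acc'][best_epoch]:.4f}, "
      f"val_loss={history_dict['val_loss'][best_epoch]:.4f}")
```

This is the same criterion a best-model checkpoint callback monitors, so the reported epoch should match the saved weights.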

Save and Load best model

In [18]:
# Serialize the model architecture to JSON
# (the weights are saved separately as an .h5 file by the checkpoint callback)
model_json = model.to_json()
with open(pathlib.Path("/CatsAndDogs/model/best_model_v2/model_structure.json"), "w") as json_file:
    json_file.write(model_json)
In [19]:
# Load the architecture JSON and rebuild the model
with open(pathlib.Path("/CatsAndDogs/model/best_model_v2/model_structure.json"), "r") as json_file:
    loaded_model_json = json_file.read()

loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights(pathlib.Path("/CatsAndDogs/model/best_model_v2/20201022-232355.h5"))
print("Loaded model from disk")
 
# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
Loaded model from disk

Predictions

Test on validation set images

In [55]:
# Grab one batch from the validation set and pick a random image from it
for images, labels in val_ds.take(1):
    i = random.randint(0, 10)

img_array = keras.preprocessing.image.img_to_array(images[i])
img_array = tf.expand_dims(img_array, 0)  # Create batch axis

true_class = int(labels[i])
# Note: predict_classes() is deprecated in later TF releases;
# thresholding model.predict() output at 0.5 gives the same result
predicted_class = model.predict_classes(img_array)[0]
predictions = model.predict(img_array)
score = predictions[0]

print("Prediction: ", class_names[predicted_class[0]], " | Truth: ", class_names[true_class])
print("Score: ", np.round(score,2))

plt.figure()
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(class_names[true_class])
plt.axis("off")
Prediction:  Cat  | Truth:  Cat
Score:  [0.]
Out[55]:
(-0.5, 199.5, 199.5, -0.5)
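
Since `predict_classes()` was removed from `tf.keras` in later releases, the class index for a sigmoid-output binary model can be recovered by thresholding the predicted score at 0.5. A minimal numpy sketch, where the `scores` array stands in for the output of `model.predict()` (the values are hypothetical):

```python
import numpy as np

def classes_from_scores(scores, threshold=0.5):
    """Map sigmoid outputs of shape (batch, 1) to integer class indices."""
    return (np.asarray(scores) > threshold).astype("int32").ravel()

# Scores as model.predict() would return them for a 3-image batch
scores = np.array([[0.02], [0.97], [0.40]])
labels = classes_from_scores(scores)
print(labels)  # → [0 1 0]
```

With this helper, `model.predict_classes(img_array)[0]` above becomes `classes_from_scores(model.predict(img_array))[0]`.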

Test on online/downloaded images

In [47]:
i = 0  # index of the expected (true) class for this image: 0 = 'Cat'
img_name = r"Coronavirus-and-Cats-Science-Roundup-Catipilla.jpg"
path_to_Downloaded_images = pathlib.Path("/CatsAndDogs/online_test_images")
img_file_path = os.path.join(path_to_Downloaded_images, img_name)

# Load the image and resize it to the model's input resolution
img = keras.preprocessing.image.load_img(img_file_path, target_size=new_img_size)

img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # Create batch axis

predicted_class = loaded_model.predict_classes(img_array)[0]
predictions = loaded_model.predict(img_array)
score = predictions[0]

print("Prediction: ", class_names[predicted_class[0]], " | Truth: ", class_names[i])
print("Score: ", np.round(score, 2))

plt.figure()
plt.imshow(img)
plt.title(class_names[i])
plt.axis("off")
Prediction:  Cat  | Truth:  Cat
Score:  [0.]
Out[47]:
(-0.5, 199.5, 199.5, -0.5)
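
The `target_size` argument above is what lets an arbitrary downloaded image be fed to the fixed-size network input. Its effect can be sketched in plain numpy with a nearest-neighbour resize (Keras' `load_img` uses a smoother interpolation by default; the 200x200 target size is an assumption read off the plot extents above):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) uint8 image array."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return img[rows[:, None], cols]

# A fake 300x400 RGB "downloaded image"
img = np.random.randint(0, 256, size=(300, 400, 3), dtype=np.uint8)
resized = resize_nearest(img, 200, 200)
batch = resized[None, ...]  # add the batch axis, shape (1, 200, 200, 3)
print(batch.shape)  # → (1, 200, 200, 3)
```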

Conclusions

We have developed a Convolutional Neural Network based model without using any pre-trained models or transfer learning methods. It achieves an accuracy of ~95% on the validation dataset. The original dataset contained many corrupted files that had to be removed. A custom image feeder as well as custom callbacks for saving the best model and early stopping were created.