GreenRed_AppleClassification

This is my proposed project from the KPT-PACE Machine Learning workshop. Its objective is to help colorblind people differentiate between green and red apples, and also between fresh and rotten ones. Please note that all the commands below need to be run in a terminal and, to be safe, inside your Docker container or virtual environment.

(Image: a fresh apple)

1) Download the project to your Jetson Nano:

git clone https://github.com/Syukta8/GreenRed_AppleClassification.git
pip install -r requirements.txt

The project can also be downloaded to your local machine to do the training beforehand; use pip to install all the required modules. Put your dataset inside the /data folder (suitable datasets can be found on Kaggle). Before training, make sure every library listed in requirements.txt is installed inside your Docker container (Jetson Nano) or environment (local machine).
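As a rough sketch, the dataset layout that train.py typically expects is one subfolder per class inside each train/val/test split, plus a labels.txt file. The class names below are hypothetical placeholders, so adjust them to your own labels:

```python
from pathlib import Path

# Hypothetical class names for this project; replace with your own labels.
CLASSES = ["fresh_green", "fresh_red", "rotten_green", "rotten_red"]
SPLITS = ["train", "val", "test"]

def scaffold_dataset(root: str) -> Path:
    """Create the split/class folder layout the training script expects."""
    root = Path(root)
    for split in SPLITS:
        for cls in CLASSES:
            (root / split / cls).mkdir(parents=True, exist_ok=True)
    # labels.txt lists one class name per line, in sorted order.
    (root / "labels.txt").write_text("\n".join(sorted(CLASSES)) + "\n")
    return root
```

Drop your images into the matching class folders, e.g. data/apple/train/fresh_red/.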

2) To start training on your dataset, run this:

python train.py --model-dir=models/apple --epochs=50 --batch-size=4 --workers=2 --lr=0.001 --arch=resnet34 data/apple

You can adjust the number of epochs, batch size, learning rate and pre-trained architecture accordingly. The model file will be saved inside /models/apple under the name model_best.pth.tar.

3) After training finishes, run the conversion script onnx_export.py to convert the .pth.tar checkpoint to ONNX format:

python onnx_export.py --model-dir=models/apple

You will find a file named resnet34.onnx inside /models/apple; the filename depends on the pre-trained architecture you used during training. You can also download the file from my drive:

gdown https://drive.google.com/uc?id=1JDL9Ttwepzze0nQJobEIxUsownitDeAF
or
gdown https://drive.google.com/uc?id=1irmM1qMHo178qSXRtO-36UOSY3JxqKeC

4) To run inference:

From an image:

python imagenet.py --model=models/apple/resnet34.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/apple/labels.txt image.jpg

From a video:

python imagenet.py --model=models/apple/resnet34.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/apple/labels.txt video.mp4

From a webcam:

python imagenet.py --model=models/apple/resnet34.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/apple/labels.txt /dev/video0

To check which webcams are available on your Jetson, run ls /dev/video* in the terminal before running the inference command.
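The same device check can be scripted in Python, for example:

```python
import glob

def list_cameras() -> list:
    """Return the V4L2 camera device paths available on this machine."""
    return sorted(glob.glob("/dev/video*"))

print(list_cameras())
```

On a Jetson with one camera attached this typically prints something like ['/dev/video0'].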

Please comment on any improvements that could be added to the code. Thank you for using this program.

Here is my full video on this project:

