diff --git a/Images and Videos/Images/WORKFLOW/Workflow.JPG b/Images and Videos/Images/WORKFLOW/Workflow.JPG
new file mode 100644
index 0000000..3097809
Binary files /dev/null and b/Images and Videos/Images/WORKFLOW/Workflow.JPG differ
diff --git a/Mechanical Design/Actuators/Stepper Motor.SLDPRT b/Mechanical Design/Actuators/Stepper Motor.SLDPRT
new file mode 100644
index 0000000..7779f95
Binary files /dev/null and b/Mechanical Design/Actuators/Stepper Motor.SLDPRT differ
diff --git a/Mechanical Design/Components Of Bin/Circular Cover.SLDPRT b/Mechanical Design/Components Of Bin/Circular Cover.SLDPRT
new file mode 100644
index 0000000..9408a75
Binary files /dev/null and b/Mechanical Design/Components Of Bin/Circular Cover.SLDPRT differ
diff --git a/Mechanical Design/Components Of Bin/Compartment Division Lower Part.SLDPRT b/Mechanical Design/Components Of Bin/Compartment Division Lower Part.SLDPRT
new file mode 100644
index 0000000..b78eade
Binary files /dev/null and b/Mechanical Design/Components Of Bin/Compartment Division Lower Part.SLDPRT differ
diff --git a/Mechanical Design/Components Of Bin/Plate For Object Keeping.SLDPRT b/Mechanical Design/Components Of Bin/Plate For Object Keeping.SLDPRT
new file mode 100644
index 0000000..e2ce6e3
Binary files /dev/null and b/Mechanical Design/Components Of Bin/Plate For Object Keeping.SLDPRT differ
diff --git a/Mechanical Design/Components Of Bin/Upper Cover.SLDPRT b/Mechanical Design/Components Of Bin/Upper Cover.SLDPRT
new file mode 100644
index 0000000..80d6f3c
Binary files /dev/null and b/Mechanical Design/Components Of Bin/Upper Cover.SLDPRT differ
diff --git a/Mechanical Design/Components Of Bin/Upper Flap System.SLDPRT b/Mechanical Design/Components Of Bin/Upper Flap System.SLDPRT
new file mode 100644
index 0000000..a1fa8c4
Binary files /dev/null and b/Mechanical Design/Components Of Bin/Upper Flap System.SLDPRT differ
diff --git a/Mechanical Design/Components Of Bin/Wooden Plate.SLDPRT b/Mechanical Design/Components Of Bin/Wooden Plate.SLDPRT
new file mode 100644
index 0000000..eb1a207
Binary files /dev/null and b/Mechanical Design/Components Of Bin/Wooden Plate.SLDPRT differ
diff --git a/Mechanical Design/Fully Assembled Bin Model/Complete Bin Model Assembly.SLDASM b/Mechanical Design/Fully Assembled Bin Model/Complete Bin Model Assembly.SLDASM
new file mode 100644
index 0000000..f8edff9
Binary files /dev/null and b/Mechanical Design/Fully Assembled Bin Model/Complete Bin Model Assembly.SLDASM differ
diff --git a/Mechanical Design/Sensors/Camera.SLDPRT b/Mechanical Design/Sensors/Camera.SLDPRT
new file mode 100644
index 0000000..e7b43de
Binary files /dev/null and b/Mechanical Design/Sensors/Camera.SLDPRT differ
diff --git a/Mechanical Design/Sensors/Strain Gauge.SLDPRT b/Mechanical Design/Sensors/Strain Gauge.SLDPRT
new file mode 100644
index 0000000..432f9b7
Binary files /dev/null and b/Mechanical Design/Sensors/Strain Gauge.SLDPRT differ
diff --git a/Mechanical Design/Sensors/Ultrasonic Sensor.SLDPRT b/Mechanical Design/Sensors/Ultrasonic Sensor.SLDPRT
new file mode 100644
index 0000000..67a420f
Binary files /dev/null and b/Mechanical Design/Sensors/Ultrasonic Sensor.SLDPRT differ
diff --git a/README.md b/README.md
index 7ed03e4..02cf762 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,199 @@
-# eWaste-Dustbin
\ No newline at end of file
+
+
+# **E-Waste Bin**
+
+## **Abstract**
+
+The E-Waste Bin is a smart bin that can collect and segregate five types
+of e-waste: phones, headphones, watches, mice, and chargers.
+
+
+
+
+## **Motivation**
+
+E-waste is a growing problem: it is hazardous to the environment if not
+disposed of properly, yet much of it can be recycled easily. It must be
+collected and segregated for proper disposal or recycling, and the
+E-Waste Bin helps to do exactly that.
+
+The project also offers valuable hands-on experience with sensors and
+**Arduino**, with building physical prototypes, and with creating
+digital models in software such as **SolidWorks**.
+
+## **Components**
+
+- Stepper motor & drivers L298n x2
+- Load cell & Hx711 x1
+- Arduino mega x1
+- Ultrasonic sensor x2
+- Dustbin x1
+- Webcam x1
+- Flywheel frame x1
+- Pi-shaped panel x2
+- Rectangular flap x1
+- Circular disk x1
+- Rectangular cardboard x5
+- Aluminium rods x5
+- Angle brackets x5
+
+## **Workflow**
+
+![Workflow](<Images and Videos/Images/WORKFLOW/Workflow.JPG>)
+
+
+## **Mechanical Aspect of the Design**
+
+**Bin**
+
+The bin is a standard-sized PVC dustbin with a circular face.
+
+**Pi-shaped panels**
+
+Two pi-shaped panels were cut out of thick cardboard. They hold the
+load cell between them, fastened with nuts and bolts. A small sector is
+removed from the lower panel, with the centres of both panels
+coinciding, so that the exposed part of the upper panel can be attached
+to the stepper motor.
+
+**Flywheel**
+
+The flywheel was cut out of a wooden panel, and its spokes are
+reinforced with aluminium strips. The solid circle at the centre holds
+the stepper motor (which rotates the pi-shaped panel) along the central
+axis of the bin. The flywheel also acts as a frame for the cardboard
+sheets used for the inner partitioning of the bin and as a base for the
+circular disk that forms the upper lid.
+
+The flywheel is placed on top of the bin and fixed with nuts and bolts.
+
+**Flap and upper lid**
+
+The upper lid, a circular wooden disk, stands on the flywheel frame,
+supported by five aluminium rods.
+
+A stepper motor is fixed at the centre of the lid along the central
+axis, and its shaft holds the wooden flap. The height of the lid is
+adjusted so that a gap of 1 cm is left between the flap and the
+pi-shaped panels.
+
+
+
+
+## **Electronics Aspect of the Design**
+
+**Stepper motors**
+
+The bin uses two 12 V stepper motors with 200 steps per revolution,
+controlled by L298N motor drivers.
+
+Driving the stepper motors through these drivers provides precise
+rotation to the required angles at higher speed, which improves the
+accuracy of the bin and reduces the time spent in motion. A small
+worked example of the step arithmetic is sketched below.
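+
+A minimal sketch of the step arithmetic, assuming the five waste
+compartments are spaced at equal angles around the bin (the 200
+steps/revolution figure comes from the motors above; the equal spacing
+is an assumption for illustration):
+
+```python
+# Steps needed to point the pi-shaped panel at a given compartment,
+# assuming 5 equally spaced compartments and a 200-step/rev motor.
+STEPS_PER_REV = 200
+NUM_COMPARTMENTS = 5
+
+def steps_to_compartment(index: int) -> int:
+    """Full steps from the home position to compartment `index` (0-4)."""
+    degrees_per_compartment = 360 / NUM_COMPARTMENTS  # 72 degrees
+    degrees_per_step = 360 / STEPS_PER_REV             # 1.8 degrees
+    return round(index * degrees_per_compartment / degrees_per_step)
+
+print([steps_to_compartment(i) for i in range(NUM_COMPARTMENTS)])
+# [0, 40, 80, 120, 160]
+```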
+
+
+
+**Arduino Mega**
+
+- It has a large number of I/O pins, so all the sensors and motors can
+  be connected easily.
+- With 4 KB of EEPROM available, stored settings persist and the board
+  can be used for years.
+
+
+
+
+**Load-cell & HX711**
+
+- We used 1 Kg load cell with HX711 ADC chip.
+- The strain gauge provides it a high precision, it can measure very
+ slight changes in weights making it suitable for lighter electronic
+ waste like earphones.
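+
+A minimal sketch of the standard HX711 conversion, assuming the usual
+two-point calibration (a tare offset plus a scale factor from a known
+reference weight); the raw counts below are made-up illustrative values:
+
+```python
+# Convert raw 24-bit HX711 counts to grams using a tare offset and a
+# scale factor obtained from a known calibration weight.
+def counts_to_grams(raw: int, offset: int, scale: float) -> float:
+    """weight = (raw - offset) / scale, where scale is counts per gram."""
+    return (raw - offset) / scale
+
+# Example calibration: empty pan reads 84200 counts, a 100 g reference
+# weight reads 132700 counts -> scale = (132700 - 84200) / 100 = 485.
+OFFSET = 84200
+SCALE = 485.0
+
+print(counts_to_grams(108450, OFFSET, SCALE))  # ~50 g
+```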
+
+
+
+**Ultrasonic sensor (HC-SR04)**
+
+- It can measure the position of an object accurately up to about two
+  metres away.
+- We use it to decide when to energise the load cell and HX711, which
+  increases their life. The distance calculation is sketched below.
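+
+A minimal sketch of the standard HC-SR04 distance calculation (the
+sensor reports the round-trip time of an ultrasonic pulse; the echo
+duration below is a made-up example value):
+
+```python
+# Convert an HC-SR04 echo pulse duration to distance.
+SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature
+
+def echo_to_distance_cm(echo_duration_us: float) -> float:
+    """Distance in cm; divide by 2 because the pulse travels out and back."""
+    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2
+
+print(echo_to_distance_cm(1166))  # ~20 cm: an object is near the opening
+```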
+
+**Object Detection**
+
+- Object detection is done with the Ultralytics YOLOv5 model built on
+  PyTorch, trained on our custom dataset.
+- YOLOv5 is one of the highest-performing object detectors available:
+  it is fast, accurate, and easy to train. A sketch of loading the
+  trained weights for inference follows this list.
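+
+A minimal sketch of running the trained detector on a webcam frame,
+assuming the weights landed in YOLOv5's default output location
+(`runs/train/exp/weights/best.pt`; adjust the path to wherever your
+training run wrote them):
+
+```python
+import cv2
+import torch
+
+# Load the custom-trained YOLOv5 weights through torch.hub.
+model = torch.hub.load('ultralytics/yolov5', 'custom',
+                       path='runs/train/exp/weights/best.pt')
+
+# Grab a single frame from the webcam and run inference on it.
+cap = cv2.VideoCapture(0)
+ok, frame = cap.read()
+cap.release()
+
+if ok:
+    results = model(frame[..., ::-1])  # OpenCV gives BGR; model expects RGB
+    # pandas() gives one row per detection: xmin, ymin, xmax, ymax,
+    # confidence, class index and class name (e.g. 'phone', 'watch').
+    print(results.pandas().xyxy[0])
+```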
+
+## **Cost Structure**
+
+| **Components**     | **Quantity** | **Cost (INR)** |
+| :----------------: | :----------: | :------------: |
+| Motor Driver L298N |      2       |      240       |
+| Stepper Motors     |      2       |      2050      |
+| Arduino Mega       |      1       |      2200      |
+| 12 V Battery       |      1       |      900       |
+| Ultrasonic Sensor  |      2       |      100       |
+| Load Cell          |      1       |      520       |
+| HX711              |      1       |      150       |
+| Dustbin            |      1       |      750       |
+| **Total**          |              |   **6,910**    |
+
+## **Applications**
+
+- Collects and segregates e-waste.
+- Increases the recyclability of e-waste.
+- Can be combined with a reward/loyalty point-based system.
+
+## **Limitations**
+
+- It can collect and separate only a few specific types of waste, such
+  as chargers, mice, smartphones, etc.
+- Object detection is carried out on a laptop.
+- If modified to segregate more categories, it would require a lot of
+  space.
+- It is hard to empty the bin: the entire upper portion must be removed
+  to do so.
+
+> ## **Future Improvements**
+
+- A Raspberry Pi could carry out the image processing and object
+  detection on-board, and additional sensors could enable segregation
+  on the basis of recyclability, making the bin more practical.
+- An online reward-based mechanism could be added to make it more
+  appealing.
+
+## **Team Members**
+
+1. [Aastha Tembhare](https://github.com/Aastha-tembhare)
+2. [Jitesh Bhati](https://github.com/jiteshbhati305)
+3. [Kaivalya](https://github.com/kai-013)
+4. [Shreya Mittal](https://github.com/ShreyaMittalSM)
+
+## **Mentors**
+
+1. [Abhay Pratap Singh](https://github.com/DarthEkLen)
+2. [Harikhrishnan P.B.](https://github.com/MurkeyCube)
+3. [Sanjeev Krishnan](https://github.com/SanjeevKrishnan)
+
+## **References**
+
+- [HX711 module](https://youtu.be/sxzoAGf1kOo)
+- [Ultralytics github](https://github.com/ultralytics/yolov5)
+- [LabelImg](https://github.com/heartexlabs/labelImg)
+- [SolidWorks tutorials](https://youtu.be/36Bry_57Pcc)
+- [Yolo Drowsiness Detection](https://github.com/nicknochnack/YOLO-Drowsiness-Detection)
+- [Custom dataset training with YOLOv5](https://youtu.be/80Q3HIBy7Qg)
+- [Arduino tutorial](https://randomnerdtutorials.com/arduino-load-cell-hx711)
+
+
+
diff --git a/Report and Poster/Poster.jpg b/Report and Poster/Poster.jpg
new file mode 100644
index 0000000..f053912
Binary files /dev/null and b/Report and Poster/Poster.jpg differ
diff --git a/Report and Poster/Report.pdf b/Report and Poster/Report.pdf
new file mode 100644
index 0000000..dd44354
Binary files /dev/null and b/Report and Poster/Report.pdf differ
diff --git a/src/object detection.ipynb b/src/object detection.ipynb
new file mode 100644
index 0000000..38b0f52
--- /dev/null
+++ b/src/object detection.ipynb
@@ -0,0 +1,1032 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "72297a04",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Looking in links: https://download.pytorch.org/whl/lts/1.8/torch_lts.html\n",
+ "Requirement already satisfied: torch==1.8.1+cu111 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (1.8.1+cu111)\n",
+ "Requirement already satisfied: torchvision==0.9.1+cu111 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (0.9.1+cu111)\n",
+ "Requirement already satisfied: torchaudio===0.8.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (0.8.1)\n",
+ "Requirement already satisfied: typing-extensions in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from torch==1.8.1+cu111) (4.1.1)\n",
+ "Requirement already satisfied: numpy in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from torch==1.8.1+cu111) (1.21.5)\n",
+ "Requirement already satisfied: pillow>=4.1.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from torchvision==0.9.1+cu111) (9.0.1)\n"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "35356979",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Cloning into 'yolov5'...\n"
+ ]
+ }
+ ],
+ "source": [
+ "!git clone https://github.com/ultralytics/yolov5\n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "b5b11fe9",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Requirement already satisfied: matplotlib>=3.2.2 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 5)) (3.5.1)\n",
+ "Requirement already satisfied: numpy>=1.18.5 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 6)) (1.21.5)\n",
+ "Requirement already satisfied: opencv-python>=4.1.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 7)) (4.5.5.64)\n",
+ "Requirement already satisfied: Pillow>=7.1.2 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 8)) (9.0.1)\n",
+ "Requirement already satisfied: PyYAML>=5.3.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 9)) (6.0)\n",
+ "Requirement already satisfied: requests>=2.23.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 10)) (2.27.1)\n",
+ "Requirement already satisfied: scipy>=1.4.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 11)) (1.7.3)\n",
+ "Requirement already satisfied: torch>=1.7.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 12)) (1.8.1+cu111)\n",
+ "Requirement already satisfied: torchvision>=0.8.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 13)) (0.9.1+cu111)\n",
+ "Requirement already satisfied: tqdm>=4.64.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 14)) (4.64.0)\n",
+ "Requirement already satisfied: protobuf<4.21.3 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 15)) (3.19.1)\n",
+ "Requirement already satisfied: tensorboard>=2.4.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 18)) (2.9.1)\n",
+ "Requirement already satisfied: pandas>=1.1.4 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 22)) (1.4.2)\n",
+ "Requirement already satisfied: seaborn>=0.11.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 23)) (0.11.2)\n",
+ "Requirement already satisfied: ipython in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 37)) (8.2.0)\n",
+ "Requirement already satisfied: psutil in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from -r requirements.txt (line 38)) (5.8.0)\n",
+ "Collecting thop>=0.1.1\n",
+ " Downloading thop-0.1.1.post2207130030-py3-none-any.whl (15 kB)\n",
+ "Requirement already satisfied: fonttools>=4.22.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from matplotlib>=3.2.2->-r requirements.txt (line 5)) (4.25.0)\n",
+ "Requirement already satisfied: python-dateutil>=2.7 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from matplotlib>=3.2.2->-r requirements.txt (line 5)) (2.8.2)\n",
+ "Requirement already satisfied: packaging>=20.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from matplotlib>=3.2.2->-r requirements.txt (line 5)) (21.3)\n",
+ "Requirement already satisfied: pyparsing>=2.2.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from matplotlib>=3.2.2->-r requirements.txt (line 5)) (3.0.4)\n",
+ "Requirement already satisfied: cycler>=0.10 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from matplotlib>=3.2.2->-r requirements.txt (line 5)) (0.11.0)\n",
+ "Requirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from matplotlib>=3.2.2->-r requirements.txt (line 5)) (1.3.2)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from requests>=2.23.0->-r requirements.txt (line 10)) (3.3)\n",
+ "Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from requests>=2.23.0->-r requirements.txt (line 10)) (1.26.9)\n",
+ "Requirement already satisfied: charset-normalizer~=2.0.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from requests>=2.23.0->-r requirements.txt (line 10)) (2.0.4)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from requests>=2.23.0->-r requirements.txt (line 10)) (2021.10.8)\n",
+ "Requirement already satisfied: typing-extensions in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from torch>=1.7.0->-r requirements.txt (line 12)) (4.1.1)\n",
+ "Requirement already satisfied: colorama in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tqdm>=4.64.0->-r requirements.txt (line 14)) (0.4.4)\n",
+ "Requirement already satisfied: grpcio>=1.24.3 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (1.42.0)\n",
+ "Requirement already satisfied: setuptools>=41.0.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (61.2.0)\n",
+ "Requirement already satisfied: werkzeug>=1.0.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (2.0.3)\n",
+ "Requirement already satisfied: google-auth<3,>=1.6.3 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (1.33.0)\n",
+ "Requirement already satisfied: markdown>=2.6.8 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (3.3.4)\n",
+ "Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (1.8.1)\n",
+ "Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (0.4.6)\n",
+ "Requirement already satisfied: absl-py>=0.4 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (1.1.0)\n",
+ "Requirement already satisfied: wheel>=0.26 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (0.37.1)\n",
+ "Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from tensorboard>=2.4.1->-r requirements.txt (line 18)) (0.6.1)\n",
+ "Requirement already satisfied: pytz>=2020.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from pandas>=1.1.4->-r requirements.txt (line 22)) (2021.3)\n",
+ "Requirement already satisfied: jedi>=0.16 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (0.18.1)\n",
+ "Requirement already satisfied: backcall in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (0.2.0)\n",
+ "Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (3.0.20)\n",
+ "Requirement already satisfied: pygments>=2.4.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (2.11.2)\n",
+ "Requirement already satisfied: matplotlib-inline in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (0.1.2)\n",
+ "Requirement already satisfied: traitlets>=5 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (5.1.1)\n",
+ "Requirement already satisfied: stack-data in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (0.2.0)\n",
+ "Requirement already satisfied: pickleshare in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (0.7.5)\n",
+ "Requirement already satisfied: decorator in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from ipython->-r requirements.txt (line 37)) (5.1.1)\n",
+ "Requirement already satisfied: pyasn1-modules>=0.2.1 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.4.1->-r requirements.txt (line 18)) (0.2.8)\n",
+ "Requirement already satisfied: six>=1.9.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.4.1->-r requirements.txt (line 18)) (1.16.0)\n",
+ "Requirement already satisfied: rsa<5,>=3.1.4 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.4.1->-r requirements.txt (line 18)) (4.7.2)\n",
+ "Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from google-auth<3,>=1.6.3->tensorboard>=2.4.1->-r requirements.txt (line 18)) (4.2.2)\n",
+ "Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.4.1->-r requirements.txt (line 18)) (1.3.1)\n",
+ "Requirement already satisfied: parso<0.9.0,>=0.8.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from jedi>=0.16->ipython->-r requirements.txt (line 37)) (0.8.3)\n",
+ "Requirement already satisfied: wcwidth in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython->-r requirements.txt (line 37)) (0.2.5)\n",
+ "Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard>=2.4.1->-r requirements.txt (line 18)) (0.4.8)\n",
+ "Requirement already satisfied: oauthlib>=3.0.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.4.1->-r requirements.txt (line 18)) (3.2.0)\n",
+ "Requirement already satisfied: executing in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from stack-data->ipython->-r requirements.txt (line 37)) (0.8.3)\n",
+ "Requirement already satisfied: asttokens in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from stack-data->ipython->-r requirements.txt (line 37)) (2.0.5)\n",
+ "Requirement already satisfied: pure-eval in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from stack-data->ipython->-r requirements.txt (line 37)) (0.2.2)\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Installing collected packages: thop\n",
+ " Attempting uninstall: thop\n",
+ " Found existing installation: thop 0.1.0.post2206102148\n",
+ " Uninstalling thop-0.1.0.post2206102148:\n",
+ " Successfully uninstalled thop-0.1.0.post2206102148\n",
+ "Successfully installed thop-0.1.1.post2207130030\n"
+ ]
+ }
+ ],
+ "source": [
+ "!cd yolov5 & pip install -r requirements.txt\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "af50df8f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "from matplotlib import pyplot as plt\n",
+ "import numpy as np\n",
+ "import cv2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "bd53a0c4",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Using cache found in C:\\Users\\latas/.cache\\torch\\hub\\ultralytics_yolov5_master\n",
+ "YOLOv5 2022-6-14 Python-3.9.12 torch-1.8.1+cu111 CPU\n",
+ "\n",
+ "Downloading https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt to yolov5s.pt...\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "f960af3e581b4468b33b025c2357a753",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0.00/14.1M [00:00, ?B/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Fusing layers... \n",
+ "YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients, 16.4 GFLOPs\n",
+ "Adding AutoShape... \n"
+ ]
+ }
+ ],
+ "source": [
+ "\n",
+ "model = torch.hub.load('ultralytics/yolov5', 'yolov5s')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "5fa8e457",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "AutoShape(\n",
+ " (model): DetectMultiBackend(\n",
+ " (model): Model(\n",
+ " (model): Sequential(\n",
+ " (0): Conv(\n",
+ " (conv): Conv2d(3, 32, kernel_size=(6, 6), stride=(2, 2), padding=(2, 2))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (1): Conv(\n",
+ " (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (2): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (3): Conv(\n",
+ " (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (4): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " (1): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (5): Conv(\n",
+ " (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (6): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " (1): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " (2): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (7): Conv(\n",
+ " (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (8): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (9): SPPF(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)\n",
+ " )\n",
+ " (10): Conv(\n",
+ " (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (11): Upsample(scale_factor=2.0, mode=nearest)\n",
+ " (12): Concat()\n",
+ " (13): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (14): Conv(\n",
+ " (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (15): Upsample(scale_factor=2.0, mode=nearest)\n",
+ " (16): Concat()\n",
+ " (17): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (18): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (19): Concat()\n",
+ " (20): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (21): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (22): Concat()\n",
+ " (23): C3(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv3): Conv(\n",
+ " (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (m): Sequential(\n",
+ " (0): Bottleneck(\n",
+ " (cv1): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " (cv2): Conv(\n",
+ " (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n",
+ " (act): SiLU(inplace=True)\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " (24): Detect(\n",
+ " (m): ModuleList(\n",
+ " (0): Conv2d(128, 255, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (1): Conv2d(256, 255, kernel_size=(1, 1), stride=(1, 1))\n",
+ " (2): Conv2d(512, 255, kernel_size=(1, 1), stride=(1, 1))\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ ")"
+ ]
+ },
+ "execution_count": 7,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "41d2a6f4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import uuid # Unique identifier\n",
+ "import os\n",
+ "import time"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "0f828e0c",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "IMAGES_PATH = os.path.join('data', 'images') #/data/images\n",
+ "labels = ['phone', 'watch']\n",
+ "number_imgs = 23"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "5804fb03",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Collecting images for phone\n",
+ "Collecting images for phone, image number 0\n",
+ "data\\images\\phone.96f4d9b9-083e-11ed-b16d-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 1\n",
+ "data\\images\\phone.96f7668c-083e-11ed-bb60-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 2\n",
+ "data\\images\\phone.96f7668d-083e-11ed-9a77-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 3\n",
+ "data\\images\\phone.96f7668e-083e-11ed-a1f4-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 4\n",
+ "data\\images\\phone.96f7668f-083e-11ed-bf4f-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 5\n",
+ "data\\images\\phone.96f76690-083e-11ed-ad33-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 6\n",
+ "data\\images\\phone.96f76691-083e-11ed-94ec-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 7\n",
+ "data\\images\\phone.96f76692-083e-11ed-b745-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 8\n",
+ "data\\images\\phone.96f76693-083e-11ed-9124-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 9\n",
+ "data\\images\\phone.96f76694-083e-11ed-bb49-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 10\n",
+ "data\\images\\phone.96f76695-083e-11ed-ac82-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 11\n",
+ "data\\images\\phone.96f76696-083e-11ed-a271-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 12\n",
+ "data\\images\\phone.96f76697-083e-11ed-9de8-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 13\n",
+ "data\\images\\phone.96f76698-083e-11ed-914c-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 14\n",
+ "data\\images\\phone.96f76699-083e-11ed-bce7-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 15\n",
+ "data\\images\\phone.96f7669a-083e-11ed-9bdc-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 16\n",
+ "data\\images\\phone.96f8b423-083e-11ed-a8c3-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 17\n",
+ "data\\images\\phone.96f8b424-083e-11ed-b796-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 18\n",
+ "data\\images\\phone.96f8b425-083e-11ed-b418-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 19\n",
+ "data\\images\\phone.96f8b426-083e-11ed-941e-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 20\n",
+ "data\\images\\phone.96f8b427-083e-11ed-80e3-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 21\n",
+ "data\\images\\phone.96f8b428-083e-11ed-881e-90ccdf6b987d.jpg\n",
+ "Collecting images for phone, image number 22\n",
+ "data\\images\\phone.96f8c7a4-083e-11ed-969f-90ccdf6b987d.jpg\n",
+ "Collecting images for watch\n",
+ "Collecting images for watch, image number 0\n",
+ "data\\images\\watch.96f8db27-083e-11ed-b131-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 1\n",
+ "data\\images\\watch.96f8db28-083e-11ed-b3a3-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 2\n",
+ "data\\images\\watch.96f8db29-083e-11ed-af32-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 3\n",
+ "data\\images\\watch.96f8db2a-083e-11ed-9c3b-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 4\n",
+ "data\\images\\watch.96f8db2b-083e-11ed-9a5b-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 5\n",
+ "data\\images\\watch.96f8db2c-083e-11ed-8ca1-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 6\n",
+ "data\\images\\watch.96f8db2d-083e-11ed-ba10-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 7\n",
+ "data\\images\\watch.96f8db2e-083e-11ed-9c93-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 8\n",
+ "data\\images\\watch.96f8db2f-083e-11ed-958d-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 9\n",
+ "data\\images\\watch.96f8db30-083e-11ed-9627-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 10\n",
+ "data\\images\\watch.96f8db31-083e-11ed-9f7c-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 11\n",
+ "data\\images\\watch.96f8db32-083e-11ed-9971-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 12\n",
+ "data\\images\\watch.96f8db33-083e-11ed-83ff-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 13\n",
+ "data\\images\\watch.96f8db34-083e-11ed-8705-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 14\n",
+ "data\\images\\watch.96f8db35-083e-11ed-b831-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 15\n",
+ "data\\images\\watch.96f8db36-083e-11ed-a460-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 16\n",
+ "data\\images\\watch.96f8db37-083e-11ed-825f-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 17\n",
+ "data\\images\\watch.96f8db38-083e-11ed-95c1-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 18\n",
+ "data\\images\\watch.96f8db39-083e-11ed-8c84-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 19\n",
+ "data\\images\\watch.96f8db3a-083e-11ed-96d1-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 20\n",
+ "data\\images\\watch.96f8db3b-083e-11ed-80b6-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 21\n",
+ "data\\images\\watch.96f8db3c-083e-11ed-a999-90ccdf6b987d.jpg\n",
+ "Collecting images for watch, image number 22\n",
+ "data\\images\\watch.96f8db3d-083e-11ed-80e5-90ccdf6b987d.jpg\n"
+ ]
+ }
+ ],
+ "source": [
+ "for label in labels:\n",
+ " print('Collecting images for {}'.format(label))\n",
+ " for img_num in range(number_imgs):\n",
+ " print('Collecting images for {}, image number {}'.format(label, img_num))\n",
+ " imgname = os.path.join(IMAGES_PATH, label+'.'+str(uuid.uuid1())+'.jpg')\n",
+ " print(imgname) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "73712e9a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Cloning into 'labelImg'...\n"
+ ]
+ }
+ ],
+ "source": [
+ "!git clone https://github.com/tzutalin/labelImg"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "0892d711",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Requirement already satisfied: pyqt5 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (5.15.7)\n",
+ "Requirement already satisfied: lxml in c:\\users\\latas\\anaconda_3\\lib\\site-packages (4.9.1)\n",
+ "Requirement already satisfied: PyQt5-sip<13,>=12.11 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from pyqt5) (12.11.0)\n",
+ "Requirement already satisfied: PyQt5-Qt5>=5.15.0 in c:\\users\\latas\\anaconda_3\\lib\\site-packages (from pyqt5) (5.15.2)\n"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install pyqt5 lxml --upgrade\n",
+ "!cd labelImg && pyrcc5 -o libs/resources.py resources.qrc"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "e3d9d3ad",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "training images are : 36\n",
+ "Validation images are : 9\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "from random import choice\n",
+ "import shutil\n",
+ "\n",
+ "#arrays to store file names\n",
+ "imgs =[]\n",
+ "xmls =[]\n",
+ "\n",
+ "#setup dir names\n",
+ "trainPath = r\"C:\\Users\\latas\\ewb2.0\\dataset\\images\\train\"\n",
+ "valPath = r\"C:\\Users\\latas\\ewb2.0\\dataset\\images\\val\"\n",
+ "crsPath = r\"C:\\Users\\latas\\images\" #dir where images and annotations stored\n",
+ "\n",
+ "#setup ratio (val ratio = rest of the files in origin dir after splitting into train and test)\n",
+ "train_ratio = 0.8\n",
+ "val_ratio = 0.2\n",
+ "\n",
+ "\n",
+ "#total count of imgs\n",
+ "totalImgCount = len(os.listdir(crsPath))/2\n",
+ "\n",
+ "#soring files to corresponding arrays\n",
+ "for (dirname, dirs, files) in os.walk(crsPath):\n",
+ " for filename in files:\n",
+ " if filename.endswith('.txt'):\n",
+ " xmls.append(filename)\n",
+ " else:\n",
+ " imgs.append(filename)\n",
+ "\n",
+ "\n",
+ "#counting range for cycles\n",
+ "countForTrain = int(len(imgs)*train_ratio)\n",
+ "countForVal = int(len(imgs)*val_ratio)\n",
+ "print(\"training images are : \",countForTrain)\n",
+ "print(\"Validation images are : \",countForVal)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "74e91de1",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'C:\\\\Users\\\\latas\\\\ewb2.0\\\\dataset\\\\images\\\\val\\\\images'"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "trainimagePath = r\"C:\\Users\\latas\\ewb2.0\\dataset\\images\\train\"\n",
+ "trainlabelPath = r\"C:\\Users\\latas\\ewb2.0\\dataset\\label\\train\"\n",
+ "valimagePath = r\"C:\\Users\\latas\\ewb2.0\\dataset\\images\\val\"\n",
+ "vallabelPath = r\"C:\\Users\\latas\\ewb2.0\\dataset\\label\\val\"\n",
+ "#cycle for train dir\n",
+ "for x in range(countForTrain):\n",
+ "\n",
+ " fileJpg = choice(imgs) # get name of random image from origin dir\n",
+ " fileXml = fileJpg[:-4] +'.txt' # get name of corresponding annotation file\n",
+ "\n",
+ " #move both files into train dir\n",
+ " #shutil.move(os.path.join(crsPath, fileJpg), os.path.join(trainimagePath, fileJpg))\n",
+ " #shutil.move(os.path.join(crsPath, fileXml), os.path.join(trainlabelPath, fileXml))\n",
+ " shutil.copy(os.path.join(crsPath, fileJpg), os.path.join(trainimagePath, fileJpg))\n",
+ " shutil.copy(os.path.join(crsPath, fileXml), os.path.join(trainlabelPath, fileXml))\n",
+ "\n",
+ "\n",
+ " #remove files from arrays\n",
+ " imgs.remove(fileJpg)\n",
+ " xmls.remove(fileXml)\n",
+ "\n",
+ "\n",
+ "\n",
+ "#cycle for test dir \n",
+ "for x in range(countForVal):\n",
+ "\n",
+ " fileJpg = choice(imgs) # get name of random image from origin dir\n",
+ " fileXml = fileJpg[:-4] +'.txt' # get name of corresponding annotation file\n",
+ "\n",
+ " #move both files into train dir\n",
+ " #shutil.move(os.path.join(crsPath, fileJpg), os.path.join(valimagePath, fileJpg))\n",
+ " #shutil.move(os.path.join(crsPath, fileXml), os.path.join(vallabelPath, fileXml))\n",
+ " shutil.copy(os.path.join(crsPath, fileJpg), os.path.join(valimagePath, fileJpg))\n",
+ " shutil.copy(os.path.join(crsPath, fileXml), os.path.join(vallabelPath, fileXml))\n",
+ " \n",
+ " #remove files from arrays\n",
+ " imgs.remove(fileJpg)\n",
+ " xmls.remove(fileXml)\n",
+ "\n",
+ "#rest of files will be validation files, so rename origin dir to val dir\n",
+ "#os.rename(crsPath, valPath)\n",
+ "shutil.move(crsPath, valPath) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "a64ce999",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "python: can't open file 'C:\\Users\\latas\\ewb2.0\\train.py': [Errno 2] No such file or directory\n"
+ ]
+ }
+ ],
+ "source": [
+ "!python train.py --img 415 --batch 16 --epochs 30 --data dataset.yaml --weights yolov5s.pt --cache"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "609adcb1",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "C:\\Users\\latas\\ewb2.0\\yolov5\n"
+ ]
+ }
+ ],
+ "source": [
+ "cd yolov5\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "53666e69",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "python: can't open file 'C:\\Users\\latas\\ewb2.0\\train.py': [Errno 2] No such file or directory\n"
+ ]
+ }
+ ],
+ "source": [
+ "!python train.py --img 12 --batch 16 --epochs 30 --data dataset.yml --weights yolov5s.pt --cache"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "a7d03ec5",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "C:\\Users\\latas\\ewb2.0\\yolov5\n"
+ ]
+ }
+ ],
+ "source": [
+ "cd yolov5\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "9cd17edb",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Traceback (most recent call last):\n",
+ " File \"C:\\Users\\latas\\ewb2.0\\yolov5\\train.py\", line 26, in