Data Description
Let’s begin by recalling the classic game of “Stone, Paper, Scissors”, usually played between two people, in which each player simultaneously forms one of three patterns (stone, paper, or scissors) with their hands. This project is a web application in which users play the game against the computer using a webcam. So, there is a need to build a classifier that takes images of hands as input and outputs the move the user made.
The fundamental rules of Stone, Paper, Scissors apply: -> Stone blunts Scissors. -> Scissors cut Paper. -> Paper covers Stone.
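The three rules above form a cycle in which each pattern beats exactly one other, so the round outcome can be expressed as a small lookup table. A minimal sketch (the names `BEATS` and `judge` are illustrative, not part of the project):

```python
# Each pattern beats exactly one other, per the rules above.
BEATS = {"stone": "scissors", "scissors": "paper", "paper": "stone"}

def judge(player: str, computer: str) -> str:
    """Return 'player', 'computer', or 'draw' for a single round."""
    if player == computer:
        return "draw"
    return "player" if BEATS[player] == computer else "computer"

print(judge("paper", "stone"))  # → player
```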
MediaPipe is a library by Google that provides solutions for recognizing key hand points. We use it to gather hand landmarks for the three patterns relevant to the project, i.e., Stone, Paper, and Scissors. The Hand Landmark Model in MediaPipe lets us collect precise key points for 21 hand-knuckle coordinates on the x and y axes inside the detected hand region.
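Since each detected hand yields 21 (x, y) landmark coordinates, a natural way to feed them to a tabular classifier is to flatten them into a 42-element feature vector. A minimal sketch (the helper name `landmarks_to_features` is hypothetical; a real app would obtain the landmark pairs from MediaPipe's hands solution on a webcam frame):

```python
def landmarks_to_features(landmarks):
    """Flatten 21 (x, y) hand-landmark pairs into a 42-element feature vector.

    landmarks: sequence of 21 (x, y) pairs in normalized [0, 1] image coordinates.
    """
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    return [coord for point in landmarks for coord in point]

# Dummy landmarks for illustration only; real values come from MediaPipe.
dummy = [(i / 21.0, (20 - i) / 21.0) for i in range(21)]
features = landmarks_to_features(dummy)
print(len(features))  # → 42
```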
Dataset: 15,000 samples were collected for each of the patterns “Stone”, “Paper”, and “Scissors”, and used to build a machine learning model that classifies new images (each containing a hand forming one of the above patterns) into the respective class.
You are expected to build a classifier for the project using the given dataset. A notebook has also been uploaded to guide you through the process. An accuracy of around 0.98 was achieved on the test set using an XGBoost classifier, as shown in the notebook. A better model would be appreciated.
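The train-and-evaluate loop described above can be sketched end to end. This is a minimal stand-in, not the uploaded notebook: it uses synthetic 42-feature data in place of the real landmark dataset, and scikit-learn's GradientBoostingClassifier in place of XGBoost (which may not be installed); swapping in `xgboost.XGBClassifier` on the real data follows the same shape.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the landmark dataset: 3 classes (stone/paper/scissors),
# 42 features (x, y for each of 21 landmarks). Real features come from MediaPipe.
rng = np.random.default_rng(0)
n_per_class = 200
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(n_per_class, 42)) for c in range(3)])
y = np.repeat([0, 1, 2], n_per_class)  # 0=stone, 1=paper, 2=scissors

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = GradientBoostingClassifier(n_estimators=50, random_state=42)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```

On the real dataset the same split-fit-score structure applies; only the feature matrix, labels, and (optionally) the classifier change.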
