Part 1: Neural Chess Player

October 9, 2017 Francisc Camillo

From Data Gathering to Data Augmentation

Image Link: https://static.pexels.com/photos/40796/chess-strategy-chess-board-leadership-40796.jpeg

Chess has been part of my life ever since grade school: I played for my high school, for the chess club, in college, and for the city, all while having fun. Chess truly filled my childhood with wonder.

Learning opening strategies, middlegame tactics, and endgame technique, especially king-and-pawn situations like zugzwang, sure was one heck of a headache back then, but it truly helped me play well and prepare for tournaments.

These concepts are the fundamentals of playing chess, although one could also learn all of them through experience, by playing a lot of chess games (Neural Network! Aha!).

In this project we're going to create two different neural networks that can play chess using the two methods I mentioned above: learning through supervised data from books, and learning by experience using rewards and policies.

I still don't know how long this project will run or how many Medium posts it will take to finish, but it will surely be one great roller coaster ride of learning. So come and join me.

A bit of added information:

How it will work (from my GitHub readme.md):
A camera will be placed on top of the game board, making sure it captures the whole board. If the user is the white player, they start by moving a piece and then pressing Enter on the keyboard; the press signifies the end of the player's turn. The Neural Chess Player will then take an image, assess the current position, and predict the next move. A terminal timer will act as the game clock, and the Neural Chess Player will send a key press to the timer to say that its turn has ended.
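To make that flow concrete, here is a hedged sketch of the turn loop in Python. Every helper below is a hypothetical stub standing in for components that will be built in later parts of this series.

```python
# A sketch of the Neural Chess Player's turn loop. All helpers are
# placeholder stubs; none of them exist yet in the actual project.
def wait_for_enter():
    raw_input('Make your move, then press Enter: ')  # end of human turn

def capture_board_image():
    raise NotImplementedError  # e.g. read a frame from the overhead camera

def predict_next_move(frame):
    raise NotImplementedError  # the neural network, trained in later parts

def press_timer():
    raise NotImplementedError  # signal the terminal game clock

def play():
    while True:
        wait_for_enter()                 # player signals their turn is over
        frame = capture_board_image()    # photograph the current position
        move = predict_next_move(frame)  # assess the position, pick a move
        print('Neural Chess Player plays: %s' % move)
        press_timer()                    # tell the clock the turn has ended
```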

Quick Note: I'm still figuring out how AlphaGo worked with respect to time. I would be really happy if anyone could direct me to an article or video that talks about it.

http://design-engine.com/designing-a-chess-ai/

Data Gathering

Now that I've identified what we are going to do and how we are going to do it, let's begin by gathering our data.

The first thing that came to mind was searching for any relevant data on the net; hopefully I could find a dataset of top-view images of chess pieces. Sadly, nothing of the sort exists (if anybody knows of something like this, please do share it in the responses; it would be gladly appreciated). So we have to create it ourselves.

The following are the steps that I took in creating the data.

Step 1:

Gathering the actual objects needed. I first took a chess set from our stockpile and laid it out flat, placing all the chess pieces to the side, arranged by their point values. Then I got the camera and my laptop and placed them beside the chess pieces. You may also build a rig to hold the camera, allowing you to capture top-view images of the chess pieces.

Step 2:

Then I set up the camera on top of the laid-out chess set so that it focuses on only one square tile (alternating between green and white).

Here is an example image:

Black Bishop on Top of White Tile

The image above is a sample of a black piece on top of a white tile.

This image is a single frame from a one-minute video I took with the camera. We'll do the same with the other pieces, alternating between white and green tiles. Here is another example:

White Queen on top of White Tile

Step 3:

Now that we have gathered an ample number of videos of the different chess pieces on different tile colors, we can start converting the videos into images. You could do this with video editing software, or you could do it in Python. Although I could take the video-editing route, I think doing it in Python will be more fun. I can still go back to a video editor if the images created by the code below don't suffice.

Now for the explanation of the code.

a, b, c = os.popen3('ffmpeg -i ../../input/video/%s' % (filename))

This line allows us to access the file specified in filename and apply the ffmpeg command to it, just as if we were running ffmpeg in the terminal. You can learn more about ffmpeg here. Take note: the a, b, c before this line means that the function (referring to os.popen3) returns three values (the child process's stdin, stdout, and stderr file objects), and we are explicitly unpacking them into a, b, and c.

After getting and opening the video file, we can now chop it into frames. As you can see, we first get the length of the video into duration, then split it into hours, minutes, and seconds on the hh, mm, ss line, and lastly convert that time into seconds, which is what we compute in total.
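Since the original gist isn't reproduced here, the sketch below shows one way that step could look, assuming ffmpeg prints its metadata (including a Duration: HH:MM:SS line) on stderr; the regex and the sample filename are my assumptions.

```python
import os
import re

filename = 'black_bishop_white_tile.mp4'  # hypothetical video name

# os.popen3 returns the child's stdin, stdout, and stderr file objects;
# ffmpeg writes its metadata, including Duration, to stderr (c here)
a, b, c = os.popen3('ffmpeg -i ../../input/video/%s' % (filename))
info = c.read()

# pull HH:MM:SS out of a line like "Duration: 00:01:00.04, start: ..."
match = re.search(r'Duration: (\d+):(\d+):(\d+)', info)
hh, mm, ss = [int(g) for g in match.groups()]
total = hh * 3600 + mm * 60 + ss  # video length in whole seconds
```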

Now that we have the total running time of the video, we can go back through it and grab the frame displayed at each specific time. For ease, I decided to go with one frame per second. Then, using ffmpeg again, we save the frame from the time we specified into a different folder.
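A hedged sketch of that loop is below; the output folder and the naming scheme are my assumptions, not the post's exact code.

```python
for t in range(total):
    # seek to second t and write exactly one frame into the output folder
    os.system('ffmpeg -ss %d -i ../../input/video/%s -frames:v 1 '
              '../../output/images/%s_%03d.png'
              % (t, filename, filename.split('.')[0], t))
```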

Now we have our collection of images, which will serve as our initial dataset.

Data Augmentation / Data Synthesis

Data Data Data.

AI and machine learning rely heavily on the data you feed them, whether for supervised learning (which honestly needs much more data, though picture it as a diminishing-returns curve: there is a limit) or unsupervised learning. Data powers the algorithms behind our machine learning models, from making them work at all, to predicting classifications, to finding outliers, to knowing whether an object is an apple or an orange. Although there are techniques already developed, and still being developed, for small datasets, it is always a good mindset to make sure you have sufficient data for the project you are about to do.

With this, let's start creating our augmented / synthetic dataset.

First, let's start with our imports. We will need the following libraries to access the images, augment them, and play with the pixels.
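The original import cell isn't shown here, so the exact libraries are my assumption; something like the following would cover listing files, manipulating images, and touching raw pixels.

```python
import os              # list the image directories
import numpy as np     # play with raw pixel values
from PIL import Image  # open, rotate, flip, and resize images
```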

Then we need to get all the filenames of the images inside the output directories we specified when converting the videos into images; in the code below we call the `os.listdir` method to do this.
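As a sketch, with directory names assumed from the folders described earlier:

```python
white_tile_files = os.listdir('../../output/images/white_tile')
green_tile_files = os.listdir('../../output/images/green_tile')
```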

Next, we create our helper function convert_images, inside which we do the augmentation / synthesis. Since our dataset is only about 400–500 images, we can rely on data augmentation to increase that number 4x or 5x, depending heavily on how many augmentation methods we use and which ones (each method must be chosen carefully: would the augmented output be possible in reality, or would it only lead the algorithm the wrong way, failing to understand the data?). For this project I decided to go with just the basic rotations and flips. Resizing the data is also done inside the function, because working with a lot of pixels makes the learning process longer and more compute-intensive, and we don't currently have a powerful graphics card lying around. A sketch of such a helper follows.
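This is a minimal sketch, assuming PIL; the signature and the 4x variant set (original, two rotations, one horizontal flip) are my assumptions rather than the post's exact gist.

```python
def convert_images(filenames, input_dir, output_dir, size=(64, 64)):
    """Resize each image and save it along with rotated/flipped variants."""
    for name in filenames:
        img = Image.open(os.path.join(input_dir, name)).resize(size)
        base = os.path.splitext(name)[0]
        # the original plus three augmented variants: 4x the data
        variants = {
            'orig':   img,
            'rot90':  img.rotate(90),
            'rot180': img.rotate(180),
            'flip':   img.transpose(Image.FLIP_LEFT_RIGHT),
        }
        for tag, variant in variants.items():
            variant.save(os.path.join(output_dir, '%s_%s.png' % (base, tag)))
```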

Lastly, let’s set the final image size and then run everything.

For the code below, I've shortened it to show how the functions above are used.
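In spirit, that shortened driver looks something like this (the final size and folder names are assumptions carried over from the sketches above):

```python
FINAL_SIZE = (64, 64)  # small enough to train without a powerful GPU

convert_images(white_tile_files, '../../output/images/white_tile',
               '../../output/augmented/white_tile', size=FINAL_SIZE)
convert_images(green_tile_files, '../../output/images/green_tile',
               '../../output/augmented/green_tile', size=FINAL_SIZE)
```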

After running these lines of code, you will have quadrupled the number of training images you have. You can now do an initial run on any model and sanity-check whether the images are enough for the task or whether we still need more. If we do need more, we could take more videos in different environments and run the code again, or we could add more data augmentation methods, like warping, skewing, adding some noise, etc.
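For example, additive pixel noise could look like the sketch below; the noise level is an arbitrary assumption.

```python
def add_noise(img, sigma=8.0):
    # jitter every pixel with Gaussian noise, then clip back to 0..255
    arr = np.asarray(img, dtype=np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))
```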

Conclusion

In this post we've been through the start of a deep learning task. The steps we took so far cover the messy part of a deep learning project: gathering, cleaning, augmenting, and processing the data. It was tedious, since we are still learning about the data and what to do with it.

In this project, data gathering was limited to capturing top-view videos and then converting them to images. We could do more by recording the whole board, the board with a piece, a specific quadrant with a piece, and so on. But for now we will use what we have.

Overall, I learned how to start this project and process the data we gathered. Although our methods might not yet be complete, this is a great stepping stone to learn from, encountering mistakes in the process and learning from them. Documenting every part of the process was especially new for me and a bit hard to do, but as I said, it's a great stepping stone, and I'll continue doing it.

In Part 2 we will go through converting the images we gathered into ndarrays so that we can turn them into a dataset. That dataset will then be fed into the model we will use, but that is for another part, part 3 or 4.
