Every year the makerspace in the high school goes from being a reasonably well-organized, clean space to looking like a tornado blew through and then realized that it left its stuff in the makerspace. Seeing this problem in previous years is what gave birth to my idea. The original idea was to make wheely cubbies that everyone could put their project parts into, and possibly also to have small boards on the wall holding groups of tools that you tend to need together for a task and could take down as a set. For example, for soldering you need a soldering iron, wire strippers, and helping hands. Then one day, when I was talking with Mr. Moody about the project, he said that he didn't think it was totally original and that it didn't give him an HCTA™ reaction ("holy crap, that's awesome"). We started talking about other ways to keep the makerspace organized, and I said that I didn't know where many of the tools went on the back wall and thought that we could have a projector that showed where everything goes, although that would be expensive. Mr. Moody said that he had some RGB lights I could use instead, and I thought that I could shine them on the wall to show where the tools go. Since I had previously talked to Mr. Moody about my interest in AI, he suggested a tutorial called Practical Deep Learning for Coders at fast.ai that teaches people how to use a server to train a neural network to differentiate among categories of objects in images.
I started the tutorial and got the AI to work with the default image set. I then spent a couple of days gathering photos of screwdrivers and pliers for the data set, taking most photos myself but also asking my classmates to take photos of their tools and grabbing several photos from Google image searches. It took a little while to get the data set formatted correctly for the AI to process. I ended up with about 200 photos each of screwdrivers and pliers, which were enough to get the AI to identify new photos with about 98% accuracy. At one point the AI stopped working because there was a file in one of the data sets that wasn't supposed to be there, and I had to go in and delete it.
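For reference, the training step ends up looking roughly like this. This is only a sketch in the style of the fastai library the tutorial uses: the folder names, the ResNet-34 model, and the number of epochs are my own choices, and the exact function names vary between fastai versions.

```python
from fastai.vision.all import *

# Photos sorted into one folder per label, e.g. tools/screwdriver/*.jpg and tools/pliers/*.jpg
path = Path('tools')

# Removing broken or stray files first avoids the "mystery file" crash described above
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)

# Hold out 20% of the photos to measure accuracy on images the network hasn't seen
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(4)                     # a few epochs of training on the ~400 photos
learn.export('tool_classifier.pkl')    # save the trained model for use on the board later
```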
As for the physical board and light display, I used Rhinoceros CAD software and our laser cutter to make a wooden replica of the pegboard tool wall in the class. I then cut some meter-long rolls of NeoPixels into a couple of proper-length strips, soldered them together, and taped them to the back of the wooden replica. I thought about putting the lights on the front of the board; however, they are very bright when they are on and look kind of ugly when they are off. Since the wall is white and reflects the majority of the light pointed at it, I thought that I could put the lights in the space between the wall and the board. This turned out to spread the light quite nicely in the area around them without making them look too bright. The lights looked really cool, so I decided to also make some animations for the board. The final pattern that I decided on has the lights go around in a circle and ramp the color values up in different orders while the recognizer isn't in use.
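The strips themselves might end up driven by an Arduino in the final build, but here is a minimal sketch of the kind of idle animation I mean, written with the Adafruit CircuitPython neopixel library and assuming the strips hang off pin D18 of a Raspberry Pi. The pixel count and timing are guesses.

```python
import time
import board
import neopixel

NUM_PIXELS = 60   # guess: total pixels across the strips taped behind the board
pixels = neopixel.NeoPixel(board.D18, NUM_PIXELS, brightness=0.3, auto_write=False)

# Ramp the red/green/blue channels up in a different order on each pass around the circle
CHANNEL_ORDERS = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def idle_animation(wait=0.02):
    for order in CHANNEL_ORDERS:
        for i in range(NUM_PIXELS):
            level = int(255 * i / NUM_PIXELS)   # color grows as the chase moves around
            color = [0, 0, 0]
            color[order[0]] = level
            color[order[1]] = 255 - level
            pixels[i] = tuple(color)
            pixels.show()
            time.sleep(wait)

while True:          # runs whenever the recognizer isn't in use
    idle_animation()
```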
In its final form, AI Organize will make a prediction from a photo that a camera takes when the user presses a button. I wasn't yet able to automate taking the photo and passing that image to the neural network for prediction; completing that is what I will try to do next. I hope to integrate the AI with an Arduino, a camera, and a button to trigger it. I will likely try to learn to make my own neural network on a local machine so that I can have more complete control of the system. I also plan to expand the board so that it has space for more tools.
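Even though this part isn't built yet, here is a rough sketch of how the button-triggered prediction could fit together if I ran it on a Raspberry Pi instead of an Arduino. The Pi Camera, the button on GPIO pin 17, and the file names are all assumptions on my part, not things I've wired up.

```python
from signal import pause
from fastai.vision.all import load_learner
from gpiozero import Button       # assumption: push button wired to GPIO 17
from picamera import PiCamera     # assumption: Raspberry Pi camera module

learn = load_learner('tool_classifier.pkl')   # the model exported in the training sketch
camera = PiCamera()
button = Button(17)

def classify_tool():
    # Take a photo and ask the neural network which tool it sees
    camera.capture('/tmp/tool.jpg')
    label, _, probs = learn.predict('/tmp/tool.jpg')
    print(f'Looks like a {label} ({probs.max().item():.0%} sure)')
    # next step: light up the pixels behind that tool's spot on the board

button.when_pressed = classify_tool
pause()   # wait for button presses
```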