


Human - AI Interaction

Fall 2018


Project Brief

With the proliferation of AI and machine learning in recent years, it is imperative that we educate ourselves and future generations on the subject. The consequences of not doing so may include unethical uses of AI, legal infractions, privacy violations, and cyber-terrorism.

As such, I've made an educational tool in the form of a game that teaches children the very basics of machine learning and how ML models are made. It is a simple, linear-style game that prompts the user to input a PNG image file (640x480) to help train a cyborg crime-fighting hero. The cyborg hero then takes the image file and uses its “targeting system” to identify cars in the image and draw bounding boxes around what it thinks are cars.
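Since the game expects a specific input format, the upload step needs a sanity check before the image reaches the model. Below is a hypothetical helper (not code from the project itself) that verifies a file is a PNG with the expected 640x480 dimensions by reading the IHDR chunk that every PNG starts with:

```python
import struct

# Every valid PNG begins with this 8-byte signature, followed by the
# IHDR chunk: a 4-byte length, the ASCII tag "IHDR", then big-endian
# 4-byte width and 4-byte height fields.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def is_valid_input(data: bytes, width: int = 640, height: int = 480) -> bool:
    """Return True if `data` looks like a PNG of the expected size."""
    if len(data) < 24 or not data.startswith(PNG_SIGNATURE):
        return False
    if data[12:16] != b"IHDR":
        return False
    w, h = struct.unpack(">II", data[16:24])
    return (w, h) == (width, height)
```

Checking the header directly keeps the check cheap: the game can reject a wrong-sized upload without decoding the whole image.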


The ML Model


Making the Model

This object detection model is made with the Turi Create library (its object detector is based on the YOLO algorithm). It has a mean average precision of 67%. The model was trained for 1000 iterations with a batch size of 32 on 420 images of cars; each image comes with a couple of masks denoting where the cars are, for a total of 1120 pictures and masks.
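Mean average precision is built on intersection-over-union (IoU): a predicted bounding box only counts as a correct detection if it overlaps a ground-truth box enough. As an illustration (this is a sketch of the metric's building block, not code from the project), IoU for two boxes given as (x_min, y_min, x_max, y_max) can be computed like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # If the boxes don't overlap, the clamped width/height is zero.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A 67% mAP roughly means that, averaged over confidence thresholds, about two thirds of the model's predicted boxes line up with real cars by this overlap criterion.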

This is far better than randomly guessing where the cars in an image are. It may not be good enough for a self-driving car, but it can pick up where cars generally are. With a larger, more varied dataset and more than 1000 training iterations, the model would become more accurate, but for the purposes of this project it is more than enough.


Model Limitations

This dataset came from this website:

I picked the dataset that has cars in it. The obvious bias in this dataset is that all of the images were taken at roughly eye level. This makes the model unable to detect cars from any other point of view, be it ground-up or bird's-eye. Below is an image from ispy that I've somewhat modified to demonstrate this point.

[Image: modified ispy photo showing the model's detections]

As you can see, our POV of the car closest to us is more of a top-down view, and the model was unable to detect it, while the cars further away are seen from a much more straight-on angle and the model was able to detect them (albeit very poorly). As far as I can tell, color contrast, vehicle shape, and size are all accounted for in this dataset. There just aren't enough images, with masks to go along with them, to make this a very accurate model... for a self-driving car, that is.


The Visuals

The visual assets are done in Adobe Illustrator, Photoshop, and After Effects. 
