A.I. in a very tiny box

Artificial intelligence…AI. We use the term almost daily these days, and we’re not talking about future dreams, but present reality. AI is everywhere – self-driving cars, real-time language translation, object recognition, image classification, information security, robotics, Netflix recommendations, your smartphone, and even apps like Snapchat and Instagram. AI is everywhere in 2019.

The common perception is that AI is expensive, complicated, and not for ordinary folk to understand or even think about. To some, it’s a scary prospect for humanity’s future. Leading visionaries have warned us about giving AI too much control over our lives, and yet we continue to click “Allow” or “I accept” when prompted. We continue to use popular apps like “FaceApp” because they’re cool, never considering their true purpose – training AI models.

AI Models – A misnomer

Artificial intelligence is often confused with terms like machine learning and deep learning. People – even those in the Information Technology space – misuse these terms on a daily basis, mostly because they want to sound like they know what they’re talking about without actually knowing the difference…AI posers, you might call them. To understand what AI is, one must understand how Machine Learning (ML) and Deep Learning (DL) fit into the landscape.

What is Machine Learning?

Machine learning is a subset of AI. That is to say, ML is one part of AI – not separate and not the same. In the simplest definition, Machine Learning “empowers computers with the ability to learn.” A human baby could never grow into a functioning, mature adult without the ability to learn throughout life. Similarly, AI could never evolve and grow if it couldn’t learn from experience – that’s where Machine Learning fits into the puzzle. That said, classic ML learns in black-and-white terms; it lacks rationality and the ability to make connections between data sets.
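To make “learning from examples” concrete, here’s a minimal sketch in Python using scikit-learn. This is purely our illustration – the fruit data is made up – but it shows the core idea: we hand the model a few labeled examples, and it infers a rule we never explicitly programmed.

```python
# Minimal sketch of "learning from examples" with scikit-learn.
# The fruit data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each sample: [weight_in_grams, smooth_skin (1 = yes, 0 = no)]
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = ["apple", "apple", "orange", "orange"]

model = DecisionTreeClassifier()
model.fit(features, labels)  # the "learning" step

# The model has never seen this exact fruit, but it generalizes
# from the examples it was shown.
print(model.predict([[160, 0]]))  # -> ['orange']
```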

So what, then, is Deep Learning?

Well, Deep Learning is a subset of Machine Learning (this is a very nested approach, isn’t it?). Deep Learning models are inspired by the functionality of biological neurons, arranged to mimic a neural network. What’s a neural network? The original example is our brain. The human brain is one giant neural network – able to learn, reason, make connections, deduce, extrapolate, and so much more. Researchers are constantly developing neural network models using Deep Learning to mimic the human brain as closely as possible. Scary, right?
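If you’re curious what “a network of artificial neurons” actually looks like in code, here’s a rough sketch in PyTorch (again, just our illustration – the layer sizes are arbitrary): layers of simple units, each feeding the next, loosely echoing the neuron analogy.

```python
# A tiny neural network in PyTorch: layers of simple "neurons",
# each layer feeding the next, loosely mimicking the biological analogy.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 16),   # 4 inputs -> 16 artificial neurons
    nn.ReLU(),          # non-linear activation, like a neuron "firing"
    nn.Linear(16, 3),   # 16 neurons -> 3 output classes
)

# Push one made-up input through the network (a "forward pass").
x = torch.randn(1, 4)
print(model(x))  # raw scores for each of the 3 classes
```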

Deep Learning is obviously better

Yes and no. It is a more powerful approach to AI, but there is always a cost associated with the best available. In the AI world, that cost is data: Deep Learning requires oodles of it to train these neural network models. Just like a human grows up and experiences life 24/7 for decades, an AI model would have to do the same – except none of us want to wait 50 years for a functioning AI model (or to find out whether we even programmed it correctly). The solution? Use data from the billions of people on the internet and social media to train these models. Even the simplest task – identifying objects in photos – can require massive amounts of training data. To give you an idea, you can train a simple object classification model with as few as 25 images, but at the cost of accuracy. If you’re a Google-sized company and want Google Photos to reliably classify a cat, you’ve probably trained your model on tens of millions of photos of cats, and the same for cars, trees and everything else. I’d guess Google Photos as a platform has been trained on more than 10 billion images.
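That “25 images” trick, by the way, usually means transfer learning: start from a network someone else already trained on millions of images and retrain only its final layer on your couple dozen examples. A hedged sketch in PyTorch (the two-class setup is a placeholder):

```python
# Sketch of transfer learning: reuse a network pretrained on millions
# of images (ImageNet) and retrain only its final layer on a tiny
# dataset. The 2-class setup here is a hypothetical placeholder.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)  # millions of images already "seen"

# Freeze everything the big dataset taught the network...
for param in model.parameters():
    param.requires_grad = False

# ...and replace just the final layer for our own 2 classes.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new layer's weights get trained on our ~25 photos.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```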

Why do you care today?

Well, the great thing about AI, ML and DL is that you don’t necessarily need multiple $100k computers to enjoy the benefits. A few years ago, people realized you could use an NVIDIA GPU (yes, a gaming graphics card) and CUDA to run ML and DL code, and the GPU was powerful enough to process the data for these models. Today, with NVIDIA’s RTX cards and the inclusion of Tensor Cores in the higher-end models, consumers are in an even better position to develop and test these models in the comfort of their own home. Still, shelling out $300-$4,000 for a graphics card just to see if you can develop something is an expensive proposition.
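If you already own a CUDA-capable card, checking whether your ML framework can actually use it takes only a few lines. Here’s a quick PyTorch sketch:

```python
# Quick check that PyTorch can see a CUDA-capable NVIDIA GPU,
# then run a computation on it instead of the CPU.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Using:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No CUDA GPU found, falling back to CPU")

# The same matrix multiply, executed on whichever device we found.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
print((a @ b).shape)
```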

NVIDIA has you covered

Introducing the Jetson Nano. This is a small, standalone, purpose-built computer (similar in size to the Raspberry Pi). It packs 4GB of RAM, a quad-core ARM Cortex-A57 CPU, Gigabit Ethernet, HDMI, DisplayPort, USB 3.0 and 128 Maxwell CUDA cores for all the ML and DL processing you might need to do. That’s right: a tiny little computer designed for enthusiasts – or just about anyone – to develop and test AI models anywhere, anytime.

It’s got to be expensive, right? WRONG! While NVIDIA has more expensive, beefier AI computers in the Jetson line-up, the Nano costs only $99 USD. That gets you the Jetson Nano, a small paper stand and a quick-start guide. You’ll have to supply either a 5V 2A Micro-USB power supply or a 5V 4A barrel-jack power brick, a MicroSD card (used to store the OS and any code – we recommend a 32GB+ high-speed card) and potentially a 40mm fan to keep that heatsink cool.

Our kit also came with a Raspberry Pi camera module, which connects to the ZIF connector on the side of the PCB.

One thing to note – I love that NVIDIA used black PCBs for the Nano instead of something clinically ugly like the standard green. The black PCB gives the Nano a very slick, almost black-box appearance.

Setup and Configuration

There is some setup and configuration to the Nano – like any good computer, it’s not just plug and play. Step one: flash the Jetson Nano Developer Kit SD card image (the OS) onto your MicroSD card. Step two: connect all your peripherals and the power supply, and insert the MicroSD card. Step three: boot up and configure simple things like keyboard layout, time zone and language, and create your username and password. It’s that simple to get started.

What’s next?

Now the fun begins. From here, you can start building your AI application using ML and DL models to get the most out of the Jetson Nano.

We followed a simple tutorial to build a real-time facial recognition program that functions very much like a video doorbell. The program is really neat: in real time, it identifies the face in the frame, gives you a readout of how long the face has been in the frame, and counts how many times that face has been detected (first visit, second, 50th). This all happens in real time on a $99 piece of hardware!

Granted, we’re not talking 60fps or anything crazy here, but for the purposes of a security camera it functions nicely, and as a demonstration of the Jetson Nano’s power it’s amazingly simple. The whole project took us about an hour, following the tutorial.
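We won’t reproduce the whole tutorial here, but the core loop looks roughly like this – a sketch assuming the popular face_recognition library and OpenCV, which many Nano doorbell tutorials use; the visit-count bookkeeping is our simplification:

```python
# Rough sketch of a real-time face recognition loop, assuming the
# face_recognition library and OpenCV. Visit tracking is simplified.
import cv2
import face_recognition

known_encodings = []  # one encoding per face we've seen before
visit_counts = []     # how many times each face has shown up

video = cv2.VideoCapture(0)  # camera index 0; adjust for your setup

while True:
    ok, frame = video.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR

    for encoding in face_recognition.face_encodings(rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            idx = matches.index(True)
            visit_counts[idx] += 1
            print(f"Known face, visit #{visit_counts[idx]}")
        else:
            known_encodings.append(encoding)
            visit_counts.append(1)
            print("New face, first visit")
```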

Dreaming and working

Our goal, currently in progress, is to build an object detection program on the Jetson Nano that appends the objects it identifies to the image file’s properties as tags – an automatic image classification engine for our in-house library of images.

Imagine that for $99 you could build a system at home to tag all your digital camera photos and keep them updated – that you never had to tag images manually, just dump them on your computer or network storage and let the Jetson Nano do the rest. For $99: Google Photos-like functionality without uploading your photos to the cloud. We believe the Nano could handle something on this scale. Once we have a working program, we’ll publish a tutorial on how to accomplish this.
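Our program isn’t done, but here’s a rough sketch of the idea, assuming a pretrained torchvision detector and the piexif library for writing Windows-style keyword tags – the photo path, confidence threshold and abbreviated label table are all placeholders, not our final code:

```python
# Sketch: detect objects in a photo with a pretrained torchvision model,
# then write the labels into the file's EXIF XPKeywords field (the field
# Windows reads as photo "Tags"). Path and threshold are placeholders.
import piexif
import torch
from PIL import Image
from torchvision import models, transforms

# COCO class names map label indices to words; abbreviated for brevity.
COCO_NAMES = {1: "person", 3: "car", 17: "cat", 18: "dog"}

model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

path = "photo.jpg"  # placeholder path
image = Image.open(path).convert("RGB")
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    output = model([tensor])[0]

# Keep confident detections and map them to human-readable tags.
tags = {COCO_NAMES.get(int(label), "object")
        for label, score in zip(output["labels"], output["scores"])
        if score > 0.8}

# XPKeywords stores UTF-16LE bytes; write the tags back into the file.
exif = piexif.load(path)
exif["0th"][piexif.ImageIFD.XPKeywords] = ";".join(tags).encode("utf-16le")
piexif.insert(piexif.dump(exif), path)
print("Tagged", path, "with", tags)
```

Point this at a folder instead of a single file and you have the skeleton of the tagging engine described above.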

So…the Jetson Nano

How does the Jetson Nano impact the landscape? Well, it certainly won’t be threatening major players like Google, IBM, Apple or Microsoft in the AI space. Rather, the Jetson Nano and its bigger brothers – the Jetson TX2 with Pascal CUDA cores and the AGX Xavier with Volta CUDA and Tensor Cores – aim to improve AI at the edge. It gives people like me an opportunity to play around with AI development and applications on a small scale and see what I can do to improve the life of the average user.

With the latest release of DeepStream and an update to the Jetson Nano’s OS, the platform becomes even more powerful and useful to developers. The best part of the whole thing is the price tag. At $99 you cannot go wrong if you’re at all interested in developing an AI application. It’s cheaper than a graphics card, better suited to this kind of work than a typical x86 CPU, more compact than a used computer, and has more potential to impact your life than any of the aforementioned devices.

As stated, we’re still working on our tagging application; when it’s complete, we’ll be happy to publish the code and a tutorial.
