One of the great things about my job is that I get to work with emerging technology, while it’s still emerging. The emerging tech topic I want to dive into today is deep learning. Everyone’s talking about it, and there are loads of research papers written about it. So let’s take a closer look.
If you’re like me, research papers might as well be written in Ancient Mayan. Honestly, the math classes I took 25-30 years ago mean the symbols look familiar, but I no longer have any idea what they mean. So part of my interest in deep learning and machine learning was to determine if my lack of understanding was OK. Do people really need to understand the algorithm to use it? I mean, I’m not a plastics engineer, but I can build things with Legos…
It turns out that the answer is: it depends. I know that sounds like a cop-out, but if you need really high accuracy (say 95% or higher), near-real-time performance, or an algorithm that doesn’t already exist, then you probably do need that deeper understanding. If you’re OK with 90% accuracy on sentiment analysis or another common problem, it might “just work.”
My project was a version of sentiment analysis that needed to be re-trained. It turns out that with Amazon Web Services (AWS) it’s actually really easy, although I didn’t believe it at first. I had built a file of about 3,000 phrases and scored them. Once I uploaded the comma-separated values (CSV) file into Amazon S3, I was able to create a dataset.
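A rough sketch of what that file looked like before the upload. The phrases, scores, and file name here are invented placeholders, not my actual training data:

```python
# Build a scored-phrases CSV of the kind that can back an ML dataset.
# Each row is a phrase plus a score; the examples below are made up.
import csv

rows = [
    ("the password is stored in plain text", 0),        # scored insecure
    ("credentials are kept in a secrets manager", 1),   # scored secure
]

with open("phrases_scored.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["phrase", "score"])  # header row
    writer.writerows(rows)
```

Getting the finished file into S3 is then a one-liner with the AWS CLI (`aws s3 cp phrases_scored.csv s3://your-bucket/`) or a few clicks in the console.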
I had the option to split the data 80% for training and 20% for testing, which is a pretty standard breakdown. I also held out 20 phrases for my own sanity check. I never uploaded them into AWS, so there was no chance I’d accidentally train with them, which would have made the results appear better than they really are.
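AWS handled the 80/20 split for me, but the idea is simple enough to sketch in a few lines of Python, with invented stand-in data (the real phrases and scores came from my CSV file):

```python
# Shuffle, carve off a holdout set that never touches the training service,
# then split the rest 80/20 into train and test. Data here is synthetic.
import random

phrases = [(f"phrase {i}", i % 2) for i in range(3000)]  # (text, score) stand-ins

random.seed(42)          # reproducible shuffle
random.shuffle(phrases)

holdout = phrases[:20]   # sanity-check set, kept out of the upload entirely
rest = phrases[20:]
cut = int(len(rest) * 0.8)
train, test = rest[:cut], rest[cut:]
```

The key point is that the holdout set is removed *before* the train/test split, so the model never sees it in any form.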
There weren’t a lot of options, and the defaults all seemed to do the right thing. I did experiment with a recipe that can do things like group words into bigrams or trigrams, but it didn’t seem to be needed.
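The AWS recipe itself is configuration rather than code, but the idea behind grouping words into bigrams or trigrams is easy to sketch in plain Python (the function below is my own illustration, not anything AWS provides):

```python
# Turn a phrase into overlapping word groups (bigrams by default) so a model
# can learn from short word combinations instead of single words alone.
def ngrams(text, n=2):
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

ngrams("the password is stored in plain text")
# bigrams such as "the password", "password is", "is stored", ...
```

For phrases where word order carries meaning (like “not secure”), these groupings can help; for my data, single words were apparently enough.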
When the dataset was done, it reported back a score. I ignored the score and instead used the web interface AWS provides to enter my 20 holdout phrases and see how well it did detecting secure versus insecure phrases. It turned out to be slightly below 90% accurate. That is good enough for what I was trying to do. You can easily expose the trained model via API Gateway and then call it like any other REST API using JSON.
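Once the model is behind API Gateway, calling it looks like any other REST call. The endpoint URL and payload shape below are placeholders, not my real deployment; match them to yours:

```python
# Sketch of calling a model exposed through API Gateway as a REST API.
import json
import urllib.request

# Placeholder endpoint -- an actual deployment has its own URL.
ENDPOINT = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/score"

def build_request(phrase, endpoint=ENDPOINT):
    """Build a JSON POST request asking the model to score one phrase."""
    payload = json.dumps({"phrase": phrase}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Against a live deployment you would then do:
# with urllib.request.urlopen(build_request("password stored in plain text")) as resp:
#     result = json.loads(resp.read())
```

From there, any application that can speak HTTP and JSON can use the model.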
I also experimented with the Google and Microsoft versions, and even some locally-run Python code.
One thing to note: since I used largely off-the-shelf algorithms, the training data was what really made my model unique. Most of what I did was build a really accurate score for each phrase and then use that to train my model. This is one key to keep in mind, because it means the training data is actually as important as the algorithm. So, while it is always critical to keep your data secure, that security is also necessary when using deep or machine learning in order to get an accurate outcome.
Because of this, when using a cloud-based machine learning vendor, check to make sure that your data is protected and that you retain ownership of it. Or, if you are really worried, a local instance might be worthwhile; however, depending on the size and complexity, that may require quite a bit of hardware to run. I didn’t see any issues in the contracts from Google, Microsoft, or AWS, but a legal and compliance review is always important.