Artificial intelligence. Data mining. Neural networks. Machine learning. Sound familiar? Not too long ago, artificial intelligence was a concept reserved for Hollywood and academics, but of late, these concepts have taken the enterprise software industry by storm. From tradeshows to social media, these futuristic buzzwords have become so common that it’s more surprising when they’re not mentioned during a sales cycle. The incredible success of many notable early AI adopters has predictably inspired imitation. Lots of it. In fact, the technology landscape has become so enamored with artificial intelligence that we now have everything from state-of-the-art supercomputers to AI-powered toasters.
While this overwhelming torrent of AI-powered solutions unleashed on the market is generally a positive development for consumers, there is also a steady dose of vaporware floating around. Firms can get away with this sleight of hand because AI remains a relatively esoteric concept that many buyers don’t fully understand. As a technology enthusiast, it’s easy to get caught up in the hype of any new technology. As a buyer, it can be especially difficult to sift through the noise. As you’re being constantly bombarded with these buzzwords, what you should really be asking yourself is: what do these terms mean, and why should I care? In short, what problem does this technology solve and how does it provide value?
What is Artificial Intelligence?
Let’s start with artificial intelligence. If we follow the academic textbook definition, AI is an extremely broad category of technology, with many different subsets and disciplines. Because this term is so broad, when a software company tells a prospect that their solution is powered by AI, it’s the equivalent of saying that their software is powered by computer science, which would justifiably result in a blank stare. In both cases, it’s a factual statement but it doesn’t provide detail about the available features of the software or the value the solution provides to the prospect.
One thing most AI subsets have in common is the ability to self-adjust a computational model. This means that AI can encounter an unforeseen scenario (an input to an algorithm) that was not explicitly accounted for in the programming and still successfully determine an appropriate action. While this is not quite as exciting as Hollywood may suggest, with self-aware robots that decide to take over the world, the concept is extremely powerful when compared to traditional computer programming, where a software developer must painstakingly write rules for every possible combination of inputs or scenarios. Traditional programming models work well enough for simple calculations with fixed inputs, but for more complex computing challenges with dynamic, unpredictable inputs, they are difficult to scale. As the possible combinations of inputs grow exponentially, a programmer could spend a lifetime trying to write a program that accounts for every possible use case. In short, AI algorithms optimize for specific outcomes instead of having to be spoon-fed specific instructions for every possible scenario. At first glance, a computational model that automatically optimizes for desired outputs might seem trivial, especially if you’ve never done any computer programming, but this seemingly minor pivot completely changes the paradigm of how software is written and maintained.
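To make the contrast concrete, here’s a minimal Python sketch (the temperature scenario and function names are invented purely for illustration): instead of a hand-written rule, a tiny “model” picks its own cutoff from labeled examples, so new data adjusts the model rather than forcing a developer to rewrite the rule.

```python
# Traditional approach: a developer hard-codes a rule for each anticipated case.
def is_overheating_by_rule(temp_f):
    return temp_f > 100  # works only for the scenario the programmer foresaw

# Outcome-driven approach: the cutoff is *learned* from labeled examples.
def learn_threshold(examples):
    """examples: list of (value, label) pairs, label True for the positive class.
    Returns the cutoff that misclassifies the fewest training examples."""
    best_cut, best_errors = None, len(examples) + 1
    for cut, _ in sorted(examples):
        errors = sum((value > cut) != label for value, label in examples)
        if errors < best_errors:
            best_cut, best_errors = cut, errors
    return best_cut

readings = [(80, False), (90, False), (101, True), (110, True)]
threshold = learn_threshold(readings)  # learned from the data, not hand-written
```

The point of the sketch is the division of labor: the programmer specifies the desired outcome (fewest misclassifications), and the data determines the actual decision boundary.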
Basic Examples of Artificial Intelligence
To help illustrate why this type of computational model is so valuable, let’s walk through some examples of supervised machine learning, which is a specific discipline of AI. If you’ve read through any introductory machine learning tutorials, basic examples tend to involve building a solution that answers a yes or no question about an image. This was infamously portrayed on HBO’s Silicon Valley, if you’re in need of a good laugh, but for our example, let’s talk about an algorithm that accepts any image as an input and provides a response stating whether the provided image is a forklift or not. This example may not seem impressive, but let’s talk about how artificial intelligence makes it significantly easier for software developers to write and maintain these types of algorithms.
If we were to ask a team of software developers to build an algorithm from scratch that answers this simple question, it would take a very long time and the team would write a lot of static rules. After releasing the algorithm, we’d inevitably find some images that were misidentified, and the software developers would have to update the algorithm. Repeat those steps and we now have the traditional software development cycle where features are continuously being written, released, and adjusted – a time-consuming and expensive process.
Supervised machine learning completely changes the paradigm by focusing on outputs instead of on writing complex rules. Rather than trying to write complex image-processing rules to consistently identify a forklift, the ‘supervisor’ (the person training the machine learning algorithm) simply provides an image of a forklift as an example. The more example data points the supervisor provides, the more the machine learning algorithm self-corrects and the more accurate it becomes. For example, if we provided only a single image of one type of forklift, our algorithm would probably be able to rule out images that are completely different, like a baseball, but with only that single point of reference it could easily mistake a golf cart for a forklift. If we provide the machine learning algorithm with millions of images of forklifts, however, covering a wide variety of makes and models, along with millions of images that are not forklifts, then our algorithm will ‘learn’ from those images and be much more accurate.
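The forklift workflow can be sketched in miniature. Assume, purely for illustration, that each image has already been reduced to two hypothetical numeric features (real systems extract far richer features); a simple nearest-centroid classifier then ‘learns’ by averaging the labeled examples:

```python
import math

def centroid(vectors):
    """Average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(forklifts, others):
    """Supervised training: summarize each labeled class by its average."""
    return centroid(forklifts), centroid(others)

def is_forklift(model, features):
    """Classify by whichever class average the new image sits closest to."""
    fork_center, other_center = model
    return math.dist(features, fork_center) < math.dist(features, other_center)

# Hypothetical 2-D features per image, e.g. (fork_shape_score, vehicle_size)
forklift_imgs = [(0.9, 0.6), (0.8, 0.7), (0.95, 0.5)]
other_imgs    = [(0.1, 0.2), (0.2, 0.9), (0.05, 0.4)]
model = train(forklift_imgs, other_imgs)
is_forklift(model, (0.85, 0.6))  # lands near the forklift examples: True
```

In a production system the features would come from a deep neural network rather than being hand-picked, but the principle is the same: every additional labeled example shifts the class summaries and sharpens the boundary, with no hand-written rules to maintain.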
Practical Applications of Machine Learning
In the real world, this type of supervised machine learning becomes extremely useful for use cases that involve validating images. Example applications include everything from evaluating x-rays to validating an image of a paper check when processing a mobile deposit. In the supply chain space specifically, there are a wide variety of applications that range from identifying products that are damaged or have quality issues as they come off an assembly line to proactively identifying physical security issues at a facility.
Here at MacGregor Partners, we currently leverage machine learning in our M.Folio solution to validate the driver’s license presented by a truck driver. Our M.Folio software, which completely automates the driver check-in and check-out processes at warehouses, allows a truck driver to validate their driver’s license in a self-service way by using one of our kiosks or by using their mobile phone from the comfort of their cab. Of course, the danger with providing these self-service elements is that there will always be bad apples who try to take advantage of any system. If a shipper requires a truck driver to show proof of a valid driver’s license to pick up a shipment, then there is a huge difference between a valid driver’s license and a library card. When our team evaluated the best way to validate a driver’s license, the answer was clear. It didn’t make sense for us to write discrete rules covering every driver’s license from every state and country. Instead, we rely on a simple machine learning algorithm that has delivered accuracy above 99 percent, with no rules that ever need to be updated. This is a huge win for all parties involved: machine learning enabled us to deliver the feature on a condensed timeline with minimal maintenance liability, while providing an extremely useful and reliable capability to our customers.
Bringing it All Together
You’re hearing a lot about artificial intelligence for a reason. It’s an extremely powerful technology that is revolutionizing how software is built and maintained. At the same time, there is a lot of hyperbole, and as a buyer, when one of these AI-related buzzwords is injected into the conversation, it’s best to view the claims with a healthy dose of skepticism. It’s nice to hear that vendors are evaluating and leveraging new technology, but just because a solution claims to leverage AI doesn’t mean that the solution is autonomous or provides any value to your organization.
For example, I recently evaluated a software utility for our software development team (yes, software companies purchase software too), and one of the features of the premium license was simply listed as ‘machine learning’ without any context whatsoever. Sure, it sounds cool, but what does that actually mean? Machine learning is a type of technology, not a software feature. The most appropriate response to these claims is to ask how machine learning is leveraged and what value it provides to the users of the solution. Like any other technology out there, AI is simply a tool in the software vendor’s toolkit, and it’s only as valuable as the problem it solves.
This is the first post in a series focused on artificial intelligence. Be on the lookout for subsequent posts that take a deeper look at how artificial intelligence is being leveraged in the supply chain space.